Hello,
Referring to this post
http://www.keil.com/forum/59779/
Maybe it is time to remove the simulator from uv5 to prevent confusion...?
I'd have to say I agree. I'd lean to just having a special faux ARM7 or CM3/4 with a generic set of peripherals and LEDs, 7-segments, scope, etc for experimenters and learners.
Either that, or open up the emulation/scripting a lot more to allow people to build and share their own models.
"This will gave me permission to read or modify any of the SYSCTL, GPIO, etc. registers."
No, it merely gives you the illusion this happens, nothing actually gets read, written or modified in a way that's at all helpful or has any bearing on reality. Wouldn't it be easier just to remove the code that uses/depends on them?
Without simulation people can just as well test their code on a PC using Code::Blocks with gcc/MinGW.
Having access to good peripheral simulation really is the killer functionality.
When the simulation was created, the situation was very different. MCUs often offered neither a simple way to program a ROM memory nor any hardware debug support. Peripherals were mainly analog, and digital interfaces were far less complex. Emulators were expensive and restrictive. Today you'll find more complex debug logic on the devices, accessible with little effort. Technologies like ETM trace give deep insight into your application. Today's peripherals often connect to complex buses and networks (USB, Ethernet, BTLE, etc.), and most of the effort would go into creating valid test cases with pretty smart remote nodes. Testing the application in a real-world scenario (real hardware, production-type application) is now usually the more accessible option. This is why you'll find almost no peripheral simulation for newer devices in the uVision simulator.
But simulation still has its use cases. One of them is validation of less hardware-dependent algorithms using pure core simulation. And as time goes on, MCU software will contain more and more components that are hardware-agnostic (think of signal processing, codecs, compression, protocols, etc.).
As a reference you might check out the example from http://www.keil.com/appnotes/docs/apnt_277.asp . It uses simulation to validate a ROM self-test with CRC, including simple fault injection.
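As a rough illustration of that kind of check (a generic sketch, not the code from the app note; in practice the region bounds and the reference CRC would come from your linker configuration and build tooling):

```c
#include <stdint.h>
#include <stddef.h>

/* Standard reflected CRC-32 (polynomial 0xEDB88320), computed bit by bit
 * to keep the example small. A real self-test would typically use a
 * table-driven or hardware CRC for speed. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *data++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* Compare the CRC of a ROM region against a reference value stored at
 * build time. Returns 1 on pass, 0 on fail. Fault injection in a
 * simulator amounts to patching one byte in the region and checking
 * that this function now returns 0. */
int rom_self_test(const uint8_t *rom_start, size_t rom_len, uint32_t expected)
{
    return crc32(rom_start, rom_len) == expected;
}
```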
Regards, Matthias
Just that using JTAG etc. is only meaningful if you already have reasonable hardware to run it on. And waiting for hardware costs too much.
With previous generation hardware, I have managed to get all the UART, timer, I2C, SPI, ... functionality up and running and fully debugged in the simulator using test scripts for the circuitry outside of the processor. So the same day the prototype arrived I have been able to run fully implemented live code.
And as noted - for core-only, I don't even involve the Keil tools since it's easier to test using x86-based tools.
People doing gate level SoC have entirely different expectations, and are used to writing much bigger cheques for their tools, simulators and staff. They employ people that code with the expectation the design will be right the first time around when committed to ROM/Silicon.
For most embedded board designs, it is NOT that hard to mash-up development and break-out boards, that can be a close proxy for the final design.
People need to think about portability from the outset, and be able to test/validate code in a framework they build themselves that provides the level of hardware independence they need. All the processing-heavy stuff I work on has the bugs and algorithm issues beaten out of it on PC workstations. People who need to do it by single-stepping their ARM code are just fools.
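One minimal sketch of what such a hardware-independence layer can look like in C: the application talks to an interface struct, so the same code links against real UART drivers on target and against a RAM-backed mock on a PC. All names here are invented for the example.

```c
#include <stddef.h>
#include <string.h>

/* Abstract output interface the application code depends on. */
typedef struct {
    void (*write)(void *ctx, const char *buf, size_t len);
} uart_if;

/* Application-side code: knows nothing about the actual hardware. */
void log_line(const uart_if *u, void *ctx, const char *msg)
{
    u->write(ctx, msg, strlen(msg));
    u->write(ctx, "\r\n", 2);
}

/* PC-side mock: capture the output in a buffer so tests can assert on it.
 * On target, the same uart_if would instead wrap the real UART driver. */
typedef struct { char buf[256]; size_t used; } capture;

void capture_write(void *ctx, const char *buf, size_t len)
{
    capture *c = ctx;
    if (c->used + len <= sizeof c->buf) {
        memcpy(c->buf + c->used, buf, len);
        c->used += len;
    }
}
```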
If your developers can't fashion workable code without the exact/specific hardware the product will ultimately use, then you need to find better people. Unfortunately the money tends to draw a lot of talentless people into the field.
Keil isn't going to be able to manage the hundreds and thousands of chip combinations that now exist. The testing/validation of the simulation just explodes exponentially.
"If your developers can't fashion workable code without the exact/specific hardware the product will ultimately use, then you need to find better people. Unfortunately the money tends to draw a lot of talentless people into the field."
Lucky you! You evidently only work with hardware that has complete and accurate specifications.
"So the same day the prototype arrived I have been able to run fully implemented live code."
These days you would usually have no chance to repeat that feat, because PCB fabrication times have gone way down compared to what they used to be. E.g. at my place of work, we usually don't even bother with readily available devel/eval boards any more, because we can expect the first batch of engineering boards built from our actual target schematic to be delivered well before people could learn significantly more from an eval board than from the documentation.
Also, since the lowest levels of drivers are often acquired ready-made from third parties these days, being able to prepare your own drivers extremely quickly after the CPU has been decided upon matters relatively little.
I tend to stay away from third party drivers and middleware as much as I can.
Yes, PCB manufacturing can be quite quick, but I can start working on the software from very, very preliminary schematics, i.e. well before any layout exists to send out for PCB ordering.
For testing code on a systems level, I have done a bit of work with posix threads and sockets or shared memory - so a separate program or some additional threads have represented the outside world. So the "microcontroller software" adds SPI data to a send queue and a different thread picks up the data, decodes and sends back responses. This makes it easy to perform repeatable tests. In some situations, I have written GUI software displaying a graphical keypad etc.
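A minimal pthread sketch of that pattern, assuming a simple byte queue between the "microcontroller software" and a thread playing the external peripheral (the queue size and the complement-reply "protocol" are invented for illustration):

```c
#include <pthread.h>
#include <stdint.h>

#define QSIZE 64

/* Thread-safe byte queue standing in for one direction of the SPI link. */
typedef struct {
    uint8_t buf[QSIZE];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
} byteq;

void q_init(byteq *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

void q_put(byteq *q, uint8_t b)
{
    pthread_mutex_lock(&q->lock);
    q->buf[q->head] = b;
    q->head = (q->head + 1) % QSIZE;
    q->count++;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

uint8_t q_get(byteq *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->nonempty, &q->lock);
    uint8_t b = q->buf[q->tail];
    q->tail = (q->tail + 1) % QSIZE;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return b;
}

byteq to_dev, from_dev;

/* Fake peripheral thread: reads each command byte and replies with its
 * bitwise complement, standing in for a real device's response. */
void *peripheral_thread(void *arg)
{
    (void)arg;
    for (;;)
        q_put(&from_dev, (uint8_t)~q_get(&to_dev));
    return NULL;
}
```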
What remains for the actual hardware is to verify load and timing - the PC simulation can't detect whether there will be FIFO overruns on the real hardware, or what percentage of the processor capacity will be consumed by ISR code.
Also, I need to verify on real hardware that I haven't misunderstood the register descriptions for the peripheral hardware.
"you evidently only work with hardware that has complete and accurate specificatins"
This can be a significant issue.
I had one project where I implemented the full software in N hours. Then I had to spend 5*N hours implementing workarounds for bugs in the middleware. When reporting bugs, the company in question claimed there were no bugs - even when supplied with test code repeatably showing the failures.
Then when middleware 3.0 or maybe 4.0 was released, I noticed that the trace output indicated the workarounds no longer triggered - so I could spend two days removing workaround after workaround. The middleware "without bugs" had suddenly started to function as its documentation claimed - but without any release document ever admitting to any fixes. All I know is that the company who developed the middleware had gotten at least one new developer on their team - I can only guess that this was the magical difference.
Anyway - all methods of simulation are good. Options are always good.
"Options are always good." Exactly.
I 'never' use the simulator, but if I have some heavy-crunching non-I/O stuff, I do. I 'never' use printf debug, but when there is an issue that shows up once a month at a customer site, I stick in a printf so the customer can report. I 'never' ..., but...
I'm actually using printf() debugging a lot.
I get information without stopping any real-time code and wrecking the interaction with timed operations.
And I normally always have enough CPU cycles and flash space that I can ship with trace functionality in place. Some few lines always enabled. Some possible to turn on/off on command.
It's nice to be able to sign in to a live unit out in the field and look at performance statistics etc.
Printouts might be very much 1980, but they just happen to work very well. A very minor hint is often enough to visualize what a unit is doing and why. Especially since most issues tend to be caused by incorrect configurations.
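A tiny sketch of such always-shipped trace output with a runtime verbosity switch (the names and levels are invented for the example; the "turn on/off on command" part would hook trace_set_level up to whatever command channel the unit has):

```c
#include <stdio.h>

/* Trace levels: a few TRC_ERR lines stay enabled in production,
 * higher levels can be switched on in a live unit when needed. */
enum { TRC_OFF = 0, TRC_ERR, TRC_INFO, TRC_DEBUG };

static int trace_level = TRC_ERR;   /* shipped default */

/* Cheap runtime gate in front of printf; compiles away to a single
 * integer compare when the level is off. */
#define TRACE(lvl, ...) \
    do { if ((lvl) <= trace_level) printf(__VA_ARGS__); } while (0)

void trace_set_level(int lvl) { trace_level = lvl; }
int  trace_get_level(void)    { return trace_level; }
```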
Or we know what the hardware does, and can validate and test enough that a ROM'd boot loader is actually going to function out of the gate, or the library functions do what they are supposed to on any ARM7, ARM9 or CM3, etc.
The idea that the software devs get to *** around and cause the SoC IP to be respun multiple times is alien, yes.
Perhaps you should focus on why your documentation is incomplete and inaccurate.