Hi, I am running a UART solution on an 8051. There is only a UART interrupt handler and no other interrupts. All other code runs in polling mode, with 4 tasks, using a minimal OS (RTX).
There are two specific functions which sometimes get called at random. These are system initialisation functions and are not executed on any path during the traffic test, so clearly some corruption is happening that causes them to be called.
I want to understand which tools/methods to use to find the root cause of such corruption symptoms. I would appreciate suggestions on the best way to debug this issue on the 8051 platform.
Best Regards. Thanks for your time.
-Rajan Batra
Hi, any thoughts on this?
Best Regards, -Rajan Batra
Insufficiently large stack, stack auto/local variables accessed beyond bounds, or otherwise being corrupted?
Errant pointers, or other code, corrupting OS tables or control structures?
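One cheap way to check the first of these on a '51: paint the unused stack area with a known pattern at startup, then see how much of it survives. A rough sketch for Keil C51 (the PAINT value, the function names and the 256-byte IDATA assumption are mine - adjust for your part):

    /* Stack high-water / overflow check for a classic 8051.
     * Sketch assumes Keil C51 memory-specific pointers and a device with
     * 256 bytes of internal RAM (IDATA 0x00-0xFF, stack growing upward).
     */
    #include <reg51.h>                    /* for the SP special function register */

    #define PAINT 0xAA                    /* pattern unlikely to occur by accident */

    static unsigned char idata * const top = (unsigned char idata *) 0xFF;

    void stack_paint (void)               /* call once, first thing in main() */
    {
        unsigned char idata *p = (unsigned char idata *) (SP + 1);
        for (;;) {
            *p = PAINT;
            if (p == top) break;
            p++;
        }
    }

    unsigned char stack_headroom (void)   /* call periodically from a task */
    {
        unsigned char idata *p = top;
        unsigned char n = 0;
        while (*p == PAINT) {
            n++;
            if (p == (unsigned char idata *) 0x00) break;
            p--;
        }
        return n;                         /* 0 = the stack (or a wild write) hit the top */
    }

If stack_headroom() ever reports 0, or keeps shrinking run after run, you have found your corruption path.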
Your options would depend on what "8051" we're talking about. If you've got an EPROM device, then you'll want to add code to check stack usage, look for memory corruption, and log which routines are being run. This might permit you to notice some commonality in the failures.
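For the "what routines are being run" part, a tiny trace ring buffer you can dump after the fault will stand in for a real trace tool. Another sketch (the names, the XDATA placement and the 64-entry size are my assumptions):

    /* Poor man's execution trace: each instrumented routine drops a one-byte
     * ID into a ring buffer.  When one of the "should never run" init
     * functions fires, freeze everything and dump the buffer over the UART
     * to see what executed just before it.
     */
    #define TRACE_LEN 64                  /* power of two keeps the wrap cheap */

    static unsigned char xdata trace_buf[TRACE_LEN];
    static unsigned char trace_head = 0;

    void trace (unsigned char id)
    {
        trace_buf[trace_head] = id;
        trace_head = (unsigned char) ((trace_head + 1) & (TRACE_LEN - 1));
    }

    /* Usage: trace(1); at the top of task 1, trace(2); in the UART ISR, and
     * so on.  In the init functions that should never re-run: log a marker,
     * then loop forever so the wreckage is preserved for inspection. */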
If you have external memory you could trace execution via a logic analyzer.
You could consider an ARM part with a richer debug environment. It's 2014, pretty close to being 2015.
"Your options would depend on what "8051" we're talking about. ... You could consider an ARM part with a richer debug environment." There are (e.g. SiLabs) '51s with a "rich debug environment"; the ARM/'51 decision need not be made on that basis.
There are still several jobs that will get done better/faster/cheaper with a '51 than with an ARM. On the other hand, e.g. banking '51s is ridiculous given the availability of reasonably priced ARMs.
Orbital Sciences might be using 40-year-old rocket engines that sat plastic-wrapped in Siberia for decades, but I'm pretty sure they weren't using 8051 chips to complement them. And while NASA was scrounging up 8086 devices for the Shuttle program, they stopped doing that.
There comes a time when the teaching establishment needs to teach microprocessors and assembly language using a clean 32-bit device with an orthogonal instruction set. In the late '70s/early '80s this would have been the 68K, but by the mid/late '80s it's the ARM.
And I'm a guy who's coded on more microprocessor and SoC architectures than I've had birthdays. And I'm old. Believe me, if you have a solid grasp of one micro the others are all very similar, as are the concepts of assemblers, compilers, linkers and loaders.
"And I'm old"
But are you as old as the great Erik Malund?
He's posted more comments on this site alone than most of us youngsters have had hot dinners.
There are places for every technology, and while I agree with you re "the teaching establishment", I would not use an ARM but a 4-bitter for certain apps.
Just to clarify, I have developed many things using the ARM, so my 'defense' of the '51 is not due to me being "stuck in old technology". I just cannot agree that "unconditionally the ARM" is a valid statement.
White Pages says >65, so yes, but still younger than my dad, and a few decades my senior.
ARM is a clean and clever design, by a couple of really sharp dudes (at least formerly) from Cambridge. They know their stuff, and the tools, including the availability of Keil ones in the context here, make it a very good architecture to study, and an expanding sphere of use. Someone specializing in microprocessors would do well to study a lot of them, both RISC and CISC, but as an educational vessel the ARM is a good ship to start out on.
The 8051 is clever too, but its role is diminishing, and awkwardly overloaded, to the point that it's basically caretaker silicon for hardware that's orders of magnitude more complex. Most people doing SoC work would pick something else for non-caretaker / power-management roles. Stand-alone processors are becoming increasingly irrelevant.
Note that a modern 32-bit microcontroller would probably draw less power than a 4-bit processor.
Sounds strange, but with billions of 32-bit ARM cores shipped, there is a market for producing them in quite fine geometries. So the processor core is puny inside the ring of huge I/O transistors connected to the external pins.
No one has a big enough market to justify the time/cost of moving a 4-bit processor to a reasonably modern geometry. So a single transistor in that 4-bit processor will end up consuming as much energy as a large number of transistors in the 32-bit processor.
And both the 4-bit and the 32-bit processor can be found in puny outlines.
A Cortex-M0 in 90 nm can reach 12.5 µW/MHz dynamic power at 1.2 V with a 0.03 mm² core. Switch to 40 nm and you can get 5.3 µW/MHz at 1.1 V with a 0.008 mm² core.
"The 8051 is clever too, but its role is diminishing, and awkwardly overloaded, to the point that it's basically caretaker silicon..." 1) The "awkwardly overloaded" should cease, agreed, with the availability of cost-comparable solutions that are not "awkwardly overloaded". In the olden days a lot of '51 work jumped through hoops (e.g. banking) just to use the cheap processor.
HOWEVER, there are still many, many new chips that include a '51 core alongside their 'primary function', e.g. transceivers, sensors, ... where some processing capability is desirable.
For a mundane task the ARM 'startup code' can be too heavy a burden.
Again, I like and use the ARM; I just do not agree with "use 32-bit or you are wrong".
"HOWEVER, there are still many many new chips that include a '51 core with their 'primary function' e.g. transceivers, sensors, ..... where some processing capability is desirable.
For a mundane task the ARM 'startup code' can be too heavy a burden."
The only reason I can see why there are a number of chips with an 8051 hidden somewhere inside is licensing costs.
The ARM startup code isn't any burden - have you looked at the requirements for some modern Cortex chips? Being able to write even the startup code in C (minus any assumptions about initialized global variables etc) means most developers can manage to write their own startup code.
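To make that concrete: a complete Cortex-M startup in C can be a sketch as small as this, assuming GNU-style linker symbols (_estack, _sidata and friends come from a typical linker script, not from any standard):

    /* Minimal Cortex-M startup written in C.  Note that it relies on no
     * initialized globals itself - it is what makes them usable.
     */
    extern unsigned long _estack, _sidata, _sdata, _edata, _sbss, _ebss;
    extern int main (void);

    void Reset_Handler (void)
    {
        unsigned long *src = &_sidata;
        unsigned long *dst;

        for (dst = &_sdata; dst < &_edata; )   /* copy .data from flash to RAM */
            *dst++ = *src++;

        for (dst = &_sbss; dst < &_ebss; )     /* zero .bss */
            *dst++ = 0;

        main();
        for (;;) ;                             /* main() should not return */
    }

    __attribute__((section(".isr_vector")))    /* placed first in flash by the linker script */
    void (* const vectors[])(void) = {
        (void (*)(void)) &_estack,             /* initial stack pointer */
        Reset_Handler                          /* reset entry point     */
    };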
"The ARM startup code isn't any burden - have you looked at the requirements for some modern Cortex chips? Being able to write even the startup code in C (minus any assumptions about initialized global variables etc) means most developers can manage to write their own startup code." The 'burden' of the startup is not writing the code, but the size of flash it takes. The cost of a die is related to area and a '51 with 2k Flash takes a lot less space than an ARM with 8K. Many of the 'combined' chips need a larger geometry for various reasons, and then this becomes a real issue.
Of course, licensing cost will be a factor too.
Again, I use the ARM quite frequently and have no issue with the points in this thread; it's just that the ARM is not pantyhose (it does not fit all).
Erik
"The cost of a die is related to area and a '51 with 2k Flash takes a lot less space than an ARM with 8K."
This is only true when the two chips are implemented using the same technology.
But just as the 32-bit processors normally have a core with much smaller individual transistors because of a newer process technology, they also have their flash region implemented in a newer technology. So 8kB of flash in a brand-new Cortex-M0 consumes much less die area than 2kB in a "normal" 8-bit processor. Same with RAM - the average 32-bit processor can have more flash and more RAM while still consuming less die space.
Starting with a 32-bit addressable memory range means the chip manufacturer can have a range of compatible processors, from variants with very little memory and few pins up to heavy-duty chips with lots and lots of peripheral pins, peripheral devices and memory. So the total design costs can be shared, while giving the customers a wide range of chips. Which also means that the companies designing in the chips know that they can reuse their software and their hardware when later releasing luxury editions of their products.
So even if the 4-bit or 8-bit processor has fewer transistors, you can normally still come out ahead in cost, size, power, ... with a 32-bit processor choice. Volume is after all the main driving factor when it comes to costs. And with 32-bit processors able to also cover 8-bit tasks, you just don't get the volumes in the 8-bit market except for very specific chips. Not too many companies have the volumes where they can call a chip manufacturer and say that they need 100 million custom-adapted chips.
If implementing a lamp timer, any processor architecture can be used. A bike computer? Same there - you are free to choose 8-bit or 32-bit, and the price will still be low enough and the power consumption low enough. But the 32-bit choices will have a higher reusability factor, because the smaller transistors of the newer manufacturing processes mean the 32-bit choice can throw in 10k extra transistors for peripheral functionality to use - or not use - as needed. The extra 0.001 mm² of die space doesn't matter compared to the die space consumed by the I/O transistors + bonding pads. And they do not consume extra power, because they are only powered up if the extra serial port or extra timer or extra DAC is actually enabled and used.
It's quite a number of years since "total number of transistors" actually mattered "on the outside". Within a specific family of chips, you pay more for more transistors, because the chip vendor wants more money for the big brother. But if you instead compare between different architectures, then "price per transistor" breaks down. The chip cost isn't based on the number of transistors, but on "how much can I charge and still get the market I want/need?". And when comparing between different families or architectures, "power per transistor" also breaks down, because the manufacturing process means so much more than the actual number of transistors implementing the core, the peripheral logic and the memory blocks.
Another factor, since the core transistors cost so little in die size, fabrication cost and power compared to the I/O pins, is that it's possible to include additional processor cores at almost no extra cost. Which allows a "microcontroller" to also get a slave "I/O controller". The inter-processor communication can be done using tiny 1.2 V core transistors instead of bulky I/O-pad transistors. So suddenly you can get hard real-time for specific I/O needs while still having 90% of the actual software written using a software design that need not worry about the real-time requirements. All because of two interrupt controllers, and two PCs + register banks. And this allows the 32-bit processors to claim even more market share and split the development costs over even more sold pieces, making it even more attractive to move the production to newer fab lines with even smaller transistor sizes, capacitances and trace distances.
So in the end, an 8051 with 8kB of flash is more expensive than an 8051 with 2kB of flash. But you might get a 32-bit ARM with 16kB of flash for the same price - while getting full 32-bit timers and baud rate generators that generate correct baud rates for whatever crystal you select, thanks to fractional division.
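As an example of what the fractional division buys (plain host-side C to show the arithmetic; the 25 MHz clock and the 8-bit fraction are just assumptions, real parts name their registers differently):

    /* divider = PCLK / (16 * baud), kept as fixed-point with 8 fraction bits */
    #include <stdio.h>

    int main (void)
    {
        unsigned long pclk = 25000000UL;       /* 25 MHz peripheral clock */
        unsigned long baud = 115200UL;

        /* (pclk * 256) / (16 * baud), with rounding */
        unsigned long div256 = (pclk * 16UL + baud / 2) / baud;

        double actual = (double) pclk / (16.0 * div256 / 256.0);
        printf("divisor %lu + %lu/256, actual %.1f baud (%.3f%% error)\n",
               div256 >> 8, div256 & 0xFF, actual,
               100.0 * (actual - baud) / baud);
        return 0;
    }

With an integer-only divisor you would be stuck choosing between 13 and 14 at this clock, several percent off either way; the fractional part brings the error down to a few hundredths of a percent.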
And a program custom-designed for an 8051 is more expensive than a program written for a general-purpose processor, where it's possible to use a "driver layer" to separate processor-specific code from business logic. So the same code can continue to live 20 years later, having been used in a number of different processors from different vendors, in different families and often on different architectures.
It's easy to think "this processor fits perfectly for this product". But it's almost impossible to predict how the market will move, and what requirements there will be on the product revision 2 or revision 3 or revision 4. So a product might have started with RS-232. Then moved to RS-422. Then Ethernet. Then wireless. Code written for a 32-bit processor is easier to move between processors than code squeezed into a "perfect fit" 8-bit processor.
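And the "driver layer" need not be anything grand. A sketch of the idea, with every name invented and the hardware half faked with stdio so it runs anywhere:

    /* The business logic talks to a struct of function pointers; each target
     * fills it in with its own register-level code.  Porting the product
     * means swapping one file - the logic above this layer never changes.
     */
    #include <stdio.h>

    typedef struct {
        void (*putc)(char c);          /* transmit one character           */
        int  (*getc)(void);            /* receive one, or -1 if none ready */
    } uart_driver;

    /* target-specific half - here stubbed with stdio */
    static void host_putc (char c) { putchar(c); }
    static int  host_getc (void)   { return -1; }

    static const uart_driver uart = { host_putc, host_getc };

    /* portable business logic - knows nothing about registers */
    static void send_line (const uart_driver *u, const char *s)
    {
        while (*s)
            u->putc(*s++);
        u->putc('\n');
    }

    int main (void)
    {
        send_line(&uart, "hello from the portable half");
        return 0;
    }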
Per,
"So in the end [because of smaller geometries], an 8051 with 8kB of flash is more expensive than an 8051 with 2kB of flash." You totally ignored: "Many of the 'combined' chips need a larger geometry for various reasons". For instance, a USB driver must 'flip' 5V, and maybe a bit of logic in the driver chip would make sense.
"And a program custom-designed for an 8051 is more expensive than a program written for a general-purpose processor, where it's possible to use a "driver layer" to separate processor-specific code from business logic." Maybe not the best example, but if I design a separate sensor chip (requiring large geometry) with a bit of code, the above is totally invalid.
You keep referring to 'general purpose' where I refer to 'single purpose'.
No, I use General Purpose because I mean General Purpose.
The traditional ARM microcontrollers are not "one purpose" chips. They have a general I/O peripheral setup allowing the same chip to be sold for use in a very wide number of applications. People only power up the features that are needed - and the core logic doesn't really need any extra space, because it is using the internal low-voltage power domain.
Most ARM chips already have power converters. They may be driven at 3.3 V or 5 V but internally step that down to a much lower voltage for the actual logic. Yes, the power converter takes some space, but the advantage is that the core logic then takes almost zero space because of the much smaller geometries. It's just the I/O pad transistors that are large - and they are the same size whatever geometry you use, for a 4-bit or 8-bit or 32-bit processor. Their size is directly related to how much ESD protection you want and how much current drive/sink support you want.
What you might miss here is that when using really small geometries, the engine of a CAN controller or a USB controller doesn't really take much more space than a lowly UART. And when special electrical hardware is needed, you normally use an external circuit for handling the CAN wire or Ethernet wire - especially since you want separate protection and EMI filtering when using special I/O. But for most microcontrollers, the actual I/O pins have the same internal electronics whether the pin is just GPIO or can be switched from GPIO into UART, USB, Ethernet, CAN, ... It's basically only I2C and ADC pins that may have different circuitry. And there, the needs are the same for a 4-bit processor or a 32-bit processor.
And if it were general purpose, I'd most likely use an ARM.
But for a special purpose, where the geometry may be determined by the sensing element, the advantage of using a general-purpose processor is NIL.