Hey
I'm looking through some code where the type of a parameter in a function call is stated simply as unsigned.
The data type (char, int, long) is not stated. Does anyone know which type is assumed when it is not stated?
Yes - any 'C' textbook will tell you that.
(it's not specific to C51).
okay, so unsigned int is used...
Indeed.
In general, 'C' assumes that everything is an int unless specifically stated otherwise.
This can be bad news for an 8-bit target where the compiler uses 16 (or more) bits for an int - as C51 does...
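To make it concrete - these two prototypes declare exactly the same function (the function name here is made up):

void put_value(unsigned value);       /* "unsigned" on its own means "unsigned int" */
void put_value(unsigned int value);   /* identical declaration to the line above    */

/* The same shorthand applies elsewhere: */
unsigned long ticks;    /* really "unsigned long int" */
short offset;           /* really "signed short int"  */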
As far as I know, ints have to be at least 16 bits wide in C. An 8-bit int would not conform to the specification.
But there are some non-conforming compilers out there!
There is also something of a contradiction in the standard: it recommends that int should be the "natural" word size of the target architecture - which would mean an 8-bit int for an 8-bit architecture...
No contradiction. An int should try to be a natural size that the target can run efficiently with, but with the addendum that it must be at least 16 bits wide.
A lot of code that implements larger integer arithmetic on top of bytes would need significant modification if it took a "long" to store anything larger than a byte. But if you have an "int" that is 8 bits and a "long int" that is at least 32 bits, then you get a very interesting efficiency problem for your 8-bit code when implementing multi-precision arithmetic. §5.2.4.2.1 says that <limits.h> shall contain the relevant limits for the implementation, and that the implementation-specific values must be equal or greater in magnitude than the values given in that paragraph. So INT_MAX must be >= 32767, and LONG_MAX must be >= 2147483647.
On one hand, C is defined to try to be as natural as possible - to get an easy translation from C into native and efficient machine instructions.
But at the same time, you have to make at least some hard design decisions to give the developers some fixed points of reference. With no guarantees, you would have to use the data type "unsigned long long" and you would still not be sure that you would be able to store the value 1000 in your variable. Such a "do whatever you like" standard would make it impossible to program. And there would be an awful mapping between source code and machine instructions.
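For reference, <limits.h> shows what any given implementation actually provides - a minimal hosted example (nothing compiler-specific assumed):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standard only guarantees the minimum magnitudes:
       INT_MAX >= 32767 and LONG_MAX >= 2147483647. */
    printf("int : %d .. %d\n",   INT_MIN,  INT_MAX);
    printf("long: %ld .. %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}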
Oh yes there is!
"An int should try to be a natural size that the target can run efficient with but with the addendum that it must be at least 16-bit large."
What the spec actually says (section 6.1.2.5) is:
"A 'plain' int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>)."
But where do you see a contradiction?
Note that the text does not say anything about an int having to fit in a single register or in the accumulator. Or that a multiply or add must be possible with a single machine instruction. So there isn't any contradiction.
The text talks about a natural (not native) data type large enough to store at least the range specified by INT_MIN and INT_MAX in <limits.h>. The standard explicitly says that INT_MIN and INT_MAX must span (at least) -32767 to 32767. So the natural (not native) size for an 8-bit processor would then be 16 bits, since that is the most efficient data size that fulfills the INT_MIN/INT_MAX requirements.
Letting the 8-bitter have a 32-bit int would not be a natural choice since it would require a lot of extra instructions that are not required to fulfill the standard. A 16-bit int is a natural choice since it is the simplest-to-implement and fastest data type that does fulfill the standard. An 8-bit int is not a natural choice since it is a size that explicitly violates the requirements of the standard.
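And if you want a build to fail on a compiler that cheats on those minimums, a small <limits.h> check at compile time will do it (just a sketch, not tied to any particular compiler):

#include <limits.h>

/* A conforming implementation must provide at least these ranges;
   an "8-bit int" compiler would trip the first test. */
#if INT_MAX < 32767
#error int is narrower than the standard allows
#endif
#if LONG_MAX < 2147483647
#error long is narrower than the standard allows
#endif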
Use a [fairly] standard typedef for declarations and make notes within that file if the processor has restrictions...
/*--------------------------------------------------------------------------.
; declare my own typedefs of the standard data types                        ;
'--------------------------------------------------------------------------*/
typedef unsigned char  u8;    // CAUTION 16-bit only machine ... etc.
typedef signed   char  s8;    // CAUTION 16-bit only machine ... etc.
typedef unsigned int   u16;
typedef signed   int   s16;
typedef unsigned long  u32;
typedef signed   long  s32;
typedef float          f32;

/*----------------------------------------------------------------------.
; if so desired, the volatile data-type is defined                      ;
'----------------------------------------------------------------------*/
typedef volatile unsigned char  vu8;   // CAUTION 16-bit only machine ... etc.
typedef volatile signed   char  vs8;   // CAUTION 16-bit only machine ... etc.
typedef volatile unsigned int   vu16;
typedef volatile signed   int   vs16;
typedef volatile unsigned long  vu32;
typedef volatile signed   long  vs32;
typedef volatile float          vf32;
And then it becomes rather simple to know the sign and bit-width.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
No! Don't rely on notes - code monkeys will miss them!
Use conditional compilation to ensure that it's right!
#if defined COMPILER_A

// Definitions for Compiler 'A'
typedef unsigned char  u8;
typedef signed   char  s8;
typedef unsigned int   u16;
typedef signed   int   s16;
typedef unsigned long  u32;
typedef signed   long  s32;
typedef float          f32;

#elif defined COMPILER_B

// Definitions for Compiler 'B'
typedef unsigned char  u8;
typedef signed   char  s8;
typedef unsigned short u16;
typedef signed   short s16;
typedef unsigned int   u32;
typedef signed   int   s32;
typedef float          f32;

#else
#error Unknown Compiler!
#endif
Or
#if defined COMPILER_A
#include "compiler_a.h"
#elif defined COMPILER_B
#include "compiler_b.h"
#else
#error Unknown Compiler!
#endif
Although, if you're just starting now, it'd make sense to use the C99 standard names rather than u8, s16, etc...
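For reference, the C99 names come from <stdint.h> (the variable names below are only for illustration):

#include <stdint.h>    /* C99 fixed-width integer types */

uint8_t  flags;        /* exactly  8 bits, unsigned */
int16_t  temperature;  /* exactly 16 bits, signed   */
uint32_t tick_count;   /* exactly 32 bits, unsigned */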
Anyway, I understand that. It's not a 'bad' way to go, especially since code-monkeys have... uhm, lethologically challenged at the moment... uhm, "issues."
I'm just not fond of conditional compilation in general, but this particular use of it would be worth the effort if incorporated properly.
I would limit the conditional to e.g.

#ifndef KeilC51
#error wrong definitions, make your own
#endif
That keeps the clutter down,
and the "make your own" should scare a code monkey into quitting.
BTW Vince, I, personally, prefer U8 to u8 ....
Erik
Hmmmm, I use ALL_CAPITAL_LETTERS for #defines.
Granted, a typedef can serve a similar purpose to a #define, but I, personally, prefer u8 as opposed to U8.
(After checking, my editor [CodeWright v7.5] highlights U8 as a typedef, even though my code never actually typedefs U8. Thanks for helping me catch that.)
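To make the convention concrete (the names here are made up):

#define BUFFER_SIZE  64           /* preprocessor constant: ALL_CAPS */
typedef unsigned char u8;         /* type name: lower case           */

static u8 buffer[BUFFER_SIZE];    /* type vs. constant is obvious at a glance */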