Hi, I get a curious result when compiling the following code:
typedef union {
    unsigned char cCtrlFullByte;
    struct {
        unsigned char bEnable : 1;
        unsigned char cUnused : 7;
    } cCtrlStruct;
} CtrlUnion;

void main (void)
{
    unsigned char dummy = 0x55;
    CtrlUnion xdata bitUnion;

    bitUnion.cCtrlStruct.bEnable = dummy & 0x40;
    return;
}
    MOV   A,#0x55
    ANL   A,#0x00
    MOV   R7,A
    MOV   DPTR,#0x0000
    MOVX  A,@DPTR
    ANL   A,#0xFE
    ORL   A,R7
    MOVX  @DPTR,A
The standard states that a bit-field is interpreted as a signed or unsigned integer type consisting of the specified number of bits. In your case it's an unsigned integer 1 bit wide. The conversion rules for unsigned integers imply that the compiler should take the LSB, which it did. Regards, Mike
Fair enough, but then how do you explain a different conversion rule when the destination is a "real bit"?
void main (void)
{
    unsigned char dummy = 0x55;
    bit bitValue;

    bitValue = dummy & 0x40;
    return;
}
    MOV  A,#0x55
    MOV  C,0xE0.6
    MOV  0x20.0,C
    MOV  A,#0x55
    ANL  A,#0x44
    ADD  A,#0xFF
    MOV  0x20.0,C
The bit type is a specific Keil extension, so they can implement it in the most efficient, target-specific way they can think of, without having to worry about any other ANSI constraints. Bitfields, on the other hand, are standard ANSI, so Keil may well be constrained in their implementation (and implementations of bitfields are notorious for being inefficient!). BTW: Have you disabled the default ANSI integer promotions?
That bit of assembly code looks wrong. Should it not read:
... ANL A,#0x40 ...
bitUnion.cCtrlStruct.bEnable = ( dummy & 0x40 ) != 0;
> The bit type is a specific Keil extension,
> so they can implement it in the most efficient,
> target-specific way they can think of - without
> having to worry about any other ANSI constraints;

Why not, but that is no reason to get the opposite result when using the bit type versus a 1-bit bit-field!!!

> Bitfields, on the other hand, are standard ANSI
> - so Keil may well be constrained in their
> implementation (and implementations of bitfields
> are notorious for being inefficient!)

My problem is not efficiency but COHERENCY of the compiled code! If I use a bit-field type stored in BDATA, or a byte with sbit declarations, the result will be different!!! So if you write your code using the Keil bit type, the result will be correct, whereas using a bit-field type (for portability), the result will be wrong!!! Not really coherent! ;-( Is that a compiler bug or normal behaviour?

> BTW: Have you disabled the default ANSI Integer Promotions?

Obviously yes, but no change.

Arnaud DELEULE
OK, good workaround! I will try to make it a habit! ;-) It will avoid compiler-dependent results. Arnaud DELEULE
The code in Arnaud's example is correct. The first example, where C is set from E0.6 (actually Accumulator bit 6; E0 is the ACC SFR), is exactly the value specified. In the second example, the AND with 0x44 followed by ADD of 0xFF sets carry if either of the two bits in the mask is non-zero. Looks to me like the compiler did exactly as it should.
You're right. It does exactly what it should when the destination is a Keil "bit" type. But if the destination is a 1-bit wide field in a bit-field structure, the result is different. For me, that should not happen. Regards, Arnaud
I recently invented this macro to access bits in a byte:
#define BITREF(aByte,aPos)                                  \
    ((struct { unsigned char _0:1; unsigned char _1:1;      \
               unsigned char _2:1; unsigned char _3:1;      \
               unsigned char _4:1; unsigned char _5:1;      \
               unsigned char _6:1; unsigned char _7:1; } *) \
     &aByte)->_##aPos
byte xdata b;

BITREF(b,4) = 1;
BITREF(b,7) = BITREF(b,4);
C51 does not currently generate efficient code for accesses to 1-bit fields; it is as if C51 treats all bit fields the same way, with no special case made when the field size is 1 bit. That is a pity, because 1-bit fields are very common in real-time programming, and handling them more efficiently would be a significant benefit. Perhaps Keil can fix this for us?

[rant] Why did Keil implement bit-size variables the way that they did? Currently, Keil C51 programs have to include this sort of thing:
sbit return_path_signalling_pin = P1^0;
sbit line_fault_relay_pin       = P1^1;
sbit control_output_pin         = P1^2;
sbit control_output_plugin_pin  = P1^4;
sbit line_fail_plug_in_pin      = P1^5;
sbit comms_fail_plug_in_pin     = P1^6;
typedef enum { FALSE, TRUE } boolean;
…
bit boolean carry     _at_ 0xD7;
bit boolean aux_carry _at_ 0xD6;
bit boolean f0        _at_ 0xD5;
bit boolean overflow  _at_ 0xD2;
bit boolean f1        _at_ 0xD1;
bit boolean parity    _at_ 0xD0;

typedef struct {
    boolean carry               : 1;
    boolean aux_carry           : 1;
    boolean f0                  : 1;
    unsigned char register_bank : 2;
    boolean overflow            : 1;
    boolean f1                  : 1;
    boolean parity              : 1;
} psw_type;

data psw_type PSW _at_ 0xD0;
…
{
    PSW.f0   = 0;   // bit addressable.
    PCON.gf0 = 1;   // not bit addressable.
…
Variables with leading underscores are reserved for use by the implementation. As are functions starting with 'str'. Be aware. - Mark
I'm aware. These aren't really variables. I don't see any possibility for an implementation to interfere with _mbr in:
struct { some_type_here _mbr; } variable;
You are not comparing apples to apples here. In one case, you are assigning a value to a bit variable that is stored in the bit memory space. This is NOT a defined type in the ANSI C specification. As such, the compiler can simply set or clear the bit without masking other adjacent bits. In the other case, you are assigning a value to a bit field in a structure. The bit field is part of a char, so the changed value must be masked and logically ORed with the original byte. This generates more code. Jon
If I write a compiler and have some internal _mbr variable used in one of my libraries, it would be within my rights. Then you'd have a problem. My point is: never use _ as a leading character in anything. It's just safer. - Mark
The reason that sbit and sfr were chosen to work the way they do is historical. At the time the first C compiler for the 8051 was introduced, EVERYONE thought that a C compiler for the 8051 was a joke. The THEN-STANDARD was the Intel PL/M-51 compiler and the ASM51 assembler. If you do a little research, you'll figure out that the Keil compiler and assembler mimic many of the commands and keywords used by these Intel tools (which are no longer available). The reason for this was simple: make it easy for people to intermix the Intel and the Keil tools.

Eventually, Intel left the 8051 tools marketplace, taking with it a lot of technology that was invented in 1980-1982 -- 20 years ago. The 8051 has held up unbelievably well over the past 20 years. However, some of the original limitations are only now being addressed.

As an innovator in the 8051 marketplace, Keil Software has 2 challenges. 1) Add new functionality and features that customers request and need. 2) Break as few existing applications as possible.

Keil is not the only tool vendor that most developers work with. There are board companies, emulator companies, code generation tools, device programmers, libraries, and so on. Radical changes to the structure of the C51 programming language and to the OMF51 object file cause a ripple effect through the tools industry, and it typically takes YEARS for other third-party tool vendors to accommodate them. Ergo, we avoid changing the product in radical ways that will negatively affect customers. I mean, it really sucks when we add a new feature that would help a particular customer only to find out that company XYZ has not integrated OMF changes from 3 years ago.

If I were KING of the embedded universe, there are a lot of things I would change about tools (in general) and about almost every 8-bit, 16-bit, and 32-bit architecture I've worked with. But then there would be no software development challenge and no interesting problems to solve. Jon