Two Chars for a Counter vs 1 Int?

I have a case where I need to count a variable up past 255, which is more than an unsigned char can hold.

I have a friend who works on 8-bit MCUs and swears you should use nothing but 8-bit data types (like unsigned char) for optimal code speed.

So to count up using chars as the data type, I increment one char, and each time it overflows I increment the second char, until I hit the timing I want. (This is precisely how timers are operated in C51: a hi byte and a lo byte, etc.)

Is this an old wives' tale? Is it really that inefficient to use an unsigned int? Doesn't the compiler more or less break an int down into two chars for arithmetic purposes anyway?

I've got it wired up using the two chars, but I have used an int in other circumstances as well.

For all the hand-wringing in the code, if you just used an unsigned int you would end up with less code that makes more sense to the next person who reads it.