Hi
I am receiving:
*** ERROR L121: IMPROPER FIXUP
MODULE: C:\KEIL\C51\LIB\C51C.LIB (PRINTF)
SEGMENT: ?PR?PRINTF?PRINTF
OFFSET: 0068H
I have tried to read threads about this, but I cannot understand it, since it appears in a .LIB file.
Any help? I really need it. Thanks for your time. Dario
You are using the COMPACT memory model and the fixup error indicates a problem with the PDATA addressing.
Is the PDATA space within 256 bytes? What version of the toolchain are you using?
I still don't get the problem. It worked fine, but I modified the code and now it does not work anymore. How come I am encountering this issue? I have no idea about the PDATA space.
I will need more assistance with this one. Sorry, I am a beginner...
C compiler: v7.20, Assembler: v7.10, Linker/Locator: v5.11
" It worked fine but i modify code and does not work anymore."
Yes, you get that! This is known as breaking the code; such code is said to be broken!
;-)
But seriously:
"I have no idea on the PDATA space."
So don't use it!
Use either the Small or Large memory model instead:
http://www.keil.com/support/man/docs/c51/c51_le_memmodels.htm
I think you can safely leave PDATA for another day as an "advanced" topic...
"I am a beginner... C compiler: v7.20, Assembler: v7.10, Linker/Locator: v5.11"
Those are quite old versions to be starting out with...
"Yes, you get that! This is known as breaking the code; such code is said to be broken! ;-)"
Good one.
But I am working on existing code that uses that model (COMPACT), so I cannot change the code. I can go back to the previous version and redo the last changes, but I'd rather understand what caused this issue. How come deleting code (and so deleting variables) causes the space to be insufficient now? Shouldn't I expect the other way round? Deleting lines that declare variables (some of them static) should free space in PDATA, shouldn't it?
Since I have no time-demanding issues, maybe the Large model is OK, but I do not feel comfortable changing things without knowing what is going on...
Thanks for your reply. Dario
"How come deleting code (and so deleting variables) causes the space to be insufficient now?"
Do you have any linker warnings, "... L16 uncalled segment ..."?
"maybe Large model is ok"
No, it is not OK; go for 'small' if you decide to change.
Erik
Plenty, around 75 warnings.
Best practice is to solve the errors/warnings in the order they appear.
Is the fixup error the first error that you are getting from the linker?
You really should start fixing all the warnings.
"Best practice is to solve the errors/warnings in the order they appear."
Absolutely - one warning very often leads to another!
Thus it's often best to address them one at a time and in order
"maybe Large model is ok"
"No, it is not OK, go for 'small' if you decide to change."
Please explain, if you can, why you think it is not ok for the OP to change from the compact model to the large model.
The large model is slower than a snail and is a glutton for code space.
PS I respond to this from you simply because you screwed up and made a reasonable request. Do not expect me to pick up responding to your usual crap.
"the large model is slower than a snail ..."
But the OP specifically stated that he has, "no time demanding issues".
If a slow snail is fast enough, then there's no reason not to use it!
"...and is a glutton for codespace"
Again, if codespace is not at a premium, that's not a problem.
With a large application, where most data is going to have to be in XDATA anyhow, I can't see why using the Large model would be a big issue?
As you've just said yourself elsewhere, you're happy to take the codespace hit and turn off the optimiser to give debuggable code - so I don't see why it's evil to take these hits and use the Large model where it makes sense...
The problem I have with using the large model is that it is far easier to correctly assign xdata to the variables where "slow is allowed" than to use the large model and 'catch' all the cases where DATA should be used.
I, personally, have a problem with the attitude "this is not critical, so let us not worry about it", since that usually comes back and bites you in a large muscle.
"you're happy to take the codespace hit and turn off the optimiser to give debuggable code"
YES, there is the advantage of "debuggable code"; the only 'advantage' of the LARGE model is that it allows you to be lazy.
Please do not turn "I am not using the optimizer" into "I am not interested in optimal code". I do everything reasonable to get the fastest, smallest result without using the optimizer.
"the problem I have with using the large model is that it is far easier to correctly assign xdata to variables where "slow is allowed" than to use the large model and 'catch' all cases where DATA should be used."
I see; but I was specifically talking about large applications where most data is going to have to be in XDATA anyhow - in which case littering 98% of all definitions with an 'xdata' qualifier is pointless and just adds unhelpful "noise" to the source.
In such cases, "slow is allowed" is the rule and "fast required" is the exception - so only those should be specially qualified as DATA (or IDATA or whatever)
"Please do not make that I am not using the optimizer, into that I am not interested in optimal code."
Never intended to do that: just pointing out that getting the utmost smallest code size is not "optimum" in all cases - other requirements may take precedence...