I noticed that the stack usage of the library function _printf_f differs hugely between ARM Compiler 6 (tested with 6.11) and ARMCC 5.06 (update 6): a whopping 6x more on AC6 (1968 bytes on AC6 vs 320 bytes on AC5), as reported by the linker's static stack analysis.
Any reason why AC6 uses more stack in this case?
Appreciate any thoughts on this.
ST
Hi,
Which optimization levels are you using?
Hi Milorad,
I have tried compiling from -O1 to -O3 without LTO, the stack analysis gave the same result.
The actual runtime stack watermark also rose significantly compared to the image compiled with AC5.
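As an aside on how a runtime stack watermark is typically measured: a common embedded technique is to pre-fill the stack region with a known pattern at startup and later scan for the first overwritten word. The sketch below is not the project's actual code; `stack_area` is a hypothetical array standing in for the real stack region, and a descending stack is assumed.

```c
#include <stdint.h>
#include <stddef.h>

#define STACK_WORDS 256
static uint32_t stack_area[STACK_WORDS];  /* stands in for the real stack region */

/* Fill the whole region with a watermark pattern at startup. */
static void stack_fill(void)
{
    for (size_t i = 0; i < STACK_WORDS; i++)
        stack_area[i] = 0xDEADBEEFu;
}

/* Scan from the low-address end (the stack grows down from the high end)
 * and count untouched words; used bytes = total - untouched.
 * Caveat: a stack value that happens to equal the pattern hides usage. */
static size_t stack_high_watermark(void)
{
    size_t untouched = 0;
    while (untouched < STACK_WORDS && stack_area[untouched] == 0xDEADBEEFu)
        untouched++;
    return (STACK_WORDS - untouched) * sizeof(uint32_t);
}
```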
Hi Seng,
I tried AC5 V5.06 update 6 (build 750) and AC6 V6.10.1, and the result was identical for _printf_f.
BTW, how are you using AC6 V6.11 when last version released with MDK was V6.10.1?
Best regards, Milorad
Thanks for the feedback
I went a step further and downloaded from the ARM website.
I just tried to re-compile with 6.10.1 but it is still showing the same result. Aside from _printf_f, another library function that caught my attention is _printf_fp_dec_real, which grew from 0x68 (104) to 0x340 (832) bytes.
Due to some special requirements of my project, I am executing all code from SRAM. I am not sure if this makes a difference?
Regards, ST
Perhaps AC5 at high optimization decided to do some things differently, for example using some RAM as global scratch, thus reducing the stack usage of the function itself while the total RAM usage stays similar. This could depend on the code and on the way the compiler optimizes things.
Is RAM size similar in both cases?
I'm not sure what produces such results in your case, as my test showed identical stack usage for both AC5 and AC6: 324 bytes + Unknown in both cases.
Both are with optimization level 3; I hope you are not using optimization level 0.
I think the differences are only significant for certain (C library) functions; so far I have found them particularly in the floating-point formatting printf functions. The rest of my functions showed more or less the same stack usage as well.
Please check whether your project links _printf_f (its stack usage is shown explicitly in the callgraph); only then will we be on the same page :)
I used:
printf("Test %f", 1./3);
BTW, I would suggest you start from the Blinky project and experiment there, or that you reduce your project to simple code so you can figure out why you see such a difference.
You can try what I did: use Blinky with the line I mentioned, see if you get the same result with both compilers, and go from there.
Yes, you are right: I tried with Blinky and the two compilers produced the same analysis results... hmm, now it gets interesting.
Let's see what I can find out.
Thanks!
Let us know what you find out Seng.
Just had the time to go back to this subject.
I got rid of all my floating-point string formatting in sprintf and snprintf and replaced it with the function below:
```c
static void printf_float(const char *s, uint8_t decimal, float value)
{
    char *tmpSign = (value < 0) ? "-" : " ";
    float tmpVal  = (value < 0) ? -value : value;

    int   tmpInt1 = tmpVal;           // whole part (typecast truncates)
    float tmpFrac = tmpVal - tmpInt1; // get fraction

    // Determine the number of digits used by tmpInt1
    uint8_t sigDigits = fminf(6, truncf(log10f(fmaxf(1, fabsf(tmpInt1)))) + 1);

    // Limit the decimals first, then derive the scaling gain from the
    // limited value (scaling before limiting would print the wrong fraction)
    decimal = fminf(decimal, fmaxf(0, 5 - sigDigits));
    int tempGain = powf(10, decimal);
    int tmpInt2  = truncf(tmpFrac * tempGain); // turn fraction into an integer

    // Print to string (lcd_str_large and STRBUFFSIZE are defined elsewhere);
    // the fraction is zero-padded so e.g. 0.05 does not print as "0.5"
    if (decimal > 0)
        snprintf(lcd_str_large, STRBUFFSIZE, "%s%s%-*d.%0*d",
                 s, tmpSign, sigDigits, tmpInt1, decimal, tmpInt2);
    else // print the significant digits only
        snprintf(lcd_str_large, STRBUFFSIZE, "%s%s%-5d", s, tmpSign, tmpInt1);
}
```
_printf_f wasn't linked into the final output, and the stack usage dropped by almost 2000 bytes, back to the AC5 level!
Though I am still not sure what the difference really is, it is definitely something in the library.
Regards, Seng Tak
"I got rid of all my floating point string formatting in sprintf and snprintf"
That's your answer right there!
OK.
What I am not getting is that I did the same thing (using floating-point formatting) with AC5, but the stack usage was 2000 bytes lower.
Both sprintf are ARMABI calls.
All I did was recompile the same sources after changing the compiler, and the increase appeared.
So why is there such a difference between these two libraries? Just being curious...
AC6 is awful; it optimizes code, but it does not work well.
Two facts speak against the validity of that judgement: you have demonstrated a lack of qualification to make such a broad statement, and you felt you had to post it into a months-old, solved thread that has nothing to do with that judgement.