Hi Keil, I recently found a useful feature in MDK-5: I can measure code execution time using t1 and t2 in the bottom-right corner of the debugger while simulating. However, I noticed a difference between the time I get from single-stepping and the time I get from full-speed execution over the same function. Why does this happen? Is the time reported by t1 and t2 close to the execution time on real hardware? And how does Keil compute t1 and t2: by counting machine cycles, or some other way? Waiting online anxiously... Best regards, Frank
Most Cortex-M3/4/7 designs integrate the DWT cycle counter (DWT_CYCCNT), and people frequently use it for this. For designs without it, configure a high-rate timer and read its count before and after the code in question.
Hi, Do you mean that Keil uses the integrated DWT_CYCCNT to calculate t1 and t2, or something else? Best regards, Frank
I mean that people really use DWT_CYCCNT when they want to do cycle-accurate benchmarks. It is a free-running counter that advances by one every CPU clock cycle.