I wrote a small application using Keil RTX for a SmartFusion device and probed the RTX overhead using the Cortex-M3 SysTick timer (on a digital CRO). I changed the OS_TICK macro to 50 microseconds (even though the manual suggests setting it >= 1 ms). Per our design we want RTX to run the foreground processing frame every 100 microseconds. When probing the waveform on the CRO I see an RTX overhead time of ~= 34 microseconds (this changes with the OS_TICK value). Am I doing something wrong, or is this how it is supposed to behave? (If so, we won't be able to achieve microsecond time resolution using the RTX OS.)
In task phaseA I call os_itv_set (2), so the periodic wakeup interval is 2 system ticks (i.e. 100 microseconds), and I switch LED_A on and off.
When I probe the via on the eval board for LED_A (D1), I see the On operation being performed every 134.2 microseconds.
File: RTX_Conf_CM.c

// </h>
// <h>SysTick Timer Configuration
// =============================
//   <o>Timer clock value [Hz] <1-1000000000>
//   Set the timer clock value for selected timer.
//   Default: 100000000 (100MHz)
#ifndef OS_CLOCK
 #define OS_CLOCK       100000000
#endif

//   <o>Timer tick value [us] <1-1000000>
//   Set the timer tick value for selected timer.
//   Default: 10000 (10ms)
#ifndef OS_TICK
 #define OS_TICK        50
#endif
File: main.c

#include <RTL.h>
#include "a2fxxxm3.h"                     /* A2FxxxM3x definitions             */

OS_TID t_phaseA;                          /* assigned task id of task: phase_a */

#define LED_A         0x01

#define LED_On(led)   GPIO->GPIO_OUT &= ~led
#define LED_Off(led)  GPIO->GPIO_OUT |=  led

__task void phaseA (void) {
  os_evt_wait_and (0x0001, 0xffff);       /* wait for an event flag 0x0001     */
  os_itv_set (2);
  for (;;) {
    os_itv_wait ();
    LED_On (LED_A);
    LED_Off (LED_A);
  }
}

__task void init (void) {
  GPIO->GPIO_0_CFG = 5;                   /* Configure GPIO for LEDs           */
  GPIO->GPIO_1_CFG = 5;
  GPIO->GPIO_2_CFG = 5;
  GPIO->GPIO_3_CFG = 5;
  GPIO->GPIO_4_CFG = 5;
  GPIO->GPIO_5_CFG = 5;
  GPIO->GPIO_6_CFG = 5;
  GPIO->GPIO_7_CFG = 5;
  GPIO->GPIO_OUT  |= 0xFF;

  t_phaseA = os_tsk_create (phaseA, 0);   /* start task phaseA                 */
  os_evt_set (0x0001, t_phaseA);          /* send signal event to task phaseA  */
  os_tsk_delete_self ();
}

int main (void) {
  WATCHDOG->WDOGENABLE = 0x4C6E55FA;      /* Disable the watchdog              */
  os_sys_init (init);                     /* Initialize RTX and start init     */
}
I think you are on a fast track into disaster.
If you do need some real-time-critical task to perform at such a high frequency, then you should consider using a timer ISR to perform this work, and leave the RTOS to handle normal tasks.
Doing something every 100 us obviously also requires that task to finish extremely quickly to stop it from consuming too much of your total CPU capacity.
Another thing is that you have to think twice about interrupt priorities to get your other interrupts to cooperate well with this interrupt-driven "tasklet".
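To illustrate the first point, here is a minimal sketch of the ISR approach. It is only an outline under assumptions: the handler name Timer1_IRQHandler and the notion that one of the SmartFusion MSS timers has already been configured for a 100 us period are placeholders you would have to match to your device header and startup code; the GPIO macros are taken from your listing above.

#include "a2fxxxm3.h"                /* A2FxxxM3x definitions                  */

#define LED_A         0x01
#define LED_On(led)   (GPIO->GPIO_OUT &= ~(led))
#define LED_Off(led)  (GPIO->GPIO_OUT |=  (led))

/* Assumed handler name - use whichever vector your 100 us timer is wired to.
   Timer setup (period, interrupt enable) is omitted and device-specific.      */
void Timer1_IRQHandler (void) {
  /* Clear the timer interrupt flag here (register is device-specific).        */

  LED_On (LED_A);                    /* time-critical 100 us frame work here   */
  LED_Off (LED_A);
}

The RTOS then only schedules the tasks that do not need microsecond timing.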
Yep, I get your point. What makes you think I didn't think of it? So from a timer ISR, will I be able to use the RTX os_XXX calls (i.e. the kernel mailbox)?
And I guess that while the timer ISR routine is being serviced, the RTX scheduler won't be able to preempt it, even when the highest-priority task (254) is supposed to run at a regular interval (os_itv_set). Let me know if you agree.
If you did think about using an ISR, I would have expected you to post a comment about why you thought it would be better to try to push the RTOS to perform task switches at that huge speed. It's common to try to motivate the reason for trying "strange" concepts.
No - an ISR can't call any os_xxx() functions. Only the isr_xxx() functions may be called.
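For example, a timer ISR can signal a waiting task with isr_evt_set(), and the task then does the non-time-critical part at task level. A rough sketch, reusing t_phaseA from your code above (the ISR name is again just a placeholder):

#include <RTL.h>

extern OS_TID t_phaseA;              /* task id created in init()              */

/* From interrupt level only the isr_xxx() variants may be used,
   so signal the waiting task with isr_evt_set(), not os_evt_set().            */
void Timer1_IRQHandler (void) {
  isr_evt_set (0x0001, t_phaseA);    /* wake the task waiting on flag 0x0001   */
}

__task void phaseA (void) {
  for (;;) {
    os_evt_wait_and (0x0001, 0xffff);  /* block until the ISR signals          */
    /* non-time-critical part of the work goes here                            */
  }
}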
Using the same example above, even if I set OS_TICK to 10 milliseconds, my digital CRO waveform actually shows 13.4 milliseconds. That is not how an RTOS should behave.
The design is for Level A software, if you understand the meaning!!! There are existing RTOSes which can do the task-switching job with microsecond delay, with ease and precision. I am just trying my luck with RTX. Seems like it's not up to the mark...
There is, of course, an alternative - that your code isn't as correct as you think it is.
Starting to post sentences with !!! after them might not be the best route to go. If you understand the meaning!!!
The intent of the code is simple; I guess you understood it. Can you share something using the RTX kernel which does the job with microsecond delay (and precision)?
Not a huge fan of RTX, but if this is what you measured, I can assure you 100% that it's your code that is at fault.
This link gives the performance for RTX: www.keil.com/.../rlarm_ar_timing_spec.htm
see RTX overhead time of ~= 34 microseconds
I think you can find the RTX performance metrics on that website. Either way, a context switch within 34 microseconds is not bad at all! What is your processor speed? Is your internal flash MAM enabled? In any case, as already mentioned, you are trying to solve your problem with the wrong method.
Per,
I was too lazy to search for that link :-)
From your post I understood one thing for sure: even if people are not asking for your expertise, you always have to poke your nose into all threads just to show your presence and technical capabilities. Calm down, and show me some real code (from your "alternative" brain) which uses the RTX kernel and gives microsecond delay precision on the Cortex-M3 (SmartFusion).
ok - I'll stay away.