Hi!
I'm using the MCB2300 and TCPnet V3.7.
I have the following application:
process tsk:
    while (true)
        wait (event)
        led0_off
        do some stuff

timer1 match interrupt:
    led0_on
    set event

The process tsk has the highest priority.
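In RTX terms the application looks roughly like this (simplified sketch; the handler and macro names are placeholders, not the exact code):

#include <RTL.h>
#include <LPC23xx.h>

OS_TID tsk_id;

__irq void timer1_isr (void) {        /* timer1 match interrupt */
  LED0_ON ();                         /* led0_on (placeholder macro) */
  isr_evt_set (0x0001, tsk_id);       /* set event for the waiting task */
  T1IR        = 1;                    /* clear the match interrupt flag */
  VICVectAddr = 0;                    /* acknowledge the VIC */
}

__task void tsk (void) {              /* highest-priority task */
  for (;;) {
    os_evt_wait_or (0x0001, 0xFFFF);  /* wait (event), no timeout */
    LED0_OFF ();                      /* led0_off */
    /* do some stuff (~150 us) */
  }
}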
I checked the latency with an oscilloscope: it is about 25 us and stable. When I use TCPnet (with the 100 ms tick), I notice that the latency jitters up to 43 us.
The only other way I could reproduce this phenomenon was to create a task that loops forever (for()) and runs some code between tsk_lock()/tsk_unlock().
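The dummy task is essentially this:

__task void dummy (void) {
  for (;;) {
    tsk_lock ();     /* scheduler disabled */
    /* some code */
    tsk_unlock ();   /* event-driven task switch deferred until here */
  }
}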
So my conclusion is that even though TCPnet is standalone, it "detects" the presence of RTX and disables the scheduler to protect some non-reentrant functions?
What do you think about it?
I think you need to ask Keil support. In theory, maybe FlashFS can detect the presence of a kernel, but I doubt whether Keil do that; more likely they disable interrupts, which would be the cause of your increased jitter.
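For example, a critical section like this (purely hypothetical; __disable_irq()/__enable_irq() are RealView intrinsics) would delay the timer1 interrupt by the length of the section:

__disable_irq ();    /* timer1 IRQ is pended, not taken */
/* short non-reentrant section */
__enable_irq ();     /* pending IRQ taken here -> added latency */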
I sent a support request to Keil. I should point out that what jitters is not the period of the interrupt but the switch from interrupt to task.
With TCPnet, I do not send or receive any frames. I did the following tests:
- Start the target with the Ethernet cable unplugged: initialization takes much longer; no jitter.
- Start the target with the Ethernet cable plugged in and then unplug it: the latency jitters.
There are two types of task switch in the RTX library for ARM7/ARM9:
- A reduced context switch does not store all registers on the stack on a task switch. It is triggered explicitly by some os_ functions (os_tsk_pass, os_dly_wait, os_sem_wait, etc.). A reduced task switch is faster and needs less space on the stack.
- A full context switch stores all registers on the stack. This task switch is used, for example, on timeouts (round-robin timeout). It is slower and needs more space on the stack.
In your case there was a mixture of both types of context switch, and that is why you see jitter.
Thanks for the reply. I should add that I did not enable Round-Robin scheduling. OS task switches are done via os_itv_wait, isr_evt_set (from the interrupt), os_evt_wait_or, and os_tsk_pass() (in the TCPnet task).
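For reference, my TCPnet tasks follow the standard RL-TCPnet pattern (sketch; the interval value assumes a 10 ms system tick):

__task void tick_task (void) {
  os_itv_set (10);          /* 100 ms tick = 10 * 10 ms (assumed) */
  for (;;) {
    os_itv_wait ();         /* periodic wakeup */
    timer_tick ();          /* advance the TCPnet timers */
  }
}

__task void net_task (void) {
  for (;;) {
    main_TcpNet ();         /* poll the stack */
    os_tsk_pass ();         /* yield: reduced context switch */
  }
}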
I gave the highest priority to the process task, which lasts 150 us, and timer1 interrupts every 500 us.
I wonder what can cause a different context switch when using TCPnet, because if I implement a dummy task that runs a for() loop forever, the latency 'never' jitters.
Even though round-robin is disabled, a task switch from the TCPnet task to some other task is either a reduced context switch (on os_tsk_pass) or a full context switch (on os_dly_wait, isr_evt_set, etc.) to a higher-priority task that becomes ready. That is why the latency jitters.
You may be right about the task-switch distinction, but consider that the context switch is not 'triggered' from the TCPnet code. The sequence is: interrupt + isr_evt_set() + switch to the task that IS waiting for the event. Whichever task is running, as long as it does not lock the scheduler or disable OS interrupts, the duration of the switch from the timer interrupt context to the task should not vary unless some other interrupt occurs.
Being in doubt about the way I ran the experiment, I added a numerical measurement to my test using timer2, and I display the maximum latency measured on the LCD.
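The measurement works roughly like this (simplified sketch; timer2 is assumed to run freely, t_start is set in the timer1 ISR just before isr_evt_set()):

#include <RTL.h>
#include <LPC23xx.h>

volatile U32 t_start;              /* T2TC sampled in the timer1 ISR */
U32 max_latency;                   /* worst case, displayed on the LCD */

__task void tsk (void) {
  U32 latency;
  for (;;) {
    os_evt_wait_or (0x0001, 0xFFFF);
    latency = T2TC - t_start;      /* ticks from interrupt to task */
    if (latency > max_latency) {
      max_latency = latency;       /* value shown on the LCD */
    }
    /* do some stuff */
  }
}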
I do the following:
- Plug in the Ethernet cable and reset the CPU.
- Unplug the Ethernet cable (and restart if the displayed value is 10% over the minimum value measured, because that means an Ethernet interrupt may have occurred).
I will let it run for hours; after 10 minutes, no jitter has been observed.
When I leave the cable plugged in, there is still some Ethernet activity (ARP request processing, etc.) that may influence the test.
I will post the result of this test, but I'm confident that there will not be any jitter anymore.