Thought I would ask the question to see if anyone else has come across this problem before.
Setup:
- STM32F103 @ 72 MHz, RTX 4.21 and the Flash FS, latest version of the toolchain.
- Keil SDIO driver for this device (SDIO_STM32F103.c, 4-bit mode), 8 KB cache, not using the journaling FAT (that gave even worse timing problems).
- SD card is a Transcend 2 GB, FAT16 (also tried formatting as FAT32).
- 8 other tasks running; RTX configured to run in privileged mode with stack checking; each task has a user-set stack sized to run at roughly 60% capacity.
I have already been onto the Keil agent but not heard anything back, and it is getting pressing as I have a customer who wants a solution ASAP.
I have a system that logs data. When the system is powered up it opens a file in preparation to log data and then closes it. It tries to open this file in a subdirectory of root.
It then appends a header containing the system's configuration settings.
Once the system is set up and ready, from then on the data is buffered in RAM and appended to the file in lumps of roughly 512 bytes. It can take 15+ minutes for the data to be collected.
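For context, the logging path is roughly the sketch below (simplified; it assumes the FlashFS standard fopen/fwrite/fclose retargeting, and the path and buffer handling shown here are placeholders, not my actual code):

```c
#include <stdio.h>

#define LOG_CHUNK 512                     /* data is appended in ~512-byte lumps */

static char log_buf[LOG_CHUNK];           /* RAM buffer filled by the logger     */
static unsigned int log_fill;             /* bytes currently buffered            */

/* Placeholder path - the real name is a long file name built from a timestamp */
static const char *log_path = "LOGS\\LXXXXXXXXXXXXXXXXXXX_MMDDYYY_HHMMSS.txt";

/* Append the buffered lump to the log file and empty the buffer. */
static int log_flush(void)
{
    FILE *f = fopen(log_path, "a");       /* open for append */
    if (f == NULL) {
        return -1;                        /* open failed */
    }
    fwrite(log_buf, 1, log_fill, f);
    fclose(f);
    log_fill = 0;
    return 0;
}
```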
In theory the system should only try to open a new file once a day if powered continually (e.g. a new log file at 2400 hrs), but in practice the power is being removed up to 40 times a day in one particular installation.
It seems the more files that exist, the slower the file system gets, until it becomes a serious problem for the entire system. E.g. I have a watchdog that must be serviced, and the FS locks the system for so long that the watchdog resets the hardware; to the user it appears the product has crashed while displaying a splash screen on the LCD during power up. (This happens with between 60 and 110 files in the subdirectory.)
The file has a long file name, e.g. LXXXXXXXXXXXXXXXXXXX_MMDDYYY_HHMMSS.txt, which occupies 4 entries in the directory entry table: the 8.3 entry holding the file data plus 3 long-name entries (each LFN entry holds 13 characters of the name).
What I have tried - debug code *****************************
Opening a new file, writing about 1 KB of data to it and then closing it, repeated until it affects a 100 ms keep-alive CAN message sent out by another task - roughly the loop sketched below.
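The stress loop is essentially this (a sketch; the short name pattern here is a placeholder, the real test uses the long file names):

```c
#include <stdio.h>
#include <string.h>

/* Debug stress loop: create a new ~1 KB file each pass and watch how long
 * fopen/fwrite/fclose take as the directory fills up. */
static void fs_stress_test(void)
{
    static char data[1024];
    unsigned int n;

    memset(data, 'A', sizeof data);

    for (n = 0; ; n++) {
        char name[32];
        FILE *f;

        sprintf(name, "LOGS\\TEST%04u.TXT", n);   /* placeholder name pattern */

        f = fopen(name, "w");             /* time this call between breakpoints */
        if (f == NULL) {
            break;                        /* stop when the open fails */
        }
        fwrite(data, 1, sizeof data, f);
        fclose(f);
        /* ...meanwhile watch whether the 100 ms CAN keep-alive still goes out */
    }
}
```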
What I have found is that after a subdirectory has more than 52 files in it there is an interruption in the RTX task timing, and with the watchdog hardware disabled, once there are more than 160 files it can take up to 10 seconds to open a file (timing measured between breakpoints). The furthest I have got is 260 files, before everything seemed dead, i.e. no longer responding to the debugger. If the SD card is formatted it behaves best, but if the files are just deleted the performance starts to degrade badly at around 70 files.
What I am most curious about is why RTX is no longer context switching (the 100 ms CAN keep-alive message and the watchdog toggling have both stopped - these are 2 separate, tick-time-triggered tasks).
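Conceptually the two tasks are just this (a sketch assuming the RL-RTX interval API and a 10 ms kernel tick; the intervals and the driver hooks are stand-ins for my real code):

```c
#include <RTL.h>

/* Stand-ins for the real drivers */
extern void can_send_keepalive(void);
extern void wdt_toggle(void);

/* 100 ms CAN keep-alive task (10 ticks at a 10 ms tick) */
__task void can_keepalive_task(void)
{
    os_itv_set(10);
    for (;;) {
        os_itv_wait();
        can_send_keepalive();
    }
}

/* Watchdog service task (interval here is illustrative) */
__task void watchdog_task(void)
{
    os_itv_set(5);
    for (;;) {
        os_itv_wait();
        wdt_toggle();
    }
}
```

One thing I want to rule out is priority: if the task doing the file I/O runs at a higher priority and spins on a status bit instead of blocking, RTX would (correctly) never schedule these two, which would match both of them stopping at once.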
The SDIO transfers are DMA controlled and the file system polls status register bits in a while-loop type structure. (When debugging, sometimes stopping the application while it appears to be stuck, the debugger ends up in that loop.)
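For reference, the kind of loop I mean has the classic poll-with-timeout shape below (a generic sketch, not copied from SDIO_STM32F103.c; the register parameters and timeout value are illustrative):

```c
#include <stdint.h>

/* Generic poll-with-timeout pattern. If the polling task is starved or
 * preempted for long stretches, a loop-count timeout like this can expire
 * even though the DMA/SDIO transfer itself completed fine. */
static int wait_transfer_done(volatile uint32_t *status_reg,
                              uint32_t done_mask,
                              uint32_t error_mask)
{
    uint32_t timeout = 1000000;                   /* arbitrary loop count */

    while ((*status_reg & (done_mask | error_mask)) == 0) {
        if (--timeout == 0) {
            return -1;                            /* reported as a timeout */
        }
    }
    return (*status_reg & error_mask) ? -1 : 0;
}
```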
I understand that to open a file it has to find a directory entry to enter the new data into, so say 100 files is 400 entries at 32 bytes each, which is roughly 25 sector reads, plus say another 6 (all done over the 4-bit SDIO peripheral), then a couple of writes to place the new data and update the FAT tables etc. (sounds simple, but I understand the implications). So why does it appear to take 10 s or so? Could it be that the polled status register is failing to give the SDIO state and there is a timeout built into the SDIO_STM32F103.c driver? Is the RTX context switching messing this status-bit polling up?
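Just to show my working on the sector count (rough numbers, assuming 512-byte sectors and 4 directory entries per long-named file):

```c
#include <stdio.h>

/* Back-of-the-envelope: sectors that must be read to scan the directory
 * when each long-named file costs 4 entries (3 LFN entries + 1 short entry). */
int main(void)
{
    const unsigned files            = 100;
    const unsigned entries_per_file = 4;
    const unsigned entry_size       = 32;    /* bytes per FAT directory entry */
    const unsigned sector_size      = 512;

    unsigned bytes   = files * entries_per_file * entry_size;    /* 12800 */
    unsigned sectors = (bytes + sector_size - 1) / sector_size;  /* 25    */

    printf("%u files -> %u bytes of entries -> %u sector reads\n",
           files, bytes, sectors);
    return 0;
}
```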
I think I have a rough idea of where I need to look, but I just wanted to run this past others to see if I have overlooked the obvious - I do not want to end up wasting time chasing an issue that doesn't exist.
TIA Dan