improve the speed of Data Writing to micro SD card
Susie
Posts: 4
Hello, could someone help me improve the speed of writing data to a micro SD card?
The input data come from another microchip, which transmits over UART (TX); the Propeller receives the data over UART (RX).
The data the Propeller receives is a string such as "a1234b4567c". This means signal "a" has data "1234" to be saved to one CSV file on the card, and signal "b" has data "4567" to be saved to another CSV file.
My code is attached below. At the moment it stores data at 4416 bits per second, but I would like it to reach 9600 bits per second.
Comments
You want to buffer your data as much as possible. There needs to be a buffer somewhere between the
code that reads the serial data and the code that writes it, and these two need to be separated somehow.
The reason is, writing a single 512-byte block to the SD card can introduce a delay, sometimes of up to a
good fraction of a second, and incoming serial data may (if it is asynchronous) overflow whatever small
buffer it is using in its local cog.
FSRW by default does a pretty good job of hiding this delay from you (as long as you don't always
close the file and reopen it) because it uses a write-behind block layer, but sometimes it can't hide
the delay (like when it has to go searching for a new block to write the data to).
Easiest solution here is probably to get your serial code to use as large a buffer as practical. Then,
when writing the data, use pwrite() to write as big a chunk as possible at once (a whole line if
possible) since the per-character overhead of pputc (or any Spin method) will slow things down a lot.
I didn't look at your code (what's a rar?) but the same basic ideas apply.
-tom
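The chunked-write advice above can be sketched as follows, in Python rather than Spin (FSRW's pwrite/pputc are Spin methods, so the buffered writer here is only an analogy; `parse_frame` assumes the "a1234b4567c" frame format from the original post, with the trailing "c" as an end-of-frame marker):

```python
import re

def parse_frame(frame):
    """Split a frame like 'a1234b4567c' into {signal: value}.
    The format is inferred from the example in the post, not a real spec."""
    return {m.group(1): m.group(2) for m in re.finditer(r"([ab])(\d+)", frame)}

class BufferedCsvWriter:
    """Accumulate whole CSV lines and flush them in one large write,
    instead of paying per-character overhead on every byte."""
    def __init__(self, flush_threshold=512):
        self.pending = []
        self.pending_len = 0
        self.flush_threshold = flush_threshold
        self.written = []            # stands in for the SD card

    def add_line(self, line):
        self.pending.append(line)
        self.pending_len += len(line)
        if self.pending_len >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.pending:
            self.written.append("".join(self.pending))  # one big write
            self.pending, self.pending_len = [], 0

fields = parse_frame("a1234b4567c")       # {'a': '1234', 'b': '4567'}
writer_a = BufferedCsvWriter(flush_threshold=4)
writer_a.add_line(fields["a"] + "\n")     # crosses the threshold, so it flushes
```

On the Propeller the flush call would be a single pwrite() of the joined string, which amortizes the per-call Spin overhead over many bytes.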
rokicki's comment is right. I'm writing captured data to SD as a CSV file; the timing is 10 records per second and each CSV record is 144 bytes, so 11520 bits per second. A hardware trick: toggle a Propeller pin in your write loop so you can measure (with a scope) how much time you're using. In my system, the entire loop (data acquisition and minimal display) takes approximately 30-40 ms.
Rick Murray
So if you're getting data at 9600 baud, with no pauses, that would mean at least a 1K buffer. (And this buffer
needs to be available to be filled *while* you are writing, so often that means ping-ponging between two 1K
buffers, or writing things in 512-byte chunks and using a 2048-byte circular buffer, or some other similar
approach.) If you are guaranteed to get pauses (e.g., 4 NMEA strings per second) then you can scale this
buffer down appropriately. But you want to make sure there's no way you can lose data even if any single
particular write to the SD card takes up to one second to perform.
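The sizing argument above (9600 baud is roughly 960 bytes per second, so one second of worst-case write latency needs about a 1 KB cushion) can be sketched with a circular buffer that the serial side fills and the SD side drains in 512-byte chunks. This is plain Python standing in for Spin, and the overflow policy here (drop the newest bytes) is just one deliberately simple way to lose data gracefully:

```python
class RingBuffer:
    """Circular byte buffer: serial RX fills it, the SD writer drains it
    in 512-byte chunks. On overflow the newest bytes are dropped."""
    def __init__(self, size=2048):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0      # next slot to write
        self.tail = 0      # next slot to read
        self.count = 0     # bytes currently stored

    def put(self, data):
        accepted = 0
        for b in data:
            if self.count == self.size:
                break                      # buffer full: drop the rest
            self.buf[self.head] = b
            self.head = (self.head + 1) % self.size
            self.count += 1
            accepted += 1
        return accepted

    def get(self, n=512):
        n = min(n, self.count)
        out = bytearray(n)
        for i in range(n):
            out[i] = self.buf[self.tail]
            self.tail = (self.tail + 1) % self.size
        self.count -= n
        return bytes(out)

rb = RingBuffer(2048)
rb.put(b"x" * 960)         # about one second of 9600-baud data
chunk = rb.get(512)        # the SD side takes one block's worth
```

With a 2048-byte buffer, the producer can ride out roughly two seconds of stalled writes before anything is lost.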
You may not see such high latency very often, but it *will* happen, and it's best to be prepared.
If you can't tolerate such high variance in write time, or it would require more buffering than you can
give it, then you should either figure out how to properly handle buffer overruns (i.e., choose what
data you can lose and recover gracefully), or use external SRAM or some such to increase the
buffering capacity.
SD cards sometimes have to do quite a bit internally in order to write that 512-byte block. Usually
they work very well and very quickly, but every so often they will "get busy".
There is a lot of debugging going on to send all of your reformatted data out the serial port, before it sets the variable Q:=1 to tell the SD cog to get to work. The data is ready, so why not give the SD cog the go-ahead earlier in the loop, so that the debugging and the SD writing operate in parallel? Subsequently check that Q==0 before starting the next write.
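The handshake described above can be sketched with Python threads standing in for cogs (the two Events play the role of the shared Q variable going to 1 and back to 0; all names here are illustrative, not from the poster's code):

```python
import threading

go = threading.Event()      # plays the role of Q := 1 (data ready)
done = threading.Event()    # plays the role of Q back to 0 (write finished)
record = ["a1234,b4567\n"]
sd_log = []                 # stands in for the SD card

def sd_cog():
    go.wait()                        # block until the main loop says go
    sd_log.append(record[0])         # the (possibly slow) SD write
    done.set()                       # signal completion

t = threading.Thread(target=sd_cog)
t.start()

go.set()                             # hand off early...
debug_line = "debug: " + record[0]   # ...debug formatting runs in parallel
done.wait()                          # confirm Q == 0 before the next write
t.join()
```

The point is the ordering: the go-ahead is given before the slow debug output, so the SD write and the serial debug spew overlap instead of running back to back.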
The original FullDuplexSerial has ridiculously small buffers (16 bytes for both rx and tx). There are variations that allow larger buffers. The 4-port object Duane mentioned is limited only by the extent of free memory.
Still, given that you are time-stamping incoming readings, you might need to read in data as it arrives, rather than wait out any system slowdowns. Read it from the serial port, parse it and time-stamp it into another big buffer, and write from that buffer to the SD card. Double buffering, that is.
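Double buffering as described above, again sketched in Python (the timestamps are illustrative; on the Propeller you would stamp with CNT ticks rather than time.time()):

```python
import time

class DoubleBuffer:
    """Readings are time-stamped into the active list as they arrive;
    swap() hands the filled batch to the SD writer and starts a fresh
    one, so a slow write never blocks incoming data."""
    def __init__(self):
        self.active = []

    def add(self, reading, stamp=None):
        self.active.append((time.time() if stamp is None else stamp, reading))

    def swap(self):
        batch, self.active = self.active, []
        return batch

db = DoubleBuffer()
db.add("a=1234", stamp=0.1)
db.add("b=4567", stamp=0.2)
batch = db.swap()            # this batch goes to the SD card...
db.add("a=1300", stamp=0.3)  # ...while new readings keep arriving
```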
It might be easier always to start runs with a fresh (blank) SD card, so that free blocks are plentiful and fsrw won't have to hit the bumps. Tomas, isn't it the case that the slowdowns come about when the card fills up and/or becomes fragmented?
The card itself probably doesn't "know" that it is blank, so the real latency killer (which is wear leveling and reallocation of
blocks, completely internal to the card) probably won't be affected much by "blanking" the card.
But certainly the *filesystem* will be happier (faster) the emptier it is.
-tom
Jonathan
Locking as in the write protect switch, or as in file locking so only one thread can write at a time (though perhaps others can read) ?
Since most software libraries do not support unlocking, the card would essentially look like it was dead unless unlocked.