
improve the speed of Data Writing to micro SD card

Susie Posts: 4
edited 2012-04-12 06:19 in Propeller 1
Hello, could someone help me improve the speed of writing data to a micro SD card?
The input data come from another microcontroller that transmits over UART (TX), and the Propeller receives the data over UART (RX).
The data the Propeller receives is a string such as "a1234b4567c": the "a" signal's value "1234" should be saved to one CSV file on the card, and the "b" signal's value "4567" to another CSV file.
My code is attached below. Right now it saves data at 4416 bits per second, but I hope it can reach 9600 bits per second.
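
A hypothetical Spin sketch (not from the attached code) of how a record in that format might be parsed on the Propeller, assuming a FullDuplexSerial object "ser" that has already been started at the right baud rate; buffer sizes and method names are illustrative:

  OBJ
    ser : "FullDuplexSerial"

  VAR
    byte  valA[8], valB[8]                  ' ASCII digits for the "a" and "b" signals

  PUB parseRecord | c, i
    '' expects one record shaped like "a1234b4567c"
    repeat until ser.rx == "a"              ' sync on the start of a record
    i := 0
    repeat while (c := ser.rx) <> "b"       ' digits before "b" belong to signal a
      valA[i++] := c                        ' (a real version should also bound i)
    i := 0
    repeat while (c := ser.rx) <> "c"       ' digits before "c" belong to signal b
      valB[i++] := c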

Comments

  • rokicki Posts: 1,000
    edited 2012-04-10 10:21
    I was going to ask whether you're using Kye's code or mine, but at the speeds you're talking about it won't matter.

    You want to buffer your data as much as possible. There needs to be a buffer somewhere between the
    code that reads the serial data and the code that writes it, and these two need to be separated somehow.

    The reason is, writing a single 512-byte block to the SD card can introduce a delay, sometimes of up to a
    good fraction of a second, and incoming serial data may (if it is asynchronous) overflow whatever small
    buffer it is using in its local cog.

    FSRW by default does a pretty good job of hiding this delay from you (as long as you don't always
    close the file and reopen it) because it uses a write-behind block layer, but sometimes it can't hide
    the delay (like when it has to go searching for a new block to write the data to).

    Easiest solution here is probably to get your serial code to use as large a buffer as practical. Then,
    when writing the data, use pwrite() to write as big a chunk as possible at once (a whole line if
    possible), since the per-character overhead of pputc (or any Spin method) will slow things down a lot; a rough sketch of this is at the end of this post.

    I didn't look at your code (what's a rar?) but the same basic ideas apply.

    -tom
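
    A rough Spin sketch of that approach, assuming FSRW ("fsrw") is already mounted with a file
    open and a FullDuplexSerial object "ser" is already started; the buffer size and method name
    are arbitrary:

      OBJ
        sd  : "fsrw"
        ser : "FullDuplexSerial"

      VAR
        byte  line[64]                      ' holds one whole input record

      PUB logLine | i, c
        i := 0
        repeat                              ' assemble a complete line from the UART
          c := ser.rx
          line[i++] := c
        until c == 13 or i == 64            ' stop at carriage return or a full buffer
        sd.pwrite(@line, i)                 ' one pwrite for the whole line instead of
                                            ' one pputc call per character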
  • sidecar-racer Posts: 82
    edited 2012-04-10 10:52
    Susie,
    rokicki's comment is right. I'm writing captured data to SD for a CSV file. Timing is 10 times per second and each CSV record is 144 bytes, so 11520 bits per second. A hardware trick is to toggle a Propeller pin in your write loop so you can determine (with a scope) how much time you're using (a minimal sketch of this is below). In my system, the entire loop (data acquisition and minimal display) takes approximately 30-40 ms.
    Rick Murray
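
    A minimal Spin sketch of the pin-toggle trick; pin 16, the fsrw object and the record buffer
    are illustrative stand-ins, not Rick's actual setup:

      CON
        _clkmode = xtal1 + pll16x
        _xinfreq = 5_000_000

      OBJ
        sd : "fsrw"

      VAR
        byte  record[144]                   ' one CSV record

      PUB writeLoop | t
        dira[16] := 1                       ' scope pin as output
        t := cnt
        repeat
          outa[16] := 1                     ' pin goes high just before the SD write
          sd.pwrite(@record, 144)           ' write one 144-byte CSV record
          outa[16] := 0                     ' pin goes low when the write returns; the high
                                            ' time on the scope is the write time
          waitcnt(t += clkfreq / 10)        ' pace the loop at 10 records per second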
  • Duane Degn Posts: 10,588
    edited 2012-04-10 11:30
    It might help with your project to look at the latest incarnation of Tim Moore's four port serial object. Tracy Allen has made it easy to change both the tx and rx buffers of each port.
  • rokicki Posts: 1,000
    edited 2012-04-10 15:41
    Duane Degn wrote: »
    It might help with your project to look at the latest incarnation of Tim Moore's four port serial object. Tracy Allen has made it easy to change both the tx and rx buffers of each port.

    Just to make sure I'm clear: I recommend introducing buffering for about one total second of data, to be safe.

    So if you're getting data at 9600 baud, with no pauses, that would mean at least a 1K buffer. (And this buffer
    needs to be available to be filled *while* you are writing, so often that means ping-ponging between two 1K
    buffers, or writing things in 512-byte chunks and using a 2048-byte circular buffer, or some other similar
    approach; a rough sketch of the circular-buffer variant is at the end of this post.) If you are guaranteed to
    get pauses (i.e., 4 NMEA strings per second) then you can scale this buffer down appropriately. But you want
    to make sure there's no way you can lose data even if any single write to the SD card takes up to one second
    to perform.

    You may not see such high latency very often, but it *will* happen, and it's best to be prepared.

    If you can't tolerate such high variance in write time, or it would require more buffering than you can
    give it, then you should either figure out how to properly handle buffer overruns (i.e., choose what
    data you can lose and recover gracefully), or use external SRAM or some such to increase the
    buffering capacity.

    SD cards sometimes have to do quite a bit internally in order to write that 512-byte block. Usually
    they work very well and very quickly, but every so often they will "get busy".
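
    A rough Spin sketch of the 2048-byte circular-buffer variant: one cog fills the buffer from
    the serial port while another drains it to the card in 512-byte chunks. Cog launching,
    mounting, and overflow handling are omitted, and the names are illustrative:

      OBJ
        sd  : "fsrw"
        ser : "FullDuplexSerial"

      VAR
        byte  buf[2048]                     ' circular buffer: four 512-byte chunks
        long  head, tail                    ' head = bytes received, tail = bytes written

      PUB fillFromSerial                    ' run this in its own cog
        repeat
          buf[head // 2048] := ser.rx       ' store each incoming byte...
          head++                            ' ...and advance the fill count

      PUB drainToSd                         ' run this in another cog
        repeat
          if head - tail => 512                       ' a full 512-byte chunk is ready
            sd.pwrite(@buf + (tail // 2048), 512)     ' 2048 is a multiple of 512, so a
            tail += 512                               '   chunk never wraps mid-write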
  • Tracy Allen Posts: 6,664
    edited 2012-04-10 21:19
    Susie, I did take a look at your code, and it is nicely done the way you have your SD writing process going on in its own cog, while the serial data is coming in and being parsed in the original cog.

    There is a lot of debugging going on, sending all of your reformatted data out the serial port, before the program sets the variable Q:=1 to tell the SD cog to get to work. The data is ready, so why not give the SD cog the go-ahead earlier in the loop, so that the debugging and the SD writing operate in parallel? Then check that Q==0 before starting the next write (a bare-bones sketch of this handshake is at the end of this post).

    The original FullDuplexSerial has ridiculously small buffers (16 bytes for both rx and tx). There are variations that allow larger buffers. The 4-port object Duane mentioned is limited only by the extent of free memory.

    Still, given that you are time-stamping incoming readings, you might need to read in data as it arrives, and not wait for any system slowdowns. Read it from the serial port, parse and time-stamp it into another big buffer, and write from that buffer to the SD card. Double buffering, that is.

    It might be easier always to start runs with a fresh, that is blank, SD card, so that blocks are plentiful and fsrw won't have to hit the bumps. Tomas, isn't it the case that the slowdowns come about when the card fills up and/or becomes fragmented?
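
    A bare-bones Spin sketch of that handshake; receiveAndParse, sendDebugOutput and the record
    buffer are placeholders for whatever the actual code does, not its real names:

      OBJ
        sd : "fsrw"

      VAR
        long  q                             ' 1 = record handed to the SD cog, 0 = SD cog idle
        long  reclen                        ' number of bytes in the current record
        byte  record[64]                    ' the reformatted record to be written

      PUB mainLoop
        repeat
          repeat until q == 0               ' wait until the previous record is on the card
          receiveAndParse                   ' read the serial data and build the next record
          q := 1                            ' give the SD cog the go-ahead...
          sendDebugOutput                   ' ...then debug-print while the write runs

      PUB sdCog                             ' launched in its own cog
        repeat
          repeat until q == 1               ' wait for the go-ahead
          sd.pwrite(@record, reclen)        ' write the prepared record
          q := 0                            ' done; the main cog may build the next record

      PRI receiveAndParse
        ' (placeholder: read the serial data, build record and set reclen)

      PRI sendDebugOutput
        ' (placeholder: send the reformatted data out the serial port for debugging)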
  • rokicki Posts: 1,000
    edited 2012-04-11 11:18
    Tracy Allen wrote: »
    It might be easier always to start runs with a fresh, that is blank, SD card, so that blocks are plentiful and fsrw won't have to hit the bumps. Tomas, isn't it the case that the slowdowns come about when the card fills up and/or becomes fragmented?

    At the filesystem level, this is helpful, but internal to the SD card, I don't have any idea. Note that the SD card
    probably doesn't "know" that it is blank, so the real latency killer (which is wear leveling and reallocation of
    blocks, completely internal to the card) probably won't be affected much by "blanking" the card.

    But certainly the *filesystem* will be happier (faster) the emptier it is.

    -tom
  • lonesock Posts: 917
    edited 2012-04-11 11:43
    rokicki wrote: »
    ...Note that the SD card probably doesn't "know" that it is blank...
    Regarding this, there is an "Erase" command in the SD spec, which could be used instead of just zeroing the FAT. This might be a cool tool to build...the "quick SD zeroer" [8^)

    Jonathan
  • Kye Posts: 2,200
    edited 2012-04-11 21:13
    I was thinking about putting SD card locking features into my driver. I stopped after I realized how much pain and suffering I would unleash. :)
  • Tubular Posts: 4,706
    edited 2012-04-11 21:52
    Kye wrote: »
    I was thinking about putting SD card locking features into my driver. I stopped after I realized how much pain and suffering I would unleash. :)

    Locking as in the write-protect switch, or as in file locking so only one thread can write at a time (though perhaps others can read)?
  • Kye Posts: 2,200
    edited 2012-04-12 06:19
    No, as in the SD card itself is unreadable unless you unlock it with the password. ;)

    Since most software libraries do not support unlocking, the card would essentially look like it was dead unless unlocked.