Is this a reasonable way to use an SD card for temporarily storing data?
varnon
Hello all.
I am currently writing some objects to be used in Propeller-driven research. I have a bit of programming experience, but it is not my profession, so feedback would be appreciated.
My colleagues and I need to be able to record many series of events and the times at which they occur, then derive some temporal measures and export the data to a TXT or CSV file. I have written these types of programs before, but this is the first time I have been constrained by a microcontroller's memory.
On consideration of data storage, it does not appear that the Propeller has enough memory. We want to record the time in milliseconds at which events occur, and these values quickly become too large to be contained in anything but LONGs. After considering the number of events and the number of instances of each event, it is clear that there are not enough LONGs available. (For example, 10 events, 2 initial attributes per event, with a maximum of 100 occurrences of each event, would require 2,000 longs. I would prefer a maximum closer to 50 events, 2 attributes, and 500 occurrences.)
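Spelled out, the arithmetic is what rules hub RAM out: the Propeller's 32 KB of hub RAM holds only 8,192 longs, and that space is shared with the program itself.

  10 events * 2 attributes * 100 occurrences =  2,000 longs =   8 KB  (a quarter of hub RAM)
  50 events * 2 attributes * 500 occurrences = 50,000 longs = 200 KB  (over six times hub RAM)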
I am now considering writing some data to an SD card in real time. I would only write the bare minimum: an event identifier and the time it occurred. For example, "A1,2342," would be written to the file when the A1 event occurred. After a B2 event occurs, the file might read "A1,2342,B2,4563,"
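As a rough sketch of what I mean, here is how the logging could look with the FSRW object from the OBEX (the file name is just a placeholder, and I am assuming sd.mount has already been called at startup):

OBJ
  sd : "fsrw"                                    ' SD card driver from the OBEX

PUB LogEvent(id, ms)
  '' Append one "id,time," record to the raw memory file.
  sd.popen(string("events.txt"), "a")            ' open in append mode
  sd.pputs(id)                                   ' e.g. string("A1")
  sd.pputc(",")
  WriteDec(ms)                                   ' time in milliseconds
  sd.pputc(",")
  sd.pclose                                      ' close to flush to the card

PRI WriteDec(n)
  '' Write a non-negative value as decimal digits.
  if n => 10
    WriteDec(n / 10)
  sd.pputc("0" + n // 10)

Opening and closing the file on every event is the slow but safe way; keeping the file open and closing it once at the end would be faster, at the cost of losing buffered data if power drops.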
After the main loop of the program is complete, the memory file can be read back and filtered. The file can be scanned for all A1 values, which can then be placed into a long array along with other derived measures. This data can then be written in an aesthetically pleasing manner to a separate data output file, and the long array zeroed. Next, the memory file can be scanned for the next kind of event (B2, for example). This data can be placed into the same long array, then written to the data output file.
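The filtering pass could then be a single scan of the memory file per event type, something like this (FSRW again; pgetc returns -1 at end of file):

VAR
  long times[500]                                ' reused for each event type
  byte field[16]                                 ' identifier scratch buffer

PRI GatherEvent(tag) : count | c, i, val
  '' Collect every timestamp whose identifier matches tag
  '' (a string such as string("A1")) into the times array.
  sd.popen(string("events.txt"), "r")
  repeat
    i := 0                                       ' read the identifier field
    repeat while ((c := sd.pgetc) => 0) and (c <> ",")
      field[i++] := c
    if c < 0
      quit                                       ' end of file
    field[i] := 0
    val := 0                                     ' read the timestamp field
    repeat while ((c := sd.pgetc) => 0) and (c <> ",")
      val := val * 10 + c - "0"
    if strcomp(@field, tag)
      times[count++] := val
  sd.pclose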
The purpose in this is to be able to quickly store data outside of the limited space in memory, then process it and write it to an output file later, when time is not critical.
Does this make sense?
I cannot see any drawbacks, and even compared with the larger memory chips that are available, this seems like a better solution.
So far the SD card has been wonderfully easy to use. But maybe there are things of which I am unaware. Maybe the SD cards have very limited lifespans when written to so frequently? I have no idea. Maybe there is a much easier way? I really don't know.
Any feedback would be greatly appreciated.
Comments
I wouldn't worry too much about the lifetime of the SD card. But if you were doing constant writing and are OK with volatile storage, you might think about using SRAM chips.
BTW: If your Prop has a 64 KB EEPROM, you can also use the upper 32 KB for storage. Or you can add an extra EEPROM chip on the same I2C bus (p28 and p29).
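Something along these lines, using Michael Green's Basic_I2C_Driver from the OBEX (method names from memory, so check the object's documentation; addresses $8000 and up are the unused upper half of a 64 KB EEPROM):

OBJ
  i2c : "Basic_I2C_Driver"                       ' I2C driver from the OBEX

CON
  SCL    = 28                                    ' boot EEPROM clock pin
  EEPROM = $A0                                   ' device select for the boot EEPROM

PUB SaveBlock(addr, ptr, count)
  '' Write count bytes into the upper 32 KB ($8000..$FFFF).
  '' A write must not cross an EEPROM page boundary (often 64 or 128 bytes).
  i2c.Initialize(SCL)
  i2c.WritePage(SCL, EEPROM, $8000 + addr, ptr, count)
  waitcnt(clkfreq / 200 + cnt)                   ' allow ~5 ms for the write cycle

PUB LoadBlock(addr, ptr, count)
  '' Read count bytes back from the upper 32 KB.
  i2c.Initialize(SCL)
  i2c.ReadPage(SCL, EEPROM, $8000 + addr, ptr, count)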
I agree with Rayman; this sounds like a great application for using an SD card to log the data.
I personally like to add a real time clock to my data logging projects. With the clock, I can add a time stamp with each event.
Kye's FAT driver in the OBEX includes code to use a DS1307 RTC to time stamp file creations. I use the same RTC object to add the time of the events I'm recording.
The wear-leveling features in an SD card make it so that writes to data blocks on the card (~4 KB in size on some cards) are spread out evenly across all blocks. Let's say each block fails after 100,000 writes, and you write 1 MB of data every second to a 32 GB card.
Then it will take (very roughly) about (32 GB / 4 KB) / (1 MB / 4 KB) * 100,000 = 3,276,800,000 seconds to wear out the SD card... which is about 103 years... obviously the card will fail for other reasons before then if used like this. But the point is that you need not worry about wearing out the SD card if it has a large capacity and you use very little of it. Unless your plan is to run this system for years on end; then you may wish to plan more carefully.
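In general form, the back-of-the-envelope estimate is:

  lifetime in seconds ~ (card capacity / bytes written per second) * write cycles per block
                      = (32 GB / 1 MB per second) * 100,000
                      = 32,768 * 100,000 seconds, or about 103 years

Note that the block size cancels out of the estimate; only the capacity, the write rate, and the endurance per block matter.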
---
Remember that you need to use the block data transfer functions in the file system driver API to achieve any real speed when using the SD card.
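With FSRW, that means accumulating records in a hub RAM buffer and flushing them with one pwrite call rather than pushing them out a byte at a time with pputc. A minimal sketch (the 512-byte buffer size is arbitrary, and records are assumed to be smaller than the buffer):

VAR
  byte buf[512]                                  ' one sector's worth of records
  long fill                                      ' bytes accumulated so far

PUB Append(ptr, count)
  '' Copy a record into the buffer, flushing first if it would overflow.
  if fill + count > 512
    Flush
  bytemove(@buf + fill, ptr, count)
  fill += count

PUB Flush
  '' Write the buffered records in a single block transfer.
  if fill > 0
    sd.pwrite(@buf, fill)
    fill~                                        ' reset the fill count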
It is good to have some validation for my thoughts before I begin coding.
Currently, I have the Gadget Gangster module, which contains an SD card slot, so it is an easy thing to take advantage of. I may consider SRAM in the future.
For some programs, events may occur as few as 20 times in an hour... At the other extreme, maybe 5 events a second. I have a custom clock object for keeping track of time in milliseconds. Essentially I call for the time since an event and write that to the card. Sometimes it is the time since the start of the program; sometimes it is the time since another event. So far, there haven't been any problems with speed, but I'm just pressing one button on a breadboard to test it at the moment. I'm still working out reading the memory file and processing it. I think the file sizes will be in the range of a few KB, so it sounds like file wear shouldn't be a problem.
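For anyone curious, the heart of the clock object is just a counter running in its own cog. This is a stripped-down sketch rather than my exact object, but it shows the idea; cnt by itself wraps after about 54 seconds at 80 MHz, so a dedicated cog keeps a running millisecond total:

VAR
  long ms                                        ' free-running millisecond count
  long stack[16]

PUB Start
  cognew(Tick, @stack)                           ' run the counter in its own cog

PRI Tick | t
  t := cnt
  repeat
    waitcnt(t += clkfreq / 1_000)                ' exactly one millisecond per lap
    ms++

PUB Since(mark) : delta
  '' Milliseconds elapsed since a previously sampled value of ms.
  return ms - mark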
Thanks again for the thoughts!
I would really like to believe all of the above, but at the moment I just cannot. I believe we have had this debate before, and I still haven't made any progress with it. Do you have a pointer to a simple explanation from any SD card manufacturer that what you describe might be so?
Firstly, one explanation I found of wear leveling was that the SD storage is divided up into blocks (I forget what size they might be), which in turn contain sectors (512 bytes), and that within each block there is some spare space that can be used for wear leveling within that block. That is to say, if you are continually hitting a sector, then that sector is not swapped for wear-leveling purposes with all the others across the whole card, but only with a handful within the block the sector lives in.
Sorry if my terminology is all wrong here, but you see the basic idea.
Further, consider this: if I fill up my file system, then there are for sure no blocks available for any wear leveling, even if it worked as you propose. If I now delete all my files, you might think that all sectors/blocks/pages, whatever, are available for wear leveling again. BUT how does the SD card know that my file system no longer needs those deleted sectors? It does not. The SD card controller knows nothing of the file system structure and cannot assume any block is now free.
Unless, that is, your storage device supports the TRIM command and your file system makes use of it. From Wikipedia: "The TRIM command allows an operating system to inform a solid-state drive (SSD) which blocks of data are no longer considered in use ....".
As far as I know, SD cards do not support TRIM. Does any Propeller file system?
So far my conclusion is that wear leveling on SD cards is not as sophisticated as we might like, or as you have described.
It turns out I am now surrounded by 4, 8, and 16 GB SD cards that have only been in use a short time but are now failing. They cannot be read by the Linux dd command after a few GB, and they fail the f3 test. Hence my long ramble here. :)
If an SD card is full, it does a lot of shuffling of data around from block to block to wear-level. How much depends on the card manufacturer since it is controller-specific. It's unfortunate that the SD card spec doesn't seem to have anything like a TRIM command yet. Given how handy SD is in embedded systems, it's practically a must.
One kicker is that a Kingston Class 4 4 GB SD card is actually 7,741,440 sectors, or 3,963,617,280 bytes, which means I cannot do a sector-for-sector copy of the partitions on my original Transcend card, which is 4,124,049,408 bytes! The copy (dd) fails, of course, and then the ext4 partition forever fails fsck.
A couple of weeks back I copied that 4 GB image to an 8 GB Kingston card, which now can't be read past 2.5 GB despite having passed the f3 test cleanly before use.
I'm starting to despair of finding any micro SD card that works.
But again:
Even if the card was once full and is now empty, the controller knows nothing about that, as there is no TRIM. So even if blocks were shuffled around within the entire free space on the card, which I don't believe they are, there would still be a lot of shuffling on an empty card that was once full.
I have a MacBook Air with a solid-state drive, and despite the axe hanging there waiting to strike, I wouldn't want to give up the speed, convenience, and mechanical ruggedness. (Reminder to self: back up frequently!)
This means that TRIM support would not be necessary. You get the idea of TRIM, however. But in order to get the benefit of what I described, a brand-new card is needed, and no more than 1 MB of data is EVER written. I did not say to fill the card up and then delete.
And by "ever written", I mean the disk has no more than 1 MB on it at any time. You can write to that 1 MB part all you want, but not to any more than that.
Yes, I understand your idea. And I suspect it has merit.
However, I'm not convinced that when writing to a restricted set of sectors, say 1 MB's worth as in your example, the wear leveling will then make use of all the unused 32 GB worth of sectors.
From what I have read (I can't find the link now), the SD card is divided up into many areas, each much smaller than 32 GB. Each area has some extra spare blocks that don't normally show up in the size of the device. The wear leveling only works within those areas, using each area's spare space.
Your plan also falls down if I test my cards prior to use with a program like f3, or just by writing and reading the whole card with the Unix dd command. Such a test is desirable, as there are many fake cards on the market that are smaller than advertised. Even legitimate 4 GB cards come in a variety of sizes around the 4 GB mark.
As it stands, I have a pile of 4, 8, and 16 GB uSD cards here, all quite new, none used heavily, and most of them fail the f3 test and can't even be read all the way through with dd. Though why wearing out a sector would prevent it from being read is a mystery to me as well.
I just don't feel I can trust any SD card at the moment.
My thoughts about this reflect what I have learned about solid-state disks. The process I described above is true for very advanced solid-state disks with wear leveling. All vendors implement their own algorithm.
---
I believe there are more expensive SD cards on the market designed to have longer write lives.
Fine with me. You have good reason.
Propforth provides basic tools for logging to SD. The new version (when we finally get done testing) will have specific support for logging with "precision timing". This means it will be a bit more precise than using an RTC chip as a time base, but might still drift by (milli?)seconds over days due to large temperature changes, etc. It should handle data at nearly the transfer speed/bandwidth of the SD. The data is stored in block format on the SD, "internal to Forth", so a simple utility would be required to read it back off to a Windows machine, etc.
This is a little off-topic, but maybe not. Perhaps a really old idea could be implemented using SD which would make it faster when using it for memory-type applications.
Anyone remember "Relative Files" on the Commodore 64?
http://www.atarimagazines.com/compute/issue40/relative_files.php
Maybe I'm barking up the wrong tree here, and perhaps accessing the sectors of the SD card makes the most sense, but something like this would keep the card FAT32-compatible and just add a few "funky" files.
OBC
The benefit is that you don't need to change or even read the FAT. The usual FSRW functions are still available and fully functional even while the VMem is used. This means that while using 4 different memory-mapped files, you can also open another file with FSRW for read or read/write.
I think that with some address calculations this comes close to the concept of "Relative Files".
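For illustration, the address calculation is the same trick relative files used: the byte offset of record n is simply n * RECSIZE. Assuming FSRW's seek and pread (the file name and record layout here are just placeholders), a fixed-length record fetch might look like:

CON
  RECSIZE = 8                                    ' two longs per record: id, time

PUB ReadRecord(n, ptr)
  '' Fetch fixed-length record n into the buffer at ptr.
  sd.popen(string("records.dat"), "r")
  sd.seek(n * RECSIZE)                           ' record address = n * RECSIZE
  sd.pread(ptr, RECSIZE)
  sd.pclose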