Fear about wearing out an SD card
Paul_Dlk
Posts: 19
Hi all,
I know the fact that SD cards have a limited number of write cycles (10,000) has been discussed before, but...
1) Does this refer to writing to the same 'address' on the SD card 10,000 times or just 10,000 writes overall? I use address loosely here to indicate the same position on the card.
2) If I open an initially small file (12 bytes) for append and add approx. 4 bytes to it every 250 ms or so, will this allow me to write to the file about 50 million times? As in a datalogger.
3) What 'could' happen to the SD card if it were removed during a write cycle to the append file? Could I still open the file, even though the last few bytes would be corrupted?
Your thoughts and assistance would be greatly appreciated.
Paul
Comments
1) No, the card actually does a lot of complex stuff in the background to take care of write problems... You will never be able to wear one out within any device's lifespan. It would take about 10 years to do so, which is longer than the lifespan of most devices.
2) If you're writing data like that, it will take a lot longer than 10 years to wear the card out.
3) If you remove the card while writing data, only the very end of the file will be corrupted, and maybe the length of the file will be incorrect. If the length is incorrect, it will be impossible to get at the data near the end of the file. So you could still read most of the file, up through the last close.
It might be better to buffer a large chunk of logged data and then, every minute or so, open the file, append the data, and then close the file. This assures the data will be valid up until the last minute.
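That batching scheme could be sketched roughly like this (Python for illustration only; the real logger would be Spin/PASM on the Propeller, and the class name, file name, and interval here are made up):

```python
import time

class BatchedLogger:
    """Buffer samples in RAM; append to the SD file only once per interval,
    so the card sees one open/append/close per batch instead of per sample."""
    def __init__(self, path, flush_interval=60.0):
        self.path = path
        self.flush_interval = flush_interval   # seconds between appends
        self.buffer = []                       # samples held in RAM
        self.last_flush = time.monotonic()

    def log(self, sample):
        self.buffer.append(sample)
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.buffer:
            with open(self.path, "a") as f:    # one open/append/close
                f.write("".join(self.buffer))
            self.buffer.clear()
        self.last_flush = time.monotonic()
```

The trade-off is exactly the one stated above: at most one interval's worth of samples is lost on power failure, in exchange for far fewer writes to the card.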
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Nyamekye,
Are you saying that I can't wear out a card?
My application consists of two files, one file contains data. This file is opened in read mode. The other file is basically an index file which points to the data that I wish to copy from the first file. Generally the data from the first file is read in a sequential manner.
My concern is that no matter what happens (SD card removed incorrectly, power crash etc) the index pointer will point to the last data read from the first file.
Perhaps I can have two index files, update one with the next address, then update the other with the same address.
At startup read both files:
If both values are the same then all is ok.
If one is greater than the other then use the higher value.
If one is corrupt then use the other value as index.
I will also have a log file to indicate if there were any inaccuracies.
Will this work? Any other suggestions?
Paul
MLC (Multi-Level Cell) NAND Flash is typically rated at 10,000 write cycles per physical sector. For Single-Level Cell (SLC) Flash, up to 100,000 write cycles per physical sector.
According to Toshiba, the inventor of Flash Memory: "the 10,000 cycles of MLC NAND is more than sufficient for a wide range of consumer applications, from storing documents to digital photos. For example, if a 256-MB MLC NAND Flash-based card can typically store 250 pictures from a 4-megapixel camera (a conservative estimate), its 10,000 write/erase cycles, combined with wear-leveling algorithms in the controller, will enable the user to store and/or view approximately 2.5 million pictures within the expected useful life of the card."
For USB Flash drives, Toshiba calculated that a 10,000 write cycle endurance would enable customers to “completely write and erase the entire contents once per day for 27 years, well beyond the life of the hardware.”
http://www.kingston.com/products/pdf_files/FlashMemGuide.pdf
What you don't want to do is defrag an SSD because there is no need to and it just adds to the wear.
One day I'm going to sacrifice a card to an experiment. Just set up a program to write and read a single file repeatedly until something breaks. Or write and read a given SD block repeatedly.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
For me, the past is not over yet.
As I was developing it, I corrupted many files, both from software bugs and from the battery running low on power.
Usually, when the file was corrupted, the result was that Windows refused to open the file.
In two cases, the entire filesystem became so corrupted that Windows refused to even mount the card, and I had to reformat it.
It was so bothersome, after taking a 3-hour hike, to find that the log of the entire trip was lost, that I modified the code to close the current logfile after about every 15 minutes of logging and open a new one, so less data would be lost if a file became corrupt.
In a way, that works even better, as after I've uploaded my trips to everytrail.com and mapped the trip, each different file displays in a different color, which helps visualize the trip.
The FAT filesystem is not robust. I've never had a problem with the SD card itself, but plenty of problems with FAT. About the only reason to continue to use FAT is to be able to move the data to Windows.
I've been thinking about abandoning FAT completely and using a simple sequential-sector filesystem instead, which for basic logging should work fine. The only drawbacks would be that transferring data to a PC would be slower over a serial link, and it would need work on the PC to read the data and create a PC file (if that's where the data eventually needs to go). Plus I'd have to write the new filesystem code.
I've had an MMC card fail internally, where a block of sectors fails to read, but I've never heard of anyone else actually experiencing an SD card write failure due to too many writes, so I'm a little skeptical of all the claims of how many writes an SD card can take without actual experience of it. I've also been tempted to take an old SD card and write it until it fails, to get a little hard data on this.
So from my experience you're about 10,000 times more likely to have FAT problems than you are to have SD card wear problems!
Paul
I have a lot of memory cards and thumb drives. The only one not working is an early design, and it started acting up after I touched a static lightning device at the Franklin Institute; the sign said that the display had more electricity going through it than your average home. The chips were just placed on the board and enclosed in the case, which means no surface-mount soldering; the chip(s) were just held in place. I think more thumb drives get damaged from being in people's pockets, and it depends on how they handle them; I suffer in my home from static electricity. As a rule I replace my thumb drives every two years with new ones, because that is my backup plan, and I keep the old thumb drives or I may give them away.
I have an SD card for Propeller (actually Hydra) which would unreliably read games added to the card. I don't know why,
but different SD card drivers for the Propeller are consistently better than others at reading some files
and not others, such as fsrws for Propdos, femtobasic, and Hydra SD menu demo, although backing up
the card and reformatting it somehow improves the readability. This problem was noticed maybe 2 years
ago and I have not updated Hydra to use newer versions of those SD card apps.
There was some cheap fake flash a few years ago which would try to compress data, or just lie about being bigger than it was and overwrite files when it was full. Allegedly, anyway, because none that I ever had were like that.
I HAVE NEVER HAD AN SD CARD FAIL IN A CAMERA OR A PC. I AM RUNNING AN OS ON ONE NOW.
My solution for the potential of failure is to never erase a file except by formatting. That should keep the
file system linear, unfragmented, and avoid wear leveling. I even plan to use SD cards in app specific situations
in such a way that data is appended a sector or block or 32K at a time, without a filesystem.
FLASH FAILURE I HAVE NOTICED IS A LOT WORSE THAN REPORTED:
I used PROMs, EPROMs, and microcontrollers that were either one-time programmable or UV-erasable, and found those to be very reliable. When reprogrammed, they rarely were corrupt. But they are generally rated for 10-year data retention, and I've noticed that the older ones do tend to have failed almost exactly ten years after programming.
EEPROM seems very reliable in the same way as EPROM type PROMs.
FLASH Firmware and Microcontrollers I found to be severely unreliable.
Examples:
PIC16F84 could reprogram as few as ONCE, compared to the PIC16C84.
89F51 could reprogram as few as ONCE, compared to the 89C51.
So I used the flash versions only as if they could not be reprogrammed, especially since they were cheaper than the more reliable EEPROM versions.
Like EPROM, EEPROM has virtually never failed on me.
FLASH also tends to have weird features such as permanent partial write protection,
which may or may not be removable by an ERASE ALL feature. I have seen this in the
FLASH versions of eproms and eeproms. If ERASE ALL is disabled by these features,
the chip cannot be reprogrammed and is wasted even more than just by high premature
failure rate.
I am wary of wear levelling as it is a black box (I use the term to mean "unknown process").
I would prefer that the logical addressing were preserved along with a map of bad blocks.
That is why I decided that using Flash in embedded apps I will avoid writing the same place twice
and avoid wear levelling, and try to keep a linear run of blocks by only appending. Deleting is not
a reality anyway; it merely frags and corrupts the media by remarking the FAT to indicate unused
blocks, leaving the data within alone. Deleting is therefore a useless function compared to a
reformat, or a reformat and wipe in place of removing expired data and recycling the media.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
VIRAND, If you spent as much time SPINNING as you do Trolling the Forums,
you'd have tons of awesome code to post! (Note to self)
I have seen serial EEPROM of the type used to boot the Prop fail from overuse. This was an application done by a competitor of mine who stored the count of trucks weighed over a public scale in two EEPROM bytes. This meant the low byte was rewritten every time a truck passed over the scale, and this was on a very busy Interstate. This particular installation on I-12 got about 1500 trucks a day, and it would fail with great regularity every 6 months or so. Our competitor was charging the state $800 for a new motherboard each time this happened; I quickly figured out I could fix the problem with a $1.98 EEPROM.
It's not hard to avoid this by spreading the changes over a range of memory instead of hammering a single memory location.
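One common way to spread those writes (a sketch of the general technique, not necessarily the fix used here): keep a ring of counter slots and advance to a new slot on each increment, so each cell sees only a fraction of the writes.

```python
N_SLOTS = 16   # each cell now sees only ~1/16 of the write traffic

class SpreadCounter:
    """Event counter spread over N_SLOTS cells (a Python list stands in
    for the real EEPROM). The current count is the maximum value across
    all slots; each increment rewrites only the slot after the current
    maximum, walking round-robin through the ring."""
    def __init__(self):
        self.eeprom = [0] * N_SLOTS

    def read(self):
        return max(self.eeprom)

    def increment(self):
        i = self.eeprom.index(max(self.eeprom))
        self.eeprom[(i + 1) % N_SLOTS] = self.read() + 1
```

With 16 slots, the half-million-cycle part in the story above would have lasted roughly 16 times longer, i.e. years instead of months, for the price of a few extra bytes of EEPROM.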
The few times I've seen flash failures, they've all been traceable. Generally speaking, the majority of damage was done by ESD: products that had inadequate grounding, weren't properly designed to handle ESD, etc. On rare occasions, I've seen cells get damaged from over-writing the same location, usually WELL past the number of write cycles the datasheet says (flash manufacturers tend to be conservative, and those values are for maximum rated temperatures and maximum rated voltages).
In the above statements, I'm talking about the hardware reliability of flash. With regards to file system structures mapped to flash, just about any non-journaling file system is subject to corruption with partial writes, brown-outs, what have you. It's true of flash, hard drives, floppies, and punched tape. You can't write some of the data and expect it to be recoverable 100% of the time. Even journaling file systems are not 100% bullet-proof. It depends on how the data is damaged.
I have no qualms about designing flash into a product, assuming it's an appropriate use. Sometimes FRAM is better, sometimes battery backed-up RAM (in some cases with ghosting to flash or EEPROM, there's even parts that support this internally), sometimes hard drives.
Old MMC cards did not support wear leveling algorithms. SD cards do, as do Compact Flash (CF) cards. Wear leveling algorithms are not perfect. Repeatedly hammering the same logical sector over and over will indeed wear out the wear leveling bits (which themselves are wear leveled). This is why we don't recommend flash for /tmp file systems in Unix environments.
As far as data retention, I've yet to have one "forget". I've got two devices running in my basement that use flash for program storage. I built them in 1989, using several 8Kx8 flashes. They run 24/7/365, and they've never failed (I will qualify this with saying that they don't write the flash, it's program store for an 8086 in one device, and an 8051 in another).
I'm unclear what some people's engineering experience is, based on their statements, but as someone who has been doing this for almost 30 years, I can say that when used appropriately, and with an understanding of the requirements and limitations, flash is a safe, stable, long-term technology.
--jc
One thing I do have to say, though -- those EEPROMs failed (I replaced six or seven of the damn things before we retired that stupid thing) at almost exactly the half-million point claimed by the manufacturer. Of course, that was also pretty much the worst case situation for abusing the part too.
I described my experience with various devices.
My solution for using flash is to avoid deletes and keep the fs linear until the chip is full,
then dump it and mark it clear. A quick format doesn't really erase every bit so it extends the life.
Doesn't this method avoid what you call "churning"?
A delete doesn't really erase anything so if you don't delete you don't need wear leveling.
My data has no "political" value whatsoever so there is no need to even think of erasing it,
since it almost never even fills up the (flash,etc) storage devices.
This means that if you buffer your data such that you always write 512 bytes to the SD card, you are golden, because you will get >10,000 erases for each sector.
This also means that if you write one byte at a time, and your file system flushes after every byte, you will wear out the card 512 times faster.
Of course, the above is based on hazy memories of what I read in the past, and your mileage may vary.
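The buffering idea above might look like this (Python sketch; the `write_sector` callback is a hypothetical stand-in for the actual SD block-write routine):

```python
SECTOR = 512

class SectorWriter:
    """Accumulate bytes and hand off only whole 512-byte sectors, so each
    SD sector is programmed once instead of once per byte appended."""
    def __init__(self, write_sector):
        self.write_sector = write_sector   # callback taking exactly 512 bytes
        self.pending = bytearray()         # partial sector held in RAM

    def write(self, data):
        self.pending += data
        while len(self.pending) >= SECTOR:
            self.write_sector(bytes(self.pending[:SECTOR]))
            del self.pending[:SECTOR]
```

Anything still in `pending` at power-off is lost, which is the same trade-off as any other RAM buffering scheme; flush on a timer if that matters.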
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
www.mikronauts.com E-mail: mikronauts _at_ gmail _dot_ com 5.0" VGA LCD in stock!
Morpheus dual Prop SBC w/ 512KB kit $119.95, Mem+2MB memory/IO kit $89.95, both kits $189.95 SerPlug $9.95
Propteus and Proteus for Propeller prototyping 6.250MHz custom Crystals run Propellers at 100MHz
Las - Large model assembler Largos - upcoming nano operating system
Consider: A virgin file system. You create a file (writes the root directory sector). You add data. On the first write to the file, and after you cross a cluster boundary, the FAT gets updated. You *could* leave it in RAM, but if you lose power, you've lost the written data. So let's assume you write the FAT every time it needs to. Every 4K, 16K or 32K of data you write, you're rewriting the SAME FAT sector.
For FAT file systems, the directory sectors and the FAT sectors are subject to a massive number of writes, relative to the actual number of data sectors written. Is it enough to wear them out? Depends on how often you're causing that to happen. If you're recording a high speed data stream to flash, maybe. If you're writing a >byte< every 10 minutes, probably never.
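To put rough numbers on that (a back-of-envelope sketch assuming FAT16 with 16 KB clusters; actual geometry varies by card and format):

```python
SECTOR = 512
CLUSTER = 16 * 1024                      # assumed 16 KB clusters (FAT16)
ENTRIES_PER_FAT_SECTOR = SECTOR // 2     # FAT16: 2 bytes per entry = 256

# One FAT sector's worth of cluster entries maps this much file data:
data_per_fat_sector = ENTRIES_PER_FAT_SECTOR * CLUSTER   # 4 MB

# Writing that much data sequentially rewrites the SAME FAT sector once
# per newly allocated cluster:
fat_writes = data_per_fat_sector // CLUSTER              # 256 rewrites
```

So every 4 MB of sequential data costs one FAT sector about 256 rewrites; the wear-leveling controller absorbs this on an SD card, but on raw NAND it is exactly the hot-sector problem described above.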
A quick format actually ensures you don't have any lost clusters. Unless the file system driver performs absolutely no caching, it's possible to have lost clusters. And if the file system doesn't do any caching, you're going to increase the number of sector writes. Typically, where enough RAM is available, a file system will cache the FAT entry and the directory sector. As you write data to the file, the actual data sectors are updated, but the FAT table and directory sector don't get updated until a new FAT sector is required, or the file is closed. Typically, DOS will update the file size in the directory sector when the file allocates a new cluster, but this isn't always a given. There are a couple of circumstances where it will hold off that write.
Ultimately, a format is a better choice, long term, because you won't lose sectors.
And in the best of worlds, FAT isn't used at all, because it's a poor file system architecture. No journaling, poor allocation strategies, and too much re-writing of the same sector (OK for hard drives, bad for floppies and flash).
--jc
Post Edited (jcwren) : 1/24/2010 2:11:15 AM GMT
I actually wrote my own flash file system for the W25Xnnn series of flash chips for Largos. It was designed for automatic wear leveling; I keep toying with the idea of making an SD version, however this fs is not open source.
I am pretty sure that the controllers on the SD cards are not running YAFFS2 :)
What's Largos?
I am concerned about whether and how wear levelling would work in that situation (I expect it would not).
The virtual "Tape" would make a lot of sense for data logging, especially with time stamps.
Reading or Seeking by time stamp should be very simple and READ is not associated with wear as much as write.
It is not necessary to Read from the first sector, but to jump forward and backward in smaller jumps
until you are in the adjacent sector and just move to the one you want.
The method is so ANCIENT that perhaps most active programmers have never heard of it before.
No FAT updates are necessary. Perhaps some method of recalling the end of the written "tape" may be desirable, but it is not necessary, especially since we are concerned about media that is no more reliable than tape.
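That jump-forward-and-backward search over timestamped sectors is essentially a binary search; a sketch (the one-timestamp-per-sector layout and the `read_sector` callback are assumptions for illustration):

```python
def find_sector(read_sector, n_sectors, target_ts):
    """Binary-search a linear run of sectors, each beginning with a
    monotonically increasing timestamp, for the last sector whose
    timestamp is <= target_ts. read_sector(i) returns sector i's
    timestamp. Returns None if every sector is later than the target."""
    lo, hi = 0, n_sectors - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if read_sector(mid) <= target_ts:
            best = mid          # candidate; look later in the run
            lo = mid + 1
        else:
            hi = mid - 1        # too late; look earlier
    return best
```

For a card holding millions of sectors this touches only about 20 reads per seek, and reads do not contribute to write wear.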
If you're really that concerned about data integrity, use a FRAM part for a buffer. Then you have data retention and no wear limits. When you fill the buffer, write it to the SD card. You can implement details like marking whether the FRAM has been written, etc., and make yourself a high-reliability file system. Don't forget to add 32-bit CRCs or ECC, and read back everything you write. That way you can recover from resets, program crashes (you do write code that doesn't crash, right?), and to some extent, maybe even write faults to the card (if you add enough FRAM). You won't have to worry about batteries dying and losing anything in battery-backed-up memory.
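The CRC part might be sketched like this (Python; `zlib.crc32` stands in for whatever CRC-32 the firmware would compute, and the 2-byte length prefix is my own framing choice, not anything from the post above):

```python
import zlib

def frame_record(payload: bytes) -> bytes:
    """Prefix with a 2-byte length and append a CRC-32, so a torn or
    partial write is detectable when the record is read back."""
    crc = zlib.crc32(payload)
    return (len(payload).to_bytes(2, "little")
            + payload
            + crc.to_bytes(4, "little"))

def unframe_record(frame: bytes):
    """Return the payload, or None if the length or CRC doesn't check out."""
    if len(frame) < 6:                       # too short to hold len + CRC
        return None
    n = int.from_bytes(frame[:2], "little")
    if len(frame) != 2 + n + 4:              # truncated or overlong record
        return None
    payload = frame[2:2 + n]
    crc = int.from_bytes(frame[2 + n:], "little")
    return payload if zlib.crc32(payload) == crc else None
```

Reading back and unframing immediately after the SD write gives the write-verify step suggested above: a failed check means the record never made it to the card intact.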
I currently have a prototype written in Spin for my Morpheus platform. The prototype implements a Unix-like shell with about three dozen commands, stdin, stdout, and my wear leveling file system (which implements a hierarchical file system). Later, I intend to re-write it in LMM using either Catalina, or PropellerBasic (the optimizing LMM basic compiler I am working on).
The idea is that Morpheus will be a fully self-hosted system, with its own unix-like message passing OS, supporting large model (and extended memory large model) programs - it is just taking quite a while to implement everything.
There are versions of Linux that boot from CD-R, and save data on the majority of the CD which has not yet been burned.
The files are stored in Data CD format iso9660, so any other OS can read them. Like tape, it is a linear filesystem.
That works with Flash too, using FAT or a Linux fs.
Now in Linux for example, they have started to think about that. How to tell an SD card "I don't need that block any more"
such that it can be recycled. Sadly I don't recall what this system is called.
I think that I will start with a small index file and simply append the last 'pointer address' to the end of the file, as recommended by Virand. I will need to do some experimenting and deliberately remove the SD card whilst writing to it to see what data I can recover from the card upon restarting. It should be fun!!
Thanks again.
Paul
A file-system wear leveling algorithm does this at a higher level, and is intended for block based NAND devices that have no underlying wear leveling algorithm. Running one on top of the other should be completely ineffective, because the top level has no idea how the lower layer has remapped the NAND blocks. To the best of my knowledge, SD cards have no public API for retrieving the remapping table.
--jc
It's called "Trim" and it's supported by some of the more sane vendors of late-model SSDs. (Well, those that don't have terminally broken firmware, anyway.)
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Life may be "too short", but it's the longest thing we ever do.
There is the part I do not understand. Once you have written to a block, via FAT or whatever file system, how can the SD card ever know that you no longer need that block? Once a block is written to, it is "in use" and remains "in use" forever. The SD card cannot know that you have deleted that file and no longer need its blocks.
Spot on, on all counts. The only difference is different manufacturers have different algorithms for wear leveling. A good rule of thumb is to go with the manufacturers that bet the farm on their flash storage. Someone like Sandisk is far more likely to have a great wear leveling algorithm than someone who imports cheap OEM knockoff cards and sticks their label on them (I'm looking at *you* HP).
Stacking wear leveling algorithms is counter-productive at best. Buy quality and put faith in the guys who rely on it to feed their families.