Fear about wearing out an SD card — Parallax Forums

Fear about wearing out an SD card

Paul_Dlk Posts: 19
edited 2010-01-25 01:40 in Propeller 1
Hi all,

I know the fact that SD cards have a limited number of write cycles (10,000) has been discussed before, but...


1) Does this refer to writing to the same 'address' on the SD card 10,000 times or just 10,000 writes overall? I use address loosely here to indicate the same position on the card.

2) If I open an initially small file (12 bytes) for append and add approximately 4 bytes to it every 250 ms or so, will that let me write to the file about 50 million times? As in a datalogger.

3) What 'could' happen to the SD card if it were removed during a write cycle to the append file? Could I still open the file, even though the last few bytes would be corrupted?


Your thoughts and assistance would be greatly appreciated.

Paul

Comments

  • Kye Posts: 2,200
    edited 2010-01-23 14:50
    So...

    1) No. The card actually does a lot of complex work in the background to take care of write-wear problems; you will never wear one out within a device's lifespan. It would take around 10 years to do so, which is longer than the lifespan of most devices.

    2) If you're writing data like that, it will take a lot longer than 10 years to wear the card out.

    3) If you remove the card while writing data, only the very end of the file will be corrupted, and the recorded length of the file may be incorrect. If the length is incorrect, you won't be able to reach the data at the very end, but you could still read most of what was written since the last close.

    It might be better to buffer a large chunk of logged data and then, every minute or so, open the file, append the data, and close the file. That ensures the data will be valid up to the last minute.
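    Kye's buffer-then-append pattern might look like this in Python (the class name, the 60-second default, and the file path are illustrative assumptions; on the Propeller this would be Spin with FSRW, but the idea is the same):

```python
import time

class BufferedLogger:
    """Accumulate samples in RAM; append them to the file once per interval.

    Hypothetical sketch: real logger code would also handle a missing
    or removed card.
    """
    def __init__(self, path, flush_interval=60.0):
        self.path = path
        self.flush_interval = flush_interval  # seconds between appends
        self.buffer = bytearray()
        self.last_flush = time.monotonic()

    def log(self, sample: bytes):
        self.buffer += sample
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.buffer:
            # Open/append/close per flush, so the directory entry and the
            # file length on the card are consistent after every write.
            with open(self.path, "ab") as f:
                f.write(self.buffer)
            self.buffer.clear()
        self.last_flush = time.monotonic()
```

    At 4 bytes every 250 ms, a one-minute buffer is only about 960 bytes of RAM, and the card sees roughly one append per minute instead of four per second.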

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Nyamekye,
  • Paul_Dlk Posts: 19
    edited 2010-01-23 15:24
    Thanks Kye,
    are you saying that I can't wear out a card?

    My application consists of two files, one file contains data. This file is opened in read mode. The other file is basically an index file which points to the data that I wish to copy from the first file. Generally the data from the first file is read in a sequential manner.

    My concern is that no matter what happens (SD card removed incorrectly, power failure, etc.) the index pointer will point to the last data read from the first file.

    Perhaps I can have two index files, update one with the next address, then update the other with the same address.

    At startup read both files:

    If both values are the same then all is ok.
    If one is greater than the other then use the higher value.
    If one is corrupt then use the other value as index.
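    That startup check is easy to sketch in Python (the file names and the fixed 4-byte index record are hypothetical assumptions; FSRW/Spin code would follow the same rules):

```python
def read_index(path):
    """Return the stored index, or None if the file is missing or corrupt."""
    try:
        with open(path, "rb") as f:
            raw = f.read()
    except OSError:
        return None
    if len(raw) != 4:              # expect exactly one 32-bit value
        return None
    return int.from_bytes(raw, "little")

def recover_index(path_a, path_b):
    """Apply the three rules: equal -> OK, differ -> higher, corrupt -> other."""
    a, b = read_index(path_a), read_index(path_b)
    if a is None and b is None:
        return 0                   # both corrupt: fall back to the start
    if a is None:
        return b
    if b is None:
        return a
    return max(a, b)               # equal, or one write ahead of the other
```

    Writing file A before file B means at most one of them can hold a torn write, which is what makes the max() rule safe here.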

    I will also have a log file to indicate if there were any inaccuracies.

    Will this work? Any other suggestions?

    Paul
  • edited 2010-01-23 15:52
    • Flash Cell Endurance: For Multi-Level Cell (MLC) Flash, up to 10,000 write cycles per physical sector. For Single-Level Cell (SLC) Flash, up to 100,000 write cycles per physical sector.

    According to Toshiba, the inventor of Flash Memory: “the 10,000 cycles of MLC NAND is more than sufficient for a wide range of consumer applications, from storing documents to digital photos. For example, if a 256-MB MLC NAND Flash-based card can typically store 250 pictures from a 4-megapixel camera (a conservative estimate), its 10,000 write/erase cycles, combined with wear-leveling algorithms in the controller, will enable the user to store and/or view approximately 2.5 million pictures within the expected useful life of the card.”¹

    For USB Flash drives, Toshiba calculated that a 10,000 write cycle endurance would enable customers to “completely write and erase the entire contents once per day for 27 years, well beyond the life of the hardware.”
    http://www.kingston.com/products/pdf_files/FlashMemGuide.pdf
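    The quoted 2.5-million figure is just the two numbers multiplied:

```python
pictures_per_fill = 250        # 4-megapixel shots on a 256-MB card (quoted estimate)
write_erase_cycles = 10_000    # rated MLC NAND endurance per sector

total_pictures = pictures_per_fill * write_erase_cycles
assert total_pictures == 2_500_000   # Toshiba's "approximately 2.5 million"
```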

    What you don't want to do is defrag an SSD because there is no need to and it just adds to the wear.
  • localroger Posts: 3,452
    edited 2010-01-23 17:47
    It is possible to wear out a card, but it takes some work. Using an SD card as virtual RAM for software which uses a big stack frame (*cough* C *cough*) is one way to really test the wear leveller. Someone involved with the ZiCog (I don't remember who) managed to ruin a card using it as a CP/M disk with such a swap file. However, this might also have to do with the quality of the card; there are low-end knockoffs and counterfeits out there. If you are only using a relatively small part of a large card (say a few megabytes of a 2 GB card) you should be able to abuse it pretty horribly for a long time.
  • heater Posts: 3,370
    edited 2010-01-23 18:34
    localroger: What? Did I miss something? I've never heard of a ZiCog user having that problem. Neither CP/M nor the ZiCog emulator has any kind of "swap" file.

    One day I'm going to sacrifice a card to an experiment. Just set up a program to write and read a single file repeatedly until something breaks. Or write and read a given SD block repeatedly.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    For me, the past is not over yet.
  • localroger Posts: 3,452
    edited 2010-01-23 19:36
    Hmmmm, can't seem to find it. I could have sworn someone complained that they couldn't write to the first couple of megabytes of an SD card after using it for a while with an application that aggressively swapped data to the virtual floppy disks. I could have it mixed up with something else; the GOOG is not giving me any joy.
  • David B Posts: 592
    edited 2010-01-23 19:39
    I made a battery-powered GPS logger that uses FSRW to log to an SD card.

    As I was developing it, I corrupted many files, both from software bugs and from the battery running low on power.

    Usually, when the file was corrupted, the result was that Windows refused to open the file.

    In two cases, the entire filesystem became so corrupted that Windows refused to even mount the card, and I had to reformat it.

    It was so bothersome, after taking a 3-hour hike, to find that the log of the entire trip was lost, that I modified the code to close the current logfile after about every 15 minutes of logging and open a new one, so less data would be lost if a file became corrupt.

    In a way, that works even better, as after I've uploaded my trips to everytrail.com and mapped the trip, each different file displays in a different color, which helps visualize the trip.

    The FAT filesystem is not robust. I've never had a problem with the SD card itself, but plenty of problems with FAT. About the only reason to continue to use FAT is to be able to move the data to Windows.

    I've been thinking about abandoning FAT completely and using a simple sequential-sector filesystem instead, which for basic logging should work fine. The only drawbacks would be that transferring data to a PC would be slower over a serial link, and it would need work on the PC side to read the data and create a PC file (if that's where the data eventually needs to go). Plus I'd have to write the new filesystem code.

    I've had an MMC card fail internally, where a block of sectors failed to read, but I've never heard of anyone else actually experiencing SD card write failure due to too many writes, so I'm a little skeptical of all the claims about how many writes an SD card can take. I've also been tempted to take an old SD card and write to it until it fails, to get a little hard data on this.

    So from my experience you're about 10,000 times more likely to have FAT problems than you are to have SD card wear problems!
  • Paul_Dlk Posts: 19
    edited 2010-01-23 20:29
    Thanks for the info guys, I've been away from the PC for a few hours and I'm heading away again. I'll digest the info and see where I go from here.

    Paul
  • edited 2010-01-23 21:08
    I have a digital camera with 6 megapixels which I have had for a couple of years. I have filled the camera up with pictures over and over again and I haven't worn the memory card out, so I really don't understand how someone could wear out a memory card, though it could happen.

    I have a lot of memory cards and thumb drives. The only one not working is an early design, and it started acting up after I touched a static lightning device at the Franklin Institute (the sign said the display had more electricity going through it than your average home). The chips were just placed on the board and enclosed in the case, with no surface-mount soldering; the chip(s) were just held in place. I think more thumb drives get damaged from being in people's pockets, depending on how they are handled, and I suffer from static electricity in my home. As a rule I replace my thumb drives every two years with new ones, because that is my backup plan, and I keep the old thumb drives or give them away.
  • VIRAND Posts: 656
    edited 2010-01-23 21:38
    With SD cards I never had a problem, except a likely unrelated one with one of the first drivers for the Propeller (actually the Hydra), which would unreliably read games added to the card. I don't know why, but different SD card drivers for the Propeller are consistently better than others at reading some files and not others, such as fsrw for PropDOS, FemtoBasic, and the Hydra SD menu demo, although backing up the card and reformatting it somehow improves the readability. This problem was noticed maybe 2 years ago and I have not updated Hydra to use newer versions of those SD card apps.

    There was some cheap fake flash a few years ago which would allegedly compress, or just lie about being bigger than it was, and would overwrite files when it was full. I say allegedly, because none that I ever had were like that.


    I HAVE NEVER HAD AN SD CARD FAIL IN A CAMERA OR A PC. I AM RUNNING AN OS ON ONE NOW.

    My solution for the potential of failure is to never erase a file except by formatting. That should keep the file system linear and unfragmented, and avoid wear leveling. I even plan to use SD cards in app-specific situations in such a way that data is appended a sector or block or 32K at a time, without a filesystem.

    FLASH FAILURE I HAVE NOTICED IS A LOT WORSE THAN REPORTED:

    I used PROMs, EPROMs, and microcontrollers that were either one-time programmable or UV-erasable, and found those to be very reliable. When reprogrammed, they were rarely corrupt. But they are generally rated for 10-year data retention, and I've noticed that the older ones do tend to have failed almost exactly ten years after programming.

    EEPROM seems very reliable in the same way as EPROM type PROMs.

    FLASH firmware and microcontrollers I found to be severely unreliable.
    Examples:
    A PIC16F84 could be reprogrammed as few times as ONCE, compared to the PIC16C84.
    An 89F51 could be reprogrammed as few times as ONCE, compared to the 89C51.
    So I used the flash versions only as if they could not be reprogrammed, especially because they were cheaper than the more reliable EEPROM versions.
    Like EPROM, EEPROM has virtually never failed on me.

    FLASH also tends to have weird features such as permanent partial write protection, which may or may not be removable by an ERASE ALL feature. I have seen this in the FLASH versions of EPROMs and EEPROMs. If ERASE ALL is disabled by these features, the chip cannot be reprogrammed and is wasted, even more than by the high premature failure rate.

    I am wary of wear levelling, as it is a black box (an unknown process). I would prefer that the logical addressing were preserved, along with a map of bad blocks. That is why I decided that, when using flash in embedded apps, I will avoid writing the same place twice, avoid wear levelling, and try to keep a linear run of blocks by only appending. Deleting is not a reality anyway; it merely fragments and corrupts the media by re-marking the FAT to indicate unused blocks, leaving the data within alone. Deleting is therefore a useless function compared to a reformat, or a reformat and wipe, as a way of removing expired data and recycling the media.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    VIRAND, If you spent as much time SPINNING as you do Trolling the Forums,
    you'd have tons of awesome code to post!
    (Note to self)
  • localroger Posts: 3,452
    edited 2010-01-23 22:05
    VIRAND, I have seen some detailed info about wear levelling online; my worry is that if you are REALLY churning the card you will wear out the flags needed by the wear levelling algorithm itself before you wear out the actual flash.

    I have seen serial EEPROM of the type used to boot the Prop fail from overuse. This was an application done by a competitor of mine who stored the count of trucks weighed over a public scale in two EEPROM bytes. This meant the low byte was rewritten every time a truck passed over the scale, and this was on a very busy Interstate. This particular installation on I-12 got about 1500 trucks a day, and it would fail with great regularity every 6 months or so. Our competitor was charging the state $800 for a new motherboard each time this happened, I quickly figured out I could fix the problem with a $1.98 EEPROM.

    It's not hard to avoid this by spreading the changes over a range of memory instead of hammering a single memory location.
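    One standard way to spread those writes is a ring of counter slots (a Python model, not the actual fix localroger shipped; the 64-slot ring size is an arbitrary assumption, and real code would read and write EEPROM bytes instead of a list):

```python
RING_SLOTS = 64  # 64 locations -> roughly 64x the single-cell endurance

class WearSpreadCounter:
    """Monotonic counter spread over a ring of EEPROM slots.

    Each increment writes exactly one slot; since the count only ever
    grows, the current value is simply the largest value in the ring.
    """
    def __init__(self, eeprom=None):
        self.eeprom = eeprom if eeprom is not None else [0] * RING_SLOTS
        # Resume at the slot holding the newest (largest) value.
        self.pos = max(range(RING_SLOTS), key=lambda i: self.eeprom[i])

    def value(self):
        return max(self.eeprom)

    def increment(self):
        new = self.value() + 1
        self.pos = (self.pos + 1) % RING_SLOTS   # rotate to the next cell
        self.eeprom[self.pos] = new              # one cell write per event
        return new
```

    At 1500 trucks a day, spreading the count across 64 cells would stretch that six-month failure interval to decades.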
  • jcwren Posts: 44
    edited 2010-01-23 22:31
    I've been using flash-based devices since about 1989, from a variety of manufacturers. I've used NOR and NAND flash, from program storage to data storage to gigabyte sized flash file systems. I've used CF cards, SD cards, MMC cards, and some technologies that never made it to market (or didn't survive long). Some of these applications reprogrammed flash only once, others multiple thousands of times. Many of the flash devices were flash-based microprocessors, including PICs (yuck...), AVR's, 8051-class devices.

    The few times I've seen flash failures, they've all been traceable. Generally speaking, the majority of damage was done by ESD. Products that had inadequate grounding, weren't properly designed to handle ESD, etc. On rare occasions, I've seen cells get damaged from over-writing the same location, usually WELL past the number of write cycles the datasheet says (flash manufacturers tend to be conservative, and those values are running at maximum rated temperatures and maximum rated voltages)

    In the above statements, I'm talking about the hardware reliability of flash. With regards to file system structures mapped to flash, just about any non-journaling file system is subject to corruption with partial writes, brown-outs, what have you. It's true of flash, hard drives, floppies, and punched tape. You can't write some of the data and expect it to be recoverable 100% of the time. Even journaling file systems are not 100% bullet-proof. It depends on how the data is damaged.

    I have no qualms about designing flash into a product, assuming it's an appropriate use. Sometimes FRAM is better, sometimes battery backed-up RAM (in some cases with ghosting to flash or EEPROM, there's even parts that support this internally), sometimes hard drives.

    Old MMC cards did not support wear leveling algorithms. SD cards do, as do Compact Flash (CF) cards. Wear leveling algorithms are not perfect. Repeatedly hammering the same logical sector over and over will indeed wear out the wear leveling bits (which themselves are wear leveled). This is why we don't recommend flash for /tmp file systems in Unix environments.

    As far as data retention, I've yet to have one "forget". I've got two devices running in my basement that use flash for program storage. I built them in 1989, using several 8Kx8 flashes. They run 24/7/365, and they've never failed (I will qualify this with saying that they don't write the flash, it's program store for an 8086 in one device, and an 8051 in another).

    I'm unclear what some people's engineering experience is, based on their statements, but as someone who has been doing this for almost 30 years, I can say that when used appropriately, and with an understanding of the requirements and limitations, flash is a safe, stable, long-term technology.

    --jc
  • localroger Posts: 3,452
    edited 2010-01-23 23:44
    jc, thanks for the insight. It's one thing to have suspicions based on what's been published, it's much better to have the results of actual practical experience.

    One thing I do have to say, though -- those EEPROMs failed (I replaced six or seven of the damn things before we retired that stupid thing) at almost exactly the half-million point claimed by the manufacturer. Of course, that was also pretty much the worst case situation for abusing the part too.
  • VIRAND Posts: 656
    edited 2010-01-24 01:02
    localroger said...
    VIRAND, I have seen some detailed info about wear levelling online; my worry is that if you are REALLY churning the card you will wear out the flags needed by the wear levelling algorithm itself before you wear out the actual flash.

    I have seen serial EEPROM of the type used to boot the Prop fail from overuse. This was an application done by a competitor of mine who stored the count of trucks weighed over a public scale in two EEPROM bytes. This meant the low byte was rewritten every time a truck passed over the scale, and this was on a very busy Interstate. This particular installation on I-12 got about 1500 trucks a day, and it would fail with great regularity every 6 months or so. Our competitor was charging the state $800 for a new motherboard each time this happened, I quickly figured out I could fix the problem with a $1.98 EEPROM.

    It's not hard to avoid this by spreading the changes over a range of memory instead of hammering a single memory location.
    I agree with you.
    I described my experience with various devices.
    My solution for using flash is to avoid deletes and keep the fs linear until the chip is full,
    then dump it and mark it clear. A quick format doesn't really erase every bit so it extends the life.
    Doesn't this method avoid what you call "churning"?
    A delete doesn't really erase anything so if you don't delete you don't need wear leveling.
    My data has no "political" value whatsoever so there is no need to even think of erasing it,
    since it almost never even fills up the (flash,etc) storage devices.
  • localroger Posts: 3,452
    edited 2010-01-24 01:43
    VIRAND -- I would agree that you have a good strategy there; in fact, it's a good strategy even on devices that don't wear-level. I think the real problem with churning is that the wear levelling flags get worn out before you run out of sectors to buffer. This is more of a problem if you are trying to use a small part of the SD as SPI RAM. It's just not designed for that. But for data logging, where you are recording a stream of data for later analysis, I think your strategy is a winner.
  • Bill Henning Posts: 6,445
    edited 2010-01-24 01:57
    I think I read that they use flash chips with 528 byte sectors, and use the extra 16 bytes to count how many times a particular sector has been erased.

    This means that if you buffer your data such that you always write 512 bytes to the SD card, you are golden, because you will get >10,000 erases for each sector.

    This also means that if you write one byte at a time, and your file system flushes after every byte, you will wear out the card 512 times faster.

    Of course, the above is based on hazy memories of what I read in the past, and your mileage may vary.
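    Taking Bill's hazy numbers at face value (512-byte sectors, 10,000 erases each, one erase per flush; all assumed inputs), the arithmetic behind the 512x claim is:

```python
SECTOR = 512          # bytes rewritten per flush (assumed erase granularity)
CYCLES = 10_000       # rated erases per sector

bytes_logged_buffered = SECTOR * CYCLES   # a full sector of payload per erase
bytes_logged_per_byte = 1 * CYCLES        # a whole sector erased per single byte

assert bytes_logged_buffered // bytes_logged_per_byte == 512
```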
    localroger said...
    VIRAND, I have seen some detailed info about wear levelling online; my worry is that if you are REALLY churning the card you will wear out the flags needed by the wear levelling algorithm itself before you wear out the actual flash.

    I have seen serial EEPROM of the type used to boot the Prop fail from overuse. This was an application done by a competitor of mine who stored the count of trucks weighed over a public scale in two EEPROM bytes. This meant the low byte was rewritten every time a truck passed over the scale, and this was on a very busy Interstate. This particular installation on I-12 got about 1500 trucks a day, and it would fail with great regularity every 6 months or so. Our competitor was charging the state $800 for a new motherboard each time this happened, I quickly figured out I could fix the problem with a $1.98 EEPROM.

    It's not hard to avoid this by spreading the changes over a range of memory instead of hammering a single memory location.
    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    www.mikronauts.com E-mail: mikronauts _at_ gmail _dot_ com 5.0" VGA LCD in stock!
    Morpheus dual Prop SBC w/ 512KB kit $119.95, Mem+2MB memory/IO kit $89.95, both kits $189.95 SerPlug $9.95
    Propteus and Proteus for Propeller prototyping 6.250MHz custom Crystals run Propellers at 100MHz
    Las - Large model assembler Largos - upcoming nano operating system
  • jcwren Posts: 44
    edited 2010-01-24 02:00
    To some extent, wear leveling *is* still necessary. The FAT file system formats are not friendly that way. Every time you write a new cluster, you're causing the file system allocation sectors to be re-written. Whether you format or delete, it makes little difference. It all depends on how much integrity you want.

    Consider: A virgin file system. You create a file (writes the root directory sector). You add data. On the first write to the file, and after you cross a cluster boundary, the FAT gets updated. You *could* leave it in RAM, but if you lose power, you've lost the written data. So let's assume you write the FAT every time it needs to. Every 4K, 16K or 32K of data you write, you're rewriting the SAME FAT sector.

    For FAT file systems, the directory sectors and the FAT sectors are subject to a massive number of writes, relative to the actual number of data sectors written. Is it enough to wear them out? Depends on how often you're causing that to happen. If you're recording a high speed data stream to flash, maybe. If you're writing a >byte< every 10 minutes, probably never.
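    A rough back-of-the-envelope for jcwren's point, using FAT16 geometry as the assumed example (512-byte sectors, 2-byte entries; FAT32's 4-byte entries halve the entries per sector):

```python
SECTOR = 512
FAT16_ENTRY = 2                      # bytes per cluster entry in FAT16
entries = SECTOR // FAT16_ENTRY      # 256 cluster entries per FAT sector

for cluster in (4096, 16384, 32768):
    covered = entries * cluster      # data mapped by one FAT sector
    # That single FAT sector is rewritten once per cluster allocated:
    print(f"{cluster // 1024}K clusters: same FAT sector rewritten "
          f"{entries} times per {covered // 2**20} MB of data written")
```

    So one FAT sector absorbs 256 rewrites for every 1 to 8 MB of logged data, which is exactly the hot spot a wear leveller has to hide.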

    A quick format actually ensures you don't have any lost clusters. Unless the file system driver performs absolutely no caching, it's possible to have lost clusters. And if the file system doesn't do any caching, you're going to increase the number of sector writes. Typically, where enough RAM is available, a file system will cache the FAT entry and the directory sector. As you write data to the file, the actual data sectors are updated, but the FAT table and directory sector don't get updated until a new FAT sector is required, or the file is closed. Typically, DOS will update the file size in the directory sector when the file allocates a new cluster, but this isn't always a given. There are a couple of circumstances where it will hold off that write.

    Ultimately, a format is a better choice, long term, because you won't lose sectors.

    And in the best of worlds, FAT isn't used at all, because it's a poor file system architecture. No journaling, poor allocation strategies, and too much re-writing of the same sector (OK for hard drives, bad for floppies and flash).

    --jc

    Post Edited (jcwren) : 1/24/2010 2:11:15 AM GMT
  • jcwren Posts: 44
    edited 2010-01-24 02:07
    Usually those 16 bytes are used for ECC and meta information of the sector. YAFFS2 (a very good file system for flash) uses 6 bytes for ECC, and the rest as 'tag bytes'. The tag bytes are used to maintain block status, page status, whether to use ECC on that sector, things like that.
    Bill Henning said...
    I think I read that they use flash chips with 528 byte sectors, and use the extra 16 bytes to count how many times a particular sector has been erased.

    This means that if you buffer your data such that you always write 512 bytes to the SD card, you are golden, because you will get >10,000 erases for each sector.

    This also means that if you write one byte at a time, and your file system flushes after every byte, you will wear out the card 512 times faster.

    Of course, the above is based on hazy memories of what I read in the past, and your mileage may vary.

  • Bill Henning Posts: 6,445
    edited 2010-01-24 02:11
    Thanks... I'll have to look up YAFFS2, it sounds quite interesting.

    I actually wrote my own flash file system for the W25Xnnn series of flash chips for Largos. It was designed for automatic wear leveling; I keep toying with the idea of making an SD version, but this fs is not open source.

    I am pretty sure that the controllers on the SD cards are not running YAFFS2 :)
    jcwren said...
    Usually those 16 bytes are used for ECC and meta information of the sector. YAFFS2 (a very good file system for flash) uses 6 bytes for ECC, and the rest as 'tag bytes'. The tag bytes are used to maintain block status, page status, whether to use ECC on that sector, things like that.
    Bill Henning said...
    I think I read that they use flash chips with 528 byte sectors, and use the extra 16 bytes to count how many times a particular sector has been erased.

    This means that if you buffer your data such that you always write 512 bytes to the SD card, you are golden, because you will get >10,000 erases for each sector.

    This also means that if you write one byte at a time, and your file system flushes after every byte, you will wear out the card 512 times faster.

    Of course, the above is based on hazy memories of what I read in the past, and your mileage may vary.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    www.mikronauts.com E-mail: mikronauts _at_ gmail _dot_ com 5.0" VGA LCD in stock!
    Morpheus dual Prop SBC w/ 512KB kit $119.95, Mem+2MB memory/IO kit $89.95, both kits $189.95 SerPlug $9.95
    Propteus and Proteus for Propeller prototyping 6.250MHz custom Crystals run Propellers at 100MHz
    Las - Large model assembler Largos - upcoming nano operating system
  • jcwren Posts: 44
    edited 2010-01-24 02:21
    The wear leveling algorithm on SD and CF cards is pretty much purely use-based, as they're file system agnostic. I doubt the controllers in them have much in the way of horsepower, given the power and space constraints.

    What's Largos?
    Bill Henning said...
    Thanks... I'll have to look up YAFFS2, it sounds quite interesting.

    I actually wrote my own flash file system for the W25Xnnn series of flash chips for Largos. It was designed for automatic wear leveling; I keep toying with the idea of making an SD version, however this fs is not open source.

    I am pretty sure that the controllers on the SD cards are not running YAFFS2 :)
    jcwren said...
    Usually those 16 bytes are used for ECC and meta information of the sector. YAFFS2 (a very good file system for flash) uses 6 bytes for ECC, and the rest as 'tag bytes'. The tag bytes are used to maintain block status, page status, whether to use ECC on that sector, things like that.
    Bill Henning said...
    I think I read that they use flash chips with 528 byte sectors, and use the extra 16 bytes to count how many times a particular sector has been erased.

    This means that if you buffer your data such that you always write 512 bytes to the SD card, you are golden, because you will get >10,000 erases for each sector.

    This also means that if you write one byte at a time, and your file system flushes after every byte, you will wear out the card 512 times faster.

    Of course, the above is based on hazy memories of what I read in the past, and your mileage may vary.

  • VIRAND Posts: 656
    edited 2010-01-24 02:48
    Consider not FAT, but using the flash like a reel of tape, appending blocks or sectors to the end only. I am concerned about whether and how wear levelling works in that situation (I expect it not to). The virtual "tape" would make a lot of sense for data logging, especially with time stamps. Reading or seeking by time stamp should be very simple, and a read causes far less wear than a write. You don't have to read from the first sector; you can jump forward and backward in smaller and smaller jumps until you are in the adjacent sector, then just move to the one you want. The method is so ANCIENT that perhaps most active programmers have never heard of it. No FAT updates are necessary. Some method of recalling the end of the written "tape" may be desirable but is not necessary, especially since we are concerned about media that is no more reliable than tape.
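    That shrinking-jump seek is essentially a binary search over the sector timestamps; a Python sketch, where read_ts is a hypothetical stand-in for reading the timestamp out of sector i:

```python
def find_sector_by_timestamp(read_ts, n_sectors, target):
    """Return the last sector whose timestamp is <= target.

    Assumes an append-only log, so timestamps increase with the sector
    number; cost is O(log n) sector reads, and no FAT is needed.
    """
    lo, hi = 0, n_sectors - 1
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if read_ts(mid) <= target:
            best, lo = mid, mid + 1   # candidate found; look later
        else:
            hi = mid - 1              # overshoot; look earlier
    return best
```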

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    VIRAND, If you spent as much time SPINNING as you do Trolling the Forums,
    you'd have tons of awesome code to post!
    (Note to self)
  • jcwren Posts: 44
    edited 2010-01-24 02:56
    Flash media is *substantially* more reliable than tape. And the method is not that "ancient" by any stretch of the imagination. What you lose, if you care, is the ability to read it as a native file system on any OS (unless, of course, you choose to write a file system driver for it). That is the only reason so many people use FAT for a file system: it's ubiquitous.

    If you're really that concerned about data integrity, use a FRAM part for a buffer. Then you have data retention and no wear limits. When you fill the buffer, write it to the SD card. You can implement details like marking whether the FRAM has been written, etc., and make yourself a high-reliability file system. Don't forget to add 32-bit CRCs or ECC, and read back everything you write. That way you can recover from resets, program crashes (you do write code that doesn't crash, right?), and to some extent, maybe even write faults to the card (if you add enough FRAM). You won't have to worry about batteries dying and losing anything in battery-backed-up memory.
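    The 32-bit CRC framing jcwren recommends might look like this (a minimal Python sketch using zlib's CRC-32; the payload-plus-trailing-CRC record layout is an assumption, not his design):

```python
import zlib

def frame_record(payload: bytes) -> bytes:
    """Append a 32-bit CRC so a torn or partial write is detectable."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + crc.to_bytes(4, "little")

def check_record(record: bytes):
    """Return the payload if its CRC verifies, else None (treat as a torn write)."""
    if len(record) < 4:
        return None
    payload = record[:-4]
    stored = int.from_bytes(record[-4:], "little")
    if (zlib.crc32(payload) & 0xFFFFFFFF) != stored:
        return None
    return payload
```

    Reading back and CRC-checking each record after the write, as jcwren suggests, turns a silent write fault into one you can retry.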
  • Bill Henning Posts: 6,445
    edited 2010-01-24 02:58
    Largos is an operating system for the Propeller that I have been working on for a long time (whenever I can find spare time). It is a message passing, Unix-like, nano kernel OS. You can find some more info by following the "Largos" link in my sig.

    I currently have a prototype written in Spin for my Morpheus platform. The prototype implements a Unix-like shell with about three dozen commands, stdin, stdout, and my wear leveling file system (which implements a hierarchical file system). Later, I intend to re-write it in LMM using either Catalina, or PropellerBasic (the optimizing LMM basic compiler I am working on).

    The idea is that Morpheus will be a fully self-hosted system, with its own unix-like message passing OS, supporting large model (and extended memory large model) programs - it is just taking quite a while to implement everything.
    jcwren said...
    The wear leveling algorithm on SD and CF cards is pretty much purely use-based, as they're file system agnostic. I doubt the controllers in them have much in the way of horsepower, given the power and space constraints.

    What's Largos?
    Bill Henning said...
    Thanks... I'll have to look up YAFFS2, it sounds quite interesting.

    I actually wrote my own flash file system for the W25Xnnn series of flash chips for Largos. It was designed for automatic wear leveling; I keep toying with the idea of making an SD version, however this fs is not open source.

    I am pretty sure that the controllers on the SD cards are not running YAFFS2 :)
    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    www.mikronauts.com E-mail: mikronauts _at_ gmail _dot_ com 5.0" VGA LCD in stock!
    Morpheus dual Prop SBC w/ 512KB kit $119.95, Mem+2MB memory/IO kit $89.95, both kits $189.95 SerPlug $9.95
    Propteus and Proteus for Propeller prototyping 6.250MHz custom Crystals run Propellers at 100MHz
    Las - Large model assembler Largos - upcoming nano operating system
  • VIRANDVIRAND Posts: 656
    edited 2010-01-24 06:21
    jcwren said...
    Flash media is *substantially* more reliable than tape. And the method is not that "ancient" by any stretch of the imagination. What you lose, if you care, is the ability to read it as a native file system on any OS (unless, of course, you choose to write a file system driver for it). Which is the only reason so many people use FAT for a file system, is because it's ubiquitous.

    If you're really that concerned about data integrity, use a FRAM part for a buffer. Then you have data retention and no wear limits. When you fill the buffer, write it to the SD card. You can implement details like marking whether the FRAM has been written or not, etc., and make yourself a high-reliability file system. Don't forget to add 32-bit CRCs or ECC, and read back everything you write. That way you can recover from resets, program crashes (you do write code that doesn't crash, right?), and, to some extent, maybe even write faults to the card (if you add enough FRAM). You won't have to worry about batteries dying and losing anything in battery-backed memory.

    There are versions of Linux that boot from CD-R and save data to the majority of the CD that has not yet been burned.
    The files are stored in Data CD format (ISO 9660), so any other OS can read them. Like tape, it is a linear filesystem.
    That works with flash too, using FAT or a Linux fs.
  • heaterheater Posts: 3,370
    edited 2010-01-24 06:42
    There is a fundamental point being overlooked here. When you delete a file from a FAT32 or ext3 file system (or whatever), the poor old SD card has no way to know that those file blocks are now free for use in any kind of wear levelling.

    Now in Linux, for example, they have started to think about that: how to tell an SD card "I don't need that block any more"
    so that it can be recycled. Sadly I don't recall what this system is called.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    For me, the past is not over yet.
  • Paul_DlkPaul_Dlk Posts: 19
    edited 2010-01-24 16:08
    Many thanks for all your responses and opinions guys. A lot to digest and contemplate.

    I think that I will start with a small index file and simply append the last 'pointer address' to the end of the file, as recommended by Virand. I will need to do some experimenting and deliberately remove the SD card whilst writing to it to see what data I can recover from the card upon restarting. It should be fun!!
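A quick way to model that recovery experiment on the PC side: write fixed-size records each carrying a simple checksum, then scan a (possibly truncated) copy of the file and keep only the records that verify. The 4-byte-sample-plus-XOR-checksum layout here is invented for illustration, not a recommendation:

```python
RECORD_SIZE = 5  # hypothetical: 4 data bytes + 1 XOR checksum byte

def make_record(data: bytes) -> bytes:
    """Append a one-byte XOR checksum to a 4-byte sample."""
    assert len(data) == 4
    chk = 0
    for b in data:
        chk ^= b
    return data + bytes([chk])

def recover(raw: bytes):
    """Scan a possibly-truncated log and return the samples that survived."""
    good = []
    for off in range(0, len(raw) - RECORD_SIZE + 1, RECORD_SIZE):
        rec = raw[off : off + RECORD_SIZE]
        chk = 0
        for b in rec[:4]:
            chk ^= b
        if chk != rec[4]:
            break  # first bad record: stop, everything after it is suspect
        good.append(rec[:4])
    return good
```

Pulling the card mid-write should then cost you at most the final record, which is easy to verify by chopping bytes off the end of a test file.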

    Thanks again.

    Paul
  • jcwrenjcwren Posts: 44
    edited 2010-01-24 16:16
    You're talking about implementing wear leveling at the file-system level and not at the device level. SD cards et al. use a usage-based wear-leveling algorithm. It's based on how often a block is written, and blocks are internally remapped to equalize wear. In theory, if you hammer the same sector over and over with writes, the card will place each newly written sector in the least recently used block. Internally, the microcontroller maintains count and remapping tables.

    A file-system-level wear-leveling algorithm does this at a higher level, and is intended for block-based NAND devices that have no underlying wear leveling. Running one on top of the other should be completely ineffective, because the top level has no idea how the lower layer has remapped the NAND blocks. To the best of my knowledge, SD cards have no public API for retrieving the remapping table.
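A toy model of the remapping idea described above (this is not any real controller's algorithm; the class and its bookkeeping are invented for illustration). Note the model assumes the controller learns a block is stale only when the same logical sector is overwritten — which is the only signal a real card gets without something like TRIM:

```python
class WearLeveler:
    """Toy use-based wear leveler: each write of a logical sector is
    steered to the free physical block with the lowest erase count."""

    def __init__(self, num_blocks: int):
        self.erase_count = [0] * num_blocks
        self.mapping = {}              # logical sector -> physical block
        self.free = set(range(num_blocks))

    def write(self, logical: int) -> int:
        # Pick the least-worn free block for the new copy of the data.
        target = min(self.free, key=lambda b: self.erase_count[b])
        old = self.mapping.get(logical)
        if old is not None:
            self.free.add(old)         # old copy is now stale and reusable
        self.free.discard(target)
        self.mapping[logical] = target
        self.erase_count[target] += 1
        return target
```

Hammering one logical sector in this model spreads the erases almost perfectly evenly across all physical blocks, which is the behavior jcwren describes.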

    --jc
    heater said...
    There is a fundamental point being overlooked here. When you delete a file from a FAT32 or ext3 file system (or whatever), the poor old SD card has no way to know that those file blocks are now free for use in any kind of wear levelling.

    Now in Linux, for example, they have started to think about that: how to tell an SD card "I don't need that block any more"
    so that it can be recycled. Sadly I don't recall what this system is called.
  • BradCBradC Posts: 2,601
    edited 2010-01-24 16:23
    heater said...

    Now in Linux, for example, they have started to think about that: how to tell an SD card "I don't need that block any more"
    so that it can be recycled. Sadly I don't recall what this system is called.

    It's called "Trim", and it's supported by some of the saner vendors of late-model SSDs (well, those that don't have terminally broken firmware, anyway).

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Life may be "too short", but it's the longest thing we ever do.
  • heaterheater Posts: 3,370
    edited 2010-01-24 16:26
    jcwren: "...placing that newly written sector in the least recently used block on the SD card.."

    That is the part I do not understand. Once you have written to a block, via FAT or whatever file system, how can the SD card ever know that you no longer need that block? Once a block is written to, it is "in use" and remains "in use" forever. The SD card cannot know that you have deleted the file and no longer need its blocks.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    For me, the past is not over yet.
  • BradCBradC Posts: 2,601
    edited 2010-01-24 16:31
    jcwren said...
    You're talking about implementing wear leveling at the file-system level and not at the device level. SD cards et al. use a usage-based wear-leveling algorithm. It's based on how often a block is written, and blocks are internally remapped to equalize wear. In theory, if you hammer the same sector over and over with writes, the card will place each newly written sector in the least recently used block. Internally, the microcontroller maintains count and remapping tables.

    A file-system-level wear-leveling algorithm does this at a higher level, and is intended for block-based NAND devices that have no underlying wear leveling. Running one on top of the other should be completely ineffective, because the top level has no idea how the lower layer has remapped the NAND blocks. To the best of my knowledge, SD cards have no public API for retrieving the remapping table.

    Spot on, on all counts. The only difference is that different manufacturers have different wear-leveling algorithms. A good rule of thumb is to go with the manufacturers that bet the farm on their flash storage. Someone like SanDisk is far more likely to have a great wear-leveling algorithm than someone who imports cheap OEM knockoff cards and sticks their label on them (I'm looking at *you*, HP).

    Stacking wear leveling algorithms is counter-productive at best. Buy quality and put faith in the guys who rely on it to feed their families.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Life may be "too short", but it's the longest thing we ever do.