Newer solid-state drives lose data if left without power for just a few days
Ron Czapala
http://www.zdnet.com/article/solid-state-disks-lose-data-if-left-without-power-for-just-a-few-days/
New research suggests that newer solid-state hard drives, which are faster and offer better performance, are vulnerable to an inherent flaw -- they lose data when they're left dormant in storage for periods of time where the temperature isn't properly regulated.
The worrying factor is that the period of time can be weeks, months, but even in some circumstances -- just a few days.
The article states: "Most consumer solid-state drives, such as those in high-end performance desktops and certain notebooks (including Apple MacBooks), do not suffer as much. They are designed to retain data for about two years in storage under the right temperature."
Which is a decidedly fluffy statement: "Most" means some are vulnerable to this. "Suffer as much" means suffer. "About two years" could mean one or less. "Right temperature", well what is that?
Of course this is a statement by a spinning disk manufacturer so we can expect it's inflated a bit. Besides most of my spinning disks don't seem to last two years anyway!
This document from DELL suggests retention times down to 3 months for some drives and circumstances: http://www.dell.com/downloads/global/products/pvaul/en/solid-state-drive-faq-us.pdf
I'm starting to wonder why we bother with SSDs at all. Why not just have a ton of RAM?
I just bought a machine, actually two of them, for some CAD / CAM work. BTW, if you want single-threaded performance, this chip, the Intel® Core™ i7-4790K @ 4.3GHz, is as fast as one can get right now. I think the RAM is running at 2.4 or 2.8GHz. That's a bit overclocked, but stable given water cooling.
It's going to take a decade or more to move geometry kernels and the associated software sub-systems to a real multi-core model. Sad, but that's reality. This leaves me always looking for that single-thread, sequential compute improvement. And it's always little. Slow. Steps. Maybe.
One has 32GB of RAM, the other 64GB. Huge! The CAD / CAM stuff actually needs it.
They both came with an SSD, plus a 1TB hard disk to clone the SSD onto every week or so as a backup. My experience with SSDs is great, but they really do work until they don't. Even when the software that is supposed to predict their remaining life says they are fine, they can enter "don't" mode suddenly, and then there's often no recovery possible. Some SSDs just brick on failure. Access denied.
Realistically, a machine for serious purposes like this would need 250GB of RAM. But that would hold it all: OS, big apps, and data (really big). Maybe 200GB.
One could stream the whole thing to a hard disk somewhere for "initial boot" type tasks needed on power failure, or to a network, or something. Otherwise, it's all just running in RAM.
Disk I/O impacts the performance of these workstations, and an all-RAM setup would probably improve actual time to complete tasks by a factor of two. Going from a reasonable i7 to this one took 6-hour tasks down to 3 hours. All-RAM might take that 3-hour task to something like 2 hours, maybe a bit less, depending on what the data access needs to be. Count me in, if available.
But I would only buy in for big, expensive, painful tasks and requirements. Anything below that, and what we've got is excellent.
Truth is, we've topped out on sequential execute clocks for the time being: 4GHz. RAM currently can go 2.8GHz or thereabouts. For sequential execute improvements, ongoing increases in RAM speed, even if quite expensive, would be worth it.
Some time ago, "the fastest one could get" was $10K. Now, with a good GPU, it's $3K. Plenty of people out there would gladly pay for meaningful sequential execute speed improvements.
So, no, battery-backed RAM is not needed at all.
An awful lot of the data we store does not need to be rewritable. Consider:
All those videos we have. Once they are made they are made. No one is ever going to go back and tweak them years later.
Same for photos and audio recordings.
Same for source code (if the code is in a version control system, the old versions stay around and new versions accumulate).
Same for accounting records
And so on and so on.
In fact, what we are discussing here is that we don't want that data to be rewritable, we want it to stay around forever. We want immutable data.
So, why bother trying to make ever smaller and smaller rewritable memory cells, and then having to live with the potentially short lifetime?
Why not go back to PROM?
Surely blowing a simple memory cell to permanently record a bit is much simpler, smaller and less power-hungry than what we are trying to do now?
When the device is full, just get more and start filling that. It would be dirt cheap after all.
To help with this plan, we also get rid of the file system. We don't need rewritable files any more. File systems are just a lot of code that slows things down. Just have a gigantically huge memory-mapped space to put it all in. 128-bit addresses should be sufficient to address every binary blob the human race is ever going to produce from now till the end of time.
Conveniently, the RISC-V processor design already accommodates a 128-bit address-space variant...
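Just to sketch what I mean, here's a minimal illustration of how such an append-only, write-once store could look from the software side. Everything in it is made up for the sake of argument (the names, the prom_burn call, the little RAM buffer standing in for the medium); the point is only that the whole "file system" shrinks to a 128-bit pointer to the next free cell:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A 128-bit address: 2^128 bytes is about 3.4e38, far more than the human
   race will ever produce. (All names here are invented for illustration.) */
typedef struct { uint64_t hi, lo; } addr128_t;

/* Stand-in for the write-once medium: a small RAM buffer here, but
   conceptually one gigantic memory-mapped PROM space. */
static uint8_t medium[1 << 20];

/* "Burn" bytes at an address. On real PROM this could only ever turn
   blank cells into written ones, never the reverse. */
static void prom_burn(addr128_t where, const void *data, uint64_t len)
{
    memcpy(&medium[where.lo], data, len);   /* demo: only the low bits used */
}

/* The only mutable state the "file system" needs: the next free cell.
   Everything below it is immutable forever. */
static addr128_t next_free = { 0, 0 };

/* Append a blob, never overwrite, and return its permanent address. */
static addr128_t store_blob(const void *data, uint64_t len)
{
    addr128_t here = next_free;
    prom_burn(here, data, len);

    /* 128-bit increment: bump the low word, carry into the high word. */
    uint64_t old_lo = next_free.lo;
    next_free.lo += len;
    if (next_free.lo < old_lo)
        next_free.hi++;

    return here;    /* valid for all time; there is nothing to delete */
}

int main(void)
{
    addr128_t a = store_blob("hello", 5);
    addr128_t b = store_blob("world", 5);
    printf("first blob at offset %llu, second at %llu\n",
           (unsigned long long)a.lo, (unsigned long long)b.lo);
    return 0;
}
```

Nothing is ever deleted or edited; a newer version of anything is simply another blob burned at a higher address.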
There were WORM drives in the past but they were so rarely used I never got to see one. I think that about sums up that idea.
Who said anything about throwing anything away? I'm suggesting quite the opposite, immutable data.
When is it full? You can never exhaust the address space. Just keep making more PROM and writing new stuff to it.
Now imagine something like this:
All the data you could ever hope to produce, photo, audio, video, virtual reality, whatever, or consume for one year could be held on something like an A4 sheet of paper. At the end of the year you may have filled it, and you get another one for a few dollars. At the end of your life you have a stack of 100 or so sheets.
After that your children can bin it. Or perhaps salvage a few important, interesting items of family history. Or just keep it all.
Well, if we could make super small and cheap immutable PROM cells in situations where we can't make super small and cheap Flash cells, even at capacities similar to today's SSDs, then we are nearly there as far as I'm concerned.
This is genius and yet seems so obvious... Places like archive.gov and Facebook and Google who keep information pretty much forever would also benefit. Obviously, it would still be nice to have a good amount of rewritable storage for scratch and pointers to current versions (but there are ways to get around needing these pointers) and fast-changing temporary data and such, but, the way you present it, everyone could use a few terabytes of PROM. I'm guessing it would be significantly physically smaller, too.
Why wouldn't we need file systems anymore? We still need a way to point to and identify data as it is added, although I guess 256-ary trees in NOR or NAND PROM that can be filled in as data is added would do the trick. Do you mean because fragmentation won't be a problem since allocation would be trivial?
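Something like this toy sketch is what I have in mind (names invented, and the pages simulated in ordinary RAM rather than real PROM): each slot in a node either stays blank or gets burned exactly once, so the index can grow as data is added and nothing ever has to be erased.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* Each index node is a page of 256 slots. A slot is written at most once:
   it is either still blank (0 here; on real PROM "blank" might be all ones)
   or it has been burned once with the location of a child page or a blob. */
typedef struct node { uintptr_t slot[256]; } node_t;

/* Simulated in RAM; on the write-once medium this would instead be
   "claim the next blank page". */
static node_t *new_node(void) { return calloc(1, sizeof(node_t)); }

/* Add an entry: walk the key one byte at a time, burning in missing
   branches as we go. Existing slots are never modified, so every lookup
   that worked yesterday still works forever. Assumes keylen >= 1. */
static void index_add(node_t *root, const uint8_t *key, size_t keylen,
                      uintptr_t blob_addr)
{
    node_t *n = root;
    for (size_t i = 0; i + 1 < keylen; i++) {
        if (n->slot[key[i]] == 0)                    /* branch not burned yet */
            n->slot[key[i]] = (uintptr_t)new_node(); /* written exactly once  */
        n = (node_t *)n->slot[key[i]];
    }
    n->slot[key[keylen - 1]] = blob_addr;            /* leaf slot, burned once */
}

int main(void)
{
    node_t *root = new_node();
    index_add(root, (const uint8_t *)"cat", 3, 0x1000);  /* made-up blob address */
    index_add(root, (const uint8_t *)"car", 3, 0x2000);  /* shares the "ca" path */
    printf("index built; shared prefixes reuse already-burned branches\n");
    return 0;
}
```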
My experience on IRIX was similar. Took one filesystem and just kept scaling it over time.
If a simple and robust means to keep organized were part of it all, count me in.
For years, I used a data manager for CAD that never overwrote anything. Very cool. One could delete, but it was not required. Just get more storage.
Lots of hard things got very easy on that kind of idea.
Re: MRAM
Yeah, OK. Do we have really big and really fast MRAM products today?
If so, why aren't they in use? I've got big stuff to do, it needs sequential compute, and the fastest machine possible costs just a few grand.
2x memory speed would be huge! (For example)
I can't currently get a quicker CPU. It seems this need would drive MRAM to rapid adoption. People like me would easily pay $10k for a real gain in sequential execute performance.
I can't believe you even tried to use that as an excuse! Of course WORM is small compared to today; tape and HDD were a hell of a lot smaller then too. It failed because rewritable was just as easy to make, and still is, so there will never be a case for write-once while mass production keeps rewritable cheaper.
Longevity is important, however, so if a particular tech can be shown to handle much harsher treatment and last longer in the process, then it may manage to eke out a niche. It comes down to how long a lifetime is really demanded; most people just assume storage lasts forever but don't much care either way. But this point has nothing to do with whether it's rewritable or not.
No, not even close. But that is a simple case of chicken and egg, isn't it? MRAM still needs investment to come to fruition, investment in refining the basics of performance even. It's not yet as perfect a solution as I may have made it out to be; there are technical hurdles to top performance. And what product is on the market is targeted only as an alternative to EEPROM and battery-backed SRAM (the really slow, low-power stuff) in embedded applications, and as a Flash alternative for radiation-hardened equipment.
The high volume markets are dominated by SDRAM and Flash. MRAM doesn't have a show of competing with Flash on capacity. And non-volatile main memory is not in demand.
So ... investment is tiny, I presume very few fabs even provide it ... parts are tiny capacity and not fast at all (no one is even using it as internal RAM on uControllers), prices aren't cheap ... all the usual market barriers.
I wouldn't be so sure about that. That was very true in the 486-to-Pentium 4 days, but main memory speeds have really caught up over the past decade. I think you'll find CPU caches are having trouble making use of the available main memory bandwidth. At any rate, the memory bus speed these days has little to do with the speed of the memory itself.
A lot of that space is taken by my own photos and videos. Photos are in raw format of course, but I also store the smaller JPEGs (makes it easier to browse). These days you quickly run into terabytes of space even if you're just a hobby photo/video person with a few cameras.
If I look at what I do at work, I can never get enough storage. I have to shuffle data around to where there's space (my PC's 4TB HD is puny in this context). All of that is really write-once (although copied to local storage now and then).
And then you have setups like CrashPlan, for continuous, never-delete backups. Local or cloud. Again, write-once.
Heater is right. What we need is safe, ever-lasting, endless-capacity, easy-access storage. A huge part of that storage, probably 99% or more, can be write-once if that makes it easier to make. Should be safer that way too, of course.
-Tor
In parallel execute scenarios, yes, that's true. It's not true at all for sequential execute. Faster RAM could improve CPU execution that has currently topped out.
Just remember, they work until they don't. And when they don't, it's often a brick. Other than that, they are great, and I use the heck out of mine, but I clone it every week too.
As for cheap, fast, long term storage, I would much prefer faster. From there, it can be powered easily enough to be useful. And for downtimes, slower, longer term storage is dirt cheap, and quick when it's done simple, stupid, "just write out all these bits sequentially" style.
The gain from an SSD in my use cases is very significant. Running a system all in RAM would be pretty great. One can almost do that now.
The point of write-once is that it's safer in the sense that if you can't rewrite it, you can't accidentally overwrite it and lose it. Not a major point though. But if there is some huge, safe, and yes, fast, lasting storage in the works somewhere, and the catch is that it must be write-once - well then, bring it on. Because I truly believe in my suggestion that more than 99% of the need is write-once anyway.
Edit: And of course, with Heater's *really* large storage (128-bit addressable) you don't need rewrite, even for rewritten data - just write a new version. Like an ever-lasting VMS filesystem (for those unfamiliar with it, it has a built-in versioning system: every write creates a new version, until you issue a 'purge' command or run into a preset limit). It'll probably be more complex though, at least to start with, so I'll take the simple approach first.
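For anyone who has never seen it, a toy imitation of that behaviour could look like the sketch below (the names and the probing approach are just for illustration, not how VMS actually does it): every save creates name;N+1 and leaves the older versions alone.

```c
#include <stdio.h>

/* Find the highest existing version of "name" by probing for name;1,
   name;2, ... ; returns 0 if no versions exist yet. */
static int highest_version(const char *name)
{
    char path[512];
    int v = 0;
    for (int i = 1; i < 1000; i++) {            /* arbitrary probe limit */
        snprintf(path, sizeof(path), "%s;%d", name, i);
        FILE *f = fopen(path, "rb");
        if (!f)
            break;
        fclose(f);
        v = i;
    }
    return v;
}

/* Open the next version for writing; older versions stay untouched. */
static FILE *open_new_version(const char *name)
{
    char path[512];
    snprintf(path, sizeof(path), "%s;%d", name, highest_version(name) + 1);
    return fopen(path, "wb");
}

int main(void)
{
    FILE *f = open_new_version("report.txt");   /* creates report.txt;1, then ;2 ... */
    if (f) {
        fputs("new version, nothing overwritten\n", f);
        fclose(f);
    }
    return 0;
}
```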
-Tor
We can demonstrate this. When I run this code on my PC it takes about 1 second to complete.
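The exact program isn't reproduced here, but a minimal test along these lines fits the description (the array size and the scattered-index formula are illustrative guesses, not the exact code that was run):

```c
#include <stdio.h>
#include <time.h>

#define SIZE (64u * 1024u * 1024u)   /* 64M ints (~256MB); size is a guess */

static int array[SIZE];

int main(void)
{
    clock_t start = clock();

    for (size_t i = 0; i < SIZE; i++) {
        /* An odd multiplier modulo a power of two still visits every
           element, just in a cache-hostile, scattered order. */
        size_t j = (i * 4099u) & (SIZE - 1);
        (void)j;

        array[i] = 42;        /* sequential: caches stay warm            */
        /* array[j] = 42; */  /* scattered: same work, many times slower */
    }

    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* Touch the data afterwards so the compiler cannot discard the writes. */
    long long sum = 0;
    for (size_t i = 0; i < SIZE; i++)
        sum += array[i];

    printf("checksum %lld, elapsed %.2f s\n", sum, elapsed);
    return 0;
}
```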
If I change that array access around to "array[j] = 42;" it becomes over ten times slower!
Clearly walking up the RAM linearly and having caches nicely filled all the time is much faster than hopping around in memory and having cache misses all the time.
Perhaps you could try this experiment on your machines and report back.
They're only that way because the market has higher priorities that skew it that way. HDD and tape are still the biggest capacities for long term storage. Flash still has to fill that role in a serious manner.
Hmm, well, if HDDs are the small capacity metric in this conversation then I guess we're on totally separate topics ...
But then I don't understand what the point of your comment was.
Changing to any other memory tech will still require bursting to combat the long latencies.
EDIT: I know, I did say MRAM would hardly need all the extra buffering that SDRAM has but to make it fit alongside a separate CPU would necessitate at least the same burst management even if the SRAM page buffer went away. The latencies are, by and large, still there.
You may be able to move one or two people very quickly over a long distance in a Ferrari, but if you want to move a lot of people the slow old bus will win on throughput.
It may be possible to build super quick access for a single integer (that's the Ferrari), but when it comes to moving a lot of data, burst transfers win (that's the bus).
Worse still, in the case of memory, that single-element access clogs up the highway for the burst transfers. It's as if only the Ferrari or the bus can be on the highway at any one time.
The end result is that things are optimized for bulk transfer, and we programmers have to try to work with that efficiently by maintaining locality of reference as much as possible.
I think that's a great idea.
Not that PROM is problem-free. There were cases where the blown metal links actually grew back together because the electrical potential caused the metal atoms to slowly migrate, but I'm sure that problem could be resolved.
Something like a 1TB or larger SD card for a dollar or two. We already have 256GB r/w SD cards, so that should be possible since the PROM cells are a lot smaller.
Flash is storing multiple bits per transistor now by treating the stored charge as an analogue quantity; that's one reason for the current failure rates. So, even compared on a same-process basis, Flash will be at least double the density of PROM. And that's without addressing the collateral damage of blowing a fuse in a seriously dense array.
Blowing fuses is not the only way to store data, and any technology that depends on a stored charge will not be a permanent storage solution. Storing multiple bits based on the amount of charge will be even less reliable.
There was some research back in the 1702 EPROM era on PROMs where the storage was based on changing the resistance of the material between the row and column electrodes. IIRC the material could be changed from an amorphous to a crystalline (or maybe vice versa) state by applying a voltage across the electrodes. The change was permanent, and one of the advantages was the high density it made possible.
Something like that would make a great permanent storage medium. Not sure what happened to it, but I suspect it was forgotten in the stampede to produce and improve EPROMs and EEPROMs at the time. No one would have considered the technology viable as archival storage at that point due to the low capacity of the chips.