FRAM, I believe, is phase-change based. HP's "memristor" might be as well. No matter how good the longevity is intended to be, they're all targeting at least limited rewritability.
To increase speed and optimize write/read strategies, modern SSDs use an SRAM chip, backed up by a supercap. So, if you were in the middle of a write and your computer lost power, the SRAM will keep the data as long as the supercap lasts; but if you don't supply power to the SSD again, the data that was in SRAM will be lost.
That is a different problem, one you get if your drive has a RAM cache in front of the store.
What I have been reading about is the failure of the little Flash cells as charge leaks out of them when not powered up, which seems to depend on temperature and on the number of times the cell has been rewritten previously.
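For a feel of the temperature part, here is a minimal Python sketch of the Arrhenius-style acceleration model that retention ratings are commonly derived from; the activation energy and the temperature pairs are illustrative assumptions, not vendor figures:

import math

K_EV = 8.617e-5   # Boltzmann constant, eV/K
EA_EV = 1.1       # assumed activation energy for charge loss, eV

def acceleration(t_use_c, t_stress_c, ea=EA_EV):
    # How much faster charge leaks away at t_stress_c than at t_use_c.
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea / K_EV) * (1.0 / t_use - 1.0 / t_stress))

print(acceleration(25, 40))   # shelf vs warm room: roughly 8x faster
print(acceleration(25, 60))   # shelf vs hot car: roughly 90x faster

That's the shape of the hot-car worry: a cell that holds its charge for years on a shelf lets go of it far faster when baked.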
Small detail: Those will be SDRAM, a DRAM variant.
Yep, Samsung have got into a bit of hot water recently with their early entry into using TLC Flash. The drive struggles to read old data from the Flash cells, and as a result the data rate takes a progressively bigger hit as the data ages. Reports have it getting as low as 20MB/s read rates. This keeps up until a refresh is manually executed. Dunno if it eventually fails completely or not. It became glaringly obvious only with their newest Flash chips because only those ones degraded in a few months flat. The earlier TLC Flash chips had the same problem but, being less dense, degraded much more slowly.
It may be a variant of what Seagate has highlighted, dunno. The Samsung problem is not related to powered-down time though, it's just time since the data was written. If it is related, then the TLC parts are not the only ones degrading in this fashion.
Samsung's workaround has been to provide two firmware updates so far: the first one didn't say what it did, but must have been smarter tracking of variables like temperature to better judge the cell behaviours. The second update continuously refreshes the flash blocks during idle periods. There is no hard info on how fast the refresh runs. There are also reports of a new bug with the second firmware update that corrupts data blocks when queued TRIMming is enabled; the OS workaround is to make TRIM operations unqueued.
Lol, maybe Samsung are just late to the solution and it's common practice to run background refreshing now. In which case power-on time will matter if the refresh rate is slow.
Or maybe everyone is rushing to implement this Samsung refresh hack because it's a fundamental flaw of shrinking Flash cells so small. And Seagate have just taken the opportunity to score some much needed points.
Hehe, so, as Flash approaches the capacitance reduction limit it behaves more and more like DRAM. Its non-volatility progressively goes out the door. I guess that means they've already reached one practical limit.
Stacked silicon is coming up and, rather conveniently, Flash is an extreme example of "dark silicon". What this means is that extreme levels of stacking can be packed into one device without concern for heat build-up.
Oh well, turns out the original article is five years old so not scoring points off Samsung's recent misfortunes at all. More just an extreme what-if back when SSDs were still tiny.
However, it will be interesting to see if Samsung have bumped into a limit that is merely more obvious in TLC parts, or whether it's a TLC-only problem, or just Samsung screwing up.
Which article is that? The article linked in post 1 looks like it's from this month.
If this is a problem, why haven't we heard of it before? I have not heard of anyone losing data from an SSD being powered off too long or left in a hot car, etc. Do I need to get out more?
Sorry, I should have said the source material for the original article is five years old. And that source wasn't anything drastic like the original article portrayed.
Samsung's issues are a real concern though. Until that's resolved there are questions about the reliability of Flash-based SSDs.
I have been using a Samsung SSD 840 EVO 750GB drive for a few months now.
Keep an eye on loading speed. If it hasn't been patched then you should be able to measure a progressive reduction in read rates over the next few months. It might already be showing up. The test is simple: just do contiguous long reads, as in the sketch below. It's meant to be pinned at the max rate of the interface across the whole drive, something like 600MB/s for SATA3. However, for blocks that haven't been written to for a while, this rate gets progressively worse with age.
Samsung haven't given a technical explanation that I know of for why this occurs, beyond saying it's a problem with cell voltages.
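A minimal Python sketch of such a test, assuming a Linux machine, root access, and /dev/sda as the drive; the chunk and region sizes are arbitrary choices:

import os, time

DEVICE = "/dev/sda"       # assumed device path, change to suit
CHUNK = 1024 * 1024       # read in 1MiB pieces
REGION = 1024 * CHUNK     # sample 1GiB at each probe point
REGIONS = 8               # number of evenly spaced probe points

# Run it on a cold cache; repeat runs will be inflated by the page cache.
fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # block devices report their size here

for i in range(REGIONS):
    offset = (size // REGIONS) * i
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.time()
    done = 0
    while done < REGION:
        chunk = os.read(fd, CHUNK)
        if not chunk:
            break
        done += len(chunk)
    rate = done / (time.time() - start) / 1e6
    print("offset %14d: %7.1f MB/s" % (offset, rate))

os.close(fd)

Old, long-untouched regions reading far slower than freshly written ones is the symptom to look for.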
Yeah, I too saw the update on this story - the researcher said this was taken out of context also.
SSD still seems to be a bit immature though, what with degrading performance etc. So I don't think I will invest in expensive, large SSD disks just yet. Something affordable to be used as a system drive, perhaps (with solid backups). Then hopefully quality will improve at some point - for a while now it has to some degree been the opposite: higher capacity but faster degradation / fewer rewrite cycles. Maybe flash will improve, or maybe FRAM or some other alternative tech will increase capacity sufficiently to take over. I'm a bit wary of technology based on keeping a charge... DRAM being the extreme example of fast leaking.
-Tor
When I set it up, they left about 70GB unallocated for some kind of special use. Not that I do not trust it, but I keep a backup image of it on a magnetic HD.
Is that listed as reserved/unallocated by the partitioning/formatting software, or some special drive interrogation tool? Or are you talking about formatted vs unformatted space?
I'm not sure it is totally debunked. From a DELL document on SSD data retention I read:
"It depends on how much the flash has been used (P/E cycles used), the type of flash, and storage temperature. In MLC and SLC, this can be as low as 3 months and best case can be more than 10 years. The retention is highly dependent on temperature and workload."
DELL's very own table makes this look really bad. But the key is in the "@ rated P/E cycle" part. What is that?
Elsewhere in the document they claim the following program/erase cycles:
30K-1M for SLC
2.5K-10K for MLC
10K-30K for eMLC
So it all depends what you have got and what you are doing with it. The argument seems to be that most of us never get near those max P/E cycles, so we should not worry.
http://www.dell.com/downloads/global/products/pvaul/en/Solid-State-Drive-FAQ-us.pdf
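To get a feel for how far away those rated cycles are in practice, here is a minimal back-of-envelope Python sketch; the capacity, write-amplification factor and daily write volume are illustrative assumptions, not figures from the Dell document:

CAPACITY_GB = 750            # assumed drive size
RATED_PE_CYCLES = 2500       # low end of the MLC range quoted above
WRITE_AMPLIFICATION = 2.0    # assumed controller overhead factor
HOST_WRITES_GB_PER_DAY = 20  # assumed desktop workload

# Total host writes the flash can absorb before reaching rated P/E cycles.
tbw_gb = CAPACITY_GB * RATED_PE_CYCLES / WRITE_AMPLIFICATION
years = tbw_gb / HOST_WRITES_GB_PER_DAY / 365

print("endurance: %.0f TB written, roughly %.0f years at this workload"
      % (tbw_gb / 1000, years))

Which is the point: under these assumptions a desktop workload takes decades to reach the rated cycles, and it's the retention-at-rated-wear column in the table that carries the scary numbers.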
"...about 70GB unallocated for some kind of special use. Not that I do not trust it but I keep a backup image of it on a magnetic HD."
"Move along, folks. Nothing to see here."
-Phil
Thanks for confirming. I didn't think the original passed the sniff test.
Some folks will do anything for a few extra clicks.
It is unallocated, not formatted. I can't quite remember, but I think it was my choosing to set it up to work faster; it also uses some system RAM. IIRC it was because I enabled RAPID mode, which moves things rather quickly on a SATA III 6Gb/s connection.
I was on my Debian drive when I first read the thread this morning. I went back to the Samsung drive and, using Samsung's Magician software, these were the random and sequential read/write times reported this morning. I figure because the software is named Magician those numbers must be magic.
Edit: I found the reason for the 70GB unallocated. Here is a picture.
Okay, so that's the special drive interrogation tool. Over-provisioning should be the area not reported to partitioning software. This sort of reservation exists even on magnetic hard discs, where its purpose is mapping out failed blocks. With SSDs I think that space serves a little more than that as well.
However, the example picture looks to be showing the partition table layout rather than special unreachable space. Odd.
In addition to what's shown there, there is also the excess available due to it being real silicon storage. Unlike a HDD, an SSD will be built in the usual cell array fashion. This means there are three full banks of 256GiB, each bank built from 38 addressing bits:
2^38 = 274 877 906 944 bytes
3*(2^38) = 824 633 720 832 bytes, or 768GiB, of Flash memory in total.
Samsung Magician is reporting an unpartitioned disc size of 698.64GiB, or 750 158 987 919.36 bytes. That's been rounded, so there is room for aligning it to a boundary; there is a nearby 16KiB boundary at 750 158 987 264.
And 824 633 720 832 - 750 158 987 264 = 74 474 733 568. So there is an additional 74 474 733 568 / (2^30) = 69.36GiB of unreachable space, which Samsung has not listed, that will also be automatically used for over-provisioning. EDIT: Of course, 69.36 = 768 - 698.64. I felt the need to spell out the unlisted part.
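For anyone who wants to replay that arithmetic, here is a minimal Python sketch; the bank count, address width and the Magician-reported size are the figures from the post above:

BANKS = 3
ADDRESS_BITS = 38                    # address width per bank
GIB = 2**30

raw_flash = BANKS * 2**ADDRESS_BITS  # 824 633 720 832 bytes = 768GiB
reported_gib = 698.64                # unpartitioned size per Samsung Magician

# Round the reported figure down to the nearby 16KiB boundary.
reported = int(reported_gib * GIB) // 2**14 * 2**14

hidden = raw_flash - reported
print("raw flash: %d bytes (%.2f GiB)" % (raw_flash, raw_flash / GIB))
print("reported : %d bytes" % reported)
print("unlisted : %d bytes (%.2f GiB)" % (hidden, hidden / GIB))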
Those read rates are crazy high. SATA3 should not be able to go that fast. When you're back in Debian, do a
$ sudo dd if=/dev/sda of=/dev/null bs=1M count=10k
or similar, whatever the device name is for you.
In other words, delete that unneeded 69.87GiB OP reserved space and let the NTFS partition take it all.