Electronic FIRE for your PC. Boot in less than 5 seconds.

:freaked: I can only call this thing ELECTRONIC FIRE.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820227579&cm_re=revodrive-_-20-227-579-_-Product
I built a system for a friend that uses one of these along with tons of RAM and an EE i7.
I soon realized that if anyone has a PCIe slot available in an older, slower system, adding one of these drives would make it SCREAM.
That is all.
Electronic FIRE!!!! :jumpin:
Comments
That officially kills off my hopes of vacuum tubes, relays & Fahnestock clips coming back in vogue.
How about the fact that I never turn my machine off anyway?
Every year both SD cards and hard drives increase in capacity, but go back a few years and modern SD cards are now bigger than hard drives were then.
The one thing I've never seen a solution to is wear levelling for solid-state drives. Windows assumes it can read and write to a drive all the time through its cache, but would a move to solid state mean a rewrite of the cache code for Windows? Maybe you have a local SDRAM cache. Or maybe there really is enough RAM on a PC now (wasn't 64K always going to be more than enough?)
Jim
Seek time:
RevoDrive: 0.1 ms
7200 RPM SATA 3.0 HDD: 8.9 ms
Transfer rate:
RevoDrive average: 500 MB/s
7200 RPM SATA 3.0 HDD average: 60 MB/s
(The interface is not what determines a 7200 RPM hard drive's transfer rate; its spindle speed and actuator arm do. The SATA 1.0 / 2.0 / 3.0 Gb/s interface "upgrade" is largely a scam: SATA 1 vs. SATA 3 makes no difference to transfer rate or seek time on a 7200 RPM hard drive. The newer SATA standards only benefit solid-state drives, save for NCQ.)
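For anyone wondering why those numbers matter so much in practice, here's a rough back-of-the-envelope sketch in Python, purely illustrative, using the figures quoted above; it assumes each random read pays one full seek plus the transfer time and ignores rotational latency, caching and queuing.

def random_read_time_ms(n_reads, seek_ms, transfer_mb_s, block_kb=4):
    # Each read: one seek plus the time to move one small block.
    transfer_ms = (block_kb / 1024) / transfer_mb_s * 1000
    return n_reads * (seek_ms + transfer_ms)

n = 10_000  # e.g. lots of small random reads during boot
print("RevoDrive:    %.1f s" % (random_read_time_ms(n, 0.1, 500) / 1000))  # ~1.1 s
print("7200 RPM HDD: %.1f s" % (random_read_time_ms(n, 8.9, 60) / 1000))   # ~89.7 s
# Seek time dominates small random I/O, which is why the SSD screams
# on boot no matter which SATA revision the HDD sits on.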
That is exactly why Ubuntu Linux is so wonderful. Nothing extra to buy.
Nice solid-state drive, but how does it mitigate the wear issues inherent in solid-state storage? Sure, the size is nice, but is the engineering behind it equally good?
Uhh, that means ... 83,333 days
228 years.
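For reference, the arithmetic above works backwards from a 2,000,000-hour MTBF (83,333 days x 24), which is presumably the figure being quoted:

mtbf_hours = 2_000_000          # presumed quoted MTBF
days = mtbf_hours / 24          # ~ 83,333 days
years = days / 365              # ~ 228 years
print(f"{days:,.0f} days is about {years:.0f} years")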
25nm chips.
http://hothardware.com/Articles/Intel-Micron-Announce-25nm-NAND-Technology-/
http://www.imftech.com/news/release_1feb10.html
https://secure.wikimedia.org/wikipedia/en/wiki/Multi-level_cell
http://www.legitreviews.com/article/1410/1/
MTBF is a bit of a deception if the same addresses are written to over and over again by logs maintained in a boot process. And these days, journaling file systems (like NTFS) keep as many as 10 backup copies of file system information. Solid-state storage is not well adapted to journaling file systems because of the excessive wear on just a few blocks. So solid-state drives last much longer if the blocks being written to are rotated.
Solid-state memory has to rewrite much larger erase blocks over and over again than the comparatively small sectors of a conventional hard disk. Some solid-state memory schemes (like SD cards) manage to rotate locations to level the wear, but the hard disk software an OS normally provides doesn't do so. For example, many hobbyists bypass the wear leveling in SD cards when using SPI and discover that the cards fail prematurely.
Linux has explored several file system schemes specifically intended to lengthen the useful life of solid-state storage, but the last time I researched them, the schemes couldn't handle anything over about 64 KB in any reasonable fashion.
So either this is a very large device with something new in the way of sophisticated protection and wear leveling (which is not clearly mentioned), OR you may have to wait for hardware-specific file system software to be developed to get more than three or so years of use out of it.
This problem confronted me when I got an EEEpc 701-4G. No one ever gave a precise answer as to how to optimize wear in just those 4 GB. It appears the same issue is being ignored again. I finally accepted that I'd never know until the failure occurred. As it is, the EEEpc's power supply failed first.
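To put rough numbers on the wear concern, here's a minimal endurance estimate; the capacity, P/E cycle count, write amplification and daily write volume are all assumptions for illustration, not the drive's actual specs.

# Crude flash endurance estimate. Every input is an illustrative assumption.
capacity_gb = 120                 # assumed drive capacity
pe_cycles = 3000                  # assumed program/erase cycles for ~25nm MLC
write_amplification = 3.0         # assumed extra internal writes (journaling, GC)
host_writes_gb_per_day = 20       # assumed daily write volume

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_host_writes_gb / host_writes_gb_per_day / 365
print(f"~{lifetime_years:.0f} years at this write rate")   # ~16 years here
# With good wear leveling the writes spread across the whole device;
# without it the same few blocks take every hit and die in a fraction
# of that time, which is exactly the journaling-filesystem worry above.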
So if it only lasted 100 years, then would it be covered under warranty?
What does this tell you? The math behind MTBF is extremely generalized, while the wear issues with solid-state storage are very different from what that math assumes.
Do you have any electronic devices that have operated well for 228 years? Seems to be 'sucker bait'.
If you go to their website, it appears that you might have to completely shut down BIOS support for mechanical hard disks in the same unit in order to get it to work. While that may be okay, it is just one of the hurdles being ignored. The issue of proper wear leveling remains central to using solid-state drives. M$ Windows may well be the worst OS to use with them. And the manufacturers are more than willing to sit on top of proprietary designs rather than explain how they make their products reliable.
If the warranty is 3 years, the useful life is 3 years - plain and simple. (If the company stays in business for 3 years.)
Wikipedia presents a good introduction to the topic. It is up to you to be an informed consumer.
http://en.wikipedia.org/wiki/Solid-state_drive
(Feel free to ask an AutoCAD user if he wants it... Especially if he works on 50+MB files containing digitized maps... )
Some are using it in servers, which I consider stupid...
(Nothing that isn't Hot-plug and redundant goes into the servers I admin.)
This product uses two banks of flash memory to emulate HDDs in a RAID0 configuration. The theory is that data will be striped evenly between them, so that transfers can be done in roughly half the time.
Unfortunately, with this product, if ONE bank fails, you have to toss out the entire card as the bank can't be replaced.
Also, RAID0 offers NO redundancy, so a read failure means lost data.
RAID0 = Two or more disks lumped together to operate as one.
RAID1 = pairs of drives mirror each other for redundancy.
RAID5 = 3 or more drives in an array, with data spread out and one disk worth of parity data.
(There are more variants. Suffice to say that I don't consider RAID0 a good solution... )
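A toy sketch of the difference being described, with two Python lists standing in for the two banks; it's purely illustrative, not how any real controller works.

def raid0_write(blocks, devices):
    # Stripe: block i lands on device i % N. Fast, but each device holds
    # data no other device has, so one dead device kills the whole array.
    for i, block in enumerate(blocks):
        devices[i % len(devices)].append(block)

def raid1_write(blocks, devices):
    # Mirror: every device gets every block, so any surviving copy is enough.
    for block in blocks:
        for dev in devices:
            dev.append(block)

stripe, mirror = [[], []], [[], []]
raid0_write(["b0", "b1", "b2", "b3"], stripe)  # [['b0','b2'], ['b1','b3']]
raid1_write(["b0", "b1", "b2", "b3"], mirror)  # both halves hold all four blocks
# Lose stripe[1] and b1/b3 are gone; lose mirror[1] and nothing is lost.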
I wouldn't build a system with one of these devices unless the system also had a few hard drives in raid 1 as backup.
Any new piece of hardware is prone to configuration issues and reliability failures.
I was able to run dual 7200 RPM hard drives in RAID1 alongside the RevoDrive. Some motherboards have issues with this card, others do not, so know what the hell you're doing if you plan to spend $500 on a single piece of hardware. I figured anyone spending that kind of money would expect issues from working with cutting-edge hardware (which is common with fresh hardware).
The MTBF only suggests that this device has a fairly long lifespan.
In 3 years, I bet you this drive will cost $50.
I much prefer to smack in a RAID5 controller with half a GB of battery-backed cache, and preferably 15K RPM dual-port SAS disks...
(track-to-track seek 0.14 ms, average seek 2.58 ms)
Of course, that's Server HDDs, not something you pick up in a 1337-store.
For PCs I think hybrid hard drives would be the best way to go, but I have not heard much about them in a long time. The idea is to merge a couple of gigs of NAND flash memory with a large hard drive. The OS can use the solid-state memory as a prefetch buffer, or to buffer writes while it is reading data.
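Roughly, the idea looks like this; a minimal sketch assuming a tiny flash cache in front of a big mechanical disk, with invented class and method names.

class HybridDrive:
    """Toy hybrid drive: a small NAND cache in front of a large, slow disk."""

    def __init__(self, cache_blocks=4):
        self.cache = {}                  # block -> data held in flash
        self.cache_blocks = cache_blocks
        self.disk = {}                   # the big mechanical store

    def read(self, block):
        if block in self.cache:          # hot block served from flash: no seek
            return self.cache[block]
        data = self.disk.get(block)
        self._promote(block, data)       # keep recently read blocks in flash
        return data

    def write(self, block, data):
        self._promote(block, data)       # absorb the write in flash...
        self.disk[block] = data          # ...then persist to the platters

    def _promote(self, block, data):
        if len(self.cache) >= self.cache_blocks:
            self.cache.pop(next(iter(self.cache)))   # evict the oldest entry
        self.cache[block] = data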
I don't mean for direct online mirroring; I don't think the SSD needs that.
I mean the RAID1 mechanical hard drives used for offline imaging of the SSD, as a non-live backup image.
SSDs aren't really "plug and play". I'm using Windows 7, which supports TRIM, and there are other things that have to be set.
http://en.wikipedia.org/wiki/TRIM
The speed increase IMHO was worth the money spent on it. I also store a daily backup image of the system drive to a 1.5 TB regular hard drive, done in the background using Macrium Reflect (free edition).
But the current state of the software solutions does not quite make them ready for a 'first choice' for everyone.
And, the MTBF math just really annoys me. It means nothing in terms of what is really going on.
I have some 2.5" Ext IEEE1394b FW800 based drives with a rated 1000G non run / 400G run Shock rating . 0.0
there from G-Tech . not cheap but are some of the fastest drives I have ever used for a EXT storage solution .
I dont like SSDs that are used for the wrong reason . there not the be all and end all to the data world ..
mag HDs when running draw a quite flat power draw line( once spinning ).
FLASH SSDs draw peanuts for read power and read very fast but in Write can in some drives out draw a normal mag HD. and the read speeds are fast but unless you use a higher end drive or use SLC you might as well use a high end Mag HD . .
I used one for editing video in a friends laptop PC . and it was slower on sustained write then my Gtech 800FW .
2 years ago I read a white paper on SSDs in a server environment .
HP did a test l. and in a enterprise server they cooked a SSD in 27 days . 0.0 !!!!!!!!!!!
But what you didn't say is how fast and how much work was done in those 27 days.
Take any object and get it going at its maximum and it will eventually blow up.
http://www.youtube.com/watch?v=0-1CIkXfcr8
I wish I had the PDF; it was a very well-written article.
Ironically, the drive that failed was the USB flash drive I had my white papers on, after I sent it through the wash...
So if engineering is really going to be competent and comprehensive, it seems that the dialogue has to be out in the open. (Something the marketing department hates.)
Yes, for AutoCAD and movie making, these current devices really are useful. For everyday use in a generalized setting, maybe not.
Still, I'd only go with a Linux-supported device at this point, as I know I'd get real answers, not 228 years of MTBF based on a lot of wild assumptions about use.
I only watch tractor pulling for the mayhem.
And that one is one of the better ones...
(Not often you see the tractor run over its own engine.)
Of course, the engines used in tractor pulling are pushed well beyond their design specs, or may be old (RR Merlin or Griffon engines haven't been made in the last few years), or are used in a manner their builders never considered (mounting 4 or 5 helicopter TURBINE engines on a tractor and hooking them all up to the same 'gearbox' is even more insane than doing it with 4 or more V12 engines).
Anything pushed beyond specs will sooner or later fail...
(Usually sooner)
MTBF is estimated, usually from how long products with similar technology last, plus a lot of guesswork. Sometimes they 'remove' some of the guesswork (some HP server HDDs have restrictions in rack-mounted equipment if the location is over 1000 meters altitude, plus limits on temperature and humidity levels and how fast those values are allowed to change. AAARGH!), but it's still 'guesswork'...
It also helps to know that drives are designed towards a set MTBF, as building for a higher MTBF means a more robust, more expensive design.
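One way to read an MTBF claim is to convert it to an annualized failure rate, assuming the constant-failure-rate (exponential) model those figures imply; the 2,000,000-hour input is just the presumed figure from earlier in the thread.

import math

def annualized_failure_rate(mtbf_hours, hours_per_year=8766):
    # Probability a drive fails within a year of continuous use,
    # under the exponential (constant failure rate) assumption.
    return 1 - math.exp(-hours_per_year / mtbf_hours)

print(f"{annualized_failure_rate(2_000_000):.2%} per year")   # ~0.44%
# So "2,000,000 hours MTBF" means roughly 4-5 failures per 1000 drives per
# year of continuous use across a large population, not a 228-year lifetime,
# and it says nothing at all about flash wear-out.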
Want something solid?
Go for server HW.
Want something cheap?
Hit Walmart...
Want something fast?
The nearest 1337-shop...
Want something fast AND solid?
Go for EXPENSIVE server gear...
An HP MSA2324fc, 24 x 600 GB 10K RPM dual-port SAS HDDs (I don't think they're available larger than 146 GB at 15K yet) and a Fibre Channel controller card in your PC should do plenty...