
Electronic FIRE for your PC. Boot in less than 5 seconds.

Clock Loop Posts: 2,069
edited 2010-11-04 07:43 in General Discussion
:freaked: I can only call this thing ELECTRONIC FIRE.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820227579&cm_re=revodrive-_-20-227-579-_-Product

I built a system for a friend that uses one of these along with tons of RAM and an EE i7.

I soon realized that if anyone has a PCIe slot available in an older, slower system, adding one of these drives would make it SCREAM.

That is all.

Electronic FIRE!!!! :jumpin:

Comments

  • erco Posts: 20,261
    edited 2010-10-23 09:30
    Pretty amazing, looks like the wave of the future.

    That officially kills off my hopes of vacuum tubes, relays & Fahnestock clips coming back in vogue.
  • Heater. Posts: 21,230
    edited 2010-10-23 13:05
    So I should shell out 600 dollars to reduce my computer's boot time? Time during which I'd normally be drinking my morning coffee anyway?

    How about the fact that I never turn my machine off anyway?
  • Martin_H Posts: 4,051
    edited 2010-10-23 16:20
    Well we're expected to shell out extra to keep malware off Windows machines.
  • Dr_Acula Posts: 5,484
    edited 2010-10-23 16:39
    I wonder if we are seeing the beginning of the end for mechanical hard drives?

    Every year both SD cards and hard drives increase in capacity, but go back a few years and modern SD cards are already bigger than hard drives were then.

    The one thing I've never seen a solution to is wear levelling for solid state drives. Windows assumes it can read and write to a drive constantly through its cache; would a move to solid state mean a rewrite of the caching code in Windows? Maybe you have a local SDRAM cache. Or maybe there really is enough RAM in a PC now (wasn't 64K always going to be more than enough?)
  • kwinn Posts: 8,697
    edited 2010-10-23 18:03
    Dr_Acula wrote: »
    I wonder if we are seeing the beginning of the end for mechanical hard drives?
    Dr Acula, I think the answer to that is a definite yes. I am already thinking of having a solid state hard drive for the OS and standard programs I use on my next system. Data will still go on a mechanical HDD.
  • hover1 Posts: 1,929
    edited 2010-10-23 18:09
    Don't take away my beloved Fahnestock clips!

    Jim
  • Clock Loop Posts: 2,069
    edited 2010-10-23 18:43
    To put it in perspective:

    Seek time:
    RevoDrive: 0.1 ms
    7200 RPM SATA 3.0 drive: 8.9 ms

    Transfer rate (average):
    RevoDrive: 500 MB/sec
    7200 RPM SATA 3.0 drive: 60 MB/sec


    (The interface is not what determines a 7200 RPM hard drive's transfer rate; its spindle speed and actuator arm do. The SATA 1.0/2.0/3.0 Gb/sec interface "upgrade" is a scam in that sense: SATA 1 vs SATA 3 makes no difference in transfer rate or seek time on a 7200 RPM hard drive. Only solid state drives benefit from the newer SATA standards, save for NCQ.)
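    A rough back-of-the-envelope (a sketch using the round numbers above, not benchmark data) shows why seek time dominates when loading lots of small files:

        # Time to read N small files = N * (seek + size / rate),
        # using the approximate figures quoted above.
        N, size = 1000, 4096                    # 1000 files of 4 KB each

        def read_time(seek_s, rate_bps):
            return N * (seek_s + size / rate_bps)

        hdd = read_time(8.9e-3, 60e6)           # 7200 RPM drive: ~8.97 s
        ssd = read_time(0.1e-3, 500e6)          # RevoDrive:      ~0.11 s
        print(f"HDD {hdd:.2f}s  SSD {ssd:.2f}s  ratio {hdd/ssd:.0f}x")

    On a seek-bound workload like that, the SSD comes out roughly 80x faster, and the SATA interface version barely matters.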
  • LoopyByteloose Posts: 12,537
    edited 2010-10-23 20:34
    Martin_H wrote: »
    Well we're expected to shell out extra to keep malware off Windows machines.

    That is exactly why Ubuntu Linux is so wonderful. Nothing extra to buy.

    Nice solid-state hard drive, but how is it engineered to prevent the inherent wear issues of solid-state storage? Sure, the size is nice, but does this really have great engineering as well?
  • LoopyByteloose Posts: 12,537
    edited 2010-10-24 08:51
    I like the product, but fear it is not quite ready for the real world.

    MTBF is a bit of a deception if the same addresses are written over and over by logs maintained in a boot process. And these days, journaling file systems (like NTFS) keep as many as 10 backup copies of file system information. Solid-state storage is not well adapted to journaling file systems because just a few blocks take excessive wear. So solid-state hard disks last much longer if the blocks being written to are rotated.

    Solid-state memory has to rewrite much larger blocks over and over than the comparatively small sectors of a conventional hard disk. Some solid-state schemes (like SD cards) rotate the locations to level the wear, but OS-provided hard disk software normally doesn't. For example, many hobbyists bypass the wear leveling in SD cards when using SPI and discover that the cards fail prematurely.

    Linux has explored several file system schemes specifically meant to lengthen the useful life of solid-state storage, but the last time I researched them, the schemes couldn't handle anything over about 64K bytes in any reasonable fashion.

    So either this is very big storage with something new in sophisticated protection and wear leveling (which is not clearly mentioned), OR you may have to wait for hardware-specific file system software to be developed to get more than 3 or so years of use out of it.

    This problem confronted me when I got an EEEpc 701-4G. No one has ever given a precise answer as to how to optimize wear in just 4 Gbytes. It appears the same issue is being ignored again. I finally accepted that I'd never know until the failure occurred. As it is, the EEEpc's power supply failed first.
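    For anyone wondering what "rotating the blocks" looks like, here is a minimal toy sketch of least-worn block allocation (my own illustration of the idea, not how any particular controller actually works):

        # Toy wear leveling: each logical write is remapped to the free
        # physical block with the fewest erases, so no one block wears out first.
        NUM_PHYSICAL = 8
        erase_count = [0] * NUM_PHYSICAL     # wear per physical block
        logical_map = {}                     # logical block -> physical block

        def write(logical):
            in_use = set(logical_map.values())
            in_use.discard(logical_map.get(logical))   # old copy is reusable
            target = min((p for p in range(NUM_PHYSICAL) if p not in in_use),
                         key=lambda p: erase_count[p])
            erase_count[target] += 1
            logical_map[logical] = target

        for _ in range(100):                 # hammer one logical block
            write(0)
        print(erase_count)                   # ~12-13 erases each, not 100 on one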
  • NWCCTV Posts: 3,629
    edited 2010-11-02 21:01
    I would say for the price of that thing I could build a system that SCREAMS!!!!
  • bill190 Posts: 769
    edited 2010-11-02 22:36
    Clock Loop wrote: »
    The MTBF is 2,000,000 hours.

    Uhh, that means ... 83,333 days

    228 years.

    So if it only lasted 100 years, then would it be covered under warranty?
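    The arithmetic checks out (a quick sanity check below), but the usual reading of MTBF is a failure rate across a population of drives, not one drive's lifespan:

        mtbf_hours = 2_000_000
        print(mtbf_hours / 24)               # 83333 days
        print(mtbf_hours / 24 / 365)         # ~228 years

        # Standard interpretation: annualized failure rate for a fleet,
        # i.e. roughly 0.4% of drives failing per year, not a 228-year life.
        print(f"{8760 / mtbf_hours:.2%} per year")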
  • LoopyByteloose Posts: 12,537
    edited 2010-11-03 03:15
    228 years between failures.... AND only a 3 year warranty.

    What does this tell you? The math for MTBF is extremely generalized, while the wear issues of solid-state storage are something else entirely.

    Do you have any electronic devices that have operated well for 228 years? Seems to be 'sucker bait'.

    If you go to their website, it appears that you might have to completely shut down BIOS support for mechanical hard disks in the same unit in order to get it to work. While that may be okay, it is just one of the hurdles being ignored. The issue of proper wear leveling remains central to using solid-state hard disks. M$ Windows may well be the worst OS to use with them. And the manufacturers are more than willing to sit on top of proprietary designs rather than explain how they make their product reliable.

    If the warranty is 3 years, the useful life is 3 years - plain and simple. (If the company stays in business for 3 years.)

    Wikipedia presents a good introduction to the topic. It is up to you to be an informed consumer.

    http://en.wikipedia.org/wiki/Solid-state_drive
  • Gadgetman Posts: 2,436
    edited 2010-11-03 04:29
    This card is for those who desperately NEED the extra speed.
    (Feel free to ask an AutoCAD user if he wants it... Especially if he works on 50+MB files containing digitized maps... )

    Some are using it in servers, which I consider stupid...
    (Nothing that isn't Hot-plug and redundant goes into the servers I admin.)

    This product uses two banks of FLASH memory to emulate HDDs in a RAID0 configuration. The theory is that files will be distributed evenly between them, so that writes can be done in half the time.
    Unfortunately, with this product, if ONE bank fails, you have to toss out the entire card as the bank can't be replaced.
    Also, RAID0 offers NO redundancy, so a read failure means lost data.

    RAID0 = Two or more disks lumped together to operate as one.
    RAID1 = pairs of drives mirror each other for redundancy.
    RAID5 = 3 or more drives in an array, with data spread out and one disk's worth of parity data.
    (There are more variants. Suffice to say that I don't consider RAID0 a good solution... )
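    A minimal sketch of why RAID0 is fast but fragile (my own toy illustration, not the card's actual firmware): data is striped round-robin across the banks, so every bank must survive for any large file to be readable.

        STRIPE = 4   # bytes per stripe; tiny, just for illustration

        def raid0_write(data, banks=2):
            # Deal out stripes round-robin; each bank holds half the data.
            stripes = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
            out = [[] for _ in range(banks)]
            for i, s in enumerate(stripes):
                out[i % banks].append(s)
            return out

        bank0, bank1 = raid0_write(b"0123456789ABCDEF")
        print(bank0)   # [b'0123', b'89AB']
        print(bank1)   # [b'4567', b'CDEF'] -- lose either bank, lose the file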
  • Clock Loop Posts: 2,069
    edited 2010-11-03 04:43
    Using hardware without knowing what you're doing will result in failure.

    I wouldn't build a system with one of these devices unless the system also had a few hard drives in raid 1 as backup.

    Any piece of new hardware is prone to configuration issues and reliability failures.

    I was able to run dual 7200 RPM hard drives in RAID 1 alongside the RevoDrive. Some motherboards have issues with this card, others do not; know what the hell you're doing if you plan to spend $500 on a single piece of hardware. I figured anyone spending that kind of money would expect issues, which is common with cutting-edge hardware.

    The MTBF only suggests that this device has a fairly high life span.

    In 3 years, I bet you this drive will cost $50.
  • Gadgetman Posts: 2,436
    edited 2010-11-03 05:13
    Mixing drives (having 7200 RPM drives mirror the Revo) isn't that good an option, as the system will always wait on the slowest drive to keep them synchronised.

    I much prefer to smack in a RAID5 controller with half a GB of battery-backed cache, and preferably 15K RPM dual-port SAS disks...
    (track-to-track 0.14 ms, average seek 2.58 ms)
    Of course, those are server HDDs, not something you pick up in a 1337-store.
  • mctrivia Posts: 3,772
    edited 2010-11-03 05:53
    Personally I think solid state hard drives are an excellent choice for laptops. Not for speed, but because they don't get damaged when moving the computer around. A standard hard drive's spinning platter acts as a gyroscope, and even a slight rotation of the computer can result in a scratch.

    For PCs I think hybrid hard drives would be the best way to go, but I have not heard much about them in a long time. The idea is to merge a couple gigs of NAND flash memory with a large hard drive. The OS can use the solid state memory as a prefetch buffer, or to buffer writes while it is reading data.
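    In miniature, the hybrid idea looks something like this (a toy sketch of the concept, not any shipping product's firmware): recent writes land in a small flash buffer and are flushed to the big disk in batches.

        class HybridDrive:
            """Toy model: small fast flash buffer in front of a big slow disk."""
            def __init__(self, flash_capacity=4):
                self.flash = {}                  # block -> data (NAND buffer)
                self.disk = {}                   # block -> data (platters)
                self.capacity = flash_capacity

            def write(self, block, data):
                self.flash[block] = data         # fast path: absorb the write
                if len(self.flash) > self.capacity:
                    self.flush()                 # only now touch the slow disk

            def read(self, block):
                return self.flash.get(block, self.disk.get(block))

            def flush(self):
                self.disk.update(self.flash)
                self.flash.clear()

        d = HybridDrive()
        for b in range(6):
            d.write(b, f"data{b}")
        print(d.read(5), len(d.disk))            # data5 5: writes reach disk in batches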
  • Clock Loop Posts: 2,069
    edited 2010-11-03 06:57
    Gadgetman wrote: »
    Mixing drives (having 7200 RPM drives mirror the Revo) isn't that good an option, as the system will always wait on the slowest drive to keep them synchronised.

    I don't mean direct online mirroring; I don't think the SSD needs that.
    I mean RAID 1 mechanical hard drives used for offline imaging of the SSD, as a non-live backup image.
  • ratronic Posts: 1,451
    edited 2010-11-03 09:45
    I don't think you have to worry about OCZ going away anytime soon. I have an OCZ Agility 2 120 GB solid state drive, but you do have to make sure your operating system is set up to use the SSD properly.
    SSDs aren't really "plug and play". I'm using Windows 7, which uses "TRIM", and there are other things that have to be set.
    http://en.wikipedia.org/wiki/TRIM
    The speed increase, IMHO, was worth the money spent. I also store a daily backup image of the system drive to a 1.5 TB regular hard drive, done in the background using Macrium Reflect Free edition.
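    For anyone checking their own Windows 7 setup: the fsutil query below reports whether TRIM commands are being issued ("DisableDeleteNotify = 0" means TRIM is on). It's wrapped in Python here only for consistency with the other sketches in this thread; running fsutil directly in an admin command prompt works just as well.

        import subprocess

        # Query Windows' TRIM setting; 0 = TRIM commands are sent to the SSD.
        result = subprocess.run(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"],
            capture_output=True, text=True,
        )
        print(result.stdout.strip())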
  • LoopyByteloose Posts: 12,537
    edited 2010-11-03 09:59
    I will repeat. I love solid-state hard drives. I had a 4 Gbyte one in my EEEpc and it made the whole experience wonderful - especially not worrying about a bump or jar ruining your hard drive.

    But the current state of the software solutions does not quite make them ready for a 'first choice' for everyone.

    And, the MTBF math just really annoys me. It means nothing in terms of what is really going on.
  • Humanoido Posts: 5,770
    edited 2010-11-03 22:51
    Sounds like an exciting option. My EEE PC has solid state drives. I think they're fantastic. For my larger PC, I use Function-F12 Hibernate. Shuts down or starts up in about 5 seconds. So far, 100% reliable in years of use. knock on wood. :) This could be a poor-man's alternative. The feature is part of WinXP. I don't know why it took me so long to discover it. Maybe because I didn't find it as a user icon and well, I'm more of a Mac Man. :)
  • Peter KG6LSE Posts: 1,383
    edited 2010-11-04 00:34
    especially not worrying about a bump or jar ruining your hard drive.

    I have some 2.5" external IEEE 1394b FW800-based drives with a rated 1000 G non-operating / 400 G operating shock rating. 0.0

    They're from G-Tech. Not cheap, but some of the fastest drives I have ever used for an external storage solution.

    I don't like SSDs being used for the wrong reasons. They're not the be-all and end-all of the data world..

    Magnetic HDs, once spinning, draw a quite flat power line.

    Flash SSDs draw peanuts for read power and read very fast, but on writes some drives out-draw a normal magnetic HD. And the read speeds are fast, but unless you use a higher-end drive or SLC you might as well use a high-end magnetic HD.

    I used one for editing video in a friend's laptop PC, and it was slower on sustained writes than my G-Tech FW800.


    Two years ago I read a white paper on SSDs in a server environment.
    HP did a test, and in an enterprise server they cooked an SSD in 27 days. 0.0 !!!!!!!!!!!
  • Clock Loop Posts: 2,069
    edited 2010-11-04 05:20
    Two years ago I read a white paper on SSDs in a server environment.
    HP did a test, and in an enterprise server they cooked an SSD in 27 days. 0.0 !!!!!!!!!!!

    But what you didn't say is how fast and how much work was done in that 27 days.

    Take any object and get it going at its maximum and it will eventually blow up.

    http://www.youtube.com/watch?v=0-1CIkXfcr8
  • Peter KG6LSE Posts: 1,383
    edited 2010-11-04 06:34
    It was run just like a normal server drive; Samba, I recall...
    I wish I had the PDF; it was a very well written article.
    Ironically, it was the USB flash drive I sent through the wash, with my white papers on it, that failed...
  • LoopyByteloose Posts: 12,537
    edited 2010-11-04 06:48
    The bottom line is that the makers of solid-state hard drives require good, intelligent support from the OS for their drives to properly survive AND prosper. RAID 0 won't do it, and though shutting down some traditional features may extend useful life, not much is proven about the success of such strategies. Also, we may need yet another round of BIOS development to allow them to work alongside conventional hard drives.

    So if engineering is really going to be competent and comprehensive, it seems that the dialogue has to be out in the open. (Something the marketing department hates.)

    Yes, for AutoCAD and movie making, these current devices really are useful. For everyday use in a generalized setting, maybe not.

    Still, I'd go only with a Linux supported device at this point as I know I'd get real answers --- not 228 years of MTBF based on a lot of wild assumptions about use.
  • Gadgetman Posts: 2,436
    edited 2010-11-04 07:43
    Clock Loop wrote: »
    Take any object and get it going at its maximum and it will eventually blow up.
    http://www.youtube.com/watch?v=0-1CIkXfcr8

    I only watch tractorpulling for the mayhem.
    And that one is one of the better ones...
    (Not often you see the tractor run over its own engine.)

    Of course, the engines used in tractorpulling are pushed well beyond their design specs, or may be old (RR Merlin or Griffon engines haven't been made for a few years now), or are even used in a manner never considered by their builders (mounting 4 or 5 helicopter TURBINE engines on a tractor and hooking them all up to the same 'gearbox' is even more insane than doing it with 4 or more V12 engines).

    Anything pushed beyond specs will sooner or later fail...
    (Usually sooner)
    MTBF is estimated, usually from knowing how long products with similar technology last, plus a lot of guesswork. Sometimes they 'remove' some of the guesswork (some HP server HDDs have restrictions if used in rack-mounted equipment at over 1000 meters altitude, plus temperature and humidity limits and limits on how fast those values are allowed to change. AAARGH!), but it's still guesswork...
    It also helps to know that drives are designed toward a set MTBF, as building for a higher MTBF means a more robust and expensive design.
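    Under the constant-failure-rate model behind those numbers (an idealization; real wear-out isn't constant-rate), MTBF translates into survival odds rather than a lifespan:

        import math

        MTBF = 2_000_000   # hours, the quoted figure

        def p_survive(hours):
            # Exponential model: constant failure rate of 1/MTBF.
            return math.exp(-hours / MTBF)

        print(f"3-year warranty: {p_survive(3 * 8760):.1%}")    # ~98.7%
        print(f"228 years:      {p_survive(228 * 8760):.1%}")   # ~37%

    So even the model's own math only promises that most drives outlive the 3-year warranty; it never claimed a 228-year life for any single drive.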

    Want something solid?
    Go for server HW.

    Want something cheap?
    Hit Walmart...

    Want something fast?
    The nearest 1337-shop...

    Want something fast AND solid?
    Go for EXPENSIVE server gear...

    A HP MSA2324fc, 24 x 600 GB 10K RPM dual-port SAS HDDs (I don't think they're available larger than 146 GB in 15K yet) and a fibre-channel controller card in your PC should do plenty...