[ale] motherboard RAID versus software RAID
Greg Freemyer
greg.freemyer at gmail.com
Fri Jan 20 16:49:24 EST 2006
On 1/20/06, James P. Kinney III <jkinney at localnetsolutions.com> wrote:
> On Fri, 2006-01-20 at 16:10 -0500, John Wells wrote:
> > Guys,
> >
> > I've been running software RAID on one of my servers for some time. It
> > works very well.
> >
> > I'm now out of space, and am planning to add two additional SATA 250 GB
> > drives tonight as an additional RAID 1 array. I also recently upgraded
> > the motherboard to an ASUS A7V-600X. The motherboard has a RAID option for
> > SATA drives...as I've never owned any serial ATAs before it's gone unused.
> >
> > Would you recommend going with Linux's software RAID for these two drives,
> > or should I go with the motherboard's RAID? I have no idea of the quality
> > of the MB raid, and I suspect Linux's software RAID might offer more
> > flexibility, but wanted to hear opinions from the group.
>
> Hardware RAID has a slight speed advantage. But it has always been
> outweighed for me by the flexibility and availability of software RAID.
>
> Here's why:
>
> Hardware RAID. You have a RAID5 3-drive system. Drive B fails. You
> replace it and it automatically rebuilds the array. Good.
>
> Software RAID. Same setup. Same problem. When the drive is replaced, it
> automatically rebuilds the array. You can control the rebuild speed and
> availability of the array data.
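[For reference, md exposes the rebuild throttles in /proc and mdadm
handles the disk swap. A minimal sketch, assuming an existing array at
/dev/md0 and a replacement partition at /dev/sdb1 (both device names
hypothetical, substitute your own):

    # watch rebuild progress
    cat /proc/mdstat
    # throttle the rebuild (KB/sec per device) so the array stays usable
    echo 5000  > /proc/sys/dev/raid/speed_limit_min
    echo 50000 > /proc/sys/dev/raid/speed_limit_max
    # add the replacement disk; md starts rebuilding automatically
    mdadm /dev/md0 --add /dev/sdb1
]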
>
> Hardware RAID. The controller fails. That model is no longer available.
> Buy a new model. It doesn't recognize the existing array, since the
> array was never initialized on that controller. BAD DAY. You must use
> specialized RAID extraction/recovery tools to create a virtual drive
> image to hold the data temporarily, wipe the drives, create new RAID5
> stripes, and copy the image back to the drive array. Total time on a
> 40GB filesystem: 24-28 hours.
>
> Software RAID. No hardware component to fail. Can easily copy the
> relevant files and configuration onto a floppy if needed (USB drive more
> likely) for backup. Existing hard drives can be cloned and installed
> into a secondary box. If the main box has a motherboard issue, pull the
> drives, put them in a new box, edit and reinstall the config files,
> reboot, and go back to work. Downtime: 1-2 hours (including google
> searches for details on how to do this).
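[The reassembly on the new box is mostly two mdadm commands. A minimal
sketch, assuming the distro keeps its config in /etc/mdadm.conf (some
use /etc/mdadm/mdadm.conf) and the array comes up as /dev/md0:

    # scan the drives for md superblocks and print matching ARRAY lines
    mdadm --examine --scan
    # record them in the config, then assemble everything listed there
    mdadm --examine --scan >> /etc/mdadm.conf
    mdadm --assemble --scan
    # mount and go back to work
    mount /dev/md0 /mnt
]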
>
> BOFH RAID method: Use Software RAID but document it as hardware RAID.
> Unplug one drive's power cord. Do this during end-of-month financials.
> Remind accounting staff of the need to upgrade the backup system and how
> they disapproved the funding request last quarter for the upgrade.
> Inform the CFO that the recovery process will take about 18 hours since
> the tapes are stored off-site for Sarbanes-Oxley compliance. Pop off to
> the pub for 6 hours. Plug the power back in on the "damaged" drive.
> Telinit to runlevel 1 for the 20 minutes it takes to rebuild the drives.
> Bring the system back up to full functionality. Report to the CEO that
> even though the requested funding was denied by the CFO, you were able
> to "streamline the process" by "multitasking the drive array" and it
> would have been faster if that new backup system had been used since it
> supports multitasking natively. Accept the sincere thanks from the CEO
> for the diligent efforts and go back to the pub to finish the darts game
> before the PFY gets too smashed to remember he's buying the next round.
>
> >
> > Thanks, as always!
> > John
> >
Interesting comparison, but you left out fakeRAID. (Seriously, this is
not a joke.)
With fakeRAID you get the worst of both worlds. The setup/config is
managed by a custom BIOS that lives on the controller, which can go out
of production. The RAID I/O activity itself is implemented in a custom
OS driver that uses lots of CPU.
Most Promise cards and low-end motherboard RAID controllers use fakeRAID.
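A quick way to check what you have: fakeRAID metadata shows up to the
dmraid tool, while plain drives have none. A sketch, assuming dmraid is
installed:

    # list any BIOS/fakeRAID metadata sets found on the disks
    dmraid -r
    # identify the controller itself
    lspci | grep -i raid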
For a list of some SATA fakeRAID controllers see:
http://linux.yyz.us/sata/faq-sata-raid.html
If you don't know, Jeff Garzik is the author of the above page and is
the libata (generic SATA driver) maintainer for the 2.4/2.6 kernels.
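For the original question, setting the two new drives up as a Linux
software RAID1 would look something like the following sketch (device
and mount-point names are hypothetical; substitute partitions on the
two new 250 GB SATA drives):

    # create a RAID1 mirror from the two new partitions
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # put a filesystem on it and mount
    mkfs.ext3 /dev/md1
    mount /dev/md1 /data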
Greg
--
Greg Freemyer
The Norcross Group
Forensics for the 21st Century