<br><br><div class="gmail_quote">On Thu, Apr 22, 2010 at 2:44 PM, Greg Freemyer <span dir="ltr"><<a href="mailto:greg.freemyer@gmail.com">greg.freemyer@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
> Jim,
>
> Linux mdraid supports 3-drive mirrors. You should see write performance
> similar to a 2-drive mirror, plus improved read performance.
>
> So if you're using mdraid, you might want to consider RAID 10 built from
> 3-drive mirrors, and maybe one hot spare for the whole array.

I like this even better!
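
For reference, a minimal sketch of that layout with mdadm (sdb through sdk
are hypothetical device names; adjust for your hardware):

  # Three 3-way mirrors
  mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
  mdadm --create /dev/md3 --level=1 --raid-devices=3 /dev/sdh /dev/sdi /dev/sdj

  # Stripe across the three mirrors (RAID 1+0)
  mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3

  # Add one hot spare to md1; giving all three mirrors the same
  # spare-group in /etc/mdadm.conf lets a running "mdadm --monitor"
  # float it to whichever mirror loses a disk:
  #   ARRAY /dev/md1 ... spare-group=mirrors
  mdadm --add /dev/md1 /dev/sdk

(md's native raid10 personality with --layout=n3 over nine disks should be
roughly equivalent, but the nested form keeps the 3-way mirrors explicit.)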
>
> Greg
>
> On Thu, Apr 22, 2010 at 2:26 PM, Jim Kinney <jim.kinney@gmail.com> wrote:
>> RAID 5 was an invention for a time when hard drives cost a crap-ton of
>> money. The pain of losing a drive in a RAID 5 array is just no longer
>> balanced by the cost of the drives. If a 1TB drive is only $100, it's
>> frankly dirt cheap now to have a hot spare in a 4-active-drive RAID 10
>> system. The recovery is much easier and faster when parity doesn't have
>> to be recalculated for every stinking block on the drive(s).
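>>
>> A quick way to watch that recovery happen (array names will vary):
>>
>>   cat /proc/mdstat          # state and rebuild progress of all md arrays
>>   mdadm --detail /dev/md0   # per-array view, including the spare's role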
>>
>> My ideal rig: a striped array for speed, composed of mirrored triplets
>> (2 active drives plus one hot spare per active pair).
>>
>> On Thu, Apr 22, 2010 at 1:05 PM, Greg Clifton <gccfof5@gmail.com> wrote:
>>>
>>> Shifting focus to the hardware side of the equation: this thread
>>> concentrates on software-generated corruption issues, but I have some
>>> hardware-related questions. First, with RAIDed hard drives, are any file
>>> systems more or less likely to cause (or minimize) corruption of the
>>> array, and if so, why? Second, Greg F (and others) have commented on NOT
>>> using RAID 5 (and RAID 6), especially with large hard drives. It looks
>>> like 1 or 2 TB hard drives will soon be "standard issue" for everything
>>> but notebook computers. So does that mean RAID should be considered
>>> 'dead,' except for 0, 1, and 10? Third, would SSDs solve the
>>> failure-from-bad-sectors issue with HDDs and thus be safe for RAID 5/6
>>> implementations?
>>>
>>> On Thu, Apr 22, 2010 at 9:41 AM, Ed Cashin <ecashin@noserose.net> wrote:
>>>>
>>>> On Wed, Apr 21, 2010 at 9:34 PM, Doug McNash <dmcnash@charter.net> wrote:
>>>> ...
>>>>> Does anyone out there use xfs? How about a suggestion for a stable
>>>>> replacement?
>>>>
>>>> If you use the xfs in the mainline kernel, it's a crapshoot because of
>>>> the amount of churn in the code, but if you use a long-term kernel like
>>>> 2.6.16.y or 2.6.27.y, or the kernels maintained by distros, then it
>>>> ought to be stable (as long as the distro has enough of a user base for
>>>> other people to find the xfs bugs first).
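>>>>
>>>> A quick sanity check of what you're actually running (xfs_info assumes
>>>> xfsprogs is installed):
>>>>
>>>>   uname -r                 # which kernel you're on
>>>>   mount -t xfs             # list mounted xfs filesystems
>>>>   xfs_info /mount/point    # geometry of a given xfs filesystem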
>>>>
>>>> --
>>>> Ed Cashin <ecashin@noserose.net>
>>>> http://noserose.net/e/
>>>> http://www.coraid.com/
>>
>> --
>> James P. Kinney III
>> Actively in pursuit of Life, Liberty and Happiness
>
> --
> Greg Freemyer
> Head of EDD Tape Extraction and Processing team
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> CNN/TruTV Aired Forensic Imaging Demo -
> http://insession.blogs.cnn.com/2010/03/23/how-computer-evidence-gets-retrieved/
>
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
>
> _______________________________________________
> Ale mailing list
> Ale@ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo

--
James P. Kinney III
Actively in pursuit of Life, Liberty and Happiness