<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">I appreciate all the responses.<br>
<br>
So I guess what I'm hearing is 1) get over my HW RAID hate, RAID5
the lot using the PERC, and slice and dice with LVM, or 2) forgo
RAID altogether, use the PERC to make some kind of "appended" 2TB
volume, and slice and dice with LVM. I'm willing to give up some
effective space to not have a dead box if a drive fails; just
because it's a lab machine doesn't mean people won't be counting
on it. I'm okay with that as long as I have a way to detect a drive
failure flagged by the PERC. <br>
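For what it's worth, here's roughly what I picture for the monitoring
and the LVM layer on top of whatever the PERC presents - every device
name, VG/LV name, and size below is just a guess on my part, and the
MegaCli/storcli/smartctl bits assume this PERC generation is one the
LSI tools can talk to:<br>
<pre wrap="">
# Watch the physical drives through the controller (assuming an
# LSI-based PERC; which tool applies varies by controller generation):
MegaCli64 -PDList -aALL | grep -i 'firmware state'
storcli64 /c0 /eall /sall show

# SMART data can also be pulled through the controller
# (the device node and slot number here are hypothetical):
smartctl -a -d megaraid,0 /dev/sda

# LVM on top of the single virtual disk the PERC presents
# (assumed here to be /dev/sdb):
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -L 200G -n var  vg_data
lvcreate -L 500G -n home vg_data   # leave the rest unallocated for growth

# Growing a volume later, online (-r also grows the filesystem):
lvextend -r -L +200G /dev/vg_data/home
</pre>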
<br>
<br>
<br>
On 9/24/15 7:27 AM, Solomon Peachy wrote:<br>
</div>
<blockquote cite="mid:20150924112717.GB25334@shaftnet.org"
type="cite">
<pre wrap="">On Wed, Sep 23, 2015 at 11:42:37PM -0400, Jeff Hubbs wrote:
</pre>
<blockquote type="cite">
<pre wrap=""> * I really dislike hardware RAID cards like Dell PERC. If there has to
be one, I would much rather set it to JBOD mode and get my RAIDing
done some other way.
</pre>
</blockquote>
<pre wrap="">
There's a big difference between "hardware" RAID (aka fakeRAID) and real
hardware RAID boards. The former are the worst of both worlds, but the
latter are the real deal.

In particular, the various Dell PERC RAID adapters are excellent, fast,
and highly reliable, with full native Linux support for managing them.

Strictly speaking you'll end up with more flexibility going the JBOD
route, but you're going to lose both performance and reliability versus
the PERC.

(for example, what happens if the "boot" drive fails? Guess what, your
system is no longer bootable with the JBOD, but the PERC will work just
fine)
</pre>
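</blockquote>
Fair point about the boot drive. If I did go the JBOD route, I assume
I'd end up covering that myself with mdadm - something like the sketch
below, where the partition layout and device names are hypothetical: a
small RAID1 across every drive for /boot so any one of them can bring
the box up, and RAID5 across the rest.<br>
<pre wrap="">
# Five drives sda-sde, each carved into a small partition 1 for /boot
# and a large partition 2 for data (layout is hypothetical):

# /boot mirrored across all five drives
mdadm --create /dev/md0 --level=1 --raid-devices=5 /dev/sd[a-e]1

# everything else as RAID5
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[a-e]2

# record the arrays so they assemble at boot
# (the config file path varies by distro)
mdadm --detail --scan >> /etc/mdadm.conf

# put the bootloader on every drive so losing one doesn't matter
# (grub-install vs. grub2-install depends on the distro)
for d in /dev/sd[a-e]; do grub-install "$d"; done
</pre>
<blockquote cite="mid:20150924112717.GB25334@shaftnet.org" type="cite">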
<blockquote type="cite">
<pre wrap=""> * I foresee I will have gnashing of teeth if I set in stone at install
time the sizes of the /var and /home volumes. There's no telling how
much or how little space PostgreSQL might need in the future and you
know how GRAs are - give them disk space and they'll take disk space. :)
</pre>
</blockquote>
<pre wrap="">
You're not talking about much space here; only 5*400GB == 2TB of raw
space, going down to 1.6TB by the time the RAID5 overhead is factored
in. Just create a single filesystem across the whole thing and be done
with it.

FWIW, if you're after reliability I'd caution against btrfs, and instead
recommend XFS -- and make sure the system is plugged into a UPS. No
matter what, be sure to align the partition and filesystem with the
block/stripe sizes of the RAID setup.

(The system I'm typing this on has about 10TB of XFS RAID5 filesystems
hanging off a 3ware 9650 card, plus a 1TB RAID1 for the OS)
- Solomon
</pre>
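</blockquote>
For the alignment piece, I gather it would look something like the
lines below - the stripe-unit value is only a placeholder, since the
real number has to come from however the PERC virtual disk ends up
configured, and the volume and device names carry over from my
hypothetical LVM layout above:<br>
<pre wrap="">
# A 5-drive RAID5 has 4 data-bearing disks; if the controller's stripe
# element were 64 KiB, XFS would be told about it like so:
mkfs.xfs -d su=64k,sw=4 /dev/vg_data/home

# Partition alignment (if the virtual disk gets partitioned at all)
# can lean on parted's optimal alignment:
parted -a optimal /dev/sdb mkpart primary 0% 100%
</pre>
<blockquote cite="mid:20150924112717.GB25334@shaftnet.org" type="cite">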
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Ale mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Ale@ale.org">Ale@ale.org</a>
<a class="moz-txt-link-freetext" href="http://mail.ale.org/mailman/listinfo/ale">http://mail.ale.org/mailman/listinfo/ale</a>
See JOBS, ANNOUNCE and SCHOOLS lists at
<a class="moz-txt-link-freetext" href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a>
</pre>
</blockquote>
<br>
</body>
</html>