<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 07/25/2014 10:47 AM, Jim Kinney
wrote:<br>
</div>
<blockquote
cite="mid:CAEo=5PwWOacZ=6iozDB2+jupL1_+anGgK0JYVz4svN_MG36wPQ@mail.gmail.com"
type="cite">
<pre wrap="">Drive setup:
Use the SSD for /boot, /, and swap, and put /home on the 1TB spinning
disk. So you will need to do a custom tweak on the drive setup. Use twice
the RAM size for swap up to 2GB of RAM. Between 2GB and 3GB of RAM, use
1.5x the RAM size. Above 3GB of RAM, use the same size as the RAM for swap.
Make a 500MB partition for /boot and format it as XFS. Use the rest of the
SSD for LVM physical. Create a LVM logical and carve it into / and swap.
Format / with XFS. Make a single partition on the HD and format it XFS
mounted in /home.
KISS principle in action. Unless you really want to manipulate partition
sizes by making the HD an LVM physical drive so you can add/remove
partitions on the fly...</pre>
</blockquote>
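Jim's swap-sizing rule quoted above can be sketched as a small shell function. (Purely illustrative; the helper name is made up, and the 2GB/3GB breakpoints are taken straight from his description.)<br>

```shell
# Hypothetical helper: recommended swap size, per the rule quoted above.
# Takes RAM in megabytes, prints recommended swap in megabytes.
swap_size_mb() {
    local ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        echo $(( ram_mb * 2 ))       # up to 2GB RAM: twice the RAM size
    elif [ "$ram_mb" -le 3072 ]; then
        echo $(( ram_mb * 3 / 2 ))   # 2-3GB RAM: 1.5x the RAM size
    else
        echo "$ram_mb"               # above 3GB RAM: same size as RAM
    fi
}

swap_size_mb 1024   # -> 2048
swap_size_mb 4096   # -> 4096
```
<br>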
Alright, I promise I won't harp on you about btrfs (on this thread
anyway) after this one... :-D Actually, this is less btrfs and more
history, but...<br>
<br>
KISS says to me, "as few moving parts as possible".<br>
<br>
Once upon a time, there was "original UNIX", which gave us multiple
filesystems mounted in a tree under a single root. This was better
than one monolithic, global filesystem, because storage could be
added to the system and simply grafted in where it was needed. Of
course, this had the problem of the system administrator having to
know how much to put where, and when things ran low, devices had to
be added, new filesystems created, possibly a few things moved from
one filesystem to another, and then the new filesystem was grafted
into the tree permanently. Static, administrator-controlled
storage.<br>
<br>
This was the way that it was for a long time. Then there were
enhancements made: multiple filesystem types, suited for different
purposes or operating systems.<br>
<br>
The next major enhancement was RAID. But that's more or less still
"static", in that you're just treating a collection of drives as a
single redundant store (we hope) for a single filesystem. It's
statically configured, and if you guessed requirements wrong, you <b>still</b>
have to spend a lot of time doing manual reconfiguration to match the
new setup.<br>
<br>
Then we got dynamic storage technology, with LVM and LVM2. This
allowed us to make the block storage dynamic (but not [necessarily]
the filesystem). So grafting new filesystems into the tree became
easier, and at this point, we didn't have to worry so much about
getting things "right" so much as ensuring that we had enough spare
space to allocate later when we invariably got things wrong—usually,
of course, because requirements aren't delivered half as accurately
as they were when shit was real expensive.<br>
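The LVM workflow described above looks roughly like this. (An illustrative command sketch, not a transcript; the volume group, LV, and mount point names are hypothetical, and these commands need root and real block devices.)<br>

```shell
# Carve a new logical volume out of spare space in the pool
# and graft it into the tree.
lvcreate -L 20G -n projects vg0
mkfs.xfs /dev/vg0/projects
mount /dev/vg0/projects /srv/projects

# Later, when we invariably guessed wrong: grow the LV and the
# filesystem in one step (-r resizes the filesystem too).
lvextend -r -L +10G /dev/vg0/projects
```
<br>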
<br>
Now we have two major contenders for filesystems that enable <i>truly
dynamic capability</i>—btrfs on Linux and ZFS on BSD and Solaris.
With these filesystems, life is much easier: start with X storage,
make a single pool which all "filesystems" share, and when space
runs low, add more space and tack it on. No more grafting. Plus!
When hard disks fail (and we all know they will!), you can <i>actually
shrink the filesystem</i> if you can't replace the drive Right
Now. Usually this takes some time to rebalance, but it beats the
pants off of "backup, restore, check all data file integrity".<br>
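For btrfs specifically, the grow-and-shrink dance just described is a couple of commands. (Device paths and the mount point are hypothetical; these need root and a real btrfs pool. ZFS has analogous zpool commands, though shrinking there is more limited.)<br>

```shell
# Space runs low: tack another device onto the pool,
# then spread existing data across it.
btrfs device add /dev/sdc /mnt/pool
btrfs balance start /mnt/pool

# A disk is dying and there's no replacement Right Now:
# shrink the pool. btrfs migrates the data off the device
# and rebalances; it takes a while, but stays online.
btrfs device remove /dev/sdb /mnt/pool
```
<br>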
<br>
KISS to me says "use btrfs or ZFS, whichever the operating system
will support natively". Why?<br>
<br>
It's a single component that provides advanced functionality: one
moving part, from the point of view of both the system administrator
and the system programmer, that handles the management of storage
pools and enables rapid backup, near-100% application uptime, and
unprecedented agility in dealing with the limitations of the
hardware that we're all forced to use to run our systems. With
either system you can grow and shrink. With either system,
redundancy occurs on the filesystem object level, not the platter or
disk level. With either system, you are able to create new
"volumes" (not quite whole, independent filesystems, but independent
filesystem roots) and destroy them dynamically at runtime. With
either system, you have data integrity assurances and online,
automatic repair mechanisms which are unavailable without special,
high-end hardware.<br>
<br>
It's my understanding that Red Hat intends to use btrfs as a default
sometime in the 7.x series, though I can't point to a definitive
source at the moment on that. I know they said it wouldn't happen
in 7.0, but I seem to recall reading some things that indicated that
they'd be considering doing so in e.g., 7.2-7.5ish, simply because
that's where everything is moving in terms of enabling their
customers to be as agile as they can be.<br>
<br>
Block devices+volume management+filesystem is workable. But block
devices+volume-aware filesystem is one less moving part and easier
for administrators. And the ability to, say, take an 80 GB file and
very rapidly create a second 80 GB file that differs from it by only
a few megabytes is something which can be used by many
applications: hypervisors, transactional data transformation tools,
and so forth.<br>
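That cheap "big file with a few megabytes changed" trick is just a copy-on-write clone; on btrfs it's one cp invocation. (A small-scale sketch: 4 MB stands in for 80 GB so it runs anywhere, and <tt>--reflink=auto</tt> falls back to a plain copy on filesystems without CoW support.)<br>

```shell
# Make a stand-in "big" file (4 MB here instead of 80 GB).
dd if=/dev/zero of=big.img bs=1M count=4 status=none

# Clone it. On btrfs this is near-instant and shares all extents;
# only the blocks changed below get their own space.
cp --reflink=auto big.img big-clone.img

# Change a few bytes in the clone; the original is untouched.
printf 'VM-specific delta' | dd of=big-clone.img bs=1 conv=notrunc status=none
```
<br>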
<br>
Now, I cannot speak much for ZFS, as I've only interacted with it a
few times and never in extended scenarios. I can say a great deal
for btrfs, especially in the 3.x kernels. Honestly, I started using
it after a ton of testing, but I mostly switched to it because I
needed its functionality. I did daily backups on TONS OF DISKS
before I started trusting it and going back to a "normal" backup
regimen. Today I wouldn't install on anything else—I spend so much
less time managing filesystem shit than I used to, and that's
important to me. I really rather enjoy being able to spend more
time getting useful shit done.<br>
<br>
Of course, every now and again I get a project where the most
recent technology involved is plain FTP, Windows 2003, and other
similar vintage crap. And on those, my overhead goes THROUGH THE
BLOODY ROOF, for no reason other than I have to go back to doing
tons of things by hand that I don't have to do by hand when I have
modern systems on both ends to work with. (Even installing Cygwin on
Win2k3 and OpenSSH on top of that means I can at least pretend
that it's a halfway reasonable system, and use modern tools. That's
what I do with the Win2k3 server that I am forced to administer. I
just cannot bring myself to install a legacy, non-TLS capable FTP
dæmon^WService on any operating system, even if that's the highest
technology built into it... OK, so sometimes I blatantly violate
KISS in order to make life easier—it's the exception that proves the
rule).<br>
<br>
I haven't learned to say "no" to such things yet. Money, money,
money!<br>
<br>
— Mike<br>
</body>
</html>