<div dir="ltr"><div>It started out on Debian 5 (Lenny) and has tracked with Debian stable. I'm not sure which kernels those were. The file system was EXT4.<br><br></div>Jeff<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 28, 2016 at 4:56 PM, Ed Cashin <span dir="ltr"><<a href="mailto:ecashin@noserose.net" target="_blank">ecashin@noserose.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">What filesystem and kernel versions were you using during that time? If all those transitions were made using the same filesystem, I'm impressed by the quality of the filesystem code.<div><br></div><div>In the past I have seen transitions like that tickle latent bugs in the filesystem code (or device mapper code or md code or block layer or virtual memory subsystem). I usually create a fresh filesystem and rsync the contents over. Partly it's to get the free defragmentation, but also it's for fear of bugs.</div><div><br></div><div><br></div></div><div class="gmail_extra"><div><div class="h5"><br><div class="gmail_quote">On Thu, Apr 28, 2016 at 4:31 PM, Jeff Jansen <span dir="ltr"><<a href="mailto:bamakojeff@gmail.com" target="_blank">bamakojeff@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Using LVM costs you almost nothing and it offers tremendous advantages. Even if you don't need those advantages now, having them available for free (or nearly) is a great asset as a sysadmin.<br><br></div>Over five years we had a backup system that began as a single hard drive in a single machine, which became a RAID array in a single machine, which became two RAID arrays in two machines connected by DRBD and a crossover cable, which became multiple RAID arrays in multiple HA machines across a WAN. 
<br><br>Having LVM as part of the underlying architecture made all those changes, while not "easy," much easier than they would have been without it. If you use LVM but then never change the partitioning on the 8 TB drive, you'll never know it's there, and it will never cause you any trouble. But if you do ever decide to make changes, you will be immensely grateful for the possibilities it opens up for you. <br><br></div><div>HTH<span><font color="#888888"><br></font></span></div><span><font color="#888888"><br></font></span></div><span><font color="#888888">Jeff<br></font></span></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 28, 2016 at 10:34 AM, Todor Fassl <span dir="ltr"><<a href="mailto:fassl.tod@gmail.com" target="_blank">fassl.tod@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">But, Jim, what I'm asking is why bother carving it up at all? What benefit is there in that?<br>
<br>
I get that if you use LVM and ext4 file systems, you can resize the partitions. But if I made the whole 8T one big partition, I'd never have any reason to resize it.<br>
<br>
PS: One thing I forgot to mention that may be of critical importance is that quotas are enforced on this drive.<div><div><br>
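Worth noting: quotas don't by themselves require separate partitions. ext4 supports user/group quotas on one big filesystem, and XFS adds project quotas that can cap whole directory trees (faculty vs. grads) within a single filesystem. A hypothetical sketch with XFS project quotas; the mount point, paths, project IDs, and limits are all made-up examples, and these commands need root on a real XFS filesystem:

```shell
# XFS project quotas: per-directory-tree limits on a single filesystem.
# Hypothetical device, paths, and limits -- adjust to the real layout.
mount -o prjquota /dev/sdb /srv/home      # prjquota must be enabled at mount time

# Map directory trees to project IDs, then name the projects.
echo "10:/srv/home/faculty" >> /etc/projects
echo "11:/srv/home/grads"   >> /etc/projects
echo "faculty:10" >> /etc/projid
echo "grads:11"   >> /etc/projid

# Initialize each project and set hard block limits.
xfs_quota -x -c 'project -s faculty' /srv/home
xfs_quota -x -c 'project -s grads'   /srv/home
xfs_quota -x -c 'limit -p bhard=3t faculty' /srv/home
xfs_quota -x -c 'limit -p bhard=4t grads'   /srv/home
```

That gives the "faculty can't eat the grads' space" property without fixed partition walls, and the limits can be changed with one command instead of a resize.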
<br>
<br>
<br>
<br>
On 04/28/2016 06:16 AM, Jim Kinney wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I have a large drive array for my department. I use LVM to carve it up. I<br>
leave a huge chunk unallocated so I can extend logical partitions as<br>
required. That dodges the need to shrink existing partitions and allows XFS<br>
as the filesystem.<br>
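That leave-room-to-grow approach might look something like the following. The device, volume group, and logical volume names here are hypothetical, and everything below needs root on real hardware:

```shell
# Hypothetical device/VG/LV names. Allocate only part of the 8T now;
# leave the rest of the volume group unallocated for later growth.
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -L 2T -n lv_home vg_data         # only 2T of the 8T for now
mkfs.xfs /dev/vg_data/lv_home
mount /dev/vg_data/lv_home /srv/home

# Later, grow online out of the unallocated pool:
lvextend -L +1T /dev/vg_data/lv_home
xfs_growfs /srv/home                      # XFS grows while mounted; it cannot shrink
```

Because XFS can grow but never shrink, starting small and extending on demand is the move that keeps every future option open.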
On Apr 28, 2016 3:27 AM, "Todor Fassl" <<a href="mailto:fassl.tod@gmail.com" target="_blank">fassl.tod@gmail.com</a>> wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
With respect to your question about using LVM ... I guess that was sort of<br>
my original question. If I just allocate the whole 8T to one big partition,<br>
I'd have no reason to use LVM. But I can see the need to use LVM if I<br>
continue with the scheme where I split the drive into partitions for<br>
faculty, grads, and staff.<br>
<br>
On 04/27/2016 02:27 PM, Jim Kinney wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
If you need de-dup, ZFS is the only choice and be ready to throw a lot<br>
of RAM into the server so it can do its job. I was looking at dedupe<br>
on 80TB and the RAM hit was 250GB.<br>
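That figure is in the same ballpark as the usual back-of-envelope estimate: roughly 320 bytes of in-core dedup-table (DDT) entry per 128 KiB block. Both constants here are rules of thumb, not measurements from the original poster's array:

```shell
# Rough ZFS dedup-table (DDT) RAM estimate; constants are rules of thumb.
pool_tib=80            # pool size in TiB
block_kib=128          # default ZFS recordsize in KiB
bytes_per_entry=320    # approximate in-core DDT entry size in bytes

blocks=$(( pool_tib * 1024 * 1024 * 1024 / block_kib ))
ddt_gib=$(( blocks * bytes_per_entry / 1024 / 1024 / 1024 ))
echo "approx DDT RAM for ${pool_tib} TiB: ${ddt_gib} GiB"
```

For 80 TiB that works out to about 200 GiB for the DDT alone, before the ARC and everything else wants its share, which matches the 250 GB observation above.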
XFS vs EXT4.<br>
XFS is the better choice.<br>
XFS does everything EXT4 does except shrink. It was designed for (then<br>
very) large files (video) and works quite well with smaller files. It's<br>
as fast as EXT4 but will handle larger files and many, many more of<br>
them. I want to say exabytes but not certain. Petabytes are OK<br>
filesystem sizes with XFS right now. I have no experience with a<br>
filesystem of that size but I expect there to be some level of metadata<br>
performance hit.<br>
If there's the slightest chance of a need to shrink a partition (You<br>
_are_ using LVM, right?) then XFS will bite you: you'll have to relocate<br>
the data, tear down and rebuild the filesystem, then move the data back. Not a fun process.<br>
A while back, an install onto a 24 TB RAID6 array refused to budge<br>
using EXT4. While EXT4 is supposed to handle that kind of size, it had<br>
bugs, and some planned expansion features were still unimplemented; those were blockers.<br>
I used XFS instead and never looked back. XFS has a very complete<br>
toolset for maintenance/repair needs.<br>
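That toolset includes, among others, the following (illustrative invocations; the device and mount point names are hypothetical, and these need root):

```shell
# Common XFS maintenance tools (hypothetical device and mount point).
xfs_info /srv/home                     # show filesystem geometry and feature flags
xfs_repair -n /dev/vg_data/lv_home     # check only, no modifications (fs must be unmounted)
xfs_repair /dev/vg_data/lv_home        # actual repair pass
xfs_fsr /srv/home                      # online defragmentation/reorganizer
xfs_db -r /dev/vg_data/lv_home         # read-only on-disk structure inspection
```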
On Wed, 2016-04-27 at 13:54 -0500, Todor Fassl wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I need to set up a new file server on a virtual machine with an<br>
attached<br>
iSCSI array. Two things I am obsessing over -- 1. Which file system<br>
to<br>
use and 2. Partitioning scheme.<br>
<br>
The iSCSI array is attached to an Ubuntu 16.04 virtual machine. To<br>
tell<br>
you the truth, I don't even know how that is done. I do not manage<br>
the<br>
VMware cluster. In fact, I think the Dell technician actually did<br>
that<br>
for us. It looks like a normal 8T hard drive on /dev/sdb to the<br>
virtual<br>
machine. The iSCSI array is configured for RAID6, so from what I<br>
understand, all I have to do is choose a file system appropriate for<br>
my<br>
end user's needs. Even though the array looks like a single hard<br>
drive,<br>
I don't have to worry about software RAID or anything like that.<br>
<br>
Googling shows me no clear advantage to ext4, xfs, or zfs. I haven't<br>
been able to find a page that says any one of those is an obvious<br>
choice<br>
in my situation. I have about 150 end-users with nfs mounted home<br>
directories. We also have a handful of people using Windows so the<br>
file<br>
server will have samba installed. It's a pretty good mix of large<br>
files<br>
and small files since different users are doing drastically<br>
different<br>
things. There are users who never do anything but read email and<br>
browse<br>
the web and others doing fluid dynamic simulations on small<br>
supercomputers.<br>
<br>
Second thing I've been going back and forth on in my own mind is<br>
whether<br>
to do away with separate partitions for faculty, staff, and grad<br>
students. My co-worker says that's probably an artifact of the days<br>
when<br>
partition sizes were limited. That was before my time here. The last<br>
2<br>
times we rebuilt our file server, we just maintained the<br>
partitioning<br>
scheme and just made the sizes larger. But sometimes the<br>
faculty<br>
partition got filled up while there was still plenty of space left<br>
on<br>
the grad partition. Or it might be the other way around. If we<br>
munged<br>
them all together, that wouldn't happen. The only downside I see to<br>
doing that is losing the isolation: today, if the faculty partition gets<br>
hosed, the grad partition isn't affected. But that seems like a pretty<br>
arbitrary<br>
choice. We could just assign users randomly to one partition or<br>
another.<br>
When you're setting up a NAS for use by a lot of users, is it<br>
considered<br>
best practice to split it up to limit the damage from a messed up<br>
file<br>
system? I mean, hopefully, that never happens anyway, right?<br>
<br>
Right now, I've got it configured as one gigantic 8T ext4 partition.<br>
But<br>
we won't be going live with it until the end of May so I have plenty<br>
of<br>
time to completely rebuild it.<br>
<br>
<br>
<br>
_______________________________________________<br>
Ale mailing list<br>
<a href="mailto:Ale@ale.org" target="_blank">Ale@ale.org</a><br>
<a href="http://mail.ale.org/mailman/listinfo/ale" rel="noreferrer" target="_blank">http://mail.ale.org/mailman/listinfo/ale</a><br>
See JOBS, ANNOUNCE and SCHOOLS lists at<br>
<a href="http://mail.ale.org/mailman/listinfo" rel="noreferrer" target="_blank">http://mail.ale.org/mailman/listinfo</a><br>
<br>
</blockquote>
<br>
</blockquote>
--<br>
Todd<br>
<br>
</blockquote>
<br>
<br>
<br>
<br>
</blockquote>
<br>
-- <br>
Todd<br>
</div></div></blockquote></div><br></div>
</div></div>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br></div></div><div><div dir="ltr"> Ed Cashin <<a href="mailto:ecashin@noserose.net" target="_blank">ecashin@noserose.net</a>></div></div>
</div>
<br></blockquote></div><br></div>