<p dir="ltr">I have a large drive array for my department. I use LVM to carve it up. I leave a huge chunk unallocated so I can extend logical partitions as required. That dodges the need to shrink existing partitions and allows XFS as filesystem.</p>
<div class="gmail_quote">On Apr 28, 2016 3:27 AM, "Todor Fassl" <<a href="mailto:fassl.tod@gmail.com">fassl.tod@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">With respect to your question about using LVM ... I guess that was sort of my original question. If I just allocate the whole 8T to one big partition, I'd have no reason to use LVM. But I can see the need to use LVM if I continue with the scheme where I split the drive into partitions for faculty, grads, and staff.<br>
<br>
On 04/27/2016 02:27 PM, Jim Kinney wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
If you need de-dup, ZFS is the only choice, and be ready to throw a lot of RAM into the server so it can do its job. I was looking at dedup on 80TB and the RAM hit was 250GB.<br>
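For anyone sizing that: you can estimate the dedup table before committing to it; a sketch, with "tank" as a placeholder pool name:<br>
<pre>
# Simulate dedup on an existing pool: prints a DDT histogram and
# an estimated dedup ratio without actually enabling dedup.
zdb -S tank

# Rough RAM math: each dedup-table entry is on the order of
# 320 bytes and covers one block (typically 128K), so for 80TB:
#   80e12 / 128e3 blocks * 320 bytes  =~ 200GB of RAM,
# the same ballpark as the 250GB figure above.
</pre>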
XFS vs. EXT4: XFS is the better choice.<br>
XFS does everything EXT4 does except shrink. It was designed for (then very) large files (video) and works quite well with smaller files. It's as fast as EXT4 but will handle larger files and many, many more of them. I want to say exabytes, but I'm not certain; petabytes are OK filesystem sizes with XFS right now. I have no experience with a filesystem of that size, but I expect there to be some level of metadata performance hit.<br>
If there's the slightest chance of a need to shrink a partition (you _are_ using LVM, right?), then XFS will bite you: shrinking means relocating the data off, tearing the filesystem down, rebuilding it, and relocating the data back. Not a fun process.<br>
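Concretely, the shrink dance looks something like this (a sketch only; device, mount, and backup paths are placeholders):<br>
<pre>
# XFS cannot shrink, so "shrinking" means dump, rebuild, restore.
xfsdump -f /backup/home.dump /home        # relocate data off
umount /home
lvreduce -L 2T /dev/vg_data/lv_home       # shrink the underlying LV
mkfs.xfs -f /dev/vg_data/lv_home          # rebuild the filesystem
mount /dev/vg_data/lv_home /home
xfsrestore -f /backup/home.dump /home     # relocate data back
</pre>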
A while back, an install onto a 24 TB RAID6 array refused to budge using EXT4. While EXT4 is supposed to address that kind of size, it had bugs and still-unimplemented expansion features that were blockers. I used XFS instead and never looked back. XFS has a very complete toolset for maintenance/repair needs.<br>
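For example, a non-destructive health check is straightforward (device name is a placeholder):<br>
<pre>
# Check an unmounted XFS filesystem without modifying anything.
umount /home
xfs_repair -n /dev/vg_data/lv_home    # -n = no-modify, report only
# Drop -n to actually repair; xfs_metadump can first snapshot the
# metadata for offline analysis.
xfs_metadump /dev/vg_data/lv_home /tmp/home.metadump
</pre>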
On Wed, 2016-04-27 at 13:54 -0500, Todor Fassl wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I need to set up a new file server on a virtual machine with an attached iSCSI array. Two things I am obsessing over: 1. which file system to use, and 2. the partitioning scheme.<br>
<br>
The iSCSI array is attached to an Ubuntu 16.04 virtual machine. To tell you the truth, I don't even know how that is done; I do not manage the VMware cluster. In fact, I think the Dell technician actually did that for us. It looks like a normal 8T hard drive on /dev/sdb to the virtual machine. The iSCSI array is configured for RAID6, so from what I understand, all I have to do is choose a file system appropriate for my end users' needs. Even though the array looks like a single hard drive, I don't have to worry about software RAID or anything like that.<br>
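You can at least confirm what the VM sees without touching the VMware side; a quick sketch:<br>
<pre>
# Confirm how the iSCSI LUN appears inside the VM.
lsblk -d -o NAME,SIZE,TYPE /dev/sdb
blockdev --getsize64 /dev/sdb    # exact size in bytes
# Since the array does RAID6 itself, /dev/sdb is a plain block
# device here -- no mdadm/software RAID needed on top.
</pre>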
<br>
Googling shows me no clear advantage to ext4, xfs, or zfs. I haven't been able to find a page that says any one of those is an obvious choice in my situation. I have about 150 end users with NFS-mounted home directories. We also have a handful of people using Windows, so the file server will have Samba installed. It's a pretty good mix of large files and small files, since different users are doing drastically different things. There are users who never do anything but read email and browse the web, and others doing fluid dynamic simulations on small supercomputers.<br>
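For concreteness, the export setup in play would look roughly like this (paths and the client network are placeholders):<br>
<pre>
# /etc/exports -- NFS-mounted home directories
/srv/home  10.0.0.0/24(rw,sync,no_subtree_check)

# /etc/samba/smb.conf -- the same tree for the Windows users
[home]
   path = /srv/home
   read only = no
</pre>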
<br>
The second thing I've been going back and forth on in my own mind is whether to do away with separate partitions for faculty, staff, and grad students. My co-worker says that's probably an artifact of the days when partition sizes were limited. That was before my time here. The last 2 times we rebuilt our file server, we just maintained the partitioning scheme and made the sizes larger. But sometimes the faculty partition got filled up while there was still plenty of space left on the grad partition, or it might be the other way around. If we munged them all together, that wouldn't happen. The only downside I see to doing that is losing the isolation: with separate partitions, if the faculty partition gets hosed, the grad partition wouldn't be affected. But that seems like a pretty arbitrary split; we could just assign users randomly to one partition or another. When you're setting up a NAS for use by a lot of users, is it considered best practice to split it up to limit the damage from a messed-up file system? I mean, hopefully, that never happens anyway, right?<br>
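One alternative worth noting (not something raised above): a single filesystem can still cap each group's usage with XFS project quotas; a sketch, with made-up paths and limits:<br>
<pre>
# Cap directory trees on one big XFS filesystem instead of using
# separate partitions (filesystem must be mounted with prjquota).
echo "1:/srv/home/faculty" >> /etc/projects
echo "faculty:1"           >> /etc/projid
xfs_quota -x -c 'project -s faculty' /srv/home
xfs_quota -x -c 'limit -p bhard=3T faculty' /srv/home
</pre>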
<br>
Right now, I've got it configured as one gigantic 8T ext4 partition. But we won't be going live with it until the end of May, so I have plenty of time to completely rebuild it.<br>
<br>
</blockquote></blockquote>
<br>
-- <br>
Todd<br>
</blockquote></div>