[ale] ISCSI array on virtual machine
Todor Fassl
fassl.tod at gmail.com
Wed Apr 27 16:29:38 EDT 2016
With respect to your question about using LVM ... I guess that was sort
of my original question. If I just allocate the whole 8T to one big
partition, I'd have no reason to use LVM. But I can see the need to use
LVM if I continue with the scheme where I split the drive into
partitions for faculty, grads, and staff.
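For what it's worth, a rough LVM sketch of that layout -- the volume group,
volume names, and sizes below are just placeholders, the only real detail
being that the array shows up as /dev/sdb:

    # turn the iSCSI LUN into an LVM physical volume and volume group
    pvcreate /dev/sdb
    vgcreate homevg /dev/sdb
    # per-group volumes, deliberately leaving free space in the VG
    lvcreate -L 3T -n faculty homevg
    lvcreate -L 3T -n grads   homevg
    lvcreate -L 1T -n staff   homevg
    mkfs.xfs /dev/homevg/faculty

The unallocated space in the volume group is what would make it painless
to grow whichever partition fills up first.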
On 04/27/2016 02:27 PM, Jim Kinney wrote:
> If you need de-dup, ZFS is the only choice, and be ready to throw a lot
> of RAM into the server so it can do its job. I was looking at dedupe
> on 80TB and the RAM hit was 250GB.
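> (If you want to gauge the dedup cost on an existing pool before turning
> dedup on, something like the following gives a rough idea; "tank" is just
> a placeholder pool name:
>
>     # simulate dedup and print the dedup-table histogram without enabling it
>     zdb -S tank
>
> Figure, very roughly, a few hundred bytes of RAM per unique block in that
> table, which is how an 80TB pool can work out to a RAM hit in the
> hundreds of GB.)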
> XFS vs EXT4.
> XFS is the better choice.
> XFS does everything EXT4 does except shrink. It was designed for (then)
> very large files (video) and works quite well with smaller files. It's
> as fast as EXT4 but will handle larger files and many, many more of
> them. I want to say exabytes but not certain. Petabytes are OK
> filesystem sizes with XFS right now. I have no experience with a
> filesystem of that size but I expect there to be some level of metadata
> performance hit.
> If there's the slightest chance of a need to shrink a partition (you
> _are_ using LVM, right?) then XFS will bite you and require relocating
> the data, tearing down the filesystem, rebuilding it, and relocating the
> data back. Not a fun process.
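> Growing is the easy direction, though. With LVM underneath, a grow is
> roughly this (volume and mount point names are placeholders):
>
>     # extend the logical volume by 1T, then grow the mounted XFS to match
>     lvextend -L +1T /dev/homevg/faculty
>     xfs_growfs /home/faculty
>
> There is no shrink equivalent; going the other way means backup,
> recreate, restore.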
> A while back, an install onto a 24 TB RAID6 array refused to budge
> using EXT4. While EXT4 is supposed to address that kind of size, it had
> bugs and unimplemented plans for expansion features that were blockers.
> I used XFS instead and never looked back. XFS has a very complete
> toolset for maintenance/repair needs.
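> For example, all of these ship in the standard xfsprogs package (device
> and mount point below are placeholders):
>
>     xfs_info /home            # show the geometry of a mounted filesystem
>     xfs_repair -n /dev/sdb1   # check an unmounted filesystem, no changes
>     xfs_repair /dev/sdb1      # actually repair it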
> On Wed, 2016-04-27 at 13:54 -0500, Todor Fassl wrote:
>> I need to set up a new file server on a virtual machine with an attached
>> ISCSI array. Two things I am obsessing over -- 1. which file system to
>> use and 2. the partitioning scheme.
>>
>> The ISCSI array is attached to an Ubuntu 16.04 virtual machine. To tell
>> you the truth, I don't even know how that is done. I do not manage the
>> VMware cluster. In fact, I think the Dell technician actually did that
>> for us. It looks like a normal 8T hard drive on /dev/sdb to the virtual
>> machine. The ISCSI array is configured for RAID6 so, from what I
>> understand, all I have to do is choose a file system appropriate for my
>> end users' needs. Even though the array looks like a single hard drive,
>> I don't have to worry about software RAID or anything like that.
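>>
>> (A quick sanity check from the VM, if I want to convince myself it
>> really is one plain block device with no software RAID for me to
>> assemble:
>>
>>     lsblk /dev/sdb        # shows the 8T device and any partitions on it
>>     cat /proc/mdstat      # confirms no md arrays are involved
>>
>> Both are stock commands; /dev/sdb is just how the LUN shows up here.)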
>>
>> Googling shows me no clear advantage to ext4, xfs, or zfs. I haven't
>> been able to find a page that says any one of those is an obvious choice
>> in my situation. I have about 150 end-users with NFS-mounted home
>> directories. We also have a handful of people using Windows so the file
>> server will have samba installed. It's a pretty good mix of large files
>> and small files since different users are doing drastically different
>> things. There are users who never do anything but read email and browse
>> the web and others doing fluid dynamic simulations on small
>> supercomputers.
>>
>> The second thing I've been going back and forth on in my own mind is
>> whether to do away with separate partitions for faculty, staff, and grad
>> students. My co-worker says that's probably an artifact of the days when
>> partition sizes were limited. That was before my time here. The last 2
>> times we rebuilt our file server, we just maintained the partitioning
>> scheme and just made the partitions larger. But sometimes the faculty
>> partition got filled up while there was still plenty of space left on
>> the grad partition. Or it might be the other way around. If we munged
>> them all together, that wouldn't happen. The only downside I see to
>> doing that is losing the isolation: as things stand, if the faculty
>> partition gets hosed, the grad partition isn't affected. But that seems
>> like a pretty arbitrary choice. We could just assign users randomly to
>> one partition or another.
>> When you're setting up a NAS for use by a lot of users, is it considered
>> best practice to split it up to limit the damage from a messed-up file
>> system? I mean, hopefully, that never happens anyway, right?
>>
>> Right now, I've got it configured as one gigantic 8T ext4 partition. But
>> we won't be going live with it until the end of May, so I have plenty of
>> time to completely rebuild it.
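>>
>> (If we do end up rebuilding it with a different filesystem, my
>> understanding is that it's basically just a mkfs away, e.g. for XFS --
>> the label and device below are only placeholders:
>>
>>     mkfs.xfs -L home /dev/sdb1
>>     mount /dev/sdb1 /home
>>
>> plus redoing the fstab entry and the NFS/Samba exports.)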
>>
--
Todd