[ale] XFS on Linux - Is it ready for prime time?

Dustin Puryear dpuryear at puryear-it.com
Tue Apr 27 18:57:31 EDT 2010


I tend to agree about RAID-5. Use RAID-5 if your main goal is saving money; outside of that, its write performance makes it the wrong choice.

We use it on our general-purpose file server here, but we don't need a lot of performance on that NAS. For a database or a high-transaction-volume storage device we would go with RAID-10. For system drives we typically use RAID-1.
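For concreteness, here is a sketch of what those choices might look like with Linux md. The device names (/dev/sdb through /dev/sdg) and array layout are assumptions for illustration; the script only writes the commands out to a plan file rather than running them.

```shell
# Sketch only: write out (don't run) hypothetical mdadm commands matching
# the RAID choices above. Device names /dev/sdb../dev/sdg are assumptions.
cat > /tmp/raid-plan.sh <<'EOF'
# RAID-10 for a database / high-transaction-volume array (4 disks):
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# RAID-1 mirror for the system drives (2 disks):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
EOF
cat /tmp/raid-plan.sh
```

Review the plan against your actual disk inventory before running anything; mdadm --create is destructive on the named devices.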

-----Original Message-----
From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Doug McNash
Sent: Thursday, April 22, 2010 8:14 PM
To: Atlanta Linux Enthusiasts - Yes! We run Linux!
Subject: Re: [ale] XFS on Linux - Is it ready for prime time?


So, you think we shouldn't be using RAID-5, huh. I asked why RAID-5 and they said they wanted the reliability. XFS was chosen because of its reputation for handling large (140 MB) video files and their associated metadata. And yes, it is NFS.

The big remaining problem is that the system periodically stalls and may take a few seconds to send back the write acks. At that point the writer assumes the write has failed and starts dropping data.

Is this stalling inherent to RAID-5, or to XFS, and would different choices improve it?
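One common (though by no means guaranteed) culprit for multi-second stalls like this is the kernel flushing a large accumulation of dirty pages all at once, during which write acks back up. A sketch of smoothing that out by lowering the writeback thresholds follows; the percentages are illustrative assumptions to experiment with, not tested recommendations, and the script only writes a sysctl fragment rather than applying it live.

```shell
# Write a sysctl fragment that makes background writeback kick in earlier,
# so flushes are smaller and less likely to stall write acks for seconds.
# The specific percentages are assumptions to tune, not recommendations.
cat > /tmp/99-writeback.conf <<'EOF'
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
EOF
# Apply (as root) with: sysctl -p /tmp/99-writeback.conf
cat /tmp/99-writeback.conf
```

If the stalls persist with small writeback thresholds, that points more toward the RAID-5 parity read-modify-write cost or NFS server tuning than toward page-cache behavior.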
--
doug mcnash

---- scott mcbrien <smcbrien at gmail.com> wrote: 
> A lot of the XFS guys from SGI now work for RH.  But that's an aside.
> More to the point is how many machines are simultaneously accessing
> said NAS?  If it's designed for a single system to access, then just
> use something like ext3.  If you're needing multiple simultaneous
> accesses with file locking to avoid the problems that occur with
> multiple machines opening and closing files, you might try GFS.  A
> couple of other things:
> 
> 1)  You probably don't want to use RAID 5.  RAID 5 has data throughput
> issues, especially with large stripe units and/or small file changes,
> because every small write incurs a read-modify-write of the parity.
> The most likely alternatives, RAID 10 or RAID 0+1, aren't that
> attractive because they require twice as many disks, but for the
> additional expense you'll get much more throughput.
> 
> 2) One might want a different file system because of the data stored
> on the filesystem.  reiserfs, for example, is really good at storing
> copious amounts of small files, whereas GFS is good for
> multiple-machine access, while ext3 is just solid for ordinary user
> and process access on a single machine.
> 
> 3) RAID 5 isn't high performance.
> 
> 4)  I'm guessing that they're sharing the filesystem via NFS; you
> might want to make sure the NFS server is properly tuned and that the
> clients aren't doing anything insane that could corrupt your data.
> 
> 5)  You really need to move off of RAID 5.
> 
> -Scott
> 
> On Wed, Apr 21, 2010 at 10:15 PM, Jim Kinney <jim.kinney at gmail.com> wrote:
> > How odd. I started using XFS before it was a native thing in Red Hat
> > (pre-RHEL, pre-ext3 days). It always seemed solid and reliable. It was
> > written by SGI (the Linux port was done by SGI as well) and it had a
> > solid track record as a file system suited to huge amounts of data
> > (moving video files was a common use). It worked on all of my stuff,
> > with every RAID level I threw at it. It was imperative to install the
> > XFS tools (xfsprogs) to work with it, but it sounds like you already
> > have them. If xfs_check is dying due to RAM issues, I would be more
> > suspicious of bad hard drives than of the XFS code. If there have been
> > a ton of write/delete/write cycles on the drives, the journalling may
> > be corrupted. I'm not sure how to fix that.
> >
> > On Wed, Apr 21, 2010 at 9:34 PM, Doug McNash <dmcnash at charter.net> wrote:
> >>
> >> I'm consulting at a company that wants to turn their Linux-based NAS into
> >> a reliable product.  They initially chose XFS because they were under the
> >> impression that it was high performance, but what they got was something of
> >> questionable reliability. I have identified and patched several serious bugs
> >> (in 2.6.29) and I have a feeling there are more unidentified ones out there.
> >> Furthermore, xfs_check runs out of memory every time, so we have to do an
> >> xfs_repair at boot, and it takes forever.  But today we got into a situation
> >> where xfs_repair can't repair the disk (a RAID-5 array, btw).
> >>
> >> Does anyone out there use XFS? Any suggestions for a stable
> >> replacement?
> >> --
> >> doug mcnash
> >> _______________________________________________
> >> Ale mailing list
> >> Ale at ale.org
> >> http://mail.ale.org/mailman/listinfo/ale
> >> See JOBS, ANNOUNCE and SCHOOLS lists at
> >> http://mail.ale.org/mailman/listinfo
> >
> >
> >
> > --
> > --
> > James P. Kinney III
> > Actively in pursuit of Life, Liberty and Happiness
> >
> >

