<html><head></head><body>MooseFS and GlusterFS are VERY different. MooseFS is an object store; Gluster is more like RAID over Ethernet.<br><br>Like RAID, Gluster is easy to set up but really hard to change if the needs change: think of transitioning from a triple-redundant RAID 1 to a blazing-fast RAID 0. The usual pattern is hardware RAID for intra-node redundancy, then maxing out node bandwidth for performance. More nodes means faster performance. It works, but it has some crunchy spots, and dropping InfiniBand support is a killer.<br><br>MooseFS is an object store. A (redundant) head serves metadata and node/chunk locations for file blocks, and the blocks themselves are replicated internally to the desired redundancy. Like Gluster, more nodes make it faster. The big caveat is that clients need a client tool, whereas Gluster is an NFS/CIFS service provider.<br><br>Neither one likes lots of little files; in fact, no large-scale file server does. Delivering 10,000 1 KB files is always horrible, while chunking through a single 10 MB file is fast. Getting users to always zip/tar 10,000 files and move them as one archive never seems to happen.<br><br><div class="gmail_quote">On March 26, 2021 4:28:09 PM EDT, Allen Beddingfield via Ale <ale@ale.org> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<pre class="k9mail">Wondering if any of you have experience with distributed filesystems, such as CEPH, GlusterFS, MooseFS, etc...?<br>We've been using SUSE's "SUSE Enterprise Storage" package, which is CEPH, packaged with a Salt installer, and their UI. Anyway, it worked well, but was sort of like using a cement block to smash a fly for our purposes. <br>SUSE notified us yesterday that they are getting out of that business, and will EOL the product in two years. I'm glad they let us know BEFORE we renewed the maintenance in May.<br>That really wasn't that big of a deal for us, because we were about to do a clean slate re-install/hardware refresh, anyway. <br>Sooo.... <br>I'm looking into MooseFS and GlusterFS at this point, as they are much simpler to deploy and manage (at least to me) compared with CEPH. Do any of you have experiences with these? Thoughts?<br>The use case is to use CHEAP (think lots of servers full of 10TB SATA drives and 1.2TB SAS drives) hardware to share out big NFS (or native client) shares as temporary/scratch space, where performance isn't that important.<br><br>Allen B.<br>--<br>Allen Beddingfield<br>Systems Engineer<br>Office of Information Technology<br>The University of Alabama<br>Office 205-348-2251<br>allen@ua.edu<hr>Ale mailing list<br>Ale@ale.org<br><a href="https://mail.ale.org/mailman/listinfo/ale">https://mail.ale.org/mailman/listinfo/ale</a><br>See JOBS, ANNOUNCE and SCHOOLS lists at<br><a href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a><br></pre></blockquote></div><br>-- <br>Computers amplify human error<br>Super computers are really cool</body></html>