<html><head></head><body>I'm pretty sure Ceph is still supported by RHEL. I saw the drop notice from SUSE this morning, and it looks like what's being dropped is "further development and installation". So there's time.<br><br>Sadly, no network filesystem is "beginner supportable" unless stability is irrelevant.<br><br><div class="gmail_quote">On March 26, 2021 7:23:43 PM EDT, Allen Beddingfield via Ale <ale@ale.org> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<pre class="k9mail">Yeah, I've been playing around with both. Neither is going to be something I can hand off administration of to a non-Linux geek. <br>I'm really wanting something that matches what SUSE advertised their CEPH solution to be, which pretty much doesn't exist outside of products by the major SAN vendors.<br>That would be - something that let's you just smack together a bunch of mismatched servers crammed full of disks into an easy to administer NAS type of setup, where it just automagically places data in the optimal spot, etc... but that pretty much describes the features of our Compellent SAN....so not at all realistic.<br><br>It looks like Gluster will probably end up doing what we need. What I WANT doesn't exist, and that would amount to multi-node FreeNAS!<br>--<br>Allen Beddingfield<br>Systems Engineer<br>Office of Information Technology<br>The University of Alabama<br>Office 205-348-2251<br>allen@ua.edu<hr>From: Ale <ale-bounces@ale.org> on behalf of Jim Kinney via Ale <ale@ale.org><br>Sent: Friday, March 26, 2021 5:28 PM<br>To: Atlanta Linux Enthusiasts<br>Cc: Jim Kinney<br>Subject: [EXTERNAL] Re: [ale] Distributed filesystems?<br><br>Moosefs and glusterfs are VERY different. Moose is an object storage and gluster is more like raid over ethernet.<br><br>Like raid, easy setup of gluster but really hard to change if the needs change: think transition between triple redundant raid 1 to a blazing fast raid 0. Using hardware raid for intra-node redundancy and then maxing out node bandwidth for performance. More nodes means faster performance. It works but has some crunchy spots. Dropping IB support is a killer.<br><br>Moosefs is an object store. A (redundant) head serves metadata and node/chunk location for file blocks. Blocks are handled internally for desired redundancy. Like gluster, more nodes makes it faster. Big caveat is clients need a client tool. 
Gluster is an nfs/cifs service provider.<br><br>Neither one (in fact, no large-scale file server) likes lots of little files.<br>Delivering 10,000 1k files is always horrible. Chunking around a single 10M file is faster. Getting users to always zip/tar and move 10,000 files at once never seems to happen.<br><br>On March 26, 2021 4:28:09 PM EDT, Allen Beddingfield via Ale <ale@ale.org> wrote:<br><br>Wondering if any of you have experience with distributed filesystems, such as CEPH, GlusterFS, MooseFS, etc...?<br>We've been using SUSE's "SUSE Enterprise Storage" package, which is CEPH, packaged with a Salt installer and their UI. Anyway, it worked well, but was sort of like using a cement block to smash a fly for our purposes.<br>SUSE notified us yesterday that they are getting out of that business, and will EOL the product in two years. I'm glad they let us know BEFORE we renewed the maintenance in May.<br>That really wasn't that big of a deal for us, because we were about to do a clean-slate re-install/hardware refresh anyway.<br>Sooo....<br>I'm looking into MooseFS and GlusterFS at this point, as they are much simpler to deploy and manage (at least to me) compared with CEPH. Do any of you have experience with these? 
Thoughts?<br>The use case is to use CHEAP (think lots of servers full of 10TB SATA drives and 1.2TB SAS drives) hardware to share out big NFS (or native client) shares as temporary/scratch space, where performance isn't that important.<br><br>Allen B.<br>--<br>Allen Beddingfield<br>Systems Engineer<br>Office of Information Technology<br>The University of Alabama<br>Office 205-348-2251<br>allen@ua.edu<hr>Ale mailing list<br>Ale@ale.org<br><a href="https://mail.ale.org/mailman/listinfo/ale">https://mail.ale.org/mailman/listinfo/ale</a><br>See JOBS, ANNOUNCE and SCHOOLS lists at<br><a href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a><br><br>--<br>Computers amplify human error<br>Super computers are really cool<br></pre></blockquote></div><br>-- <br>Computers amplify human error<br>Super computers are really cool</body></html>