<div dir="auto"><div>Longhorn is really only intended to serve as distributed block storage for kubernetes. I don't think it fits Allen's use case.<br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Mar 28, 2021, 1:24 PM James Taylor via Ale <<a href="mailto:ale@ale.org">ale@ale.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Since SUSE bought Rancher, they are going all in on kubernetes as a focus.<br>
Now to place a bit of my ignorance on center stage.

I haven't looked into it too much, but what about Longhorn?
It's advertised as being designed for Kubernetes persistent storage, but would it be useful in a non-Kubernetes or mixed environment?
I suspect SUSE will have migration tools available for moving off CEPH, if they don't already.
-jt

James Taylor
678-697-9420
james.taylor@eastcobbgroup.com

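On the Longhorn question above: Longhorn volumes are normally requested and attached through Kubernetes objects (a PersistentVolumeClaim bound to a pod), which is the sense in which it only serves Kubernetes workloads. A minimal sketch of that consumption path, assuming the kubernetes Python client, a cluster with Longhorn installed, and its default "longhorn" StorageClass (the claim name and size below are made up):

    from kubernetes import client, config

    config.load_kube_config()  # assumes a working kubeconfig for the cluster
    core = client.CoreV1Api()

    # Hypothetical claim: 50 GiB of scratch space backed by Longhorn.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="scratch-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="longhorn",  # Longhorn's default StorageClass
            resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

    # A pod (not shown) then mounts the claim as a volume; Longhorn replicates
    # the block device across cluster nodes underneath. Nothing here gives a
    # plain NFS export to hosts outside the cluster, which is the crux of the
    # "Kubernetes-only" point above.
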
>>> Allen Beddingfield via Ale <ale@ale.org> 3/26/2021 4:28 PM >>>
Wondering if any of you have experience with distributed filesystems such as CEPH, GlusterFS, MooseFS, etc.?
We've been using SUSE's "SUSE Enterprise Storage" package, which is CEPH packaged with a Salt installer and their UI. Anyway, it worked well, but for our purposes it was sort of like using a cement block to smash a fly.
SUSE notified us yesterday that they are getting out of that business and will EOL the product in two years. I'm glad they let us know BEFORE we renewed the maintenance in May.
That really wasn't that big of a deal for us, because we were about to do a clean-slate re-install/hardware refresh anyway.
Sooo....
I'm looking into MooseFS and GlusterFS at this point, as they are much simpler (at least to me) to deploy and manage compared with CEPH. Do any of you have experience with these? Thoughts?
The use case is to use CHEAP hardware (think lots of servers full of 10TB SATA drives and 1.2TB SAS drives) to share out big NFS (or native-client) shares as temporary/scratch space, where performance isn't that important.
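
For what it's worth, here is a rough sketch of what a small GlusterFS deployment for that kind of scratch space can look like, run from the first storage node. This is only an illustration: the host names, brick path, and volume name are made up, and it assumes glusterd is already installed and running on every node.

    #!/usr/bin/env python3
    """Sketch only: scripted bring-up of a small replicated GlusterFS scratch volume."""
    import subprocess

    NODES = ["gfs1", "gfs2", "gfs3"]   # hypothetical storage servers
    BRICK = "/bricks/scratch"          # hypothetical brick directory on each node
    VOLUME = "scratch"                 # hypothetical volume name

    def run(cmd):
        # Echo and run one gluster CLI command, stopping on the first failure.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Form the trusted pool from the first node.
    for node in NODES[1:]:
        run(["gluster", "peer", "probe", node])

    # One replica-3 volume spread across the cheap boxes.
    bricks = [f"{n}:{BRICK}" for n in NODES]
    run(["gluster", "volume", "create", VOLUME, "replica", "3", *bricks])
    run(["gluster", "volume", "start", VOLUME])

    # Clients would then use the native FUSE client, roughly:
    #   mount -t glusterfs gfs1:/scratch /mnt/scratch
    # or the volume can be exported over NFS (e.g. via NFS-Ganesha) for plain NFS clients.

That peer-probe/volume-create/volume-start sequence is more or less the whole deployment, which is part of why GlusterFS can feel lighter than a full CEPH cluster for low-stakes scratch space.
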

Allen B.
--
Allen Beddingfield
Systems Engineer
Office of Information Technology
The University of Alabama
Office 205-348-2251
allen@ua.edu