<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Yes. This.<br>
<br>
I am hardcore on <i>not</i> using any proprietary device for file
serving, regardless of how much you spend. Without going too far
into the weeds with my personal experiences, it basically comes
down to this: being able to act directly on the device's disk
contents and serve them out however you want buys you far more in
utility than you can hope to gain in convenience. The last
production file server I built (Gentoo Linux/Samba/nfsd) was a
beast. Running ClamAV over its served-out contents lit up all its
cores, reading and scanning at well over 200 MiB/s. Nightly it
rsynced its primary share over to unshared space on big/slow
drives, made a squashfs volume from it, remounted that, and backed
the rsynced copy up to tape before deleting that copy.<br>
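<br>
Conceptually, that nightly job looked something like the sketch
below. This is from memory, not the actual script, and the paths,
share location, and tape device are stand-ins:<br>
<pre>
#!/bin/sh
# Nightly cycle: stage a copy of the live share, squash it, mount the
# squashfs snapshot read-only, dump the staged copy to tape, clean up.
# All paths and device names here are illustrative, not the real ones.
set -e

SRC=/srv/share                  # the served-out primary share
STAGE=/backup/stage/share       # unshared space on the big/slow drives
SQUASH=/backup/share-$(date +%F).sqsh
SNAP=/mnt/share-snapshot
TAPE=/dev/nst0                  # non-rewinding tape device

# 1. Mirror the live share into unshared staging space.
rsync -aH --delete "$SRC"/ "$STAGE"/

# 2. Build a read-only squashfs image from the staged copy and mount it,
#    dropping last night's snapshot if it is still mounted.
mksquashfs "$STAGE" "$SQUASH" -noappend
umount "$SNAP" 2>/dev/null || true
mkdir -p "$SNAP"
mount -o loop,ro "$SQUASH" "$SNAP"

# 3. Stream the staged copy to tape, then reclaim the staging space.
tar -cf "$TAPE" -C "$STAGE" .
rm -rf "$STAGE"
</pre>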
<br>
Another "superpower" this gave us over even the [supposedly]
high-end IBM NAS (actually a rebadged NetApp) we had was
searchability. We didn't so much have people deleting stuff
accidentally as we did "wild mouse drags" - the user didn't know
where their stuff had gone. If they could tell me anything about
what they were trying to do and with what, I could find and
recover it with the typical shell commands, with the searching
taking place at disk speeds instead of network/protocol speeds.
Before you decide that network/protocol speeds couldn't possibly be
slow enough to matter, understand that we had <i>backed down</i>
desktop connections from gigabit Ethernet to 100BASE-T, apparently
so a contractor could work with some possibly-proprietary Cisco
feature on used and/or refurbished equipment, just so he could
write an article. So you did <i>not</i> want to run searches over
the network in this shop unless you had no choice. <br>
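<br>
A typical recovery on the server looked something like this (the
user name, paths, and filenames here are made up for the example):<br>
<pre>
# Find the stray file or folder by whatever they remember of its name,
# wherever the wild mouse drag happened to drop it:
find /srv/share -iname '*quarterly*' -ls

# Or, when all they remember is a phrase from the file's contents:
grep -rli 'Q3 forecast' /srv/share/Departments

# Then put it back where it belongs:
mv '/srv/share/Engineering/Old Stuff/quarterly-report.xls' \
   /srv/share/Accounting/Reports/
</pre>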
<br>
Cost of that IBM NAS? About $40,000. My server, plus spares of
everything, including a whole separate rackmount case and
motherboard as a warm spare: $25,000. And with the IBM, we had to
pay extra for each enabled protocol. Also, take this to be a
cautionary tale about the use of IT contractors and the importance
of contractor oversight. <br>
<br>
As an aside: as a wise man cough*BobToxen*cough said to me once, <i>all
NFS implementations are broken</i>. Irrespective of protocol,
you do not want to be in a position where you can't readily update
your protocol daemons. With proprietary NASes you're pretty much
at the manufacturers' mercy in that regard. A rep at LaCie once
told me that while their units had NFS available, they really paid
no attention to how well it worked and didn't exactly bust their
humps trying to fix it when it didn't, because only a vanishingly
tiny proportion of their customer base even cared to use NFS. <br>
<br>
And yes, RAID5/6 on drives bigger than 1 TB is a don't-do-that,
because the probability of hitting an unrecoverable read error
during a rebuild climbs right off the peg. And for file serving I
advocate keeping the RAM to a minimum (my server, IIRC, had 4 GiB)
unless it's ECC, because you want as few cosmic-ray catchers in
there as possible.<br>
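<br>
The back-of-the-envelope math behind the RAID point, assuming the
common consumer-drive rating of one unrecoverable read error per
1e14 bits read (check your drives' datasheets; enterprise drives
are usually rated an order of magnitude better):<br>
<pre>
# Rough odds of hitting at least one URE while reading N terabytes,
# given a rating of 1 URE per 1e14 bits read.
tb=8   # how much surviving data a rebuild has to read, in TB
awk -v tb="$tb" 'BEGIN {
    bits = tb * 8e12                # terabytes to bits
    p    = 1 - (1 - 1e-14)^bits     # chance of at least one URE
    printf "P(URE over %g TB read) = about %.0f%%\n", tb, p * 100
}'
# Comes out to roughly 47% for 8 TB, and a RAID5 rebuild has to read
# every surviving member end to end, so it only gets worse from there.
</pre>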
<br>
On 6/15/17 11:04 AM, DJ-Pfulio wrote:<br>
</div>
<blockquote type="cite"
cite="mid:5f514f2e-9552-5416-d00f-28f460cc8044@jdpfu.com">
<pre wrap="">On 06/15/2017 09:29 AM, Ken Cochran wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Any ALEr Words of Wisdom wrt desktop NAS?
Looking for something appropriate for, but not limited to, photography.
Some years ago Drobo demoed at (I think) AUUG. (Might've been ALE.)
Was kinda nifty for the time but I'm sure things have improved since.
Synology? QNAP?
Build something myself? JBOD?
Looks like they're all running Linux inside these days.
Rackmount ones look lots more expensive.
Ideas? What to look for? Stay away from? Thanks, Ken
</pre>
</blockquote>
<pre wrap="">
Every time I look at the pre-built NAS devices, I think - that's $400
too much and not very flexible. These devices are certified with
specific models of HDDs. Can you live with a specific list of supported
HDDs and limited, specific, software?
Typical trade off - time/convenience vs money. At least initially.
Nothing you don't already know.
My NAS is a $100 x86 box built from parts. Bought a new $50 Intel
G3258 CPU and a $50 motherboard. Reused stuff left over from prior
systems for everything
else, at least initially.
Reused:
* 8G of DDR3 RAM
* Case
* PSU
* 4TB HDD
* assorted cables to connect to a KVM and network. That was 3 yrs ago.
Most of the RAM is used for disk buffering.
That box has 4 internal HDDs and 4 external in a cheap $99 array
connected via USB3. Internal is primary, external is the rsync mirror
for media files.
It runs Plex MS, Calibre, and 5 other services. The CPU is powerful
enough to transcode 2 HiDef streams concurrently for players that need it.
All the primary storage is LVM managed. I don't span HDDs for LVs.
Backups are not LVM'd; a simple rsync is used for media files. OS,
application, and non-media content gets backed up with 60 versions using
rdiff-backup to a different server over the network.
That original 4TB disk failed a few weeks ago. It was a minor
inconvenience. Just sayin'.
If I were starting over, the only thing I'd do differently would be to
more strongly consider ZFS. Don't know that I'd use it, but it would be
considered for more than 15 minutes for the non-OS storage. Bitrot is
real, IMHO.
I use RAID elsewhere on the network, but not for this box. It is just a
media server (mainly), so HA just isn't needed.
At SELF last weekend, there was a talk about using RAID5/6 on HDDs over
2TB in size by a guy in the storage biz. The short answer was - don't.
The rebuild time after a failure in their testing was measured in
months. They were using quality servers, disks and HBAs for the test. A
5x8TB RAID5 rebuild was predicted to finish in over 6 months under load.
There was also discussion about whether using RAID with SSDs was smart
or not. RAID10 was considered fine. RAID0 if you needed performance,
but not for the long term. The failure rate on enterprise SSDs is so
low as to make it a huge waste of time except for the most critical
applications. They also suggested avoiding SAS and SATA interfaces on
those SSDs, since the interface becomes the performance bottleneck.
Didn't mean to write a book. Sorry.
_______________________________________________
Ale mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Ale@ale.org">Ale@ale.org</a>
<a class="moz-txt-link-freetext" href="http://mail.ale.org/mailman/listinfo/ale">http://mail.ale.org/mailman/listinfo/ale</a>
See JOBS, ANNOUNCE and SCHOOLS lists at
<a class="moz-txt-link-freetext" href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a>
</pre>
</blockquote>
<p><br>
</p>
</body>
</html>