<div dir="ltr">I like and bought two iXSystem. I love ZFS so that I can make copies for the developers to play with destroy once they were done. </div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jun 15, 2017 at 2:09 PM, DJ-Pfulio <span dir="ltr"><<a href="mailto:djpfulio@jdpfu.com" target="_blank">djpfulio@jdpfu.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">"Under load" - think that is the diff.<br>
<br>
Took my cheap-ass system 26 hrs to mirror 4TB to a new 4TB 7200rpm disk a few<br>
weeks ago. No RAID. Onboard SATA only. Zero load.<br>
<br>
Look for the SELF videos when they are posted to get past my summary.<br>
<br>
BTW, I'm loving all the different, thoughtful opinions shared on this subject.<br>
Very nice community!<br>
<span class="im HOEnZb"><br>
<br>
On 06/15/2017 01:16 PM, Jim Kinney wrote:<br>
> Wow! A six-month recovery time! I've not had any of my RAID6 systems take longer<br>
> than 10 days with pretty heavy use. These are 4TB SAS drives with 28 drives per<br>
> array.<br>
><br>
> On Jun 15, 2017 5:08 PM, "DJ-Pfulio" <<a href="mailto:DJPfulio@jdpfu.com">DJPfulio@jdpfu.com</a><br>
</span><div class="HOEnZb"><div class="h5">> <mailto:<a href="mailto:DJPfulio@jdpfu.com">DJPfulio@jdpfu.com</a>>> wrote:<br>
><br>
> On 06/15/2017 09:29 AM, Ken Cochran wrote:<br>
> > Any ALEr Words of Wisdom wrt desktop NAS?<br>
> > Looking for something appropriate for, but not limited to, photography.<br>
> > Some years ago Drobo demoed at (I think) AUUG. (Might've been ALE.)<br>
> > Was kinda nifty for the time but I'm sure things have improved since.<br>
> > Synology? QNAP?<br>
> > Build something myself? JBOD?<br>
> > Looks like they're all running Linux inside these days.<br>
> > Rackmount ones look lots more expensive.<br>
> > Ideas? What to look for? Stay away from? Thanks, Ken<br>
><br>
> Every time I look at the pre-built NAS devices, I think: that's $400<br>
> too much and not very flexible. These devices are certified with<br>
> specific models of HDDs. Can you live with a specific list of supported<br>
> HDDs and limited, specific software?<br>
><br>
> Typical trade-off - time/convenience vs. money. At least initially.<br>
> Nothing you don't already know.<br>
><br>
> My NAS is a $100 x86 box built from parts. Bought a new $50 Intel G3258<br>
> CPU and a $50 motherboard. Reused stuff left over from prior systems for everything<br>
> else, at least initially.<br>
> Reused:<br>
> * 8G of DDR3 RAM<br>
> * Case<br>
> * PSU<br>
> * 4TB HDD<br>
> * assorted cables to connect to a KVM and network. That was 3 yrs ago.<br>
><br>
> Most of the RAM is used for disk buffering.<br>
><br>
> That box has 4 internal HDDs and 4 external in a cheap $99 array<br>
> connected via USB3. Internal is primary, external is the rsync mirror<br>
> for media files.<br>
><br>
> It runs Plex Media Server, Calibre, and 5 other services. The CPU is powerful<br>
> enough to transcode 2 HiDef streams concurrently for players that need it.<br>
> All the primary storage is LVM-managed. I don't span HDDs for LVs.<br>
> Backups are not LVM'd, and a simple rsync is used for media files. OS,<br>
> application, and non-media content gets backed up with 60 versions using<br>
> rdiff-backup to a different server over the network.<br>
><br>
> That original 4TB disk failed a few weeks ago. It was a minor<br>
> inconvenience. Just sayin'.<br>
><br>
> If I were starting over, the only thing I'd do differently would be to<br>
> more strongly consider ZFS. Don't know that I'd use it, but it would be<br>
> considered for more than 15 minutes for the non-OS storage. Bitrot is<br>
> real, IMHO.<br>
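[On the bitrot angle, a sketch of what ZFS buys you - pool name "tank" is hypothetical. Every block is checksummed, and a periodic scrub verifies those checksums and, given redundancy, repairs silent corruption:]

```shell
# Walk every allocated block in the pool and verify checksums;
# on a mirror or raidz vdev, bad copies are rewritten from good ones.
zpool scrub tank
# Per-device read/write/checksum error counts, plus any files
# ZFS could not repair:
zpool status -v tank
```

[A common practice is to run the scrub from cron, weekly or monthly, and watch zpool status for nonzero checksum counts.]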
><br>
> I use RAID elsewhere on the network, but not for this box. It is just a<br>
> media server (mainly), so HA just isn't needed.<br>
><br>
> At SELF last weekend, a guy in the storage biz gave a talk about using<br>
> RAID5/6 on HDDs over 2TB in size. The short answer was - don't.<br>
><br>
> The rebuild time after a failure in their testing was measured in<br>
> months. They were using quality servers, disks and HBAs for the test. A<br>
> 5x8TB RAID5 rebuild was predicted to finish in over 6 months under load.<br>
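[Months-long rebuilds are plausible from simple arithmetic; a back-of-envelope sketch, where the rates are illustrative assumptions, not numbers from the talk:]

```shell
# Days to stream one whole member disk at a given effective
# rebuild rate. Under heavy competing I/O the effective rate
# can fall to a small fraction of the raw disk speed.
rebuild_days() {
  tb=$1; mbps=$2
  # TB -> MB, divide by MB/s for seconds, then by 86400 for days
  echo $(( tb * 1000000 / mbps / 86400 ))
}
rebuild_days 8 5    # modest contention: 18 days
rebuild_days 8 1    # heavy contention: 92 days - months, as claimed
```

[Parity RAID makes this worse still, since a rebuild must read every surviving member to reconstruct the failed one.]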
><br>
> There were also discussions about whether using RAID with SSDs was smart<br>
> or not. RAID10 was considered fine; RAID0 only if you need performance,<br>
> but not for long-term use. The failure rate on enterprise SSDs is so low<br>
> that RAID is a huge waste of time except for the most critical applications.<br>
> They also suggested avoiding SAS and SATA interfaces on those SSDs,<br>
> because those interfaces limit performance.<br>
><br>
> Didn't mean to write a book. Sorry.<br>
<br>
______________________________<wbr>_________________<br>
Ale mailing list<br>
<a href="mailto:Ale@ale.org">Ale@ale.org</a><br>
<a href="http://mail.ale.org/mailman/listinfo/ale" rel="noreferrer" target="_blank">http://mail.ale.org/mailman/<wbr>listinfo/ale</a><br>
See JOBS, ANNOUNCE and SCHOOLS lists at<br>
<a href="http://mail.ale.org/mailman/listinfo" rel="noreferrer" target="_blank">http://mail.ale.org/mailman/<wbr>listinfo</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Terror PUP a.k.a<br>Chuck "PUP" Payne<br>-----------------------------------------<br>Discover it! Enjoy it! Share it! openSUSE Linux.<br>-----------------------------------------<br>openSUSE -- Terrorpup<br>openSUSE Ambassador/openSUSE Member<br>skype,twiiter,identica,friendfeed -- terrorpup<br>freenode(irc) --terrorpup/lupinstein<br>Register Linux Userid: 155363<br> <br>Have you tried SUSE Studio? Need to create a Live CD, an app you want to package and distribute , or create your own linux distro. Give SUSE Studio a try.</div></div>
</div>