<p>I built a large ZFS pool for my personal use here at the house (<a href="http://gcs8.org/san">gcs8.org/san</a>). iSCSI has good throughput, and you can have a small server take care of sharing it out from there. I use FreeNAS and it has served me pretty well; I have ~35.6 TB after RAID-Z2. The theory behind my design was that if I lose any hardware I can replace it with whatever, since FreeNAS is taking care of my disks, not the hardware. If I could change two things about my setup, I would try to get InfiniBand or at least 10 Gb Ethernet, and use an SSD for caching.</p>
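For anyone curious, a setup along those lines can be sketched at the command line; the pool name, disk count, device names, and zvol size below are illustrative placeholders, not the actual config:

```shell
# Build a RAID-Z2 pool (survives any two disk failures) from six disks.
# "tank" and the /dev/ada* device names are placeholders.
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

# Carve out a zvol to export as an iSCSI LUN; FreeNAS drives this from its
# web UI, but underneath it is just a block-backed ZFS dataset.
zfs create -V 2T tank/iscsi-lun0

# Sanity-check the pool layout and health.
zpool status tank
```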
<p>Now, I can't afford to keep a second pool to rsync to, but I do use CrashPlan to back it up. It works fine; I have 8.7 TB backed up with them right now. Just my $.02.</p>
<p>From gcs8's mobile device.</p>
<div class="gmail_quote">On Jul 12, 2012 1:23 AM, "Matthew" <<a href="mailto:simontek@gmail.com">simontek@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Look into Areca cards, good bang for the buck. Also email <a href="mailto:contact@marymconley.com" target="_blank">contact@marymconley.com</a>; my wife worked with petabyte systems for Hollywood.<br><br>On Wednesday, July 11, 2012, Jeff Layton <<a href="mailto:laytonjb@att.net" target="_blank">laytonjb@att.net</a>> wrote:<br>
> Alex,<br>><br>> I work for a major vendor and we have solutions that scale larger<br>> than this but I'm not going to give you a commercial or anything,<br>> just some advice.<br>><br>> I have friends and customers who have tried to go the homemade<br>
> route à la Backblaze (sorry for those who love BB, but I can tell<br>> you true horror stories about it) and have lived to regret it. Just<br>> grabbing a few RAID cards and some drives and slapping them<br>
> together doesn't really work (believe me - I've tried it myself as<br>> have others). I recommend buying enterprise grade hardware, but<br>> that doesn't mean it has to be expensive. You can get well under<br>
> $0.50/GB with 3 years of full support all the way to the file system.<br>> Not sure if this meets your budget - it may be a bit higher<br>> than you want.<br>><br>> I can also point you to documentation we publish that explains<br>
> in gory detail how we build our solutions. All the commands and<br>> configurations are published, including the tuning we do. But<br>> as part of this, I highly recommend XFS. We scale it to 250 TB<br>> with no issue, and we have a customer who's gone to 576 TB<br>
> for a lower performance file system.<br>><br>> I also recommend getting a server with a reasonable amount<br>> of memory in case you need to do an fsck. Memory always<br>> helps. I would also think about getting a couple of small 15K<br>
> drives and running them as RAID-0 for a swap space. If the<br>> file system starts an fsck and swaps (which it can easily do<br>> for larger file systems) you will be grateful - fsck performance<br>> is much, much better and takes less time.<br>
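A minimal sketch of the RAID-0 swap setup described above, assuming two spare 15K drives; the /dev/sdx and /dev/sdy device names are placeholders:

```shell
# Stripe the two fast drives into one RAID-0 device. No redundancy is fine
# here: swap contents are expendable, and striping is purely for speed.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdx /dev/sdy

# Format the array as swap and enable it at a higher priority than any
# existing swap device, so a memory-hungry fsck spills here first.
mkswap /dev/md0
swapon -p 10 /dev/md0
```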
><br>> If you want to go a bit cheaper, then I recommend going the<br>> Gluster route. You can get it for free and it only takes a bunch<br>> of servers. However, if the data is important, then build two<br>> copies of the hardware and rsync between them - at least you<br>
> have a backup copy at some point.<br>><br>> Good luck!<br>><br>> Jeff<br>><br>><br>> ________________________________<br>> From: Alex Carver <<a href="mailto:agcarver%2Bale@acarver.net" target="_blank">agcarver+ale@acarver.net</a>><br>
> To: Atlanta Linux Enthusiasts <<a href="mailto:ale@ale.org" target="_blank">ale@ale.org</a>><br>> Sent: Wed, July 11, 2012 5:21:08 PM<br>> Subject: Re: [ale] Giant storage system suggestions<br>><br>> No, performance is not the issue, cost and scalability are the main<br>
> drivers. There will be very few users of the storage (at home it would<br>> just be me and a handful of computers) and at work it would be maybe<br>> five to ten people at most that just want to archive large data files to<br>
> be recalled as needed.<br>><br>> Safety is certainly important but I don't want to burn too many disks to<br>> redundancy and lose storage space in the array. I didn't plan to have<br>> one monolithic RAID5 array either since that would get really slow which<br>
> is why I first thought of small arrays (4-8 disks per array) merged with<br>> each other into a single logical volume.<br>><br>> On 7/11/2012 14:12, Lightner, Jeff wrote:<br>>> If you're looking at stuff on that scale is performance not an issue? There are disk arrays that can go over fibre and if it were me I'd probably be looking at those especially if performance was a concern.<br>
>><br>>> RAID5 is begging for trouble - losing 2 disks in a RAID5 means the whole RAID set is kaput. I'd recommend at least RAID6 and even better (for performance) RAID10.<br>>><br>>><br>>><br>
>><br>>><br>>> -----Original Message-----<br>>> From: <a href="mailto:ale-bounces@ale.org" target="_blank">ale-bounces@ale.org</a> [mailto:<a href="mailto:ale-bounces@ale.org" target="_blank">ale-bounces@ale.org</a>] On Behalf Of Alex Carver<br>
>> Sent: Wednesday, July 11, 2012 5:04 PM<br>>> To: Atlanta Linux Enthusiasts<br>>> Subject: [ale] Giant storage system suggestions<br>>><br>>> I'm trying to design a storage system for some of my data in a way that will be useful to duplicate the design for a project at work.<br>
>><br>>> Digging around online it seems that a common suggestion has been a good motherboard, a SATA/SAS card, a SATA/SAS expander, and then a huge chassis to support all of the SATA drives.<br>>><br>>> It looks like one of the recommended SATA/SAS cards is an LSI 9200 series card connected to an Intel RES2SV240 expander.<br>
>><br>>> What I'm trying to achieve is continually expandable storage space. As more storage is required, I just keep slipping drives into the system.<br>>> If I max out a case, I just add a SATA/SAS card, use external SATA/SAS cables (do those exist to go from SFF-8087 to SFF-8088?), another expander and then stretch into a new case.<br>
>><br>>> It's obviously going to run Linux or I wouldn't be asking here. :) The entire storage system will probably start somewhere around 10-16 TB and grow from there. The first question would be suggestions for an optimal<br>
>> configuration of the disks. For example, should the drives be grouped<br>>> into say RAID-5 arrays with four devices per array and then logically combine them in software into a single storage volume? If so, what file system will support something that could potentially reach beyond 100 TB (not that I'd reach 100 TB anytime soon but it can happen)?<br>
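The layering asked about above (small RAID sets joined into one growable logical volume) can be sketched with md + LVM + XFS; every device, group, and volume name below is a placeholder:

```shell
# Two small RAID-5 sets, four drives each (device names are placeholders).
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[f-i]

# Join the arrays into a single logical volume with LVM.
pvcreate /dev/md1 /dev/md2
vgcreate storage /dev/md1 /dev/md2
lvcreate -l 100%FREE -n archive storage

# XFS supports very large file systems and can be grown online later
# (xfs_growfs) as more arrays are added to the volume group.
mkfs.xfs /dev/storage/archive
```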
>><br>>> Thanks,<br>>> _______________________________________________<br>>> Ale mailing list<br>>> <a href="mailto:Ale@ale.org" target="_blank">Ale@ale.org</a><br>>> <a href="http://mail.ale.org/mailman/listinfo/ale" target="_blank">http://mail.ale.org/mailman/listinfo/ale</a><br>
>> See JOBS, ANNOUNCE and SCHOOLS lists at<br>>> <a href="http://mail.ale.org/mailman/listinfo" target="_blank">http://mail.ale.org/mailman/listinfo</a><br>>><br>>><br>><br><br>-- <br>SimonTek<br><a href="tel:912-398-6704" value="+19123986704" target="_blank">912-398-6704</a><br><br>
<br></blockquote></div>