Hi Jim,

You do mean PCIe these days, don't you? Being serial point-to-point, it resolves the bus contention issue, no? Ain't much in the way of multi-PCI-bus mobos to be had any more as the migration to PCIe is in full swing. I expect PCI will be SO 20th century by Q1 '10.
What about a single 12-, 16-, or 24-port RAID controller from 3ware or Areca (native PCIe x8 on both now, I believe)? I'm sure it is much greater than PCI (even PCI-X @ 133 MHz is only ~1 GB/s theoretical, and shared at that), but what is the bandwidth on PCIe anyways?
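Answering half my own question with a quick back-of-the-envelope in Python (theoretical peaks only; real-world throughput lands well below these once protocol overhead kicks in):

# Rough bus bandwidth comparison, theoretical peaks in MB/s.

# Shared parallel buses: width (bits) / 8 * clock (MHz)
pci_33 = 32 / 8 * 33       # ~133 MB/s, shared by everything on the bus
pci_x_133 = 64 / 8 * 133   # ~1066 MB/s, still shared

# PCIe is per lane, per direction, and dedicated to the slot.
# Gen1 signals at 2.5 GT/s with 8b/10b encoding -> 250 MB/s per lane;
# Gen2 doubles that to 500 MB/s per lane.
pcie1_x8 = 250 * 8         # 2000 MB/s each way
pcie2_x8 = 500 * 8         # 4000 MB/s each way

for name, mbps in [("PCI 32/33", pci_33), ("PCI-X 133", pci_x_133),
                   ("PCIe 1.x x8", pcie1_x8), ("PCIe 2.0 x8", pcie2_x8)]:
    print("%-12s %6.0f MB/s" % (name, mbps))

So even a Gen1 x8 slot has roughly double the theoretical headroom of PCI-X 133, per direction, with nothing else contending for it.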
You are basically talking a RAID 10 type configuration, no? Using the entire drive vs. short-stroking means no complications in prepping a replacement drive; good thought.

As Richard suggested, the customer is interested in some sort of mirrored/load-balanced/failover setup with 2 systems (if it fits the budget). The how-to is where I am mostly clueless.
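If nothing else, the failover half seems approachable from the client side: try box A, fall back to box B. Here's a minimal Python sketch of that idea, with purely hypothetical host names, just to make the shape of it concrete:

# Minimal client-side failover for read-mostly traffic: walk the host
# list in priority order, first box that answers wins.
# Host names below are hypothetical placeholders.
import socket

HOSTS = ["db-primary.example.com", "db-replica.example.com"]

def fetch(path, timeout=2.0):
    last_err = None
    for host in HOSTS:
        try:
            with socket.create_connection((host, 80), timeout=timeout) as s:
                req = "GET {} HTTP/1.0\r\nHost: {}\r\n\r\n".format(path, host)
                s.sendall(req.encode("ascii"))
                chunks = []
                while True:
                    data = s.recv(4096)
                    if not data:
                        break
                    chunks.append(data)
                return b"".join(chunks)
        except OSError as err:
            last_err = err      # connect or read failed; try the next box
    raise RuntimeError("all hosts down: {}".format(last_err))

From what I've read, the server-side version of this in Linux land is usually DRBD for the block-level mirror plus Heartbeat to float the service IP between the two boxes, but I'd love to hear what you all actually run.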
Thanks,
Greg

On Tue, Jul 28, 2009 at 12:24 PM, Jim Kinney <jim.kinney@gmail.com> wrote:
> Multi-PCI-bus (not just multi-PCI-_slot_) mobo with several add-on
> SATA300 cards. Hang fast drives from each card, matching the aggregate
> drive throughput to the bandwidth of the PCI bus slot. Make pairs of
> drives on different cards be mirrors. Join all mirror pairs into a
> striped array for speed.
>
> Use the entire drive for each mirror slice so any failure is just a drive
> replacement. Add extra cooling for the drives.
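If I'm following you, the pairing works out like the sketch below; device names and throughput numbers are hypothetical ballparks, just to sanity-check the layout:

# Jim's layout as I read it: mirror pairs span the two controllers,
# then the pairs get striped (classic RAID 10).
# Device names and numbers below are made up for illustration.

DRIVE_MBPS = 100    # ballpark sustained read for one fast SATA drive
SLOT_MBPS = 1000    # rough usable bandwidth of the slot the card sits in

card_a = ["sda", "sdb", "sdc", "sdd"]   # drives hung off controller A
card_b = ["sde", "sdf", "sdg", "sdh"]   # drives hung off controller B

# One drive from each card per mirror, so a dead controller still
# leaves every mirror with one live half.
for i, (a, b) in enumerate(zip(card_a, card_b)):
    print("md%d: mirror of /dev/%s (card A) + /dev/%s (card B)" % (i, a, b))

# The sizing rule: aggregate drive throughput hung off a card should
# not exceed what its slot can carry.
per_card = len(card_a) * DRIVE_MBPS
print("per-card aggregate %d MB/s vs slot %d MB/s -> %s"
      % (per_card, SLOT_MBPS,
         "fits" if per_card <= SLOT_MBPS else "slot is the bottleneck"))

Joining the four mirrors into the stripe would then be one mdadm step, something like mdadm --create /dev/md4 --level=0 --raid-devices=4 /dev/md0 /dev/md1 /dev/md2 /dev/md3, if I have my mdadm right.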
>
> On Tue, Jul 28, 2009 at 11:35 AM, Greg Clifton <gccfof5@gmail.com> wrote:
> > Hi Guys,
> >
> > I am working on a quote for a board-of-realtors customer who has ~6,000
> > people hitting his database, presumably daily per the info I pasted below.
> > He wants fast reads and maximum uptime, perhaps mirrored systems. So I
> > thought I would pick you smart guys' brains for any suggestions as to the
> > most reliable/economical means of achieving his goals. He is thinking in
> > terms of some sort of mirror of iSCSI SAN systems.
> >
> > Currently we are only using 50 GB of drive space; I do not see going above
> > 500 GB for many years to come. What we need to do is maximize I/O
> > throughput, primarily read access (95% read, 5% write). We have over 6,000
> > people continually accessing 1,132,829 (as of today) small (<1 MB) files.
> >
> > Tkx,
> > Greg Clifton
> > Sr. Sales Engineer
> > CCSI.us
> > 770-491-1131 x 302
>
> --
> James P. Kinney III
> Actively in pursuit of Life, Liberty and Happiness
> _______________________________________________
> Ale mailing list
> Ale@ale.org
> http://mail.ale.org/mailman/listinfo/ale