Hi Richard,

Yes, a two-node load-balanced setup is the way we are leaning. I'm a hardware guy, not a software guy (all about nuts & bolts, not bits & bytes), so maybe I misspoke in referring to a database. I have asked the customer for clarification as to what database he may be running but haven't gotten a response yet. The customer is a board of realtors organization, so my presumption is that he is hosting an MLS system that keeps listings, along with tagged photos of houses for sale, in some sort of database format, hence the large number of relatively small files, but he has not explicitly told me that yet. I will gladly pass on more details as I have them. Thanks for the quick response!
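In the meantime, here is a quick back-of-the-envelope sketch in Python of what the numbers quoted below imply. The one-file-per-user-per-second peak rate is purely my assumption for illustration, not anything the customer has told me:

    TOTAL_BYTES = 50 * 1024**3  # ~50G of drive space currently in use
    FILE_COUNT = 1132829        # small (<1M) files, per the customer
    USERS = 6000                # people continually accessing the files

    # Average file size implied by the figures above.
    avg_size = TOTAL_BYTES / FILE_COUNT
    print("average file size: %.0f KiB" % (avg_size / 1024.0))  # ~46 KiB

    # Assumed (hypothetical) peak: every user fetches one file per second.
    peak_bandwidth = USERS * avg_size
    print("peak read bandwidth: %.0f MiB/s" % (peak_bandwidth / 1024.0**2))  # ~271 MiB/s

If that worst-case number is anywhere near right, splitting the read load across two balanced nodes looks very manageable.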
<br><div class="gmail_quote">On Tue, Jul 28, 2009 at 12:40 PM, Richard Bronosky <span dir="ltr"><<a href="mailto:Richard@bronosky.com">Richard@bronosky.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
> You say database, client says small files. Which is it? If it will be
> a database, what database will be used? I read "fault tolerant" to
> include "highly available", in which case I would want a two-node
> load-balanced system. I'll give more detail when you clarify.
>
> On Tue, Jul 28, 2009 at 11:35 AM, Greg Clifton <gccfof5@gmail.com> wrote:
> > Hi Guys,
> >
> > I am working on a quote for a board of realtors customer who has ~6,000
> > people hitting his database, presumably daily, per the info I pasted below.
> > He wants fast reads and maximum uptime, perhaps mirrored systems. So I
> > thought I would pick you smart guys' brains for any suggestions as to the
> > most reliable/economical means of achieving his goals. He is thinking in
> > terms of some sort of mirrored pair of iSCSI SAN systems.
> >
> > "Currently we are only using 50G of drive space; I do not see going above
> > 500G for many years to come. What we need to do is maximize IO throughput,
> > primarily read access (95% read, 5% write). We have over 6,000 people
> > continually accessing 1,132,829 (as of today) small (<1M) files."
> >
> > Tkx,
> > Greg Clifton
> > Sr. Sales Engineer
> > CCSI.us
> > 770-491-1131 x 302
</div></div><div class="im">> _______________________________________________<br>
> Ale mailing list<br>
> <a href="mailto:Ale@ale.org">Ale@ale.org</a><br>
> <a href="http://mail.ale.org/mailman/listinfo/ale" target="_blank">http://mail.ale.org/mailman/listinfo/ale</a><br>
><br>
><br>
<br>
<br>
<br>
> --
> .!# RichardBronosky #!.