[ale] Fault Tolerant High Read Rate System Configuration

Louis Zamora lzamora at outpostsentinel.com
Wed Jul 29 10:23:48 EDT 2009


To some customers, a $50k price is less than the price of downtime. If that
is the case, NEC is your best shot, as their fault tolerant servers use two
server modules. You can be watching a movie on module A, yank module A, and
the movie blips for half a second and keeps playing, without any loss of
data. Module B takes over the CD-ROM and hard drives and keeps running
while alerting you that an issue has occurred.

http://www.nec-computers.com/page.asp?id=222

If you do build your own fault tolerant solution, know that $50k is the
street price for business continuity solutions. If you need a demo of the
NEC, I can arrange that for you.

On Wed, 2009-07-29 at 09:47 -0400, Jim Kinney wrote:
> > On Tue, Jul 28, 2009 at 2:45 PM, Greg Clifton <gccfof5 at gmail.com> wrote:
> > Thanks for the wiki link, will check. But, quite right, we need to maximize
> > the bandwidth utilization of each link in the chain, as it were: drives,
> > controller, host slot. More than enough is of no use; who can drink from a
> > fire hose?
> >
> > We're talking server-grade stuff here, no 'home brew' mobos. This is a board
> > of realtors office and they don't want no stinkin' downtime, insofar as
> > possible/affordable.
> 
> Motherboard recommendations: Tyan and SuperMicro. I (and many others)
> have had excellent success with their system boards. Both offer multi-bus
> capabilities that suit this, as well as systems ready for an OS load that
> will meet this need. The wiki data plus card data and motherboard specs
> will let you engineer a properly balanced solution.
> >
> > On Tue, Jul 28, 2009 at 2:19 PM, Jim Kinney <jim.kinney at gmail.com> wrote:
> >>
> >> There are some rather fast PCI bus configurations around, but you need
> >> serious mobo data to know for sure. Most home-user PCs are crap, with
> >> even multiple PCIe slots sharing a common interconnect.
> >>
> >> For numbers on device bandwidth, see here:
> >> http://en.wikipedia.org/wiki/List_of_device_bandwidths
> >>
> >> You don't want more devices on a card than the slot it's plugged into
> >> can take :-)
> >>
> >> On Tue, Jul 28, 2009 at 1:37 PM, Greg Clifton <gccfof5 at gmail.com> wrote:
> >> > Hi Jim,
> >> >
> >> > You do mean PCIe these days, don't you? Being serial, point-to-point
> >> > data xfer resolves the bus contention issue, no? Ain't much in the way
> >> > of multi-PCI-bus mobos to be had any more, as the migration to PCIe is
> >> > in full swing. I expect PCI will be SO 20th century by Q1 '10.
> >> >
> >> > What about a single 12-, 16-, or 24-drive RAID controller from 3Ware
> >> > or Areca (PCIe x8 native, I believe, for both now)? I'm sure it is much
> >> > greater than PCI (even PCI-X @ 133 MHz is only ~1 GB/s), but what is
> >> > the bandwidth on PCIe anyways?
> >> >
> >> > You are basically talking a RAID 10 type configuration, no? Using the
> >> > entire drive vs. short stroking means no complications in prepping a
> >> > replacement drive; good thought.
> >> >
> >> > As Richard suggested, the customer is interested in some sort of
> >> > mirrored/load-balanced/failover setup with two systems (if it fits the
> >> > budget). How to do that is where I am mostly clueless.
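
(On that two-box question: here is a very rough sketch of the simplest shape
of it, assuming the workload really is ~95% reads; the hostnames, IPs and
paths are made up. The usual Linux tools for doing this properly are DRBD for
block-level mirroring plus Heartbeat/Pacemaker, or keepalived for just the
floating service IP, but the idea boils down to:)

  # On the primary, keep the standby's copy current (run from cron every few minutes):
  rsync -a --delete /srv/data/ standby:/srv/data/

  # On the standby, a crude watchdog: if the primary stops answering, take
  # over the service IP that the clients point at.
  PRIMARY=192.168.1.10
  SERVICE_IP=192.168.1.100
  if ! ping -c 3 -W 5 "$PRIMARY" > /dev/null 2>&1; then
      ip addr add "$SERVICE_IP"/24 dev eth0
      arping -U -c 3 -I eth0 "$SERVICE_IP"    # gratuitous ARP so clients follow the move
  fi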
> >> >
> >> > Thanks,
> >> > Greg
> >> >
> >> > On Tue, Jul 28, 2009 at 12:24 PM, Jim Kinney <jim.kinney at gmail.com>
> >> > wrote:
> >> >>
> >> >> A multi-PCI-bus (not just multi-PCI-_slot_) mobo with several add-on
> >> >> SATA 300 cards. Hang fast drives from each card, matching the aggregate
> >> >> drive throughput to the bandwidth of the PCI bus slot. Make pairs of
> >> >> drives on different cards be mirrors. Join all mirror pairs into a
> >> >> striped array for speed.
> >> >>
> >> >> Use the entire drive for each mirror slice so any failure is just a
> >> >> drive replacement. Add extra cooling for the drives.
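
(That mirror-pairs-then-stripe layout is what Linux md calls RAID 10, and you
can build it by hand as RAID 1 pairs joined by a RAID 0. A rough mdadm sketch,
assuming four pairs and made-up device names, with each pair split across two
controllers:)

  # RAID 1 mirror pairs; each pair takes one disk from each SATA card:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdf
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdg
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sdh
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde /dev/sdi

  # RAID 0 stripe across the four mirrors for throughput:
  mdadm --create /dev/md4 --level=0 --raid-devices=4 /dev/md0 /dev/md1 /dev/md2 /dev/md3

  # Whole drives as members, so replacing a dead disk is just swap-and-re-add,
  # e.g. after /dev/sdf dies and its replacement shows up as /dev/sdf again:
  mdadm /dev/md0 --add /dev/sdf

(Newer mdadm can also do the whole thing in one shot with --level=10.)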
> >> >>
> >> >> On Tue, Jul 28, 2009 at 11:35 AM, Greg Clifton <gccfof5 at gmail.com> wrote:
> >> >> > Hi Guys,
> >> >> >
> >> >> > I am working on a quote for a board of realtors customer who has
> >> >> > ~6,000 people hitting his database, presumably daily, per the info I
> >> >> > pasted below. He wants fast reads and maximum uptime, perhaps
> >> >> > mirrored systems. So I thought I would pick you smart guys' brains
> >> >> > for any suggestions as to the most reliable/economical means of
> >> >> > achieving his goals. He is thinking in terms of some sort of mirror
> >> >> > of iSCSI SAN systems.
> >> >> >
> >> >> > Currently we are only using 50 GB of drive space; I do not see going
> >> >> > above 500 GB for many years to come. What we need to do is maximize
> >> >> > IO throughput, primarily read access (95% read, 5% write). We have
> >> >> > over 6,000 people continually accessing 1,132,829 (as of today)
> >> >> > small (<1 MB) files.
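
(One back-of-the-envelope observation on those numbers, assuming the figures
above are roughly right: the data set is small enough that RAM, not spindle
count, will end up serving most of the reads.)

  FILES=1132829
  DATA_GB=50
  echo "average file size: ~$(( DATA_GB * 1024 * 1024 / FILES )) KB"   # works out to about 46 KB
  # With 95% reads and only ~50 GB of data today, a server with 32-64 GB of RAM
  # keeps most of the working set in page cache; the disk layout then mainly
  # matters for writes, a cold cache after a reboot, and the growth toward 500 GB.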
> >> >> >
> >> >> > Tkx,
> >> >> > Greg Clifton
> >> >> > Sr. Sales Engineer
> >> >> > CCSI.us
> >> >> > 770-491-1131 x 302
> >> >> >
> >> >> >
> >> >> --
> >> >> --
> >> >> James P. Kinney III
> >> >> Actively in pursuit of Life, Liberty and Happiness


