[ale] ATL Colocation and file server suggestions
Jeff Lightner
jlightner at water.com
Fri Jan 23 10:37:08 EST 2009
I'll have to say I strongly disagree.
First off, in a SAN environment you usually have multiple paths to the
storage, using multiple HBAs in the servers as well as multiple fibre
adapters in the storage arrays. Even with just 2 and 2, that gives you
4 paths to the data. Lose 3 and you're STILL running.
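On the Linux side, dm-multipath is what ties those paths together. As
a sketch only (the option values here are illustrative, not tuned for
any particular array), a minimal /etc/multipath.conf might look like:

    defaults {
        user_friendly_names  yes
        path_grouping_policy multibus   # spread I/O across all live paths
        failback             immediate  # return to a path as soon as it recovers
        no_path_retry        5          # queue I/O briefly if every path drops
    }

Running multipath -ll afterward should show all four paths grouped
under a single multipath device.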
Secondly, most storage arrays have a fair amount of redundancy built
in, including multiple power supplies with their own batteries that,
even in the event of a full data center power outage, allow them to
flush all cache to disk before the array itself stops operating. In
such an event, having multiple arrays wouldn't have helped, because
all the arrays would go down at the same time.
You do, of course, also want redundant fibre switches, so the loss of
one switch doesn't take down the SAN.
It's possible to lose RAID sets, but that isn't a function of the SAN;
it's a function of RAID, and you'd have the same risk with RAID built
into the server (e.g. a Dell PERC controller). You run an even greater
risk NOT doing RAID, because the loss of a single drive is apt to
destroy all your data.
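(For the host-side case, the same protection is a one-liner with Linux
md; the device names below are made up for illustration:

    # Mirror two disks so a single-drive loss doesn't destroy the data
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat   # check sync/health state

The hardware-RAID equivalent on something like a PERC is set up in the
controller BIOS instead.)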
I've been working on SANs for over 10 years now, and most of the
issues I've seen with them had more to do with misunderstanding how
the array works than anything else.
-----Original Message-----
From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Jim Kinney
Sent: Friday, January 23, 2009 9:52 AM
To: ale at ale.org
Subject: Re: [ale] ATL Colocation and file server suggestions
2009/1/20 Ken Ratliff <forsaken at targaryen.us>:
> If I had my way, I'd yank the drives out of the servers entirely,
> just build a nice sexy SAN and export disks to the servers via iSCSI.
> But that's.... expensive.
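For what it's worth, the export side of that isn't much config. A
sketch using iSCSI Enterprise Target (the target name and volume path
are hypothetical):

    # /etc/ietd.conf -- export one LVM volume as LUN 0
    Target iqn.2009-01.org.example:storage.disk1
        Lun 0 Path=/dev/vg0/export1,Type=fileio

and on each server, open-iscsi discovers the target and logs in:

    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2009-01.org.example:storage.disk1 \
             -p 192.168.1.10 --login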
Of course a SAN uses some combination of drive bunching (RAID or
otherwise) to make a pile of drives appear as a single large storage
container. And then data access is crippled by the bandwidth between
the SAN head and the systems using the data.
Repeat after me: "Single point of failure".
A SAN just moves the problem from a single server to a large number of
servers.
Unless there are TWO SANs with system-level RAID across them, relying
on a single SAN is a head shot waiting to happen.
Best case scenario, which I've tried to implement but couldn't get
past Microsoft-based managers: hardware (real hardware) RAID 10 on the
server, with a software RAID 1 between the server disks and the SAN.
Since _most_ data usage is read-intensive, the heaviest reads occur on
the local drives, leaving the bandwidth to the SAN for the infrequent
writes. (I'm looking for a way to set priorities in software RAID for
bandwidth considerations - ideas? One possibility is sketched below.)
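One possible answer, assuming Linux md (device names here are
hypothetical, with sdb local and sdc the SAN LUN): RAID 1 members can
be flagged --write-mostly, so reads are served from the other half of
the mirror whenever it's healthy:

    # Local disk takes the reads; SAN member takes writes + failover duty
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sdb1 --write-mostly /dev/sdc1

md also offers --write-behind for bounded asynchronous writes to the
write-mostly member, though that requires a write-intent bitmap.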
--
James P. Kinney III