[ale] Example large generic website for load testing?
DJ-Pfulio
DJPfulio at jdpfu.com
Thu Sep 1 12:32:57 EDT 2016
On 09/01/2016 11:44 AM, Lightner, Jeffrey wrote:
> Since you're planning on doing an rsync anyway, why bother with NFS
> mounts? Just rsync from one server to the other over that 10 Gb
> connection.
I missed that. Would definitely use straight rsync (w/o NFS), if I could.
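Something like this is all it would take - an untested sketch, with
made-up hostname and paths, and an option set that's just a reasonable
starting point:

#!/usr/bin/env python3
# Push the web docroot from the primary to the standby with rsync over
# ssh, no NFS involved. Hostname and paths are hypothetical.
import subprocess
import sys

SRC = "/srv/www/"                       # trailing slash: copy contents, not the directory
DEST = "standby.example.edu:/srv/www/"  # made-up standby host

cmd = [
    "rsync",
    "-a",           # archive mode: preserve perms, times, symlinks, etc.
    "--delete",     # make the standby an exact mirror (removes stale files)
    "--partial",    # keep partial transfers so a retry can resume
    "-e", "ssh -o BatchMode=yes",  # non-interactive ssh; fail rather than prompt
    SRC,
    DEST,
]

sys.exit(subprocess.run(cmd).returncode)  # pass rsync's exit code back to cron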
> In my experience I have far more trust in SAN than LAN, so NFS
> wouldn't be my choice unless I had no SAN storage available.
> Having said that, we do use NFSv4 for our data deduplication backup
> appliances (ExaGrid) over 10 Gb, and since going to ExaGrid last
> year, both our backups and our restores are actually outperforming
> backups to tape over SAN. Of course, a lot of this has to do with
> the deduplication/compression done on the ExaGrids themselves. The
> technology for this kind of appliance has improved greatly over what
> we had from older Data Domain and Quantum DXi appliances. (Not to
> say newer versions of those might not perform well - we didn't test
> newer ones when we went the ExaGrid route due to other
> considerations.)
SANs are a blessing and a curse.
When they work, they work very well, but in highly dynamic environments
with lots of new/old systems being swapped in and out, crap happens, and
there is always the firmware upgrade issue with SANs and SAN storage.
With hundreds of servers connected, sometimes it just isn't possible to
upgrade the firmware (anywhere in the chain) due to incompatibilities,
and a forklift upgrade is the only way. Newer systems move to newer SAN
storage, while older systems, which cannot be touched, stay on the old
storage as long as possible. Of course, this creates power, cooling, and
space issues for most DCs - you can't just leave the old stuff in there.
Virtualization helps greatly with this, unless each VM is directly
attached to storage.
If Allen isn't in control of the SAN, I can see why he'd shy away from
using it, especially if that team hasn't provided great support. Not
saying that is the situation - my systems have been extremely lucky,
with fantastic SAN/storage team support over the years (except once,
during a nasty daytime outage).
None of this probably matters to Allen.
>
>
> -----Original Message-----
> From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Beddingfield, Allen
> Sent: Thursday, September 01, 2016 9:45 AM
> To: Atlanta Linux Enthusiasts
> Subject: Re: [ale] Example large generic website for load testing?
>
> In this case, for this test, the ONLY thing I care about is disk I/O
> performance. Here's why: we currently have a setup where multiple
> physical web servers behind an A10 load balancer are SAN-attached
> and share an OCFS2 filesystem on the SAN for the Apache data
> directory. This houses sites that administration has determined to
> be mission-critical in the event of an emergency/disaster/loss of
> either datacenter. I want to replace that with VMs mounting an
> NFS share across a 10 Gb connection (also repurposing the old
> physicals as web servers), but I want to test the performance of it
> first.
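If it helps, here's a quick-and-dirty way to get a ballpark sequential
number off the NFS mount before reaching for a real tool like fio or
replaying actual Apache traffic - the mount path below is made up, so
point it at the real share:

#!/usr/bin/env python3
# Rough sequential write throughput check against an NFS mount.
# /mnt/nfs-test is a hypothetical path - substitute the real mount point.
import os
import time

PATH = "/mnt/nfs-test/throughput.bin"
BLOCK = b"\0" * (1024 * 1024)   # 1 MiB per write
TOTAL_MB = 2048                 # 2 GiB total, large enough to mean something

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())        # make sure the data actually reaches the server
elapsed = time.time() - start

print(f"sequential write: {TOTAL_MB / elapsed:.1f} MiB/s over {elapsed:.1f}s")
os.remove(PATH)

Keep in mind that lots of small files and metadata operations (closer to
what Apache actually does) behave very differently over NFS than one big
sequential write, so I'd trust a small-block fio run more than this.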
>
> New requirements for this are:
> 1. Must be available in the event of a SAN failure or storage
>    network failure in either or both datacenters.
> 2. Cannot be fully dependent on the VMware vSphere environment.
> 3. Must be able to run from either datacenter independently of
>    the other.
>
> So... one physical host in each location for NFS storage, with an
> rsync+cron job to keep the primary and standby datacenters in sync.
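One gotcha with rsync out of cron is overlapping runs when a sync takes
longer than the cron interval. A small wrapper that takes a lock avoids
that - same rsync call as the sketch further up, and the lock path and
crontab schedule here are made up:

#!/usr/bin/env python3
# Cron-safe wrapper: skip this run if the previous rsync is still going.
# Example crontab line (hypothetical schedule):
#   */15 * * * * /usr/local/bin/sync_docroot.py
import fcntl
import subprocess
import sys

LOCK = "/var/run/sync_docroot.lock"     # made-up lock path

with open(LOCK, "w") as lockfile:
    try:
        fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)   # previous sync still running; quietly skip this cycle
    cmd = ["rsync", "-a", "--delete",
           "/srv/www/", "standby.example.edu:/srv/www/"]  # hypothetical paths
    sys.exit(subprocess.run(cmd).returncode)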
>
> A pair of standalone virtualization hosts in each location, running
> the VMs from local storage, and mounting the NFS shares from the
> server(s) above.
>
> A load balancer handling the failover between the two (we currently
> have this working with the existing servers, but it is configured by
> someone else and is pretty much a black box of magic from my
> perspective).
>
> Oh, there is a second clustered/load balanced setup for database
> high availability, if you were wondering about that...
>
> The rest of it is already proven to work - I am just a bit concerned
> about the performance of using NFS. We've already built a mock-up
> of this whole setup with retired/repurposed servers, and if it works
> acceptably from a 6+ year old server, I know it will be fine when I
> order new hardware. --