[mirror-admin] Breaking the 1TB limit?
Stephen John Smoogen
smooge at gmail.com
Mon Feb 22 17:19:24 EST 2010
On Mon, Feb 22, 2010 at 2:59 PM, Dax Kelson <dkelson at gurulabs.com> wrote:
> On Mon, 2010-02-22 at 13:27 -0700, Stephen John Smoogen wrote:
>
>> Most of my experience has been helping people try to get data off of
>> them at one point or another. The big issue with the large disks is
>> that they are slow. You can overload them pretty quickly, to the point
>> where you may have 16 TB of data in your RAID but effectively you can
>> only use 1-4 TB because of load :(. Maybe the larger block sizes will
>> help with this, but a lot of it comes down to how many platters you can
>> read/write at once (at 5400 to 7200 RPM versus 10-15k).
>
> I'm genuinely curious if this is the case for I/O patterns that public
> mirrors see? I only run a private low volume mirror, so I don't know.
>
> Are the "big slow" disks really the bottleneck compared to WAN
> bandwidth? Are file requests so random that the memory cache can't
> sufficiently speed up the "big slow" disks?
Well, it depends on how much memory cache a system has. A couple of
different DVD downloads can overwhelm a lot of boxes; throw in a
bunch of different distributions and it gets worse. Most of the people
I talked with on IRC seemed to see saturation on 100 Mbit links at the
university level. But that is the usual university hardware tradeoff:
they might have gotten a lot of disks but only 8 GB of RAM, or they
might have 32 GB of RAM and no disks :).
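To put a rough number on the cache argument above, here is a back-of-envelope sketch. The ISO size, OS overhead, and the helper name are my own illustrative assumptions, not figures from this thread; it only shows why a handful of *distinct* DVD images can blow past the page cache on an 8 GB box, pushing reads back onto the slow disks:

```python
# Back-of-envelope sketch (assumed numbers, not from the thread): how many
# distinct DVD-sized ISOs fit in a mirror box's page cache at once. Once the
# working set exceeds this, requests fall through to the slow spindles.

DVD_ISO_GB = 4.3  # assumed typical single-layer DVD install image


def concurrent_isos_cached(ram_gb, iso_gb=DVD_ISO_GB, os_overhead_gb=1):
    """Whole ISOs that fit in page cache after reserving RAM for the OS."""
    return int((ram_gb - os_overhead_gb) // iso_gb)


# The two university configurations mentioned above:
print(concurrent_isos_cached(8))   # -> 1: a second distinct ISO evicts the first
print(concurrent_isos_cached(32))  # -> 7: far more of the mix served from RAM
```

The point is only the ratio: with 8 GB of RAM a mix of several distributions' DVDs is effectively uncacheable, while 32 GB keeps most of a popular working set in memory.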
> Dax Kelson
--
Stephen J Smoogen.
Ah, but a man's reach should exceed his grasp. Or what's a heaven for?
-- Robert Browning
--
More information about the Mirror-admin mailing list