<p dir="ltr">No jumpers that I can recall. All the sas jumpers only limit the size of the addressable drive space and all of these show as full size.</p>
<div class="gmail_extra"><br><div class="gmail_quote">On Aug 21, 2016 4:03 PM, "Jeff Hubbs" <<a href="mailto:jhubbslist@att.net">jhubbslist@att.net</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>Are you sure none of the drives are
jumpered out-of-the-box for a lower speed? That happened to me a
few years back; I caught it before I sledded the drives. <br>
<br>
On 8/21/16 11:06 AM, Jim Kinney wrote:<br>
</div>
<blockquote type="cite">
<p dir="ltr">Yep. 6Gbps is the interface. But even at a paltry
100Mbps actual IO to the rust layer, the 12 disk raid 6 array
should _easily_ be able to hit 1Gbps of data IO plus control
bits. The 38 disk array should hit nearly 4Gbps.</p>
<p dir="ltr">The drives are Toshiba, Seagate and HGST. They all
are rated for rw in the 230-260 MBps sustained (SATA can only do
bursts at those rates) so 1.8 Gbps actual data to the platters.</p>
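<p dir="ltr">Rough arithmetic behind those figures (a sketch; RAID 6 carries data on n-2 of n disks, and the per-drive rates are the ones quoted above):</p>
<pre>
# Back-of-the-envelope check of the array numbers above.
# RAID 6 spends two disks on parity, so n-2 disks carry data.
def raid6_throughput_gbps(disks, per_disk_mbps):
    data_disks = disks - 2
    return data_disks * per_disk_mbps / 1000.0

# Pessimistic case: 100 Mbps of real IO per spindle.
print(raid6_throughput_gbps(12, 100))   # 1.0 Gbps
print(raid6_throughput_gbps(38, 100))   # 3.6 Gbps ("nearly 4")

# Rated case: ~230 MB/s sustained = ~1840 Mbps per drive.
print(raid6_throughput_gbps(12, 1840))  # ~18.4 Gbps theoretical ceiling
print(raid6_throughput_gbps(38, 1840))  # ~66 Gbps - past the 24 Gbps links
</pre>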
<p dir="ltr">I'm expecting a sustained 15Gbps on the smaller array
and 48Gbps on the larger. My hardware limits are at the PCIe
bus. All interconnects are rated for 24Gbps for each
quad-channel connector. It really looks like a kernel issue as
there seems to be waits between rw ops.</p>
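<p dir="ltr">One way to see those waits directly is to sample /proc/diskstats over an interval; a minimal sketch using the documented kernel field layout, assuming whole-disk sd* device names:</p>
<pre>
# Sketch: sample /proc/diskstats to see real per-device throughput
# and time spent in read/write over an interval.
import time

def snapshot():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            p = line.split()
            name = p[2]
            if not name.startswith("sd") or name[-1].isdigit():
                continue  # whole disks only, skip partitions
            # sectors read, sectors written, ms reading, ms writing
            stats[name] = (int(p[5]), int(p[9]), int(p[6]), int(p[10]))
    return stats

before = snapshot()
time.sleep(10)
after = snapshot()
for name, b in before.items():
    a = after[name]
    mb = (a[0] - b[0] + a[1] - b[1]) * 512 / 1e6  # sectors are 512 bytes
    busy_ms = (a[2] - b[2]) + (a[3] - b[3])
    print(f"{name}: {mb / 10:.1f} MB/s, {busy_ms} ms in r/w over 10 s")
</pre>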
<p dir="ltr">Yeah. I work in a currently non-standard Linux field.
Except that Linux _is_ what's always used in the HPC, big-data
arena. Fun! ;-)</p>
<p dir="ltr">I don't buy brand name storage arrays due to budget.
I've been able to build out storage for under 50% of their cost
(including my time) and get matching performance (until now). </p>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Aug 21, 2016 10:04 AM, "DJ-Pfulio"
<<a href="mailto:DJPfulio@jdpfu.com" target="_blank">DJPfulio@jdpfu.com</a>> wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On
08/20/2016 10:00 PM, Jim Kinney wrote:<br>
> 6Gbps SAS. 12 in one array and 38 in another. It should
saturate the bus.<br>
<br>
6Gbps is the interface speed. No spinning disks can push
that much data<br>
to my knowledge - even SAS - without SSD caching/hybrids.
Even then,<br>
2Gbps would be my highest guess at the real-world
performance (probably<br>
much lower in reality).<br>
<br>
<a href="http://www.tomsitpro.com/articles/best-enterprise-hard-drives,2-981.html" rel="noreferrer" target="_blank">http://www.tomsitpro.com/artic<wbr>les/best-enterprise-hard-drive<wbr>s,2-981.html</a><br>
<br>
You work in a highly specialized area, but most places would avoid striping more than 8 devices for maintainability reasons. Larger stripes don't provide much more throughput and greatly increase issues when something bad happens. In most companies I've worked for, 4-disk stripes were the default, since they provide about 80% of the theoretical performance gain that any striping can offer. That was the theory at the time.<br>
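<br>
As a toy illustration of why wider stripes stop paying off (numbers here are made up, not measurements - the shared limit stands in for a controller, bus, or cache ceiling):<br>
<pre>
# Toy model: aggregate throughput scales with spindle count only
# until some shared limit (controller, bus, cache) caps it.
def stripe_throughput(disks, per_disk_mbs=200, shared_limit_mbs=1000):
    return min(disks * per_disk_mbs, shared_limit_mbs)

for n in (2, 4, 8, 16):
    print(n, "disks:", stripe_throughput(n), "MB/s")
# 2 disks: 400, 4 disks: 800, 8 and 16 disks: pinned at 1000 MB/s
</pre>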
<br>
Plus many non-cheap arrays will have RAM for caching, which can limit how much the actual disks are touched. Since you didn't mention EMC/Netapp/HDS, I assumed those weren't being used.<br>
<br>
Of course, enterprise SSDs changed all this, but they would be cost-prohibitive at the sizes you've described (for most projects). I do know a few companies which run all their internal VMs on RAID10 SSDs and would never go back. They aren't doing "big data."<br>
<br>
</blockquote>
</div>
</div>
</blockquote>
</div>
</blockquote></div></div>