[ale] Fwd: [sllug-members]: [sllug-members] rsync and I/O system performance

Jim Kinney jim.kinney at gmail.com
Fri Apr 23 10:15:54 EDT 2010


Some time back, a question about rsync being really slow came up on ALE.
The SLLUG group sent this out and I am forwarding it here. It has some
very good details about memory usage and a kernel tweak that flushes
memory buffers faster so that I/O problems don't develop.

----------------------------------------
This posting today on the rsync users list <rsync at lists.samba.org>
raises an interesting point that most of us probably have not thought
much about before, but might find useful in improving our Linux
systems' performance:

>> ...
>> Date: Thu, 22 Apr 2010 15:30:36 -0700
>> From: Erich Weiler <weiler at soe.ucsc.edu>
>> User-Agent: Thunderbird 2.0.0.22 (X11/20090625)
>> To: rsync at lists.samba.org
>> Subject: Re: Odd behavior
>>
>> Well, I solved this problem myself, it seems.  It was not an rsync
>> problem, per se, but it's interesting anyway on big filesystems like
>> this so I'll outline what went down:
>>
>> Because my rsyncs were mostly just statting millions of files very
>> quickly, RAM filled up with inode cache.  At a certain point, the kernel
>> stopped allowing new cache entries to be added to the slab memory it had
>> been using, and was slow to reclaim memory on old, clean inode cache
>> entries.  So it basically slowed the I/O of the computer to barely
>> anything.
>>
>> Slab memory can be checked by looking at the /proc/meminfo file.  If you
>> see that slab memory is using up a fair portion of your total memory,
>> run the 'slabtop' program to see the top offenders.  In my case, it was
>> the filesystem that was screwing me (by way of the kernel).
>>
>> I was able to speed up the reclaiming of clean, unused inode cache
>> entries by tweaking this in the kernel:
>>
>> # sysctl -w vm.vfs_cache_pressure=10000
>>
>> The default value for that is 100; higher values release memory for
>> dentries and inodes faster.  It helped, but my rsyncs were still
>> filling the cache faster than the kernel was reclaiming it, so it
>> didn't help that much.  What really fixed it was this:
>>
>> # echo 3 > /proc/sys/vm/drop_caches
>>
>> That effectively clears ALL dentry and inode entries from slab memory
>> immediately.  When I did that, memory usage dropped from 35GB to 500MB,
>> my rsyncs fired themselves up again magically, and the computer was
>> responsive again.
>>
>> Slab memory began to fill up again of course, as the rsyncs were still
>> going.  But slowly.  For this edge case, I'm just going to configure a
>> cron job to flush the dentry/inode cache every five minutes or so.  But
>> things look much better now!
>>
>> A word of warning for folks rsyncing HUGE numbers of files under
>> Linux.  ;)
>>
>> As a side note, Solaris does not seem to have this problem, presumably
>> because the kernel handles inode/dentry caching in a different way.
>>
>> -erich
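Pulling the commands from the message above into one place, here is a
sketch of how one might check and tune this on a Linux box. The
vfs_cache_pressure value of 10000 and the five-minute flush interval
come straight from Erich's message; everything else is illustrative.
The tuning commands themselves need root, so they are shown as
comments here:

```shell
# Read-only checks (no root needed): how much memory the kernel's
# slab caches are using, and the current dentry/inode reclaim pressure.
grep '^Slab:' /proc/meminfo
cat /proc/sys/vm/vfs_cache_pressure   # the kernel default is 100

# The tweaks from the message, which must be run as root:
#
#   # Reclaim clean, unused dentry/inode cache entries more aggressively
#   # (values above the default of 100 release them faster):
#   sysctl -w vm.vfs_cache_pressure=10000
#
#   # Drop the clean caches immediately:
#   echo 3 > /proc/sys/vm/drop_caches
#
# And a root crontab entry to flush the caches every five minutes,
# as described in the message:
#
#   */5 * * * * echo 3 > /proc/sys/vm/drop_caches
```

One detail worth knowing: writing 3 to drop_caches drops the page cache
as well as the dentry and inode caches; writing 2 drops only dentries
and inodes, which is closer to what the message is actually after.
Either way, only clean (already-written-back) entries are dropped, so
it is non-destructive.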


-- 
James P. Kinney III
Actively in pursuit of Life, Liberty and Happiness

