Some time back, a question about rsync being really slow came up on ALE. The SLLUG group sent this out and I am forwarding it here. It has some very good details about memory usage, and a kernel tweak to flush memory buffers faster so you don't develop I/O issues.
----------------------------------------
This posting today on the rsync user list <rsync.lists.samba.org>
raises an interesting point that most of us probably have not thought
much about before, but might find useful in improving our Linux
systems' performance:
>> ...
>> Date: Thu, 22 Apr 2010 15:30:36 -0700
>> From: Erich Weiler <weiler@soe.ucsc.edu>
>> User-Agent: Thunderbird 2.0.0.22 (X11/20090625)
>> To: rsync@lists.samba.org
>> Subject: Re: Odd behavior
>>
>> Well, I solved this problem myself, it seems. It was not an rsync
>> problem, per se, but it's interesting anyway on big filesystems like
>> this, so I'll outline what went down:
>>
>> Because my rsyncs were mostly just statting millions of files very
>> quickly, RAM filled up with inode cache. At a certain point, the kernel
>> stopped allowing new cache entries to be added to the slab memory it had
>> been using, and was slow to reclaim memory from old, clean inode cache
>> entries. So it basically slowed the computer's I/O to barely anything.
>>
>> Slab memory can be checked by looking at the /proc/meminfo file. If you
>> see that slab memory is using up a fair portion of your total memory,
>> run the 'slabtop' program to see the top offenders. In my case, it was
>> the filesystem that was screwing me (by way of the kernel).
>>
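>> For example, something like this (as root) shows overall slab usage
>> and then the largest individual caches:
>>
>> # grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
>> # slabtop -o -s c | head -20
>>
>> In a case like this one, the dentry cache and your filesystem's inode
>> cache (ext4_inode_cache or similar, depending on the filesystem) will
>> likely be near the top.
>>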
>> I was able to speed up the reclaiming of clean, unused inode cache
>> entries by tweaking this in the kernel:
>>
>> # sysctl -w vm.vfs_cache_pressure=10000
>>
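>> (If you want a setting like that to persist across reboots, you can
>> put "vm.vfs_cache_pressure = 10000" in /etc/sysctl.conf and load it
>> with 'sysctl -p'; treat the exact value as a starting point to tune.)
>>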
>> The default value for that is 100; higher values make the kernel
>> release dentry and inode cache memory faster. It helped, but my rsyncs
>> were still creating cache entries faster than the kernel was releasing
>> them, so it didn't help that much. What really fixed it was this:
>>
>> # echo 3 > /proc/sys/vm/drop_caches
>>
>> That effectively clears ALL dentry and inode entries from slab memory
>> immediately. When I did that, memory usage dropped from 35GB to 500MB,
>> my rsyncs fired themselves up again magically, and the computer was
>> responsive again.
>>
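>> (One caveat: drop_caches only releases clean, unused entries, so it
>> is worth running 'sync' first to write out any dirty data:
>>
>> # sync; echo 3 > /proc/sys/vm/drop_caches
>>
>> Echoing 2 instead of 3 drops only dentries and inodes and leaves the
>> page cache alone.)
>>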
>> Slab memory began to fill up again of course, as the rsyncs were still
>> going. But slowly. For this edge case, I'm just going to configure a
>> cron job to flush the dentry/inode cache every five minutes or so. But
>> things look much better now!
>>
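>> Something like this in root's crontab would do it (syncing first just
>> lets more of the cache actually be freed):
>>
>> */5 * * * * sync; echo 3 > /proc/sys/vm/drop_caches
>>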
>> A word of warning for folks rsyncing HUGE numbers of files under Linux. ;)
>>
>> As a side note, Solaris does not seem to have this problem, presumably
>> because the kernel handles inode/dentry caching in a different way.
>>
>> -erich

--
James P. Kinney III
Actively in pursuit of Life, Liberty and Happiness