all,

I'm playing with a script that monitors CPU utilization (mysqld in my case) by taking the delta of user and system jiffies in /proc/$PID between loops at regular intervals. Since individual jiffy mileage may vary, I did a calibration test by watching a gzip -c bigarsefile.gz > /dev/null. 100% of the time the delta between loops is 100 jiffies/sec while top reports that PID at 100% CPU utilization (i.e. saturating one core, which is the goal for that benchmark). This leads me to conclude that a jiffy on my box (dual quad-core Dell 2950 running openSUSE 11, FWIW) is 0.01 CPU-seconds, so I should have a theoretical maximum of 800 jiffies/sec on this box (8 cores).

The problem (or the thing I can't explain) is that I have a data point for my mysqld PID where the 10 second delta is 9,402 (7,586 user / 1,834 sys), which is almost 20% above what should be possible. It would make sense if a jiffy were 1/120th of a second (=> 9,600 would be available in 10 sec), but (again) my gzip test consistently produces 500 per 5 seconds (=> 100/s), so I don't know what to make of this data point. I thought about lack of timing precision in my measurements, but 18%? For a 30 second sample? When the lowest 5 second value for the gzip test was 498 and the highest was 502?
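For reference, here's roughly the kind of sampling loop I mean (a minimal Python sketch, not my actual script; it assumes the utime/stime field positions documented in proc(5) and gets jiffies-per-second from sysconf(SC_CLK_TCK)):

```python
import os
import sys
import time

def read_jiffies(pid):
    """Return (utime, stime) in jiffies for a PID from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # The comm field (2nd) can contain spaces/parens, so split after the last ')'.
    fields = data[data.rfind(')') + 2:].split()
    # With pid and comm stripped, utime and stime (fields 14 and 15 in proc(5))
    # land at indexes 11 and 12.
    return int(fields[11]), int(fields[12])

def main(pid, interval=10):
    clk_tck = os.sysconf('SC_CLK_TCK')  # jiffies (clock ticks) per second, typically 100
    u0, s0 = read_jiffies(pid)
    while True:
        time.sleep(interval)
        u1, s1 = read_jiffies(pid)
        delta = (u1 - u0) + (s1 - s0)
        # saturating one core == clk_tck jiffies per second
        pct_one_core = 100.0 * delta / (clk_tck * interval)
        print(f"delta={delta} jiffies  ~{pct_one_core:.1f}% of one core")
        u0, s0 = u1, s1

if __name__ == "__main__":
    main(int(sys.argv[1]))
```

Run it as e.g. `python3 cpudelta.py <mysqld_pid>`; by the reasoning above, the per-interval delta should never exceed clk_tck * interval * 8 on this box.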
Any idea what to make of this? I need to have confidence in this data for the scaling testing/planning I'm doing...