[ale] 2 questions memory related

Michael B. Trausch mike at trausch.us
Thu Jun 16 22:08:07 EDT 2011


On 6/16/2011 8:01 PM, Scott Castaline wrote:
> 1. Just had 1 stick of 2GB RAM go bad. I had originally bought it as 
> part of a set of 4 sticks for a total of 8GB. My question is how 
> critical is it to buy in matched sets? Couldn't I buy just a 
> replacement for that stick? I don't overclock, so I don't push them, 
> so I can't see where I have to have handpicked memory. I would plan on 
> getting the same exact mfg part on a single-stick basis. Am I wrong 
> thinking this way?

You should be fine ordering the part separately. The only thing that you 
need to be sure of is that the replacement matches the specs (type, 
speed, and timings) of the memory that is in the system currently.
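
If you don't have the original part number handy, something like 
dmidecode can read the specs of the modules still installed (run as 
root; the exact output depends on your BIOS, so this is just a sketch):

    # Show type, speed, and part number of each installed module
    sudo dmidecode --type memory | grep -E 'Type:|Speed|Part Number'

Matching the part number printed there is the safest bet when ordering 
a single replacement stick.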

> 2. I am currently running dd if=/dev/urandom of=/dev/sde on a new 2TB 
> drive. I was logged into my GUI desktop (KDE) and after several hours 
> my desktop started disintegrating and window apps were becoming 
> unrecognisable, but I was able to close out most of them. I switched 
> over to one of the text VTs and top was reporting that out of 4GB of 
> RAM I had less than 500MB available and I was starting to use swap. 
> I've done this in the past with only 4GB of RAM on different hardware 
> and an earlier version of Fedora, without any problems like this. I'm 
> not doing anything that different than in the past. My question is, 
> what command(s) would tell me what is actually hogging all of the RAM? 
> I don't want to kill the process as I'm anticipating it taking about 4 
> days to complete. At least on a 2-core AMD it took about 44 hours for 
> 1TB; I'm not sure how much quicker a 4-core processor will do this. I 
> was also running 800MHz RAM then and now I'm running 1333MHz RAM, 
> though of course I realize that is not the real speed of the bus clock.

The 'dd' process is not going to run (significantly) differently on a 
multicore/multiprocessor system than on a unicore/uniprocessor system, 
nor would it be likely to even if it were written as a multithreaded 
program. The reason is that I/O bandwidth, not CPU, is the bottleneck 
for this command. Even on a system that can generate pseudorandom bytes 
more quickly, the hard disk drive is going to be the limiting factor.
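
If you want to confirm that and estimate the time remaining, GNU dd 
will print its current byte count and throughput when you send it 
SIGUSR1 from another terminal (this assumes the coreutils dd that 
Fedora ships; BSD dd listens for SIGINFO instead):

    # Find the PID of the running dd and ask it to report progress
    pgrep -x dd
    sudo kill -USR1 <pid>

dd then writes something like "123456789012 bytes (123 GB) copied, 
44000 s, 2.8 MB/s" to its stderr, without interrupting the copy.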

In order to find out which process is hogging memory, I'd recommend 
"htop". Be sure to sort the output by the MEM% column.

     --- Mike

