[ale] perl & apache
Eric Z. Ayers
eric.ayers at mindspring.com
Fri Aug 6 07:05:27 EDT 1999
Jerry,
You may very well be bumping up against the maximum number of processes
in Linux, which is 512 by default. To raise it, you have to edit this
file and re-compile the kernel.
/usr/src/linux/include/linux/tasks.h:
#ifndef _LINUX_TASKS_H
#define _LINUX_TASKS_H
/*
* This is the maximum nr of tasks - change it if you need to
*/
#ifdef __SMP__
#define NR_CPUS 32 /* Max processors that can be running in SMP */
#else
#define NR_CPUS 1
#endif
#define NR_TASKS 512
#define MAX_TASKS_PER_USER (NR_TASKS/2)
#define MIN_TASKS_LEFT_FOR_ROOT 4
#endif
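Once you've bumped NR_TASKS, the rebuild goes something like this (just
a sketch, assuming an i386 box booting with LILO; adjust the paths and
boot setup for your machine):

# cd /usr/src/linux
# make dep && make clean
# make zImage
# make modules && make modules_install
# cp arch/i386/boot/zImage /boot/vmlinuz && lilo

Then reboot into the new kernel.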
Also, have you tried upping the maximum number of files allowed on the
system?
(This is from /usr/src/linux-2.2.1/Documentation/proc.txt, but it
looks like this tunable was also in 2.0.36.)
file-nr and file-max
The kernel allocates file handles dynamically, but as yet
doesn't free them again.
The value in file-max denotes the maximum number of file handles
that the Linux kernel will allocate. When you get a lot of error
messages about running out of file handles, you might want to raise
this limit. The default value is 4096. To change it, just write the
new number into the file:
# cat /proc/sys/fs/file-max
4096
# echo 8192 > /proc/sys/fs/file-max
# cat /proc/sys/fs/file-max
8192
This method of revision is useful for all customizable parameters
of the kernel - simply echo the new value to the corresponding
file.
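One caveat: values echoed into /proc don't survive a reboot. On a
Red Hat system, for example, you can append the echo to a boot script
(assuming the stock /etc/rc.d/rc.local is still run at boot):

# echo 'echo 8192 > /proc/sys/fs/file-max' >> /etc/rc.d/rc.local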
The three values in file-nr denote the number of allocated file
handles, the number of used file handles, and the maximum number of
file handles. When the allocated file handles come close to the
maximum, but the number of actually used ones is far behind, you've
encountered a peak in your usage of file handles and you don't need
to increase the maximum.
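So before recompiling anything, it's worth comparing the file-nr counts
against file-max. For example (the output here is illustrative, not
from a real machine):

# cat /proc/sys/fs/file-nr
1157    970     4096

Here 1157 handles have been allocated, 970 are in use, and the ceiling
is 4096, so there is still plenty of headroom.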
However, there is still a per process limit of open files, which
unfortunately can't be changed that easily. It is set to 1024 by
default. To change this you have to edit the files limits.h and
fs.h in the directory /usr/src/linux/include/linux. Change the
definition of NR_OPEN and recompile the kernel.
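You can check what your tree currently defines before editing (the
exact output depends on your kernel version):

# grep -n NR_OPEN /usr/src/linux/include/linux/limits.h \
       /usr/src/linux/include/linux/fs.h

Change the #define in both files to the same value, then recompile and
reboot as described above.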
-Eric.
jj at spiderentertainment.com writes:
> I run Red Hat 5.2. We constantly pump out about 2MB/sec. The script in
> question gets accessed maybe 50 times a day. Zombies quickly disappear.
> I've got 512MB of RAM and lots of spare HD. I average about 250-350
> processes. It hardly ever goes above 450.
>
> The only thing out of the ordinary that I see when I run ps xa is this:
>
> 21328 ? R 401:13 /usr/local/apache/bin/httpd
> 22254 ? R 311:47 /usr/local/apache/bin/httpd
> 24095 ? R 267:05 /usr/local/apache/bin/httpd
>
> Please help.
>
> Thank you
>
> Zhongbin Yu "jerry" wrote:
>
> > #I get this annoying error in the apache 1.3.6 error log file. How would
> > #I go about fixing it ? (The cgi is in perl)
> > #
> > #(11)Resource temporarily unavailable: couldn't spawn child process:
> > #/disk1/web/jason/cgibin/guest.cgi
> >
> > If you keep spawning processes by forking for each new guest login, the
> > total number of processes may exceed what your OS allows for the user
> > the CGI or httpd runs as. If the site is heavily used, you probably
> > need to bump up the kernel parameter for the number of processes a
> > user can own. If it is not REALLY heavily hit, then you need to watch
> > out for hanging child processes (zombies). The parent process needs to
> > wait for its child processes. See 'man perlipc' for more details.
> >
> > A Java servlet runs as one program and multithreads to handle more
> > connections, I think. It could be more suitable if your site is
> > heavily hit.
> >
> > Also, some other things on the OS can cause this problem too. This
> > message may mean different things on different OSes. Mentioning your
> > OS may help knowledgeable folks on this list help you quicker.
> >
> > $0.02
> >
> > Jerry