[ale] to lvm or not lvm is my dilemma
jc.lightner at comcast.net
Sat Jan 3 21:39:44 EST 2026
Monitoring and housekeeping are important things. However, sometimes
things get borked more quickly than monitoring can alert you. Of course,
if you have a monitoring tool set up, you should have something separate
(i.e. another system) verifying that the monitoring setup itself is still
operational.
A truism of monitoring is that it can't be done without impacting things,
even if only marginally. Whatever monitoring tools or methods you use will
consume resources and affect performance.
Funny, I was just thinking overnight of a major migration we did at one
employer. Multiple teams spent a few months preparing and testing the new
environment, which was designated "Performance and Scalability". All signs
were that it would perform much better than the environment it was
replacing. Shortly after cutover a Senior VP called, irate at how poorly it
was performing. Surprised, I began looking into it and found multiple
people from varied departments running "top", as well as an in-house
monitoring tool for our primary scheduling system and other monitoring
tools. I had to explain to the SVP that UNIX/Linux is designed for
multi-user, multi-process usage, but when every process is competing for the
same resources it doesn't really do well. That was all on top of our usual
monitoring system. Once that SVP stopped "everyone" from doing monitoring
that wasn't part of their jobs, the system performed as well as it had in
testing.
Years ago it was common to have cron jobs that looked for and removed "core"
files, as any memory dump could quickly fill up space. Oracle, in its great
wisdom, chose to name one of its directories "core", which caused some
initial "fun".
-----Original Message-----
From: Ale <ale-bounces at ale.org> On Behalf Of Steve Litt via Ale
Sent: Saturday, January 3, 2026 7:13 PM
To: ale at ale.org
Cc: Steve Litt <slitt at troubleshooters.com>
Subject: Re: [ale] to lvm or not lvm is my dilemma
On Fri, 2 Jan 2026 15:51:32 -0500
Jeff Lightner via Ale <ale at ale.org> wrote:
> No. If you increase physical RAM you often want to add SWAP
> devices.
>
> Having all your directories in a single partition instead of separate
> filesystems can cause issues down the road. For years I've made it
> a point to have /boot, /root, /var and /usr separate because NOT
> doing so caused major issues. Usually I'll have sub-mounted
> filesystems as well for things I expect to use a fair amount of
> space. Having /root fill causes all sorts of issues that take a
> lot of effort to recover from (sometimes requiring a reinstall).
Running out of space on the whole device can easily be guarded against by
having a `df /` in your boot sequence or in your ~/.xinitrc. If something
goes really wrong and a terabyte file gets written, you can shut down, boot
from a rescue CD, and quickly clean up the mess. That happens to me maybe
once every 5 years. With /var, I'm assuming you have functioning log
rotation that's adjustable. With /usr, I have everything and the kitchen
sink in my /usr tree and it's only 54G. If I absolutely needed the space,
say I had a single 128GB disk, I could do better housekeeping on old
versions of TeX and kernels.
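For what it's worth, that check is easy to make a little louder. The
following is just a sketch for a POSIX shell in ~/.xinitrc or an rc script,
and the 90% threshold is an arbitrary number of mine:

# warn at startup if the root filesystem is over 90% full
usage=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
[ "$usage" -gt 90 ] && printf 'WARNING: / is %s%% full\n' "$usage"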
I understand that you're using mounted partitions analogously to circuit
breakers in your house, but the cost is that things can get borked while
your drive still has plenty of space, and in addressing those borks there's
a non-trivial chance of making a data-losing mistake.
By the way, I currently have /var on its own 192 GB partition because that's
how I learned how to do things in 1999, but next time I'll probably just
have it off the root.
Thank you for your post. In response I found that most of my space on /var
was gone, investigated, found a galaxy's worth of tiny files, and deleted
them, so now I'm using less than half of /var. I had a similar experience
on my /scratch partition and brought usage down from 5.something TB to
1.9 TB, including probably millions of tiny files.
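For anyone who wants to hunt down that kind of pile-up, here is a rough
sketch of the sort of thing I mean, rather than the exact commands I ran:
count the files under each top-level directory and see where the millions
live.

# count regular files per top-level directory under /var, biggest first
for d in /var/*/; do
    printf '%8d %s\n' "$(find "$d" -xdev -type f | wc -l)" "$d"
done | sort -rn | head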
Housekeeping is a huge part of this whole discussion, and thanks for
prodding me into doing housekeeping on stuff all the way back to 2003.
With those millions of files gone, for the first time in over a year I can
successfully complete an updatedb run. I've sorely missed being able to
search with the locate command.
By the way, check out the following command:
find ./* -exec ls -sadF {} \; | sort -n | tee ~/scratch_files.txt
I can then copy ~/scratch_files.txt to ./danger.sh in the current directory,
use Vim to delete the lines for the smaller files, then delete the lines for
any big files I still want to keep, and finally turn the remaining lines
into a ./danger.sh shellscript (not set executable). Then I just run
ksh ./danger.sh to get rid of the obscenely big files; since most people
don't have ksh, they can just use sh. Immediately afterward, delete
danger.sh so it doesn't create problems down the line.
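The edit from listing lines to rm commands can be a single substitution.
This is just one hypothetical way to do that step (GNU sed's -i assumed);
it also assumes the surviving entries are plain files, so there's no -F
suffix, and that the names contain no spaces or quote characters:

# turn " 123456 ./path/to/bigfile" lines into "rm -f ./path/to/bigfile"
sed -i 's/^ *[0-9]* /rm -f /' danger.sh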
Soon I'm going to create a program, to be run every day, that performs a
du -s on strategic trees and alarms when one goes above a certain level.
Such a program will be even more helpful on my next computer, when I
implement bind mounts for what used to be my partitions, because you're
right: I sure don't want / or the single huge partition on the spinning
rust to fill up. I'll be prompted to do housekeeping on more than a
once-per-decade basis :-).
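A first rough sketch of that daily check might look something like the
following; the watched trees and the 100 GiB limit are placeholders rather
than the real list I'll end up with:

#!/bin/sh
# daily cron job: complain when a watched tree grows past the limit
LIMIT_KB=$((100 * 1024 * 1024))    # 100 GiB expressed in 1 KB blocks
for tree in /var /scratch /home; do
    kb=$(du -skx "$tree" 2>/dev/null | awk '{ print $1 }')
    [ "${kb:-0}" -gt "$LIMIT_KB" ] &&
        printf 'ALARM: %s is using %s KB\n' "$tree" "$kb"
done
exit 0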
SteveT
Steve Litt
Featured book: Troubleshooting Techniques of the Successful Technologist
http://www.troubleshooters.com/techniques