<html><head></head><body><div dir="auto">Still have to put the data on the disk somewhere! I do use bind mounts when something changes and a chunk needs to be visible somewhere else. Or a dir in a partition needs to be read-only.<br><br>But lvextend is live, so it's real time.</div><br><br><div class="gmail_quote"><div dir="auto">On August 31, 2023 5:45:55 PM EDT, Steve Litt via Ale <ale@ale.org> wrote:</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
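(For reference, the live lvextend workflow looks roughly like this — the volume group, LV, and size here are made up:)

```shell
# Grow logical volume "home" in volume group "vg0" by 10 GiB and resize
# the filesystem in the same step (-r) while it stays mounted.
# ext4 and xfs both grow online; shrinking ext4 requires unmounting first,
# and xfs cannot shrink at all.
lvextend -r -L +10G /dev/vg0/home
```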
<pre class="k9mail"><div dir="auto">What's your opinion of using bind mounts to create realtime<br>stretchable/shrinkable partitions as opposed to LVM?<br><br>Jim Kinney via Ale said on Wed, 30 Aug 2023 20:47:05 -0400<br><br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;"><div dir="auto">Lvm is a total lifesaver!! You never really know how to partition a<br>drive, so lvm can help expand a partition. On. The. Fly! Add a new<br>drive or add a new raid box and lvm says, sure, let's use that!<br><br>It also supports software raid which irritates the hardware purists<br>with deeper pockets than me. A software raid10 is cheap, fast,<br>reliable, and if I really need it, I can clone the box into a new mobo<br>with some boot magic and it resurrects the added blank drives in old<br>and new boxes for me without a pair of cards that cost more than the 4<br>new drives. Spinning rust sata drives with 5 year warranties are<br>totally worth it.<br><br>Yeah. Lvextend is a lifesaver. Lvreduce is awesome as long as the<br>filesystem is not xfs. Ext4 supports shrink. ZFS of course replaces<br>ext4 and raid and lvm but does eat more CPU in Linux land. Pretty sure<br>ZFS borders on being a filesystem cult but the prophets have some<br>really good points. Maybe one day it'll get into the mainline kernel.<br>Probably right after gluster. 😁<br><br>I'm almost embarrassed to admit that I no longer fix my home gear. If<br>it pukes, it just gets replaced. Hardware is mostly pretty reliable<br>(not gonna discuss HPC/supercomputers running a hot tub style liquid<br>cooling solution). There's used Dell/Supermicro server gear in Suwanee<br>data centers that hits eBay. It's usually 5-7 years old and lasts<br>another 3-5 years in the home shop. 3 on the Supermicro, 5 on the<br>Dell. But at $350 for a dual CPU, 8-12 core, 64-128G ram, add your own<br>hard drives, I'm happy.<br><br>I do need to kick the backups again. 
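(The software raid10 setup described above can be sketched roughly as follows — the device names, volume group name, and size are all hypothetical:)

```shell
# Pool four disks into one volume group:
pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate vg0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Carve out a raid10 logical volume: data striped across 2 legs,
# each leg mirrored once, so all 4 drives are used:
lvcreate --type raid10 --mirrors 1 --stripes 2 -L 500G -n data vg0
mkfs.ext4 /dev/vg0/data
```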
Long overdue for the bare metal<br>recovery of the entire backup system. Thanks for the reminder of "aging<br>backups".<br><br>On Wed, Aug 30, 2023, 5:36 PM Charles Shapiro <hooterpincher@gmail.com><br>wrote:<br><br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #ad7fa8; padding-left: 1ex;"><div dir="auto">About three weeks ago piglet, my primary desktop computer, pooped<br>out. Press the power button and the fans came on, but nothing else<br>happened -- no POST, no screen, like, Nuthin'. Went through all the<br>hardware troubleshooting I knew, carted it around to a couple of<br>friends who are smarter than me, but never revived it. It was a Core<br>I7 motherboard obtained surplus 5 years ago after a hard life as a<br>server, so I reckon it was no big surprise it finally bit the dust.<br><br>$500 or so and a couple of sessions at Decatur Makers later I'd<br>replaced everything but the Mass Storage, the video card, and the<br>case. She would boot to the BIOS screen np. I could get the GRUB<br>screen but no further -- she'd just Kernel Panic. The new<br>guts are a 12th gen Intel I9 on a Gigabyte Aorus Z690 gen 1.4 MB, so<br>maybe that had something to do with it.<br><br>Fortunately, I keep my OS on a 120 GB SSD, and my /home on a much<br>larger Spinning Rust drive. So I knew that I wouldn't have to go<br>back to my (shamefully aged) backups. I installed Debian 12 on the<br>SSD (up from Debian 11) and got her to boot ok.<br><br>I configured my original install to use lvm without really<br>understanding what that meant, so my /home wouldn't actually, like,<br>mount with a simple mount(8) command. 
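(A sketch of what that failure looks like — the device name here is hypothetical:)

```shell
# The raw partition holds LVM metadata, not a filesystem, so a plain
# mount is refused (blkid reports the type as LVM2_member):
mount /dev/sdb1 /mnt
# The filesystem actually lives inside a logical volume, which the
# LVM tools will reveal:
pvs    # physical volumes and the volume group each belongs to
lvs    # logical volumes inside each volume group
```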
Cue a deep-dive into lvm,<br>helped along by an excellent tutorial (<br><a href="https://linuxhandbook.com/lvm-guide/">https://linuxhandbook.com/lvm-guide/</a> ) which also let me delve into<br>the Wonderful World of Vagrant.<br><br>After groveling through all that mess, I did the following:<br><br>* vgrename the old piglet-vg vgroup to piglet-home-vg ( using the<br>UUID grabbed from vgdisplay so I was sure to rename the correct one)<br>* vgchange -ay piglet-home-vg to 'activate' my renamed vgroup<br>* vgscan --mknodes to fiddle the file system to recognize my new<br>logical volumes<br>* Verify that I could now mount(8) my piglet-home-vg/home lvolume on<br>/mnt (Yay!)<br>* systemctl set-default multi-user.target to bring the machine up<br>with no GUI and log in as root<br> * Move the installed /home to /home-debian12-default ( in case I<br>needed to grab some stuff from there to make the Debian 11 settings<br>for Plasma work with Debian 12). Make a new empty /home to serve as<br>a mount point.<br> * Edit /etc/fstab to mount /dev/mapper/piglet--home--vg-home on<br>/home<br> * systemctl set-default graphical.target to bring the machine back<br>up<br><br>Of course I still have a bunch of software to install and some stuff<br>to bring back from my backup ( all my local apache stuff is gone for<br>example). But it's really all over but the shouting.<br><br>Fun Things I Learned:<br><br> * If you screw up an entry in /etc/fstab, Debian 12 will halt<br>during the boot process when it tries to mount disks. On some<br>occasions, it'll attempt to mount your screw up for a while and time<br>out after a minute and a half or so, but other times I think it just<br>dies. You can fix this by choosing Emergency Mode from the GRUB<br>menu and fixing the bad edit in your /etc/fstab. Or I suppose you<br>could boot from your stick again if that rocks your sox.<br><br> * Debian 12 doesn't appear to let you mount an lvolume from fstab<br>by UUID. 
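(Incidentally, the doubled hyphens in paths like /dev/mapper/piglet--home--vg-home follow a fixed device-mapper rule: a hyphen inside a VG or LV name is escaped to "--", and a single hyphen then joins VG to LV. The helper below is purely illustrative, not an LVM tool:)

```shell
# Build the /dev/mapper node name for a VG/LV pair the way device-mapper
# does: escape '-' inside each name to '--', join the parts with one '-'.
dm_name() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '%s-%s\n' "$vg" "$lv"
}

dm_name piglet-home-vg home   # -> piglet--home--vg-home
```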
I could do this on my VM, which was running Ubuntu. On<br>Debian you mount from /dev/mapper, which seems to be the Correct Way<br>(at least that's the way shipped lvolumes are mounted). There's<br>some magic going on here that I still don't fully understand. Some<br>of the hyphens in the /dev/mapper lvolume names are doubled, again<br>for reasons which are inscrutable to me.<br><br> * Hardware can be Tricky. If you don't plug in ALL the power<br>connectors on your MB, it will simply refuse to start at all. Then<br>you will tear your hair out until you figure out the dumb misteak<br>you made. And if you get checksum errors late in your install off a<br>Stick, it means that the media is no good no more.<br><br> * vagrant and lvm are pretty way kewl. Learning on a virtual<br>machine let me hack away at lvm and other scary stuff (like<br>parted(8) and mkfs(8) ) break things, and still not disturb anything<br>important on my personal machines. Highly recommended.<br><br>All in all a lot of fun.<br><br>-- CHS<br><br> <br></div></blockquote></blockquote><div dir="auto"><br><br>SteveT<br><br>Steve Litt <br>Autumn 2022 featured book: Thriving in Tough Times<br><a href="http://www.troubleshooters.com/bookstore/thrive.htm">http://www.troubleshooters.com/bookstore/thrive.htm</a><hr>Ale mailing list<br>Ale@ale.org<br><a href="https://mail.ale.org/mailman/listinfo/ale">https://mail.ale.org/mailman/listinfo/ale</a><br>See JOBS, ANNOUNCE and SCHOOLS lists at<br><a href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a><br></div></pre></blockquote></div></body></html>