Thanks, guys. This thread has been very informative.

So you don't use LVM inside a VM, but do you partition? I've always partitioned because it's how I was taught (pre-VM), but suppose you have a Linux VM and you want to add 200GB of storage for some application. You go into your VM software, create the virtual disk, and attach it to the VM. Inside the VM it appears as a new device, say /dev/xvde. You could create a partition, so that /dev/xvde1 appears, and mkfs /dev/xvde1, or you could skip the partitioning and just mkfs /dev/xvde. One reason you generally partition is sector alignment, but (correct me if I'm wrong) that doesn't apply to a virtual disk: the alignment would be taken care of when you partition the drive inside XenServer, VMware, or whatever is running on the bare metal. Another reason you might normally partition a drive is to separate your OS from your data -- to make sure runaway log files don't crash your database, and so on -- but that doesn't apply here either, because you've already created a separate virtual disk for that purpose.
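To make that concrete, here is roughly what I'm choosing between (ext4, the GPT label, and the partition name are just examples; /dev/xvde as above):

    # Option A: partition first, then put the filesystem on the partition
    parted -s /dev/xvde mklabel gpt
    parted -s -a optimal /dev/xvde mkpart data ext4 0% 100%
    mkfs.ext4 /dev/xvde1

    # Option B: no partition table at all, filesystem straight on the raw device
    mkfs.ext4 /dev/xvde

Either way the guest ends up with a 200GB filesystem; the question is whether the partition table in the middle buys anything.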
I asked a friend at the pub Friday night who works with lots of VMs, and he says he partitions purely as a reminder to himself of whether he has done something with a given virtual disk yet. So he might go add a new disk to a half-dozen VMs, and when he goes into each one he can more easily tell whether he has taken care of it. If I add or remove a disk once a month, that's a lot, so that's not a big selling point for me. Still, I suppose it could be useful as a longer-term "documentation" kind of thing.

So, those of you on the list who deal with VMs: do you partition your virtual disks?

Scott

p.s. My recent VM experience is mostly with XenServer, so forgive me if my question and/or terminology doesn't make sense for ESXi, KVM, or other VM environments.

----------------------------------------
From: "Phil Turmel" <philip@turmel.org>
To: ale@ale.org
Sent: Saturday, October 15, 2016 11:08:35 AM
Subject: Re: [ale] Xen Server adding a virtual disk to a VM

On 10/14/2016 05:13 PM, DJ-Pfulio wrote:
> Ok, so fdisk was patched, but I'm still waiting for that patch to
> actually make it into every distro I see. I keep seeing fdisk complain
> about GPT disks - easier to just use parted, IMHO. Parted also aligns
> partitions correctly, as does gparted; fdisk does not. If you use only
> SSDs, I don't think it matters, but on spinning disks there can be a
> real, noticeable performance hit.

Interesting. I've been using 'gdisk' for quite some time now. Same
style of interface, but it supports GPT, plus conversions to/from MBR
and BSD. I thought it was packaged with util-linux, but I just found
out otherwise.

It is part of the base install of Ubuntu Server since at least 14.04.
It came in as a default dependency of udisks on my Gentoo systems,
which is pulled in by a variety of things. So I assumed it was part of
the system set.

I like gdisk *way* more than parted.

> GPT has many upgrades over MBR, like duplication of the partition
> table at the front and end of the storage, not only at the beginning.
> Plus not having to deal with "logical/extended" partitions ever again
> is nice. Wikipedia has more.
>
> Inside a VM, I don't use LVM. Only outside, on the host OS. There are
> multiple pros/cons to either method. I can understand why folks would
> want LVM inside a VM and why they wouldn't. Do some research.

I do the same. LVM on bare metal, not in VMs. All of my VM disks are
LVs, not files. Virt-manager makes that easy, btw -- you can make any
volume group in a host a "pool" for VM allocations. It was one of the
final straws that got me off of VirtualBox.

> Haven't touched btrfs. Seems there is always some "issue" with it that
> is important to me. Whether that is true or not is completely
> irrelevant. It is a hassle that I don't need. I understand many people
> love btrfs, which is great. More users will eventually fix the issues
> I have! Thanks!

Yup. I played with it once. Haven't touched it since.

> lsblk is nice. Plus, it doesn't need sudo to work (at least not on any
> systems I manage).

I wrote lsdrv[1] because I didn't like the way lsblk repeated trees when
RAID arrays were present, and I wanted something that would document
controller ports, device serial numbers, and UUIDs for later recovery
tasks. Basically lsblk + blkid + lspci + lsusb in one report.

Phil

[1] https://github.com/pturmel/lsblk
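For anyone who wants to set up what Phil describes above without virt-manager, a rough virsh sketch (the pool, volume, VM, and device-target names below are placeholders; it assumes an existing volume group named vg0):

    # Expose an existing volume group to libvirt as a "logical" storage pool
    virsh pool-define-as vg0 logical --source-name vg0 --target /dev/vg0
    virsh pool-start vg0
    virsh pool-autostart vg0

    # Carve out a 200G LV in that pool and hand it to a guest as a raw disk
    virsh vol-create-as vg0 guest-data 200G
    virsh attach-disk myvm /dev/vg0/guest-data vdb --persistent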