<html><head></head><body>oVirt can use that iSCSI array directly, and you use virt-v2v to convert the VMware storage to KVM.<br><br>But that iSCSI array is only 8TB. Unless it has multiple 10Gb Ethernet links, it would make more sense to get a new box with several 10TB drives in RAID 10, use the iSCSI tools on the new box to mount the array, and virt-v2v onto the new drives.<br><br>I've not poked at it in use, but the iSCSI kernel parts are installed by default in CentOS. All that's left is the userspace tools; a rough sketch of the steps is below.<br><br>
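Something like this is roughly what it takes on the new box to attach the EqualLogic LUN (an untested sketch; the package name is the CentOS one, the portal IP, IQN, and mount point are placeholders for whatever your array actually reports, and the array side will still need its access rules pointed at the new initiator):<br><br><pre>
# Userspace initiator tools (the open-iscsi equivalent on CentOS).
yum install -y iscsi-initiator-utils
systemctl enable --now iscsid

# Ask the array what targets it offers (use your group/portal IP).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the target it reported (the IQN will differ).
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-target -p 192.168.1.50 --login

# The LUN shows up as a new /dev/sdX; check with lsblk, then mount as usual.
lsblk
mkdir -p /mnt/array
mount /dev/sdX1 /mnt/array
</pre>From there the data moves onto the new drives however you like: rsync for the home directories, virt-v2v for any guests that still need converting.<br><br><div class="gmail_quote">On March 12, 2021 8:58:56 AM EST, Derek Atkins via Ale <ale@ale.org> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">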
<pre class="k9mail">HI,<br><br>iSCSI is supposed to work just like a regular SCSI disk; your computer<br>"mounts" the disk just like it would a locally-connected disk. The main<br>difference is that instead of the LUN being on a physical wire, the LUN is<br>semi-virtual.<br><br>As for your VM issues... If you have 4 24-core machines, you might want<br>to consider using something like oVirt to manage it. It would allow you<br>to turn those machines into a single cluster of cores, so each VM could,<br>theoretically, run up to 24 vCores (although I think you'd be better off<br>with smaller VMs). However, you will not be able to build a single,<br>96-core VM out of the 4 boxes. Sorry.<br><br>You could also set up oVirt to use iSCSI directly, so no need to "go<br>through a fileserver".<br><br>-derek<br><br>On Fri, March 12, 2021 8:47 am, Tod Fassl via Ale wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;"> Yes, I'm in academia. The ISCSI array has 8TB. It's got everybody's home<br> directory on it. We did move a whole bunch of our stuff to the campus<br> VMWare cluster. But we have to keep our own file server. And, after all,<br> we already have the hardware, four 24-core machines, that used to be in<br> our VMWare cluster. There's no way we can fail to come out ahead here.<br> I can easily repurpose those 4 machines to do everything the virtual<br> machines were doing with plenty of hardware left to spare. And then we<br> won't have to pay the VMWare licensing fee, upwards of $10K per year.<br><br><br> For $10K a year, we can buy another big honkin' machine for the beowulf<br> research cluster (maintenance of which is my real job).<br><br><br> Anyway, the current problem is getting that ISCSI array attached<br> directly to a Linux file server.<br><br><br> On 3/11/21 7:30 PM, Jim Kinney via Ale wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #ad7fa8; padding-left: 1ex;">On March 11, 2021 7:09:06 PM EST, DJ-Pfulio via Ale <ale@ale.org> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #8ae234; padding-left: 1ex;"> How much storage is involved? If it is less than 500G, replace it<br> with an SSD. ;) For small storage amounts, I wouldn't worry about<br> moving hardware that will be retired shortly.<br><br> I'd say that bare metal in 2021 is a mistake about 99.99% of the<br> time.<br></blockquote> That 0.01% is my happy spot :-) At some point is must be hardware. As I<br> recall, Tob is in academia. So hardware is used until it breaks beyond<br> repair.<br><br> Why can't I pay for virtual hardware with virtual money? I have a new<br> currency called "sarcasm".<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #8ae234; padding-left: 1ex;">On 3/11/21 5:37 PM, Tod Fassl via Ale wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #fcaf3e; padding-left: 1ex;"> Soonish, I am going to have to take an ISCSI array that is currently<br> talking to a VMWare virtual machine running Linux and connect it to a<br> real Linux machine. The problem is that I don't know how the Linux<br> virtual machine talks to the array. It appears as /dev/sdb on the<br> Linux virtual machine and is mounted via /etc/fstab like its just a<br> regular HD on the machine.<br><br><br> So I figure some explanation of how we got here is in order. 
My previous boss bought VMware thinking we could take 4 24-core machines<br> and make one big 96-core virtual machine out of them. He has since<br> retired. Since I was rather skeptical of VMware from the start, the<br> job of dealing with the cluster was given to a co-worker. He has<br> since moved on. I know just enough about VMware ESXi to keep the<br> thing working. My new boss wants to get rid of VMware and re-install<br> everything on the bare metal machines.<br><br><br> The VMware host has 4 Ethernet cables running to the switch. But<br> there is only 1 virtual network port on the Linux virtual machine.<br> However, lspci shows 32 lines with "VMware PCI Express Root Port"<br> (whatever that is):<br><br><br> # lspci<br> 00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)<br> 00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)<br> 00:11.0 PCI bridge: VMware PCI bridge (rev 02)<br> 00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)<br> [...]<br> 00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)<br> 02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)<br><br><br> The open-iscsi package is not installed on the Linux virtual machine.<br> However, the iSCSI array shows up as /dev/sdb:<br><br> # lsscsi<br> [2:0:0:0]  disk  VMware   Virtual disk  1.0  /dev/sda<br> [2:0:1:0]  disk  EQLOGIC  100E-00       8.1  /dev/sdb<br><br><br> I'd kinda like to get the iSCSI array connected to a new bare metal<br> Linux server w/o losing everybody's files. Do you think I can just<br> follow the various howtos out there on connecting an iSCSI array w/o<br> too much trouble?<br></blockquote></blockquote></blockquote></blockquote><br></pre></blockquote></div><br>-- <br>Computers amplify human error<br>Super computers are really cool</body></html>