<html><head></head><body>That sounds like either the hardware node supports the iSCSI protocol itself or the base hypervisor layer from VMware is doing the iSCSI connection. Probably the latter.<br><br>Either way, you're going to have a challenge unless you can get details on the iSCSI contents. You'll need to tie each virtual drive to its correct device; otherwise virtual machine A gets the drive space of virtual machine B. A few command sketches that might help follow below.<br><br>From ESXi you can get the UUID string of the virtual drive used by that Linux VM.<br><br>I would use that VM now as just a source for a backup, then restore to the new hardware machine running a base install with restore bits. The iSCSI array will still need a partition for the bare-metal Linux to use. I doubt VMware will make this easy.<br><br>Un-virtualizing a drive on an iSCSI array sounds like lots of pain. Retire first.<br><br>
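First thing I'd check, from the ESXi shell, is whether the host itself holds the iSCSI session. Rough sketch, assuming ESXi 5.x or later (older hosts spell these as esxcfg-* commands):<br><pre># a software iSCSI HBA in this list (e.g. vmhba64) means the hypervisor owns the connection
esxcli iscsi adapter list
# live sessions to the EqualLogic target, if any
esxcli iscsi session list
# the EqualLogic LUN shows up here with a naa./eui. identifier you can match later
esxcli storage core device list</pre><br>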
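Since your lsscsi shows the raw EQLOGIC vendor string instead of "VMware Virtual disk", my guess is /dev/sdb is a physical-mode raw device mapping (RDM) rather than a vmdk sitting on VMFS. You can confirm that and record the volume's identity on both sides. The datastore and VM names below are made up; substitute your own:<br><pre># inside the Linux VM: record the SCSI WWN/serial of /dev/sdb so you can recognize the volume later
lsblk --nodeps -o NAME,WWN,SERIAL /dev/sdb
ls -l /dev/disk/by-id/ | grep -i sdb

# on the ESXi host: find the disk descriptor the VM config points at
# (datastore1/linuxvm are placeholder names)
grep -i vmdk /vmfs/volumes/datastore1/linuxvm/linuxvm.vmx
# querying an RDM descriptor prints the vml./naa. ID of the LUN it maps to
vmkfstools -q /vmfs/volumes/datastore1/linuxvm/linuxvm_1.vmdk</pre><br>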
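For the backup leg, nothing fancier than rsync over ssh from inside the VM is needed. Hostname and paths here are placeholders:<br><pre># -a preserves permissions/ownership, -H hard links, -A ACLs, -X extended attributes
# "newbox" = the new bare-metal machine
rsync -aHAX --numeric-ids /home/ root@newbox:/backup/home/</pre><br>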
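And for the other route -- pointing the new bare-metal box straight at the array with open-iscsi -- the howto recipe really is this short. The group IP and the tail of the IQN are invented (EqualLogic targets do use the iqn.2001-05.com.equallogic prefix). Shut the VM down first; two initiators writing the same ext4 volume will corrupt it:<br><pre>apt-get install open-iscsi     # or: yum install iscsi-initiator-utils
# ask the EqualLogic group address (10.0.0.10 is a placeholder) what targets it offers
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
# log in using the IQN that discovery actually returned
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-example -p 10.0.0.10 --login
# the volume appears as a new /dev/sdX; check the WWN matches what you recorded in the VM
lsblk --nodeps -o NAME,WWN,SERIAL
# mount read-only first to make sure everybody's files are intact
mount -o ro /dev/sdX /mnt</pre><br>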
<pre class="k9mail">Soonish, I am going to have to take an ISCSI array that is currently <br>talking to a VMWare virtual machine running Linux and connect it to a <br>real Linux machine. The problem is that I don't know how the Linux <br>virtual machine talks to the array. It appears as /dev/sdb on the Linux <br>virtual machine and is mounted via /etc/fstab like its just a regular HD <br>on the machine.<br><br><br>So I figure some explanation of how we got here is in order. My previous <br>boss bought VMWare thinking we could take 4 24-core machines and make <br>one big 96-core virtual machine out of them. He has since retired. Since <br>I was rather skeptical of VMWare from the start, the job of dealing with <br>the cluster was given to a co-worker. He has since moved on. I know just <br>enough about VMWare ESXI to keep the thing working. My new boss wants to <br>get rid of VMWare and re-install everything on the bare metal machines.<br><br><br>The VMWare host has 4 ethernet cables running to the switch. But there <br>is only 1 virtual network port on the Linux virtual machine. However, <br>lspci shows 32 "lines with VMware PCI Express Root" (whatever that is):<br><br><br># lspci<br>00:07.7 System peripheral: VMware Virtual Machine Communication <br>Interface (rev 10)<br>00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X <br>Fusion-MPT Dual Ultra320 SCSI (rev 01)<br>00:11.0 PCI bridge: VMware PCI bridge (rev 02)<br>00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)<br>[...]<br>00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)<br>02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet <br>Controller (Copper) (rev 01)<br><br><br>The open-iscsi package is not installed on the Linux virtual machine. <br>However, the ISCSI array shows up as /dev/sdb:<br><br># lsscsi<br>[2:0:0:0] disk VMware Virtual disk 1.0 /dev/sda<br>[2:0:1:0] disk EQLOGIC 100E-00 8.1 /dev/sdb<br><br><br>I'd kinda like to get the ISCSI array connected to a new bare metal <br>Linux server w/o losing everybody's files. Do you think I can just <br>follow the various hotos out there on connecting an ISCSI array w/o too <br>much trouble?<hr>Ale mailing list<br>Ale@ale.org<br><a href="https://mail.ale.org/mailman/listinfo/ale">https://mail.ale.org/mailman/listinfo/ale</a><br>See JOBS, ANNOUNCE and SCHOOLS lists at<br><a href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a><br></pre></blockquote></div><br>-- <br>Computers amplify human error<br>Super computers are really cool</body></html>