<div dir="ltr">I ran an ESXi server for my virtual machines on my home network for years with external NAS devices using iSCSI. While I don't know the full setup you're looking at there I believe the details may be relevant and help you out in understanding.<div><br></div><div>In my experience, the ESXi host server was handling all the iSCSI communication with the NAS. The storage data store was added to an iSCSI LUN that ESXi mounted and formatted with its vmfs format. When you would create a VM guest on ESXi and assign the storage for that VMs device it would create vmdk files on top of the vmfs filesystem and present that to the VM as its SCSI drive. The VM was not performing any iSCSI communication directly back to the NAS.</div><div><br></div><div>You can see all that if you use the ESXi vSphere client and browse the data store and you'll find all the files that make up the VM guest configuration.</div><div><br></div><div>It's been my experience that if I wanted to move the VM off ESXi and have it still use iSCSI for storage that I needed to create a new LUN on the NAS, create my new server and configure it to mount the iSCSI LUN as a SCSI device and format it. Then have to handle standard backup/data transfer process of moving the data between the ESXi VM guest filesystem to the new LUN. I believe it is possible to actually create an iSCSI LUN on the NAS and mount it directly to the VM running under ESXi if you install the necessary dependencies to make the data transfer easier, then just unmount and mount to the host outside ESXi. Depends if you actually want to attempt making changes to the existing VM or not.</div><div><br></div><div>If as I read you're really only needing the user home directories, I would agree that building the bare metal with an SSD for the OS and whatever software needs to be installed is best course and then I'd mount the home directories from the NAS but you may just want to use NFS instead of iSCSI for that which would be a much more simple solution.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 12, 2021 at 9:35 AM Tod Fassl via Ale <<a href="mailto:ale@ale.org">ale@ale.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I mentioned that I was kind of skeptical about VMWare. The original plan <br>
was to use the VMWare cluster for research. But I really didn't think <br>
you could take four 24-core machines and make a 96-core machine out of <br>
them. There was nothing on google about that. And, at the very least, I <br>
suggested, you'd need a high speed network to do that and these machines <br>
are connected via a regular 1G switch.<br>
<br>
<br>
We also have a beowulf cluster for research which supports OpenMPI. <br>
That's my real job. When I started questioning the wisdom of buying a <br>
VMWare cluster for research, my boss said it would be fine if I stuck to <br>
my real job. After it became clear that the original plan wasn't going <br>
to work, we repurposed the VMWare cluster for administrative tasks -- <br>
file server, database server, etc.<br>
<br>
<br>
We have already pulled three of the four machines out of the cluster. I <br>
already rebuilt the database server and print server on bare metal. All <br>
that's left is the file server.<br>
<br>
<br>
PS: Before my former boss retired, I did hint around trying to see if <br>
he/she remembered me pretty much rebelling at the idea of doing research <br>
on a VMWare cluster. I didn't want to actually come out and say "I told <br>
you so." But I'm pretty sure that, no, I did not get credit for that.<br>
<br>
<br>
PPS: VMWare makes you promise not to release benchmarks. I never paid
any attention to the legalese, what do I care? But I think I can say <br>
that we were never successful at doing research on virtual machines even <br>
if they had fewer than 24 cores. We'd create a 16 core vm but the <br>
researchers found it unsatisfactory.<br>
<br>
<br>
On 3/12/21 7:58 AM, Derek Atkins wrote:<br>
> Hi,
><br>
> iSCSI is supposed to work just like a regular SCSI disk; your computer<br>
> "mounts" the disk just like it would a locally-connected disk. The main<br>
> difference is that instead of the LUN being on a physical wire, the LUN is<br>
> semi-virtual.<br>
><br>
> As for your VM issues... If you have 4 24-core machines, you might want<br>
> to consider using something like oVirt to manage it. It would allow you<br>
> to turn those machines into a single cluster of cores, so each VM could,<br>
> theoretically, run up to 24 vCores (although I think you'd be better off<br>
> with smaller VMs). However, you will not be able to build a single,<br>
> 96-core VM out of the 4 boxes. Sorry.<br>
><br>
> You could also set up oVirt to use iSCSI directly, so no need to "go<br>
> through a fileserver".<br>
><br>
> -derek<br>
><br>
> On Fri, March 12, 2021 8:47 am, Tod Fassl via Ale wrote:<br>
>> Yes, I'm in academia. The ISCSI array has 8TB. It's got everybody's home<br>
>> directory on it. We did move a whole bunch of our stuff to the campus<br>
>> VMWare cluster. But we have to keep our own file server. And, after all,<br>
>> we already have the hardware, four 24-core machines, that used to be in<br>
>> our VMWare cluster. There's no way we can fail to come out ahead here.<br>
>> I can easily repurpose those 4 machines to do everything the virtual<br>
>> machines were doing with plenty of hardware left to spare. And then we<br>
>> won't have to pay the VMWare licensing fee, upwards of $10K per year.<br>
>><br>
>><br>
>> For $10K a year, we can buy another big honkin' machine for the beowulf<br>
>> research cluster (maintenance of which is my real job).<br>
>><br>
>><br>
>> Anyway, the current problem is getting that ISCSI array attached<br>
>> directly to a Linux file server.<br>
>><br>
>><br>
>> On 3/11/21 7:30 PM, Jim Kinney via Ale wrote:<br>
>>> On March 11, 2021 7:09:06 PM EST, DJ-Pfulio via Ale <ale@ale.org> wrote:
>>>> How much storage is involved? If it is less than 500G, replace it<br>
>>>> with an SSD. ;) For small storage amounts, I wouldn't worry about<br>
>>>> moving hardware that will be retired shortly.<br>
>>>><br>
>>>> I'd say that bare metal in 2021 is a mistake about 99.99% of the<br>
>>>> time.<br>
>>> That 0.01% is my happy spot :-) At some point it must be hardware. As I
>>> recall, Tod is in academia. So hardware is used until it breaks beyond
>>> repair.<br>
>>><br>
>>> Why can't I pay for virtual hardware with virtual money? I have a new<br>
>>> currency called "sarcasm".<br>
>>>> On 3/11/21 5:37 PM, Tod Fassl via Ale wrote:<br>
>>>>> Soonish, I am going to have to take an ISCSI array that is currently<br>
>>>>> talking to a VMWare virtual machine running Linux and connect it to a<br>
>>>>> real Linux machine. The problem is that I don't know how the Linux<br>
>>>>> virtual machine talks to the array. It appears as /dev/sdb on the<br>
>>>>> Linux virtual machine and is mounted via /etc/fstab like it's just a
>>>>> regular HD on the machine.<br>
>>>>><br>
>>>>><br>
>>>>> So I figure some explanation of how we got here is in order. My<br>
>>>>> previous boss bought VMWare thinking we could take 4 24-core machines<br>
>>>>> and make one big 96-core virtual machine out of them. He has since<br>
>>>>> retired. Since I was rather skeptical of VMWare from the start, the<br>
>>>>> job of dealing with the cluster was given to a co-worker. He has<br>
>>>>> since moved on. I know just enough about VMWare ESXI to keep the<br>
>>>>> thing working. My new boss wants to get rid of VMWare and re-install<br>
>>>>> everything on the bare metal machines.<br>
>>>>><br>
>>>>><br>
>>>>> The VMWare host has 4 ethernet cables running to the switch. But<br>
>>>>> there is only 1 virtual network port on the Linux virtual machine.<br>
>>>>> However, lspci shows 32 lines with "VMware PCI Express Root"
>>>>> (whatever that is):
>>>>><br>
>>>>><br>
>>>>> # lspci
>>>>> 00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
>>>>> 00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
>>>>> 00:11.0 PCI bridge: VMware PCI bridge (rev 02)
>>>>> 00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
>>>>> [...]
>>>>> 00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
>>>>> 02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
>>>>><br>
>>>>><br>
>>>>> The open-iscsi package is not installed on the Linux virtual machine.<br>
>>>>> However, the ISCSI array shows up as /dev/sdb:<br>
>>>>><br>
>>>>> # lsscsi
>>>>> [2:0:0:0]  disk  VMware   Virtual disk  1.0  /dev/sda
>>>>> [2:0:1:0]  disk  EQLOGIC  100E-00       8.1  /dev/sdb
>>>>><br>
>>>>><br>
>>>>> I'd kinda like to get the iSCSI array connected to a new bare-metal
>>>>> Linux server w/o losing everybody's files. Do you think I can just
>>>>> follow the various howtos out there on connecting an iSCSI array w/o
>>>>> too much trouble?
>>>>><br>
>>>>><br>
>>>>><br>
>><br>
><br>
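As promised above, here is a rough sketch of what attaching an existing iSCSI LUN to a bare-metal Linux box looks like with open-iscsi. The portal address, target IQN, device name, and mount point are placeholders I made up; substitute whatever the EqualLogic actually reports. This also assumes the LUN really is presented to the Linux guest raw (the EQLOGIC model string in the lsscsi output suggests it may be) rather than wrapped in a VMFS datastore, so mount it read-only and look first before writing anything. Run as root on a Debian/Ubuntu-style system:

apt-get install open-iscsi        # or your distro's equivalent; provides iscsiadm

# discover the targets the array offers (portal IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# log in to the target (use the IQN that discovery printed)
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-homes -p 192.0.2.10 --login

# the LUN shows up as a new /dev/sdX; find it, then mount read-only to check the data
lsblk
mount -o ro /dev/sdX1 /mnt        # or /dev/sdX if the guest used the whole disk

# have the initiator log in automatically at boot; when you later add the
# filesystem to /etc/fstab, include the _netdev option so it waits for the network
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-homes -p 192.0.2.10 \
    -o update -n node.startup -v automatic

If all you really need is /home and the NAS can export it over NFS, the equivalent is just something like "mount -t nfs nas.example.edu:/export/home /home" (again, made-up names) plus a matching /etc/fstab entry, with no initiator configuration at all.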