<html><head></head><body><div>On Fri, 2016-10-28 at 16:43 -0400, DJ-Pfulio wrote:</div><blockquote type="cite"><pre>Jim, Would you run oVirt at home for 2 boxes with dual-core CPUs and 8G of RAM
each? Make redundant storage and VMs. THAT is the problem and I think there is
a relatively simple solution with minimal config or scripting to solve it.
</pre></blockquote><div><br></div><div>Maybe. They've done all the heavy lifting to make a system that makes managing multiple VMs pretty easy for when those VMs just need to run. Granted, I'm more inclined to use enterprise-sized tools at home, at a much smaller scale, because my gain-knowledge time at work is funded but my time at home is not.</div><div><br></div><div>Originally this was for a 4-node cluster. Absolutely. For a 2-node setup, probably not. virt-manager is pretty awesome at that scale.</div><div><br></div><div>The SPICE viewer is pretty fantastic. Not sure if it can work with virt-manager. Being able to get a remote console through two VPNs and have a YouTube video play with sound is a pretty good test of cool.</div>
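<div><br></div><div>For anyone who wants to try that console trick without the full oVirt portal: roughly how a remote SPICE (or VNC) console works with plain libvirt over SSH is below. The user, host, and guest names are placeholders -- substitute your own.</div><div><br></div><pre>
# See what guests are running on the remote box:
virsh --connect qemu+ssh://user@vmhost/system list

# Open the guest's graphical console (SPICE or VNC is auto-detected);
# "win7tv" is a made-up guest name:
virt-viewer --connect qemu+ssh://user@vmhost/system win7tv
</pre><div><br></div><blockquote type="cite"><pre>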
On 10/28/2016 12:28 PM, Jim Kinney wrote:
<blockquote type="cite">
On Fri, 2016-10-28 at 10:49 -0400, DJ-Pfulio wrote:
<blockquote type="cite">
Thanks for responding.
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
Won't be using oVirt (really RHEL only and seems to be 50+ different
F/LOSS projects in 500 different languages [I exaggerate] ) or XenServer
(bad taste after running it 4 yrs). I've never regretted switching from
ESX/ESXi and Xen to KVM, not once.
</blockquote>
Ovirt is only 49 projects and 127 languages! Really!
</blockquote>
If someone wants to run VMs on 3 nodes, oVirt seems like overkill. Different use
case than a university, I suppose.
</pre></blockquote><div><br></div><div>It's not the 3 nodes, it's the 65 VMs :-) The nodes have some horsepower.</div><div><br></div><div>OK. I have 65 on 2 nodes. I've not yet lit up the new quad-node cluster in a box. </div><blockquote type="cite"><pre>
<blockquote type="cite">
A major issue for my use is the need to have certain VMs up and running at all
times. oVirt provides a process to migrate a VM to an alternate host if it
(host or VM) goes down. The only "gotcha" is that the migration hosts must
provide the same CPU capabilities, so no mixing of AMD and Intel without
setting the VMs to be i686.
</blockquote>
This same-CPU-architecture requirement is a gotcha for all virtual machines
that support migration, KVM/QEMU included. I haven't figured out which is my
least capable CPU recently ... is a C2D less than a modern Pentium? The
Pentium is faster. I need to check the flags.
</pre></blockquote><div><br></div><div>I have a single Intel server on its own cluster since all the rest of the gear is Opteron. No migration possible.</div>
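<div><br></div><div>On the "which CPU is least capable" question: the quick-and-dirty check I'd reach for is to dump the flag list on each box and diff them, or just ask libvirt what it sees. A rough sketch, with host names made up:</div><div><br></div><pre>
# On each host, dump the CPU feature flags, one per line, sorted:
grep -m1 '^flags' /proc/cpuinfo | sed 's/.*: //' | tr ' ' '\n' | sort > /tmp/$(hostname)-flags.txt

# Copy the files to one place and compare two hosts:
diff /tmp/c2dbox-flags.txt /tmp/pentiumbox-flags.txt

# libvirt's view of the host CPU model and features:
virsh capabilities
</pre><div><br></div><div>Whatever flags show up on only one side are the ones a migration-safe guest CPU model can't use. I assume that's more or less what oVirt is doing when it forces a cluster-wide CPU type.</div><blockquote type="cite"><pre>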
<blockquote type="cite">
<blockquote type="cite">
Just doing research today. Need to sleep on it. Probably won't try
anything until Sunday night.
</blockquote>
</blockquote>
Plus I have to figure out how much storage to allocate for my trial with the
distributed storage - 20G seems just a little small. I have many different
sorts of storage for the trial: RAID10, a Blue desktop disk, a fast USB3 external,
and an eSATA Black disk. Really want to see which performs the worst - thinking
it will be the RAID10 stuff, which is InfiniBand-connected (got an amazing
deal!) but really slow otherwise.</pre></blockquote><div><br></div><div>I've found that for my VMs, storage space is not as much of an issue as RAM and clock cycles. My base VM has a 10G drive. If I need more, I can expand the base drive or add a new drive. I still use LVM inside the VM OS just so I can expand as needed.</div><div><br></div><div>Things like memory ballooning are very useful, as are thin VM clones with copy-on-write. Rough sketches of both the clone trick and a quick way to compare those disks are below.</div>
<blockquote type="cite">
Download CentOS 7.2. Install the VM host version.
yum install epel-release
Follow directions here: <a href="https://www.ovirt.org/release/4.0.4/">https://www.ovirt.org/release/4.0.4/</a>
starting with:
yum install <a href="http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm">http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm</a>
</blockquote>
So that install does libvirt, kvm-qemu, sshd, nfs, bridge-utils, and all the
distributed storage stuff automatically? Nice!
<blockquote type="cite">
Be aware that when docs refer to NFS mounts, the server for that can be one of the nodes
that has drive space. ISO space is where &lt;duh&gt; ISO images are kept for
installations. I have one Win10 VM running now for a DBA with specialty tool
needs.
</blockquote>
Have 1 Win7 VM running to record TV and run Quicken from time to time. It can
be down when nothing is being recorded ... so basically any time other than
prime time or football time. ;) It will be one of the first VMs I migrate into
the sheepdog storage. So will my daily-use desktop.
The big difference in this planned architecture is that distributed storage can
run on the VM hosts. Performance ISN'T the reason to do this. 10 users won't
notice.
That's my plan right now, anyway. Sleep can alter it.
From the comments, it appears that
a) nobody has used sheepdog in their environment (it isn't new).
b) nobody is interested in cluster VMs on a small scale.
c) nobody is interested in using small-scale systems as redundant Linux storage
for qemu VMs - someone did make a way to mount it outside a VM.
or
d) everyone is busy enjoying fall and has more important things on their plates
today! Which I can understand.
It is interesting how different people come at a problem and get different
answers. ;)
_______________________________________________
Ale mailing list
<a href="mailto:Ale@ale.org">Ale@ale.org</a>
<a href="http://mail.ale.org/mailman/listinfo/ale">http://mail.ale.org/mailman/listinfo/ale</a>
See JOBS, ANNOUNCE and SCHOOLS lists at
<a href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a>
</pre></blockquote>
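<div><br></div><div>One note on the sheepdog trial itself, since nobody else chimed in: qemu speaks sheepdog natively, so once the sheep daemons are up on each node the workflow is roughly the following. This is from memory of the sheepdog docs, so treat the exact options as assumptions and check dog --help; the copy count, store directory, and VDI name are placeholders.</div><div><br></div><pre>
# On each node, start the storage daemon pointing at its local store:
sheep /var/lib/sheepdog

# Once, from any node: format the cluster to keep 2 copies of every object:
dog cluster format -c 2

# Create a 20G virtual disk (VDI) inside the cluster:
qemu-img create sheepdog:trialvm 20G

# Boot a guest straight off it:
qemu-system-x86_64 -m 2048 -drive file=sheepdog:trialvm,if=virtio

# Check what's there:
dog vdi list
dog cluster info
</pre><div><br></div><div>If a node dies, the surviving copies keep the VDI available, which is the redundancy piece of the question at the top of the thread.</div></body></html>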