<p dir="ltr">On Oct 28, 2016 10:28 PM, "Derek Atkins" <<a href="mailto:warlord@mit.edu">warlord@mit.edu</a>> wrote:<br>
><br>
> So here's a question: have you tried running oVirt on a single machine<br>
> (sort of like the old vmware-server)? I.e., a single machine that has<br>
> CPU and Disk, running a hypervisor, ovirt-engine, etc?</p>
<p dir="ltr">Closest is the converged setup with the manager running as a VM.</p>
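<p dir="ltr">If you want to try it, that flow is driven by one interactive command; a rough sketch, with package names as I remember them from the 4.0 docs:<br>
yum install ovirt-hosted-engine-setup<br>
hosted-engine --deploy<br>
It asks about storage and networking, then installs the engine inside a VM on that same host.</p>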
<p dir="ltr">><br>
> It seems silly to run NFS off a local disk just to get Self-Hosted oVirt<br>
> to work. But of course they stopped supporting the "AllInOne" in<br>
> ovirt-4.0 and don't seem to support local storage for the<br>
> SelfHostedEngine.</p>
<p dir="ltr">You can't use local storage for the converged setup. Instead, have the host itself provide the storage, e.g. an NFS export of a local disk.<br>
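A rough sketch of serving it from the host itself over NFS (path and subnet made up; 36:36 is vdsm:kvm):<br>
mkdir -p /srv/ovirt-storage && chown 36:36 /srv/ovirt-storage<br>
echo '/srv/ovirt-storage 192.168.1.0/24(rw)' >> /etc/exports<br>
exportfs -ra<br>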
><br>
> Any Ideas?<br>
><br>
> Second question: is there a web-ui password-change for the AAA-JDBC<br>
> plugin? I.e., can users change their own passwords?</p>
<p dir="ltr">Not that I've seen. Adding users to the internal auth provider is not recommended; LDAP is the recommended route. I use FreeIPA.<br>
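There is a CLI path, though: if memory serves, an admin can reset an internal user's password with something like<br>
ovirt-aaa-jdbc-tool user password-reset someuser<br>
(from the ovirt-engine-extension-aaa-jdbc package; user name made up). But that's admin-driven, not self-service.<br>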
><br>
> -derek<br>
><br>
> Jim Kinney <<a href="mailto:jim.kinney@gmail.com">jim.kinney@gmail.com</a>> writes:<br>
><br>
> > On Fri, 2016-10-28 at 10:49 -0400, DJ-Pfulio wrote:<br>
> ><br>
> > Thanks for responding.<br>
> ><br>
> > Sheepdog is the storage backend. This is the way cloud stuff works on the<br>
> > cheap. Not a NAS. It is distributed storage with a minimal redundancy set<br>
> > (I'm planning 3 copies). Sheepdog only works with QEMU according to my<br>
> > research, which is fine.<br>
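> ><br>
> > For reference, qemu talks to a sheepdog volume directly via a sheepdog: URI; roughly (vdi name made up):<br>
> > qemu-img create sheepdog:vm1 20G<br>
> > qemu-system-x86_64 -m 2048 -drive file=sheepdog:vm1,if=virtio<br>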
> ><br>
> > Sure, I could set up a separate storage NAS (I'd use AoE for this), but<br>
> > that isn't needed. I already have multiple NFS servers, but don't use<br>
> > them for hosting VMs today. They are used for data volumes, not redundancy.<br>
> ><br>
> > >> Opinions follow (danger if you love what I don't) <<<br>
> ><br>
> > Won't be using oVirt (really RHEL-only, and it seems to be 50+ different<br>
> > F/LOSS projects in 500 different languages [I exaggerate]) or XenServer<br>
> > (bad taste after running it 4 years). I've never regretted switching from<br>
> > ESX/ESXi and Xen to KVM, not once.<br>
> ><br>
> > oVirt is only 49 projects and 127 languages! Really!<br>
> ><br>
> > oVirt is just the web GUI front end (a pile of Java) with a mostly Python<br>
> > backend that runs KVM, plus some custom daemons to keep track of what is<br>
> > running and where. It is most certainly geared towards RHEL/CentOS, which<br>
> > may be an irritant to some. I've found the tool chain to JustWork(tm). I<br>
> > need VMs to run with minimal effort on my part, as I have no time to fight<br>
> > the complexity. I've hacked scripts to do coolness with KVM, but found<br>
> > oVirt did more than I could code up with the time I have. It really is a<br>
> > GPL replacement for VMware vSphere.<br>
> ><br>
> > And I won't be dedicating entire machines just to being storage or VM<br>
> > hosts, so Proxmox clusters aren't an option. The migration from plain<br>
> > VMs into sheepdog appears pretty straightforward (at least on YouTube).<br>
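> ><br>
> > From what I can tell, those demos boil down to a one-line image conversion, something like (file and vdi names made up):<br>
> > qemu-img convert -f qcow2 -O raw vm-disk.qcow2 sheepdog:vm-disk<br>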
> ><br>
> > One thing I like with oVirt is I can install the host node code on a full<br>
> > CentOS install or use the hypervisor version and dedicate a node entirely.<br>
> > I've used both and found them to be well suited for keeping VMs running. If<br>
> > there is an issue with a node, I have a full toolchain to work with. I don't<br>
> > use the hypervisor in production.<br>
> ><br>
> > A major issue for my use is the need to have certain VMs up and running at<br>
> > all times. oVirt provides a process to migrate a VM to an alternate host if<br>
> > it (host or VM) goes down. The only "gotcha" is that the migration hosts<br>
> > must provide the same CPU capabilities, so no mixing of AMD and Intel<br>
> > without setting the VMs to be i686.<br>
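> ><br>
> > A quick sanity check before mixing hosts (just a sketch):<br>
> > grep -m1 vendor_id /proc/cpuinfo && grep -m1 'model name' /proc/cpuinfo<br>
> > If vendor_id differs between hosts, live migration between them is out without dropping the cluster CPU type to a common baseline.<br>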
> ><br>
> > Just doing research today. Need to sleep on it. Probably won't try<br>
> > anything until Sunday night.<br>
> ><br>
> > Download CentOS 7.2<br>
> > Install the VM host version<br>
> > yum install epel-release<br>
> > Follow the directions here: <a href="https://www.ovirt.org/release/4.0.4/">https://www.ovirt.org/release/4.0.4/</a><br>
> > starting with:<br>
> > yum install <a href="http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm">http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm</a><br>
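> ><br>
> > From memory, the next steps after that repo RPM are roughly:<br>
> > yum install ovirt-engine<br>
> > engine-setup<br>
> > (engine-setup is the interactive configurator; answer its prompts and it brings up the web admin portal.)<br>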
> ><br>
> > Be aware that when the docs refer to NFS mounts, the server for that can be<br>
> > one of the nodes that has drive space. ISO storage is where (duh) ISO images<br>
> > are kept for installations. I have one Win10 VM running now for a DBA with<br>
> > specialty tool needs.<br>
> ><br>
> > On 10/28/2016 10:23 AM, Beddingfield, Allen wrote:<br>
> ><br>
> > Will you have shared storage available (a shared LUN or high-performance NFS) for the virtual hard drives that all hosts can access?<br>
> > If so, the easiest free, out-of-the-box setup is XenServer or oVirt. I'm familiar with XenServer, but there are some oVirt fans on here, I know.<br>
> ><br>
> > --<br>
> > Allen Beddingfield<br>
> > Systems Engineer<br>
> > Office of Information Technology<br>
> > The University of Alabama<br>
> > Office 205-348-2251<br>
> > <a href="mailto:allen@ua.edu">allen@ua.edu</a><br>
> ><br>
> > On 10/28/16, 9:17 AM, "DJ-Pfulio" <<a href="mailto:DJPfulio@jdpfu.com">DJPfulio@jdpfu.com</a>> wrote:<br>
> ><br>
> > I'm a little behind the times. Looking to run a small cluster of VM<br>
> > hosts, just 2-5 physical nodes.<br>
> ><br>
> > Reading implies it is pretty easy with 2-5 nodes using a mix of<br>
> > sheepdog, corosync and pacemaker running on qemu-kvm VM hosts.<br>
> ><br>
> > Is that true? Any advice from people who've done this already?<br>
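> ><br>
> > (From what I've read, the pacemaker side boils down to something like<br>
> > pcs resource create vm1 VirtualDomain hypervisor="qemu:///system" config="/etc/libvirt/qemu/vm1.xml" meta allow-migrate=true<br>
> > with all names made up -- corrections welcome.)<br>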
> ><br>
> > So, is this where you'd start for small home/biz redundant VM cluster?<br>
> ><br>
> > I've never done clustering on Linux, just Unix with those expensive<br>
> > commercial tools, and that was many years ago.<br>
> ><br>
> > In related news, Fry's has a Core i3-6100 CPU for $88 today with their<br>
> > emailed codes. That CPU is almost 2x faster than a first-gen Core i5-750<br>
> > desktop CPU. Clustering for data redundancy at home really is possible<br>
> > with just 2 desktop systems these days. This can be used with or without<br>
> > RAID (any sort).<br>
> ><br>
><br>
> --<br>
> Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory<br>
> Member, MIT Student Information Processing Board (SIPB)<br>
> URL: <a href="http://web.mit.edu/warlord/">http://web.mit.edu/warlord/</a> PP-ASEL-IA N1NWH<br>
> <a href="mailto:warlord@MIT.EDU">warlord@MIT.EDU</a> PGP key available<br></p>