<html><head></head><body>You need one or two extra machines ready to start replacing the old cruft nodes. Set up your management tools on a VM. Either replace the oldest node with new hardware and migrate the fleet from there, or hot-upgrade the newest node and work your way up. Either way, going from a large ad-hoc environment to a managed deployment is a mountain of work, but it pays off. <br><br>Some tools support an "ingest" process that evaluates an existing node and adds it to an inventory. You'll still have to do the grouping manually, but it works. The Foreman does this (it's the upstream/replacement for Spacewalk/Satellite server). Retrofitting Salt/Puppet/Chef/Ansible onto an existing fleet is certainly doable, just config-heavy.
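<br><br>If you want to see roughly what those ingest tools are doing before committing to one, it's easy to hand-roll a first pass. Here's a minimal sketch - the hostnames and the grouping rule are invented, and it assumes key-based ssh to the fleet already works - that probes each box over ssh and emits Ansible-style inventory groups keyed on OS release:<br><pre>
#!/usr/bin/env python3
"""Rough DIY "ingest" pass: probe existing hosts over ssh, group them
by OS release, and print an Ansible-style INI inventory. The host list
and the grouping rule are hypothetical placeholders."""
import subprocess
from collections import defaultdict

HOSTS = ["web01.example.edu", "web02.example.edu", "db01.example.edu"]  # hypothetical

def probe(host):
    """Return a release tag like 'sles15', or None if the host won't answer."""
    try:
        out = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host,
             ". /etc/os-release && echo ${ID}${VERSION_ID%%.*}"],
            capture_output=True, text=True, timeout=15, check=True)
        return out.stdout.strip() or None
    except (subprocess.SubprocessError, OSError):
        return None

groups = defaultdict(list)
for host in HOSTS:
    groups[probe(host) or "unreachable"].append(host)

for group, members in sorted(groups.items()):
    print(f"[{group}]")            # one INI stanza per detected release
    print("\n".join(members), end="\n\n")
</pre>From there, the manual grouping the tools leave you with is mostly shuffling hosts between stanzas.<br><br><div class="gmail_quote">On June 1, 2021 9:34:35 AM EDT, Allen Beddingfield via Ale <ale@ale.org> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">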
<pre class="k9mail">So, how do you get from one type of operation to another? For example, I have 500-600 SLES servers. 99% of them were loaded by booting the ISO image, stepping through the installer, bootstrapping against config management, and pushing a base configuration to them. <br>That is where the "cookie cutter" setup stops. Various firewall ports have been configured, directories have been made, "stuff" installed, disks added and mounted, virtual hosts configured, nfs shares configured, local users and groups added, etc . . .<br>Some of these started on SLES 11, were upgraded to 12, then 15. Our idea of config management is pushing patches, deploying rpm-based applications, pushing config files, and remote execution operations.<br>I don't see a path to get from what I have to what you have, without just blowing everything away and starting with a clean slate - which will never be an option.<br>Allen B.<br><br>--<br>Allen Beddingfield<br>Systems Engineer<br>Office of Information Technology<br>The University of Alabama<br>Office 205-348-2251<br>allen@ua.edu<hr>From: Ale <ale-bounces@ale.org> on behalf of Jerald Sheets via Ale <ale@ale.org><br>Sent: Tuesday, June 1, 2021 8:22 AM<br>To: Atlanta Linux Enthusiasts<br>Cc: Jerald Sheets<br>Subject: [EXTERNAL] Re: [ale] Re: Time for this Grey Beard to stir up some stuff<br><br><br><br>On May 31, 2021, at 1:45 PM, Chris Fowler via Ale <ale@ale.org<mailto:ale@ale.org>> wrote:<br><br><br><br>It's a balancing act. Without abstractions you have more work. On a grand scale, this more work can be too much work. With an abstraction you have less work, but you are in tyrannical situation where the abstraction enforces your hosts to conform in some way to what it wishes to work with.<br><br>Automation works best with fewer variables. An environment with all the same hardware, same OS, same versions, etc would work well with Ansible. It would work well with the least experienced admin because any weird b behavior is most likely hardware failure. Abstractions work well in a world of rules. Honestly, I prefer the world where I write most of the rules.<hr>At my most gracious during the pre-coffee hours, I have to address this. The statement is misinformed " in tyrannical situation. where the abstraction enforces your hosts to conform in some way to what it wishes to work with”<br><br>Just like any of the automation we’ve all worked with, whether it be Puppet/Chef/Ansible/SALT or whether it be host lists and “for loops”, it is what you make of it, and you need to be expert in both the platform and its design patterns before you start making assumptions.<br><br>Take my east coast fleet @ about 300k nodes.<br><br>I have a large majority that are rather identical, not the least of which because they’re all auto scaling group members and all need to look identical. I have another percentage over that which require some special sauce of some sort that add a layer of abstraction upon the base abstraction. I have different layers of abstraction across the fleet that are added and layered in ways that provide the maximum of flexibility right down to $special_snowflake machines that have independent one-off configurations, but all applied via abstractions and layering.<br><br>All told, I’d say I’ve got nearly a dozen abstractions, but the combinations and potential configurations maginfies to many hundreds of potential configurations. 
Then, with parameterization and layering, two machines that are otherwise precisely the same can be configured entirely differently based simply on the fact that their IPs are different.<br><br>You can no longer look at these things as a sysadmin who automates; you have to look at them as an infrastructure developer who iterates: finding new and improved ways to compose abstractions, parameterizing input, iterating over dynamic groups of hosts, and so on. You only have limitations and some sort of “tyrannical situation” if you allow them to happen. These are development languages in a development paradigm for a reason. You systematize, and make into code, the very essence of your existing infrastructure, then do your best to make the moving parts fewer and more generic while maintaining flexibility and the idempotent power to stamp out annoying drift.<br><br>It works, and it’s definitely a better way to *DO* system administration these days, especially when we’re all being asked to do more with less and to manage more machines with fewer people.<br><br><br><br>—jms<hr>Ale mailing list<br>Ale@ale.org<br><a href="https://mail.ale.org/mailman/listinfo/ale">https://mail.ale.org/mailman/listinfo/ale</a><br>See JOBS, ANNOUNCE and SCHOOLS lists at<br><a href="http://mail.ale.org/mailman/listinfo">http://mail.ale.org/mailman/listinfo</a><br></pre></blockquote></div>
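<br>Jerald's layering point is easier to see in miniature than at 300k nodes. A toy sketch - layer names and settings are invented, tied to no particular tool - of how one resolution path hands two otherwise identical boxes different configs purely off their IPs:<br><pre>
#!/usr/bin/env python3
"""Toy layered-config resolver: base defaults, then per-site overrides
chosen by subnet, then per-host snowflake overrides. All values here
are made up for illustration."""
import ipaddress

BASE = {"ntp_server": "ntp.example.edu", "sshd_permit_root": False}
PER_SITE = {  # hypothetical: the site layer is selected by subnet
    ipaddress.ip_network("10.1.0.0/16"): {"dns": "10.1.0.53", "datacenter": "east"},
    ipaddress.ip_network("10.2.0.0/16"): {"dns": "10.2.0.53", "datacenter": "west"},
}
PER_HOST = {"10.1.4.20": {"sshd_permit_root": True}}  # one-off snowflake

def resolve(ip):
    """Merge layers in priority order: base, then site, then host."""
    addr = ipaddress.ip_address(ip)
    config = dict(BASE)
    for net, overrides in PER_SITE.items():
        if addr in net:
            config.update(overrides)
    config.update(PER_HOST.get(ip, {}))
    return config

print(resolve("10.1.4.20"))  # east settings plus the snowflake override
print(resolve("10.2.9.7"))   # same logic, entirely different result
</pre>Nothing about the two machines differs except the address; every difference in outcome comes from data, not hand edits.<br><br>-- <br>Computers amplify human error<br>Super computers are really cool</body></html>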