[ale] Question
Jim Kinney
jim.kinney at gmail.com
Wed Dec 28 18:58:26 EST 2011
It's not about things being "easily worked through". It's about things
being rapidly worked through when the shit hits the fan. An expert in
Ubuntu will fall on their face in a crisis working on a Red Hat system.
Granted, there are a few people (I am not one of them) who like to keep
in their heads all the nuances of the twiddly minutiae of as many
distros as they can. Been there, done that, discovered the folly of the
process. Once you have to admin 100+ systems, you DON'T WANT (!!!!!)
minutiae differences causing a work-flow breakdown. So you settle on a
single distro (if you're lucky), or maybe two: one for servers and one
for desktops.
The physical analogy I can give is the auto mechanic. One guy has a bunch
of tool boxes, with some sockets in one box and more in another. Neither box
has a socket rail, so the sockets are jumbled together. Another mechanic has
just a single box, and all of the sockets are neatly on a rail system in
size order. Your car is broken and you need it back on the road. Which
mechanic do you think will be able to work on your car the quickest? Yes,
there's some cost to maintaining the order of the tool box, but the payoff
comes at crunch time.
I write scripts to do nearly all the mess I do. Scripting to support
multiple distros is not feasible; it turns into a mess. The best tool for
that is Webmin. Go take a look at that and then decide if multi-distro adds
value to the people who write large checks.
Red Hat Enterprise Linux for the companies that can afford the support
contract with you as the admin, or CentOS for those that can't afford the
support contract, with you as admin. The key is you as admin. Many smaller
companies are looking at Ubuntu, not for technical reasons, but because the
vast numbers of fresh grads who've tinkered with Ubuntu are willing to work
long, long hours for meager wages to make it work. The lack of a
distro-standard way of doing things is fun for the admin but lousy for
continuity. Admins leave. New admins often have to start over because the
old admin had it all in their head.
So Red Hat has their way of doing things. Is it perfect? Nope. But they have
a bunch of technical writers getting paid to document the crap out of
everything so it becomes common knowledge. Unless Ubuntu puts up some
serious cash to catch up on docs, they will never stand a chance of
catching the RHEL freight train. More likely, they will wind up under the
wheels like a penny on a train track.
And yes, every package may come from a pristine upstream source for all the
major distros. But they all add their own tweaks, and it really is those
tweaks that make or break a sysadmin.
What do you want to be? A sysadmin, or an apache admin? Or a bind admin? Or
a PHP mechanic? I have to install one-off packages on RHEL systems using
tarballs or custom compiles or whatever. I hate doing it. It breaks the
nice, clean rpm -qa | grep foo process that lets me see _FAST_, across 300
servers, which machine has the wrong version of foo installed. So I make
rpms when I can. I've made rpms of ssh keys so I can add admin keys to
custom builds and verify they are intact. The most important first line of
defense when using rpms is the ability to run rpm -Va to compare the md5
sums of all installed files against what they were at installation. Where's
the config file for a package? rpm -qc foo. How about the docs for it?
rpm -qd foo. What about a list of every file installed with package foo?
rpm -ql foo. What about all install/remove scripts for package foo?
rpm -q --scripts foo.
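The ssh-key rpm trick can be sketched as a minimal spec file. Everything
below (package name, version, paths) is a hypothetical illustration of the
approach, not the actual spec I use:

```spec
# Hypothetical spec: package the admin team's authorized_keys so that
# rpm -V can later verify the file is intact on every custom build.
Name:           admin-ssh-keys
Version:        1.0
Release:        1
Summary:        Authorized ssh keys for the admin team (example)
License:        Internal
BuildArch:      noarch
Source0:        authorized_keys

%description
Installs the admin team's authorized_keys where sshd expects it.

%install
mkdir -p %{buildroot}/root/.ssh
install -m 0600 %{SOURCE0} %{buildroot}/root/.ssh/authorized_keys

%files
%attr(0600,root,root) /root/.ssh/authorized_keys
```

Once that's installed, rpm -V admin-ssh-keys flags any host where the key
file has been altered.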
These tools allow a high degree of automation across an army of machines.
Now add high-level tools like Satellite/Spacewalk for large-scale system
control/deployment (not a perfect tool, but more than I have written in
scripts over the past 10 years) and identity/security tools like FreeIPA
and Dogtag. It's really the difference between working a few machines here
and there and working hundreds or thousands. I've seen other tools like
Puppet; it's a great tool for managing a gazillion config files. It's part
of the process, but not the solution by itself.
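Those per-package queries scale out with nothing fancier than a loop. A
minimal sketch, assuming made-up host names and a package called foo; the
script only prints the ssh commands it would run, so replace the echo to
execute them for real:

```shell
#!/bin/sh
# Hypothetical fleet audit: spot the host with the wrong version of "foo".
# The host list would normally come from inventory; these names are made up.
PKG="foo"
HOSTS="web01 web02 db01"

audit() {
    for h in $HOSTS; do
        # rpm -q prints name-version-release, so an odd version stands out;
        # rpm -V re-checks md5 sums of installed files against the rpm db.
        echo "ssh $h rpm -q $PKG"
        echo "ssh $h rpm -V $PKG"
    done
}

audit
```

Pipe the real output through sort | uniq -c and the one oddball version
among 300 machines jumps right out.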
You can't scale a mechanic shop if they work on all makes and models of
cars. It takes too many custom tools, and the upkeep on that becomes too
unwieldy. Been there. Done that. Know the pain of having 30 servers that
are all different. I mashed all 30 into a single default build and my
consulting profitability went up 100%. Working on one machine then
translated into working on all 30 at once: same patches and same process,
with different customer data but all in the same paths, scattered around
the metro area.
And now I put on my snarky old geezer hat :-) Managing systems is nothing
compared to keeping the admins focused on what needs to be done for the big
picture while balancing the inherent ADHD that runs rampant in all admins.
One huge trick is to thin out the variances between systems so the admins
won't bog down scripting for a pile of corner-case events and can focus on
moving ahead. That understanding comes with experience and is usually
served with a bucket of misery as an appetizer.
OK. snarky geezer hat off.
To sum up: if you want employable experience and opportunities are lacking
to get that 5 years of Red Hat exposure, collect old crap hardware and make
it run using the closest distro that will send home a paycheck: CentOS. You
can use another if you like, but it will greatly deplete the marketability
of your skills. The choice is yours.
And yes, HR dweebs do look favorably on experience gained in home study. I
got a job in part because I bought some fiber channel gear off another
ALEer and tinkered with it at home. Once I made it do what I wanted it to
do, I added that to my resume. An HR person asked about it and I was blunt
in my explanation: "I've never done fiber channel on a job site, but I
bought a setup to test with, set up a three-server fiber channel circuit,
and used it to store backup images and other high-speed data sets." They
were impressed, as they should have been, that I would spend my own cash
and time to learn at home, and recommended me up the chain to the
interview. Again I had to explain the fiber channel use in the interview,
and I was conversant enough with the protocols and tools that I got the job
and promptly had to go reconfigure/fix a severely broken fiber channel
setup.
On Wed, Dec 28, 2011 at 5:13 PM, Michael B. Trausch <mike at trausch.us> wrote:
> On 12/28/2011 02:01 PM, Jim Kinney wrote:
> > On Wed, Dec 28, 2011 at 12:57 PM, Michael B. Trausch <mike at trausch.us
> > <mailto:mike at trausch.us>> wrote:
> > * GCC works the same on every Linux distribution, and its
> configuration
> > files follow the same format on every Linux distribution.
> >
> > true
> >
> > * Python works the same on every Linux distribution.
> >
> > true (mostly)
>
> Not sure I understand what you mean, there. They are all built from the
> same upstream source code, and AFAIK all the standard Python ways of
> obtaining Python packages work across all systems. Sometimes, OS
> distributors also package what they deem to be "stable" versions of
> Python libraries, but it is possible to instead use whatever is declared
> stable in the usual Python distribution channels.
>
> PHP works the same way, with its PEAR system.
>
> > * ISC software (e.g., BIND and DHCP client/server) use the same
> > configuration files and such on all Linux distributions.
> >
> > very true
> >
> > The same is true for most packages that are available on multiple
> > distributions. Some of the core distribution's processes are likely
> to
> > be different, in that they are specific to the distribution (e.g.,
> rpm
> > to Red Hat and derivatives), but IMHO it is most important to learn
> > about the underlying software, because that is what you're *really*
> > supporting and administering.
> >
> > Um, not quite so true. Each distro has "its way" of admin'ing the
> > system. If a Slackware guy gets on a Gentoo box, guess what, they are
> > not very effective. Sure, they know _what_ to do, but where?!
>
> Having used both Gentoo and Slackware, I'm lost yet again. Both use
> /etc for housing configuration of installed packages. Probably the most
> major difference between those systems is the fact that Slackware
> doesn't provide any more default configuration than do the upstream
> vendors. Slackware also doesn't (usually) apply patches such as what
> Gentoo uses. After all, Slackware's goal is to have a system that
> follows upstream's directions as closely as possible. Gentoo's goal is
> to have a system that is as flexible and useful as possible.
>
> > LSB not withstanding, all distro's put their configs in different
> > locations and with different formats. In some cases, the major package
> > has been compiled with extra patches so that the configs fit into the
> > distro locations automagically.
>
> Debian, Gentoo, Slackware, and (if I understand correctly) Red Hat all
> use /etc for global configuration. The names of configuration files for
> software that has an upstream is the same, though sometimes in a
> subdirectory of /etc (for example, /etc/apache or /etc/apache2 might be
> the Apache configuration directories used by two different systems).
> But that's something that can _easily_ be worked around, either manually
> or by using "find /etc -name ..." to find configuration files.
>
> It is true that some packages go through more than others, too. For
> example, Samba upstream puts its config file in ${PREFIX}/lib/smb.conf,
> while most OS distros put it in /etc/smb.conf or /etc/samba/smb.conf.
> Likewise, MySQL upstream uses /etc/my.cnf, while some distributions
> move it to /etc/mysql/my.cnf (such as Debian and relatives).
>
> > A good _modern_ admin doesn't just pull down the source tarball, make
> > configure, make, make install anymore.
>
> I don't disagree---at least in the usual case. I've encountered many
> situations where it is desirable to use fresher upstream code for one or
> another reason (usually it is features that drive that, not security
> fixes). For example, I have one Ubuntu server box in production that
> has an upstream Samba, because the one that was packaged didn't support
> Windows 7 clients (and it still doesn't since they won't backport
> features).
>
> > It matters not whether we're talking about Gentoo or Red Hat. The
> > biggest major difference between the two is the package management
> > system, and probably the second biggest major difference between
> them is
> > the init system and its configuration.
> >
> > EXACTLY! And all of those bits converge in different locations and
> > formats for every distro.
>
> However, none of the init systems are all that difficult to figure out.
>
> I firmly believe that the difference between a good administrator and an
> outstanding one is one that is capable of identifying the difference
> between the distribution and the software contained within it. There
> are many who I think would disagree with me on this point, but it seems
> silly to me to use ISC's DHCP or BIND dæmons without knowing anything
> about how they work.
>
> Packaging systems are something that a good administrator can learn in a
> few days' time, if they are familiar with any other. Most of them have
> features that are comparable from the point of view of the system
> administrator (though they are, of course, very different from the point
> of view of the software packager). And some administrators have more
> tolerance for deficient package managers than others, but that's beside
> the point. :^)
>
> > It is true that sometimes there are "major" differences that aren't
> in
> > the package or init systems, however those differences are usually
> > attributable to things like customizations in the building of
> > configuration files. Some distributions may use a macro system of
> some
> > sort to create the configuration files from a set of files
> maintained by
> > the distribution.
> >
> > In fact, RHEL uses the same kernel number (for the bean-counters) but
> > backports all bug-fix and security patches for the life of the product.
> > That makes for a non-standard process of administration. The admin has
> > to know what was done and where and how to deal with it.
>
> I thought that RHEL had a Red Hat-specific patchlevel for the kernel
> packages.
>
> > Personally, what I would like to see (and don't yet, and don't expect
> > to, honestly) is excellent "ad-hoc" configuration support for systems
> > that doesn't require in-depth knowledge of each program's
> configuration
> > formats and such, and likewise isn't purposely tied to a single
> > distribution.
> >
> > Once Ubuntu falls off the market flavor of the month, then RHEL can
> > finish the World Domination Tour :-)
>
> Hahaha! Over my cold, rotting corpse! :)
>
> > Seriously, it can't ever happen. Linux-land is diverse because people
> > perceive different needs and roll their own. RHEL/Fedora/Debian/Ubuntu
> > are the generic, one size fits all platforms (I can't see SuSe rising
> > from the grave it dug itself). Each serves a need.
>
> Indeed. Though the one thing in common between them all is the fact
> that they use upstream software from multiple projects and bring them
> together.
>
> No distribution that I know of makes substantial modifications to the
> upstream software (e.g., they don't go about eliminating or
> restructuring the configuration files), and in most cases, the biggest
> difference from upstream's process is that they override things like
> bindir, sysconfdir, prefix and so forth in whatever the package's build
> system is. They also save you some time in that they will usually have
> scripts that are run by the package manager which help to mitigate any
> changes in what upstream (or even the distributor) has done since the
> previous package version.
>
> > From an admin standpoint, employment has a greater likelihood if you are
> > skilled in RHEL over all else combined. Does this mean the beancounters
> > are running the show? Yep. Does this make the RHEL process bad? Nope.
>
> The biggest key to RH's success in the enterprise market is that it
> started focusing hard on the enterprise early on; it recognized that it
> couldn't do both consumer desktops and enterprise servers and desktops
> without being detrimental to both.
>
> The only real bitch I have with RH (or any distribution that uses the
> RPM package format) is that the package manager is far too easy to
> break. I managed to find breakage in RPM no less than 1 week after the
> last time I installed such a distribution, F15. Any system that suffers
> from the same limitations it had more than a decade ago is a system that
> I cannot put my trust in.
>
> But, I've gone on at length about RPM in the past, there really isn't
> any need to duplicate it in-depth. I would just like to see them fix
> the core format instead of stacking layers on for the purpose of trying
> to gloss over legacy bugs that never were fixed. Then they could fix
> the user-interface layers (such as yum and its friends) to be able to
> behave more sanely than they do now when they encounter strange
> situations, because many of those strange situations would disappear.
>
> > It does mean that the RHEL way is more exacting as it's designed for the
> > enterprise.
>
> But it still uses the upstream stuff.
>
> I think the big deal with Red Hat is that they do try to abstract away
> some of the underlying things. But I don't believe that is reason to
> not be fully cognizant of what's going on "under the hood", so to speak.
> This is one reason that I dislike (and distrust), for example,
> certification programs. They focus on learning a single thing (the
> distribution) and high-level tasks that take place on it, usually using
> the distributor's tools.
>
> I personally think that's all bass-ackwards. Learn the underlying
> software, its configuration files and other "plumbing" first; then learn
> about the distribution's "porcelain" for working
> with/configuring/maintaining/managing it.
>
> > It has tools and capabilities for a single admin to manage effectively
> > hundreds of different servers and thousands of similar workstations. And
> > it's the only bunch out there that's really poised to take over as
> > Redmond declines and Sun/Oracle becomes a one trick pony. And they play
> > nicely with the open source process. They buy closed products to bolster
> > their business and then spend more cash to open those products up and
> > eventually they become fully community-supported products with an
> > enterprise supported back-end.
>
> You won't see me argue against the (well-known) fact that Red Hat's
> existence and ongoing efforts are beneficial to the world of free
> software in general and GNU/Linux distributions in particular. That
> said, it isn't the system for me. Then again, I keep finding that no
> system really is, there are just systems that fit me "better" than
> others. I still want to make a system that fits me, someday, when I
> have the time. I think one of the big problems that we see today is
> that our systems are too monolithic, despite the fact that they need not
> be. Package managers could be far better than they are, all the way
> around, and I think that a truly great package manager would encourage
> modularity and make it possible to give the SA exactly as much control
> as they need, not only as much control as the packager or distributor
> assigns or delegates.
>
> In fact, I think that the perfect package manager would naturally be
> extensible into something that could be used for wide-scale management
> of enterprise or even larger environments.
>
> --- Mike
>
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo
>
>
--
James P. Kinney III
As long as the general population is passive, apathetic, diverted to
consumerism or hatred of the vulnerable, then the powerful can do as they
please, and those who survive will be left to contemplate the outcome.
- Noam Chomsky, 2011
http://heretothereideas.blogspot.com/