[ale] [ALE] So the winner is?

Leam Hall leamhall at gmail.com
Thu May 20 15:43:16 EDT 2021


I'm an old guy, and I'm happy to face reality. Don't get me wrong; I'm not saying it's all fluffy unicorn farts. But there are a few issues that drive this.

1. People don't care about security enough to pay for it.

People still shop at Target, Experian is still in business, banks still offer online banking, and most people still have credit cards. Either accept that you value convenience more than security, or make some drastic life changes.


2. Abstraction and virtualization are mandatory.

By count, most Linux machines either run on a virtual host (KVM, Docker, AWS images, VMware) or are highly controlled and locked down (Android). Yes, Jim and his HPC toys are there, but they are the exception. Most of us don't get to play with a Cray. Even with Linux on bare metal, the udev/HAL layer tries to abstract the hardware so applications don't have to embed device drivers. So there are at least a few layers of abstraction between the user and the metal.


3. Economics pays.

Servers turn money into heat unless you have an application running. Let's use the standard 3-tier app: database, middleware, and webserver. For security, each of those needs to be a separate server. If you want bare metal, you're talking three servers. But that means you have three single points of failure unless you double the server count and make your application highly available.

Now you need someone with OS skills and a few years of experience; HA doesn't come cheap. Don't forget the network engineer for your firewalls, routers, and switches. You also need a management server (Ansible) unless you're going to build and maintain all these snowflakes by hand, so you're up to 7 physical servers, one firewall, and a couple of network devices. You probably want a NAS for drive storage and a backup server for, well, backups. More hardware. Sadly, most physical boxes sit at only 5-10% utilization. So you have an RHCE-level person, a CCNA-level person, and you're probably at a dozen physical devices and a quarter mil per year in salary and benefits. Until you realize that being one deep puts you at risk, so you get two of each. That doesn't even count your developer staff; this is just infrastructure.


Or...

Let your dev staff use AWS Lambda, S3, and DynamoDB. They can build from a dev's workstation and set up deployment to a second availability zone for high availability. You'd need one or two AWS cloud people, so your infrastructure staffing costs are cut in half. You don't have to rack and stack servers, nor trace and replace network cables at 0300. If you really want an OS underneath, for comfort or because you haven't coded your application to be serverless, you can use EC2 and right-size your nodes. That also means your staff can work from about anywhere with a decent internet connection, and if your building loses power, your application doesn't go down.
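
To make that concrete, here's a rough sketch of what one piece of that serverless stack might look like. The table name, fields, and environment variable are invented for the example; treat it as a sketch, not anyone's production code:

    # Sketch only: a hypothetical Lambda handler that stores an API
    # Gateway POST body in a DynamoDB table. Names are made up.
    import json
    import os

    import boto3  # already present in the Lambda Python runtime

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ.get("TABLE_NAME", "orders"))

    def handler(event, context):
        """Store one order record; API Gateway provides the event."""
        body = json.loads(event.get("body") or "{}")
        table.put_item(Item={
            "order_id": body["order_id"],
            "customer": body.get("customer", "unknown"),
        })
        return {"statusCode": 200,
                "body": json.dumps({"stored": body["order_id"]})}

API Gateway handles the web tier, Lambda is the middleware, DynamoDB is the database, and there's no OS underneath for me to patch or rack.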

I know AWS has external security audits, and you can inherit their controls for your artifacts. AWS security is enough for the US DoD, so likely more than sufficient for most other use cases. I do not know much about Digital Ocean or Google Compute, but my bet is they are working to get a share of that same market.


4. The real driver for serverless/microservices/containers.

It's not about circumventing security (though some devs do that), nor is it about always running as root (again, for smart devs, this ain't it). It is about reducing complexity. The fewer moving parts an application host has, the less change the development team has to code around. I just checked three Linux nodes, and they have 808, 527, and 767 packages, respectively. With an AWS Lambda-based application, I pick the runtime (Python 3.8, Node.js 14, etc.), add just the packages my app specifically needs, and then test that. In truth, the reduced package footprint can increase security. Nor do I have to wait for Red Hat or Oracle to package the version of an application I need; I can do that myself. Yes, it means I need to be aware of where that code comes from, but that's not an infrastructure issue. Devs have to do that in the cloud or on metal.
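
As a rough illustration of that footprint, the whole deployable artifact for a small function can be two files. The package, version, and URL below are examples I picked for the sketch, not a recommendation:

    # requirements.txt -- the only third-party package this function needs
    requests==2.25.1

    # handler.py -- pings a made-up upstream health endpoint
    import requests

    def handler(event, context):
        resp = requests.get("https://example.com/health", timeout=5)
        return {"statusCode": resp.status_code, "body": resp.text[:200]}

Auditing that takes minutes. Auditing 800 packages on a general-purpose node does not.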


5. In the end, success matters.

I've been the hardware, OS, datacenter, and network person; I understand the basics of how these things work. AWS and similar are changing what we're used to. I find some of it uncomfortable, but I want to pay the bills. I'll change my habits so my family is provided for.


Leam




On 5/20/21 9:03 AM, DJ-Pfulio via Ale wrote:
> Common sense isn't nearly as common as we all think.
> 
> I recall, vaguely, thinking all the "old guys" just were afraid of the great, new, tech too.  Now I know better.
> 
> 
> On 5/19/21 9:53 PM, Allen Beddingfield via Ale wrote:
>> I remember being at an event several years back, where a group of 20-something web hipsters were doing a session on how they had replaced the legacy client/server setup at a corporation with some overly complicated in-house built thing mixing all sorts of web technologies and dbs in containers running at a cloud provider.  They were very detailed about their decision to put it in containers, because all the infrastructure people at that company were so behind the times with all their security models, insisting on not running things as root, firewalls, blah, blah...
>> Quite a few people left shaking their heads at that point.   I was sitting next to a guy FROM a major cloud hosting provider, who almost choked on his coffee while laughing when one of them said that "It is just a matter of time before Dell and HP are out of the server business - no one needs their servers anymore!  Everything will be running in the cloud, instead!"
>>
>> I still argue that the main motivating force behind containers is that developers want an easy way to circumvent basic security practices, sane  version control practices, and change control processes.  There are plenty of valid use cases for them, but sadly, that is the one actually driving things.  We have a whole generation of developers who weren't taught to work within the confines of the system presented to them.
>> No one ever prepared them for enterprise IT.  Now we have heaven knows what software, running heaven knows what version, in some container that developers can put online and take offline at will.  Who audited that random base Docker image they started with?  Are patches applied to what is running in there?  Is it secretly shipping off sensitive data somewhere?  Who knows.  Unless you defeat the whole purpose of a container, you don't have any agents on the thing to give you that data.
>>
>> Next, I'm going to go outside and yell at people to get off my lawn . . .
>>
>> Allen B.


-- 
Site Reliability Engineer  (reuel.net/resume)
Scribe: The Domici War     (domiciwar.net)
General Ne'er-do-well      (github.com/LeamHall)

