I think you're hinting at the issue: people (beancounters in particular) seem pretty OK trading security for cost savings.

For the startup with $0, by-the-hour/job/transaction hosting makes sense. There's a point where some things come back in-house, and that's the same discussion as when companies stopped leasing computer time 30 years ago, bought servers, and built in-house data centers.

Moving to leased equipment, aka cloud, has costs and advantages. I moved ALE and my other web hosting to a dedicated machine I lease because it was WAY cheaper than running it in-house. But I still have to provide local backup for when it blows up (which I need to finish configuring and test, sigh).

The thing I see with cloud is that it allows a concentration of lower-skill, lower-paid people to run the gear. The rack access team doesn't need to know anything other than cable plus socket equals food. Anyone ever sat on a call with AWS ParallelCluster support? Even at the third tier they were basically useless.

It's really cool stuff, though. But having a crapload of it doesn't make it scale. It just scales problems.

HA will always be a beast in all ways: technical, network, financial, etc. But really, does everything NEED five 9s? Could, say, Slashdot drop offline for a day or two without loss of life?

There's a cost to large stuff that doesn't bear out in any obvious manner. A big datacenter uses big power and must be kept full to justify its cost. Acres of former farmland are bulldozed for instant retrieval of cousin Ethyl's pot-bellied pig pictures, the NSA's recordings of everything, and the files that Google, Facebook, and everyone who never deletes anything keep forever. Seems cooling those monsters is a problem - ask Belgium about the lack of living things downstream of a Google datacenter due to hot river water.

And then there's the loss of farmland. I like food. Especially barley-based liquids.

Unicorn farts power the white papers touting CLOUD CLOUD CLOUD, the same way tobacco was harmless and burning oil could continue forever with no problems.

New shit. New assholes. Same stench. In 10-15 years the push to bring it all home will start. Too bad there are only gonna be a few dozen people who know how to plug in a machine by then.

Old habits die hard. So I've stopped being a nun.

Existential rant over. Gotta go convert ideas into heat.

On May 20, 2021 3:43:16 PM EDT, Leam Hall via Ale <ale@ale.org> wrote:
<pre class="k9mail">I'm an old guy, and I'm happy to face reality. Don't get me wrong; I'm not saying it's all fluffy unicorn farts. But there are a few issues that drive this.<br><br>1. People don't care about security enough to pay for it.<br><br>People still shop at Target, Experian is still in business, banks still offer on-line banking, and most people still have credit cards. Either accept that you value convenience more than security, or do some drastic life changing.<br><br><br>2. Abstraction and virtualization are mandatory.<br><br>By count, most Linux machines either run on a virtual host (KVM, Docker, AWS Image, VMWare) or are highly controlled and blocked off (Android). Yes, Jim and his HPC toys are there, but they are the exception. Most of us don't get to play with a Cray. Even with Linux on bare metal, the udev/HAL tries to abstract the hardware so the applications don't have to have device drivers embedded. So there are at least a few layers of abstraction between the user and the metal.<br><br><br>3. Economics pays.<br><br>Servers turn money into heat, unless you have an application running. Let's use the standard 3 tier app; database, middleware, and webserver. For security, each of those needs to be a separate server. If you want bare metal, you're talking three servers. But that means you have three single points of failure unless you double the server count and make your application highly available. Now, that means you need someone with OS skills as well as a few years of experience, HA don't come cheap. Don't forget the network engineer for your firewalls, routers, and switches. You also need a management server (Ansible) unless you're going to build and maintain all these snowflakes by hand, so you're up to 7 physical servers, one firewall, and a couple network devices. You probably want a NAS for drive storage and a backup server for, well, backups. More hardware. Sadly, most physical boxes are only at 5-10% utilization. So you have an RHCE level person, a CCNA level person, and you're probably<br> at a dozen physical devices and a quarter mil per year for salary and benefits. Until you realize that being one deep puts you at risk, so you get two each. That doesn't even count your developer staff, this is just infrastructure.<br><br><br>Or...<br><br>Let your dev staff use AWS Lambda, S3, and DynamoDB. Be able to build from a dev's workstation, and set up for deploying to a second availability zone for high availability. You'd need one or two AWS cloud people, so your infrastructure staffing costs are cut in half. You don't have to rack and stack servers, nor trace and replace network cables at 0300. If you really want an OS underneath, for comfort or because you haven't coded your application to be serverless, you can use EC2 and right-scale your nodes. That also means your staff can work from about anywhere that has a decent internet connection, and if your building loses power, your application doesn't.<br><br>I know AWS has external security audits, and you can inherit their controls for your artifacts. AWS security is enough for the US DoD, so likely more than sufficient for most other use cases. I do not know much about Digital Ocean or Google Compute, but my bet is they are working to get a share of that same market.<br><br><br>4. The real driver for serverless/microarchitecture/containers.<br><br>It's not about circumventing security (though some devs do that), nor is it about always running as root (again, for smart devs, this ain't it). 
> 4. The real driver for serverless/microservices/containers.
>
> It's not about circumventing security (though some devs do that), nor is it about always running as root (again, for smart devs, this ain't it). It's about reducing complexity. The fewer moving parts an application host has, the fewer changes the development team has to code around. I just checked three Linux nodes; they have 808, 527, and 767 packages, respectively. With an AWS Lambda-based application, I pick the runtime (Python 3.8, Node.js 14, etc.), add just the packages my app specifically needs, and then test that. In truth, the reduced package footprint can increase security. Nor do I have to wait for Red Hat or Oracle to package the version of an application I need; I can do that myself. Yes, it means I need to be aware of where that code comes from, but that's not an infrastructure issue. Devs have to do that in the cloud or on metal.
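A quick sketch of what "add just the packages my app needs" can look like in practice: a small build script that vendors a pinned dependency list into a staging directory and zips it together with the handler for upload. The handler.py filename, the requests pin, and the output path are assumptions for the example, and it assumes pip is on the PATH; none of it comes from the original mail.

    # Hypothetical Lambda packaging script: install only the declared
    # dependencies into ./build, copy the handler in, and zip the result.
    import shutil
    import subprocess
    import zipfile
    from pathlib import Path

    DEPS = ["requests==2.31.0"]   # the app's only third-party dependency (assumed)
    BUILD = Path("build")
    PACKAGE = Path("lambda_package.zip")

    def build_package():
        shutil.rmtree(BUILD, ignore_errors=True)
        BUILD.mkdir()
        # Vendor just the packages the app actually imports.
        subprocess.run(["pip", "install", "--target", str(BUILD), *DEPS], check=True)
        shutil.copy("handler.py", BUILD / "handler.py")
        with zipfile.ZipFile(PACKAGE, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in BUILD.rglob("*"):
                zf.write(path, str(path.relative_to(BUILD)))
            print(f"wrote {PACKAGE} with {len(zf.namelist())} entries")

    if __name__ == "__main__":
        build_package()

Compare that handful of files to the 800-odd packages on a general-purpose node: less to audit, less to patch, less that can change underneath you.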
> 5. In the end, success matters.
>
> I've been the hardware, OS, datacenter, and network person; I understand the basics of how these things work. AWS and similar are changing what we're used to. I find some of it uncomfortable, but I want to pay the bills. I'll change my habits so my family is provided for.
>
> Leam
>
> On 5/20/21 9:03 AM, DJ-Pfulio via Ale wrote:
>> Common sense isn't nearly as common as we all think.
>>
>> I recall, vaguely, thinking that all the "old guys" were just afraid of the great new tech too. Now I know better.
>>
>> On 5/19/21 9:53 PM, Allen Beddingfield via Ale wrote:
>>> I remember being at an event several years back where a group of 20-something web hipsters were doing a session on how they had replaced the legacy client/server setup at a corporation with some overly complicated, in-house-built thing mixing all sorts of web technologies and databases in containers running at a cloud provider. They were very detailed about their decision to put it in containers, because all the infrastructure people at that company were so behind the times with their security models, insisting on not running things as root, firewalls, blah, blah...
>>> Quite a few people left shaking their heads at that point. I was sitting next to a guy FROM a major cloud hosting provider, who almost choked on his coffee laughing when one of them said, "It is just a matter of time before Dell and HP are out of the server business - no one needs their servers anymore! Everything will be running in the cloud instead!"
>>>
>>> I still argue that the main motivating force behind containers is that developers want an easy way to circumvent basic security practices, sane version control practices, and change control processes. There are plenty of valid use cases for them, but sadly, that is the one actually driving things. We have a whole generation of developers who weren't taught to work within the confines of the system presented to them. No one ever prepared them for enterprise IT. Now we have heaven knows what software, running heaven knows what version, in some container that developers can put online and take offline at will. Who audited that random base Docker image they started with? Are patches applied to what is running in there? Is it secretly shipping off sensitive data somewhere? Who knows. Unless you defeat the whole purpose of a container, you don't have any agents on the thing to give you that data.
>>>
>>> Next, I'm going to go outside and yell at people to get off my lawn...
>>>
>>> Allen B.

-- 
Computers amplify human error
Super computers are really cool