[ale] Containers

DJ-Pfulio DJPfulio at jdpfu.com
Sat Oct 3 11:07:49 EDT 2020


On 10/3/20 8:55 AM, Jim Kinney via Ale wrote:
> Why one application per container? Why not one container per
> project?
> 
> Take the basic web pile: web gui front end, logic layer, and backend
> database. The only part that changes in use is the data stored by the
> backend. Pre-container days it all ran on a single host. Or virtual
> host. Or maybe the database ran on a beefy system and multiple
> front/logic layers fed it.

Do you remember when different parts needed different libraries, so we
got to pick 1 for the system and manually compile the others for each
app, then create a START script to modify the LD_LIBRARY_PATH? After
doing that, the admin was always in the middle of every change. It seems
even worse today because application devs seldom seem to know much
about the OS. IME, Java app devs in corporate environments are THE WORST.
Of course, there are exceptions.
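
Something like this hypothetical wrapper, to be concrete (the app name
and paths are made up for illustration):

    #!/bin/sh
    # Hypothetical start script: point one app at its privately compiled
    # libraries without touching the system-wide copies, then run it.
    APP_HOME=/opt/legacy-app
    export LD_LIBRARY_PATH="$APP_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$APP_HOME/bin/legacy-app" "$@"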

> If the project requires multiple parts to run, why not put them all
> in a single container? Assuming, of course, the size of things makes
> sense.

Dependency issues. The libraries that different parts need often
conflict. There can be hardware dependencies too, for some applications.
Abstracting away the HW is good.

Upgrade risks. If upgrading one app fails, all the others are down too.

Virtual machines solve these things too, but with 10-100x more overhead.
A VM might use 6 GB of disk; a container might use 60-600 MB of disk.
And if the container only supports running one application, all those
other tools aren't available if a cracker breaks into that environment.
If all a container can do is run MariaDB, it isn't much of a jumping-off
point for attacking other systems, especially when there isn't even a
shell provided.
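
For the web pile above, one-service-per-container looks roughly like
this; the GUI and logic image names are invented for the sketch, only
mariadb is a real image:

    # Each tier runs in its own container on a shared network; only the
    # database container holds persistent data.
    docker network create webpile
    docker run -d --name db --network webpile \
        -e MARIADB_ROOT_PASSWORD=changeme \
        -v db-data:/var/lib/mysql \
        mariadb:10.5
    docker run -d --name logic --network webpile example/logic-layer
    docker run -d --name web --network webpile -p 80:80 example/web-gui

Upgrading the logic layer then means replacing only that one container;
a failed upgrade doesn't take the database down with it.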

> I ask because I've run into the issue where a critical application,
> composed of multiple containers across multiple hosts, fails because a
> host is rebooted, a network issue, or other normal problems.
> -- 
> Computers amplify human error
> Super computers are really cool

Critical applications don't have only one provider of each service, and
efforts would certainly be made to place any redundant providers on
different physical hardware.

If an application is "critical" but isn't funded the way a "critical"
application should be, with infrastructure and support people, that's a
management problem.

I've seen lots of different companies try to address these issues in
different ways. Having a CEO scream about an outage is the least
effective. But if the CEO didn't know that a "critical application" was
full of single points of failure, then that is IT's fault. The best
places I've seen handle C-suite notification with a process built on
paper forms and required signatures. The C-suite can't address a
shortcoming without being told there is one. And if they refuse to sign,
then the users can get stuffed; it isn't a critical application. Sorry.

Just as every application needs backup and DR plans, every application
needs to be rated for criticality based on actual business needs. All
applications are important, but for some, being down a week doesn't
really matter. Other applications cost the business millions of dollars
if they are down even one hour. If a one-hour outage costs $1M and
putting redundant systems into place costs $200K, it seems pretty clear
where the money should go. There are "tools" (i.e. questionnaires)
available online to help estimate system availability based on the
infrastructure deployed. It has been years since I used them, but a
single server with normal internal redundancy only rates about 95%
available on those forms. That works out to 400+ hours of unplanned
downtime yearly for a system running 24/7/365 (8760 hrs/yr).
Statistically, we all know systems aren't actually down that much; a
single computer might see a few 5-hour outages in a year. There is a
difference between what can be guaranteed and what luck can deliver.
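
To spell out that 400+ figure: 8760 hrs/yr x (1 - 0.95) = 438 hrs/yr of
unplanned downtime allowed by a 95% availability rating.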

