[ale] KDE, Gnome, XFCE OT?

Beddingfield, Allen allen at ua.edu
Mon Mar 28 09:47:52 EDT 2016


It is not worth my time and effort to tune a VM to fit in a tiny amount of space.  VMware does an excellent job of "thin provisioning" the memory in use, and ditto for our SAN with the storage.  I know people who spend a lot of effort on "right sizing" virtual machines, but when the virtualization platform does a good enough job of it (as opposed to the old days, when giving a VM 2G meant a hard allocation of 2G), it is not worth the effort.  No matter how much you plan, at some point you are going to have to take the VM down to add more memory.
My approach to capacity planning is to always have enough capacity in reserve that I don't have to worry about it, and to allocate enough spare to each VM that I don't have to worry about that, either.  Yes, I know there is always the theoretical case where something goes awry across all the systems at once and consumes resources the VMs would not otherwise be using, but if that happens, you have bigger problems anyway.

As for overhead from monitoring, etc., our biggest resource eater is Tripwire Enterprise.  That thing is a bloated Java-based monstrosity.  I have to limit how much memory it can use, or it will consume everything available.  In many cases, it is using more resources than the actual application on the server.
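For anyone fighting a similar Java memory hog, the usual lever is capping the JVM heap with the standard -Xms/-Xmx flags.  Where the Tripwire agent actually picks up its JVM options is install-specific, so treat this as a generic sketch rather than Tripwire documentation (the jar name is a made-up placeholder):

    java -Xms512m -Xmx2048m -jar whatever-app.jar

With a hard -Xmx ceiling, the garbage collector works within a fixed budget instead of growing the heap to whatever the OS will hand it.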

Allen B.
--
Allen Beddingfield
Systems Engineer
Office of Information Technology
The University of Alabama
Office 205-348-2251
allen at ua.edu

________________________________________
From: ale-bounces at ale.org [ale-bounces at ale.org] on behalf of DJ-Pfulio [DJPfulio at jdpfu.com]
Sent: Monday, March 28, 2016 8:37 AM
To: ale at ale.org
Subject: Re: [ale] KDE, Gnome, XFCE OT?

Supporting 200 email accounts on a 512MB box is possible.

Certainly there are times when more CPU/RAM is needed, but most servers
in most data centers run at around 13% utilization. Why? "Bigger is
better" syndrome. (About 5 points of that utilization is just the
systems monitoring. ;) ) I've heard of people using 2G of RAM as a
minimal VM. Sounds like the team would rather throw RAM at the problem
than tune for actual needs. Sometimes that is a reasonable rough
starting point for a green-field deployment ... but if 2G were really
the minimum, why would Amazon and GoDaddy offer multiple, smaller
system sizes?

I would never try to put a geospatial DB on 2G of RAM. 32G of RAM is
more like it for that need.

There are methods to reduce CPU/RAM load that aren't used nearly enough.
Micro-caching, for example: caching results for 5 seconds at a time
barely impacts the end user, if at all, but hugely reduces DB load on
read-mostly data. (Transactional DB requirements are different.) There
are certainly cases where more RAM genuinely is needed; it's impossible
to list them all, but assuming more RAM is always the solution is
wasteful.
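
A minimal sketch of the idea in Python, assuming an app layer sitting
in front of the DB (fetch_report and its fake query are hypothetical
stand-ins, not anyone's real code):

    import functools
    import time

    def micro_cache(ttl_seconds=5.0):
        """Serve cached results for ttl_seconds before hitting the backend."""
        def decorator(func):
            cache = {}  # args -> (expires_at, value); unbounded, positional args only
            @functools.wraps(func)
            def wrapper(*args):
                now = time.monotonic()
                entry = cache.get(args)
                if entry is not None and entry[0] > now:
                    return entry[1]      # cache hit: zero DB work
                value = func(*args)      # at most one DB hit per key per window
                cache[args] = (now + ttl_seconds, value)
                return value
            return wrapper
        return decorator

    @micro_cache(ttl_seconds=5.0)
    def fetch_report(report_id):
        time.sleep(0.2)  # stand-in for an expensive read-only query
        return {"id": report_id, "rows": []}

Under load, a few thousand identical requests in a 5-second window
collapse into a single query, and staleness is bounded at 5 seconds --
a trade read-mostly data usually tolerates.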

There are 3 main things to watch when deploying a system:
* RAM use (including virtual memory/swap)
* CPU use
* I/O (network, disk, etc.)
I've seen people assume that more RAM will solve a CPU issue. It won't,
but it might solve an I/O issue (if swapping is the problem).  This
isn't rocket science.
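
A rough sketch of watching all three on a Linux box with nothing but
/proc (field layout per proc(5); vmstat, sar, or psutil are the less
hand-rolled routes):

    import time

    def cpu_busy_fraction(interval=1.0):
        """Sample the aggregate 'cpu' line of /proc/stat twice and diff."""
        def snapshot():
            with open("/proc/stat") as f:
                fields = [int(x) for x in f.readline().split()[1:]]
            return fields[3] + fields[4], sum(fields)  # idle+iowait, total
        idle1, total1 = snapshot()
        time.sleep(interval)
        idle2, total2 = snapshot()
        return 1.0 - float(idle2 - idle1) / (total2 - total1)

    def mem_available_mib():
        """MemAvailable is the kernel's own estimate of usable RAM."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) // 1024  # kB -> MiB

    # Disk and network counters live in /proc/diskstats and /proc/net/dev;
    # the same sample-twice-and-diff trick applies there.
    print("CPU busy: {:.0%}".format(cpu_busy_fraction()))
    print("RAM available: {} MiB".format(mem_available_mib()))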

If the system is doing 10K TPS, a ballpark guess isn't sufficient. Real
data and facts are necessary to properly architect the total solution,
including transactions, read-mostly requests, DR, and reporting.
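
Even crude arithmetic shows why. A sketch with deliberately made-up
inputs (the whole point being that only measurement gives the real
ones):

    # Back-of-envelope sizing. Every input is an assumption for
    # illustration; measure your own before believing the output.
    tps = 10000             # transactions/second (the figure above)
    avg_row_bytes = 512     # assumed bytes touched per transaction
    hot_window_s = 3600     # assumed window the data must stay cache-hot

    working_set_gib = tps * avg_row_bytes * hot_window_s / 2.0**30
    print("Working set: ~{:.0f} GiB".format(working_set_gib))  # ~17 GiB

Change any input by a factor of a few and the answer moves by an order
of magnitude either way, which is exactly why the ballpark guess fails
at 10K TPS.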


On 03/28/2016 08:28 AM, Lightner, Jeff wrote:
> No server needs more than 2 GB unless running Java?!
>
> That may be true for small front-end web servers, but if you're
> running major DBs or apps, 2 GB is NOT plenty even if they're not
> Java-based. Even busier web servers use more than 2 GB.


