Everywhere I’ve worked, there are a lot of servers lying around that aren’t doing much. A fundamental benefit of virtualization is better utilization of resources: in a virtualized environment, you *DON’T* have a bunch of unused capacity.
Hardware acquisitions work something like this:
- End users (with money) decide they need some software they can’t live without.
- They ~~whine~~ make presentations and convince the boss to buy it.
- Someone finally asks the computer geeks what it’ll take to run it. We have no idea, but don’t want to say that we don’t know.
- Since we don’t want to under-specify, and the end users have money, we tell them to get the biggest damn server we think they’ll buy.
- We get the app, we load it, and they don’t really use it much.
- The Big Server sits idle at 10% utilization.
- Lather, Rinse, Repeat.
That’s where racks of unused servers come from.
That’s the problem virtualization solves. If I run your application in a virtual machine, I can “right-size” the VM so it has exactly what it needs, and no more. When I use servers to host virtual machines, I can consolidate dozens of applications onto one server. Users stay happy, computer geeks stay happy; the only ones not happy are the hardware vendors.
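To make the consolidation argument concrete, here’s a toy back-of-the-envelope sketch. The numbers and the `consolidate` helper are hypothetical (nothing here comes from a real capacity planner): right-size each app’s VM to its observed peak plus some headroom, then first-fit pack the VMs onto hosts to see how many physical servers you actually need.

```python
# Toy consolidation estimate -- all figures are made-up illustrations.
HOST_CPU_GHZ = 32.0   # assumed capacity of one physical host
HOST_RAM_GB = 128.0

# (peak CPU in GHz, peak RAM in GB) observed per app -- each of these
# originally got its own "Big Server" idling at ~10% used.
apps = [(1.5, 6), (0.8, 4), (2.0, 8), (0.5, 2), (1.2, 6),
        (0.6, 3), (1.8, 10), (0.9, 4), (1.1, 5), (0.7, 3)]

def consolidate(apps, cpu_cap, ram_cap, headroom=1.25):
    """First-fit pack right-sized VMs (peak * headroom) onto hosts.

    Returns the number of physical hosts needed.
    """
    hosts = []  # each entry is [cpu_used, ram_used]
    for cpu, ram in apps:
        need_cpu, need_ram = cpu * headroom, ram * headroom
        for h in hosts:
            if h[0] + need_cpu <= cpu_cap and h[1] + need_ram <= ram_cap:
                h[0] += need_cpu   # VM fits on an existing host
                h[1] += need_ram
                break
        else:
            hosts.append([need_cpu, need_ram])  # spin up another host
    return len(hosts)

print(f"{len(apps)} dedicated servers -> "
      f"{consolidate(apps, HOST_CPU_GHZ, HOST_RAM_GB)} virtualized host(s)")
```

With these invented numbers, all ten formerly-dedicated workloads fit comfortably on a single host, which is exactly the rack-of-idle-servers problem the story above describes.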
So don’t put an OS directly on a server. Ever. (I suppose there may be edge cases where you might…) Run a hypervisor like XenServer or VMware ESX on servers, and run all applications as virtual machines.
(That’s not me, BTW.)