From Greg Porter's Wiki




Why virtualize?

Modern virtualization products, especially XenServer (now that Citrix is giving it away), allow smaller operations to run production the way the big shops do. Big shops can promise no downtime for production applications.

When I say "production", I mean:

  • The ability to properly utilize server CPU - in other words, user applications have what they need to properly run, but servers don't sit idle.
  • Applications are uncoupled from servers - applications run on virtual machines, and virtual machines can start and run on multiple servers.
  • The ability to service physical hosts with no application downtime. If a host needs work, you can evacuate all virtual machines to other hosts with XenMotion (the virtual machines, and the applications they run stay up during this).
  • If the optional license (Citrix Essentials) is purchased, the ability to do high availability (HA). When a physical host fails, HA will attempt to restart its virtual machines on surviving physical hosts automatically. Poor man's clustering!
  • If the optional license (Citrix Essentials) is purchased, the ability to do workload balancing (WLB). WLB will automatically move virtual machines between physical hosts so that CPU utilization stays within limits.

A minimum production environment for XenServer

For production you'll need:

  1. Two or more machines to serve as physical hosts that run the hypervisor (XenServer). These need to have fairly new 64-bit CPUs, and must support hardware virtualization if you want to run Windows virtual machines on them.
    • Multiple cores, the more cores the better.
    • The faster the CPUs, the better.
    • Ideally these hypervisor machines would be exactly the same. If they aren't the same, they should have very similar CPU specs. (If they are too dissimilar, XenServer will refuse to allow them to form a pool together, which you *REALLY* want to have.)
    • The more RAM you have the better. Probably 8GB each would be a start.
    • At least a couple of Gig NICs each.
    • You don't need to worry about drive space; you only need 20 GB or so for XenServer (or use an iSCSI hardware initiator NIC and boot from iSCSI, with no local drives at all).
  2. Shared storage, like a SAN. Must be iSCSI or Fibre Channel to use some of XenServer's advanced features. If you don't have a SAN already, don't have the money to buy one, and don't mind spending some time fiddling a bit, try Nexenta or Openfiler.
    • If using Openfiler, any old machine will do *IF* it has lots of disk, preferably good "enterprise" class disk (SCSI or SAS) on a hardware RAID card
    • Older servers work fine, you're interested in multiple SCSI spindles on a real RAID card
    • Memory doesn't matter much; a gig or two is fine
    • At least 2 Gig NICs (one for management, one for storage access)
  3. A separate Windows machine to run XenCenter, the XenServer management console. It can also do other stuff; any old Windows machine should be OK.
  4. A good fully redundant network infrastructure, especially if using iSCSI (and most people doing this on the cheap will be)
    • At least 2 separate Gig switches that support fancy features like jumbo frames (or 2 storage VLANs on separate switches). The point is that if one switch goes down, storage clients (like the hypervisors) can still access the storage server.
    • Every machine involved in storage (clients and servers) should have 2 NICs for storage. These should be 2 cards, not 2 ports on the same card.
    • Cable things up so that every machine involved in storage has 2 paths to the storage (client NIC one goes to switch one, which goes to server NIC one; client NIC two goes to switch two, which goes to server NIC two)
    • Configure every device involved (switches and NICs) to support jumbo frames
    • Make sure everything involved knows how to use the 2 paths (XenServers and Openfiler)
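
A quick way to sanity-check the jumbo frame setup from any Linux box involved (including a XenServer console) is sketched below. The interface name and storage server address are made-up examples:

```shell
# eth1 here is a hypothetical dedicated storage NIC.
# Set a 9000-byte MTU; the switches and the storage server
# must be configured to match, end to end.
ifconfig eth1 mtu 9000

# End-to-end check: ping the storage server with a large packet
# that is not allowed to fragment (8972 = 9000 minus IP/ICMP headers).
# If any device in the path can't pass jumbo frames, this fails.
ping -M do -s 8972 192.168.10.1
```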

That's really not too much gear: 2 machines for hypervisors, one machine for Openfiler, one Windows machine for management, a few extra NICs, and a couple of Gig switches. You could use workstation machines for everything, but workstations usually limit the maximum RAM you can install. Remember, the more RAM the better in hypervisors (virtual machines need the RAM).

With this setup you could run dozens of virtual machines. The upper limit for the number of virtual machines will be available RAM or CPU. If you only have 8GB of RAM in each physical host, and you give each virtual machine 1GB, then only 7 will start on each hypervisor (the hypervisor needs some, too).

Think about it. With 3 2U servers (6U total) you could virtualize a whole rack of existing servers. If you plan the virtualization scheme out carefully, you can convert existing servers to virtual machines running on servers you already have, with no new hardware needed. That's what XenConvert does: convert physical machines to virtual ones (P2V, physical to virtual).



Read the manuals. Really. Citrix has good manuals. Read the release notes, the install guide, and the administrator's guide.

The biggest hurdle typically will be setting shared storage up from scratch if you don't already have a SAN. The end goal is to be able to offer up multiple "disks" (also known as LUNs - logical units) to clients via iSCSI or Fibre Channel. You'll need a couple hundred GB per LUN, and eventually you will need at least a couple LUNs. You could make do with just one for now.

Most of the time, when offering up LUNs to clients, you try to be very sure that one, and only one, client can "touch" the LUN. If more than one machine can write to the same LUN at the same time, very bad things happen. The clients step on each other's disk with writes, and the file system gets very confused. By default, most SANs try to protect you from this situation.

Note that in using shared storage with XenServer, you *WANT* to have different physical hosts share the same LUN. It's designed to work this way. XenServer will handle locking and make sure that writes happen as intended to shared disks. This is the "secret sauce" that enables cool features like XenMotion.

The pre-reqs for XenServer and XenCenter are in the install guide. Make sure that you have what's needed.

One non-obvious thing to check is hardware virtualization support on your CPUs. Most newer CPUs support this - *BUT* - the support is often turned off by default in the BIOS. Look in the BIOS on your hypervisor hosts and make sure that it is on.
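
Before you install, one hedged way to check is to boot a Linux live CD on the host (bare metal, since a kernel already running under a hypervisor may hide the flags) and look at /proc/cpuinfo:

```shell
# "vmx" means Intel VT, "svm" means AMD-V. No output means the CPU
# lacks hardware virtualization, or the BIOS has it disabled.
egrep '(vmx|svm)' /proc/cpuinfo
```

After XenServer is installed, `xe host-param-get uuid=<host uuid> param-name=capabilities` should list hvm entries if hardware virtualization made it through.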

You only need 20GB or so of disk on the hypervisor hosts (the manual says a minimum of 16 GB). Since a production install uses shared storage for virtual machines, the only thing that the disks on the hypervisors store is the actual XenServer software. If you happen to have a couple of disks in these hosts, and you happen to have hardware RAID controllers on them, consider using the RAID controller to make a redundant RAID disk (say a mirror, like RAID-1) for XenServer to boot and run from. This is not required; XenServer hosts are fairly rugged, disposable, and interchangeable if using shared storage. (In other words, if you lose a physical host, you don't have to attempt repair on it, you just reload it.) Don't spend any extra money on disks for physical hosts; one small disk is fine.

Installing XenServer

Read the manuals. Really. It's pretty much all in there. I'm not going to duplicate the manuals here. Read the Citrix docs.

It's pretty straightforward. Burn media. Boot from media. Answer the (simple) questions. It's very much like installing Linux from media. No big surprises.

During the install, it'll ask if you want to install the Linux Pack. If you are going to run Linux virtual machines, you should. That means you have to burn the Linux Pack to separate media, and insert it when prompted. If you load this on one of your physical hosts, remember to load it on all of them. (Remember, you are making a pool of physical hosts, where you can move running virtual machines between them at will. They must be loaded the same.)

Installing XenCenter

XenCenter is the management GUI for XenServers that runs on Windows. It's easy to install and easy to use. You could make do without it, but why? It's possible to run XenCenter on a Windows virtual machine, and some people like that, but it makes more sense to pick some Windows workstation you have access to, and run it from there. (If it's on a vm, and the vm or physical host has issues, then your management console will not be accessible.)

The XenCenter pre-reqs and installation documents are in the XenServer Installation Guide. Again, read the book; it's straightforward.

It's pretty straightforward. The software is on the XenServer disc. Go to the client_install directory and click on XenCenter.msi. Answer the (simple) questions.

License the physical hosts

Once you have loaded all of your physical XenServer hosts, get XenCenter going so you can manage them. One of the first things you should do is license them all. The easiest way to do this is with XenCenter.

Start XenCenter. Connect to all of the physical hosts (you'll need the password the first time). You'll probably get a dialog box warning you that you are running unlicensed. That's OK; we'll fix that.

In the XenCenter console, click on Tools / License Manager to start the License Manager. Click on Select All to select all XenServer physical hosts and click Activate. A browser window will open and you will be taken to Citrix’s website to fill out a form to get a free license key.

Activation web page

After you submit the form, you will soon receive a license.xslic file from Citrix in your email. After you get the email, save the license.xslic file to somewhere you can get to from XenCenter. Start XenCenter, click on a host. Click on Server / Install License Key from the XenCenter menu. Browse to your license key file and click Open. XenCenter will apply the licenses and activate your servers.
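
If you'd rather script this than click through XenCenter, the xe CLI has an equivalent; the file path below is just an example:

```shell
# Apply a saved license file to one host from the command line.
xe host-license-add license-file=/root/license.xslic host-uuid=<host uuid>
```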

Note that you get a one year license. It's free, but it expires. You'll have to do this over again every year. From the documentation:

"Issue: My annual (or not-for-resale) license has expired. What is going to happen?

Resolution: If the license on a XenServer host expires while the system is still running, all active virtual 
machines continue to run as long as the host system is not disrupted or rebooted. However, new virtual machines cannot be launched."

Patch the physical hosts

Once you have loaded all of your physical XenServer hosts, and you have XenCenter going so you can manage them, one of the first things you should do is patch them all. The easiest way to do this is with XenCenter. It's easiest to do this before you have any VMs loaded.

Start XenCenter. Go to Help / Check for Updates. This will check your versions of XenServer and XenCenter, and compare them to the latest available. If there are newer ones out, you could download them and then use XenCenter to apply them.

Once some time passes, and you get some virtual machines loaded and running, Citrix will undoubtedly release some updates. Since you have a pool, and you can move virtual machines between your hosts, you can patch the hosts with no VM downtime. Decide on a host to patch (say the first host). Move all the VMs to other hosts. Patch the first host. If it needs a reboot, let it reboot. Move on to the second host, move the VMs off of it, patch it. Keep doing that until all the hosts are patched. That's what "no downtime" means: you can patch and/or reboot physical hosts at will, and the virtual machines don't see it, they stay up.
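
The per-host dance above can be sketched with the xe CLI as well (the UUID is a placeholder; get the real one from xe host-list):

```shell
HOST=<host uuid>

# Stop new VMs from being started on this host...
xe host-disable uuid=$HOST

# ...and XenMotion every running VM off to the other pool members.
xe host-evacuate uuid=$HOST

# (apply the patch and reboot the host here)

# Put the host back into service once it's healthy.
xe host-enable uuid=$HOST
```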


I have repeatedly plugged the manuals as being good. They are. They do, however, tend to focus on the command line way to do things. In most cases, you don't need to use the command line, XenCenter works fine.

High availability (HA) and workload balancing (WLB) are cool optional features that are enabled when you buy Citrix Essentials for XenServer. You should probably consider buying this some day, and you should build your XenServer infrastructure with an eye towards being "Essentials compatible".

The salient point here is that HA and WLB *REQUIRE* iSCSI or Fibre Channel shared storage. Other types of shared storage, like NFS, won't work for HA and WLB. Local storage works, but with local-only storage you can't do XenMotion. Once you see a live XenMotion move a running virtual machine from host to host without interruption, you'll wonder how you ever lived without it.

Since most people don't have an FC SAN, you'll probably do this with iSCSI. FC works almost exactly the same.
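
For reference, creating a shared iSCSI storage repository from the CLI looks roughly like this. Every value below (target address, IQN, SCSIid) is a placeholder; in practice you probe the target first and copy the real values out of the probe output:

```shell
# Probe the target; with no IQN given, the output lists the IQNs the
# target offers (probe again with targetIQN= set to get the SCSIids).
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.1

# Create the repository. shared=true makes it usable by every host
# in the pool, which is what XenMotion and HA need.
xe sr-create type=lvmoiscsi shared=true name-label="iSCSI storage" \
  device-config:target=192.168.10.1 \
  device-config:targetIQN=iqn.2009-01.com.example:storage \
  device-config:SCSIid=360a98000503362344f6f51702f356255
```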

Make a pool

The basic idea is in the installation guide, Chapter 3, "XenServer hosts with shared iSCSI storage".

So far, you have (at least 2) XenServer hosts visible in XenCenter. They are stand alone. The next task is to make them into a resource pool or "pool". Pretty easy.

Click on Pool / New Pool. The first host added to the pool will be the "pool master". It holds the master copy of the metadata about the pool and its members.
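
The CLI equivalent, for what it's worth, is run on the host that is *joining*, not on the master (the address and password here are placeholders):

```shell
# Join this host to an existing pool.
xe pool-join master-address=192.168.1.10 master-username=root \
  master-password=yourpassword

# Optionally give the pool a friendly name (run from any pool member).
xe pool-param-set uuid=<pool uuid> name-label="Production pool"
```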

Add a new storage repository on local disk

Sometimes, you may want to add a storage repository housed on a disk actually attached to the XenServer. This might be a good idea to hold backup copies of virtual machines for use in the event of iSCSI server failure.

If you add local disks to the XenServer after the initial load, XenServer will see the disks but do nothing with them. The XenCenter GUI has no provision for using local disk as a storage repository. Therefore, you *MUST* use the command line on the XenServer to do this. It's not so hard.

Use an ssh client like PuTTY and ssh to the server. It's easiest to do this with some cut-and-paste work, and local consoles don't let you do that. Log into the XenServer as root.

First, you need the UUID of the XenServer host you're adding the repository to. Use this command:

[root@xenserver2 ~]# xe host-list

uuid ( RO)                : 622f6518-d6fe-45ba-ade5-e80450651ee9
          name-label ( RW): xenserver1
    name-description ( RO): Default install of XenServer

You'll come back to this in a moment and copy it.
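
If you'd rather skip the copying, the --minimal flag makes xe print just the bare value, which is handy for scripting:

```shell
# Prints only the UUID of the named host, nothing else.
xe host-list name-label=xenserver1 params=uuid --minimal
```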

Next, check and see if XenServer sees the disk.

[root@xenserver2 ~]# fdisk -l
Disk /dev/cciss/c0d0: 73.3 GB, 73372631040 bytes
<snip, this is the boot drive>

Disk /dev/cciss/c0d1: 733.9 GB, 733910294528 bytes
255 heads, 32 sectors/track, 175664 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes
Disk /dev/cciss/c0d1 doesn't contain a valid partition table

So in this example, /dev/cciss/c0d0 is my boot drive, I'll leave it alone. /dev/cciss/c0d1 is the new unused, unformatted drive I want to build my new storage repository in.

Next, issue the xe command to build the new storage repository. Cut and paste the UUID you found earlier, and make sure to use the *CORRECT* name for the disk. Building a storage repository is a destructive operation; if you run it against the wrong disk, you'll destroy the contents of that disk. The name-label field is the name the storage repository will be called in XenCenter.

[root@xenserver2 ~]# xe sr-create content-type=user host-uuid=622f6518-d6fe-45ba-ade5-e80450651ee9 type=lvm device-config-device=/dev/cciss/c0d1 name-label="xenserver2 second local disk"

This won't take long. The new storage repository will immediately appear in XenCenter. You can also see it from the command line.

[root@xenserver2 ~]# xe sr-list type=lvm
uuid ( RO)                : ca58ca8e-ba00-e001-9c60-01228396e8a3
          name-label ( RW): Local storage
    name-description ( RW):
                host ( RO): xenserver2
                type ( RO): lvm
        content-type ( RO): user

uuid ( RO)                : 8deb9109-8efd-e8b8-fa7e-5d2dbff4712b
          name-label ( RW): xenserver2 second local disk
    name-description ( RW):
                host ( RO): xenserver2
                type ( RO): lvm
        content-type ( RO): user

Making "guests", AKA virtual machines

Coming soon.

I haven't had any luck with Windows 7 guests. The installer hangs (even with viridian=off). Windows XP seems fine, same with Windows Server 2003.

Linux guests need to be installed with a Xen kernel. You use one of the Linux templates that XenServer comes with, and install from a network repository that has the Xen-aware kernel images in it. You can't install from a CD or DVD.
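
A rough sketch of what that looks like with the xe CLI follows. The template name and repository URL are examples only; list the templates first to see the exact names your version ships with:

```shell
# See which templates this XenServer has.
xe template-list params=name-label

# Create a VM from a Linux template (name is hypothetical)...
xe vm-install template="CentOS 5.3" new-name-label=centos-test

# ...point it at a network repository containing Xen-aware kernels...
xe vm-param-set uuid=<vm uuid> \
  other-config:install-repository=http://mirror.example.com/centos/5/os/i386

# ...then start the VM to kick off the network install.
xe vm-start uuid=<vm uuid>
```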

Once you have one, stress test it. For Windows, use iometer. For Linux, see Stress Testing and Benchmarking.
