Archive for the ‘ Virtualization ’ Category

Growing a Disk is *ALWAYS* a Bad Idea if You Have Snapshots

Never grow a disk if the machine has snapshots.  Even if the GUI lets you.

This morning I got a request to grow a virtual disk on a vm.  What you should do is look at the machine and see if it has any snapshots.  If it does, delete them before attempting any disk grow operations.  In fact, if you try it, the GUI should give you an error stating that the “machine has snapshots, grow is not allowed”.  I was going too fast (and didn’t have enough coffee) and tried growing it with the machine on.  It failed.  I didn’t check for snapshots, I just turned the machine off.  Then I grew the disk.  It allowed me to do this.  (Really, the GUI should have refused.)  This did *NOT* do what I expected…
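For reference, here’s a sketch of the safer order of operations from the ESX console, assuming you have shell access to the host.  The VMID (42) and the datastore paths are hypothetical; substitute your own.

```shell
# Find the VMID of the machine, then check it for snapshots.
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.get 42        # 42 is a hypothetical example VMID

# If there are snapshots, consolidate/delete them ALL first.
vim-cmd vmsvc/snapshot.removeall 42

# Only then grow the disk (here, to 60 GB).
vmkfstools -X 60G /vmfs/volumes/datastore1/myvm/myvm.vmdk
```

The key point is the ordering: the snapshot removal must finish completely before `vmkfstools -X` runs.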

Read more

Be Careful When Putting a VMware ESX Host in Maintenance Mode With A vCLI Script

If your VMware ESX hosts are like our ESX hosts, the hosts are in clusters and you have Distributed Resource Scheduling (DRS) enabled.  So in the vCenter GUI, when you right click a host and select “Enter Maintenance Mode”, DRS evacuates all running virtual machines to other hosts in the cluster.

I’m learning more about scripting actions like this with the vSphere Command-Line Interface (vCLI).

Specifically, the vCLI manual says that the command “vicfg-hostops --operation enter” (“Entering and Exiting Maintenance Mode with vicfg-hostops”, page 28) does exactly the same thing.  In fact, that page says “If VMware DRS is in use, virtual machines that are running on a host that enters maintenance mode are migrated to another host automatically.”

This is *NOT* true.  When you run this command, running vm’s are SUSPENDED, not migrated.

As far as I can tell, I’m doing everything correctly.  Others have seen the same thing.

I’m not sure if it’s a “feature” or what, but be careful.  Telling a host to enter maintenance mode with a vCLI script may not migrate running machines.
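For the record, this is roughly what I was running, with hypothetical server and host names.  The `--operation info` call is a useful sanity check before and after.

```shell
# Ask the host for its current state (in maintenance mode or not,
# and whether VMs are running).
vicfg-hostops --server vcenter.example.com --vihost esx01.example.com \
  --operation info

# Put the host in maintenance mode.  On our ESX 4.1 Update 1 setup,
# this SUSPENDED running VMs rather than letting DRS migrate them.
vicfg-hostops --server vcenter.example.com --vihost esx01.example.com \
  --operation enter
```

Until this behaves as documented, it may be safer to vMotion the running machines off the host yourself before running the enter operation.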

(If it matters, I’m running ESX 4.1, Update 1.)

A Brief Overview of VMware vStorage API for Array Integration (VAAI) and Why You Should Care

VMware vSphere version 4.1 introduced various new features, including vStorage API for Array Integration (VAAI).  If you are using storage that supports VAAI (and most storage vendors are implementing it over time) then you can offload some storage-intensive tasks to the storage array, and the ESX hosts are freed up to do other tasks.  What this means is that for certain kinds of tasks like making a new disk for a virtual machine (vm) that needs to be zeroed out, copying a vm, cloning a vm, etc., these operations will be much faster and the ESX hosts will be much less busy.
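If you want to check whether the VAAI primitives are enabled on a 4.1 host, you can query the relevant advanced settings with vicfg-advcfg (the host name here is hypothetical; a value of 1 means enabled).

```shell
# Hardware-accelerated copy/clone (Full Copy)
vicfg-advcfg --server esx01.example.com -g /DataMover/HardwareAcceleratedMove

# Hardware-accelerated zeroing (Block Zero)
vicfg-advcfg --server esx01.example.com -g /DataMover/HardwareAcceleratedInit

# Hardware-assisted locking (ATS)
vicfg-advcfg --server esx01.example.com -g /VMFS3/HardwareAcceleratedLocking
```

Whether the array actually honors these offloads still depends on the vendor’s VAAI support.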

Read more

Use Storage vMotion to Properly Rename a Virtual Machine’s Datastore

I’ve been a VMware administrator for a while now, but a co-worker recently showed me this handy trick.  It assumes that you can do a Storage vMotion, which requires the right version of vSphere, like Enterprise or Enterprise Plus.  (Well, not exactly…  As Joe points out below, if you turn the machine off or suspend it first, you can do this with Standard as well.)

In vCenter, you can easily rename a virtual machine.  You just right click on the machine and pick “Rename”.  Every virtual machine consists of a folder and files in a datastore.  Unfortunately, right click, Rename does not change the actual names of any of the files or folders.  So if you use rename every now and then, and you have lots of virtual machines, you can easily get into a situation where there are lots of datastores with lots of virtual machine files and folders that don’t match the name of any known machine.  Confusing!  Read more to see the fix…
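The trick can also be scripted with the vCLI `svmotion` command: rename the machine in vCenter first, then Storage vMotion it to another datastore, and the destination folder and files pick up the new name.  The server, datacenter, datastore, and vm names below are hypothetical.

```shell
# Move the (already renamed) VM from datastore1 to datastore2.
# The vm.vmx path still shows the OLD folder/file names; after the
# move, the files on datastore2 should match the new VM name.
svmotion --server vcenter.example.com --datacenter 'DC1' \
  --vm '[datastore1] oldname/oldname.vmx:datastore2'
```

You can then Storage vMotion it back to the original datastore if you want it to live where it started.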

Support your local VMUG!

VMware sponsors local VMware User Groups worldwide.  I recently attended a meeting of the Santa Barbara VMware Users Group and had a great time!  The topic was VMware View, their flagship desktop virtualization solution.  VMware sent one of their Systems Engineers to do a step by step hands on demonstration of installing VMware View.  After a (free!) pizza lunch, Wyse demonstrated using their thin clients with View.  The coolest Wyse thin clients I saw were the ones made by Teradici that did native PC over IP (PCoIP) in hardware, like the Wyse P20.

Having installed both VMware View, as well as Citrix XenDesktop, I can honestly say both products have their relative strengths and weaknesses but that View has a far easier install process.

Introduction to Desktop Virtualization

I was recently asked to prepare a brief presentation on desktop virtualization.  I’ve been through quite a few desktop virtualization projects throughout the years.   Some went off better than others.  I learned a few tidbits along the way.

I thought I’d share.  Click on the image to get the PowerPoint slides.

Introduction to desktop virtualization

Don’t use the Microsoft iSCSI Software Initiator in a Virtual Machine

Lots of people regularly use the Microsoft iSCSI Software Initiator on physical machines to attach iSCSI LUNs from filers to hosts.  It is a free add-on download for Server 2003, and comes pre-installed on Server 2008.

In general, on physical machines, it works fine.

So, assuming that it’d work just fine on a virtual machine, we tried to use it on some production machines running Small Business Server 2003.  It seemed to load just fine, and worked like it usually does on a “real” machine – *BUT* it had terrible throughput under load.  It would even disconnect at random times.  Uninstalling/reinstalling, and lots of fiddling around, didn’t help.

Of course, we went to Google to see if others have issues with it, and there’s *LOTS* of whining about this, both on VMware ESX and XenServer.

It’s possible that it has been addressed in Server 2008.  We saw this on 2003, and haven’t attempted to replicate the issue on 2008 vm’s.

Nonetheless, we suggest you figure out another way to attach iSCSI LUNs from a filer to a Windows virtual machine.  Both ESX and XenServer allow you to easily present iSCSI based disks to vm’s inside their virtualization environments.  Don’t do it with the Microsoft iSCSI Software Initiator from within the guest.

Add a XenServer storage repository on local disk

Most people use shared storage, like iSCSI, for their XenServer storage repositories to hold virtual machines.  Virtual machines *MUST* be in a shared storage repository in order to enable XenMotion and other advanced features.

It is handy to have some local storage repositories available, mainly to hold backups of machines.  For example, you could shutdown a running virtual machine, right click on it, select Copy, and select a local repository to hold the copy.  Then if you lost your iSCSI server, you’d have a local copy of the virtual machine you could start until the iSCSI server became available again.

The XenCenter management GUI doesn’t allow you to make local storage repositories.  You have to use the command line.  Luckily, it’s pretty easy.  Here’s my updated XenServer wiki page on how to do this.
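The short version looks something like this, run on the XenServer host itself.  The device name (`/dev/sdb`) and the label are hypothetical; point `device-config:device` at whatever local disk you actually have free.

```shell
# Find this host's UUID, then create an EXT-based local SR on /dev/sdb.
HOST_UUID=$(xe host-list --minimal)
xe sr-create host-uuid="$HOST_UUID" name-label="Local Backup SR" \
  type=ext content-type=user shared=false \
  device-config:device=/dev/sdb
```

Once created, the new repository shows up in XenCenter like any other SR, and you can Copy machines into it.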

Stream local desktops with Citrix Provisioning Server (part of XenDesktop)

“Stream a desktop? What is that?”  Well, it’s something like making an image and then deploying it (ala Norton Ghost).  The cool thing is that clients *ALWAYS* PXE boot from the image; nothing is installed locally.

So you can make one image, and then run a whole room or building full of machines off that one image, and never install anything locally.

As I install it and mess around with it, I’m posting notes on my wiki.

Here’s the video that caught my eye.  (Note that Ardence was acquired by Citrix in 2006.)

Read more

Notes on using Openfiler with XenServer

Once you have Openfiler going with iSCSI (read my notes here), then try serving up iSCSI storage with Openfiler to XenServer.  You need shared storage, like iSCSI, to enable some of the “enterprise” features like XenMotion, where you can move running virtual machines from host to host.  Here are my notes on using Openfiler with XenServer.
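The attach itself boils down to an sr-probe followed by an sr-create on the XenServer host.  The target IP, IQN, and SCSIid below are hypothetical placeholders for whatever your Openfiler box is actually serving.

```shell
# Probe the Openfiler target to discover its IQNs and LUN SCSI IDs.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.50

# Create a shared iSCSI SR on the discovered LUN so XenMotion works.
xe sr-create name-label="Openfiler iSCSI SR" type=lvmoiscsi shared=true \
  content-type=user \
  device-config:target=192.168.1.50 \
  device-config:targetIQN=iqn.2006-01.com.openfiler:tsn.example \
  device-config:SCSIid=YOUR_SCSI_ID_FROM_THE_PROBE
```

With `shared=true`, every host in the pool attaches the SR, which is what enables moving running machines between hosts.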