Historically, students and, to a lesser degree, faculty members have requested server or other computing resources from the department to support projects or instruction. In the past, the department was unable to satisfy these requests in a wholly satisfactory fashion. Often students were given “yesterday’s” aging hardware to use; in some cases, depending on what leftover servers were available, they might not get anything at all.
We started experimenting with virtualization as a way to better serve these needs in mid-2005.
At that time we had a number of servers running the Solaris operating system, and one of the features Solaris offers is “zones”: virtual servers that run on top of a single OS instance. This was a promising start, but at the time zones could only run Solaris, and many projects wanted to use Linux or other operating systems. Since the choice was “Solaris zones or nothing,” some projects were retooled to run on Solaris, although the project owners in some cases chafed at the Solaris-only restriction.
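As a concrete sketch of what provisioning one of these looked like, creating and booting a basic Solaris 10 zone went roughly as follows (the zone name, zonepath, address, and network interface below are made-up examples, not our actual configuration):

```shell
# Define the zone; "projzone", its path, and the bge0 NIC are hypothetical
zonecfg -z projzone <<'EOF'
create
set zonepath=/zones/projzone
add net
set address=192.168.1.50/24
set physical=bge0
end
commit
EOF

# Install the zone's files and boot it
zoneadm -z projzone install
zoneadm -z projzone boot

# Attach to the zone's console to finish initial system configuration
zlogin -C projzone
```

Each zone shares the global zone’s kernel, which is precisely why, at the time, every zone had to be Solaris.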
A year later, we explored the use of VMware ESX for virtualization. Through the efforts of a CSC alumnus, Kevin Kress, and the VMware Academic Program, we acquired a couple of ESX licenses (2 Standard licenses, 4 sockets) and started offering ESX-based virtual machines for student projects and for instruction. This proved highly popular, with students using dozens of VMs for projects and instructors using VMs in their courses.
As the requirements for virtual machines grew, the department invested in the infrastructure required to host them. The department purchased a small Fibre Channel storage area network (an HP EVA 4100) as a backing store and a number of Dell PowerEdge 6950 servers suitable for use as hypervisors. Because the licenses provided through the Academic Program were limited in licensed features, we paid to upgrade them to Enterprise.
In 2009, we needed more capacity and attempted to purchase additional licenses to put more physical hosts into production. We obtained the required purchasing approvals and submitted the paperwork to purchase 8 more ESX licenses (“new” style per-socket licenses, enough Enterprise licenses to cover two 4-socket Dell 6950 servers). Unfortunately, after the process was started, the looming budget crisis came to a head, and all pending purchases were cancelled. This left us running with one physical host properly licensed and two others running on evaluation licenses.
As of November 2009, we have over 100 virtual machines in use. Most of these are student projects. We have supported classes where each enrolled student was issued their own VM for classwork. Students use the vSphere Client to connect to our vCenter management server and then perform typical VM management tasks such as loading an operating system and configuring their VM. In this way, students get direct, hands-on experience with VMware tools.
Probably the most prolific faculty user of virtual machines in instruction is Dr. David Janzen in Software Engineering, who teaches the SE Capstone classes. These are year-long classes in which students design, build, and deploy substantial projects on virtual machines. He says that he typically has 3 or 4 teams a year, and each team has at least two virtual machines, one for development and one for deployment. He was one of the original users of Solaris zones before we adopted ESX in 2006, and he says that his most substantial obstacle during that year was getting the zones from us and dealing with the idiosyncrasies of the Solaris-only zones.
Since then, Capstone projects have used virtual machines hosted on ESX. These VMs are fast and easy for us to provision, and student teams can have multiple VMs to experiment with tools on different operating systems. For example, a team might be debating the benefits of LAMP on Linux versus .NET on Windows; now they can get two VMs, actually try both, and see which works better for them.
I recently taught two classes using department-provided VMs: CSC 358 “Linux System Administration” at Cal Poly and CIS 122 “Introduction to Linux” at Cuesta College. Classes like these were traditionally taught with a limited number of physical servers, with students working in large teams. In both of these classes, I issued each student their own VM, and students used the vSphere Client to connect to vCenter to configure their machine, power it on or off, access the console, and load operating systems and software. Students reported that they really liked this approach, which gave them the “feel” of having their own server.
Since it doesn’t appear likely that we will get enough ESX licenses any time soon, we have been exploring other options. We are currently evaluating a Citrix XenServer 5.5 pool running on top of a Dell PS5000E iSCSI shared-storage array. Although the free XenServer lacks some of the enterprise features we have become used to, such as high availability (HA), so far it appears that it may be a suitable substitute if required.
Other challenges: In our labs, we support something like 300 aging workstations. Replacing these as they age is an expensive proposition. We are currently exploring ways to leverage our virtualization investment, perhaps by repurposing aging workstations as clients for virtual desktops. VMware offers VMware View for this, and other vendors have similar products.
For reference, our existing ESX HA/DRS cluster consists of:
3 Dell 6950 physical hosts (4-socket machines) with 32 GB RAM each, running ESX 4.
An HP EVA 4100 FC SAN with 15 TB as a datastore for VMs.
A Dell 2950 running vCenter for cluster management.
A Dell 2850 running Oracle 11g as the back end for vCenter.