Archive for the ‘ High Performance Computing and Clusters ’ Category

Loaded Rocks 5.3 today on

If you run Rocks, you really should register your cluster.  Really.

Rocks is a modified CentOS Linux distribution from the San Diego Supercomputer Center (SDSC), built specifically for high performance cluster computing. I don’t know of a good reason *NOT* to run Rocks if you have a cluster. Your tax dollars pay the Rocks people to build and maintain the Rocks cluster Linux software distribution, and they do a good job. Most common cluster tools and software packages come already included. You can literally get a multinode cluster completely loaded and online in a couple of hours.

It was time for the cluster to get a facelift. It was running an older version of Rocks and had been getting cranky recently, so I loaded the latest stable Rocks 5.3 on it. No real issues; as usual, Rocks is easy to load. The only problem I had was that the built-in DHCP service tried to assign the IP address of the interconnect switch to a compute node. That was my fault: I didn’t tell Rocks that the switch was using that address, as the install doc told me to.
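Rocks’ installer manages DHCP on the private cluster network, so any appliance with a static address (like that switch) has to be declared up front. This little Python sketch of my own, using made-up addresses, illustrates the kind of conflict that bit me: a reserved address sitting inside the pool the installer hands out to compute nodes.

```python
import ipaddress

def find_conflicts(pool_start, pool_end, reserved):
    """Return the reserved addresses that fall inside the DHCP pool."""
    start = ipaddress.IPv4Address(pool_start)
    end = ipaddress.IPv4Address(pool_end)
    return [ip for ip in reserved
            if start <= ipaddress.IPv4Address(ip) <= end]

# Hypothetical private-network layout: the interconnect switch's
# static address sits inside the range handed to compute nodes.
conflicts = find_conflicts("10.1.255.200", "10.1.255.254",
                           ["10.1.255.250"])
print(conflicts)  # any address listed here needs to be reserved
```

Running a check like this against your appliance list before kicking off node installs would have saved me the surprise.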

The Rocks Avalanche scalable installer uses peer-to-peer technology to help load compute nodes, so the more nodes you have, the faster they load. Really. Nodes that have already started their installs help out new nodes, unburdening the head node from sole loading duty. It takes maybe thirty minutes to load all of my compute nodes; even if I had hundreds or thousands, reloading them all from scratch would take just a couple of hours.
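To see why the peer-to-peer approach scales like this, here’s a toy Python model (my own sketch, not Rocks’ actual Avalanche protocol): assume each already-installed node can serve one new install per round, so serving capacity roughly doubles each round instead of staying constant at one head node.

```python
def rounds_to_load(n_nodes, p2p=True):
    """Toy model of cluster install time, counted in 'rounds'.

    Each round, every available server installs one node.  Without
    peer-to-peer the head node is the only server; with it, every
    already-installed node also serves new installs."""
    loaded, rounds = 0, 0
    while loaded < n_nodes:
        servers = 1 + (loaded if p2p else 0)  # head node + helping peers
        loaded += servers
        rounds += 1
    return rounds

for n in (16, 256, 1024):
    print(n, rounds_to_load(n, p2p=False), rounds_to_load(n, p2p=True))
```

In this simplified model, head-node-only install time grows linearly with node count, while the peer-to-peer version grows roughly logarithmically, which matches the “hundreds of nodes in a couple of hours” experience.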

AMAX GPU Workstation, AKA “cluster in a box”

Recently a faculty member got a grant to buy a small GPU-based computing cluster. He wound up buying a ServMax PSC-2n from AMAX. It came loaded with Debian. It has four NVIDIA Tesla 2050 GPU cards in it. That’s four cards times 448 cores per card = 1792 GPU cores in a workstation form factor.

Here’s a screenshot of it in action:

PSC-2n Desktop

High Performance Computing Clusters in the Computer Science Department, Cal Poly, SLO

Cal Poly, SLO, where I work, is a teaching university. We don’t do a lot of research, and historically there hasn’t been a lot of high performance computing on campus. A number of years ago, Information Technology Services (ITS, the central campus computing people) got a grant and set up a small cluster for general use. Users of this cluster that I’ve talked to weren’t very happy with it. It wasn’t particularly big, and it wasn’t particularly easy to use. Since then, various faculty with research projects have gotten their own clusters, which their departments operated, not ITS. The first cluster like this that I’m aware of was one I helped spec out, buy, rack up, and load the operating system on.

Clustering: Don’t turn the Rocks firewall off unless you know what you’re doing…

Recently Professor Diana Franklin was awarded an NSF grant to build a small high performance computing cluster here on campus. As the “friendly neighborhood system administrator,” I volunteered to help build and run it. We chose UCSD’s Rocks Linux-based clustering software for it. Rocks comes “out of the box” with most things installed and configured correctly, even complicated things like Ganglia, a cluster monitor, or Sun Grid Engine, a job scheduler. Rocks makes it easy. Having said that, however, don’t tinker with Rocks components unless you know what you are doing…
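For example, after any firewall change on a Rocks head node, it’s worth verifying that cluster services are still reachable. This is a generic port-check sketch of my own in Python, not a Rocks tool; Ganglia’s gmond daemon listens on TCP port 8649 by default, and blocking it leaves the cluster monitor blind.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.

    Handy for a quick sanity check after firewall changes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# gmond's default metadata port; adjust host/port for your setup.
print(port_open("localhost", 8649))
```

A quick loop over the head node’s expected service ports before and after a firewall tweak makes it obvious exactly what you just broke.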