Recently Professor Diana Franklin was awarded an NSF grant to build a small high-performance computing cluster here on campus. As the “friendly neighborhood system administrator” I volunteered to help build and run it. We chose UCSD’s Rocks Linux-based clustering software for it. Rocks comes “out of the box” with most things installed and configured correctly, even complicated things like Ganglia, a cluster monitor, or Sun Grid Engine, a job scheduler. Rocks makes it easy. Having said that, however, don’t tinker with Rocks components unless you know what you are doing…
A little about Rocks
Rocks clustering software is based on CentOS, a redistribution of Red Hat Enterprise Linux. Rocks itself uses iptables to do both firewalling and network address translation (NAT). By default a Rocks cluster consists of a head node, which is more or less the controller of the cluster, and multiple compute nodes. The head node has two network interface cards in it. One of these cards is attached to a private network (a 10.0.0.0 “fake” subnet) that the compute nodes reside on; the other is a “real” network interface that talks to the outside world. The head node provides the compute nodes with most common network services (DNS, NTP, etc.). It also uses NAT to allow the compute nodes on their private subnet to access the internet. So even if you don’t really need a firewall, you need iptables running to do the NAT on the head node.
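The NAT piece boils down to just a few iptables rules. Here is a minimal sketch of what the head node does, assuming eth0 is the private cluster interface and eth1 the public one (the usual Rocks layout) — this is an illustration of the mechanism, not the exact Rocks setup script:

```shell
# Rewrite packets leaving the public interface so compute-node traffic
# appears to come from the head node's public address (this is the NAT):
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# Let anything from the private side be forwarded out...
iptables -A FORWARD -i eth0 -j ACCEPT

# ...and let replies (and related traffic) come back in:
iptables -A FORWARD -i eth1 -o eth0 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

# The kernel must also have IP forwarding enabled, or none of this matters:
sysctl -w net.ipv4.ip_forward=1
```

If any one of these pieces goes away — which is exactly what happened to me — the compute nodes can still talk to the head node, but nothing beyond it.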
No… Don’t click that!
At some point a few weeks back, I figured that I didn’t need the firewall, and I disabled it with some clicky Gnomish gui widget. I didn’t realize that NAT depended on it, or that the proper way to administer iptables was from the command line. I was in a hurry, so I took the “short cut”. Yeah right.
Quite a few users on the cluster are mechanical or aeronautical engineers running computational fluid dynamics modeling with a commercial package called Fluent. So a few days after I monkeyed with the firewall, users were complaining that they couldn’t run their Fluent jobs. Turns out that the College of Engineering has a Fluent license server, and Fluent jobs need to talk to the license server (on a different subnet) before starting. Since I had inadvertently turned NAT off, compute nodes couldn’t connect to anything outside of their private subnet: no licenses, no jobs.
What’s done is done
“So no problem”, I thought to myself, “I’ll just use the clicky widget to turn it back on”. Well, you don’t do iptables like that. It didn’t work, of course. I’m no iptables expert, but it appears that there are basically two config files that make iptables work. There’s /etc/sysconfig/iptables-config, which is the setup file for the iptables service itself, and then /etc/sysconfig/iptables, which contains the actual rules. Supposedly. I didn’t have /etc/sysconfig/iptables at all. Iptables wouldn’t start because it had no rules. It *USED* to start. It *USED* to NAT. So not only did the gui widget disable the firewall, it ate my /etc/sysconfig/iptables.
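For anyone in the same spot, the standard CentOS-era commands make the relationship between the running rules and that file easy to see — and easy to snapshot before you break anything (a sketch, assuming the stock Red Hat iptables init script):

```shell
# /etc/sysconfig/iptables-config : options for the iptables init script itself
# /etc/sysconfig/iptables        : the saved rules, in iptables-save format

# What does the init script think is loaded?
service iptables status

# Dump the currently loaded rules in the same format the rules file uses:
iptables-save

# If the running rules are good, write them back to /etc/sysconfig/iptables.
# Doing this BEFORE touching the firewall would have saved me a reinstall:
service iptables save
```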
What to do
So at this point I had a couple of choices. I could email a confession of my silliness to the Rocks maintainers and hope they would have mercy on my soul. Or maybe find some unused system lying around here, load a default install of Rocks on it, and copy the default /etc/sysconfig/iptables file back to the real head node. What I decided to do was to blow away one of the compute nodes, load it as if it were a new head node, copy the file I needed off of it, and then reload it as a compute node. Since Rocks makes it trivial to load a compute node, this wasn’t as hard as it sounded. Once the “fake” head node was up, I jammed a USB drive into it, copied my file to the USB, and then rebooted the node and told it to PXE boot from the head node (which auto-magically reloads a compute node with no user interaction).
I then copied the “stock” /etc/sysconfig/iptables to the head node, and started iptables. NAT was back.
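The restore itself was nothing fancy — roughly this, using the standard CentOS service tools (the copy step is whatever gets the file there; I used the USB drive):

```shell
# Drop the stock rules file back into place, then bring iptables up now
# and at every boot:
cp /media/usb/iptables /etc/sysconfig/iptables
service iptables start
chkconfig iptables on

# Verify the masquerade rule actually loaded into the nat table:
iptables -t nat -L POSTROUTING -n
```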
So like others have said “note: using other utilities to modify the firewall such as ‘setup’ or ‘lokkit’ will break Rocks”. I should have listened.
OBTW, here’s what a “stock” Rocks 4.2.1 /etc/sysconfig/iptables looks like:
*nat
-A POSTROUTING -o eth1 -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i eth1 -o eth0 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0 -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Allow these ports
-A INPUT -m state --state NEW -p tcp --dport ssh -j ACCEPT
# Uncomment the lines below to activate web access to the cluster.
#-A INPUT -m state --state NEW -p tcp --dport https -j ACCEPT
#-A INPUT -m state --state NEW -p tcp --dport www -j ACCEPT
# Standard rules
-A INPUT -p icmp --icmp-type any -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Uncomment the line below to log incoming packets.
#-A INPUT -j LOG --log-prefix "Unknown packet:"
# Deny section
-A INPUT -p udp --dport 0:1024 -j REJECT
-A INPUT -p tcp --dport 0:1024 -j REJECT
# Block incoming ganglia packets on public interface.
-A INPUT -p udp --dport 8649 -j REJECT
# For a draconian "drop-all" firewall, uncomment the line below.
#-A INPUT -j DROP
COMMIT