HowTo:FilerPerformance

From Greg Porter's Wiki



Filer performance

Description of test configuration

In all my examples on these wiki pages, I have used simple configurations.

I have 2 XenServer hypervisor hosts, in a simple resource pool, connected to iSCSI shared storage. One Windows 2003 virtual machine lives in the pool.

For these tests, I run the Windows vm on a storage repository hosted on the filer under test, and use iometer from within the vm to report I/O statistics.

This is not a particularly rigorous test suite (you can say that again), but it will hopefully show whether or not storage appliance software suites like Openfiler or Nexenta are in the same league as a "real" array.

Iometer is very configurable. I have set up my Iometer test instance as follows:

  • Latest Iometer (version 2006.07.27; it's stale, but it still works)
  • 10 workers running against the vm's C: drive
  • Iometer "Access Specifications" set to "All in one" (a mix of read/write percentages and block sizes)
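If you want to reproduce a similar mixed workload on a Linux guest instead of Windows, fio can approximate Iometer's "All in one" style of test. This is just a sketch; the exact read/write mix and block-size sweep are my assumptions, not the actual Iometer specification.

```shell
# Sketch: a fio job roughly analogous to Iometer's "All in one" spec.
# The mix and block sizes are assumptions; adjust to taste.
# Note: this writes to the target file.
fio --name=allinone \
    --filename=/tmp/fio-testfile --size=1G \
    --rw=randrw --rwmixread=67 \
    --bsrange=512-64k \
    --numjobs=10 \
    --iodepth=8 \
    --runtime=60 --time_based \
    --group_reporting
```

The `--numjobs=10` mirrors the 10 Iometer workers.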

To summarize: Same vm, on same host, talking across same switch to same filer hardware. Same, same, same in all test cases. The only difference was that Openfiler was running on the filer host, or Nexenta was running on the filer host.

Windows 2003 running iometer against an SR hosted on Openfiler iSCSI

Test host:

  • Openfiler v2.3, 64 bit, patched
  • 5 300GB SAS drives on PERC RAID Controller
  • The 5 drives are configured in one RAID-5 set, with 2 virtual drives presented, a little one for Openfiler to live on, and the rest for iSCSI.
  • Dell 6950, four sockets of dual-core AMD Opteron 8212 processors at 2.0 GHz
  • 32 GB of RAM
  • One Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet NIC

That's a darn good server, probably better than it needs to be for filer use. Oh well, that's one I had lying around.

  • One 200GB iSCSI LUN presented to XenServer, target configuration all default
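I attached the LUN through XenCenter, but the same thing can be done from the XenServer CLI. A sketch, with a placeholder target IP and IQN (yours will differ):

```shell
# Probe the iSCSI target first; the probe output reports the SCSIid
# of the discovered LUN (target/IQN below are placeholders).
xe sr-probe type=lvmoiscsi \
    device-config:target=192.168.1.50 \
    device-config:targetIQN=iqn.2010-01.com.example:filer.target0

# Then create the shared SR using the SCSIid from the probe:
xe sr-create name-label="Filer iSCSI SR" shared=true type=lvmoiscsi \
    device-config:target=192.168.1.50 \
    device-config:targetIQN=iqn.2010-01.com.example:filer.target0 \
    device-config:SCSIid=<SCSIid-from-probe>
```

The `<SCSIid-from-probe>` value is left as-is on purpose; paste in whatever the probe step returns.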

Virtual Machine:

  • Windows 2003 32 bit, fully patched
  • 1 CPU
  • 512 MB RAM
  • No application but iometer running

Iometer configuration:

  • Latest Iometer (version 2006.07.27; it's stale, but it still works)
  • 10 workers running against the vm's C: drive
  • Iometer "Access Specifications" set to "All in one" (a mix of read/write percentages and block sizes)

Iometer running on Windows against Openfiler, about 6000 I/O's a second

Interestingly, the load on the Openfiler server goes way up during iSCSI I/O operations. I guess that's what hardware initiators are for.
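If you want to watch that load yourself while Iometer is hammering the box, a couple of standard tools do the job (iostat assumes the sysstat package is installed):

```shell
# Watch the filer during an Iometer run.
top            # overall CPU load from the software iSCSI target
iostat -x 5    # per-device utilization and service times, every 5 seconds
```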

Openfiler status during Iometer run

Windows 2003 running iometer against an SR hosted on Nexenta iSCSI

Test host:

  • NexentaStor Community Edition, 3.0.2
  • 5 300GB SAS drives on PERC RAID Controller
  • The 5 drives are configured JBOD in the RAID Controller
  • I used one drive for the system pool, and the remaining 4 in one raidz pool for iSCSI.
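NexentaStor sets up the system pool at install time and normally manages volumes through its web GUI, but the data-pool layout above can be sketched from a shell as follows (the Solaris-style device names are assumptions):

```shell
# Sketch: 4-drive raidz pool for iSCSI (device names are assumptions).
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0

# Carve out a 200GB zvol to export over iSCSI.
zfs create -V 200G tank/iscsi-lun

# Sanity-check the pool layout.
zpool status tank
```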
  • Dell 6950, four sockets of dual-core AMD Opteron 8212 processors at 2.0 GHz
  • 32 GB of RAM
  • One Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet NIC

That's a darn good server, probably better than it needs to be for filer use. Oh well, that's one I had lying around.

  • One 200GB iSCSI LUN presented to XenServer, target configuration all default

Virtual Machine:

  • Windows 2003 32 bit, fully patched
  • 1 CPU
  • 512 MB RAM
  • No application but iometer running

Iometer configuration:

  • Latest Iometer (version 2006.07.27; it's stale, but it still works)
  • 10 workers running against the vm's C: drive
  • Iometer "Access Specifications" set to "All in one" (a mix of read/write percentages and block sizes)

Iometer running on Windows against Nexenta, about 7000 I/O's a second

Like I said, the load on filers goes up when they get hammered during iSCSI I/O operations. I guess that's what hardware initiators are for.

Nexenta status during Iometer run

Windows 2003 running iometer against an SR hosted on Equallogic iSCSI

Same test, same everything, but now against a Dell/Equallogic PS5000E, 2GB cache, 2 controllers, 8 1TB SATA drives in RAID-5, presenting one 200GB LUN to XenServer.

Note that this is an entry level iSCSI array, using SATA and not very many spindles.

Connecting an Equallogic LUN

Iometer on the vm (on the Equallogic hosted SR) runs a bit slower than on Openfiler. Something like 5000 I/O's per second.

A bit slower

Conclusions

Of the three tested, Nexenta was the fastest, at about 7000 I/O's per second (note that's with the RAID controller set to JBOD, i.e. no hardware RAID, and Nexenta's built-in ZFS running a 4-drive zpool).

Next fastest was Openfiler, about 6000 I/O's per second (using a Dell PERC hardware RAID card with cache RAM, drives in RAID-5).

Slowest was the actual dedicated storage appliance, the entry level Equallogic PS5000E, about 5000 I/O's per second (8 SATA drives).
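The relative gaps are easy to work out from those round numbers; a quick sanity check using the approximate IOPS figures above:

```python
# Approximate IOPS figures from the test runs above.
results = {"Nexenta": 7000, "Openfiler": 6000, "Equallogic": 5000}

# Use the dedicated array as the reference point.
baseline = results["Equallogic"]
for name, iops in sorted(results.items(), key=lambda kv: -kv[1]):
    delta = 100 * (iops - baseline) / baseline
    print(f"{name}: ~{iops} IOPS ({delta:+.0f}% vs. Equallogic)")
```

So Nexenta came in roughly 40% ahead of the Equallogic, and Openfiler roughly 20% ahead.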

So either storage software solution does a decent job of serving iSCSI under load, and both actually go faster than a dedicated filer that costs 10 times as much.

I can't use this as a basis for any conclusive statements like "Nexenta kicks butt" or "Dell sucks", but it is interesting to note that storage appliance software suites running on (good) commodity hardware aren't significantly worse than a dedicated filer, and in some cases may actually be faster.

To sum it up: Storage appliance software suites hold their own against dedicated filers.
