NexentaStor ZFS Bonnie++ Benchmarks

Following on from my previous post on AFP and iSCSI benchmarks, I’ve decided (after many requests) to post a few raw benchmarks of the system gathered with bonnie++. The environment is as follows:

CPU: Athlon 64 3700+
RAM: 2GB DDR400
Controllers: 2x SATA-II and 1x SATA-I
Hard Drives: 7x Samsung 2TB Spinpoint F3, 5600 RPM
OS: NexentaStor 3
ZFS Config: Standard raidz1 with dedup=off and compression=off

So I gathered a few results… after some initially disappointing runs I found a bottleneck on one of the drives that was dragging the benchmark result down greatly; once this was worked out I achieved the following: …
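For anyone wanting to reproduce something similar, the runs were along these lines (a rough sketch only; the pool name, mount point and user below are placeholders rather than my exact setup):

    # double-check dedup and compression are off on the pool (pool name is a placeholder)
    zfs get dedup,compression tank
    # benchmark with a 4GB working set, i.e. twice the 2GB of RAM, so the ARC can't hide the disks
    bonnie++ -d /volumes/tank/bench -s 4096 -u admin -n 64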

 

NexentaStor AFP & iSCSI Xbench Benchmarks

UPDATED WITH BOTH iSCSI & AFP RESULTS

I’ve recently set up a ZFS raidz with 7 disks using NexentaStor. Natively this doesn’t come with AFP, but I managed to get a package and get it all working (which I’ll demo in an upcoming tutorial). One thing I noticed, however, is that I could never find any benchmarks that tested the general use of a NAS, i.e. using CIFS or AFP over a network to another machine and measuring performance. There are literally zero AFP benchmarks for NexentaStor due to its non-native support. So here it is.

Test Environment:

NAS: 7x Samsung Spinpoint F3 2TB hard drives connected via SATA & SATA-II, in a ZFS raidz running on NexentaStor 3
Network: 1000BASE-T Gigabit LAN
Test Machine: Mac mini with 1GB memory (disk tests are cached in memory for speed) running Xbench to benchmark
… 
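For the curious, the iSCSI side on a COMSTAR-based box like NexentaStor boils down to roughly the following (the appliance’s own management interface wraps these steps for you; the zvol name and size are just examples):

    # carve a 100GB zvol out of the pool to act as the iSCSI LUN (name and size are examples)
    zfs create -V 100g tank/xbench-lun
    # register the zvol as a SCSI logical unit, then expose it to all initiators
    sbdadm create-lu /dev/zvol/rdsk/tank/xbench-lun
    stmfadm add-view <GUID printed by the previous command>
    # create an iSCSI target and make sure the target service is running
    itadm create-target
    svcadm enable -r svc:/network/iscsi/target:default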

 

Bye Bye FreeNAS, Hello NexentaStor

Well, I’ve used FreeNAS for over two years now, and all has been good. However, in that time my demand for large quantities of storage has been joined by a demand for high-speed storage; once I had replaced all of my drives with 2TB 7200RPM drives I realised that FreeNAS wasn’t giving me the per-drive performance I’d like.

Welcome NexentaStor… a storage appliance natively supporting ZFS, as it’s based on OpenSolaris! NexentaStor offers many of the same features as FreeNAS, but at a greater level of performance. This comes at a cost though: the free Community edition is limited to a somewhat generous 18TB, and anything beyond that means paying for a licence.

NexentaStor is also a pure storage appliance; although it supports CIFS/iSCSI/NFS and the like, it doesn’t have all the bells and whistles of FreeNAS… but for me, there’s no use having all of those features if I can’t have the speed.

I’m installing NexentaStor as we speak, and I’ll be posting a review / tutorial once I’ve got it up and running and configured to my liking 🙂 I hope you enjoy it!

 

Multi-Homed Multi-Boxed ZFS Luns Using iSCSI

Recently I had a thought… most of my machines are sitting redundant with up to 4 drives in each, and without unscrewing every single one of them out of my rack, I want to pull all of that space into one giant zpool using ZFS.

Imagine combining the drive space of 10 computers into one giant drive… see where I’m going with this now?

So my idea is to make the drives in each machine available to the “ZFS Master” (the Solaris box running the ZFS pool) via iSCSI, which is essentially an “offer your drives at block level over Ethernet” protocol, then add them all into one giant zpool… the advantages of this are:

  • Utilising all of my hardware
  • iSCSI can work over a WAN, so I could use boxes I have in other cities
  • Have each LUN host (an individual computer + its drives) power up via WOL (Wake-on-LAN) initiated by the ZFS Master
  • Greater level of redundancy possible.
  • Backup “ZFS Master” possible
  • Everything connected via either Gigabit Ethernet or Fibre Channel.

So imagine… a rack full of computers with hard drives in them… at the bottom is a more powerful computer running Solaris, which attaches the drives of every other computer, adds them to the zpool, and then advertises that pool over AFP / SMB to the computers in my house…
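In practice, on the ZFS Master that would boil down to something like this (the addresses and device names below are made up purely for illustration):

    # point the Solaris iSCSI initiator at each box that is offering up its drives
    iscsiadm add discovery-address 192.168.0.21
    iscsiadm add discovery-address 192.168.0.22
    iscsiadm modify discovery --sendtargets enable
    # create device nodes for the newly discovered LUNs
    devfsadm -i iscsi
    # then lump all of the remote disks into one giant pool (real device names will differ)
    zpool create megapool raidz c3t0d0 c3t1d0 c4t0d0 c4t1d0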

Yet another way to make a multi-terabyte system out of old, free components that could possibly outperform a £20,000 solution! =) I’ll post all of the results of my testing after the break in a few days 🙂

P.S. This idea is without considering performance, as that’s something I can work out later 🙂 & thanks bda for your advice

 

Utilising and Exploring ZFS to its Potential (A Quick Guide)

Have you seen the Drobo box? It’s a storage appliance that lets you create giant volumes and hot-swap hard drives at will with failure tolerance… the bad news is that it costs close to £1000 even without the drives. I’ll explain how to make a better one… for free! =)

ZFS (Zettabyte File System) is Sun’s newest file-system offering; it’s supported natively on FreeBSD / Solaris, and on Mac OS X / Linux / Windows via third-party ports. I’m gonna keep this guide simple, short and sweet, so I’ll bullet-list the main features that wow people about ZFS =)

  • It’s a 128-bit file system that can store up to 340 quadrillion zettabytes of data (no other production file system can do this)
  • It checksums your data on the fly, so you can verify integrity by “scrubbing” the pool (spotting failing drives before they completely die)
  • It natively supports just about every RAID configuration you can think of and doesn’t suffer from the RAID-5 write hole.
  • You can create snapshots of your data that initially consume next to no disk capacity.
  • Volumes, or “pools”, can be expanded at any time, so you can start with a 2TB raid and grow it to a 10TB raid with no data loss.
  • You can mix and match drive capacities, brands and RPMs.
  • It’s reliable* (on officially supported incarnations anyway)
  • It’s a memory whore (don’t try it unless you have at least 2GB of RAM in your system)
  • It’s supported in the latest version of FreeNAS (0.7)
  • It allows hot-swapping of drives when one fails (so you don’t lose data or time)
  • Hot spares are supported
  • Pools can easily be transferred / transported to any other ZFS-capable system without extensive configuration or any data loss.
  • It’s free, free, free (under the CDDL).

Think of a hardware RAID-5 or a geom_concat/raid setup, and then think of those again without any of the issues / flaws they have… that’s what ZFS is! =)

So let’s get started. I’ll run through creating and bringing a ZFS raid online first, and then some maintenance commands afterwards. I suggest trying this on a …
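As a taster of what’s coming, the basic shape is roughly this (the disk names are FreeBSD-style examples; on Solaris they’d be cXtYdZ devices instead):

    # build a raidz pool called "tank" from three disks (example device names)
    zpool create tank raidz da1 da2 da3
    # check the pool's health and layout
    zpool status tank
    # walk every block and verify its checksums in the background
    zpool scrub tank
    # take a near-zero-cost snapshot of the pool's root filesystem
    zfs snapshot tank@before-upgrade
    # grow the pool later by adding another raidz vdev
    zpool add tank raidz da4 da5 da6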