Reading Linux extfs (ext2/3/4) in Mac OS X

I have a collection of both Mac OS X and Linux servers dotted around the place… Sometimes I need to read ext4-formatted drives on my Macs, and it always proves an annoying problem: I have to mount them inside a virtual machine and copy things across. Painful and slow!

I searched the interweb for a solution… most guides are about three years out of date, so here's one from 2013 that works on Lion / Mountain Lion and even Mavericks DP1/2.

Download and install: OSXFUSE-2.5.4.dmg (OSXFUSE picks up where MacFUSE left off and even offers compatibility with old MacFUSE plugins).

Download and install: fuse-ext2-0.0.7.dmg

Plug in your extfs-formatted drive… Voila! I wouldn't rely on this in a production environment, mind you… but it's decent if you just need to grab some files from a USB drive quickly, for example.
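If the drive doesn't mount automatically, you can mount it by hand. A minimal sketch, assuming the ext partition shows up as /dev/disk2s1 and using a made-up mount-point name (run diskutil list to find your real identifier):

    # Find the BSD identifier of the ext2/3/4 partition
    diskutil list

    # Create a mount point and mount read-only (write support
    # in fuse-ext2 is experimental, so ro is the safe choice)
    sudo mkdir -p /Volumes/linuxdisk
    sudo fuse-ext2 /dev/disk2s1 /Volumes/linuxdisk -o ro

    # Unmount when you're done
    sudo umount /Volumes/linuxdisk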


NexentaStor ZFS Bonnie++ Benchmarks

Following on from my previous post regarding AFP and iSCSI benchmarks, I've decided (after many requests) to post a few raw benchmarks of the system gathered by bonnie++. The environment is as follows:

CPU: Athlon 64 3700+
RAM: 2GB DDR400
Controllers: 2x SATA-II and 1x SATA-I
Hard drives: 7x Samsung 2TB Spinpoint F3, 5600 RPM
OS: NexentaStor 3
ZFS config: standard raidz1 with dedup=off and compression=off
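For anyone wanting to replicate the pool layout, here's a minimal sketch of how a raidz1 pool with those settings is created; the pool name and device names are made up, yours will differ:

    # Create a single-parity raidz pool from the seven drives
    # (device names are hypothetical)
    zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c2t0d0 c2t1d0 c3t0d0

    # Match the test configuration: no dedup, no compression
    zfs set dedup=off tank
    zfs set compression=off tank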

So I gathered a few results… after some puzzling early runs I found a bottleneck on one of the drives that dragged the figures down greatly; however, once this was worked out, I achieved the following: … 
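For reference, the runs were done with something along these lines (the exact flags aren't recorded in the post; the path is made up, and the file size follows the usual bonnie++ rule of at least twice the machine's RAM):

    # -d: directory on the pool to test in
    # -s: total file size in MB (2x the 2GB of RAM)
    # -u: user to run as when started from root
    bonnie++ -d /tank/bench -s 4096 -u nobody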


NexentaStor AFP & iSCSI Xbench Benchmarks

UPDATED WITH BOTH iSCSI & AFP RESULTS

I've recently set up a ZFS raidz with 7 disks using NexentaStor. Natively it doesn't come with AFP, but I managed to get a package and get this all working (which I'll demo in an upcoming tutorial). One thing I noticed, however, is that I could never find any benchmarks that tested the general use of a NAS, i.e. using CIFS or AFP over a network from another machine and testing performance. There are literally zero AFP benchmarks for NexentaStor due to its non-native support. So here it is.

Test Environment:

NAS: 7x Samsung 2TB Spinpoint F3 hard drives connected via SATA & SATA-II, in a ZFS raidz running on NexentaStor 3
Network: 1000BaseT Gigabit LAN
Test machine: Mac mini with 1GB of memory (disk tests are cached in memory for speed) running Xbench as the benchmark
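The post doesn't show the iSCSI target setup itself, but for context, exposing a ZFS volume over iSCSI on an OpenSolaris-based appliance like this is typically done through COMSTAR. A minimal sketch, with a made-up volume name, size, and GUID:

    # Create a 200GB block volume (zvol) to act as the iSCSI LUN
    zfs create -V 200G tank/xbench-lun

    # Register the zvol as a COMSTAR logical unit and expose it
    # to all initiators (add-view takes the GUID sbdadm prints)
    sbdadm create-lu /dev/zvol/rdsk/tank/xbench-lun
    stmfadm add-view 600144F0EXAMPLE   # GUID placeholder

    # Create an iSCSI target with an auto-generated IQN
    itadm create-target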
… 


Bye Bye FreeNAS, Hello NexentaStor

Well, I've used FreeNAS for over two years now, and all has been good. However, in that time my demand for large quantities of storage has been joined by a demand for high-speed storage; once I had replaced all of my drives with 2TB 7200RPM drives, I realised that FreeNAS wasn't giving me the per-drive performance I'd like.

Welcome NexentaStor… a storage appliance natively supporting ZFS, as it's based on OpenSolaris! NexentaStor offers many of the same features as FreeNAS, but at a greater level of performance. This comes at a cost, though: the free Community edition is limited to a somewhat generous 18TB, beyond which the paid version will cost you.

Also, NexentaStor is a pure storage appliance: although it supports CIFS/iSCSI/NFS and the like, it doesn't have all the bells and whistles of FreeNAS… but for me, there's no use having all those features if I can't have the speed.

I'm installing NexentaStor as we speak; I'll be posting a review / tutorial once I've got it up and running and configured to my liking 🙂 I hope you enjoy it!


Multi-Homed Multi-Boxed ZFS LUNs Using iSCSI

Recently I had a thought… most of my machines are sitting idle with up to 4 drives in each, and without unscrewing every single one of them out of my rack, I want to pull all of that space into one giant zpool using ZFS.

Imagine combining the drive space of 10 computers into one giant drive… see where I'm going with this now?

So my idea is to make the drives in each machine available to the “ZFS Master” (the Solaris box running the ZFS pool) via iSCSI, which is essentially an “offer your drives at block level over Ethernet” protocol, then add them all into one giant zpool (a rough sketch of the commands follows the list below). The advantages of this are:

  • Utilising all of my hardware
  • iSCSI can work over a WAN, so I could use boxes I have in other cities
  • Have each LUN host (an individual computer plus its drives) power up via WOL (Wake-on-LAN) initiated by the ZFS Master
  • A greater level of redundancy is possible
  • A backup “ZFS Master” is possible
  • Everything connected via either Gigabit Ethernet or Fibre Channel
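Here's that sketch: roughly what the initiator side could look like on the ZFS Master, assuming each drive box already exports its disks as iSCSI targets (all addresses, pool and device names below are made up):

    # Point the Solaris initiator at each drive box and discover its LUNs
    iscsiadm add discovery-address 192.168.1.10
    iscsiadm add discovery-address 192.168.1.11
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi          # create device nodes for the new LUNs

    # Build one giant pool out of the remote LUNs
    # (format(1M) lists the real cXtYdZ names to use here)
    zpool create megapool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0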

So imagine… a rack full of computers with hard drives in them… at the bottom sits a more powerful computer running Solaris, which mounts the drives of every single other computer, adds them into the zpool… then advertises this zpool over AFP / SMB to the computers in my house…

Yet another way to make a multi-terabyte system out of old, free components that could possibly outperform a £20,000 solution! =) I'll post all of the results of my testing after the break in a few days 🙂

P.S. This idea doesn't yet consider performance, as that's something I can work out later 🙂 Thanks, bda, for your advice!