rants, tirades, ruminations
ZFS on Linux posted Sat, 13 Oct 2012 03:01:03 UTC
Since I'm on a roll here with my posts (can you tell I'm bored on a Friday night?), I figured I would also chime in here a bit with my experiences using ZFS on Linux.
Quite some time ago now, I posted about OpenSolaris and ZFS. Fast forward a few years, and I would beg you to pretty much ignore everything I said then. The problem of course is that OpenSolaris doesn't really exist now that the asshats at Oracle have basically ruined anything good that ever came out of Sun Microsystems, post-acquisition. No real surprises there I guess. I can't think of anyone I've known over the years who actually likes Oracle as a company. They've managed to bungle just about everything they've ever touched and continue to do so in spades.
Now, the knowledgeable reader might say at this point, but what about all of the forks? Sorry folks, I just don't see a whole lot of traction in any of these camps. Certainly not enough to warrant dropping all of my data onto any of their platforms anyway. And sure, you could run FreeBSD to get ZFS. But again, it seems to me the BSD camp in general has been dying the death of a thousand cuts over the years and continues to fade away into irrelevance (to be fair, I'm still rooting for the OpenBSD project; but I'd probably just be content to get PF on Linux at some point and call it a day).
What I'm trying to say of course is that Linux has had the lion's share of real capital resources funding development and maintenance for years on end now. So while you might not agree with everything that's happened over the years (devfs anyone? hell, udev now?), it's hard to argue that Linux can't do just about anything you want to do with a computer platform nowadays, whether that be the smartphone in your pocket or the several thousand node supercomputer at your local university, and everything in between.
Getting back to the whole point of this post, the one thing that is glaringly missing from the Linux world still is ZFS. Sure, Btrfs is slowly making its way out of the birth canal. But it's still under heavy development. And while I thought running ReiserFS v3 back in the day was cool and fun (you know, before Hans murdered his wife) when ext2 was still the de facto file system for Linux, I simply refuse to entrust the several terabytes of storage I have at home now to Btrfs on the off chance it won't corrupt the entire file system.
So, where does that leave us? Thankfully the nice folks over at Lawrence Livermore National Laboratory, under a Department of Energy contract, have done all the hard work in porting ZFS to run on Linux natively. This means that you can get all the fantastic data integrity which ZFS provides on an operating system that generally doesn't suck! Everyone wins!
Now I've known about the ZFS on FUSE project for a while, along with the LLNL project. I've stayed away from both because it just didn't quite seem like either was ready for prime time just yet. But I finally took the plunge a month or so ago: I copied everything off the dual 3.5" external USB enclosure I use for backups (currently holding two 1.5TB hard drives) and slapped a ZFS mirror onto those puppies. I'm running all of this on the latest Debian testing kernel (3.2.0-3-amd64 at the moment), with ZFS built directly from source into easily installable .deb packages, and I must say, I'm very impressed thus far.
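For the curious, setting up a mirror like that is about as painless as it gets. The device names below are placeholders, not my actual drives; you'd want to substitute the stable /dev/disk/by-id/ paths for your own disks so the pool survives the kernel reshuffling device names on boot:

```shell
# Create a two-way mirror named "backup" across the two 1.5TB drives.
# (Placeholder device names -- use your own /dev/disk/by-id/ paths.)
zpool create backup mirror \
    /dev/disk/by-id/ata-DRIVE1 \
    /dev/disk/by-id/ata-DRIVE2

# Confirm the pool came up healthy and both halves of the mirror are online:
zpool status backup
```

That's it. No mkfs, no fstab entry; the pool mounts itself at /backup by default.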
Just knowing that every single byte sitting on those drives has some kind of checksum associated with it thrills me beyond rational understanding. I had been running a native Linux software RAID-1 array previously using mdadm. And sure, it would periodically check the integrity of the RAID-1 mirror just like my zpool scrub does now. But I just didn't have the same level of trust in the data like I do now. As great as Linux might be, I've still seen the kernel flip out enough times doing low-level stuff that I'm always at least a little bit leery of what's going on behind the scenes. (My most recent foray with disaster, earlier this year, was with that same mdadm subsystem: we tried to do software RAID across 81 multipath-connected SAS drives and ended up buying hardware RAID cards instead of continuing to deal with how broken that whole configuration was.)
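The difference between the two checks is worth spelling out. A scrub reads every allocated block and verifies it against its checksum, while the mdadm check only confirms the two mirror halves agree with each other (it can't tell you which half is right when they don't). Something like this, assuming the pool name from my setup and a typical md0 array:

```shell
# ZFS: read and verify every allocated block in the pool against its checksum:
zpool scrub backup

# Watch progress and see any checksum errors found (and repaired, on a mirror):
zpool status backup

# The rough mdadm equivalent I was running before: compare the mirror halves.
# This detects mismatches but has no checksum to say which copy is correct.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat
```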
My next project will most likely involve rebuilding my Linux file server at home with eight 2-3TB hard drives and dumping the entirety of my multimedia collection onto a really large RAID-Z2 or RAID-Z3 ZFS volume. I've actually been looking forward to it. Now just as soon as someone starts selling large-capacity SATA drives at a reasonable price, I'll probably buy some up and go to town.
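When the drives finally show up, the build would look something like the sketch below. Pool and dataset names (and the placeholder device paths) are just my guesses at what I'd use, but RAID-Z2 across eight drives means any two can fail without losing data, at the cost of two drives' worth of capacity going to parity:

```shell
# Double-parity pool across eight drives; any two can die without data loss.
# (Placeholder device names -- substitute real /dev/disk/by-id/ paths.)
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
    /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8

# Carve out a dataset for the media collection:
zfs create tank/media

# Sanity check capacity and health:
zpool list tank
zpool status tank
```

With eight 2TB drives that's roughly 12TB usable after parity; RAID-Z3 would trade another drive's capacity for surviving a third failure.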