Terabytes on a budget… 2U 14.5TB usable backup device

15th July, 2011 - Posted by Willott - 1 Comment

So, a while ago, our tape library broke, properly. The manufacturer had only given a one-year warranty, which on enterprise hardware seems beyond stupid (yes, I could have bought an extended warranty, but as the rest of their kit, servers, switches and so on, carries a three-year minimum, I was a little annoyed that the tape library didn't). So, instead of forking out a shed-load of money for a replacement drive for the tape library, I started planning a storage server that could live offsite and act as an always-on data store to back up to, for the same cost as (or less than, as it turned out) a replacement tape drive. It's now up and running, and here's some info about it.

I was interested in Backblaze's device, but the size and the all-consumer components didn't sit all that well with me, given that our device would be a secondary backup server and wouldn't be mirroring data multiple times across multiple servers. I also started considering which OS to use, looking at FreeNAS, Openfiler (after suggestions on EduGeek) and, after some other suggestions, Nexenta. The bit that interested me most was ZFS: compression, dedupe, snapshots, scrubbing, a self-healing file system, very interesting! So what did I choose in the end…

Well, hardware wise:

  • Xcase 2U 12 drive case with redundant PSU
    • basic case, quite cheap, with space for future expansion and some PSU redundancy
  • Tyan S7002WGM2NR
    • Onboard SAS and iKVM
  • Xeon 5620
    • Fairly beefy quad core, plenty enough for ZFS
  • 12GB DDR3 ECC RAM
    • Decent amount to assist ZFS
  • 8x 2TB Seagate LP Drives
    • 5900RPM drives; the IOPS requirement isn't significant, transfer rate is good, and they're cheap (to buy and to replace)
  • 2x 8GB USB sticks
    • cheap, flexible, easily replaced

I’ll let you price up the above; it would be wrong for me to say what we spent.

Software-wise, I chose NexentaStor Community Edition due to its zero cost. I had been using Nexenta Core, but thought that a GUI would be friendlier for my staff. NexentaStor also has an upgrade path to a paid version with support, should that ever be needed here. I looked at FreeNAS and Openfiler, but the ZFS version within Nexenta is more advanced, as it's based on OpenIndiana, related to OpenSolaris and Sun, the home of ZFS.
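For anyone who hasn't played with ZFS, the features that sold me on it are all one-liners at the command line. A minimal sketch, assuming a pool called tank with a backup dataset (the names are placeholders, not what I actually use):

zfs set compression=on tank/backup      # per-dataset compression
zfs set dedup=on tank/backup            # per-dataset dedupe (very RAM hungry)
zfs snapshot tank/backup@2011-07-15     # instant point-in-time snapshot
zpool scrub tank                        # walk every block, verify and repair checksums
zpool status tank                       # scrub progress and pool health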

That gave me 14.5TB of usable space in a single RAIDZ2 array, which should last me a while (there's 1.5TB of data to back up, and dedupe will be handled by the backup software, since the backups are encrypted).
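For reference, building that sort of pool is a single command. A rough sketch with placeholder Solaris-style device names for the eight Seagates (yours will differ):

zpool create tank raidz2 \
    c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0   # any two drives can fail without data loss
zpool list tank    # raw pool size
zfs list tank      # usable space after parity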

Comparing with one of our other devices, that lot cost about the same as a QNAP (with drives), but with the benefits of ZFS and some enterprise hardware thrown in as well.

Performance-wise it's quite good too; the following is from bonnie:

WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
401MB/s   30%    108MB/s   9%     146MB/s   5%     206/sec
400MB/s   30%    110MB/s   10%    137MB/s   5%     147/sec
---------------------------------------------------------------
802MB/s   30%    219MB/s   9%     284MB/s   5%     176/sec
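(If you want to reproduce something similar, an invocation along these lines should do it; this is bonnie++ syntax shown as a sketch only, with the test size set to roughly twice RAM so caching doesn't flatter the numbers, and the path a placeholder:

bonnie++ -d /tank/backup -s 24576 -n 0 -u root   # 24GB test size in MB, skip the small-file tests

)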

I'm a little bemused by the write result (RAM caching possibly has some effect here), so I'm basing my thoughts on the re-write and read speeds, which seem quite good, especially for the environment I'm aiming at. Over CIFS I'm hitting 60MB/s writes and 80MB/s reads. I need to do some more investigation on this; jumbo frames across the network or trunking interfaces may help increase it, but in all honesty this thing will eventually be sitting at the other end of a 100Mb pipe, so I'll be looking at 12MB/s as an absolute maximum, and this performance investigation is just for my own interest.
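If I do chase those CIFS numbers, the usual knobs on a Solaris-derived box look something like the following. A sketch only; the interface names are placeholders and the switch has to support jumbo frames / LACP too:

dladm set-linkprop -p mtu=9000 e1000g0                     # jumbo frames on the NIC
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0    # trunk two NICs with LACP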

So, thoughts? Well, cheap storage is easy to come by nowadays, but I think this is one step up from that: using a mix of consumer and enterprise hardware to maximise space, keep performance acceptable for the job in hand and minimise cost. So all in all a good experiment, I think, and it provides us with what we need (in this day and age, buy what suits your needs, both current and future, rather than overspeccing to silly proportions).

Next year's project? A unified storage server based on the same principle of maximising performance and space for minimal cost (ZFS to the rescue), using 6G SAS drives (7200RPM or higher, not made that decision yet; mirrored rather than RAIDZ), 6G SAS expanders, SSD ZIL and L2ARC, and 10GbE NICs.
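The ZIL and L2ARC bits are again just zpool commands once the SSDs are in the box. A sketch with placeholder pool and device names:

zpool add tank log mirror c1t0d0 c1t1d0   # mirrored SSD pair for the intent log (sync write latency)
zpool add tank cache c1t2d0               # single SSD as L2ARC read cache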

Future projects? The 4U, 48-drive BlackHole Storage device. I just need to get funding, a PCB manufacturer who can throw together a SAS expander backplane for it (I have in mind what I want, I just need the PCB creating) and someone to CAD and manufacture the case (again, I have a design in mind, I just need it creating!).

Could we expand this to petabytes? Couldn't we put a large number of BlackHole Storage devices in a rack, have two uber-spec'd head servers, connect the BlackHole Storage devices to the heads as iSCSI targets, RAIDZ across the BlackHole Storage devices, and have some failover between the two heads? Possibly 640TB of usable space in a rack (dependent on the RAIDZ configs in the devices and heads)? Something for you and me to ponder… or maybe each BlackHole becomes an OpenStack storage node (or Gluster, or Lustre) and you have an in-house cloud… the possibilities, as they say, are endless!
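Purely as a thought experiment, the plumbing per BlackHole would be along the lines of carving out a zvol and exporting it over COMSTAR iSCSI, then pooling the LUNs on the heads. Everything below is a sketch with made-up names and placeholders, not a tested recipe:

# on each BlackHole storage node
zfs create -V 40T blackhole/lun0                  # zvol to export
stmfadm create-lu /dev/zvol/rdsk/blackhole/lun0   # prints the LU GUID
stmfadm add-view <GUID-from-create-lu>
itadm create-target

# on a head node
iscsiadm add discovery-address <blackhole-ip>
iscsiadm modify discovery --sendtargets enable
zpool create bigpool raidz <lun-disk-1> <lun-disk-2> <lun-disk-3>   # one iSCSI LUN per BlackHole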

