Hardware refresh: NAS and server machine

As the last post hinted, it's been hardware refresh time here at HA Towers lately.

I usually check on my hardware around this time of year, and this year I kinda remembered that my server box was getting a bit old.

I see that I blogged about the last upgrade too, so that was nearly three years ago. I was still using that vintage Antec Aria case and PSU from 2004. But most significantly, I didn't upgrade the hard disk at that time; I used the one from the previous box. So now I look, and it turns out that disk is...erm...a Seagate 7200.7 80GB. I have just checked my records, and I'm pretty sure I bought this disk on 2005-11-02. So it's seven and a half years old, near enough. Yes, the box just had one of them - no redundancy. And no, I didn't have complete backups - I had my maildir, IRC logs, and WordPress data on scheduled backup, but the configurations of the VMs themselves, and rather a lot of other stuff I've been running on my web server, weren't backed up at all. Eep.

Please, no-one remind me of the MTBF of ATA hard disks...

So anyway, it was clearly high time for a refresh. The other thing I decided to upgrade was my NAS. The old one - a D-Link DNS-323 - has served me well, but it's pretty antiquated by modern NAS standards, and its NFS server isn't reliable enough to be used. Modern NASes have all kinds of whizzy features (most of which I'm not going to use), but the most obvious difference is pure and simple speed: the DNS-323 manages a transfer speed of about 8-9MB/sec from a RAID-1 array with the wind behind it. The DNS-323 uses some kind of ancient ARM chipset. Modern SOHO NAS appliances use either much newer and faster ARM chipsets (at the low end) or Intel Atoms (at the higher end). I'll tell you how fast my new one is later. (ooh! The suspense!)

So it was time for a bit of hardware shopping. The NAS was easy, after a bit of research at SmallNetBuilder and some price comparisons. I wanted an Atom-based box, for the big speed increase they provide. Various manufacturers are popular in the space, but the Thecus N5550 was available for $460 from NCIX, after the standard price-match-with-directcanada dodge. That's significantly cheaper for a five-disk box than any of its competitors charge for a four-disk box. Bit of a no-brainer. Thecus' UI is considered not as polished as its competitors', but the reviews indicated it works fine and performs well, and I don't really care about shiny UIs; plus I could always just run FreeNAS or RHEL on it if the Thecus UI turned out to be really terrible. I matched it with five WD Red 2TB disks - the Reds are 'NAS-optimized' disks from WD. It was funny to note that they actually cost 50% more than the 2TB disks I bought for the DNS-323 in 2011 - hard disk prices sure have stopped going down.

I decided I was going to come up with a pretty nice server box this time - the last couple have been kind of bodged-together boxes (though they worked out remarkably reliably). It also has to fit in my little Ikea 'server cabinet'. So eventually I plumped for:

- Jonsbo V3+ mini-ITX case
- Supermicro X9SCV-Q mini-ITX socket G2 motherboard
- Intel Core i5-2450M CPU
- Crucial 16GB (2x8GB) DDR3-1600 SODIMM RAM
- 2x Samsung 840 Pro 128GB SSD
- Corsair CX430M 80+ Bronze modular PSU

The CPU is a mobile one with a 35W TDP; I wanted to try and keep this box power-efficient. Deciding to go with a mobile CPU limits your choices in CPU and motherboard quite a lot, as not many are readily available - that's the only motherboard I could find that's capable of taking a Socket G2 (mobile) CPU, and I got the CPU itself off eBay from a business that strips down returned laptops. But happily enough, it's actually a pretty good motherboard: it's meant for servers and has a bunch of neat server features. It's also a UEFI board, and I did a native UEFI Fedora install. The CPU is plenty capable, and about 3-4x faster than the old box's X2-250.

Oodles of RAM is cheap these days and should help keep the web server from going down under load. I wanted two disks so I could do RAID-1 redundancy, and SSDs because SSDs are so damn fast these days. The 840 Pro is the consensus SSD of choice among hardware tweakers right now, if you were interested! Don't get the plain 840, though; it will have an awful lifespan.
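For the curious, here's a minimal sketch of how a two-disk mirror like this can be assembled by hand with mdadm - the partition names /dev/sda2 and /dev/sdb2 are assumptions for illustration, and in practice the Fedora installer can set all of this up for you:

    # Create a RAID-1 (mirror) array from one partition on each SSD.
    # /dev/sda2 and /dev/sdb2 are assumed names - check lsblk first.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # Watch the initial sync, then put a filesystem on the array.
    cat /proc/mdstat
    mkfs.ext4 /dev/md0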

Modular PSUs are a new hardware tweaker thing that's happened since the last time I built a system. 'Modular' just means that most of the cables aren't permanently wired into the PSU, as is traditional: the main ATX power cable is, but the others are all removable. There's a bunch of sockets on the back of the PSU and a bunch of different cables in the box, and you just plug in the ones you need. This is great for this type of small build, as modern PSUs come with all sorts of auxiliary power connectors for exotic graphics cards and such; you don't have to plug those in at all, so you save space and cable mess inside the box. All I had to plug in was two SATA power connectors; the rest I left out. 80 Plus certification is another relatively new thing, and it's simply about efficiency - there are several levels of certification which guarantee certain levels of power efficiency. Keeps heat output and electricity bills down, me likey.
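To put rough numbers on it (the 100W load here is just an assumed figure for illustration): base 80 Plus requires 80% efficiency, while Bronze requires about 85% at half load, so:

    # Wall-socket draw for an assumed 100 W DC load:
    echo "scale=1; 100/0.80" | bc   # plain 80 Plus (80%): 125.0 W
    echo "scale=1; 100/0.85" | bc   # Bronze at 50% load (85%): 117.6 W

A saving of 7W or so doesn't sound like much, but this box runs 24/7.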

The case is cheap, thin metal as you'd expect and doesn't have any high-end bells and whistles like you get on nice Silverstone or Antec cases, but it's the right size for me, it does the job, and it looks quite nice - very black monolith-y.

Aside from the hard disk mounting travails (see last post!), the build went pretty smoothly, except that I didn't realize at first how you lock the CPU into the G2 socket; it doesn't have a lever like desktop sockets do. Instead, it has an actuator you have to turn with a flathead screwdriver.

New server box being built (before PSU install...and string mounting)

Building the NAS box consisted of opening the hard disk bags and sliding the disks into the drive slots. This is the kind of reason I buy dedicated NAS boxes instead of trying to do custom PC builds!

I got the NAS last week, set it up, and have had it transferring data all week; the CPU for the server box arrived today, so I built and installed that today. And now everything's done and both little black monoliths are whirring away in my server cabinet, behaving themselves - so far - very nicely.

I have rather better data integrity guarantees on my servers now (whew - I'm also improving my backup plan using Duply, and when F19 goes stable, I'll be able simply to take live snapshots of my VMs to back them up), and the performance improvements are awesome.

The new NAS transfer rate? 70-90MB/sec; nearly 10x faster than the old one. That's the kind of bump I like! I have it set up as a 6TB RAID-6 array (like RAID-5, but with two drives' worth of parity data, so it can survive the loss of any two drives - with five 2TB disks, that leaves (5 - 2) x 2TB = 6TB of usable space). I'll use the old NAS' disks as spares. The new box's NFS server seems reliable, and it's better than the DNS-323 at handling non-ASCII characters, even across various client OSes and protocols - 世界の果てまで連れてって! renders as 世界の果てまで連れてって! both on my Linux box with the share mounted via NFSv4 and on a Windows box with the share mounted via CIFS. It can also restrict access to shares in various ways, though the way it handles NFSv4, every NFS share is unavoidably accessible read/write by anyone with access to the server, so I have to use CIFS shares if I want to do restricted access.

The old VM host wasn't slow, exactly, but a faster CPU, 4x more RAM and bleeding-edge SSDs for storage sure make the new one faster. I could actually run all my testing VMs on that box and just access them via virt-manager from my desktop if I wanted; the performance seems almost identical between VMs running on the new server box accessed via ssh in virt-manager and VMs running locally on my desktop.
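For reference, mounting the shares from the clients looks roughly like this - a sketch only, with the hostname 'nas', the share name 'media', the mount point, and the username all made up for illustration:

    # NFSv4 mount on the Linux box (hostname, export and mount point assumed):
    mount -t nfs4 nas:/media /mnt/media

    # CIFS mount of a restricted share, authenticating as a specific user:
    mount -t cifs //nas/media /mnt/media -o username=adam,uid=adam

The CIFS route is the one I need for the restricted shares, since per-user authentication happens at mount time.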

So I'm happy with the new boxes! Out with the old:

Old NAS and server machines

and in with the new (server box on the left, NAS on the right):

New NAS and server machines

Comments

Gadget Wisdom wrote on 2013-03-29 01:19:
I used the exact same Aria up until last year as a MythBox, and I've also been upgrading my file server. I want to deploy a better backup. I may have a look at duply. I had issues with previous duplicity attempts. I thought you might have a perspective. Anything you learned from your file server building?
adamw wrote on 2013-03-29 01:41:
Hi Gadget! I use a Zotac Zbox as my HTPC, FWIW. I'm hardly a backup expert, but I find duply seems to do exactly as much as I need and not so much more that it's overly complicated: it lets me schedule incremental, encrypted backups of specific directories to the NAS (which is CIFS-mounted, in my setup, but it has all sorts of target options). It tests out fine, but I've been using it in production for all of a couple of days, so I'm hardly going to go around giving out any unequivocal endorsements.

My strategy is that each server machine has an effectively private directory on the NAS: I have a user account for each server box on the NAS, and a directory for each box which only the matching user account is allowed to mount. The point being that if someone compromises one of my servers, at least they can't then destroy (bad) or access (worse) the backups of my other machines. Each machine backs up whatever vital content it contains nightly, using a simple cron.daily script that just calls 'duply foo backup', 'duply bar backup', etc. etc. The backups are encrypted using duply's built-in GPG support; as you have to use a passwordless key or write the password into a config file, this is of course no protection against anyone compromising the machine, but it is protection against anyone compromising the NAS.

I then have each of those backup directories synced with an S3 account for 'offsite backup' - if my apartment burns down, or whatever, at least I'll have the backups in S3.

Once I'm done tweaking the setup I'll take an offline snapshot of each server machine, encrypt those with a dedicated, passphrased GPG key, and stash them on the NAS and the S3 share. I'll also take copies of the duply configuration and key for each machine and stash them similarly. If I want reasonable convenience but slightly less security I can keep that GPG key on my desktop and stash a copy of it on the S3 share (in which case I'm relying on the security of my desktop, S3, and the strength of my passphrase); or, if I want more security and less convenience, I can just keep one copy of the 'master' key on a USB stick stashed somewhere discreet (air gap security!) and one in a safety deposit box (for the fire/flood scenario).

If any of the VMs fails catastrophically I can just go back to the last snapshot I took - I don't need to take snapshots that often - and then restore the vital data from the nightly backups, as the duply config will all be in place. If something catastrophic happens to *everything* local, I have the snapshots and backup files on S3 to build back up from. Once live snapshotting is finally smoothed out and production-ready I could in theory just back up by snapshotting each machine daily and encrypting the snapshots - I think they can be made incremental, though I don't know if there's any kind of encryption integration (yet). I haven't decided if that's the best way to go yet.

It gets a bit mind-bending trying to map this stuff out, but I *think* that gives me pretty reasonable protection against all scenarios and doesn't leave anything out in the clear anywhere anyone can get at it. Let me know if you spot any flaws...
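For concreteness, the cron.daily script described above could look something like this - a minimal sketch, with the profile names 'mail' and 'web' made up for illustration, and assuming each profile has already been created and configured with duply:

    #!/bin/sh
    # /etc/cron.daily/duply-backup - run a nightly incremental backup
    # for each duply profile. Profile names here are examples only.
    for profile in mail web; do
        duply "$profile" backup
    done

Each profile's backup target and GPG settings live in that profile's conf file (under ~/.duply/<profile>/, or /etc/duply/<profile>/ for root profiles).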