As the last post hinted, it’s been hardware refresh time here at HA Towers lately.
I usually check on my hardware around this time of year, and this year I kinda remembered that my server box was getting a bit old.
I see that I blogged about the last upgrade too, so that was nearly three years ago. I was still using that vintage Antec Aria case and PSU from 2004. But most significantly, I didn’t upgrade the hard disk at that time; I used the one from the previous box. So now I look and it turns out that disk is…erm…a Seagate 7200.7 80GB. I have just checked my records, and I’m pretty sure I bought this disk on 2005-11-02. So it’s seven and a half years old, near enough. Yes, the box had just the one – no redundancy. No, I didn’t have complete backups – my maildir, IRC logs, and WordPress data were on scheduled backup, but the configurations of the VMs themselves, and rather a lot of other stuff I’ve been running on my web server, weren’t backed up at all. Eep.
Please, no-one remind me of the MTBF of ATA hard disks…
So anyway, it was clearly high time for a refresh. The other thing I decided to upgrade was my NAS. The old one – a D-Link DNS-323 – has served me well, but it’s pretty antiquated by modern NAS standards, and its NFS server isn’t reliable enough to be used. Modern NASes have all kinds of whizzy features (most of which I’m not going to use), but the most obvious difference is pure and simple speed: the DNS-323 manages a transfer speed of about 8-9MB/sec from a RAID-1 array with the wind behind it. The DNS-323 uses some kind of ancient ARM chipset. Modern SOHO NAS appliances use either much newer and faster ARM chipsets (at the low end) or Intel Atoms (at the higher end). I’ll tell you how fast my new one is later. (ooh! The suspense!)
So it was time for a bit of hardware shopping. The NAS was easy, after a bit of research at SmallNetBuilder and some price comparisons. I wanted an Atom-based box, for the big speed increase they provide. Various manufacturers are popular in the space, but the Thecus N5550 was available for $460 from NCIX, after the standard price-match-with-directcanada dodge. That’s significantly cheaper for a five-disk box than any of its competitors for a four-disk box. Bit of a no-brainer. Thecus’ UI is reckoned to be less polished than its competitors’, but the reviews indicated it works fine and performs well, and I don’t really care about shiny UIs; plus I could always just run FreeNAS or RHEL on it if the Thecus UI turned out to be really terrible. I matched it with five WD Red 2TB disks – the Reds are ‘NAS-optimized’ disks from WD. It was funny to note that they actually cost 50% more than the 2TB disks I bought for the DNS-323 in 2011 – hard disk prices sure have stopped going down.
I decided I was going to come up with a pretty nice server box this time – the last couple have been kind of bodged-together boxes (though they worked out remarkably reliably). It also has to fit in my little Ikea ‘server cabinet’. So eventually I plumped for:
Jonsbo V3+ mini-ITX case
Supermicro X9SCV-Q mini-ITX socket G2 motherboard
Intel Core i5-2450M CPU
Crucial 16GB (2x8GB) DDR3-1600 SODIMM RAM
2x Samsung 840 Pro 128GB SSD
Corsair CX430M 80+ Bronze modular PSU
The CPU is a mobile one with a 35W TDP; I wanted to try and keep this box power-efficient. Going with a mobile CPU limits your choices in CPU and motherboard quite a lot – not many are readily available. That’s the only motherboard I could find that’s capable of taking a Socket G2 (mobile) CPU, and I got the CPU itself off eBay from a business that strips down returned laptops. But happily enough, it’s actually a pretty good motherboard. It’s meant for servers and has a bunch of neat server features. It’s also a UEFI board, and I did a native UEFI Fedora install. The CPU is plenty capable (and about 3-4x faster than the old X2-250).
Oodles of RAM is cheap these days and should help keep the web server from going down under load; I wanted two disks so I can do RAID-1 redundancy, and SSDs because SSDs are so damn fast these days. The 840 Pro is the consensus SSD of choice among hardware tweakers right now, if you’re interested! Don’t get the plain 840, though – it has a much worse rated lifespan.
Modular PSUs are a new hardware tweaker thing that’s happened since the last time I built a system. ‘Modular’ just means that most of the cables aren’t permanently wired into the PSU, as is traditional: the main ATX power cable is, but the others are all removable. There’s a bunch of sockets on the back of the PSU and you get a bunch of different cables in the box, and you just plug in the ones you need. This is great for this type of small build, as modern PSUs come with all sorts of auxiliary power connectors for exotic graphics cards and stuff; you don’t have to plug those in at all, so you save space and cable mess inside the box. All I had to plug in was two SATA power connectors, the rest I left out. 80 Plus certification is another relatively new thing, and simply about efficiency – there are several levels of certification which guarantee certain levels of power efficiency. Keeps heat output and electricity bills down, me likey.
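For a rough sense of what the certification buys you, here’s a back-of-envelope sketch. It assumes the published 80 Plus Bronze minimums at 115V – 82% efficiency at 20% and 100% load, 85% at 50% – and crudely interpolates between them; the load figure in the example is my own made-up illustration, not a measurement of this box:

```python
# 80 Plus Bronze minimum efficiencies (115V internal, non-redundant),
# keyed by fraction of the PSU's rated load.
BRONZE = {0.20: 0.82, 0.50: 0.85, 1.00: 0.82}

def wall_draw(dc_load_w, psu_rated_w=430):
    """Estimate AC watts drawn from the wall for a given DC load,
    interpolating linearly between the certified load points.
    This is an approximation; real efficiency curves are smoother,
    and efficiency below 20% load is usually worse than the 20% figure."""
    frac = dc_load_w / psu_rated_w
    points = sorted(BRONZE.items())
    if frac <= points[0][0]:
        eff = points[0][1]
    elif frac >= points[-1][0]:
        eff = points[-1][1]
    else:
        for (f0, e0), (f1, e1) in zip(points, points[1:]):
            if f0 <= frac <= f1:
                eff = e0 + (e1 - e0) * (frac - f0) / (f1 - f0)
                break
    return dc_load_w / eff

# A hypothetical ~60W DC load on a 430W Bronze PSU:
print(round(wall_draw(60), 1))
```

So at low loads even a Bronze unit wastes a double-digit percentage of the draw as heat, which is exactly why a low-TDP build like this one is worth pairing with an efficient PSU.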
The case is cheap, thin metal as you’d expect and doesn’t have any high-end bells and whistles like you get on nice Silverstone or Antec cases, but it’s the right size for me, it does the job, and it looks quite nice – very black monolith-y.
Aside from the hard disk mounting travails (see last post!), the build went pretty smoothly, except that I didn’t realize how you lock the CPU into the G2 socket; it doesn’t have a lever like desktop sockets, it has an actuator you have to turn with a flathead screwdriver.
Building the NAS box consisted of opening the hard disk bags and sliding them into the drive slots. This is the kind of reason I buy dedicated NAS boxes instead of trying to do custom PC builds!
I got the NAS last week, set it up and have had it transferring data all week; the CPU for the server box arrived today, so I built and installed that today. And now everything’s done and both little black monoliths are whirring away in my server cabinet, behaving themselves – so far – very nicely. I have much better data integrity guarantees on my servers now (whew – I’m also improving my backup plan using Duply, and when F19 goes stable, I’ll be able simply to take live snapshots of my VMs to back them up), and the performance improvements are awesome.

The new NAS transfer rate? 70-90MB/sec; nearly 10x faster than the old one. That’s the kind of bump I like! I have it set up as a 6TB RAID-6 array (like RAID-5, but with two drives’ worth of parity data, so it can survive the loss of any two drives). I’ll use the old NAS’ disks as spares. Its NFS server seems reliable, and it’s better at handling non-ASCII characters across various client OSes and protocols than the DNS-323 was – a name like 世界の果てまで連れてって！ renders correctly both on my Linux box with the share mounted via NFSv4 and on a Windows box with the share mounted via CIFS. It can also restrict access to shares in various ways, though the way it handles NFSv4, every NFS share is unavoidably accessible as r/w by anyone with access to the server, so I have to use CIFS shares if I want restricted access.

The old VM host wasn’t slow exactly, but a faster CPU, 4x more RAM and bleeding-edge SSDs for storage sure make the new one faster. I could actually run all my testing VMs on that box and just access them via virt-manager from my desktop if I wanted; the performance seems almost identical between VMs running on the new server box accessed over ssh in virt-manager and VMs running locally on my desktop.
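That 6TB figure falls straight out of the RAID-6 arithmetic, by the way – a trivial sketch, ignoring filesystem overhead and the decimal-vs-binary terabyte distinction:

```python
def raid6_usable_tb(num_disks, disk_tb):
    """RAID-6 spends two disks' worth of space on parity, so usable
    capacity is (n - 2) disks; the level needs at least 4 disks."""
    if num_disks < 4:
        raise ValueError("RAID-6 needs at least 4 disks")
    return (num_disks - 2) * disk_tb

# Five 2TB WD Reds:
print(raid6_usable_tb(5, 2))
```

With a four-disk competitor box you’d get only 4TB usable at the same redundancy level, which is another point in the N5550’s favour.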
So I’m happy with the new boxes! Out with the old:
and in with the new (server box on the left, NAS on the right):