My First FreeNAS Build

As my lab has grown over the years, so have my storage needs.  Today I have an array dedicated to backing up files.  That array consists of twelve (12) 2TB drives configured in a RAID 6 array, which gives me roughly 18.1TB of usable storage.  That sounds like a ton of storage.  Unfortunately, I’ve run out of space.  I’ve even started keeping fewer backups of some of my VMs to conserve space.  At one point, I had less than 1TB of space remaining.

Obviously, getting anywhere near 10% free space isn’t acceptable anyway.  This presents a problem: my current file server is a virtual machine on my original ESXi box, and that box is completely full of drives.  Additionally, I think I’m ready to graduate to a real NAS (network attached storage) system.


Enter FreeNAS.  FreeNAS is an open-source operating system designed for network attached storage servers.  At its core, it is built on FreeBSD, with all of the storage handled by something called ZFS.  ZFS is another open-source product, this time a file system.  So instead of the FAT or NTFS that we see in Windows, ZFS is an enterprise file system focused on ensuring data integrity.

On top of ZFS, FreeNAS has an excellent GUI with a variety of additional features that make it an attractive NAS.  It has built-in file-sharing protocols like SMB/CIFS, NFS, FTP, iSCSI, and others.  FreeNAS has full support for ZFS snapshots (think virtual machine snapshots, but for your file system), replication, and encryption.  It also has plugins!  Media servers, private cloud services, and plenty of other cool things run in something FreeBSD calls jails.  Basically, each plugin is walled off from the rest of your server.

And with that, let’s move on to the hardware.  To determine what hardware I needed, I first took a look at what I need my NAS to do.  First and foremost, I need a place to back everything up, so I need at least one array of big traditional disks for that purpose.  A good rule of thumb for me has always been to upgrade to at least twice the amount of space you have now.  Since I have 20TB of raw storage today, I need at least 40TB of raw storage for this purpose.

Second, I have a series going on Essbase performance.  While all of the local storage benchmarks will be very interesting, many (if not most) companies are using network storage for their virtualized environments.  The absolute level of performance here doesn’t matter as much; I just need enough to do high-performance network storage testing.  So I need some type of SSD-based drive or array for this purpose.

Finally, I would like to have a network-based datastore for my VMware cluster.  This needs to fall somewhere between the first two: it needs speed, but also a lot of space.  This is another area where FreeNAS can help.  FreeNAS with ZFS uses RAM to provide a read cache.  On top of this, you can plug in a second level of read cache and a write cache in the form of SSDs.  This will give you SSD-like performance for many activities against your larger data store.  This is similar to the tiered storage that is available on many enterprise SANs.

This also gives us another way to test Essbase performance.  Specifically, we can test how well the write cache works with an Essbase cube.  Because the write cache only stages synchronous writes, we’ll get to see how well that works with an Essbase database compared to other types of databases that generally work quite well with this setup.
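
Since that write cache (the SLOG) only comes into play for synchronous writes, one way to make sure the Essbase tests actually exercise it is to force a dataset to treat every write as synchronous.  Here’s a minimal sketch; the pool and dataset names are just placeholders, not my actual layout.

    # Hypothetical dataset for the Essbase test volumes
    zfs create tank/essbase

    # Force every write on this dataset through the ZIL/SLOG
    zfs set sync=always tank/essbase

    # Default behavior: only writes the application requests as synchronous hit the SLOG
    zfs set sync=standard tank/essbase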

Back to the rest of our hardware…we definitely need a lot of RAM.  Clearly, FreeNAS and ZFS are going to eat up quite a bit of CPU, especially if I decide to use any of the plugins.  And of course, this is a network attached storage server, so we need some serious network connectivity.  Gigabit just won’t do.  So what did I decide on?  Let’s take a look:

  • Processor(s): (2) Intel Xeon E5-2670 @ 2.6 GHz
  • Motherboard: Supermicro X9DR7-LN4F-JBOD
  • Memory: 256 GB Samsung ECC Registered DDR3 @ 1600 MHz
  • Chassis: Supermicro CSE-846TQ
  • Chassis: Supermicro CSE-847E16-RJBOD1
  • HBA: Supermicro AOC-2308-l8e
  • HBA: LSI 9200-8e
  • Solid State Storage: (2) Intel S3700 200GB SSD
  • Hard Drive Storage: (8) HGST Ultrastar 2TB Hard Drives (SATA)
  • Hard Drive Storage: (20) HGST Ultrastar 3TB Hard Drives (SAS and SATA mix)
  • Hard Drive Storage: (10) HGST Ultrastar 3TB Hard Drives (SAS and SATA mix)
  • Network Adapter: (2) Intel X520-DA2 Dual-Port 10 Gbps Network Adapters

If you happened to read my series on building a home lab, you might recognize some of the parts.  I stuck with the E5-2670s, as they are cheaper now than ever before.  I did have to move away from the ASRock motherboard to a Supermicro board.  This board has a built-in SAS2 controller, six (6) PCIe slots, and sixteen (16) DIMM slots.  I’m going with 256GB of DDR3 RAM, which should support our plugins, our primary caches, and the secondary caches nicely.  I’ve also purchased a pair of Intel X520-DA2 network cards to provide four (4) 10Gb ports.

To the onboard controller I added a pair of matching LSI 2308-based controllers, giving me 24 ports of SAS2.  This fits nicely with my Supermicro 846TQ, which has 24 hot-swap bays and a redundant power supply.  And that power supply is connected to a 1500VA UPS so that we can ensure our data remains intact during a power outage.  FreeNAS again helps us out with built-in UPS integration.

So now that we’ve talked about the server a fair amount, what about the actual storage for the server?  I’ll start by setting up a single-disk pool with the 1.6TB NVMe SSD.  This should provide enough speed to max out a 10Gb connection for many of my Essbase-related tests.

[Screenshot: zpool layout for the NVMe pool]
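
From the command line, creating that pool would look something like the following.  The pool name is just an example, and nvd0 is a typical FreeBSD NVMe device name rather than my actual device; FreeNAS normally handles this through the GUI.

    # Single-disk pool on the NVMe SSD (no redundancy, pure speed)
    zpool create flash nvd0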

I’ll also be setting up an 8-disk striped set of mirrored 2TB drives.  This is equivalent to RAID 10 and should provide the best mix of performance and redundancy.  I’ll have a ninth drive in there as a hot spare should one of the drives fail.  In addition, this is the easiest type of array to actually expand in ZFS.

[Screenshot: zpool layout for the mirrored pool]
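
A rough command-line equivalent is below.  The pool name and the da0 through da10 device names are placeholders (FreeNAS would reference the disks by GPT ID), so treat this as a sketch rather than my exact configuration.

    # Four mirrored pairs striped together: ZFS's take on RAID 10
    zpool create vmstore \
      mirror da0 da1 \
      mirror da2 da3 \
      mirror da4 da5 \
      mirror da6 da7

    # The ninth drive goes in as a hot spare
    zpool add vmstore spare da8

    # Expanding later is just a matter of adding another mirrored pair
    zpool add vmstore mirror da9 da10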

I also have a pair of Intel S3700 200GB SSDs to use as an L2ARC (second-level read cache) and/or ZIL/SLOG (write cache).  We’ll be testing Essbase performance in three different configurations: just the hard drives, the hard drives with the write cache, and the hard drives with the write cache and the second-level read cache.  These configurations will closely resemble many of the SANs that my clients deal with on a daily basis.
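
Attaching the SSDs would look roughly like this, assuming one S3700 is dedicated to the SLOG and the other to the L2ARC (ada0 and ada1 are placeholder device names; a mirrored SLOG would be the more conservative choice for production).

    # One SSD as the SLOG, which absorbs synchronous writes
    zpool add vmstore log ada0

    # The other SSD as L2ARC, the second level of read cache behind RAM
    zpool add vmstore cache ada1

    # Between test runs, both can be detached again
    zpool remove vmstore ada0 ada1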

The final piece of the storage component of the new NAS will serve as the backup device for the network.  I’ll be setting up a 10-disk RAID-Z2 array with 5TB, 6TB, or 8TB drives.  This is basically the ZFS version of RAID 6, which will provide me with 40TB, 48TB, or 64TB of storage.  Here’s an example of what this will look like:

[Screenshot: zpool layout for the RAID-Z2 backup pool]
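
The equivalent command is something like the following, again with a placeholder pool name and device names standing in for whichever ten drives I end up buying.  With RAID-Z2, two drives' worth of space goes to parity and the other eight provide capacity.

    # Ten-disk RAID-Z2 pool for backups (any two drives can fail without data loss)
    zpool create backup raidz2 \
      da11 da12 da13 da14 da15 \
      da16 da17 da18 da19 da20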

Now that we’ve covered storage, we can talk about how everything is going to be connected.  My lab setup has three ESXi hosts and an X1052 switch.  The switch has 48 ports of 1Gb Ethernet, but only four (4) ports of 10Gb Ethernet.  Four ports, four servers!  But I really would like to have 10Gb between all of my servers AND 10Gb for my network-based data stores.  This is why the FreeNAS box gets two X520-DA2 cards: one port connects to the switch so that everything is on the 10Gb network, and the remaining ports let each ESXi host connect directly to the FreeNAS server without going through the switch.

This means that each ESXi host will need two 10Gb ports as well.  Two of the servers will have X520-DA2 network cards; one port will connect to the switch, and the other will connect to the FreeNAS server directly.  The last server will actually have two X520-DA1 network cards.  This allows me to test the difference between passing an X520-DA1 through to a VM using VT-d and using the built-in ESXi network functionality.  This will be similar to the testing of passthrough storage versus data stores.
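
Each of those direct links is its own little point-to-point network, so each one needs its own subnet.  On the FreeNAS side, the addressing might look something like the sketch below; the ix0 through ix3 interface names and the IP ranges are assumptions, and in practice this all gets configured through the FreeNAS network GUI.

    # Port connected to the X1052 switch (the shared 10Gb network)
    ifconfig ix0 inet 10.0.10.5 netmask 255.255.255.0

    # Direct links to each ESXi host, one tiny subnet per link
    ifconfig ix1 inet 10.0.11.1 netmask 255.255.255.252
    ifconfig ix2 inet 10.0.12.1 netmask 255.255.255.252
    ifconfig ix3 inet 10.0.13.1 netmask 255.255.255.252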

The hardware has already started to arrive, and I’ve begun assembling the new server when I have free time.  I’ll try to document the build itself in the next post before we get into the actual software side of things.  Until then…time to stop procrastinating on my final Kscope preparations.

Oh, and here is my final build list of everything I have ordered or will be ordering to complete the system:

  • SuperChassis 846TQ-R900B
  • (2) E5-2670 @ 2.6 GHz
  • Supermicro X9DR7-LN4F-JBOD
  • 256GB Registered ECC DDR3 RAM (16 x 16GB)
  • Noctua i4 Heatsinks
  • (5) Noctua NF-R8 (to bring the noise level down on the chassis)
  • (2) SanDisk Cruzer 16GB CZ33
  • (2) Supermicro AOC-2308-l8e
  • (3) Full-Height LSI Backplates (for the AOC-2308’s and the P3605)
  • (6) Mini-SAS Breakout Cables
  • (10) 5TB Toshiba X300 Drives, 6TB HGST NAS Drives, or 8TB WD Red Drives
  • Intel P3605 1.6TB PCIe SSD
  • (9) 2TB HGST Ultrastar 7K3000 Hard Drives
  • (4) 10Gb Twinax DAC Cables
  • (2) Intel X520-DA2
  • CyberPower 1500VA UPS
