Planning a ZFS "Mini-SAN", which server do I buy?

Pandasaurus

Member
Aug 19, 2012
196
2
76
[Cliffsnotes]I'm new to the whole SAN/ZFS thing, and am trying to figure out what I'm doing. I can buy a dirt cheap Dell, a really cheap Sun/Oracle with a SPARC, or a DL380 G6. Which one, and why?[/Cliffsnotes]

For more explanation (possibly too much), continue reading.

So, I'm in the early planning stages of building a "Mini-SAN" (for multiple purposes, one of which is learning what I can about as many aspects of network/server admin/design as I can), and have a question or two. For context, the network will consist of 3-5 physical hosts (not counting the ZFS box), some (not all) of which will be running a couple of VMs. All hosts will be booting from the SAN. The SAN will (eventually) consist of a total of 20 spindles between two disk shelves. Single-disk capacity will be low in the interest of cost, and because I don't particularly need a huge amount of storage. (I'm looking at 147GB 10k or 15k SAS drives; I *might* go up to 300GB depending on cost when I purchase, just for the sake of room for future expansion.) Connectivity will be 10Gb (possibly 20Gb, or dual-port 10Gb) InfiniBand along with standard copper 1GbE.

That all said... In my reading, I'm seeing conflicting reports about the hardware needs of ZFS. Some say CPU doesn't matter much, as long as you have plenty of RAM and SSDs for L2ARC and ZIL logs. Others say they're seeing 90% usage on 6-core Xeons. Naturally, those reports are met with others saying "You're doing it wrong," but to somebody who has no experience in the area of network storage (that's me), it gets confusing really fast. So, I'm hoping somebody here can help me out. I've more or less narrowed down my search to a handful of options as far as the actual server that will be hosting the ZFS, and I'm hoping I can get some input as to which would be the best choice and why.

First off, I have the opportunity to get a Dell R310 for dirt cheap (way under current eBay pricing). However, the R310 is a fairly low-end box. Single socket, max of 32GB RAM. It's also 1U, which I'm trying to avoid for noise reasons. However, rack space is limited, so there is a slight benefit there. Currently, it's equipped with an i3-540 and 2GB of RAM. Pluses are, it's DDR3, and can be upgraded to at least a 4-core Xeon (I think there's a 6-core option that would work as well, not positive on that) if necessary for performance. Also, by way of being low-end, it would also be low-power. Bonus! It's also a Dell, so I would get the benefit of having hardware from a different vendor to get experience with (other servers will either be HP or Supermicro).

Second option is a Sun/Oracle T5220. Also a single socket, but it's an 8-core SPARC (64 threads total, if that matters for ZFS. I've gotten conflicting reports on that as well). Downside is it's DDR2. Pluses are, supports up to 64GB RAM, it's 2U, and it's not much more expensive than the Dell to get one with 32GB already installed. It would also have the benefit (or downside, depending on how you look at it) of being SPARC, and thus a somewhat unique vendor/hardware architecture. (Possible selling point for a future job)

Last, but not least... HP DL380 G6. Pretty standard on this one. Dual socket, with the possibility of low-power 4 or 6 core Xeons. Supports more (DDR3) RAM (theoretical maximum of 192GB, but I'd probably never get there). Downside, it's about 3x the cost of either of the other options.

I was initially planning on going with the DL380, due to it being newer hardware than the Sun/Oracle, and a "preferred" vendor. Then I checked RAM prices on eBay and realized that currently, DDR2 can be had for cheaper than DDR3 (cost of RAM being a reason for going newer, initially), which got me thinking going with the Sun/Oracle might be "fun" for the unique learning experience. As of yesterday, I got the opportunity to pick up the Dell for even cheaper (though the cost of adding RAM and possibly upgrading to a Xeon would more than make up the difference) than the Sun/Oracle.

So... Now that I've put you to sleep, or made you want to claw your eyes out, or maybe just made you want to smack me... Real question time.

How important is CPU power for a ZFS server? Is ZFS capable of using several threads, or does that not even matter? Is the upgrade from DDR2 to DDR3 worth going with the Dell or HP over the Sun/Oracle? If so, would the single CPU on the Dell be enough for my uses (keeping in mind that the other hosts on the network will have little to no local storage, *everything* will be on the SAN)?

I appreciate any input anyone can give. Sorry if the questions seem a bit stupid or noob-ish... But, I'm a noob, and can't seem to find a source of info that doesn't have somebody else giving conflicting info on the next page. >.>
 

gea

Senior member
Aug 3, 2014
221
12
81
About your questions:

CPU
CPU is not irrelevant, but it is not too important unless you want high-end
performance with compression, dedup, and 10 GbE.

RAM
Solaris/ZFS is stable with only 1-2 GB of RAM, but RAM is the source of read performance, as it is used as a read cache. Use as much as possible, or as much as needed to satisfy your performance needs.
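As a rough illustration of those two points (the pool name "tank" is just a placeholder), this is roughly how you would enable cheap compression and watch the ARC on a Solaris/OmniOS box:

# lz4 compression costs little CPU; dedup is the feature that really eats CPU and RAM
zfs set compression=lz4 tank
zfs get compressratio tank

# the ARC (read cache) grows into free RAM; check its current size and hit/miss counters
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:hits
kstat -p zfs:0:arcstats:misses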

About your hardware
I would use fairly modern server-class hardware and avoid the SPARC.

HP is OK, but you should check the disk controller.
It should/must be an LSI or rebranded LSI in RAID-less HBA/IT mode.
The NIC should/must be an Intel. (For IB, check compatibility.)

If you want it to be a head for disk shelves, you should/must use
an LSI expander with SAS disks.


What I use and prefer for a max 24 disk storage (ESXi datastore)

SuperMicro hardware, like:
- a 24 + 2 bay expanderless case like the
http://www.supermicro.nl/products/chassis/2U/216/SC216BA-R920LP.cfm

- add two HBA controllers like an LSI 9207

- add a server-class mainboard like a SuperMicro X9SRH-7TF
(onboard LSI 9207, Intel 10 GbE, IPMI, max 512 GB ECC RAM, add a Xeon)
http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRH-7TF.cfm

or a cheaper one (X10SL7-F, max 32 GB ECC RAM, add a Xeon)
http://www.supermicro.com/products/motherboard/Xeon/C220/X10SL7-F.cfm

For the disks, I would skip the 10k/15k spindles and use an SSD-only mirror or a RAID-Z2 pool, where you can expect up to 100x better IOPS.

e.g.
- SanDisk Extreme Pro 240-960 GB (quite affordable, with the best continuous write performance under load besides some special enterprise SSDs like the Intel S3700, which you may add as a dedicated ZIL to reduce small writes on the SSD pool)
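A minimal sketch of the pool layouts mentioned above (device names are hypothetical Solaris-style IDs):

# striped mirrors of SSDs
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# or a single RAID-Z2 vdev from six SSDs instead
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# optional: a dedicated ZIL/SLOG device (e.g. an Intel S3700) to absorb small sync writes
zpool add tank log c1t6d0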


Regarding the OS:
I use and prefer OmniOS (a free Solaris fork), for which I have built a Web-UI.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
First off, you won't be running 3-5 hosts unless you are planning on having a generator outside your room.

For your purposes, DDR2 vs DDR3 isn't a concern beyond price. The R310 also uses non-ECC RAM, which will save a little bit of cash. The general rule of thumb is 1GB of RAM per 1TB of storage. You can make do with less, but you can't have too much.

CPU also depends on whether you are going to be doing anything else with the SAN, i.e. plugins for media streaming/transcoding, etc. A lot of home-built SANs end up being multipurpose devices, and that will have an effect on your CPU needs. I can show you the CPU usage on my SAN tonight if you want, to give you an idea of what you are looking at.

Gea: This is a very low-budget home lab build (Panda's my brother), so your options are way, WAY out of his price range. The HP P-series Smart Array controllers (which is what comes standard on the DLs) are rebranded LSIs. I've got a pair of P410s myself in my SAN. I've also told him repeatedly to avoid the SPARCs and he doesn't listen.

To give a better idea of the budget he's looking at: $200/ea for hosts plus some extra RAM, $100 for the storage server, $40/ea for shelves. Basically, he's looking at under $1,000 for the entire lab setup. $500 mobos, etc., need not apply.
 

gea

Senior member
Aug 3, 2014
221
12
81
Replace the professional SuperMicro case with a cheaper case (like a Norco) with a backplane, use the SuperMicro X10SL7-F (8 x SAS/SATA + 6 x SATA) with an i3 CPU, and add 8 GB+ of ECC RAM. Hard to find a cheaper setup with modern professional server-class hardware. Optionally, use a smaller board without the SAS controller.

If you want to go really cheap, use a 4-bay HP MicroServer (N36L-N54L) with 8GB ECC.
ZFS runs fine on the MicroServer.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Hard to find a cheaper setup with modern professional server-class hardware.

Which is precisely why he's not looking for current gen. I'm not sure what's hard to understand about this. That motherboard costs as much as an entire server he's looking at.
 

gea

Senior member
Aug 3, 2014
221
12
81
Which is precisely why he's not looking for current gen. I'm not sure what's hard to understand about this. That motherboard costs as much as an entire server he's looking at.

I had overlooked that you only want to spend $100 for the whole storage server and $1,000 for the whole lab -
which means that a $200-class SuperMicro X10 board with a modern Intel server chipset, ECC support, two Intel NICs, and remote IPMI, or an HP MicroServer in the same price range, is too expensive. You cannot use new hardware.

If you intend to use a Solaris-based SAN OS (as I assume, since you are considering SPARC, which I would not suggest, as it is mainly Oracle Solaris only), you cannot buy just any cheap old hardware and expect a good user experience rather than trouble. You must buy hardware that works well with Solaris. It's not like Windows, where everything runs. You should buy only parts with known-good compatibility.

It may happen that you can buy used hardware that works well. But then you must look at the power consumption, where you pay for energy without getting performance similar to modern hardware. The difference can be 20W vs 200W at idle for the same performance. What I showed is an option with new (non-used) hardware. If you can get similar hardware used and "dirt cheap", that is different.

Look for something that is not older than, say, 3 years, with proven Solaris compatibility (which means mainly an Intel NIC, an LSI HBA in IT mode or SATA, and a server chipset to support ECC). Use the new option as a point of comparison and find a similar used hardware alternative, like the HP options, ideally those with a Xeon + ECC. Check the individual specs.
 

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
I just put together a SuperMicro X10 board for an all-in-one ZFS/ESXi setup, following much of Gea's advice from around the web. I realize your brother is budget-constrained, but pay close attention to what is being recommended. Saving money now by getting older hardware will cost you more in the long run. You'd be better off waiting and getting the right equipment. Doing so will allow you to only have to buy things once.
 

holden j caufield

Diamond Member
Dec 30, 1999
6,324
10
81
If you need to run a few VMs, I see no need to run that many servers. Also, the more you put in, the more the electricity costs and possibly now the cooling costs go up dramatically. For someone like me on residential pricing, once you get to tier 3 it's just too much. Also don't forget the noise pollution: in a datacenter thousands of fans are OK, but in a home it will drive you crazy. I run FreeNAS, VMware ESX Server, and Hyper-V all from my laptops; it takes up no space, is quiet, and pulls almost no electricity.
 
Feb 25, 2011
16,822
1,493
126
If you're looking to build a home lab to learn, the hardware is irrelevant. 32GB ECC DIMMs in a $40k server look and feel (almost) the same as the 4GB DIMMs in your laptop. There is no real difference (except in outward appearance and number of supported HDDs) between a 12-disk SAS shelf and a 1-disk USB/eSATA enclosure. And so on.

What's important is learning how to configure the software - this dictates some of your hardware purchases because you need to have X or Y feature. (In the old days, before everybody had freaking 64-bit Cell Phones, this was more important, but I'm still going to recommend an AMD FX-series chip for your hypervisor system because of the hardware virtualization features and MOAR COARES.)

Here is your FreeNAS "SAN" build. You will also need a thumb drive to install FreeNAS on.

http://pcpartpicker.com/p/ZdwCZL

You will, of course, be using iSCSI and not Fibre Channel.

Here is your ESX/VMware/Hyper-V box.

http://pcpartpicker.com/p/TJZqrH

You will also need a switch that supports VLANs.

Here's what you will know how to do when you are done:

Build your SAN. Configure and install FreeNAS and/or the storage system OS of your choice. Set up your drives using RAIDZ, RAIDZ2, mirrored stripes, or the configuration of your choice. Try a couple of different ones and benchmark the differences. Pull a drive and rebuild the array. Practice this several times. Replace a drive in a running array while an application is running. FreeNAS has a nice GUI, but familiarize yourself with ZFS CLI commands anyway. Wipe it out, install Ubuntu Server, and do everything again using mdadm (Linux software RAID) and Linux logical volumes (LVM). Decide which one you want to stick with and go with it.
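To make that drill concrete, here is a hedged sketch of the CLI side of it - pool, device, and array names are made up, and on FreeNAS you would normally do the create/replace steps through the GUI:

# ZFS pass: build a pool, pull a disk, replace it, then scrub
zpool create tank raidz2 da1 da2 da3 da4 da5 da6
zpool status tank
zpool replace tank da3 da7
zpool scrub tank

# Ubuntu Server pass: the mdadm equivalent
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mdadm --manage /dev/md0 --fail /dev/sdd --remove /dev/sdd
mdadm --manage /dev/md0 --add /dev/sdh
cat /proc/mdstat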

Configure a switch with 2 VLANs. Uplink VLAN 1 to your router, and the motherboard LAN on the HV. That's your management network. Use VLAN 2 for iSCSI traffic only. (SAN box and 2 ports on extra hypervisor NIC.) You don't need a router on your iSCSI LAN yet, just use static IPs.
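If your switch has a CLI, the VLAN setup is only a handful of lines. Cisco IOS-style syntax shown here purely as an example; port numbers are made up:

! VLAN 1 = management (router uplink + HV onboard NIC)
! VLAN 2 = iSCSI only (SAN box + two ports on the HV's extra NIC)
vlan 2
 name iSCSI
!
interface GigabitEthernet0/1
 description management uplink
 switchport mode access
 switchport access vlan 1
!
interface range GigabitEthernet0/10 - 12
 description iSCSI storage ports
 switchport mode access
 switchport access vlan 2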

Build and install your hypervisor. Take note of the hardware virtualization features like VT-d (and AMD's equivalent, AMD-Vi/IOMMU), which AMD enables by default on all of its CPUs. Both drives in the HV system should be in a simple RAID mirror. Unplug one of the drives and see what happens. Rebuild the array. Plug the motherboard LAN into your management network and the two NIC LAN ports into your storage VLAN.

Create a local datastore on the internal drives, and mount an iSCSI LUN. Create a datastore on that. Mount an NFS share and create a datastore on that. Stand up a VM and run IOMeter on all three storage types, migrating the VM between storage as needed. You may be surprised by the results.
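On the ESXi side, the iSCSI and NFS mounts boil down to a few esxcli calls (the adapter name, IP, and share path below are hypothetical, and you can also do all of this from the vSphere client):

# enable the software iSCSI initiator and point it at the SAN box
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.2.10:3260
esxcli storage core adapter rescan --adapter=vmhba64

# mount an NFS export from the same box as another datastore
esxcli storage nfs add --host=192.168.2.10 --share=/mnt/tank/nfs --volume-name=nfs-ds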

Stand up a VM and install pfSense to route between the two networks (management and iSCSI), so you can access SAN administration functions from a laptop connected to your data network.

Stand up more VMs. Try some of the nitpicky features like hardware passthrough. Perhaps with a second video card and a desktop OS installed in the VM.

Etc.
 

Red Squirrel

No Lifer
May 24, 2003
67,907
12,375
126
www.anyf.ca
I built a "SAN" (I treat it more like a file server, but SAN sounds cooler) a year or so ago. I went with a Supermicro board (I forget which one, but it was a fairly basic one, nothing crazy) with 8GB of ECC RAM and a Xeon E3-1230. For the case I went with a Supermicro 24-bay. I looked into the Norcos, but based on reviews and DOAs I decided to skip those and go with Supermicro even though it was more expensive. A redundant PSU was also a big selling point. A file server is not something you want to turn off, ever, so if I need to do any kind of power work I can walk power from one source to the other.



The 4 IBM enclosures are also connected to it via Fibre Channel, but I don't really use them and they're not even on a UPS. Eventually I may even take those out; they look cool more than anything, but the new server at the top is like 10x better than they are in every respect.

Currently I have about 19TB between 3 RAID arrays: two RAID 10s and my old RAID 5 that I moved from my old server. The goal of this server is to host all data but OSes. Any other server or machine on my network has an SSD for an OS drive and that's it. No hardware RAID or anything. Just cheaper and easier to manage. For RAID I use Linux mdadm. I find it's the simplest and it has been rock solid. I am interested in Btrfs too, but it's not considered stable yet.
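For anyone curious, an mdadm RAID 10 like the ones above is only a couple of commands (device names here are hypothetical):

# create a 4-disk RAID 10, put a filesystem on it, and check its health
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
mkfs.ext4 /dev/md0
cat /proc/mdstat
mdadm --detail /dev/md0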

Recently added a new VM host too (the big box above is my old white-box Core 2 Quad server with 8GB of RAM, still running a lot of stuff which I need to migrate):



That server has 32GB of RAM on a Supermicro board, with an SSD for the OS. The CPU is a Xeon E3-1270. I long debated between E3 and E5, but the E5 platform is just so expensive; for the same price I can build two E3-based boxes, so eventually I'll probably do that and build another.

It runs ESXi, so I can't do much in terms of migration (not willing to pay for something for home use). I really wanted to go open source, but none of the solutions fit my needs; they were just too tedious to use and user-unfriendly, or just plain flaky. I may eventually write my own using libvirt as a backend, and at that point I'll want to build more boxes to use as a lab.

Now, when you start having this much equipment you want adequate power outage protection, and you may even want to have your stuff running for more than 2 minutes, as you will start to rely on things more. That's where inverter-chargers come in. Skip the UPS and get one of those and some 100Ah+ marine batteries. 100Ah marine/RV batteries are the best bang for your buck per Ah.


Inverter-charger


Battery rack (the one with fans) with battery bank isolation switches (for maintenance)




Fuses, shunt, and sensor wiring... honestly this is kind of a cable fail and I want to redo it every time I see it, but it works.

I want to eventually go with a -48V telco-style setup with inverters, though. I figure it's worth spending extra money on power, as that is what is protecting your expensive equipment. Eventually I want to get a generator transfer switch and a generator too. 24/7 equipment does not like to be turned off.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
If you need to run a few VMs, I see no need to run that many servers. Also, the more you put in, the more the electricity costs and possibly now the cooling costs go up dramatically. For someone like me on residential pricing, once you get to tier 3 it's just too much. Also don't forget the noise pollution: in a datacenter thousands of fans are OK, but in a home it will drive you crazy. I run FreeNAS, VMware ESX Server, and Hyper-V all from my laptops; it takes up no space, is quiet, and pulls almost no electricity.

The "no need" part has been explained multiple times in person to him, as well as electricity and cooling concerns. I don't disagree and have been pointing him the direction of setups that will be a bit friendlier to running at home.

If you're looking to build a home lab to learn, the hardware is irrelevant. 32GB ECC DIMMs in a $40k server look and feel (almost) the same as the 4GB DIMMs in your laptop. There is no real difference (except in outward appearance and number of supported HDDs) between a 12-disk SAS shelf and a 1-disk USB/eSATA enclosure. And so on.

Yes and no. There's no difference between a 12-disk SAS shelf and a 1-disk USB enclosure, but there is a difference between running DAS vs iSCSI/Fibre and things of that nature. Not to mention array rebuilds.

There are also the little things you learn working with servers that you simply won't get on home systems: remote management (iLO/DRAC/RSA), RAID controller memory/batteries, etc. Finding out little things like there being more than one type of external SAS connector. No, none of that is needed for a home server, but it makes you look a little less dumb the first time you have to open up a server as part of a job. I've applied for datacenter jobs where they have you strip down a server as part of the interview process.

Don't get me wrong, I like your builds, but there are some things you'll miss out on by not going with actual server hardware, even older hardware.

Stand up more VMs. Try some of the nitpicky features like hardware passthrough. Perhaps with a second video card and a desktop OS installed in the VM.

That's the one thing he'll be missing out on with the older hardware. No passthrough on that era of hardware.

Doing so will allow you to only have to buy things once.

Is the older equipment less efficient? Absolutely. But there's a difference between less efficient and not working. My setup is all Xeon 5000-series systems, but performance-wise it slaughters the desktop PC/laptop route.

Regarding the electricity requirements many of you mention, this will not be a 24x7 setup. It's going to use scheduled WOL for when he gets home, then be powered off at the end of the day.
 

cfineman

Junior Member
Nov 20, 2006
20
0
0
I am (was?) close to pulling the trigger on something similar... maybe even this weekend if good deals are to be found. My goal was to build an all-in-one, ESXi-based lab on which I would run software RAID, a media server (or two, to experiment), and maybe one or two other utility VMs. I was thinking of an E3-1275 v3 (or 1275L if I can find one) and one of the SuperMicro boards, and hanging a bunch of disks off it (eventually moving to an LBA-flashed controller card to expand storage).

Until a day or two ago, I was *not* thinking of building out a separate storage node. The rationale was that most of the heavy lifting would be done by the VMs I mentioned above, and it seemed sensible to keep the data local. I understand that I lose some flexibility in terms of replacing/adding compute. Can a SAN/fileserver (as perhaps evidenced by the above statements, I'm not clear on the distinction) provide competitive IOPS?
 
Feb 25, 2011
16,822
1,493
126
Can a SAN/fileserver (as perhaps evidenced by the above statements, I'm not clear on the distinction) provide competitive IOPS?

It can, but it ultimately depends on the design and use. For a home "learning lab" situation, I would expect to be IOPS-limited by the smallish RAID arrays (probably just ~100 IOPS per spindle) before I was limited by the connectivity type. (1GbE will top out around 1k IOPS for iSCSI.)
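(Rough back-of-envelope, just as an illustration: a 12-spindle array at ~100 IOPS per disk is ~1,200 IOPS at the back end, and the ~1k figure for 1GbE iSCSI falls out of latency rather than bandwidth - at roughly 1 ms per synchronous round trip you get about 1/0.001 ≈ 1,000 IOPS per outstanding command, so the two limits land in the same ballpark.)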

SSDs turn that on its head, of course.

But it also comes down to application - a SAN/NAS might not provide as many IOPS to a single client as a DAS RAID10 of SSDs, but it will provide a reasonable number of IOPS to a lot more clients.

And in the "learning lab," it's not the performance that matters, it's the variety of experience.

Incidentally, you might be interested in this: http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf
 

cfineman

Junior Member
Nov 20, 2006
20
0
0
Thanks, I will check out that case study. FWIW, I'm not gonna be building out an array of SSDs... in fact, my plan is to reuse as much of my current hodgepodge of disks as I can.

The application I'm most concerned about performance-wise is media streaming (which can probably be largely offset by sufficiently large memory buffers). The "lab" part of this won't be as reliant on performance... mostly that will be for playing around with various platforms/software stacks. I am more concerned that these will interfere with the performance of the above app (but again... maybe I should just ensure I have sufficient memory?)
 
Feb 25, 2011
16,822
1,493
126
No worries then - a single HDD can stream more media than a family of four has any right to be watching.
 

cfineman

Junior Member
Nov 20, 2006
20
0
0
lol no doubt. I want to be able to transcode to different res devices as well as stream music.

My collection (music, photos, videos, and other random crap... ) has grown beyond what my current storage supports. I've also encountered some bit rot. I was gonna kill a couple of birds with one stone.

One thing I've nearly convinced myself of is that I'm probably overthinking this for my modest needs :-D

Separating storage and compute is probably just not warranted in my case.
 