Building a Server...Again

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Background: The current server is a hack job that replaced the one I had for 8 years before its motherboard died in Feb '13. This is part of a complete system overhaul for almost all the computers I have, as well as the infrastructure (new switches, wireless, UPS, rack, etc.). This is for my home, and yes, I know I have issues.

Use:
  1. File Server (General Files)
  2. Media Server (for Several XBMC Boxes)
  3. Proxy/Firewall (Little one is getting online, want to have some control)
  4. Gateway/Router (Internet Control for little ones)
  5. DHCP (Wired systems will be static, Wireless will be DHCP)
  6. SQL (Learning and eventually control XBMC in master database)
  7. Web (why not, got everything else)
  8. VM (Switching all but one system to Linux, this will support a few programs that need Windows)

Price:
Targeting $2200-$2400
Everything is bought and shipped in USA

Software:
Will be running Ubuntu Server 13.10 (unless there is a compelling reason to use something else)
Servers I plan to install (open to recommendations): Squid (Proxy), Apache (Web), Samba (File/Media), DHCP3 (DHCP), UFW (Firewall), and MySQL (SQL). Have not identified VM software yet.
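
For reference, a rough sketch of pulling that stack onto Ubuntu Server (package names are assumptions based on the 13.10 repos; Squid is packaged as squid3 and the DHCP server as isc-dhcp-server there):

# assumed package names for Ubuntu Server 13.10
sudo apt-get update
sudo apt-get install squid3 apache2 samba isc-dhcp-server ufw mysql-server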

Parts:
1 ASRock Z77 Extreme4 LGA 1155 (Using this MB because I have 2 other systems using them and gives me ability to swap if one fails)
1 G.SKILL Ripjaws X Series 8GB (2 x 4GB) (thinking 16GB due to the addition of VM duty)
1 Intel EXPI9301CTBLK 10/100/1000Mbps (This is to connect to the Modem for Gateway/Router Function)
1 SUPERMICRO x8 SATA / SAS 8-Port
1 XIGMATEK CCA-EMFCB-U01 4 in 3 HDD Cage Extra Hard Drive Bay
10 Seagate 4TB (32TB RAID Z2)
1 CORSAIR TX Series CMPSU-650TX
1 NZXT Tempest 210
1 APC BR1000G Back-UPS Pro 1000
1 3ware 1 to 4 Serial ATA Breakout Cable
1 COOLER MASTER 120mm Fan 4 pack

I also have two 128GB Vertex 4 SSDs that will be mirrored for boot and applications; the RAID Z2 is for data and media only.
The processor will be an i3-2120T that I currently have.
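
For illustration, a minimal sketch of how the 10-drive RAID Z2 pool might be created under ZFS on Linux (pool name, ashift choice, and the by-id device names are placeholders, not my actual drives):

# ashift=12 assumed because the 4TB Seagates use 4K sectors; substitute the real /dev/disk/by-id names
sudo zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/DISK01 /dev/disk/by-id/DISK02 /dev/disk/by-id/DISK03 /dev/disk/by-id/DISK04 /dev/disk/by-id/DISK05 \
  /dev/disk/by-id/DISK06 /dev/disk/by-id/DISK07 /dev/disk/by-id/DISK08 /dev/disk/by-id/DISK09 /dev/disk/by-id/DISK10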

I have monitors, keyboards, etc. Once this is running, I expect to manage it remotely, but I have 2 ViewSonic VG2230s that can be connected if needed.

This system is designed for stability; it will not be overclocked.

Part Preferences:
I prefer Intel, APC, Supermicro and Seagate. I know hard drives are a hot topic; I am open to WD but have not had the best of luck with them (though the last WD I had was over 7 years ago). Anything is negotiable if enough data is shown as to why I should change.

Timing:
I will be building this at the end of this month/early December. I have flexibility to adjust the timing slightly as I find deals on parts. However, the current server is completely full and lacks many of the functions the new one will have (it is only running Samba, and it is full).

Other Thoughts:
I know this is overkill, but part of the reason for doing this is to learn. I hope to have this server last me quite some time. A lot of the parts are common across my other two systems so that I can keep critical functions running in the event of a failure (MB, RAM, power supply; I also have another i3 processor).

The RAID is meant to give me a single entity for storage. One of the other systems will be a backup system that will back up all critical (i.e. non-media) data to its RAID Z1 array. Data will also be replicated on my laptop.
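
As a sketch of how that replication could work with ZFS send/receive (hostnames and dataset names below are made up for illustration):

# snapshot the critical dataset, then stream it to the backup box's RAID Z1 pool
sudo zfs snapshot tank/documents@nightly
sudo zfs send tank/documents@nightly | ssh backup-box sudo zfs receive backup/documents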

I am a bit concerned about how many media streams I can run; I would like to support at least 2, up to 5. I use ISO images for DVDs (don't try to tell me I should encode, I have my reasons, and this is not the focus of this discussion), so the bit rates are high.

All of the XBMC boxes are Intel NUCs wired to a gigabit switch. I think the hard drives may be my limiting factor, though; thoughts on that would be appreciated.
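
Rough numbers, assuming a DVD ISO averages around 8 Mbit/s (an assumption, not a measurement): 5 streams x 8 Mbit/s = 40 Mbit/s, or roughly 5 MB/s total. That is well under a single gigabit link (~110 MB/s usable) and a small fraction of what a 10-drive RAID Z2 array should sustain sequentially, so the drives should only become the limit if seeks from many simultaneous streams start to pile up.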

Thanks for any help with this, I have been planning for about 6 months, but it is time for some peer review to see if I am on the right track. This is the most extreme build I have done in the 20 years I have been building systems, I want to make sure I got it planned right!

Update:
After all the discussion on here, this is the current selected solution. Feel free to continue to provide feedback, I would not be surprised if it changes again!

After much thought, research and consideration, I think I have a final solution:

ASRock Q87WS-DL ATX Server Motherboard LGA 1150 Intel Q87
Intel Core i5-4570S Haswell 2.9GHz LGA 1150 65W
G.SKILL Ripjaws X Series 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600
LSI Internal SATA/SAS 9211-4i 6Gb/s PCI-Express 2.0
10 Seagate Desktop HDD.15 ST4000DM000 4TB 64MB Cache (32TB RAIDZ2)
CORSAIR TX Series CMPSU-650TX 650W
Cooler Master HAF 912 Mid Tower Computer Case
Cooler Master 4-in-3 Device Module Cage
3ware .5M Internal (SFF-8087) Serial ATA Breakout Cable
Nippon Labs SATA Adapter Molex 4-Pin PC power cable to 2 x SATA

Thanks!
 
Last edited:

Virge_

Senior member
Aug 6, 2013
621
0
0
What requirements led you to choose the AOC-SAS2LP-MV8? It's a JBOD card capped at SATA-II speeds.. and you're using SATA-III drives. Seems wise to invest in an appropriate hardware card, otherwise you're better off with the integrated RAID controller.

Edit: I'd also use a H80i. I swapped all of my HTPC's to closed-loop systems and have been extremely happy with the decrease in noise/heat. Negligible cost increase.
 
Last edited:

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
The AOC was chosen for stability... but a SATA III card would be better. I am open to recommendations.

The H80i probably won't get me much... this is the server, and it sits in a rack in the basement... noise is not an issue. The HTPC systems are NUCs... they run pretty quiet.

Thanks!
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Also note that I am planning on using ZFS... so I need JBOD, not RAID support. This will be used along with the onboard controller's SATA channels to cover the 12 hard drives.
 

Virge_

Senior member
Aug 6, 2013
621
0
0
The RAID controller is extremely important for this type of server. Stability, reliability, and performance should be key here. The real question is how much are you willing to spend?

Your call on cooling. BTUs have to be compensated for. I'd rather spend $60 more and save in the long term on cooling costs. Although, my priorities are a bit different when I'm running a dozen boxes in my home server room.
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
I agree... if I were using RAID. However, with ZFS I am not sure an enterprise-class RAID card would buy me much. I had that before and do not see it as a good fit... maybe if I had twice the number of systems and funds it would be more realistic.

Thanks!
 

Paperlantern

Platinum Member
Apr 26, 2003
2,239
6
81
Is there a specific reason you would install a bare-metal OS on a server this day and age? It would seem to me that grabbing a copy of ESXi and installing that on the bones of the machine would be the best idea. You could then install your Ubuntu server in a VM, with virtually no hardware problems, and still do all the file serving and other bits you would like to do with that server. Plus, if you ever wanted to mess with a different flavor of Linux for another server application, or have a need for anything Windows related (I dunno, maybe you suddenly get the urge to mess with Exchange or something, who knows), you can install that without bothering your current server or needing additional hardware.

On top of that, you can install ESXi on a thumb drive, eliminating the need to use your SSDs for a boot environment; that datastore can then be used to hold your highest-demand VM.

I'm not saying this is the definitive answer, but I use ESXi in my home server environment and I absolutely love it. With the advent of 5.5 as well, you can manage the host from ANY machine with a browser.



Just food for thought.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
What requirements led you to choose the AOC-SAS2LP-MV8? It's a JBOD card capped at SATA-II speeds.. and you're using SATA-III drives. Seems wise to invest in an appropriate hardware card, otherwise you're better off with the integrated RAID controller.

Edit: I'd also use a H80i. I swapped all of my HTPC's to closed-loop systems and have been extremely happy with the decrease in noise/heat. Negligible cost increase.

Given that none of the HDDs out today can touch SATA 1.5Gb/s speeds, let alone SATA 6Gb/s speeds, I'd say that a SATA 3Gb/s card is perfectly reasonable. ZFS only cares that it can see the bare drives, so it doesn't need any kind of fancy RAID.

OP, I think the thing to think about here is software. I would not run a normal Ubuntu release on a server because they go out of support after 12 months and constantly upgrading is a pain. The LTS release is totally fine though. I would also virtualize everything except for the file serving bits using KVM.

As for storage, you'll see huge performance increases if you create two 8GB partitions on your SSDs for the ZIL (e.g. zpool add zpool1 log mirror /dev/sda4 /dev/sdb4). Writes will be logged to the SSDs and then flushed from memory to the HDDs in large chunks. This allows you to get SSD performance for bursty writes even though you're going to an HDD array.
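
Roughly, and assuming the SSDs show up as sda/sdb with free space at the end (device names and partition numbers are placeholders, adjust to your layout):

# carve an ~8GB partition on each SSD, then add the pair as a mirrored log device
sudo parted /dev/sda mkpart zil 120GB 128GB
sudo parted /dev/sdb mkpart zil 120GB 128GB
sudo zpool add zpool1 log mirror /dev/sda4 /dev/sdb4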
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Given that none of the HDDs out today can touch SATA 1.5Gb/s speeds
TBF, they can reach those speeds, though just barely, and certainly not on average. Run HD Tach on a current WD Blue. SATA 3Gbps, however, is quite fine.

Personally, just to save possible headaches of setup and future config changes, I would make a separate firewall box (a wireless router could have performance problems with all the tracking and filtering). Something always seems to cause a problem, eventually, when they're combined, and it's more annoying to work out than if they were separate, whether as a VM or using an OS that sets them both up.

Given that 14.04 LTS is around the corner, setting 13.10 up now and then doing a rip-and-replace upgrade in several months may be worth it. 12.04 can be problematic with 7-series chipsets and 3rd-gen Cores, due to the old kernel. There's no reason you couldn't re-use everything in your storage pool with the updated OS, just taking the server down for a bit to do a re-install, since that's ~6 months away. Once the new one is installed, you'll have several years of updates for it.
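
The pool move itself is just an export before the wipe and an import afterwards (pool name assumed):

sudo zpool export tank    # before reinstalling the OS
# install 14.04 LTS and the ZFS packages, then:
sudo zpool import tank    # pool, datasets, and data come back intact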
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
TBF, they can reach those speeds, though just barely, and certainly not on average. Run HD Tach on a current WD Blue. SATA 3Gbps, however, is quite fine.

Personally, just to save possible headaches of setup and future config changes, I would make a separate firewall box (a wireless router could have performance problems with all the tracking and filtering). Something always seems to cause a problem, eventually, when they're combined, and it's more annoying to work out than if they were separate, whether as a VM or using an OS that sets them both up.

Given that 14.04 LTS is around the corner, setting 13.10 up now and then doing a rip-and-replace upgrade in several months may be worth it. 12.04 can be problematic with 7-series chipsets and 3rd-gen Cores, due to the old kernel. There's no reason you couldn't re-use everything in your storage pool with the updated OS, just taking the server down for a bit to do a re-install, since that's ~6 months away. Once the new one is installed, you'll have several years of updates for it.

I was hoping to use 14.04 LTS, but you hit my issue: it is still 6 months out, hence the reason for 13.10 Server. I think this may be my best option.

I must admit that I am new to the VM world and still learning what is the right fit. I had never heard of ESXi before today, hence why I never considered it.

I am not opposed to a separate firewall box, I just want to make sure it can do the functions I am looking for: security filtering and protection, and logging of user activity (I don't believe in censoring the internet for my son, but I do want to know where he has been).
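
From what I have read, Squid logs this out of the box; as a sketch, a squid.conf fragment along these lines (the ACL name and subnet are made up for illustration) should cover the logging side:

# hypothetical ACL; adjust the subnet to match the LAN
acl kids_net src 192.168.1.0/24
http_access allow kids_net
access_log /var/log/squid3/access.log squid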

As you can tell I am also new to most of these servers; I have done Samba and played with DHCP3 and Apache, and that is about it. But I am up for the challenge. Actually, by the time I figure it all out, 14.04 LTS will be released...

Thanks!
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Given that none of the HDDs out today can touch SATA 1.5Gb/s speeds, let alone SATA 6Gb/s speeds, I'd say that a SATA 3Gb/s card is perfectly reasonable. ZFS only cares that it can see the bare drives, so it doesn't need any kind of fancy RAID.

OP, I think the thing to think about here is software. I would not run a normal Ubuntu release on a server because they go out of support after 12 months and constantly upgrading is a pain. The LTS release is totally fine though. I would also virtualize everything except for the file serving bits using KVM.

As for storage, you'll see huge performance increases if you create two 8GB partitions on your SSDs for the ZIL (e.g. zpool add zpool1 log mirror /dev/sda4 /dev/sdb4). Writes will be logged to the SSDs and then flushed from memory to the HDDs in large chunks. This allows you to get SSD performance for bursty writes even though you're going to an HDD array.

Good idea on the partitions, I will definitely add that to my notes.

I agree on LTS; see my previous post on why I'm using 13.10 Server. I may use that to learn the setups, then redo it when 14.04 LTS is released.

Still learning about virtualization, need to research that more.

Thanks!
 

pitz

Senior member
Feb 11, 2010
461
0
0
I highly recommend that you get something with IPMI. Nothing like being away from home, or even having the machine sitting in a back corner, and having to walk up to it and reset it physically if something goes wrong. Having the console over Ethernet is incredibly valuable.

I know that this doesn't fit with your "spare Z77 board around" philosophy, but the Linux kernel is generally flexible enough that, in a pinch, you could do the swap anyways.

There's an Asus board, costs about $300 these days, takes a Haswell LGA1150 CPU (ie: an E3-1275 v3), has 14 x SATA/SAS (6 derived from the Intel chipset, 8 derived from a LSI) and lots of Intel Ethernet that looks like an incredible deal these days. With IPMI to boot.
 

pitz

Senior member
Feb 11, 2010
461
0
0
Given that none of the HDDs out today can touch SATA 1.5Gb/s speeds,

Ummm, most modern 2TB drives can push out 150-170 MB/sec. The newest WD Black is good for over 200 MB/sec apparently. Having the full interface speed available helps on RAID re-syncing.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
I highly recommend that you get something with IPMI. Nothing like being away from home, or even having the machine sitting in a back corner, and having to walk up to it and reset it physically if something goes wrong. Having the console over Ethernet is incredibly valuable.

I know that this doesn't fit with your "spare Z77 board around" philosophy, but the Linux kernel is generally flexible enough that, in a pinch, you could do the swap anyways.

There's an Asus board, costs about $300 these days, takes a Haswell LGA1150 CPU (ie: an E3-1275 v3), has 14 x SATA/SAS (6 derived from the Intel chipset, 8 derived from a LSI) and lots of Intel Ethernet that looks like an incredible deal these days. With IPMI to boot.

Have to agree with this, IPMI is critical for anything that doesn't get used as a workstation.

If you want to build a server do it properly. Server class motherboard, ECC RAM, and for the love of god don't put an OS directly on the hardware. ESXi is free, use it.

Viper GTS
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Good things to consider, I have some pondering to do.

Does ESXi cause any issues with ZFS? Server class is great, but the costs get out of hand quickly... I like the features of the ASUS (though I have had bad experiences with them in the past), but I can have a spare Z77 board sitting around and still be lower cost (especially when you figure in ECC memory). Not saying it is a bad idea, just a counterpoint. I need to run my options.

I do like the idea of IPMI, I need to look into that. Still trying to fully comprehend what ESXi buys me as well; I just need to learn more.

Thanks!
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
ESXi buys you the ability to manage each of the functions individually. You can upgrade an OS, replace it entirely, play with something new and delete it when you're done, all without touching your other infrastructure that is running on the same machine. It also abstracts your services from the hardware they run on. Want a faster machine next year? No problem - Just upgrade the hardware underneath and your VMs never know anything changed. Want to back the entire machine and all its config up? No problem - It's just files. The hardware config, the data, and everything needed to run that machine anywhere in the world can be thrown onto any storage medium. Going to do an upgrade and you're not sure if you're going to screw something up? Snap the VM, do your upgrade. Roll back if you hose it up, delete the snap if it works as you want.

And no, it doesn't interfere with ZFS - You can do a PCI passthrough of a device into a VM. You need hardware that supports this though, look for VT-d on the Intel datasheet.

I am running an ESXi server with VMs for my ZFS storage (with three VT-d mapped storage controllers), Plex server, OpenVPN access server, Ubiquiti Unifi controller, and all the random experiments that I play with.

Viper GTS
 

Twinpeaksr

Senior member
Aug 9, 2000
386
0
76
Thanks for the quick primer; it sounds similar to a hardware abstraction layer (pardon me if I seem ignorant, I am an embedded hardware guy trying to learn the software stuff).

Right now I think I will change the Supermicro Card to an LSI 9201-16i. I think I will stick with the same MB, but I am still working through different MB/Processor combinations.

I will read up on ESXi, sounds like it may be a good idea.

I think the hard drives will be my bottleneck, but I am still learning about the effects of how ZFS is implemented, and whether I need to add another Ethernet port.

Thanks!
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
If you do this:

And no, it doesn't interfere with ZFS - You can do a PCI passthrough of a device into a VM. You need hardware that supports this though, look for VT-d on the Intel datasheet.

I am running an ESXi server with VMs for my ZFS storage (with three VT-d mapped storage controllers), Plex server, OpenVPN access server, Ubiquiti Unifi controller, and all the random experiments that I play with.

Then you don't get this:

It also abstracts your services from the hardware they run on. Want a faster machine next year? No problem - Just upgrade the hardware underneath and your VMs never know anything changed. Want to back the entire machine and all its config up? No problem - It's just files. The hardware config, the data, and everything needed to run that machine anywhere in the world can be thrown onto any storage medium. Going to do an upgrade and you're not sure if you're going to screw something up? Snap the VM, do your upgrade. Roll back if you hose it up, delete the snap if it works as you want.

Passing through raw PCI devices fundamentally breaks down the virtualization abstraction. At that point, it's hardly better than a physical machine: no migration, no fault tolerance, etc. Raw-mapping PCI devices that happen to be storage devices is even worse; at that point you can't even make a clean snapshot for backup/restore because the hypervisor fundamentally can't track what's going on.

I think that ZFS should be sitting down at your lowest layer. If you're using VMware, that means having a ZFS storage box (or appliance) serving ESXi hosts via iSCSI or NFS. However, given that the OP has a single box to work with, I think that using KVM as the virtualization layer makes more sense. Give the VMs ZVOLs as disks or even just QCOWs sitting on a normal ZFS fileset.
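
As a sketch of that approach (pool, dataset, and VM names are placeholders): create a ZVOL and hand it to a KVM guest as its disk.

# create a 40G ZVOL and install a guest onto it via libvirt
sudo zfs create -V 40G tank/vm-ubuntu-web
sudo virt-install --name ubuntu-web --ram 2048 --vcpus 2 \
  --disk path=/dev/zvol/tank/vm-ubuntu-web,bus=virtio \
  --cdrom /path/to/ubuntu-server.iso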
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
If you do this:



Then you don't get this:



Passing through raw PCI devices fundamentally breaks down the virtualization abstraction. At that point, it's hardly better than a physical machine: no migration, no fault tolerance, etc. Raw-mapping PCI devices that happen to be storage devices is even worse; at that point you can't even make a clean snapshot for backup/restore because the hypervisor fundamentally can't track what's going on.

I think that ZFS should be sitting down at your lowest layer. If you're using VMware, that means having a ZFS storage box (or appliance) serving ESXi hosts via iSCSI or NFS. However, given that the OP has a single box to work with, I think that using KVM as the virtualization layer makes more sense. Give the VMs ZVOLs as disks or even just QCOWs sitting on a normal ZFS fileset.

I would argue that it is still beneficial, as you can move the VM to new hardware easily. With raw disks on pci pass through you can lose both the physical system and the storage controllers themselves and still be OK as long as your disks are intact. Yes you can't vMotion it but what home user has licensing that allows vMotion? On this budget that simply isn't a concern.

I do it at home and it works great for my needs, of course I wouldn't do it in production and neither is he.

Viper GTS
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
I would argue that it is still beneficial, as you can move the VM to new hardware easily. With raw disks on pci pass through you can lose both the physical system and the storage controllers themselves and still be OK as long as your disks are intact. Yes you can't vMotion it but what home user has licensing that allows vMotion? On this budget that simply isn't a concern.

I do it at home and it works great for my needs, of course I wouldn't do it in production and neither is he.

Viper GTS

I'm pointing out that the things that you highlighted as advantages for VMs (easy backup, easy hardware upgrade, easy snapshot) are severely diminished once you start passing through raw PCI devices. Sure, you can do them, but not as easily. In fact, I would argue that it is exactly as challenging as doing the same to a normal Linux machine.

So rather than hack around trying to fit a square peg (single-box ESXi) into a round hole (ZFS-backed storage), why not just put a round peg (KVM) into a round hole (ZFS)?
 