Help pick parts for 2 VM Server builds

Dug

Diamond Member
Jun 6, 2000
3,469
6
81
Need to test 2x 2016 servers in lab setting

Requirements
Low power, low noise. Stability is not a priority, as configurations and VMs will be changing all the time.

CPU: low-power Xeon with iGPU. Speed is not important.

Motherboard: mATX. I would prefer mITX, but I need more than one PCIe slot. An exception would be a board with an onboard SAS controller OR 4 NICs.
At least 2 of the onboard NICs need to be Intel and match in model #.

Case: The smaller the better, but not at the expense of noise. Room for 4 2.5" SSDs. I don't need dedicated mounting, as they will be swapped out quite a bit.

Memory: 16GB. No brand preference, and I'm not sure if ECC is a requirement for anything.

Storage: 4 SSDs. I have 4 already, but need 4 more. Small capacity is fine. I'm partial to Samsung, but haven't been following SSDs for a while.

Power Supply: ? If there is a small case with a small built-in PSU, that's OK.

Thanks in advance!
 
Feb 25, 2011
16,822
1,493
126
Supermicro motherboards. Compatible RAM (check either the qualified hardware list from SM or use a compatibility checker like the one Crucial has on their website). ECC or not depends on the board. If these aren't production systems, there's little incentive to spend the extra money.

Most motherboards will have 4 SATA ports for the SSDs - why do you need [del]SATA[/del] SAS?

Any old case.

What are you testing? Seems to me buying a couple old rackmount servers is a better way to go if you're just doing something temporary or for testing. (Also assuming you're testing/validating a software config, not a hardware config.)
 
Last edited:

Dug

Diamond Member
Jun 6, 2000
3,469
6
81
This will be mostly for testing Storage Spaces Direct, Switch Embedded Teaming and virtual RDMA, and SOFS accessed by Hyper-V via SMB.

2016 Storage Spaces Direct can use SATA or NVMe (2012 couldn't). This could become very beneficial for inexpensive fast storage, as it wouldn't require SAS SSDs or two more servers acting as storage controllers connected to JBODs.

I have yet to meet a rackmount server that was quiet. This testing will be at home, hence the requirement for low noise.
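For reference, everything S2D-related on 2016 is driven from PowerShell, so a minimal sketch of standing up the test cluster might look like this (cluster, node, and volume names are hypothetical placeholders for the lab machines; this is only a sketch, not a complete deployment):

```powershell
# Build a 4-node test cluster with no shared storage, then enable
# Storage Spaces Direct so it pools the local SATA SSDs on each node.
# All names below are placeholders.
New-Cluster -Name S2DLab -Node VM1,VM2,VM3,VM4 -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession S2DLab

# Carve a resilient CSV volume out of the pooled SSDs for VM storage.
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName LabVol `
    -FileSystem CSVFS_ReFS -Size 100GB
```

Pulling a drive mid-run and watching `Get-StorageJob` rebuild is exactly the kind of failure/recovery practice described later in the thread.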
 
Feb 25, 2011
16,822
1,493
126
Ah, at home. Yeah, that would leave out rackmount gear.

Should work:

http://pcpartpicker.com/p/kkngkL

Could add a SAS controller or something if you have to. At low load, the little fan in the stock Intel HSF will stay pretty quiet. Same for the power supply. Maybe add a couple low-noise case fans if you can.

With the right adapters, 4x 2.5" HDDs/SSDs will fit in a single 5.25" optical bay, so the case doesn't really matter too much.

Ummm... what's the budget?
 
Last edited:
Feb 25, 2011
16,822
1,493
126
Oh, and pedestal-style servers are also available pretty cheaply as used prebuilts. They're usually pretty quiet (often sold interchangeably as SMB servers or workstations).

You'd still probably need to add a NIC/SAS controller, but if an older CPU and 4-8 GB of RAM are sufficient, it might be a cheaper way of getting from point A to point B.
 

Dug

Diamond Member
Jun 6, 2000
3,469
6
81
Thanks for your help!

Budget isn't the main concern, but this is only for personal testing and no real workload, so no reason to pay for more than I need.

But time is money, so speed is important to me; that's why I'm using SSDs. I'm not sure that compute power will factor in much for what I'm doing.

What about a Supermicro with 4 Intel NICs built in?
http://www.newegg.com/Product/Produ...m_re=MBD-X11SSH-LN4F-O-_-13-182-996-_-Product
Newer chipset, and it can use a Skylake processor for a few bucks more.
 
Last edited:

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
For what you're describing, I don't think I would even bother with Supermicro. If you lay out more detail on what your testing entails, that could help, but you'd likely get by fine with something very simple.

For what it is worth, I am running an AMD X4 (an X3, actually) on an AMD 880 Asus board with an Intel NIC as my home server. No issues whatsoever, and no exotic components in the build. I would think any reasonable i5 build would be sufficient for what you are looking for. And if CPU really is not important, drop down to an i3.

The TS140s frequently go on sale at TigerDirect and Newegg: http://slickdeals.net/f/7951967-len...-psu-dvd-rom-200-after-30-rebate-free-s-amp-h

For a self built box:

PCPartPicker part list: http://pcpartpicker.com/p/NFTkVn
Price breakdown by merchant: http://pcpartpicker.com/p/NFTkVn/by_merchant/

CPU: Intel Core i3-6100 3.7GHz Dual-Core Processor ($129.99 @ Newegg)
Motherboard: ASRock H110M-HDV Micro ATX LGA1151 Motherboard ($51.99 @ Amazon)
Memory: G.Skill NT Series 16GB (2 x 8GB) DDR4-2400 Memory ($69.99 @ Newegg)
Storage: Crucial BX200 240GB 2.5" Solid State Drive ($64.99 @ Amazon)
Storage: Crucial BX200 240GB 2.5" Solid State Drive ($64.99 @ Amazon)
Storage: Crucial BX200 240GB 2.5" Solid State Drive ($64.99 @ Amazon)
Storage: Crucial BX200 240GB 2.5" Solid State Drive ($64.99 @ Amazon)
Case: Fractal Design Core 1000 USB 3.0 MicroATX Mid Tower Case ($29.99 @ Newegg)
Power Supply: Corsair Builder 430W 80+ Bronze Certified ATX Power Supply ($19.99 @ Newegg)
Total: $561.91
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-01-20 17:29 EST-0500

Usually for a virtual host I would want at least 4 cores if you plan to start throwing more VMs on there, which figures to be about $60 more. But these are all reasonable components that I suspect would have no issues with ESXi. (If an AMD 880 works, there is no reason the above won't.)

Also, you may want to consider a SAS controller and a RAID config for the drives, but you may or may not need that. M5015s on eBay can be had cheaply and are easily compatible with ESXi.
 
Feb 25, 2011
16,822
1,493
126
[quote]Thanks for your help!

Budget isn't the main concern, but this is only for personal testing and no real workload, so no reason to pay for more than I need.

But time is money, so speed is important to me; that's why I'm using SSDs. I'm not sure that compute power will factor in much for what I'm doing.

What about a Supermicro with 4 Intel NICs built in?
http://www.newegg.com/Product/Produ...m_re=MBD-X11SSH-LN4F-O-_-13-182-996-_-Product
Newer chipset, and it can use a Skylake processor for a few bucks more.[/quote]

Ooh, I didn't know they'd gotten the X11s out the door. Time to upgrade, I guess.

It's actually cheaper if you don't have to buy a NIC.

mvbighead has some valid points - particularly regarding AMD. I know you said you want a Xeon, but if you don't need a lot of CPU performance, AMD chips pretty much all support the virtualization goodies that are otherwise restricted to higher-end Intel chips. (Except for that G4400, which inexplicably supports VT-x too. Not that I'm complaining.)
 

Dug

Diamond Member
Jun 6, 2000
3,469
6
81
mvbighead-

I would go that route, but I've never looked into compatibility.

In the "old" days you would miss features needed by the server OS when using a desktop CPU and motherboard.

Even Supermicro has a compatibility list.
http://www.supermicro.com/support/resources/OS/C236.cfm

They show several of their boards that aren't certified to work as Hyper-V hosts. Does that mean it won't work? Probably not, but I like to be as close to recommended hardware as possible. Unless I'm all wrong here and I can get away with cheap desktop parts.

There are just certain gotchas people have run into; for instance, having the exact same NIC model for teaming is a requirement. Teaming breaks even if the NICs are the same brand but different model numbers.
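One quick way to confirm the NICs really do match before trying to team them is to compare the exact model strings Windows reports; a small PowerShell check (run on each host):

```powershell
# List adapter names, exact model strings, and link speeds.
# For teaming, the InterfaceDescription (the model string) should
# match exactly across the adapters you intend to pair.
Get-NetAdapter | Sort-Object InterfaceDescription |
    Format-Table Name, InterfaceDescription, LinkSpeed, MacAddress
```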
 

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
[quote](Except for that G4400, which inexplicably supports VT-x too. Not that I'm complaining.)[/quote]
You mean VT-d. Intel seems to have been enabling it nearly universally since Broadwell (which is irrelevant on desktop, since Broadwell never had a full desktop lineup, but on mobile they did). Even the Skylake Celeron G3900 seems to have VT-d.
The reason Intel is including VT-d across the full lineup may be related to the fact that several Windows 10 security features need a working IOMMU. I would say that the Skylake platform should have perfected proper VT-d support, compared to the pain in the butt that it was on most SB/IB-era boards.


[quote]Storage: 4 SSDs. I have 4 already, but need 4 more. Small capacity is fine. I'm partial to Samsung, but haven't been following SSDs for a while.[/quote]
Why do they have to be 4? If you intend to RAID them for performance, you may as well pick an NVMe SSD like the Intel 750 instead. It will simply be much better.
 

Beer4Me

Senior member
Mar 16, 2011
564
20
76
Check my sig. I built my VMware server for less than $700 total. I had to use the ESXi-Customizer tool to incorporate the Intel 9-series SATA/AHCI drivers into the VMware ISO, as VMware removed them.

It freaking flies. I can thick-provision VMs in like 10 seconds. I've had 8 VMs running on here at one time without issue, granted they all had 1 vCore + 4 GB RAM assigned to them.

Edit: Since it's just a lab, I don't care about having redundant RAID-1 or -10 SSDs. It's just a lab for me to practice AD/Exchange stuff, nothing more.
 
Last edited:
Feb 25, 2011
16,822
1,493
126
[quote]You mean VT-d. Intel seems to have been enabling it nearly universally since Broadwell (which is irrelevant on desktop, since Broadwell never had a full desktop lineup, but on mobile they did). Even the Skylake Celeron G3900 seems to have VT-d.
The reason Intel is including VT-d across the full lineup may be related to the fact that several Windows 10 security features need a working IOMMU. I would say that the Skylake platform should have perfected proper VT-d support, compared to the pain in the butt that it was on most SB/IB-era boards.[/quote]

[quote]Why do they have to be 4? If you intend to RAID them for performance, you may as well pick an NVMe SSD like the Intel 750 instead. It will simply be much better.[/quote]

'x' and 'd' are right next to each other on the keyboard. That's my story and I'm sticking to it.
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
[quote]mvbighead-

I would go that route, but I've never looked into compatibility.

In the "old" days you would miss features needed by the server OS when using a desktop CPU and motherboard.

Even Supermicro has a compatibility list.
http://www.supermicro.com/support/resources/OS/C236.cfm

They show several of their boards that aren't certified to work as Hyper-V hosts. Does that mean it won't work? Probably not, but I like to be as close to recommended hardware as possible. Unless I'm all wrong here and I can get away with cheap desktop parts.

There are just certain gotchas people have run into; for instance, having the exact same NIC model for teaming is a requirement. Teaming breaks even if the NICs are the same brand but different model numbers.[/quote]

I believe most of the standard features are available across all CPUs. The main difference with CPUs such as Xeons is that they support multi-socket configurations (i.e., a dual-socket motherboard with 14 cores per socket, etc.). Aside from that, most of the virtualization bits can be enabled on any consumer processor, especially any i-series CPU from Intel. I believe much of that dates back to the Core2 era, maybe even earlier.

There used to be a whitelist site with configs that people have used, but I am having trouble finding it. If you search Google for 'ESXi whitebox', you'll find sites like this:

https://www.vmguru.com/2014/02/building-a-new-esxi-whitebox/

All I know is, I have loaded ESXi temporarily and permanently on a number of different sets of hardware: Core2Duo, AMD, i5, etc. The only areas where you will typically experience compatibility issues are:

* RAID Controllers
* NICs

If you have an Intel NIC you'll be fine 99% of the time. The onboard 'RAID' controllers work fine for non-RAID access 99% of the time, and work for RAID about 0% of the time. If you want RAID, you must go with a server-grade RAID controller such as the M1015, M5014, M5015, etc. I recently upgraded to an M5015 cross-flashed to an LSI 9260-8i. I had to get to a certain level of LSI chip so that I could use my 4 x 3 TB disks (NAS VM). If you don't need larger disks, you can probably get by with something like the M1015... but again, this is ONLY if you need RAID.

That aside, this last go round I decided to load ESXi onto a USB drive: http://www.amazon.com/dp/B00LLEN5FQ/ref=twister_B00MQL3OHW?_encoding=UTF8&psc=1

Why am I telling you this? Because you can load ESXi onto dang near anything disk wise. Some datacenters boot ESXi from SD flash cards. The only trouble that could arise is if it does not see the AHCI controller on the board, but I've yet to see a problem there... at least not since the 5.x days.

Honestly, I wouldn't give much thought to server grade hardware for what you are suggesting. You want server grade if you're looking for redundancy, ECC, and those sorts of things. But a lab environment for playing around with features/etc... not much point.

EDIT:

I should add that most of what VMWare does involves the CPU, memory, network, and disk controllers. All the other 'features' on the board are more or less moot because a VM's hardware is software that is given access to CPU/memory/disk/network hardware. As long as the virtualization bits are there, you won't have a problem... and those bits are on pretty much any CPU released in the last 5 years or more:

http://ark.intel.com/Products/VirtualizationTechnology
 
Last edited:

Dug

Diamond Member
Jun 6, 2000
3,469
6
81
[quote]Why do they have to be 4? If you intend to RAID them for performance, you may as well pick an NVMe SSD like the Intel 750 instead. It will simply be much better.[/quote]

Storage Spaces Direct (S2D) requires 4 VMs. Each VM must have a minimum of 2 physical drives.

I want to be able to simulate physical failure of hard drives too, so even if it ran with one, I would still want two for each VM so I can practice recovery.

4 NICs are needed for teaming and the virtual switch: Mgmt, VM, Live Migration, and Cluster/CSV.
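A rough sketch of how that NIC layout maps onto a Switch Embedded Teaming vSwitch on 2016 (adapter and vNIC names are hypothetical; this is only a sketch of the general shape, not a tested config):

```powershell
# Team the matched Intel NICs into a single SET-enabled vSwitch.
# Physical adapter names below are placeholders.
New-VMSwitch -Name SETswitch -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true

# Add host vNICs for the traffic classes listed above
# (VM traffic rides the switch directly).
foreach ($vnic in "Mgmt","LiveMigration","Cluster") {
    Add-VMNetworkAdapter -SwitchName SETswitch -Name $vnic -ManagementOS
}

# Virtual RDMA on the host vNICs (needs RDMA-capable physical NICs).
Enable-NetAdapterRDMA -Name "vEthernet (LiveMigration)","vEthernet (Cluster)"
```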


mvbighead- thanks for the info, but I will only be using Hyper-V
 
Last edited:

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
[quote]mvbighead- thanks for the info, but I will only be using Hyper-V[/quote]

I've also loaded that a number of times on different types of hardware. Really, just about any hypervisor is going to work with any recent set of hardware. Hyper-V should have no problem on anything you can load Windows on. NIC support, I imagine, is a bit more generous there, though I do suspect software RAID is still mostly going to be a miss. Everything else should be fine.
 
Last edited: