Hey guys, I'm in the process of researching a project I'd like to complete in the next few weeks, and I'm having a bit of a hard time finding all the info I need.
In short, I'd like to build an ESXi server for testing SQL clusters, Server 2008 clusters, AD, etc. Basically a learning lab. I expect to run about 8 VMs at the same time at any given point. I'd also like the option to sell this setup to a client down the road if I find myself not using it enough, so it needs to be semi-enterprise class, and all of it will be rackmount. Please understand that I'm not looking to build a $10k server, but something middle of the road.
My two basic options are: a single box with maxed-out CPU/memory/HDDs (RAID, etc.), or a box full of HDDs plus a decent ITX mobo to serve as an iSCSI SAN (if that would even work).
I'm leaning toward the SAN + ESXi server option, though, because down the road it could grow into an HA ESX configuration for a client.
For the ESXi server, I'll get some kind of quad-core CPU, 16GB RAM, etc.
For the SAN side of things, I'm thinking a 3U Supermicro SATA/SAS enclosure with a total of 15 drive bays. This SAN will also replace my current storage box (a 4TB 2U box), so I'm thinking 6x 1TB or 2TB drives in RAID 5 or 6. I can handle all of those details.
The details I'm not so familiar with are how to configure the VMFS LUN/array. Fibre Channel is out of the question — it's just way too expensive — and SAS is up there, but 4x 147GB would be doable, or possibly 4x SSD. Either of these would be in RAID 10 (?) to get the most performance out of them. My questions are all about performance: for these 4 or so drives, which RAID type will give me the best performance?
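On the "which RAID level performs best" question, the usual back-of-the-envelope estimate uses the RAID write penalty: each random write costs 2 disk I/Os on RAID 10, 4 on RAID 5, and 6 on RAID 6. A rough sketch — the per-drive IOPS figures and the 70/30 read/write mix are ballpark assumptions, not measurements:

```python
# Standard RAID write penalties: extra disk I/Os per random write.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def array_iops(drives: int, per_drive_iops: float, level: str,
               read_pct: float = 0.7) -> float:
    """Rough blended random IOPS for a given read/write mix.

    Reads hit the full spindle count; writes are divided by the
    level's write penalty. Ignores caching, so treat as a floor.
    """
    raw = drives * per_drive_iops
    reads = raw * read_pct
    writes = raw * (1 - read_pct) / WRITE_PENALTY[level]
    return reads + writes

# Ballpark per-drive figures: ~180 IOPS for 15K SAS, ~80 for 7.2K SATA.
print(array_iops(4, 180, "raid10"))  # 4x 15K SAS in RAID 10
print(array_iops(4, 180, "raid5"))   # same drives in RAID 5
```

Running the numbers this way shows why RAID 10 is the usual pick for a VMFS LUN hosting write-heavy guests like SQL: at a write-heavy mix, RAID 5/6 lose a large fraction of the spindle count to parity updates, while RAID 10 only halves it.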
Can anyone suggest the best way to build this myself? I understand ESX(i) has very strict hardware requirements, but I plan on following the HCL as well as others' experience.