I have a few VMware servers; the challenge is to build a (relatively) inexpensive, highly available iSCSI SAN to store the virtual machines. I like the error-correction and de-duplication features of ZFS. I considered just building a single ZFS box, which would give me raidz2 for the hard drives, but that would make the server a single point of failure. Here is what I was thinking; please let me know your thoughts.
Since I have VMware, I thought I could create two clustered ZFS virtual servers on different hosts (stored on the hosts' local hard drives). The physical servers that actually house the SAN's hard drives would then present each individual drive (since ZFS likes to be as close to the hardware as possible) as an iSCSI target to the two virtual front-end ZFS servers. I could use those individual disks to form a sort of raidz2 over iSCSI, so each LUN would have its disks spread across multiple physical servers. The raidz2 LUNs would then be presented over iSCSI to the VMware hosts. The storage chain would look like this:
VMware Hosts ---iSCSI--- virtual ZFS servers ---iSCSI--- multiple physical storage servers
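To make that concrete, here is roughly how I picture the plumbing. I'm assuming OpenIndiana with COMSTAR on both the physical storage nodes and the virtual front ends, and every device name, IP address, and size below is just a placeholder:

    # On each physical storage node: present every raw disk as its own iSCSI LU
    sbdadm create-lu /dev/rdsk/c2t0d0
    sbdadm create-lu /dev/rdsk/c2t1d0
    stmfadm add-view <GUID-reported-by-sbdadm>    # one view per LU
    itadm create-target                           # auto-generates an IQN

    # On a virtual ZFS front end: discover the disks on every back-end node...
    iscsiadm add discovery-address 10.0.0.11:3260
    iscsiadm add discovery-address 10.0.0.12:3260
    iscsiadm add discovery-address 10.0.0.13:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi                             # remote disks show up as local c*t*d* devices

    # ...build a raidz2 vdev whose members are spread two-per-node across three nodes...
    zpool create tank raidz2 c3t<node1-diskA>d0 c3t<node1-diskB>d0 \
        c3t<node2-diskA>d0 c3t<node2-diskB>d0 \
        c3t<node3-diskA>d0 c3t<node3-diskB>d0

    # ...then carve a zvol and hand it back to the VMware hosts over iSCSI
    zfs create -V 500G tank/vmfs01
    sbdadm create-lu /dev/zvol/rdsk/tank/vmfs01
    stmfadm add-view <GUID-reported-by-sbdadm>
    itadm create-target

With two disks per physical node in a six-disk raidz2, losing any one storage node should only take out two members of the vdev, which raidz2 can tolerate.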
This way the virtual front-end servers would be redundant and, if laid out properly, each LUN presented to VMware would still be available if any one node went down. Of course, this raises several questions:
Is this even possible? Am I crazy or just plain stupid?... wait, that's beside the point.
There would likely be a performance hit, but how drastic would it be? Could it be mitigated by RAM and SSD caching? If so, where should the caching be focused: the front end, the back end, or both? (I've sketched what I'm picturing after these questions.)
What kind of hardware should I be looking at? OpenSolaris (OpenIndiana) seems to have a shorter list of supported hardware. Could anyone recommend chipsets, as well as solid, inexpensive SATA controller cards (no RAID) that have at least 4 ports each and would have drivers for this scenario?
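On the caching question, here is what I assume it would look like on the ZFS front ends; the device names are placeholders, and whether local SSDs can sensibly be passed through to the front-end VMs is part of what I'm unsure about:

    # On a ZFS front end: add an SSD read cache (L2ARC) and a mirrored sync-write log (SLOG)
    zpool add tank cache c4t1d0
    zpool add tank log mirror c4t2d0 c4t3d0

My guess is the front end is where caching matters most, since that is where ZFS actually services the reads and sync writes coming from VMware, but that is exactly the part I'd like confirmed.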
I have a 2U case picked out that can hold 12 hot-swap drives, to give you an idea of the size of each physical node.
All constructive input is greatly appreciated. Thanks!