ZFS High Availability SAN

BaDaBooM

Golden Member
May 3, 2000
1,077
1
0
I have a few VMware servers; the challenge is to create a (relatively) inexpensive, highly available SAN (iSCSI) to store the virtual machines. I like the error correction and de-duplication features of ZFS, and I considered just building a ZFS box, which would give me raidz2 for the hard drives. However, that makes that server a single point of failure. Here is what I was thinking; please let me know your thoughts.

Since I have VMware, I thought I could create two clustered ZFS virtual servers on different hosts (stored on the hosts' local hard drives). Then I could have the physical servers that actually house the SAN's hard drives present each individual drive (since ZFS likes to be as close to the hardware as possible) as an iSCSI target to the two virtual front-end ZFS servers. I could then use these individual disks to form a sort of raidz2 over iSCSI, so each LUN could have its disks spread across multiple physical servers. Those raidz2 LUNs would then be presented over iSCSI to the VMware hosts. The storage chain would look like this:

VMware Hosts ---iSCSI--- virtual ZFS servers ---iSCSI--- multiple physical storage servers
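
To make the layering concrete, here is roughly what I picture on the command line, assuming OpenIndiana with COMSTAR on both the back-end nodes and the front-end VMs (the pool, IP, and device names are made up; this is just a sketch, not a tested config):

# On each physical storage node: export each raw disk as its own iSCSI LUN
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
sbdadm create-lu /dev/rdsk/c1t0d0            # repeat for every data disk
stmfadm add-view <lu-guid>                   # expose the LU to the front-end initiators

# On a virtual front-end ZFS server: discover the nodes and pool the remote disks
iscsiadm add discovery-address 10.0.0.11:3260    # repeat for each storage node
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                            # create device nodes for the remote LUNs
zpool create tank raidz2 c3t...d0 c3t...d0 c3t...d0 c3t...d0 c3t...d0 c3t...d0

# Carve a zvol out of the pool and present it back to the VMware hosts
zfs create -V 500G tank/vmfs1
sbdadm create-lu /dev/zvol/rdsk/tank/vmfs1
stmfadm add-view <lu-guid>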

This way the virtual front-end servers would be redundant and, if laid out properly, each LUN presented to VMware would still be available if any one node went down. Of course, this presents several questions:

Is this even possible? Am I crazy or just plain stupid?... wait, that's beside the point.

There would likely be a performance hit, but how drastic would it be? Could it be mitigated by RAM and SSD caching? If so, where should the caching be focused: front end, back end, or both?
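
For instance, I picture the SSDs hanging off the front-end pool as something like this (device names hypothetical):

zpool add tank cache c4t0d0    # SSD as L2ARC read cache
zpool add tank log c4t1d0      # SSD as a separate ZIL/log device for sync writes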

What kind of hardware should I be looking at? OpenSolaris (OpenIndiana) seems to have a shorter list of supported hardware. Could anyone recommend chipsets? Also, any solid, inexpensive SATA controller cards (non-RAID) with at least four ports each that would have drivers for this scenario?

I have a 2U case picked out that can hold 12 hot-swap drives, to give you an idea of the size of each physical node.

All constructive input is greatly appreciated. Thanks!
 

theevilsharpie

Platinum Member
Nov 2, 2009
2,322
14
81
I think your attempt to eliminate the single point of failure will itself become a point of failure.

VMware's stuff works very well if you run it in a supported configuration. It doesn't perform well at all on cheap, hacked-together parts.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
You can use the LeftHand VSA for free as a bare target (no features, just dishing up LUNs) forever. It works great, especially SAN/iQ 9.

What you are describing is very common.

The SQL servers mount their storage partitions directly over iSCSI - but then how are you going to back them up? vMotion gets tricky too.

But it was very common practice in the low-cost, non-tiered storage days of SQL 2000/2005 - the new OS, the newer SQLOS (2008 R2), and tiered storage make a lot of the old rules "go away".
 

BaDaBooM

Golden Member
May 3, 2000
1,077
1
0
theevilsharpie - I think you always have that risk. In a larger environment with an expensive EMC SAN and Cisco fiber switches, there were several outages even though it was a fully supported configuration. It wasn't due to configuration issues either, because what fixed most of it (they still don't have it fixed completely) were bug fixes in the switch's code. You just have to test it as best you can and hope you don't run into anything like that. The setup I am doing now is for a smaller and less critical environment. Why the high availability? Partially just for the experience/fun of it, and partially for convenience and ease of maintenance. I saw all the money that was spent, and still there were problems. I guess this is my experiment to see if I can get similar functionality at a fraction of the cost.


Emulex - They brought in VSAs at one point for evaluation... the performance just plain sucked - for us, anyway. Not sure if they just didn't set it up right or what (it had plenty of hardware behind it). Also, with no raidz or de-duplication, it is missing some key features for me. I will have some SQL, but that won't be the majority of the VMs in my environment. However, I am curious what you mean by the rules going away.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
The old rules of having dedicated disks for database/tempdb/log are no longer really true with a modern SAN and the newer SQLOS (2008 R2) - plus lots of RAM and RAM caches on the SAN.

The newer SQLOS (SQL Server is effectively an OS) is much lazier about writing than its earlier versions, so you can throw the database/tempdb/log all onto a single SAN (we're talking a heavyweight SAN, $50K+) on the same disk sets with different LUNs and let it optimize away on its own.

So if you have a 2TB RAID set, you don't have to carve it up. Just thin provision and tell it (or in some cases it knows, or has drivers for) what you are doing, and it will handle its business (prioritization of storage, tiering across cache, SSD, SAS, SATA).

That's what I meant by the old-school rules of SQL disk layout being gone.

You do know the EMC storage solution "virtual appliance" is free too? It has a lot of cool features.

I've got a bunch of real LeftHand units - they perform well, but the really nifty part is that auto-thin provisioning, network RAID, and asynchronous WAN snapshot continuous replication (n:1) are all built into the software, so you don't pay per TB or à la carte like with most SAN vendors.

Check out AoE (ATA over Ethernet) - it uses layer-2 switching and RAID across network volumes to create low-cost redundant storage, which could be presented as iSCSI back to ESX.

IMO latency kills - you will be hard pressed to find even FC that can compete with six 15K SAS drives in RAID 10 (or six SSDs in RAID 10), because you have the network stack, switching, etc. in the path. Latency multiplies, especially with SQL servers.

If you check out the vSphere forum you'll see monstrous EMCs with huge latency and decent disk I/O, yet a single C300 will outperform them.

It definitely makes you think: if you can build smaller, cheaper clusters of unreliable hardware (a grid), maybe that is a better solution than one monster, highly reliable solution that costs $$$$$$$$$.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
With NAND availability and pricing hitting an inflection point, there is a movement towards redundant dedicated hardware, as Emulex mentions. It is much cheaper and easier to build two dedicated servers for SQL with DAS (SSD). You get six-figure SAN performance out of a $10k SSD unit.
 