is there anyone using ESXi datastores over iSCSI?

VMware cluster w/ mixed 10GbE and 8Gb FC from two Compellent SANs. Performance is fine.
 
2 VMware clusters with FC to the local SAN, mostly single-purpose Linux server VMs, some software build servers, and a NAS backed on one - a lot of static data storage. The clusters are iSCSI'd to each other for replication and failover purposes.

Drives are a mix of 10k, 15k, and 7,200 RPM. ~250 drives, probably ~370TB total.

iSCSI is primarily used for replication, but sometimes somebody fat-fingers and runs a VM from an iSCSI datastore on the "wrong" SAN. Performance has never been a problem to the point where somebody noticed, but we catch those things when we do VM audits.

iSCSI is slower for certain tasks (doing UNMAP, for instance) but for the most part it's fine. We don't really monitor IOPS - we're not bottlenecked by iSCSI, and we run it through the same cabling as data (different VLANs though) so it wouldn't really be fair.
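For reference, the UNMAP task mentioned above can be kicked off manually on a VMFS datastore; a CLI sketch (the datastore label is a placeholder, syntax per ESXi 5.5+):

```shell
# Manually reclaim dead space on a VMFS datastore (ESXi 5.5+).
# --volume-label is a placeholder; --reclaim-unit is how many VMFS
# blocks get unmapped per iteration (default 200).
esxcli storage vmfs unmap --volume-label=MyDatastore --reclaim-unit=200
```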

If you design it right and give it its own fabric, though, 10GbE iSCSI is generally considered "just as good" as 8Gb FC. And it's cheaper. A lot of lower-end SANs are iSCSI-only.

And honestly, unless you're bolting a LOT of hard drives together, or using SSDs, your IOPS will be limited by that anyway.
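Rough numbers behind that claim, as a sketch - the line rates are physics, but the per-spindle IOPS figures are common rules of thumb, not measurements:

```python
# Back-of-envelope: fabric bandwidth vs. spindle-limited IOPS.
# Per-spindle IOPS figures are typical rules of thumb, not measurements.

GBE10_MBPS = 10_000 / 8          # 10GbE line rate ~= 1250 MB/s
FC8_MBPS = 8_000 * 0.8 / 8       # 8Gb FC uses 8b/10b encoding ~= 800 MB/s usable

SPINDLE_IOPS = {"7200rpm": 80, "10k": 130, "15k": 180}  # typical per-disk

def fabric_iops_ceiling(mbps, io_kb=8):
    """IOPS the link could carry at a given IO size (in KB)."""
    return int(mbps * 1024 / io_kb)

# Even a 24-disk 15k RAID set tops out around:
disk_iops = 24 * SPINDLE_IOPS["15k"]          # 4320 IOPS
link_iops = fabric_iops_ceiling(GBE10_MBPS)   # 160000 8K IOPS on the wire

print(disk_iops, link_iops)  # the disks, not the fabric, are the bottleneck
```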
 

alyarb

Platinum Member
It will be an all-flash array: Intel XL710 on the iSCSI target and X520 mezzanine cards on the initiators, which are Dell blades. Four 40GbE twinax links going into a pair of Force10 blade switches. No other traffic on this network.

No DAS option, not interested in InfiniBand or FC, and there just aren't enough write-ups about tons of mixed-workload VMs on 10/40GbE iSCSI with any hard numbers to look at.

I'm sure it will perform great for sequential I/O, but here and there you might find an MS SQL or MySQL VM. A friend tells me this would perform horribly with SQL and Exchange.


Basically, I am looking for the intel Fortville version of this document:
http://www.chelsio.com/wp-content/uploads/2013/09/T5_iSCSI_Brief.pdf
 

Red Squirrel

No Lifer
We did at a place I worked at before. I don't remember any specifics and obviously can't check, but it was pretty decent. Two Dell EqualLogic SANs in a single cluster with several LUNs, and about 8 ESXi hosts. We had around 60 or so VMs running, including file server, database, Exchange, etc., serving a hospital. Probably around 500 employees or so.
 

I doubt you're going to have a lot of luck finding benchmarks for specific equipment, unless you get it from the manufacturer - but the NICs are only one link in the chain anyway.

Most of the information I've found comparing iSCSI to FC directly (on similar hardware, same workloads, etc.) has come from VMware whitepapers. Usually it's the same story - iSCSI is a higher latency, not awful, overall IOPS is close to even.

Example (older PDF but there's lots of 'em on vmware.com, you get the idea.): http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf

Is your friend over 30? "Horrible" is probably overstating the case, but that would have been accurate in, like, 2005.
 

alyarb

Platinum Member
Yes, he's over 30 and I'm not. I try to be diplomatic over fabric disagreements, but I think we are way beyond the point of no return with converging on Ethernet. If we invest in any new adapters, they must be Ethernet.

I took his remarks about SQL and Exchange over iSCSI as an exaggeration also. The Chelsio T5 family is really impressive; too bad they don't offer any NICs in the Dell blade form factor.

I think spending $400-600 per port on intel NICs is the most economical choice, though.

Each link into the blades will be 10-gig, but when we have 32 or more of them, it's probably time to widen the link to the iSCSI target(s). I can't find a lot of people using 40G iSCSI other than those Chelsio performance docs.
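The widening math is easy enough to sanity-check. A quick sketch using the blade and uplink counts above (what counts as acceptable oversubscription is a judgment call):

```python
# Edge-to-target oversubscription: worst-case initiator bandwidth
# vs. aggregate uplinks into the iSCSI target.
blades = 32
blade_link_gbps = 10
target_links = 4
target_link_gbps = 40

demand = blades * blade_link_gbps         # 320 Gb/s if every host peaks at once
supply = target_links * target_link_gbps  # 160 Gb/s into the array

ratio = demand / supply
print(f"oversubscription {ratio:.1f}:1")
# 2:1 is usually tolerable for storage, since all hosts rarely peak together
```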
 

imagoon

Diamond Member

Exchange server supporting 1800 users: 1314 IOPS @ about 83MB/s random IO over iSCSI. Peaks are normally around ~140MB/s random IO. Works fine. 3 SQL servers running on the same VM clusters.
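Those figures imply a fairly large average IO size, which matters when comparing against vendor IOPS ratings (usually quoted at 4K or 8K). A quick check on the numbers quoted above:

```python
# Average IO size implied by the Exchange numbers above:
# throughput / IOPS = bytes per IO.
iops = 1314
throughput_mb_s = 83

avg_io_kb = throughput_mb_s * 1024 / iops
print(f"{avg_io_kb:.0f} KB per IO")  # roughly 65 KB - much bigger than 8K
```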

Backed by an EMC SAN, using a mix of SAS and Fibre Channel; Exchange runs on 1TB 7,200 RPM SAS drives, SQL on some 15k FC drives.

iSCSI performance generally comes down to: good switches with deep buffers and flow control, and a good SAN with a decent amount of cache, flash caching, etc. The next step is making sure the host itself can even digest disk IO at the speeds you are looking at. I also recommend that you include MPIO in the design - 2 x 10GbE beats 1 x 40GbE for reliability. If you need 40GbE of bandwidth, do 4 x 10GbE MPIO.
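On the ESXi side, MPIO across those links usually means the Round Robin path policy with a low path-switch threshold; a CLI sketch (the `naa.` device ID is a placeholder for your actual LUN):

```shell
# Set Round Robin multipathing on an iSCSI LUN (device ID is a placeholder),
# then switch paths every 1 IO instead of the default 1000.
esxcli storage nmp device set --device naa.6000000000000000 --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.6000000000000000 --type iops --iops 1
```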
 

CoPhotoGuy

Senior member
We have plenty of customers running SQL or Exchange on iSCSI links.

The best thing to do is to get IOPS numbers from the hardware vendor if you can; that should give you a fairly good idea of what you can run on it. It also depends on the VM load, as not all SQL/Exchange boxes are equal - it comes down to how many clients rely on them.
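One way to turn a vendor IOPS rating into a go/no-go check is just a sum over the planned VMs. A sketch - the VM names, per-VM peak IOPS, array rating, and headroom factor below are all made-up placeholders:

```python
# Crude sizing check: sum peak per-VM IOPS against the array's rated IOPS,
# leaving headroom for bursts. All workload figures here are hypothetical.
planned_vms = {
    "exchange": 1500,
    "sql-prod": 3000,
    "sql-dev": 500,
    "fileserver": 400,
}
array_rated_iops = 80_000   # vendor's rating for the array (assumed)
headroom = 0.7              # don't plan past 70% of rated capability

required = sum(planned_vms.values())
fits = required <= array_rated_iops * headroom
print(required, fits)
```

The interesting part in practice is getting honest per-VM peak numbers (perfmon, vCenter stats) rather than the arithmetic.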
 