Two VMware clusters with FC to the local SAN, mostly single-purpose Linux server VMs, some software build servers, and a NAS that backs up to one of them - a lot of static data storage. The clusters are iSCSI'd to each other for replication and failover.
Drives are a mix of 10k, 15k, and 7.2k RPM. ~250 drives, probably ~370TB total.
iSCSI is primarily used for replication, but sometimes somebody fat-fingers it and runs a VM from an iSCSI datastore on the "wrong" SAN. Performance has never been enough of a problem for anyone to notice, but we catch those things when we do VM audits.
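For what it's worth, that audit check is easy to script. Here's a minimal sketch using pyVmomi - the vCenter hostname, credentials, and the "SAN-A" datastore naming convention are all made up for illustration; swap in whatever convention distinguishes your local array:

```python
# Sketch of a "VM on the wrong SAN" audit via pyVmomi.
# Assumes local datastores share a naming prefix (hypothetical: "SAN-A").
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

EXPECTED_PREFIX = "SAN-A"  # hypothetical naming convention for local datastores

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="audit",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Flag any VM with a disk on a datastore that isn't on the local SAN.
        wrong = [ds.name for ds in vm.datastore
                 if not ds.name.startswith(EXPECTED_PREFIX)]
        if wrong:
            print(f"{vm.name}: running from {', '.join(wrong)}")
    view.Destroy()
finally:
    Disconnect(si)
```

Run on a schedule, it turns the quarterly audit surprise into a daily one-liner.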
iSCSI is slower for certain tasks (UNMAP, for instance), but for the most part it's fine. We don't really monitor IOPS - we're not bottlenecked by iSCSI, and we run it through the same cabling as regular data traffic (on separate VLANs), so a head-to-head comparison with FC wouldn't really be fair anyway.
If you design it right and give it its own fabric, though, 10GbE iSCSI is generally considered "just as good" as 8Gb FC. And it's cheaper. A lot of lower-end SANs are iSCSI-only.
And honestly, unless you're bolting a LOT of hard drives together, or running SSDs, your IOPS will be limited by the spindles rather than the transport anyway.
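To put rough numbers on that - the per-spindle IOPS figures below are common rules of thumb, and the drive-count split is invented, not our actual inventory:

```python
# Back-of-envelope: aggregate random IOPS from a spindle farm vs. 10GbE.
# Per-spindle rules of thumb: ~75 IOPS (7.2k), ~125 (10k), ~175 (15k).
drives = {"7.2k": (100, 75), "10k": (100, 125), "15k": (50, 175)}

total_iops = sum(count * iops for count, iops in drives.values())
print(f"~{total_iops:,} aggregate IOPS")  # ~28,750

# At a small random-I/O size (say 8 KiB), that's only:
throughput_mib = total_iops * 8 / 1024
print(f"~{throughput_mib:.0f} MiB/s")  # ~225 MiB/s, vs ~1,190 MiB/s for 10GbE
```

Even ~250 spinning drives doing pure random I/O can't come close to saturating a dedicated 10GbE iSCSI link, which is why the transport rarely matters until you go all-flash.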