EqualLogic iSCSI speed issue...

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
I have recently been baffled by an issue that shows up no matter where I test from. I have had successful tests prior to the results I am getting now, but for whatever reason I cannot get past this problem: my iSCSI-based drives appear to be capped at 120 MB/s according to the ATTO disk benchmark.

The SAN itself is an EqualLogic PS4000: 16 x 250 GB SATA spindles (7200 RPM), RAID 50 with two hot spares, dual NICs with a redundant dual-NIC controller in standby. The SAN is connected to a Cisco 6513 chassis running a relatively current IOS.

I have adjusted the MTU on each port to support jumbo frames (9000).

Each port has been set to auto to allow autonegotiation of 1 Gb/full (and has also been hard set to 1000/full with no change in performance).

Each NIC has been disabled in turn to test a single-NIC config and confirm that each individual NIC connection benchmarked similarly (same results as with two NICs).

Given that the tests showed the same performance with a single NIC as they did with two, I am beginning to wonder what else I could be missing. Any suggestions as to what might cause the drives to be capped at essentially 120 MB/s? I know I once achieved benchmarks about 12 times that speed, though I'd be happy if the performance were simply better than some of our servers' built-in 3-disk RAID 5 arrays. What I am seeing now is barely above the performance of a single SATA disk, which is pretty poor in my book.
 

azev

Golden Member
Jan 27, 2001
1,003
0
76
How are you connecting to the EqualLogic box to run the test?
Any virtualization component involved ?
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
I've used an ESX virtual guest, a Hyper-V guest, and a physical Windows Server 2008 Enterprise host, all with the same results.
 

drebo

Diamond Member
Feb 24, 2006
7,035
1
81
Aggregated NICs won't give you a performance increase per host. 2 NICs @ 1 Gbps != 2 Gbps of total bandwidth for a single host. Aggregation lets two hosts connect and each receive full 1 Gbps. The switch distributes each IP flow between the two links based on source or destination MAC address or IP address (depending on how you have the switch configured).

So, really, you shouldn't see a performance increase from a single host using one NIC versus two.

Basically, your expectations are based on false assumptions about the technology you're using, while what you are actually getting is perfectly in line with what you should be getting.
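To make that concrete, here's a rough sketch (in Python, with a made-up XOR hash; the real Cisco algorithm depends on the configured load-balance method) of why a single iSCSI flow stays pinned to one member of an aggregated pair:

```python
# Simplified model of EtherChannel-style flow hashing (assumed XOR hash,
# not the exact Cisco algorithm). Every frame of a given flow hashes to the
# same member link, so one host-to-SAN session never exceeds 1 Gbps.

def pick_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    """Choose a member link from the low bytes of the source/destination MACs."""
    src_low = int(src_mac.replace(":", "")[-2:], 16)
    dst_low = int(dst_mac.replace(":", "")[-2:], 16)
    return (src_low ^ dst_low) % num_links

# One host talking to one SAN port: every packet takes the same link.
host, san = "00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:01"
print(pick_link(host, san))  # same index every time -> single-link throughput
print(pick_link(host, san))  # still the same index
```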
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Aggregated NICs won't give you a performance increase per host. 2 NICs @ 1 Gbps != 2 Gbps of total bandwidth for a single host. Aggregation lets two hosts connect and each receive full 1 Gbps. The switch distributes each IP flow between the two links based on source or destination MAC address or IP address (depending on how you have the switch configured).

So, really, you shouldn't see a performance increase from a single host using one NIC versus two.

Basically, your expectations are based on false assumptions about the technology you're using, while what you are actually getting is perfectly in line with what you should be getting.

Not exactly true when working with ESX and iSCSI. If you configure round robin correctly, it will distribute load onto each connection and exceed 120 MB/s per host. The LUNs need to be configured for round robin, though, and the sessions need to show "active/active."

This also requires switches and cabling that let the host see all of the controller ports. I also assume you have dual controllers... I shouldn't assume so much, lol.

Also, SATA in EqualLogics sucks... the controllers run the drives in SATA pass-through mode, which gimps them from the start. RAID 50 is also IOP-heavy.
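Roughly, here is what round robin buys you (illustrative numbers only, assuming two active paths; ESX's round-robin policy actually switches paths after a configurable I/O count rather than strictly per I/O):

```python
# Toy model of MPIO round robin: I/Os alternate across the active paths,
# so aggregate throughput can exceed a single gigabit link.

LINK_MB_S = 125                                   # ~1 Gbps in MB/s, pre-overhead
paths = ["vmk1 -> SAN eth0", "vmk2 -> SAN eth1"]  # hypothetical active/active paths

def spread_ios(num_ios: int) -> dict:
    """Send each I/O down the next path in turn."""
    counts = {p: 0 for p in paths}
    for i in range(num_ios):
        counts[paths[i % len(paths)]] += 1
    return counts

print(spread_ios(1000))                            # ~even split across both paths
print("ceiling:", LINK_MB_S * len(paths), "MB/s")  # ~250 MB/s instead of ~125
```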
 
Last edited:

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
iSCSI sucks - it is laggy. There's a reason FCoE is going to dominate and all the new servers have FCoE CNAs instead of just iSCSI TOE adapters. FCoE is better suited to lower latency and higher speeds.

I'm sorry, but whoever sold you that setup should have gone 10GbE.

You will feel the pain so bad if you run SQL Server, due to latency. No joke.
 

azev

Golden Member
Jan 27, 2001
1,003
0
76
Let's get the facts straight here:
- The OP mentioned a max transfer speed of 120mb/s, which is way slower than what a gigabit NIC is capable of (1000mbps).
- Does the server the OP is using to perform the test have multiple NICs?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
iSCSI sucks - it is laggy. There's a reason FCoE is going to dominate and all the new servers have FCoE CNAs instead of just iSCSI TOE adapters. FCoE is better suited to lower latency and higher speeds.

I'm sorry, but whoever sold you that setup should have gone 10GbE.

You will feel the pain so bad if you run SQL Server, due to latency. No joke.

Assuming no lag issues due to insufficient spindles/IOPS, I have not seen any significant lag using iSCSI in our environments. Most Fibre Channel (or FCoE) gear is still a significant expense: 10 GbE FCoE adapters are still $1500+ per machine, and the SAN units are also more expensive.

I have a rather distributed environment, however, so I rarely need the performance that FC or FCoE offers. Most of the latency issues I see with these units come from people loading SATA drives into SAS shelves (encapsulation mode cripples already slow drives), using too few spindles for the load they want, or overloading a LUN.

Someone running a million transactions a second might have issues with this setup, but many small SQL databases run on these units without issue.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
FCoE CNAs are standard on the HP G7 lineup now.

The latency is pretty real, though. SQL Server pains.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
iSCSI sucks - it is laggy. There's a reason FCoE is going to dominate and all the new servers have FCoE CNAs instead of just iSCSI TOE adapters. FCoE is better suited to lower latency and higher speeds.

I'm sorry, but whoever sold you that setup should have gone 10GbE.

You will feel the pain so bad if you run SQL Server, due to latency. No joke.

iSCSI doesn't suck; it's just that it has its uses and FCoE has its uses. I use it all the time with a c7000 blade chassis and it works well.
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
Let's get the facts straight here:
- The OP mentioned a max transfer speed of 120mb/s, which is way slower than what a gigabit NIC is capable of (1000mbps).
- Does the server the OP is using to perform the test have multiple NICs?

Thanks for catching what I was trying to relay in my message.

I am not at all trying to attain light speed or some such thing, but 120 MB/s is barely above the speed of a single SATA disk. With 14 disks in a RAID 50, one should expect a significantly faster transfer speed than 120 MB/s.

As to my server configs, each has two (or more) NICs. On the two-NIC servers, one NIC is dedicated to iSCSI traffic and the other to VM traffic, which at present carries only one VM. The servers are Dell PE2950s running Windows 2008 Enterprise, and I have also tested on a Dell 2970 running ESX 4.0.1.

As stated before, at one point I did see speeds of 600+ MB/s, but for the life of me I can't figure out what changed to make the SAN so slow. In addition, the SAN has very little I/O on it at this point, primarily because the speed is poor enough that I do not want to put it into production when I know something isn't right. I have about three connections used for tests at the moment, but nothing reading or writing heavily to the SAN.

All the NICs in use are either the built-in Broadcom NICs that support TOE and other iSCSI-related offloads (5708, I believe) or Intel GbE NICs; I have seen no difference between them. Tomorrow I intend to take a retired server (PE2650), load it with OpenFiler, and see if I can reach higher speeds than with the EqualLogic, to confirm that the servers are configured correctly and to see whether my problem is specifically a setting on the SAN.

To those suggesting iSCSI sucks: I can certainly understand that the more expensive Fibre Channel SANs are much better in comparison, but the speed I am seeing is far worse than any iSCSI SAN should deliver, and that is what I am looking to correct. I'd think it pretty safe to assume that a 14-disk RAID 50 config would be faster than a single-disk config, no?
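For what it's worth, a back-of-envelope check (assumed per-disk numbers, nothing measured) says the spindles shouldn't be the limit here:

```python
# Back-of-envelope check (assumed numbers): can a 14-disk RAID 50 outrun a
# single gigabit iSCSI path? Almost certainly yes; the wire is the ceiling.

PER_DISK_SEQ_MB_S = 80          # rough sequential rate for a 7200 RPM SATA disk
DISKS_IN_RAID50   = 14          # 16 spindles minus 2 hot spares
PARITY_DISKS      = 2           # RAID 50 here = two RAID 5 sets, one parity each
GIGABIT_WIRE_MB_S = 1000 / 8    # 125 MB/s before TCP/IP + iSCSI overhead

array_seq = (DISKS_IN_RAID50 - PARITY_DISKS) * PER_DISK_SEQ_MB_S
print(f"array sequential estimate: ~{array_seq:.0f} MB/s")   # ~960 MB/s
print(f"single GigE path ceiling:  ~{GIGABIT_WIRE_MB_S:.0f} MB/s")
# The array can source far more than one gigabit link can carry, so a
# 120 MB/s cap points at the network path, not the spindles.
```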
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
8 bits = 1 byte.
b = bit and B = byte

Are you seeing 120 megabits/second or 120 megabytes/second?
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Then 120MB/s is just about the max you're going to get on a single gigabit link.
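The math behind that ceiling (the overhead percentage is a rough assumption):

```python
# Gigabit Ethernet ceiling for a single iSCSI path.
line_rate_mbit = 1000                    # 1 Gbps
raw_mb_s = line_rate_mbit / 8            # 125 MB/s of raw bits
overhead = 0.05                          # rough Ethernet/IP/TCP/iSCSI header cost
print(f"usable: ~{raw_mb_s * (1 - overhead):.0f} MB/s")   # ~119 MB/s
```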
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
Then 120MB/s is just about the max you're going to get on a single gigabit link.

http://en.wikipedia.org/wiki/Data_rate_units

Oh, how right you are. Not sure what the heck was happening (or what I was thinking), but in some of my previous tests I was seeing rates well above that; it must have been a fluke in the benchmark tool, and I have since tried different tools just to make sure the benchmark itself wasn't faulty. I should have remembered this from the get-go, but those off-the-wall results made me think something was amiss when the numbers fell to 120. Turns out, as you said, that is right where they should be. Thanks, Crusty.
 

Jamsan

Senior member
Sep 21, 2003
795
0
71
Out of curiosity, can you post your ATTO benchmark? I'm curious how our EqualLogic configuration stacks up against others.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Dunno, I'm gonna grab a pair of P4300 G2s soon; I'll post how they stack up (you can do network RAID 0/1/10/5). I will probably run them in RAID 1 to prevent any sort of failure.

I don't suspect they will be that fast, but the 5520 Xeon and 8 GB of RAM for cache should help on read speed.

Can't you just pop a 10GbE NIC into the EqualLogic? That's how the LeftHand works.
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
Dunno, I'm gonna grab a pair of P4300 G2s soon; I'll post how they stack up (you can do network RAID 0/1/10/5). I will probably run them in RAID 1 to prevent any sort of failure.

I don't suspect they will be that fast, but the 5520 Xeon and 8 GB of RAM for cache should help on read speed.

Can't you just pop a 10GbE NIC into the EqualLogic? That's how the LeftHand works.

I'd guess that could be possible, but that kind of expense is hardly justifiable in my environment. A 24-port 10GbE switch alone is $10,000 (http://configure.us.dell.com/dellsto...=MLB1842&s=biz). It'd sure as heck be nice, but our SAN wasn't much more than double that (AFAIK), so adding that kind of expense would be a very tough sell. Not to mention needing to add 10GbE NICs to each server that has access to the SAN.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Just to give you some info: I ran that test on one of the arrays I use for archiving (7200 RPM 1 TB SAS disks in RAID 5) and I get ~111384 write / 142280 read with a round-robin active/active EqualLogic. It is really hard for me to find an idle period to run it in, though, since the array is constantly pulling backup data, etc. (I know the results are a bit bogus because, rather than a smooth increase, something random like the 256K test would come in slower, showing the machine was under other load at the time.)

I also tested a set of four 15K 149 GB disks in RAID 10 that I use for testing in another unit (same type) and pulled 127522 write / 165700 read. This one was run inside a VM in the test environment, since there are no direct Windows filesystems on that disk group. ESXi 4 using paravirtual disks, if you're curious.
 
Last edited:

Agamar

Golden Member
Oct 9, 1999
1,334
0
0
I am getting one of those Dell 10GbE switches to interconnect my servers next year. It seems like a good deal, and since we only have 12 servers, there will be plenty of room to grow later.

BTW, try using IOMeter and tell me what you get with these settings:
Max Disk Size: 8000000 sectors
Starting Disk Sector: 0
# of Outstanding IOs: 64
500 Transactions per second

I used a 100% read benchmark on my system (while it is in use) and got 162 MB/sec and 5204 IO/s.
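Those two numbers hang together if the access spec was roughly 32 KB transfers (an assumption on my part; throughput = IOPS x transfer size):

```python
# Relationship between throughput, IOPS and transfer size for the run above.
throughput_mb_s = 162
iops = 5204
avg_kb_per_io = throughput_mb_s * 1024 / iops
print(f"~{avg_kb_per_io:.0f} KB per I/O")   # ~32 KB
```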
 

SnOop005

Senior member
Jun 11, 2000
932
0
76
I think buying a 10GbE switch is a waste of money at this point. Your iSCSI is running on SATA drives, and even in your configuration I don't think the drives are fast enough to saturate a 10GbE link. Obviously your current configuration is maxing out a single gigabit link (1000/8 = 125 MB/s). I would make sure you're bonding the NICs on the EqualLogic as well as setting up a link aggregation group (802.3ad) on your switch for any device that has bonded NICs.

Also, I don't think VMware supports jumbo frames.
 
Last edited:

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
There's a good guide on how to do ESX MPIO round robin with 2, 3, or 4 gigabit ports. It's a pita to set up (CLI required), but it will pull its weight.

An MSA2324sa will still outrun most iSCSI by a long shot, since you get dual-path (2x 4x3Gb) SAS connections per server or single (12 Gbit) links, 8 singles or 4 dual-path. Heck, it's 90% as fast as the MSA2324FC.

An iSCSI SAN like the P4300 LeftHand, though, can run 8, 16, whatever GB of RAM, so it gets about the same performance in RAID 5 as in RAID 10 (80%), and reads through the 8-16 GB of cache are stupid fast.
 