iSCSI performance

I'm wondering what other people are getting for iSCSI performance.

Crusty

Lifer
Sep 30, 2001
I have an iSCSI target set up on a Linux box with mirrored 1 TB drives that I use for dumping backups to.

I usually see 30-50 MB/s on a gigabit link. I could probably tweak a couple of things to get some more speed out of it, but it doesn't really matter; there's plenty of time for the backups to finish.

I have no doubt that you should be able to saturate most kinds of connections you can throw at the server. With just 4x 1 TB drives in RAID 10, I can saturate our gigabit network with FTP and HTTP file transfers.
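For a rough sense of how much headroom 30-50 MB/s leaves on a gigabit link, here's a back-of-the-envelope calculation; the overhead percentages are ballpark assumptions, not measurements from my setup:

Code:
# Rough practical ceiling for iSCSI payload over a single gigabit link.
# The overhead figures below are assumptions for standard 1500-byte frames.
LINK_MBIT = 1000                 # raw gigabit line rate
ETH_IP_TCP_OVERHEAD = 0.06       # ~6% for Ethernet/IP/TCP headers
ISCSI_OVERHEAD = 0.02            # small allowance for iSCSI PDU headers

payload_mbit = LINK_MBIT * (1 - ETH_IP_TCP_OVERHEAD - ISCSI_OVERHEAD)
ceiling_mb_s = payload_mbit / 8  # roughly 115 MB/s of usable payload

for observed in (30, 50):
    print("%d MB/s is about %d%% of the usable link" % (observed, observed / ceiling_mb_s * 100))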
 

holden j caufield

Diamond Member
Dec 30, 1999
Thanks for the input; that's exactly what I'm getting too: 30ish MB/s average with bursts to 50 MB/s. Any tweaks you can recommend?
 

RebateMonger

Elite Member
Dec 24, 2005
I just set up a StarWind Free iSCSI target on Server 2008 under a Server 2008 Hyper-V host. The "disk" is a single 20 GB virtual disk file on a single Hitachi 1 TB hard drive. Running HDTune 2.55, I got about 30 MB/s read performance across my gigabit network (no jumbo frames).

On the local computer (the Hyper-V host, using Hyper-V virtual NICs), I get 80 MB/s reads when testing with HDTune. Actually, the response is a bit weird: it starts at about 45 MB/s, then, at 10% of the disk, it jumps to 80 MB/s for the rest of the disk. I've tried this twice with the same results.
 

imagoon

Diamond Member
Feb 19, 2003
My VMware hosts can get pretty close to saturating the iSCSI link, but they are attached to an actual SAN with 15 disks in it: teamed dual controllers for a total of 4 Gbps. It's very rare to actually get there, though. I only see something like that when I'm booting the ESX server with the staged start-ups turned off, so all of the VMs are trying to boot at the exact same time.
 

Emulex

Diamond Member
Jan 28, 2001
I don't have ESX set up for MPIO (yet), but in sequential situations (not often) it nails gig-E (or both) all day long.

Storage vMotion and backups are usually where you see this kind of bandwidth.

It is your drives. I've got 12x 15K SAS drives.

Now, random I/O is nowhere near gig-E speed, not even close. But you can go to the VMware forums and read up on the "standard" benchmark everyone has put their SANs through. Random I/O is more relevant IMO, but the gig-E limit is a big problem.

Fortunately, dual-port 10 GbE cards are dropping below $500 now. I just need to afford a switch that can handle that.
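If anyone wants to see the sequential-vs-random gap on their own LUN without installing a benchmark suite, a rough Python sketch like this will show it. The path is a placeholder for a large test file on the iSCSI-backed volume, and the numbers are only honest if the file is much bigger than RAM (or you drop the page cache between runs):

Code:
import os, random, time

# Crude sequential-vs-random read comparison against a file on the
# iSCSI-backed volume. PATH is a placeholder; point it at a test file
# that is much larger than RAM so the page cache doesn't flatter you.
PATH = "/mnt/iscsi/testfile"
BLOCK = 64 * 1024          # 64 KiB per read
COUNT = 2000               # ~125 MiB read per pass

def bench(random_access):
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    if random_access:
        offsets = [random.randrange(0, size - BLOCK) for _ in range(COUNT)]
    else:
        offsets = [i * BLOCK for i in range(COUNT)]
    start = time.time()
    for off in offsets:
        os.pread(fd, BLOCK, off)
    elapsed = time.time() - start
    os.close(fd)
    return (BLOCK * COUNT / 1e6) / elapsed   # MB/s

print("sequential: %.1f MB/s" % bench(False))
print("random:     %.1f MB/s" % bench(True))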
 

randal

Golden Member
Jun 3, 2001
We have an offsite SAN that we use ZFS + iSCSI to replicate our hot, local SAN to. When the offsite falls behind for whatever reason and we sync the array, we can get up to ~600 Mbps (≈75 MB/s) on a GigE pipe.

We've seen that spike up to 750 Mbps on very heavy continuous read ops, but 600 Mbps is our actual sustained throughput rate.

That's on pools of 6x 1 TB 10k RPM drives in raidz2 (RAID-6).

Randal
 

Cable God

Diamond Member
Jun 25, 2000
I typically see 60-80 MB/s on a GigE link with 12x 1 TB 7.2k SATA disks on Win2008 with Hyper-V and an EMC AX4-5i.
 

TillFogel

Junior Member
Sep 10, 2009
Try setting up jumbo frames (MTU 9000), then test again. In my case, StarWind showed the same read/write speed as the HDD itself.
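Before re-testing, it's worth confirming jumbo frames are actually in effect end to end. On Linux, something like the sketch below does the check; the interface name and target address are placeholders for your own setup:

Code:
import subprocess

# Verify jumbo frames end to end (Linux). IFACE and TARGET are
# placeholders for your NIC and your iSCSI target's address.
IFACE = "eth0"
TARGET = "192.168.1.50"

# Local NIC MTU as reported by the kernel.
with open("/sys/class/net/%s/mtu" % IFACE) as f:
    print("local MTU:", f.read().strip())

# Don't-fragment ping sized for a 9000-byte MTU:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes of payload.
# If any hop in between is not jumbo-enabled, these pings fail.
subprocess.run(["ping", "-c", "3", "-M", "do", "-s", "8972", TARGET])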
 

mooseracing

Golden Member
Mar 9, 2006
Using the free StarWind program with their registry tweaks and a single gigabit connection, I can pretty much constantly get whatever the hard drive maxes out at, up to gigabit speeds.

I have been very happy with it. I am testing out running a few virtual machines off of it, and trying to get the funds for 10 Gb NICs and switches so I can do a custom-built iSCSI SAN.

File copies of 10 GB take just a hair over a minute; pretty good for free software and spare hardware.
 

Eagle17

Member
Nov 23, 2001
I have a dual ESX server farm with an iSCSI SAN using OpenSolaris. I have two pools set up: one with RAID 10 on 8x 15k 146 GB SAS drives, and the other on 5x 500 GB SATA I drives.

I can get 99 MB/s on the SAS drives using ATTO on the large sequential writes and reads.

On the smaller and non-sequential writes/reads, the speeds are horrible.

I have recently upgraded this system to 2 Gb/s fiber using COMSTAR and have seen a huge improvement in non-sequential reads/writes (more than 3 times faster).
 

azev

Golden Member
Jan 27, 2001
The transfer speed of iSCSI is largely dictated by the speed of the spindles it's running on. In our lab we have a Supermicro box running 16x 15K.6 Seagate drives and 16 GB of RAM, and with that we are able to saturate multiple gigabit connections with sequential transfers.
I believe Tom's Hardware has an article about this; go check out their website.
 

SammyJr

Golden Member
Feb 27, 2008
Originally posted by: holden j caufield
Just trying to get a baseline and what to expect.

Thanks

If you're using a modern iSCSI target like LIO-Target, the one in Solaris/OpenSolaris, or the one in Windows Storage Server 2008, your performance will be based entirely on your underlying network and storage.

I can max out Gigabit Ethernet using any of those 3 targets easily.
 

Emulex

Diamond Member
Jan 28, 2001
It is possible to do MPIO with the software initiator in ESX 4; I haven't tried it yet, but there's a good bit of discussion on VMTN about how to do it. I'm curious whether you can use a copper cable between two machines at 10 GbE. The HP NIC supports it ($150 for a 15 m cable instead of two GBICs, which are insanely expensive, plus some fiber). The bad part about 10 GbE is the 1 GB of reserved RAM PER PORT; that adds up quick.

Windows Storage Server is pretty sucky at about anything other than backup. It is quite useful for having SMB2/NFS/iSCSI in one box to back everything up. It's on TechNet for evaluation.
 

net-runner

Junior Member
Aug 16, 2016
In most cases iSCSI network link utilization depends on the testing pattern or workload type. Obviously, sequential file-based copying and small random block access will result in quite different throughput and network utilization.

StarWind itself does not shape its network performance, so you should be able to get almost full network link usage if the drives being tested are fast enough. Additionally, you can optionally speed things up even more by using StarWind's L1 RAM-based caching or L2 SSD-based caching (the sketch below shows the general idea).

You can find a lot of information regarding StarWind performance on their support forum here: https://forums.starwindsoftware.com/
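Purely as an illustration of what an L1 RAM cache is doing (this is not StarWind's actual implementation, just the general idea of keeping recently read blocks in memory in front of slower disk or network reads):

Code:
from collections import OrderedDict

# Conceptual sketch of an LRU block read cache, the idea behind
# "L1 RAM-based caching". Not any vendor's real implementation.
class BlockReadCache:
    def __init__(self, backing_read, capacity_blocks=1024):
        self.backing_read = backing_read      # function: block number -> bytes
        self.capacity = capacity_blocks
        self.cache = OrderedDict()

    def read(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)  # hit: served from RAM
            return self.cache[block_no]
        data = self.backing_read(block_no)    # miss: goes to disk/iSCSI
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used block
        return data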
 
Feb 25, 2011
50-60 MB/s sequential transfers. Random I/O is quite a bit faster than normal HDDs as long as there's unused RAM in the server for caching.
 