iSCSI performance hit

ochadd

Senior member
May 27, 2004
408
0
76
I've been looking into putting a couple of our SQL databases, located on separate machines, on a Vertex or X25-M over iSCSI. This is my first time playing with it. The performance hit I'm taking running it over the network vs. locally is quite massive.

67% read, 4 outstanding IOs, 100% random, 4k size

Vertex local
Read MBps: 6.5
Write MBps: 3.2
IOps: 2465

Vertex over iSCSI
Read MBps: 2.93
Write MBps: 1.44
IOps: 1120

Network usage: 8%
CPU usage: 15%

The FreeNAS iSCSI target is running on the same dual-core laptop that I used for the local test under Windows.
Both machines are connected to a Dell 2724 switch, with no configuration done.
I've introduced a ton of variables here, but do these results look like they could be improved via optimizations like jumbo frames or better practices of some kind, or is this indicative of the overhead of iSCSI?
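As a rough consistency check (a quick sketch; it just confirms the MB/s and IOPS figures above describe the same thing, given the 67/33 read/write split):

```python
# At a fixed 4K block size, MB/s should be roughly IOPS * block size,
# split by the 67% read / 33% write mix used in the test.
BLOCK_KB = 4
READ_FRACTION = 0.67

def split_mbps(iops):
    total = iops * BLOCK_KB / 1024.0  # total MB/s across reads and writes
    return total * READ_FRACTION, total * (1 - READ_FRACTION)

for label, iops in [("local", 2465), ("iSCSI", 1120)]:
    read_mbps, write_mbps = split_mbps(iops)
    print(f"{label}: ~{read_mbps:.2f} MB/s read, ~{write_mbps:.2f} MB/s write")
# local: ~6.45 / ~3.18  (measured: 6.5 / 3.2)
# iSCSI: ~2.93 / ~1.44  (measured: 2.93 / 1.44)
```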
 

skypilot

Golden Member
Mar 20, 2000
1,616
0
0
The additional latency introduced by iSCSI processing and data propagation over the network is to blame. It would be less pronounced if disk latency were higher (say, with a traditional hard drive), but when it's 0.01 ms, it's quite noticeable. You'll still get very good performance, though.

As an aside (it won't help much for your random 4K IOs), I always use jumbo frames in ethernet/IP storage networks (iSCSI, AoE, NFS, etc.), but then again I always use a completely separate physical network for storage stuff as well.
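To put a rough number on that: at a fixed queue depth, IOPS is roughly queue depth divided by per-IO latency, so you can back out what the network path added (a sketch using the figures from the first post):

```python
# Invert IOPS ~= queue_depth / per-IO latency to estimate the latency
# the iSCSI path added, using the benchmark numbers from the first post.
QUEUE_DEPTH = 4

def per_io_ms(iops):
    return QUEUE_DEPTH / iops * 1000.0

local_ms = per_io_ms(2465)  # ~1.6 ms effective per IO locally
iscsi_ms = per_io_ms(1120)  # ~3.6 ms effective per IO over iSCSI
print(f"added latency: ~{iscsi_ms - local_ms:.1f} ms per IO")
# ~2 ms extra barely dents an 8 ms mechanical disk (~500 -> ~400 IOPS at
# this queue depth) but cuts this SSD's throughput by more than half.
```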
 

ochadd

Senior member
May 27, 2004
408
0
76
Originally posted by: skypilot
The additional latency introduced by iSCSI processing and data propagation over the network is to blame. It would be less pronounced if disk latency were higher (say, with a traditional hard drive), but when it's 0.01 ms, it's quite noticeable. You'll still get very good performance, though.

As an aside (it won't help much for your random 4K IOs), I always use jumbo frames in ethernet/IP storage networks (iSCSI, AoE, NFS, etc.), but then again I always use a completely separate physical network for storage stuff as well.

It's still faster than the 3x15k RAID 5 that's bottlenecking the problematic database, but likely not fast enough to cover both databases as I had hoped.

This is just a test setup, but I had hoped to prove that moving all of our databases to a single SAN based on SSDs would be the ticket. If you lose the response-time benefit it doesn't make much sense. Is this why FC is still considered better than iSCSI?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
read the starwind forum.

you need to tweak the o/s and the switch settings (jumbo frames/flow control if possible).

the problem with gigabit is just that: it's gigabit. i hit that limit all day long. you then need MPIO to aggregate two gigabit links.

which is where 4/8 Gb FC sits nicer.

of course $500 buys you a netxen 2-port 10GbE nic (1 GB reserved ram per port), but these can do FCoE or iSCSI at hella fast rates.
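for scale, here's what the pipes give you before protocol overhead (a rough sketch; real iscsi throughput lands lower):

```python
# Raw line-rate ceilings, ignoring TCP/iSCSI overhead.
LINK_MBPS = 1000 / 8              # one gigabit link: ~125 MB/s

print(f"1 x 1GbE      : ~{LINK_MBPS:.0f} MB/s")
print(f"2 x 1GbE MPIO : ~{2 * LINK_MBPS:.0f} MB/s")
print(f"1 x 10GbE     : ~{10 * LINK_MBPS:.0f} MB/s")
# For 4K random IO, one gigabit link is ~32,000 IOPS at line rate, so
# small-block workloads hit latency limits long before bandwidth ones.
print(f"4K IOs to fill one 1GbE link: ~{LINK_MBPS * 1024 / 4:,.0f} IOPS")
```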


go to the vmware forums on storage; there are extensive benchmarks.

i prefer DAS honestly but storage vmotion is super dope.

p.s. make sure your partitions are aligned
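on the alignment point, the check is just arithmetic (a sketch; on windows the offset comes from something like wmic partition get StartingOffset, and the 64KB stripe is only an example):

```python
# A partition is aligned when its starting offset divides evenly into the
# array stripe size. If not, stripe-sized IOs straddle two stripes, so
# each one costs two disk ops instead of one.
STRIPE_BYTES = 64 * 1024  # example: 64 KB stripe

def is_aligned(starting_offset_bytes):
    return starting_offset_bytes % STRIPE_BYTES == 0

print(is_aligned(32256))    # False: legacy 63-sector offset (63 * 512 B)
print(is_aligned(1048576))  # True: the 1 MB offset newer Windows versions use
```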
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Originally posted by: Jamie M
Hi, right now there is a free version of the StarWind server, I suggest you check it out.
http://www.starwindsoftware.com/free
I'm looking forward to trying this again. It appears it now supports Server 2008 32- and 64-bit. Earlier there was only Server 2003 SP1 support, which isn't well supported under MS Hyper-V Server.
 

ochadd

Senior member
May 27, 2004
408
0
76
Originally posted by: Jamie M
Hi, right now there is a free version of the StarWind server, I suggest you check it out.
http://www.starwindsoftware.com/free

Thanks for the link, but I'm not sure how that applies to my problem.

Originally posted by: Emulex
read the starwind forum.

you need to tweak the o/s and the switch settings (jumbo frames/flow control if possible).

the problem with gigabit is just that: it's gigabit. i hit that limit all day long. you then need MPIO to aggregate two gigabit links.

which is where 4/8 Gb FC sits nicer.

of course $500 buys you a netxen 2-port 10GbE nic (1 GB reserved ram per port), but these can do FCoE or iSCSI at hella fast rates.

go to the vmware forums on storage; there are extensive benchmarks.

i prefer DAS honestly but storage vmotion is super dope.

p.s. make sure your partitions are aligned

Will try out those optimizations and post back. The partition is aligned correctly and the firmware is the latest rev from OCZ.

Let's say 1-2 SSDs or another 4 SCSI drives will cover all my fast storage needs, but I have 3 physical machines that need access to it. Each machine has two single-core, hyper-threaded processors and 4GB of memory, and that is plenty.

I won't pretend to be a master of all storage solutions, but I've got two machines that need more IO and a new machine coming in the next several months that will need similar performance. Each is putting out about 700 IOPS and I need around 1000. Ideally a setup that can provide 2500+ to four different physical boxes would cover me for the foreseeable future. Intel SSDs seem like the perfect hammer for this nail: start off with a RAID 1 set and then move to RAID 10 or 5 as the data grows. I've only got 65GB of SQL databases, and of that only 32GB really needs to be fast. Figure double that for future growth. Which method is best to get SQL access to the quick storage?
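To lay the math out in one place (a rough sketch; the per-drive IOPS figures are my assumptions, not vendor specs):

```python
import math

# Rough sizing from the numbers above; per-drive random IOPS are
# assumed ballpark figures, not vendor specs.
TARGET_IOPS = 2500            # headroom goal across all boxes
PLANNED_GB = 65 * 2           # 65GB of databases, doubled for growth

IOPS_15K_SPINDLE = 180        # assumed per 15k SCSI drive
IOPS_INTEL_SSD = 4000         # assumed, X25-E class

print("15k spindles needed:", math.ceil(TARGET_IOPS / IOPS_15K_SPINDLE))  # 14
print("SSDs needed:", math.ceil(TARGET_IOPS / IOPS_INTEL_SSD))            # 1 (so a RAID 1 pair)
print("capacity to provide:", PLANNED_GB, "GB")
```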

Upgrading the machines individually seems wasteful at this point. It will cost me $1000 in hardware to add two more drives to each server. They all run SCSI HBAs, so the drives are going to die with those machines. An Intel X25-E gives 4000+ IOPS for $650 and you lose the mechanical platters. If I wanted to put them internally, I'm also looking at a SATA/SAS HBA for each machine.

Nothing like trying to answer my own questions before it's even posted but I suppose I would like some input. Some options I'm seeing:

1. Stick more mechanical drives in the existing servers. Upgrade again in a year.
2. Build a SAN requiring another box, more NICs, more switches, and possibly some training.
3. Consolidate SQL boxes into a new box, SQL 2008 license, 100ish SQL cals, fast internal storage.
4. Virtualize the existing SQL setups into a new box. Use existing licenses, fast internal storage, possibly some training.
5. DAS for three machines? I don't know if this is possible. Anyone suggest a setup?

Any and all input is appreciated.
 

ochadd

Senior member
May 27, 2004
408
0
76
New X25-M released

They aren't marketed towards servers, but they are begging, begging, to be run in them. It's all my storage wishes coming true, if I can just figure out which connectivity method is right.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
yeah, DAS with vsphere essentials ($995 msrp) may not be a bad idea.

i am actually going to convert a windows storage server into an esx 4.0 server, because i can always run openfiler in a vm to export local das storage (on more than 1 machine).

don't dick around with SSD. it's too expensive (X25-E) for the iops. there are many ways to find cheap brand-name sas units.

not sure why you need multiple copies of sql server, but vsphere (esxi/esx 4.0) does a GREAT job on ram deduplication of static areas (i.e. dll's stuck in ram).

running 3 2008 boxen with sql server eats like 5gb of ram when i provisioned 3gb each.

with enough platters you can still separate log from o/s from database, even in a virtualized environment.

the dl385 with istanbul is a very nice server for vmware right now. it can easily fit all your machines with the dual 6-core, and ram isn't so freaking expensive.
 

ochadd

Senior member
May 27, 2004
408
0
76
What method would perform the best serving a single RAID set to multiple machines? The data set is so small it's hard to accept that an overhaul of my entire setup is required. A 40GB low estimate and 80GB max is all I need to provide for.

You don't like SSDs, but they seem to be the obvious choice for hardware. Right now I've got nine 147GB SCSI drives in RAID sets serving 64 gigabytes of data. Two Intel drives would more than cover the speed if I can get rid of the overhead.

To me, DAS using SAS means internal storage or one machine connecting to an external RAID box like a Dell MD3000. I don't know how I would connect multiple machines directly to a single RAID set without using iSCSI, FC, or other network shares. Is there something I'm missing here? Maybe using DAS and separate RAID 1 sets for each database is the ticket?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
what is the point of sharing this storage? are you trying to run a sql cluster or something?

i think what i was saying is: if you need storage vmotion you could export your DAS using iscsi to lesser machines and/or to the DAS machine itself.

also, what kind of workload is your sql doing? highly transactional? data warehouse/ETL/BI? it really does make a difference in architecting a solution.

go over to the vmware vsphere storage forum and check out what other SAN solutions are doing.

a DL380G6 with say 8 drives in raid-10 DAS can totally smoke some older SAN/DAS systems that cost 2-10X more money.

give openfiler a shot. x64 only if you have the ram to use it; 32bit if you have a p4 (non-core-duo).
 

ochadd

Senior member
May 27, 2004
408
0
76
We are doing write-heavy operations generating production records for manufacturing processes, combined with large reads against several tables of 9 million+ records each and growing. The write operations are fine, but when reporting against, viewing, and altering said data, the delay is 15+ seconds for various operations. The company that designed the software that runs on top of this database has been unsuccessful at optimizing things to get the delay down.

The point of my project is supposed to be to solve these delays. My hope is that I could kill several birds with one stone by putting our other SQL databases that have no problems on an expandable and reliable storage "backbone" as well. If I'm already buying a nice RAID card and 2-4 SSDs for the needed performance, and they happen to be large enough to support all the data, I might as well not waste the very quick and expensive free space. I know I can solve my current problem with simple internal storage; even throwing in another 2-3 SCSI drives might solve it.

Another push for me is getting away from spinning disks. I've got several servers still running 36GB 15k RPM drives that cost 1.5x the price of a new SSD when new, and this is my chance to open the gate and start letting them out to pasture.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
1. more RAM. MORE RAM. more RAM.
2. 2 sql servers (replication) 1 to read and 1 to ETL
3. never use raid-5
4. go for the SSD but good luck finding certification on that.
 

ochadd

Senior member
May 27, 2004
408
0
76
Originally posted by: Emulex
1. more RAM. MORE RAM. more RAM.
2. 2 sql servers (replication) 1 to read and 1 to ETL
3. never use raid-5
4. go for the SSD but good luck finding certification on that.

1. I agree 100% that more RAM may be exactly what's needed. But since we're currently running SQL 2000 Standard on Server 2003, doing so means I have $11,500 in licensing fees to pay Microsoft to upgrade the software first: two SQL processor licenses and Server 08. At that point I might as well build a new machine and consolidate, which is what I'm trying to delay until next year.

2. Adding a second server would be a waste, since the first could be upgraded to cover it for the same or less cost.

3. Moving to RAID 10 and adding another couple of SCSI drives would probably solve the problem too, but then I'm still into it for around $720 for the drives. Adding a second server is $1.2k minimum.

4. What do you mean by certification? The community agreeing with me, or a SQL-certified setup of some kind?

I haven't made up my mind, and I'm honestly considering a consultant at this point. If I'm so far off base with my RAID card + SSDs for $650 then I'm missing something completely. This is the reason I come to the forums, though, so I'm not working in a vacuum.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
sql 2000 can use AWE on win2k3 EE no problemo.

sql 2000 is AWE friendly.

dl380 G4 with 12GB of ram; sql server 2000 eats up as much ram as you SET it to use.

say you give sql server 6GB, 2GB for o/s buffers, 4gb for 32bit junk.

all on x86 versions of both apps.
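a trivial sketch of that split (the actual knobs in sql 2000 EE are sp_configure 'awe enabled' and 'max server memory'; note standard edition stays capped at 2gb regardless):

```python
# The 32-bit memory split suggested above, on a 12 GB box. In SQL 2000 EE
# this is set via sp_configure 'awe enabled' and 'max server memory';
# Standard Edition is capped at 2 GB no matter what.
TOTAL_RAM_GB = 12
split = {
    "sql server buffer pool (AWE)": 6,
    "o/s and file-system buffers": 2,
    "32-bit user address space / junk": 4,
}
assert sum(split.values()) == TOTAL_RAM_GB
for use, gb in split.items():
    print(f"{gb} GB -> {use}")
```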
 

ochadd

Senior member
May 27, 2004
408
0
76
Originally posted by: Emulex
sql 2000 can use AWE on win2k3 EE no problemo.

sql 2000 is AWE friendly.

dl380 G4 with 12GB of ram; sql server 2000 eats up as much ram as you SET it to use.

say you give sql server 6GB, 2GB for o/s buffers, 4gb for 32bit junk.

all on x86 versions of both apps.

We are running SQL 2000 Standard, so we are locked into 2GB of memory usage. I know it's ridiculous, but back when I specced out that server it seemed like a lot.

SQL 2000 standard on Windows 2003 standard.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
well, good luck with the SSD. i think it is the wrong approach. the whole point of sql server is to manage lazy writes and cache indexes/tables in ram.

maybe you should bust out your copy of sql 2005. it's 99.99% compatible with sql 2000 (esp in compat mode); just load it up on a small cheapie system with a crapload of ram, say 8gb dedicated to sql server x64 on XP64 (you ever heard of MAPS? free internal-use software rules with MAPS-NA).

you do know how to use the sql server query analyzer, right? i mean, i've seen some ass-backwards views written that made no sense at all, and the parallel query processor was doing massive table scans over and over in the code. ridiculous. knocked 30-minute reports down to 30 seconds by rewriting the query a little (hints) and tweaking the indexes.

it's not uncommon practice to separate your data as well. say every two months you move the current dataset into another tableset and use unions and views to limit the focus on the dataset.

i'm also going to suggest you review the settings on your scsi controller's battery-backed write cache (BBWC). after 3 years they disable themselves (compaq ciss) due to the battery being dead. the extra ram and battery write-back have a HUGE impact on raid-5 since you are doing a double read per write.
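that "double read per write" is the classic raid-5 small-write penalty. a rough sketch of the standard model (the per-spindle figure is an assumed ballpark, not a spec):

```python
# Small random writes: RAID-5 costs 4 disk ops per host write (read old
# data + old parity, write new data + new parity); RAID-10 costs 2
# (one write to each mirror). BBWC hides this until the cache fills.
SPINDLE_IOPS = 180  # assumed random IOPS per 15k drive

def host_write_iops(drives, penalty):
    return drives * SPINDLE_IOPS / penalty

print("4-drive raid-5 :", host_write_iops(4, 4))   # ~180 host write IOPS
print("4-drive raid-10:", host_write_iops(4, 2))   # ~360 host write IOPS
```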

my old-ass server has 3x72gb in raid-5; it was bought before i got here. the database is about 10gb. smart array 6i with BBWC at a 75% write/25% read ratio. the non-aligned partition (!@#%$) loses about 20% i/o with a 64KB stripe size (best for sql server 2000).

if i were to do anything, i'd add 1 more 72gb scsi drive and do an online migration from raid-5 to raid-10, then move the BBWC to 75% read and 25% write (since the ram almost acts as a bottleneck at raid-10; it's not DDR2 in those old boxen).

but the fact is i've got a brand new dl380 to replace the dl380g4, and with 4x146gb 10K 2.5" SAS running under vsphere esx 4.0, with 4 vm's running, the entire server is faster than it was before on the old gear.

 

ochadd

Senior member
May 27, 2004
408
0
76
Originally posted by: Emulex
well, good luck with the SSD. i think it is the wrong approach. the whole point of sql server is to manage lazy writes and cache indexes/tables in ram.

maybe you should bust out your copy of sql 2005. it's 99.99% compatible with sql 2000 (esp in compat mode); just load it up on a small cheapie system with a crapload of ram, say 8gb dedicated to sql server x64 on XP64 (you ever heard of MAPS? free internal-use software rules with MAPS-NA).

you do know how to use the sql server query analyzer, right? i mean, i've seen some ass-backwards views written that made no sense at all, and the parallel query processor was doing massive table scans over and over in the code. ridiculous. knocked 30-minute reports down to 30 seconds by rewriting the query a little (hints) and tweaking the indexes.

it's not uncommon practice to separate your data as well. say every two months you move the current dataset into another tableset and use unions and views to limit the focus on the dataset.

i'm also going to suggest you review the settings on your scsi controller's battery-backed write cache (BBWC). after 3 years they disable themselves (compaq ciss) due to the battery being dead. the extra ram and battery write-back have a HUGE impact on raid-5 since you are doing a double read per write.

my old-ass server has 3x72gb in raid-5; it was bought before i got here. the database is about 10gb. smart array 6i with BBWC at a 75% write/25% read ratio. the non-aligned partition (!@#%$) loses about 20% i/o with a 64KB stripe size (best for sql server 2000).

if i were to do anything, i'd add 1 more 72gb scsi drive and do an online migration from raid-5 to raid-10, then move the BBWC to 75% read and 25% write (since the ram almost acts as a bottleneck at raid-10; it's not DDR2 in those old boxen).

but the fact is i've got a brand new dl380 to replace the dl380g4, and with 4x146gb 10K 2.5" SAS running under vsphere esx 4.0, with 4 vm's running, the entire server is faster than it was before on the old gear.

Well, you have convinced me. My boss gave me the green light to overhaul it all. The only catch is that I'm limited to $10k to do it with.

I'm going to virtualize it all and move the SQL machine to Server 2008 and SQL 2008. I haven't decided between ESX and Hyper-V, but I've also been given the green light to build a $2500 test box that will become a spare server when I'm done.

The fast internal storage would have been a band-aid approach to the problem, and surprisingly he was in agreement. So now it's off to redesign my entire infrastructure.

Memory used across all the servers I'm considering replacing is 9.5GB, and there are seven Pentium 4-era Xeons doing the work. Planning to start off with 48GB of DDR3 and dual 5520 Nehalems, give 8-10GB to the troubled SQL server, and see what happens. That should allow it to cache damn near the entire database.

Lots of planning to do to see if $10k will get me there.


 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
That sounds like a good plan.

Take a detailed look at the move to SQL 2008, though. On SBS 2008 Premium, at least, MS has been providing free SQL 2005 licenses in case there are compatibility problems with SQL 2008. I have NO idea how often there are compatibility issues, what they might be, or how hard they are to fix. So I'm providing a question and not an answer. Sorry.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
most "microsoft" apps seem happy with 64bit sql server 2008 on server 2008. it's the add-on isht that might have some issues outstanding. then again, i'm not sure i've seen a stable true-64bit AV program for 2008 yet... gotta love new tech.


btw i'd wait for 2008 R2.

sign your company up as a microsoft registered partner (figure out a way) and get MAPS; $249/year and you get windows server 2008 standard and sql server 2008 standard (which in 64bit has unreal limits).

skip hyper-v. if you are unix-minded, vsphere essentials ($995 msrp) will do it for 3 vm servers. if you are not so inclined, i suggest you get vsphere essentials plus (or buy veeam backup).

Hyper-v is cool; in some ways it's faster, but nowhere near as polished, man. I can't speak for 2008 R2 hyper-v, but i ditched the "free" solution (2008 is free for me) and went with vmware, the cheapie. I can install ghettovcb etc. I'd strongly suggest NOT wasting your time on hyper-v. been there, done that. it is nice; it is actually faster (in some cases), but the management is just not there.

btw, when you do choose a solution, keep in mind UDIMM ECC is a bit cheaper and a lot faster, but the current servers (dl380 G6) are limited to 24gb (period).

RDIMM ECC starts to get expensive over the 4gb mark (8gb modules are mad expensive).
also, when you double-bank up (12 dimms versus 6) the memory speed drops to 1066, and if you triple up to the 18 dimm slots you drop to 800.

no mixing. here's a good link on DDR3 in servers:

http://h18006.www1.hp.com/prod...s/tool/hp_memtool.html

if price is key and 24gb is a limit you can deal with for a long time, unbuffered ddr3 will rule in speed and price. 24gb is hard to swallow for some, since you could do 6/12/18 x 4gb registered ddr3 instead. start with 12 x 4gb RDIMM and have some room to throw in more ram later.
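the bank/speed tradeoff in one place (a sketch; it assumes 4gb RDIMMs and 6 slots per bank on a dual-socket board, with the speed steps from above):

```python
# Nehalem-era memory speed drops as you populate more banks: one bank of
# 6 DIMMs runs at full rated speed, two banks drop to 1066, three to 800.
DIMM_GB = 4
SLOTS_PER_BANK = 6
speed_by_banks = {1: "full rated speed", 2: "DDR3-1066", 3: "DDR3-800"}

for banks, speed in speed_by_banks.items():
    dimms = banks * SLOTS_PER_BANK
    print(f"{dimms:>2} x {DIMM_GB}GB RDIMM = {dimms * DIMM_GB:>2} GB -> {speed}")
```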

take a look at the hp dl380G6 smart buys and that ram builder.

the p410 with 512MB BBWC is hella fast with ddr3. the same drives moved from a dl380 g5 with a p400 gain about 20% speed due to the faster controller; this is proven in the vsphere disk i/o benchmarks.

a lot of folks do not recommend sql server in a vm. i do not think they took into consideration the fact that 2008/2008 is faster than the traditional 2003 R2/sql 2005. you have to make that judgement call on your own.

in my own personal testing on a dl380 g5/p400-256-bbwc with 4x146gb 10K RPM in raid-10, i found the bottleneck was not ram, and definitely not cpu, but all disk. whatever you do, get the fastest drives you can in there (15K).

I am looking at how to best follow the sql server practice of keeping the main datastore separate (from the rest of the vm's) and the log files separate as well. it gets tricky if you only have 8 drives; they do have some no-dvd versions of the chassis that support more drives. it would be very wise to consider this if you have no plans to move to a high-end san.

http://communities.vmware.com/docs/DOC-1520
^^ read and weep. you will see some VERY expensive FC san solutions have craptastic results.

I was all ready to go iSCSI, but without 10gbe i do not think sql server would reach its full potential, and, well, 10gbe switches are expensive. the nic's are only $500 for 2 ports, but each nic eats 2gb of ram for buffers (OUCH!).
 

ochadd

Senior member
May 27, 2004
408
0
76
I'm leaning towards a Dell T710: 48GB of memory, dual Xeon 5520s, six gigabit NICs, and room for 16 hot-swap drives for $4k. A RAID 1 set for virtual machines, another for log files and slow storage, and another RAID 1 set of the new 160GB X25-Ms that the SQL data will ride on. That will put me right at $10k.

I'll have to screw around some next week and see what's all out there. Enough thinking about IT for this week.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
X25-E (SLC) only for servers dude.

Also be very careful when mixing SAS and SATA on the same backplane.

make sure your controller is pci express v2 (aka 5.0 GT/s) with DDR3 ram.

you do realize 300gb 15K SAS 3.5" hot-swap drives are like $330 (hp smart buy).

i'd do raid-1 for the o/s, raid-10 across 8 drives for the sql database, and raid-10 across 4 15K sas drives for the log.

you need to make sure, if you are doing large ETLs, that your log files can expand fast enough and grow large enough.
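laying that out with the $330 300gb drives (a sketch; usable space is raw, before formatting or growth):

```python
# The suggested split: mirrored o/s pair, 8-drive RAID-10 for data,
# 4-drive RAID-10 for logs. RAID-1 and RAID-10 both halve raw capacity.
DRIVE_GB = 300  # 300 GB 15K SAS, per the prices above
layout = [
    ("o/s",      "raid-1",  2),
    ("sql data", "raid-10", 8),
    ("sql log",  "raid-10", 4),
]
for role, level, drives in layout:
    usable = drives * DRIVE_GB // 2
    print(f"{role:<8} {level:<7} {drives:>2} drives -> ~{usable} GB usable")
```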

the x25-m seems like a bad idea. hp only sells the X25-E for enterprise storage; they sell SATA for enterprise storage, but not the X25-M. i'm guessing they put some serious thought into "why don't we get some ocz vertex and sell it to our customers who buy 10K servers?" well, probably because after having a team of engineers actually test it in real life, it wasn't as reliable as sata (!?).

makes you go hmm

what i'm saying is: stick to what works. buy the right deal. run with proven technology. it may cost more or go slower, but when you have dell-tard service find a product that they didn't sell in their enterprise server, the first thing they will do is say remove it. or no service.

I have to explain this to so many people: they want a DL380 server and some empty sleds (ebay has them!) to put their crappy $100 1TB drives in. NO! i won't sell them to you; i would probably get in trouble as an authorized reseller, and when it goes tits up my name is going to be on the total package. do not try something that is not 100% authorized, because every single person from dell to microsoft is going to nail you to the wall over it.

deal hunt: right when a model drops off the map they mark them down huge, 40-60% off while the last stock remains. you can use that to your advantage: maybe open-box or excess stock. a lot of times we cannot return an opened custom-built server. our customer spends $5M a year with us; do we take it back? yup. can we return it to our distributor? nope. where does it go? on the website as open-box new, maybe never even used. someone ordered a server without a DVD drive (it doesn't have that option in lieu of more storage), so we eat it and ebay it or sell it likewise. nothing wrong with it; full warranty. i guarantee if you go to every big online var you will find these deals, as we all have to do the same thing.

So if you find yourself looking at a slightly old model, maybe a customer return we can't sell as new, you can find hella deals. you'd be surprised, man. the deal hunters here have no clue how much of a bargain some enterprise stuff goes for. it has $0 value on our books, so if we get $1 for it, great, it's gone. i've seen $1000 controllers go for $50 on ebay because, well, we don't care; clean up the junk/returns room. it's been long written off.
 