40TB Media Server Performance Issues

ashrafi

Junior Member
Jan 17, 2011
20
0
0
I have built a media server with the following specifications:
Gigabyte GA-H55-UD3H motherboard
Intel Core i5-680 processor
8 GB RAM, 1333 MHz (4 x 2 GB)
Dell PERC 6/i RAID card
Chenbro SAS expander
20 x 2 TB Western Digital Caviar Green in a RAID 6 configuration


I have set Intellipark to 300 seconds as discussed on some forums. After installing Server 2008 R2 I transferred 3.4 TB of data onto the RAID volume and it took 3 days (the maximum speed reached was around 30 MB/s). I have set the cache block size to 512 MB. I believe this speed is extremely slow. I made sure the RAID card was installed in the x16 PCIe slot. I might be missing something, so how can I increase the performance of this media server, or is it a bottleneck in the hardware itself?
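For reference, the implied average rate works out like this; a quick back-of-envelope check, decimal units assumed:

Code:
# Sanity check on the reported transfer: 3.4 TB copied in 3 days.
# Decimal units assumed (1 TB = 1e12 bytes, 1 MB = 1e6 bytes).
total_bytes = 3.4e12
seconds = 3 * 24 * 3600

avg_mb_s = total_bytes / seconds / 1e6
print(f"average rate: {avg_mb_s:.1f} MB/s")  # ~13 MB/s, consistent with a ~30 MB/s peak

Even if the copy were capped only by gigabit Ethernet (roughly 115 MB/s of payload), 3.4 TB should take about 8 hours, so the array rather than the network looks like the suspect.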
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The best way to tell whether it's the drives/controller or the filesystem would be to benchmark the block device by itself or with some other filesystems and see what the differences are, but I don't know of any good tools for that on Windows.

Has the syncing finished? If that's still running it'll slow performance.

Where were you transferring the data from? That may make a difference too.
 

ashrafi

Junior Member
Jan 17, 2011
20
0
0
Syncing was completed (the RAID 6 array was configured, initialized and background-built, and I installed 2008 R2 afterwards).
I let the computer run for 2 days (free run) and now I am trying to test it.
I have done the following tests:

Transferred 3.4 TB of files over the network (I know network traffic and bandwidth will cause issues)
Transferred 240 GB from an external drive (USB bandwidth bottleneck)
Transferred 500 GB of files locally (from the 2 TB RAID 1 OS drive to the RAID 6 storage drive)

In all three tests the write speed started out much higher (315 MB/s in the local transfer) and slowly degraded to only 30-35 MB/s.
This suggests the limiting factor is somewhere in the hardware setup itself (assumption).
Secondly, the maximum write speed of a Caviar Green is 84 MB/s (I should at least get 70 MB/s).

I chose GPT for the storage drive due to OS limitations, with a 16 KB allocation unit size; the stripe cache is 512 KB.
Maybe the striping is making it this slow... I don't know!
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
How much memory does the RAID controller itself have on it, and what is the caching set to?
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
I had an HP SmartArray controller once. For me, the problem turned out to be that the SmartArray software would only run on ProLiant hardware, and since I wasn't running ProLiant hardware, I was SOL. The controller could only attain decent performance if it was configured with that special software. Have you run whatever Dell software might be needed?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Yeah, it sounds like your controller is too weak. I would break the RAID into smaller chunks and see if it goes faster. Maybe you need 2 or 4 controllers for the job.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Just throwing this out there, but running SATA drives on a SAS board requires encapsulation (i.e. CPU time on the card), and with 20 drives that adds a lot of overhead to the process. I would also point out that the WD Green drives are "long recovery attempt" drives, which may cause the SAS controller to drop disks from the array quickly. Most servers either (a) come with special firmware for the SATA drives that shortens the disk read retry to basically zero, or (b) use enterprise disks that already do this. If a disk develops a silent error, then during a rebuild across that much disk the odds are decently high that you will see a 3+ disk drop, because the WD Greens take too long to respond to a spotty sector.
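A minimal sketch of the timing mismatch being described here; the timeout values below are illustrative assumptions, not PERC or WD specifications:

Code:
# Why a desktop drive without TLER/ERC can get dropped from a hardware RAID array.
# All timing values are assumptions for illustration only.
CONTROLLER_TIMEOUT_S = 8     # assumed controller command timeout
DESKTOP_RECOVERY_S = 120     # a desktop drive may retry a bad sector for minutes
TLER_LIMIT_S = 7             # TLER caps recovery so the controller stays in control

def drive_dropped(recovery_s, timeout_s=CONTROLLER_TIMEOUT_S):
    """The controller marks a drive failed if it stops responding longer than its timeout."""
    return recovery_s > timeout_s

print(drive_dropped(DESKTOP_RECOVERY_S))  # True: the drive is kicked from the array
print(drive_dropped(TLER_LIMIT_S))        # False: the error is reported and the RAID repairs the sector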
 

Golgatha

Lifer
Jul 18, 2003
12,240
646
126
Just throwing this out there, but running SATA drives on a SAS board requires encapsulation (i.e. CPU time on the card), and with 20 drives that adds a lot of overhead to the process. I would also point out that the WD Green drives are "long recovery attempt" drives, which may cause the SAS controller to drop disks from the array quickly. Most servers either (a) come with special firmware for the SATA drives that shortens the disk read retry to basically zero, or (b) use enterprise disks that already do this. If a disk develops a silent error, then during a rebuild across that much disk the odds are decently high that you will see a 3+ disk drop, because the WD Greens take too long to respond to a spotty sector.

http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery

This person speaks the truth.
 

bloinkXP

Senior member
May 16, 2003
369
0
0
All of what imagoon says is true. This is why enterprises only use these disks in the lowest tier of their storage offerings. Another thing to consider is the RAID variant you have chosen. RAID 6 is the slowest of the mainstream RAID types (JBOD, RAID 0, 1, 0+1, 5, 6, 10). RAID 6 is, say, 3 data + 2 parity drives whereas RAID 5 is 3 data + 1 parity, so RAID 5 gives you more usable storage at a slightly higher risk. The risk is a further disk failure, and given the MTBF on these disks you are probably safe for the lifetime of the warranty. The reason for multiple volumes is to help you partition (if you choose) and size the LUNs correctly.
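For a concrete sense of the capacity side of that trade-off (drive count and size taken from this build, decimal TB, hot spares and filesystem overhead ignored):

Code:
# Usable capacity of 20 x 2 TB drives under different RAID levels.
n, size_tb = 20, 2

raid5_usable = (n - 1) * size_tb      # one parity drive  -> 38 TB
raid6_usable = (n - 2) * size_tb      # two parity drives -> 36 TB
raid10_usable = (n // 2) * size_tb    # mirrored pairs    -> 20 TB

print(raid5_usable, raid6_usable, raid10_usable)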

Now, that being said, on boot-up take a look at the PERC controller and check the read/write cache settings. Remember that you are serving files and want reads cached. I am an HP man, but I am sure there is some parity between the two.

Good Luck!
 

ashrafi

Junior Member
Jan 17, 2011
20
0
0
@zephyrprime: Yes, you are right. I have had experience with HP 400-series and similar cards and came to a similar conclusion; the hardware wouldn't POST beyond the HP BIOS. But the Dell PERC cards are very friendly (there are workarounds for the Gigabyte board); they do not like Intel chipset RAID but work fine with Tyan and Asus.

@Emulex: The Dell PERC 6/i is designed to work in enormous environments; they are used with external SAS expanders holding 40+ drives, so I don't think it's the RAID card itself. Breaking the RAID into chunks might increase performance, but it's bad practice, and secondly the media server will hold files as big as 250-500 GB, which rules out the smaller-chunks option for me.

@imagoon: I am not exactly sure, but from the context and what Golgatha mentioned, the only difference between the WD Green and the RE is that the Green has Intellipark enabled and Time-Limited Error Recovery (TLER) disabled. However, it's a risk I am willing to take, because I can't find any patch for TLER and the price difference between 40 TB of Greens and 40 TB of WD RE is enormous (3x). So Intellipark has been sorted out; on TLER I will take my chances. About encapsulation... can you explain it, as I am not clear on it?

@bloinkXP: Totally agreed, but I set up RAID 6 thinking that, with so many drives and knowing the WD Greens' failure probability, it would be a good option to also have 2 hot spares. I agree it is not the best option, and RAID 50 would be better. There are two caching options on the controller: Write Back (writes are cached, faster, but data is lost on power failure, so it needs a battery attached to the RAID controller) and Write Through (writes go directly to the HDD, no battery needed, slower).

I am not 100% sure how big the cluster size / stripe cache size / block size should be for optimum performance!
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I am talking about this:

Serial ATA Tunneling Protocol

Basically, SATA is a point-to-point protocol, while SAS is one controller to multiple devices (and is multi-path, etc.).

To make these disks work, the controller takes the SATA command, encapsulates it in a SAS frame, and sends it to the endpoint, which strips the SATA command out of the SAS frame and feeds it to the disk; then the process is reversed to send replies back. All of this adds latency and load to the disk controller. Due to this design, SATA will always be "single path" as well. Real SAS disks have the standard SATA-looking connector but additional contacts on the back side of the connector to allow for multiple connections to the controller.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
The PERC 6/i is a 256 MB cache, LSI 1078E-based card. Honestly, that chipset (a 500 MHz PowerPC) is pretty old at this point, as it was well behind the dual-core IOP348 parts. Just because a huge number of drives is supported doesn't mean it is fast. Also, remember that your Green drives may be ~80 MB/s max, but the minimum is going to be a lot less. RAID 6 is basically 3x write amplification, since you are writing parity twice when you write data. If you have all 20 drives in one RAID 6 array, 30 MB/s would not be bad on big writes, since that 256 MB buffer will fill fast and you have slow drives doing lots of seeks.
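A rough sketch of the "buffer fills fast" effect, using the speeds reported earlier in this thread (the sustained figure is the observed one, not a drive spec):

Code:
# Why the local copy starts at ~315 MB/s and then settles: the controller's
# write-back cache absorbs the first burst, then throughput falls to whatever
# the array can actually sustain.
cache_mb = 256           # PERC 6/i onboard cache
burst_mb_s = 315         # speed reported at the start of the local copy
sustained_mb_s = 33      # speed reported once it settles (30-35 MB/s)

fill_rate = burst_mb_s - sustained_mb_s       # net rate at which the cache fills
seconds_until_full = cache_mb / fill_rate
print(f"cache full after ~{seconds_until_full:.1f} s")  # roughly 1 second of "fast" copying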

Did you try breaking the 20x drives into smaller RAID 6 arrays? Like 2x 10 drive arrays or 3x 6 drive arrays with 2 hot spares? There is a really good chance you will see better performance.

Also, SAS expanders do not help seek times so you basically have a slow controller, slow disks, and a SAS expander adding a tiny bit of latency in the equation.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
20 * 2Tb Western Digital Caviar Green in Raid 6 configuration
Wait, you mean that you are using one single massive 20-drive RAID 6 array?
This is a really bad idea, not just in terms of performance, but also because you are nearly guaranteed data loss.
RAID 5 arrays should max out at 4 to 6 drives (depending on who you ask), RAID 6 at 8 to 10 (depending on who you ask).
One giant 20-drive RAID 6 array is simply unsafe.
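To make the data-loss concern concrete, here is the usual unrecoverable-read-error (URE) back-of-envelope, using the commonly quoted 1-per-1e14-bits figure for consumer drives (an assumption, not a measured value for these disks):

Code:
# Rough URE math behind the "huge arrays are risky" argument. Once a 20-drive
# RAID 6 has already lost two disks, the rebuild has no parity left and must
# read every remaining drive without a single unrecoverable error.
import math

URE_PER_BIT = 1e-14          # commonly quoted consumer-drive spec (assumption)
survivors = 18               # drives that must be read cleanly during the rebuild
drive_bytes = 2e12           # 2 TB per drive, decimal

bits_to_read = survivors * drive_bytes * 8
p_clean = math.exp(-bits_to_read * URE_PER_BIT)   # ~ probability of zero UREs
print(f"chance of a URE-free rebuild: {p_clean:.0%}")   # roughly 6%

Smaller arrays shrink the amount of data that has to be read back perfectly, which is the motivation for the 8-to-10-drive ceiling above.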

Plus, it would probably hurt performance.
And with the amount of money you are spending and data you are storing, why are you using such an inferior OS as MS server 2008?
I recommend you get ZFS using either FreeBSD or Oracle Solaris Express 11, or at the very least something Linux-based; maybe use FreeNAS (FreeBSD-based). But ZFS is by far the best.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
1. Have you verified the RAID array's performance without a filesystem, or by copying a single huge file (e.g. 100 GB) from a suitably fast drive or RAID array, or by using STR (sustained transfer rate) benchmarking software?

Copying multiple small files (random writes) will severely lower transfer speeds. While RAID 6 is the ideal choice for streaming, its performance is shockingly bad for random writes (stripe size makes little difference). With such a large number of drives performance will be further degraded, even more so as your drives are slow.

To avoid catastrophically bad random write performance, you need to enable 'write-back' caching on the PERC. (N.B. Write back is only available when the PERC's backup battery is installed and in good condition. A missing, malfunctioning or discharged battery will automatically disable write back).

2. Have you installed the latest firmware for the PERC? Early versions of the PERC 6/i firmware have catastrophic performance bugs.

3. Have you installed an HSF on the PERC? The PERC must have a fan installed. When installed in a Dell server, the server case is specifically designed with forced airflow past the cards, so an additional fan is not required on the card. In a standard ATX case, which does not have this special airflow pattern, the card will overheat without an additional fan.

4. The PERC 6/i has a weak CPU, but even so it should be good for 200 MB/s on RAID 6 with 20 drives, well above the capacity of gigabit Ethernet. As you are using gigabit networking, there's no real need for a faster card. (By contrast, software RAID on your CPU should be good for close to 2000 MB/s, apart from the fact that there aren't enough PCIe lanes to move 2000 MB/s to the SAS card.) For streaming media the performance of RAID 6 is excellent, almost indistinguishable from RAID 0 for both reading and writing, apart from the limitations of the parity engine on the RAID card. (RAID 6 for streaming purposes has no 'write amplification'.) However, video-on-demand arrays should be configured with the largest possible RAID stripe size, as this dramatically improves performance when serving multiple streams simultaneously.
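As a rough sanity check on those numbers, here is the kind of ceiling arithmetic involved; the per-drive rate and the controller figure are assumptions for illustration, taken from the ballpark values mentioned in this thread:

Code:
# Back-of-envelope sequential-write ceilings for one big RAID 6 array.
n_drives = 20
parity_drives = 2
per_drive_mb_s = 70           # conservative sustained rate for a 2 TB Caviar Green (assumed)
controller_mb_s = 200         # ballpark limit of the PERC 6/i parity engine (assumed)
gigabit_lan_mb_s = 115        # practical gigabit Ethernet payload rate

stripe_rate = (n_drives - parity_drives) * per_drive_mb_s   # 1260 MB/s if only the disks mattered
local_ceiling = min(stripe_rate, controller_mb_s)           # controller becomes the cap locally
lan_ceiling = min(local_ceiling, gigabit_lan_mb_s)          # network becomes the cap over the LAN

print(local_ceiling, lan_ceiling)   # ~200 MB/s locally, ~115 MB/s over gigabit

The observed 30-35 MB/s sits far below every one of these ceilings, which is what makes a configuration or caching problem more likely than a raw hardware limit.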
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Copying multiple small files (random writes) will severely lower transfer speeds. While RAID 6 is the ideal choice for streaming, its performance is shockingly bad for random writes (stripe size makes little difference). With such a large number of drives performance will be further degraded, even more so as your drives are slow.
Random writes on spindle drives are shockingly bad to begin with; RAID doesn't improve them.
SSDs are so amazingly fast because they are over 100x faster than spindle drives at random writes/reads, not because they are ~2x faster at sequential writes/reads.

PS: not only should you go ZFS, you should ditch the RAID controller. Just get a SATA controller with no RAID and lots of ports, and let ZFS see and handle the drives directly for best performance. Use 6+ drives per RAID-Z2 array (the RAID 6 equivalent). So probably 3x 6-drive RAID-Z2 arrays with 2 hot spares is best.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Areca has 4 GB of cache for 24 ports. How many GB of BBWC can you go up to?
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Wait, you mean that you are using one single massive 20-drive RAID 6 array?
This is a really bad idea, not just in terms of performance, but also because you are nearly guaranteed data loss.

So a 20-drive RAID-Z2 array is actually not THAT bad so long as you have data scrubbing. Granted, with 2 TB drives... a 40 TB array of slow "green" disks is fairly scary, since rebuild times are not going to be helped by the slow drives. I will say that I would not run a 20-drive RAID array myself, because I would rather split it into two to be safer, even if I need to buy two more drives for <$200.

@Emulex: The PERC 6/i is fixed at 256 MB, less than the LSI-branded 1078 cards with 512 MB.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
So a 20-drive RAID-Z2 array is actually not THAT bad so long as you have data scrubbing.

RAID-Z2 with regular scrubbing is indeed a whole lot better than a 20-drive RAID 6.
But, like you, I would split it into two or even three even then.

With ZFS you can actually combine the various RAID-Z2 vdevs into a single pool (essentially a RAID 0 of RAID-Z2 arrays) if you really wanted to. But the inability to shrink the pool or losslessly remove vdevs (you can only replace their drives or add more) makes me wary of the prospect.
 

ashrafi

Junior Member
Jan 17, 2011
20
0
0
@Mark R: Very helpful. I compiled the following list:

Firmware update - Done
Battery attached - Done
Write Back enabled - Done
HSF installed - Done
CPU utilization during transfer - Almost idle

RAID alignment = Need help!
I found the following article:
http://support.microsoft.com/kb/929491
I'm not sure whether I can use diskpart in that manner while the RAID is already configured and I am in Windows.
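The alignment check itself is just modular arithmetic; a minimal sketch under the assumptions that the stripe element is the 512 KB mentioned earlier and the NTFS cluster is 16 KB (the offsets shown are only examples; read the real value with "wmic partition get Name, StartingOffset"):

Code:
# Check whether a partition's starting offset lines up with the array's stripe
# element (the concern in KB 929491). Offsets below are examples only; read the
# real value with: wmic partition get Name, StartingOffset
def is_aligned(starting_offset_bytes, stripe_kb=512, cluster_kb=16):
    """Aligned if the partition start is a whole multiple of the stripe element
    (which here is also a multiple of the cluster size)."""
    unit = max(stripe_kb, cluster_kb) * 1024
    return starting_offset_bytes % unit == 0

print(is_aligned(63 * 512))      # False: legacy 63-sector offset, misaligned
print(is_aligned(1024 * 1024))   # True: 1 MiB offset, aligned for a 512 KB stripe

Note that diskpart only sets the offset when a partition is created, so fixing a misaligned storage volume would mean backing up, recreating the partition, and restoring.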

I will post results soon.

For all the folks talking about Linux and Solaris: I agree and appreciate it, but I don't have the knowledge to deal with such a server on a daily basis. Something saved for future builds.

What exactly should I test in IOmeter? There are multiple tests. Help!
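For a crude alternative to IOmeter that matches this workload, a sequential large-block write test gives the number that matters; here is a minimal sketch (the target path is hypothetical, point it at the RAID 6 volume):

Code:
# Minimal sequential-write throughput check, standing in for an IOmeter
# "large transfer, 100% sequential write" profile. Writes 8 GiB in 1 MiB blocks
# so the test blows well past the 256 MB controller cache.
import os, time

TARGET = r"E:\bench.tmp"   # hypothetical drive letter for the RAID 6 volume
BLOCK = 1024 * 1024        # 1 MiB per write
TOTAL = 8 * 1024**3        # 8 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(TARGET, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    os.fsync(f.fileno())   # make sure the data is actually pushed out to the array
elapsed = time.time() - start
print(f"{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TARGET)

In IOmeter terms, the closest equivalents are the sequential read and sequential write tests with a large transfer request size; the random 4 KB tests will look terrible on any spindle-based array and are not representative of serving large media files.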
 

ashrafi

Junior Member
Jan 17, 2011
20
0
0
Mark, what exact numbers are you looking at to determine performance? With all these benchmarks I need to know exactly what to look for, and is it normal for write speeds to be as low as 103 MB/s (is there any formula...)?
I have heard a lot about RAID alignment; how would that work?
 