Is RAID 0 slower than a single drive configuration?

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
I've read a few RAID setup FAQs, and I've come to the conclusion that if I were to take a drive and run it as a single drive, it would be faster than 2 of that same drive in a RAID 0 setup. Faster meaning higher data throughput for both reading and writing.

I'm curious because I was considering running a RAID 0 setup if it was faster. I mostly do basic desktop applications, and play games. Is there a difference between on-board RAID and PCI card RAID setups?
 

Ionizer86

Diamond Member
Jun 20, 2001
5,292
0
76
RAID 0 is faster, since it uses both drives for read and write operations. But if you don't do anything intensive, you may not notice a speed difference going to RAID 0.
 

goog

Golden Member
Sep 8, 2000
1,076
0
0
No, RAID 0 is faster overall; otherwise, why risk it?
RAID 1 isn't any faster because you're mirroring, not striping, the drives.

Basic PCI RAID cards are the same as onboard.

For basic desktop apps you won't notice any improvement, for loading games you might.


 

killmeplease

Senior member
Feb 15, 2001
972
1
0
RAID 0 is definitely faster! I would never go back to a single drive.

But I also wouldn't leave my important docs exclusively on a RAID0 config either. I've learned my lesson the hard way.
 

Scootin159

Diamond Member
Apr 17, 2001
3,650
0
76
For the money, I've found you can get a nice 10,000 RPM SCSI setup; while it will lose in capacity, it will be much faster for everyday apps (much lower seek time, i.e. the time between files).
 

RazeOrc

Senior member
Nov 16, 2001
269
0
0
RAID 0 is by far the fastest. Most people say RAID 0 doesn't do much under normal circumstances, and I would agree if all you're using is an integrated onboard controller or a simple "software controller RAID card." If you use full-blown hardware RAID you'll notice a difference all the time, although you'll also notice the extra $100 price difference between the two missing from your wallet.

If you have tons of money to burn, go SCSI, or just put a SCSI controller in and run a boot drive off of it. If, however, you are more realistic and don't have an unlimited defense budget, then go software RAID in RAID 0; it's fast.

And as for the danger of losing everything, that's why you should at the very least put in 3 drives: 1 for your boot, with various partitions to keep backup files on, which will give you a normal amount of safety and reliability, and then 2 more in RAID 0 for everything that is non-crucial (storage, games, etc.). This is only a personal recommendation for what has worked very well for myself.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81


<< RAID 0 is by far the fastest. Most people say RAID 0 doesn't do much under normal circumstances, and I would agree if all you're using is an integrated onboard controller or a simple "software controller RAID card." If you use full-blown hardware RAID you'll notice a difference all the time. >>



Sorry. Intuitive, but untrue. The AT comparison found the hardware solutions to be slower than their "soft hardware" counterparts. Why?

(Quote)
"If you recall, we showed that all RAID functions, not just the RAID 5 functions, were calculated on the hardware RAID cards. What does this mean in terms of actual performance? Well, it does mean that the system CPU does not have to do as much work. At the same time, it also means that the hardware RAID solutions will perform slower.

The reason for this is that with the high speed of CPU processors available today, like the Duron 850MHz processor used in this review, the CPUs are able to perform the RAID calculations faster than the hardware RAID coprocessors can. It is true that when moving to RAID 5 the CPU is overburdened with parity bit calculations, but in RAID 0 arrays the CPU has plenty of power to devote to a few simple calculations per clock.

What does this mean to you? Well, if you are a home computer user looking for only a RAID 0 or RAID 1 configuration, there is absolutely no reason to spend the extra money for a hardware RAID card. In fact, you are actually better off performance wise going with a cheaper software RAID solution."

Full page available here.
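The article's point that RAID 0 needs only "a few simple calculations per clock" is easy to see: striping is just integer arithmetic mapping a logical block to a disk and an offset. A toy sketch (the names, stripe size, and disk count are made up for illustration, not any real driver's code):

```python
# Toy illustration of RAID 0 striping arithmetic: map a logical block
# address to (disk index, block offset on that disk). Stripe size and
# disk count are arbitrary example values.

STRIPE_BLOCKS = 64   # blocks per stripe unit (e.g. 32 KB with 512-byte blocks)
NUM_DISKS = 2

def raid0_map(lba):
    """Return (disk, offset) for a logical block address."""
    stripe = lba // STRIPE_BLOCKS      # which stripe unit the block falls in
    within = lba % STRIPE_BLOCKS       # offset inside that stripe unit
    disk = stripe % NUM_DISKS          # stripe units alternate between disks
    offset = (stripe // NUM_DISKS) * STRIPE_BLOCKS + within
    return disk, offset

# Consecutive stripe units land on alternating disks:
print(raid0_map(0))    # (0, 0)
print(raid0_map(64))   # (1, 0)
print(raid0_map(128))  # (0, 64)
```

A division, a modulo, and a multiply per request is why a modern host CPU barely notices the work.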
 

PH0ENIX

Member
Nov 20, 2001
179
0
0
I don't know if I agree with that...

It really depends on what processor is on your hardware RAID controller.

BTW, in this post I'm talking about SCSI RAID, not IDE.

Look at it like this:

If a piece of code runs on an 8088 processor, and takes 1.666 seconds to execute, and that same piece of code, running on a p4 2.2, takes 1.661 seconds to execute, are you really going to notice the difference?

The processor on your RAID card is a dedicated I/O processor. All it ever has to do is sort and send instructions to the disks - it doesn't take huge amounts of grunt to do that.

Most older RAID controllers, using something like an i960 RISC chip @ 33MHz, would have to be doing some incredibly INTENSE I/O to put full utilization on the processor.

It's true, and it's an interesting theory, that your system CPU could process the instructions much faster - but you're talking about really minute differences.

Think about a 486 system, quite capable of running as an IP-forwarding internet gateway Linux box.

Imagine that same system, and the only thing it has to do is handle disk I/O instructions...
Now remember that that's just a low-end controller... the high-end ones have much more powerful processors, intended for high-end, high-speed arrays with lots of disks (>10).

So, you've got a dedicated I/O processor, more than capable of handling the load, and you've got no problem with:

"oh my NT installation just died and I didn't keep my drive configuration diskette so I've just lost all my array data"
or
"oh my CPU usage is like 10% when i'm thrashing my array, and my array is being thrashed coz my system is under load, coz it could really do with that extra 10% processor time"

I don't believe you are 'better off' with software RAID, if you can afford a hardware controller.
Just because your processor can theoretically execute the I/O operations faster doesn't mean your overall system performance is going to be better.
Just think about it: the more load you have on your disk operations, the more load you're going to have across the board, because your CPU is doing extra work. Essentially, the more load you put on your system, the larger the overhead gets.

That doesn't sound too efficient to me...

Oh and BTW again,
RAID 0 = faster than a single drive of the same specs, for certain.
RAID 1 is actually SLOWER than a single drive, because it has to do every single operation in tandem to mirror the data.
RAID 5 is still much faster than a single drive, but slightly slower than RAID 0, because it has to store the parity data.
RAID 5E is slower again, because it has to put the parity data across all drives in the stripe; but I think that little performance loss is a small price to pay for the luxury of fault tolerance while still retaining RAID 0 capacity levels...
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
No offense, but I find this argument flawed in several ways.

First off, I'm not "theorizing." I posted the link... did you look at it? The hardware RAID controllers scored lower in Winstone Content Creation... that's about as "real world" as you're going to find.

Second, we're not discussing high-end SCSI RAID configurations. We're discussing cheap IDE RAID. So let's stick to that topic.

Some details...




<< If a piece of code runs on an 8088 processor, and takes 1.666 seconds to execute, and that same piece of code, running on a p4 2.2, takes 1.661 seconds to execute, are you really going to notice the difference? >>



This is simply impossible. Let's assume that there's not even a difference in efficiency and/or pipelining between the 8088 and P4. Then the same task takes the same number of cycles to complete. Even so, the P4 runs at a higher clock rate - it will complete many such tasks before the 8088 completes the first. Granted, you're exaggerating with the 8088 part, but the principle remains the same.



<< The processor on your RAID card is a dedicated I/O processor. All it ever has to do is sort and send instructions to the disks - it doesn't take huge amounts of grunt to do that.

Most older RAID controllers, using something like an i960 RISC chip @ 33MHz, would have to be doing some incredibly INTENSE I/O to put full utilization on the processor.
>>



You miss the point. Of course you're not going to max the utilization on the coprocessor. But the controller's proc runs slower than the main CPU whether it's 10% or 99% utilized.



<< It's true, and it's an interesting theory, that your system CPU could process the instructions much faster - but you're talking about really minute differences. >>



Not so minute that they don't show up in benchmarks, at least. I'm not saying the difference is tremendous. But why pay more for something that performs worse?



<< Think about a 486 system, quite capable of running as an IP-forwarding internet gateway Linux box.

Imagine that same system, and the only thing it has to do is handle disk I/O instructions...
>>



And imagine an OC'ed Athlon XP or P4 doing the same thing. A modern CPU under 99.99% of user loads will have more than enough cycles to spare for simple RAID 0 calculations. Again, you're mixing utilization with speed.



<< Now remember that that's just a low-end controller... the high-end ones have much more powerful processors, intended for high-end, high-speed arrays with lots of disks (>10). >>



Possibly true, but we're not talking about $1000 controllers here.



<< So, you've got a dedicated I/O processor, more than capable of handling the load, you've got no problem with;

"oh my NT installation just died and I didn't keep my drive configuration diskette so I've just lost all my array data"
>>



What are you talking about? The array configuration is stored in the controller's BIOS. You can wipe your OS as much as you like. The array's just like any disk as far as that's concerned.



<< or
"oh my CPU usage is like 10% when i'm thrashing my array, and my array is being thrashed coz my system is under load, coz it could really do with that extra 10% processor time"
>>



The same test I linked to showed a 6% increase over a 40% base I/O load using the software controller cards. Considering that nobody in their right mind would ever run a load this high, you're looking at a difference of possibly 2 or 3 percent by using software controllers.



<< I don't believe you are 'better off' with software RAID, if you can afford a hardware controller.
Just because your processor can theoretically execute the I/O operations faster doesn't mean your overall system performance is going to be better.
Just think about it: the more load you have on your disk operations, the more load you're going to have across the board, because your CPU is doing extra work. Essentially, the more load you put on your system, the larger the overhead gets.

That doesn't sound too efficient to me...
>>



I don't care what it sounds like, just look at the numbers please.



<< Oh and BTW again,
RAID 0 = faster than a single drive of the same specs, for certain.
RAID 1 is actually SLOWER than a single drive, because it has to do every single operation in tandem to mirror the data.
RAID 5 is still much faster than a single drive, but slightly slower than RAID 0, because it has to store the parity data.
RAID 5E is slower again, because it has to put the parity data across all drives in the stripe; but I think that little performance loss is a small price to pay for the luxury of fault tolerance while still retaining RAID 0 capacity levels...
>>



That's correct, at least.
 

PH0ENIX

Member
Nov 20, 2001
179
0
0
Well, first off, in the part about losing the RAID config, I was talking about a software RAID configuration.
No controller, no BIOS; I mean Disk Administrator-created stripes, what you would refer to as software RAID.

This bit about software-controlled physical RAID controller cards I really don't understand; it's not something I've ever seen before, but if I were interested in IDE RAID, it'd be something I'd want to research. I might do so anyway, just in case I ever need to know...

You are correct, I was exaggerating with the 8088, but it's not a physical impossibility.
Granted, it's off the subject, but you should be aware that there IS code which executes faster on an 8088 than it does on a P3, and you would assume a P4 also. But like I say, that's off topic, as the 'code' we ARE talking about doesn't.

Now I reiterate that I thought we were talking about pure software RAID configs vs. hardware controllers, possibly because I didn't visit the link and just went on what you had said; my bad.

So what you're talking about here is cards that contain the array config but don't process the instructions for the drives?
Like a pseudo-hardware controller of sorts...

In that case, it seems like someone would add an I/O processor to a card and charge more for it, but it has no advantage whatsoever?
Granted, there's less overhead on the CPU, but apparently that idea is a waste of time, since the CPU wouldn't really notice the added load; I/O operations are not the most intensive tasks in the world. So they add a chip to take away load that isn't even detrimental, and in the process they lessen performance.

That's a damn good marketing strategy, that one.

The way I read it, it's basically saying that unless your system is under constant, extreme load, hardware RAID controllers are not only unnecessary but detrimental when compared to software-driven controllers.
So it might not be practical for your average home user. Funny, that, since RAID was never geared towards your average home user in the first place...

I'd be VERY interested to see the same sort of thing with SCSI, but as far as I'm aware there are none of these pseudo-hardware controllers in SCSI flavour.

Besides all that, I like to keep as much load away from my Celeron as possible... for obvious reasons
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81


<< Granted, it's off the subject, but you should be aware that there IS code which executes faster on an 8088 than it does on a P3 - and you would assume a P4 also. >>



Interesting... not being much of a CPU/programming person, I wasn't aware of that.



<< So what you're talking about here is cards that contain the array config - but don't process the instructions for the drives?
Like a pseudo-hardware controller of sorts...
>>



Yup. The cards do *something*, but I don't know exactly what. They are significantly faster than running "pure" software RAID.



<< In that case It seems like someone would add an I/O processor to a card, charge more for it, but it has no advantage whatsoever? >>



Not exactly, but true for RAID 0 and RAID 1. The XOR calculations needed for RAID 5 *are* demanding enough that they seriously tax the host CPU. That's where the need for pure hardware RAID comes in.
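The XOR parity mentioned here is worth seeing concretely: RAID 5 parity is a bytewise XOR across the data chunks in a stripe, which is also exactly how a lost chunk gets rebuilt. A toy sketch (chunk contents and the four-disk layout are made up for the example):

```python
# Toy sketch of RAID 5 parity: parity = bytewise XOR of all data chunks
# in a stripe, so any one lost chunk can be recovered by XORing the
# surviving chunks with the parity.

def xor_bytes(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data chunks on three disks
parity = xor_bytes(data)             # parity chunk on a fourth disk

# Simulate losing the second disk and rebuilding its chunk:
rebuilt = xor_bytes([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

Doing this for every write (read old data, read old parity, XOR twice, write both) is the load that makes RAID 5 far more CPU-hungry than striping.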



<< I'd be VERY interested to see the same sort of thing with SCSI - but as far as I'm aware there are none of these pseudo-hardware controllers in SCSI flavour. >>



With good reason... most SCSI RAID systems are either mid-to-high end Windows servers or *nix rigs. Windows boxes of that caliber would rather use RAID5 than 0 or 1. And *nix's pure software RAID is plenty adequate for RAID 0 or 1. I'm sure that "soft" cards could be made for SCSI, but there just wouldn't be much of a market.

 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
"Sorry. Intuitive, but untrue. The AT comparison found the hardware solutions to be slower than their "soft hardware" counterparts. Why?"

Clever, sorry to burst your bubble, but that AT "review" was horrid. Nothing of any worth can be pulled from that article. Properly implemented hardware RAID solutions will perform better than pseudo-software with basically no exceptions, regardless of how slow the onboard processor is. The conclusions for the poor performance of the hardware implementations were wrong.

The reason the Adaptec AAA-UDMA performed so poorly is that it's a SCSI RAID controller with an IDE translator; I didn't see that mentioned anywhere in the article. Obviously any kind of translator like that is going to destroy performance. I have no idea why the Promise controller performs so poorly, but it was a huge disappointment in the eyes of everyone who was looking forward to it. Promise released another SuperTrak RAID card that uses a variation of the same I/O processor (the i960 used in many SCSI RAID controllers) as the original, and it performs significantly better, which means the problem with the first card was probably firmware- or driver-caused.

RAID 0 requires practically no calculations. Most people could probably calculate it by hand quickly enough to keep up with the array. The drives are so much slower than the processor that the processor will be sitting idle practically the whole time running RAID 0. It is absolutely not the bottleneck causing the poor performance.

The fastest IDE RAID cards are hardware based, but this review didn't have any of them. All the 3Ware cards, the Adaptec 1200A/2400A and the Promise SuperTrak SX line are all hardware based cards that will outperform software cards especially in RAID 5, but none of them were reviewed.

"RAID1 is actually SLOWER than a single drive, because it has to do every single operation in tandem; to mirror the data."

Not necessarily true. If the card load-balances reads, it will be just as fast as RAID 0 for reads. The 3Ware cards do, I don't recall offhand if any other IDE RAID cards do.
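The load-balancing idea is simple: both mirrors hold identical data, so a controller can split reads between them while writes must hit every copy. A toy illustration (real cards use smarter policies, e.g. shortest queue or head position; the class and round-robin policy here are invented for the sketch):

```python
# Toy model of RAID 1 with load-balanced reads: reads alternate
# round-robin between mirrors, writes go to every mirror.

from itertools import cycle

class Mirror:
    def __init__(self, disks):
        self.disks = disks                      # identical copies of the data
        self._next = cycle(range(len(disks)))   # round-robin read scheduler

    def read(self, block):
        disk = next(self._next)                 # pick the next mirror in turn
        return disk, self.disks[disk][block]

    def write(self, block, value):
        for d in self.disks:                    # writes must update every mirror
            d[block] = value

m = Mirror([{0: "x"}, {0: "x"}])
print(m.read(0))  # (0, 'x')
print(m.read(0))  # (1, 'x')  -- same data, served from the other disk
```

With two independent heads serving reads, throughput on read-heavy loads can approach a two-disk stripe, which is why a card without this feature leaves RAID 1 read performance at single-disk levels.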

"RAID5 is still much faster than a single drive, but slightly slower than RAID0, because it has to store the parity data."

RAID 5 on anything but highend SCSI hardware performs extremely poorly in writes. The first generation hardware IDE RAID cards, and software implementations will score in the single digit MB/s in writes regardless of the speed of the drive. The most recent 3Ware cards perform respectably in RAID 5.
 

Civic2oo1x

Senior member
Jan 29, 2002
342
0
0
I'm running a RAID 0 and notice a significant increase in seek times compared to when I had both drives separate. (Didn't have WinXP RAID support till a few weeks ago.)
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
RAID 0 should not increase access time. In fact, it usually decreases access time slightly due to better read head positioning. If your access time is increasing, especially by large amounts, you should take a look at your array and try to figure out what's wrong with it.
 

Armitage

Banned
Feb 23, 2001
8,086
0
0


<< The fastest IDE RAID cards are hardware based, but this review didn't have any of them. All the 3Ware cards, the Adaptec 1200A/2400A and the Promise SuperTrak SX line are all hardware based cards that will outperform software cards especially in RAID 5, but none of them were reviewed. >>



Yeah, I'll second that!
And now 3ware is back in the IDE RAID business (they were going to drop it for a while).
The early Promise hardware controllers were very bad, occasionally losing to software-based models. The newer ones are supposed to be pretty good.
The software-based controllers are nice for a few extra ports, but not worth much for anything other than RAID 0 (which I won't use due to reliability concerns).

For the full story, take a look @ storagereview.com. BTW, this excellent site is in danger of going under.


 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
OK, having spent yet more time reading SR, I'll stand corrected. Would you say that the AT explanation is just grasping at straws to find an explanation for the two crappy hardware controllers tested?

I'm still interested in an explanation, but now a different one: why are the better hardware controllers faster than soft hardware for RAID 0? From what's been discussed, we seem agreed that the processing requirements for striping are minimal. Minimal enough that even a slower coprocessor can work fast enough to keep up with the physical limitations of the array. Thus, it seems that the presence of a coprocessor, in itself, has no effect on performance. I see three possible explanations myself; please correct me if I'm wrong.

1) Performance takes a hit on soft cards because instructions have to travel across the PCI bus to be processed, and then returned. I'm no bus techie, but it seems that this wouldn't take long enough to make the array wait up for it. Correct or not?

2) The pure hardware cards have better firmware or some other improvement in architecture. But then why couldn't soft cards contain comparable firmware as well?

3) The soft cards require the OS to interact with a driver in order to perform the calculations. Thus, the CPU has to go through system RAM to process its instructions. But wouldn't this still be fast enough not to impact performance?

If I had to guess, I'd pick #3, but I don't really know. Please explain if you can; I'm not trying to be argumentative, it's just that I find filesystems and storage quite interesting and want to be sure I understand them correctly.

Two other, minor bits:
1) The Adaptec 1200A is not a pure hardware controller. It does not support RAID 5 like its more powerful and expensive brother, the 2400A. SR tests showed it to be very similar in performance to the Iwill Side-RAID, a soft card using the HighPoint chipset.

2) Most of those "better" hardware cards are quite expensive, definitely more than most people looking at RAID 0 on these boards want to pay. The cheapest I found, at Hypermicro, was the Escalade 6200 for $120 - only 2 channels, so only RAID 0 or 1. The SR benches for the 4-channel 6400, though not up to par with the newest Escalades, are still significantly better than the soft card controllers. So would the 6200 be the controller of choice for "budget RAID"?
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
"Would you say that the AT explanation is just grasping at straws to find an explanation for the two crappy hardware controllers tested?"

Pretty much. They should leave storage reviews to other sites, or at least do more research, as most people assume info posted on this site to be accurate above all others.

As for why hardware is faster than software, I don't know for sure, but my guess would be 1, with 2 and 3 basically rephrasing what 1 says. Making multiple passes through the PCI bus, CPU, and main memory can't do anything to improve speed. The more drives you add, the worse software-based RAID performs, which would lend more credence to that guess.

"But then why couldn't soft cards contain comparable firmware as well?"

If you're a company like Promise who produces both, you need as many reasons as possible to charge 5 or 10 times more for a hardware card. R&D costs money, so the better/more expensive products get the better R&D.

My bad on the 1200A, fell asleep on that one; it is a 2-channel card not capable of RAID 5, but I would still buy one of those over a Promise or IWill card.

Hardware costs more because it should perform better; you get what you pay for. If you don't need RAID 5 or more than 2 drives, something like the mentioned 3Ware card should be fine. Personally, I would avoid ATA RAID altogether, but if that was your only option, you should go with something from 3Ware. Their higher-end cards seem kind of goofy to me; I don't know what someone would do with a 6- or 8-channel ATA RAID card, as the cable routing for that would be an absolute nightmare. Most cases wouldn't be able to fit that many drives and still be able to connect to the card with an 18" cable.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Just to correct a common misconception.
RAID 1 will have superior read performance compared to a single drive; write performance is what MAY suffer. This is not always the case though; it depends on the setup.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
RAID 1 read performance will only be better if the card supports load balancing. Unfortunately, very few ATA RAID cards do. 3Ware cards do, and I believe one or two others do that I can't recall offhand; I don't believe any are software/pseudo-software.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76


<< Unfortunately, very few ATA RAID cards do >>


Hmm, didn't know that.
Oh well, just yet another reason to steer clear of IDE RAID I guess.
 

Crassus

Member
Oct 21, 2001
171
0
0
From my personal point of view:

I've been using two striped IBM DTLA-305040s (IDE/RAID 0) on a Promise FastTrak 100 since Dec 2000.

I've never seen Win2000 boot up this fast anywhere else with a single drive. I also run a third drive for the most important data.
I think if you happen to have a bit of troubleshooting knowledge, you really should consider running a two-disk, hardware-based stripe.

If application loading speed matters to you (and swapfile speed as well), I'd really advise you to go for it.

A few things to consider along the way:
1. If you start with an onboard RAID controller from Promise, then the array will run under the aforementioned card as well without any tuning; just plug it in.
2. Changing from two channels to one and vice versa works the same: plug and play at its best.
3. If you happen to carry around huge amounts of data on hard disks and you need speed, then a setup of a controller card plus two disks isn't the worst either. Did this and loved it. Swapping around 70 gigs of data to a Pentium 100 can be a pain in the you-know-where (BIOS limitations, anyone?), but no problem with this one.
 

Carceri

Member
Aug 7, 2001
119
0
0
I have a 3Ware Escalade 6200 RAID card. In RAID 0 with two disks it beats a single disk in all read tests, but in write tests it's almost 3 times slower than a single disk. I have no idea why, but from what I have read on the newsgroups this is a common thing with some 3ware cards. If anyone has an explanation, I would like to hear it.
 