No offense, but I find this argument flawed in several ways.
First off, I'm not "theorizing." I posted the link... did you look at it? The hardware RAID controllers scored lower in Winstone Content Creation... that's about as "real world" as you're going to find.
Second, we're not discussing high-end SCSI RAID configurations. We're discussing cheap IDE RAID. So let's stick to that topic.
Some details...
<< If a piece of code runs on an 8088 processor, and takes 1.666 seconds to execute, and that same piece of code, running on a p4 2.2, takes 1.661 seconds to execute, are you really going to notice the difference? >>
This is simply impossible. Assume there's no difference at all in per-clock efficiency or pipelining between the 8088 and the P4; then the same task takes the same number of cycles on both chips. Even so, the P4 runs at a vastly higher clock rate, so it finishes that cycle count in a fraction of the time - it would complete hundreds of such tasks before the 8088 finished its first. Granted, you're exaggerating with the 8088 part, but the principle holds at any clock gap.
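To put numbers on it - a back-of-the-envelope check, assuming (generously) the same cycle count on both chips and the classic 4.77 MHz 8088 clock:

```python
# Same-cycle-count assumption: if the code really took 1.666 s on a
# 4.77 MHz 8088, how long would that many cycles take at 2.2 GHz?
CLOCK_8088 = 4.77e6   # Hz (classic 8088 clock; an assumption here)
CLOCK_P4 = 2.2e9      # Hz
TIME_8088 = 1.666     # seconds, from the quoted example

cycles = TIME_8088 * CLOCK_8088   # total cycles of work (~7.9 million)
time_p4 = cycles / CLOCK_P4       # same cycles at the P4's clock

print(f"cycles: {cycles:.0f}")
print(f"P4 time: {time_p4 * 1000:.2f} ms")   # ~3.61 ms, not 1.661 s
```

So even with zero architectural improvement, the P4 finishes in milliseconds. The quoted 1.661 s figure would require the P4 to burn roughly 460 times as many cycles on the same work.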
<< The processor on your RAID card is a dedicated I/O processor. All it ever has to do is sort and send instructions to the disks - it doesn't take huge amounts of grunt to do that.
Most older RAID controllers, using something like an I960 RISC chip @ 33mhz, would have to be doing some incredibly INTENSE I/O to put full utilization on the processor. >>
You're missing the point. Of course you're not going to max out the coprocessor. But the controller's processor runs slower than the main CPU regardless of whether it's 10% or 99% utilized - utilization and speed are two different questions.
<< It's true, and it's an interesting theory, that your system CPU could process the instructions much faster - but you're talking about really minute differences. >>
Not so minute that they don't show up in benchmarks, at least. I'm not saying the difference is tremendous. But why pay more for something that performs worse?
<< Think about a 486 system, quite capable of running as an iP-forwarding internet gateway linux box.
Imagine that same system, and the only thing it has to do is handle disk I/O instructions... >>
And imagine an overclocked Athlon XP or P4 doing the same thing. Under 99.99% of user loads, a modern CPU has more than enough spare cycles for simple RAID 0 calculations. Again, you're conflating utilization with speed.
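For a sense of how cheap those RAID 0 calculations are: the per-request work is basically a divide and a modulo. A minimal sketch, with illustrative stripe size and disk count (not any real driver's internals):

```python
# Toy RAID 0 address math: map a logical block to (disk, physical LBA).
# STRIPE_BLOCKS and NUM_DISKS are made-up example values.
STRIPE_BLOCKS = 128   # blocks per stripe (e.g. 64 KiB of 512 B blocks)
NUM_DISKS = 2

def raid0_map(lba: int) -> tuple[int, int]:
    """Return (disk index, physical LBA on that disk) for a logical LBA."""
    stripe, offset = divmod(lba, STRIPE_BLOCKS)
    disk = stripe % NUM_DISKS                       # round-robin stripes
    physical = (stripe // NUM_DISKS) * STRIPE_BLOCKS + offset
    return disk, physical

print(raid0_map(0))     # (0, 0)
print(raid0_map(128))   # (1, 0)  - second stripe lands on disk 1
print(raid0_map(300))   # (0, 172) - stripe 2, back on disk 0
```

A handful of integer ops per request is nothing to a GHz-class CPU; the actual cost of software RAID 0 is dominated by the I/O itself.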
<< Now remember that thats just a low-end controller... the high-end ones have much more powerful processors; intended for high-end high-speed arrays with lots of disks (>10). >>
Possibly true, but we're not talking about $1000 controllers here.
<< So, you've got a dedicated I/O processor, more than capable of handling the load, you've got no problem with;
"oh my NT installation just died and I didn't keep my drive configuration diskette so I've just lost all my array data" >>
What are you talking about? The array configuration is stored by the controller itself - in its NVRAM and typically as metadata on the drives - not by the OS. You can wipe and reinstall your OS as much as you like; the array just looks like an ordinary disk as far as that's concerned.
<< or
"oh my CPU usage is like 10% when i'm thrashing my array, and my array is being thrashed coz my system is under load, coz it could really do with that extra 10% processor time" >>
The same test I linked to showed CPU usage rising by about 6% at a 40% I/O load with the software-based controller cards. Considering that nobody in their right mind would ever run a sustained load that high, in practice you're looking at a difference of maybe 2 or 3 percent by using software controllers.
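The scaling is simple enough to work out, assuming software-RAID CPU overhead is roughly proportional to I/O load (an assumption on my part, not something the benchmark measured directly):

```python
# Scale the measured overhead down to realistic I/O loads.
# Assumption: overhead is proportional to I/O load.
overhead_at_40 = 6.0            # % extra CPU at a 40% I/O load (measured)
per_unit = overhead_at_40 / 40.0

for load in (10, 15, 20):       # more realistic sustained I/O loads
    print(f"{load}% I/O load -> ~{per_unit * load:.1f}% CPU overhead")
# 10% -> ~1.5%, 15% -> ~2.2%, 20% -> ~3.0%
```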
<< I dont believe you are 'better off' with software RAID, if you can afford a hardware controller.
Just because your processor can theoretically execute the IO operations faster, doesn't mean to say your overall system performance is going to be better.
Just think about it - the more load you have on your disk operations, the more load you're going to have across the board, because your CPU is doing extra work. Essentially, the more load you put on your system, the larger the overhead gets.
That doesn't sound too efficient to me... >>
I don't care what it sounds like - just look at the numbers, please.
<< Oh and BTW again,
RAID0 = faster than a single drive of the same specs for certain.
RAID1 is actually SLOWER than a single drive, because it has to do every single operation in tandem; to mirror the data.
RAID5 is still much faster than a single drive, but slightly slower than RAID0, because it has to store the parity data.
RAID5e is slower again, because it has to put the parity data across all drives in the stripe; but I think that little performance loss is a small price to pay for the luxury of fault tolerance while still retaining RAID0 capacity levels... >>
Mostly correct, at least. One nitpick: RAID 1 only pays the mirroring penalty on writes; reads can match a single drive, or even beat it if the controller balances reads across both mirrors.
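On the RAID 5 point, the parity that costs you on writes is just an XOR across the data blocks - which is also what makes the fault tolerance work. A toy 3-disk illustration:

```python
# Toy RAID 5 parity: parity block = XOR of the data blocks.
# Every write must update parity (the write penalty); in exchange,
# any one lost block can be rebuilt from the survivors.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0 = b"\x01\x02\x03\x04"          # data block on disk 0
d1 = b"\x10\x20\x30\x40"          # data block on disk 1
parity = xor_blocks(d0, d1)       # parity block on disk 2

# Disk 1 dies? Rebuild its block from the surviving disk plus parity:
rebuilt = xor_blocks(d0, parity)
assert rebuilt == d1
print("rebuilt d1:", rebuilt.hex())   # 10203040
```

The XOR itself is trivial for any CPU; the write penalty comes from the extra reads and writes needed to keep parity current, which is why RAID 5 trails RAID 0 regardless of who computes the parity.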