SCSI performance, worth it?


sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
Originally posted by: Pariah
Unless that is a solid state drive, Shuttle, the access time benchmark is bogus. Pure and simple. RAID does not shrink platters or speed up the read heads, so it cannot improve access time benchmarks that test the entire surface of the drives. It can improve access time over a given capacity, but it cannot improve access time over the entire array.

The results are real. The controller runs the entire test in its cache. Controllers with 2+ GB of cache are around the corner, so this is like an SSD, but better: more capacity, more affordable, and with redundancy to boot. The test system uses 512 MB of PC2700 DDR ECC cache with battery backup. Disk activity is practically nil, even when virus scanning!

If the test is run with the cache disabled, the access times are 4.8 ms, which is close to the physical specification of the disks. RAID 10 shaves a full ms off the results, but that is not nearly as effective as the cache. Even a single drive benefits hugely with the cache on. The only bad thing that could happen is the battery failing right as someone trips over the power cord during a defrag. (I've tested under these conditions, with the exception of the tripping part, and the OS was fuxored.)

Cheers!
 

beatle

Diamond Member
Apr 2, 2001
5,661
5
81
RAID controller batteries. I hate those things. We've had to recondition the batteries on a few of the PERC4 controllers on servers at work, and it basically brings the computer to its knees, since there is no caching on the card at all while that happens.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Well, that's just what I said. Running the test out of cache is basically an SSD test, and worthless: it's not testing the speed of the actual mechanical array. RAID cannot speed up the mechanics of the drive, and any tested improvement in array access time is a statistical anomaly.
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0

Getting back to the point here: it improves actual performance (what the user "feels") dramatically. Compare the cost of a 600 GB SSD vs. a 600 GB RAID 50 with 1 GB of cache, for example. If your database is tuned to run within the cache, you are far ahead of the game.

Having spare batteries on hand prevents the issue of running without the cache!

Cheers!
 

Zepper

Elite Member
May 1, 2001
18,998
0
0
Many SCSI RAID host adapters have large RAM caches on 'em; most accesses are served from the cache at RAM speed, not drive speed. So the test is of the whole storage subsystem, not just the drive itself. That's why his numbers look so good.
But I can say that since I've switched to SCSI, I've not had to recover from a trashed drive. That did happen on occasion when I was running IDE.
.bh.
 

Regs

Lifer
Aug 9, 2002
16,665
21
81
Shuttle, that's a glitch in the benchmarking program for sure. A RAID controller cannot physically enhance the speed or performance of a hard disk. The access times and read/write operations happen on the hard drive itself.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Originally posted by: Zepper
Many SCSI RAID host adapters have large RAM caches on 'em; most accesses are served from the cache at RAM speed, not drive speed. So the test is of the whole storage subsystem, not just the drive itself. That's why his numbers look so good.
But I can say that since I've switched to SCSI, I've not had to recover from a trashed drive. That did happen on occasion when I was running IDE.
.bh.

It's still bogus to say that the average access time of the above array is 0.1 ms. It's not; that is the access time of a cached read. It's pretty much impossible to get a 0.1 ms access time from a mechanical drive unless the data just happens to be exactly where the read head already is. And almost no ATA RAID controllers have cache, so it's not really even relevant to this thread, or to most discussions here, which are about P/S-ATA arrays.
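To put a number on that: rotational latency alone puts a hard floor well above 0.1 ms, before a single track of seeking. A minimal back-of-the-envelope sketch (the RPM classes below are standard drive speeds, not specs quoted in this thread):

```python
# Average rotational latency is half a revolution: even with zero seek,
# a mechanical drive cannot average anywhere near 0.1 ms per access.
for rpm in (7200, 10_000, 15_000):
    half_rev_ms = 0.5 * 60_000 / rpm   # ms for half a revolution
    print(f"{rpm:>6} RPM: avg rotational latency = {half_rev_ms:.2f} ms")
```

Even a 15k drive averages 2 ms of rotational latency alone, twenty times the number HD Tach reported.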
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
Originally posted by: Regs
Shuttle, that's a glitch in the benchmarking program for sure. A RAID controller cannot physically enhance the speed or performance of a hard disk. The access times and read/write operations happen on the hard drive itself.

I've never claimed that the HBA is changing the laws of physics here. It sure makes a difference in I/O, however!

This is why HD Tach is flawed; a real benchmark allows the parameters to be adjusted. The tests running in cache actually have an access time of ~45 µs. HD Tach wasn't designed to report such low numbers, so it shows either 0 or 0.1.

If the use of the array requires sustained accesses larger than the cache, then obviously the physical disks play a large part in the equation.
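A quick way to see how much the cache dominates the reported number is to treat the measured access time as a hit-ratio-weighted average of cached and physical accesses. A minimal sketch, where the 45 µs and 4.8 ms figures come from my tests above, but the 99% hit ratio is a hypothetical:

```python
# Effective access time as a weighted average of cache hits and disk seeks.
# hit_ratio is assumed; the two latency figures are from the posts above.
hit_ratio    = 0.99     # hypothetical: benchmark working set fits in cache
cache_access = 0.045    # ms (~45 us cached access on the controller)
disk_access  = 4.8      # ms (measured with the cache disabled)

effective = hit_ratio * cache_access + (1 - hit_ratio) * disk_access
print(f"effective access time ~ {effective:.2f} ms")   # ~0.09 ms
```

That lands right at the 0.1 ms HD Tach rounds to; miss the cache and you fall straight back to 4.8 ms.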

A race car requires race car fuel. The problem is people think they are going to get "Raptor-slapping" performance by hitching a 15k disk to a $40 controller. You won't. If your budget doesn't allow a real caching HBA, stick with SATA. You will be happier.

Cheers!
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,130
15,276
136
Originally posted by: Regs
Shuttle, that's a glitch in the benchmarking program for sure. A Raid controller cannot physically enhance the speed or performance of a hard disc. The access times and read/write sessions are on the hard drive.

The way RAID works with stripes is that when a particular seek is required at the OS level (forget the cache for the moment), you have a greater chance that one of the heads (1 of 5 in my case) is close to the requested sector, so effectively you do get (in my case) 5x faster average seek times. With my disks rated at 4.6-5.2 ms average seek (depending on where I read my specs) and 5 disks, that is why I get 1.1 ms average seek. The 128 MB cache helps even more where large amounts of reading are done. With the 5 disks acting as one disk, you have to allow this to count in "how fast is my disk", and the benchmark should reflect that.

And no matter how much anyone here thinks otherwise, in RAID 0 or 5 (and possibly other modes) it is a virtual fact that access times will be faster on multiple RAIDed drives. Transfer rates are another story: sometimes yes, sometimes no.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Originally posted by: Markfw900
And no matter how much anyone here thinks otherwise, in RAID 0 or 5 (and possibly other modes) it is a virtual fact that access times will be faster on multiple RAIDed drives. Transfer rates are another story: sometimes yes, sometimes no.

No, it isn't. You have it reversed. When dealing within a single capacity, adding drives will always increase transfer rates while doing nothing for access time. Drive count is irrelevant for access times; all that matters is platter count. I just explained why in another thread, so rather than retyping it, I'll just link to it:

http://forums.anandtech.com/messageview.cfm?catid=27&threadid=1278440
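If you want to convince yourself, here is a toy Monte Carlo sketch (my own simplified model: seek cost proportional to track distance, requests uniformly random over the whole array). The key point is that a given logical block lives on one specific drive in the stripe, so a request cannot be served by whichever head happens to be closest:

```python
import random

TRACKS   = 10_000    # tracks per drive (arbitrary toy scale)
REQUESTS = 100_000

def avg_seek_distance(n_drives):
    """Average head travel for random reads across a full n-drive stripe."""
    heads = [TRACKS // 2] * n_drives     # current head position per drive
    total = 0
    for _ in range(REQUESTS):
        d = random.randrange(n_drives)       # the stripe that owns the block
        target = random.randrange(TRACKS)    # uniform over the full stroke
        total += abs(heads[d] - target)
        heads[d] = target
    return total / REQUESTS

for n in (1, 2, 5):
    print(f"{n} drive(s): avg seek distance ~ {avg_seek_distance(n):.0f} tracks")
```

All three configurations come out near TRACKS/3; adding drives to the stripe does not shrink the average seek.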
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,130
15,276
136
Originally posted by: Pariah
And no matter how much anyone here thinks otherwise, in RAID 0 or 5 (and possibly other modes) it is a virtual fact that access times will be faster on multiple RAIDed drives. Transfer rates are another story: sometimes yes, sometimes no.

No, it isn't. You have it reversed. When dealing within a single capacity, adding drives will always increase transfer rates while doing nothing for access time. Drive count is irrelevant for access times; all that matters is platter count. I just explained why in another thread, so rather than retyping it, I'll just link to it:

http://forums.anandtech.com/messageview.cfm?catid=27&threadid=1278440

And I don't care that you say "you explained it," because I explained it further up in this thread. Unless you want to argue with a master's degree in engineering, listen to me. But arguing on the internet is useless. If you don't like my advice, go away or ignore it.

I have quite a bit of experience with RAID 0 in both IDE and SCSI. In both cases, transfer rates are limited to the bandwidth of the devices: for IDE 133 MB/s, and in my case for SCSI (even though I have an Ultra160 controller) 133 MB/s as well, since it is in a 32-bit PCI slot.
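That 133 MB/s figure is just the 32-bit/33 MHz PCI bus arithmetic, which caps everything on the bus regardless of the controller (a quick sanity check, nothing vendor-specific):

```python
# Plain 32-bit PCI moves 4 bytes per clock at ~33.3 MHz, and the bus is
# shared by every card on it, so ~133 MB/s is the theoretical ceiling.
bus_width_bytes = 4        # 32-bit bus
clock_mhz       = 33.33
print(f"~{bus_width_bytes * clock_mhz:.0f} MB/s peak")   # ~133 MB/s
```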
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
The latest SCSI drives are QUIETER than almost any IDE hard drive you can buy.

SCSI also makes your computer faster by keeping CPU utilization far lower than IDE does. If you have big directories with hundreds of files, SCSI is a godsend.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Originally posted by: Markfw900
Originally posted by: Pariah
And no matter how much anyone here thinks otherwise, in RAID 0 or 5 (and possibly other modes) it is a virtual fact that access times will be faster on multiple RAIDed drives. Transfer rates are another story: sometimes yes, sometimes no.

No, it isn't. You have it reversed. When dealing within a single capacity, adding drives will always increase transfer rates while doing nothing for access time. Drive count is irrelevant for access times; all that matters is platter count. I just explained why in another thread, so rather than retyping it, I'll just link to it:

http://forums.anandtech.com/messageview.cfm?catid=27&threadid=1278440

And I don't care that you say "you explained it," because I explained it further up in this thread. Unless you want to argue with a master's degree in engineering, listen to me. But arguing on the internet is useless. If you don't like my advice, go away or ignore it.

I have quite a bit of experience with RAID 0 in both IDE and SCSI. In both cases, transfer rates are limited to the bandwidth of the devices: for IDE 133 MB/s, and in my case for SCSI (even though I have an Ultra160 controller) 133 MB/s as well, since it is in a 32-bit PCI slot.

No need to get so bitter. No one here is arguing but you. I missed whatever advice you were giving, but your reduced-access-time explanation is wrong. If that upsets you so much, I apologize.

Oh, and FYI, your rigs page that lists 10k Barracuda drives is wrong too. Seagate's 10k drives are all Cheetahs, and now Savvios. Barracuda is Seagate's name for its 7200 RPM line of drives.
 

Regs

Lifer
Aug 9, 2002
16,665
21
81
MarkFW, you have defied the very definition of RAID. RAID can improve transfer rates immensely, but it cannot decrease access times. Any other benefit you would see would come from read/write burst rates, but that's comparing a single ATA HDD to a RAID array of 3 HDDs.

If it takes one hard disk a given amount of time to seek from one track to another, it'll take two hard drives the exact same time. This has been proven by benchmarks over and over again. So again, I really don't understand your logic. An array of hard drives acting as one cannot enhance the others' ability to seek information on the platter.

Your theory is commendable, but it would have to be backed up by some form of data.
 

Monoman

Platinum Member
Mar 4, 2001
2,163
0
76
The mistake people here are making is comparing ATA, and even entry-level SCSI RAID, to the wrong thing. Neither can be compared to a processor-based SCSI card with a LOT of cache.
 

Sideswipe001

Golden Member
May 23, 2003
1,116
0
0
Yes, I've noticed that this has broken down into a full-fledged discussion of what SCSI can do. The long and short of it is that an average home user picking up a SCSI drive will see some advantages over IDE, but they will not come close to hitting the maximum of SCSI's abilities. I personally love my 15K drive. It runs hot, but it idles very quietly, and I don't mind the noise it makes when it is accessing data. I like to know my computer is "thinking".
 

Monoman

Platinum Member
Mar 4, 2001
2,163
0
76
Originally posted by: Sideswipe001
I personally love my 15K drive. It runs hot, but it idles very quietly, and I don't mind the noise it makes when it is accessing data. I like to know my computer is "thinking".

lol ditto!
 

Monoman

Platinum Member
Mar 4, 2001
2,163
0
76
Originally posted by: KillaKilla
So no?

Have you even read the thread? lol

Basically, I/O benchmarks say the Raptor is ALMOST as fast as the top-of-the-line 15K SCSI drives, and yes, it will FEEL faster in everyday tasks (web, mail), but in FPS? IMO, no, it won't be any faster.
 