74GB Raptor Questions


Hajime

Senior member
Oct 18, 2004
617
0
71
Originally posted by: Acanthus
Originally posted by: Hajime
Originally posted by: Acanthus
Originally posted by: DaveSimmons
The odds of losing your data are doubled when you use RAID0, so be sure to come up with a backup strategy.

If you're doing this to improve gaming performance, click the "Storage" tab at the top of the page and read the article on the real-world performance of Raptors in RAID0.

You don't halve the MTBF of the drives by using two; it's a fallacy.

Yes, yes you do.

Two drives means twice the chance of failure. Or, in other words, MTBF/n, where n is the number of drives in the RAID-0.

3 drives means 1/3 the MTBF; 4 drives, 1/4.

FYI for the OP: You might be better off with RAID-1 than RAID-0. RAID-0 only offers benefits in a limited selection of benchmarks and a -handful- of extremely disk-intensive consumer uses. I highly doubt you will be dealing with art files in the multi-GB range and whatnot, so.... However, RAID-1 offers a similar performance benefit when it comes to reads. Plus, a RAID-1 will protect you against the failure of a hard drive.


The MTBF of 1 drive is (made up for the example) 100,000 hours. The MTBF of 2 drives is 100,000 hours...

Adding another drive doesn't make the 1st or 2nd more likely to fail. While I agree there is a small increase in risk, if the drives don't die within the first 2 months... they are going to last a long time.

-sigh-

RAID-1 and RAID-0's MTBFs are incredibly easy to calculate. RAID-1 is MTBF*n, and RAID-0 is MTBF/n.

To quote this "The issue with RAID 0 has always been that splitting data across two hard disks inevitably resulted in doubling the chances of data loss via hard disk failure." MTBF/n, as I said.

And to quote this, "[t]he Mean Time Between Failure (MTBF) of the array will be equal to the MTBF of an individual drive, divided by the number of drives in the array." Again, MTBF/n.

I can easily find hundreds more sources if you want.
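
If it helps to see where MTBF/n comes from, here is a minimal Python sketch, assuming drives fail independently at a constant rate (an exponential model; real drives wear with age, so treat the numbers as a rule of thumb):

# MTBF/n rule of thumb for RAID-0: the array fails when ANY drive
# fails, so under independent, constant failure rates the rates add.

def raid0_mtbf(drive_mtbf_hours, n_drives):
    # lambda_array = n * lambda_drive  =>  MTBF_array = MTBF_drive / n
    return drive_mtbf_hours / n_drives

for n in (1, 2, 3, 4):
    # 100,000 hours is the made-up per-drive MTBF from the example above
    print(n, "drive(s): array MTBF =", raid0_mtbf(100000, n), "hours")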
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Whatever, there are members on this forum who have run 16 drives in RAID-0 for years without failure... This pops up in nearly every RAID-0 thread.

I'm sure that was lightning striking multiple times, though, and it couldn't possibly be accurate. What with all your math formulas and no real-world experience.
 

Cat

Golden Member
Oct 10, 1999
1,059
0
0
Drives fail randomly. You can get lucky and have a drive last an eternity, or have tons fail in the span of a few months, like we did a few months ago (3D rendering).
It is more likely to happen with more drives, and you're incredibly naive to believe that just because one person's RAID0 is working, yours is failsafe. The probability math is backed up by real-world results.
 

tallman45

Golden Member
May 27, 2003
1,463
0
0
Don't forget that you can also lose your RAID if your motherboard dies and you have to swap it out with a new one.

RAID 0 is great for very large files; putting an OS on a RAID 0 can result in slower OS performance because of its many smaller files.

As has been recommended, go with a single 74GB as your OS drive.
 

boyRacer

Lifer
Oct 1, 2001
18,569
0
0
I have everything on my RAID 0 array. I'm so dangerous. :evil:

Actually, I back up every week.
 

Hajime

Senior member
Oct 18, 2004
617
0
71
Originally posted by: Acanthus
Whatever, there are members on this forum who have run 16 drives in RAID-0 for years without failure... This pops up in nearly every RAID-0 thread.

I'm sure that was lightning striking multiple times, though, and it couldn't possibly be accurate. What with all your math formulas and no real-world experience.

I've set up everything from a RAID-0 to a RAID-50 in half a dozen environments, Acanthus.

What's your experience like, pray tell?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Hajime
Originally posted by: Acanthus
Whatever, there are members on this forum who have run 16 drives in RAID-0 for years without failure... This pops up in nearly every RAID-0 thread.

I'm sure that was lightning striking multiple times, though, and it couldn't possibly be accurate. What with all your math formulas and no real-world experience.

I've set up everything from a RAID-0 to a RAID-50 in half a dozen environments, Acanthus.

What's your experience like, pray tell?

I'm sure you have. I'm not going to take this any further.

RAID 0 is the worst thing ever!!! No one use it!!! Your data just corrupts itself instantly!!!

Seriously though, for a HOME USER AND GAMER, if you back up once in a while... you're not losing mission-critical data.

For a business, of course I'd recommend RAID 5.

Oh, and I administer 17 networks in my city.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Cat
Drives fail randomly. You can get lucky and have a drive last an eternity, or have tons fail in the span of a few months, like we did a few months ago (3D rendering).
It is more likely to happen with more drives, and you're incredibly naive to believe that just because one person's RAID0 is working, yours is failsafe. The probability math is backed up by real-world results.

I did not say, at any point, that any hard disk is failsafe. I said that putting 2 drives in one array does not make the drives themselves any more or less likely to fail. (Unless you're careless and put them in adjacent bays, improperly cooled, which happens all the time.)
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Acanthus
Originally posted by: Cat
Drives fail randomly. You can get lucky and have a drive last an eternity, or have tons fail in the span of a few months, like we did a few months ago (3D rendering).
It is more likely to happen with more drives, and you're incredibly naive to believe that just because one person's RAID0 is working, yours is failsafe. The probability math is backed up by real-world results.

I did not say, at any point, that any hard disk is failsafe. I said that putting 2 drives in one array does not make the drives themselves any more or less likely to fail. (Unless you're careless and put them in adjacent bays, improperly cooled, which happens all the time.)

DaveSimmons said:

The odds of losing your data are doubled when you use RAID0, so be sure to come up with a backup strategy.

Then you said:

You don't halve the MTBF of the drives by using two; it's a fallacy.

Dave clearly meant (and expanded on this in later posts) that you double the odds of the RAID0 as a whole failing, not an individual drive. You continued to hammer on the fact that the individual drives are no more likely to fail -- and while this is true, that's not the point. A multidisk RAID0 array is more likely to fail than a single hard drive, because if any one of the disks in the RAID0 fails, the entire array fails.

If the MTBF of the drives is the same, the MTBF of a RAID0 array is lower than that of a single drive. It's not that the drives in a RAID0 are more likely to die, it's that what you care about is the chance of none of the drives in the array having a problem. That probability gets lower as you add more drives.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Matthias99
Originally posted by: Acanthus
Originally posted by: Cat
Drives fail randomly. You can get lucky and have a drive last an eternity, or have tons fail in the span of a few months, like we did a few months ago (3D rendering).
It is more likely to happen with more drives, and you're incredibly naive to believe that just because one person's RAID0 is working, yours is failsafe. The probability math is backed up by real-world results.

I did not say, at any point, that any hard disk is failsafe. I said that putting 2 drives in one array does not make the drives themselves any more or less likely to fail. (Unless you're careless and put them in adjacent bays, improperly cooled, which happens all the time.)

DaveSimmons said:

The odds of losing your data are doubled when you use RAID0, so be sure to come up with a backup strategy.

Then you said:

You don't halve the MTBF of the drives by using two; it's a fallacy.

Dave clearly meant (and expanded on this in later posts) that you double the odds of the RAID0 as a whole failing, not an individual drive. You continued to hammer on the fact that the individual drives are no more likely to fail -- and while this is true, that's not the point. A multidisk RAID0 array is more likely to fail than a single hard drive, because if any one of the disks in the RAID0 fails, the entire array fails.

If the MTBF of the drives is the same, the MTBF of a RAID0 array is lower than that of a single drive. It's not that the drives in a RAID0 are more likely to die, it's that what you care about is the chance of none of the drives in the array having a problem. That probability gets lower as you add more drives.

I agree with what you're saying; I'm saying it's not 50% for every drive you add. More like 5%.

This isn't the 500MB Maxtor or IBM Deathstar days.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Acanthus
I agree with what you're saying; I'm saying it's not 50% for every drive you add. More like 5%.

This isn't the 500MB Maxtor or IBM Deathstar days.

An n-disk RAID0 array is at least n times more likely to fail in the same time span as a single drive. I don't know how to put it more plainly.

Let's say a particular model of hard drive has a 50,000-hour MTBF. This means this type of drive will fail, on average, after roughly six years of continuous operation. For modelling, let's say that there is a 10% chance of the drive failing in any 12-month period -- or, from another perspective, a 90% chance that the drive will not fail in a 12-month period. This is not terribly accurate: real drives become increasingly likely to fail as time goes on due to mechanical wear, some drives of the same model will age and fail far more rapidly than others, and it assumes a constant, full load on the disks. But it puts the MTBF in roughly the right ballpark.

Chance of a single drive not failing in the first six years of operation: (.90) * (.90) * (.90) * (.90) * (.90) * (.90) = (.90) ^ 6 = 53.1%

The chance of an n-disk RAID0 not failing within a time span is the same as the chance of none of the disks in the array failing within that time span. This means that for each 12-month period, there is a (.90^n) chance of the array not having a failure.

The chance of a two-disk RAID0 not failing within six years is: (.90^2) * (.90^2) * (.90^2) * (.90^2) * (.90^2) * (.90^2) = (.90^2)^6 = (.90^12) = 28.2%

The chance of a four-disk RAID0 not failing within six years is: (.90^4)^6 = (.90^24) = 8.0%.

The modelling here is very simplistic, but the math is along the right lines; the odds of a RAID0 array going without a drive failure shrink exponentially as you add more drives. However, in reality, hard drives have (at least for the first few years of their lives) a *far* better chance of going a year without an incident than the numbers I've given. But they *will* all fail eventually; it's just a matter of time, and there's no way to tell in advance how long any given drive will last.
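
For anyone who wants to replay the arithmetic, here is a small Python sketch under the same simplifying assumptions (independent drives, a flat 10% failure chance per drive per year):

# An n-disk RAID0 survives a year only if every drive does, so the
# yearly survival chance is 0.9^n, and over y years it is 0.9^(n*y).

def raid0_survival(p_fail_per_year, n_drives, years):
    return (1.0 - p_fail_per_year) ** (n_drives * years)

for n in (1, 2, 4):
    p = raid0_survival(0.10, n, 6)
    print(f"{n}-disk RAID0 over 6 years: {p:.1%} chance of no failure")

# Prints 53.1%, 28.2%, and 8.0% -- the same figures as above.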
 

Bozz

Senior member
Jun 27, 2001
918
0
0
Originally posted by: Mr Bob
That is pretty sh*tty. How come there aren't any SATA cables that allow you to connect two drives to one cable?

Basically, I would need to buy a SATA card in order to use more than 2 SATA drives...

I have a SCSI array with 5 drives on one U160 cable and an Adaptec 29160 controller. It's pretty sh*tty how Parallel ATA doesn't let you connect five drives, don't you agree?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Yeah, but you are capped at 160MB/sec.

So unless they are older/slower drives that don't put out more than ~35MB/sec in sequential reads/writes, you're hurting overall performance: five drives at ~35MB/sec each is ~175MB/sec, already past the shared bus. (It would be faster on 2 separate cables on different controllers.)
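
A trivial back-of-the-envelope check in Python (assuming ~35MB/sec sequential per drive on a shared 160MB/sec U160 bus):

# Aggregate sequential demand of five drives vs. the shared bus cap.
per_drive_mb_s = 35        # rough sequential throughput per drive
n_drives = 5
bus_cap_mb_s = 160         # U160 shared bus limit

demand = per_drive_mb_s * n_drives      # 175 MB/s requested
delivered = min(demand, bus_cap_mb_s)   # 160 MB/s actually possible
print(demand, "MB/s demanded vs", bus_cap_mb_s, "MB/s bus ->", delivered, "MB/s")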
 

Bozz

Senior member
Jun 27, 2001
918
0
0
Don't get technical, Acanthus, I'm just stirring poo.

The drives are in 2x RAID-1 software arrays plus a hot spare, but hey, according to perfmon in Windows 2000 Server it has never come near its peak limit. It's perhaps slightly limited by the 100Mb network backbone.
 