Raid 5 question...settle a bet!

SlickVic

Senior member
Apr 17, 2000
774
0
0
Help settle a bet:beer:

We have a Compaq server that has 3 hot-swap disks with a hardware RAID 5 configuration (no hot spare) running Windows 2000 Server. Since RAID 5 requires a 3 disk minimum, I say if 1 disk fails, the server stops working.

My friend says it will keep running long enough for a replacement drive to be put in for rebuilding?

Who's right?

TIA
 
Jan 31, 2002
40,819
2
0
Your friend. RAID 5 has the 3 disk minimum because that's the number of disks required to get redundancy. One disk can fail, and it'll go to parity-rebuild mode (slow as hell) until you replace it.

- M4H
 

compudog

Diamond Member
Apr 25, 2001
5,782
0
71
I believe you can pull one drive out and still leave the server running until a replacement is installed, but there is a performance hit and the controller device must support this. I could be wrong, but that is the way it was explained to me.
 

jose

Platinum Member
Oct 11, 1999
2,078
2
81
How much money did you just lose? (He's right.)

Regards,
Jose
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
Help settle a bet

We have a Compaq server that has 3 hot-swap disks with a hardware RAID 5 configuration (no hot spare) running Windows 2000 Server. Since RAID 5 requires a 3 disk minimum, I say if 1 disk fails, the server stops working.

My friend says it will keep running long enough for a replacement drive to be put in for rebuilding?

Who's right?

TIA

You lose!

The HBA will set any logical drives within the array of physical drives to DEGRADED. Performance is NOT affected. (Performance IS slower during a rebuild, and this can be tuned to suit your needs through GAM.) HOWEVER, if you lose ANOTHER disk, even while the rebuild process is still running, the data is TOAST.

The 3-disk requirement for RAID 5 just means three disks are required to build a RAID 5 array. You can have many more than three, but the same rule always applies. (You can only lose one!)
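The redundancy described above comes down to XOR parity: the parity strip is the XOR of the data strips, so any one missing strip can be recomputed from the survivors. A minimal sketch (strip contents are made up purely for illustration):

```python
# Minimal sketch of RAID 5 redundancy in a 3-drive array:
# two data strips per stripe plus one parity strip.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0 = b"\x01\x02\x03\x04"   # data strip on drive 0
d1 = b"\x0f\x10\x11\x12"   # data strip on drive 1
p  = xor(d0, d1)           # parity strip on drive 2

# Drive 1 dies: XOR the surviving data with the parity to rebuild it.
rebuilt = xor(d0, p)
assert rebuilt == d1

# Lose a SECOND strip before the rebuild finishes and nothing can recover it.
```

The same identity scales to any number of drives (parity = XOR of all data strips), which is why the "you can only lose one" rule holds no matter how wide the array is.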

Cheers!
 

SlickVic

Senior member
Apr 17, 2000
774
0
0
Ouch..I'm taking a beating...boy, will I hear it tomorrow...I could have sworn you needed at least 3 drives, but guess I need to go re-read my books.

I lost a case of beer BTW


Thanks all

 

Mday

Lifer
Oct 14, 1999
18,647
1
81
Originally posted by: SlickVic
Ouch..I'm taking a beating...boy, will I hear it tomorrow...I could have sworn you needed at least 3 drives, but guess I need to go re-read my books.

I lost a case of beer BTW


Thanks all


raid 5 requires 3 drives min. otherwise you don't have raid 5. there are 2 striped and one parity for the base case of 3 drives. you can add more parity and more striped drives as you want. obviously if the number of parity drives = the number of striped drives, you may as well run raid 10.

if you lose one drive, assuming it's a striped drive, your array is down. you can restore it by replacing the drive and running the restoration sequence (see the controller manual). if you lose a parity drive, your array may or may not be down depending on how your controller deals with things. but for the most part, if you lose a parity drive, your array will still be up. there are raid 5 capable systems that keep a "4th" drive around for a "3"-drive raid 5 implementation as a spare. this drive isn't used until a drive goes down. and once it goes down, that drive kicks in.

depending on how you phrased the actual bet, you may or may not lose.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
You lose. Pull one out: it will slow down, but keep on going. That's why RAID 5 costs you about one drive's worth of capacity.
 

LordOfAll

Senior member
Nov 24, 1999
838
0
0
If RAID 5 didn't work this way there would be no point in having it; RAID 0 is faster all around. That's one difference: RAID 5 can lose a drive and still function.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: Mday
Originally posted by: SlickVic
Ouch..I'm taking a beating...boy, will I hear it tomorrow...I could have sworn you needed at least 3 drives, but guess I need to go re-read my books.

I lost a case of beer BTW


Thanks all


raid 5 requires 3 drives min. otherwise you don't have raid 5. there are 2 striped and one parity for the base case of 3 drives. you can add more parity and more striped drives as you want. obviously if the number of parity drives = the number of striped drives, you may as well run raid 10.

if you lose one drive, assuming it's a striped drive, your array is down. you can restore it by replacing the drive and running the restoration sequence (see the controller manual). if you lose a parity drive, your array may or may not be down depending on how your controller deals with things. but for the most part, if you lose a parity drive, your array will still be up. there are raid 5 capable systems that keep a "4th" drive around for a "3"-drive raid 5 implementation as a spare. this drive isn't used until a drive goes down. and once it goes down, that drive kicks in.

depending on how you phrased the actual bet, you may or may not lose.

Actually the parity data is split up between the drives, so it doesn't matter which one you lose, the array will remain up, albeit slower than if it were entirely intact.
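The rotating-parity point above can be sketched quickly. The exact rotation order varies by controller, so the layout below is just one common scheme, for illustration:

```python
# Illustrative striped-parity layout for a 3-drive RAID 5.
# Each stripe puts its parity (P) on a different drive, so there is no
# dedicated "parity drive" to lose. Rotation order varies by controller.
DRIVES = 3

def parity_drive(stripe: int) -> int:
    # One common left-rotation scheme: parity walks backward across drives.
    return (DRIVES - 1 - stripe) % DRIVES

for stripe in range(5):
    row = ["P" if d == parity_drive(stripe) else "D" for d in range(DRIVES)]
    print(f"stripe {stripe}: {row}")

# Every drive holds parity for one stripe in three; losing any one drive
# costs each stripe either one data strip or its parity, never all parity.
```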
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
raid 5 requires 3 drives min. otherwise you dont have raid 5. there are 2 striped and one parity for the base case of 3 drives. you can add more parity and more striped drives as you want. obviously if the number of parity drives = the number of striped drives, you may as well run raid 10.

if you lose one drive, assuming it's a striped drive, your array is down. you can restore it by the replacing the drive, and running the restoration sequence (see the controller manual). if you lose a parity drive, your array may or may not be down depending on how your controller deals with things. but for the most part, if you lose a parity drive, your array will still be up. there are raid 5 capable systems that keep a "4th" drive around for a "3"-drive raid 5 implementation as a spare. this drive isnt used until a drive goes down. and once it goes down, that drive kicks in.

depending on how you phrased the actual bet, you may or may not lose.

What you have described is RAID3.

Actually the parity data is split up between the drives, so it doesn't matter which one you lose, the array will remain up, albeit slower than if it were entirely intact.

The penalty (in speed) only begins when the rebuild starts after the missing member is replaced with a non defunct drive or a defunct drive is forced back online via GAM. Degraded RAID5 arrays actually can run faster since the XOR engine goes idle and the parity is no longer needed.

Cheers!
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
The penalty (in speed) only begins when the rebuild starts after the missing member is replaced with a non defunct drive or a defunct drive is forced back online via GAM. Degraded RAID5 arrays actually can run faster since the XOR engine goes idle and the parity is no longer needed.

Cheers!
True dat.
 

SCSIRAID

Senior member
May 18, 2001
579
0
0
The penalty (in speed) only begins when the rebuild starts after the missing member is replaced with a non defunct drive or a defunct drive is forced back online via GAM. Degraded RAID5 arrays actually can run faster since the XOR engine goes idle and the parity is no longer needed

Untrue... The performance degradation occurs from the time the drive fails until the rebuild completes. Once the array goes degraded, if the user accesses data that was on the failed drive, the XOR engine is used to recreate that missing data. So in the case of a three-drive RAID 5 with one drive defunct, assuming random data access, one third of the read ops would require reading both remaining drives and then regenerating the missing data with the XOR engine.

The write penalty increases on writes to the missing drive, since the missing data has to be regenerated from the other two drives via the XOR engine before the parity can be updated to reflect the influence of the new data. A write to data that is on one of the remaining drives still updates the parity, since the parity is needed to regenerate the data when the failed drive is replaced.
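That degraded read path can be modeled as a little I/O counter (hypothetical helper, one stripe of a 3-drive array, made-up strip contents):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe of a 3-drive RAID 5: two data strips plus their parity.
d0, d1 = b"\xaa\xbb", b"\x12\x34"
p = xor(d0, d1)
strips = {0: d0, 1: d1, 2: p}

def degraded_read(want: int, failed: int) -> tuple[bytes, int]:
    """Read data strip `want` with drive `failed` dead.
    Returns (data, number of disk reads needed)."""
    if want != failed:
        return strips[want], 1              # survivor: one ordinary read
    others = [strips[i] for i in (0, 1, 2) if i != failed]
    return xor(others[0], others[1]), 2     # XOR-regenerate: two reads

assert degraded_read(0, failed=1) == (d0, 1)  # data on a surviving drive
assert degraded_read(1, failed=1) == (d1, 2)  # XOR engine rebuilds it
```

With random access across the stripe, reads hitting the dead member pay double the disk I/O plus the XOR, which is where the degraded-mode read penalty comes from.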
 

Scorpion

Senior member
Oct 10, 1999
748
0
0
Degraded RAID5 arrays actually can run faster since the XOR engine goes idle and the parity is no longer needed.
I don't believe this is true, as long as it wasn't the parity drive that got hosed. The XOR engine would still need to run in the case of a missing drive to determine what should have been on the other drive, based on what is on the parity drive.

Edit: Basically what SCSIRAID said right above me.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Originally posted by: ScoRp!oN
Degraded RAID5 arrays actually can run faster since the XOR engine goes idle and the parity is no longer needed.
I don't believe this is true, as long as it wasn't the parity drive that got hosed. The XOR engine would still need to run in the case of a missing drive to determine what should have been on the other drive, based on what is on the parity drive.

Edit: Basically what SCSIRAID said right above me.

There is no parity drive in a RAID 5 array. Parity data is spread equally among all the drives in the array.
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
I'll run some tests on a system with a 100MHz i960RN XOR PPU and see what happens when a disk is removed. (besides that really loud beeping!)

Interesting theory as several people have told me that degraded R5 arrays are read similar to R0. I guess this depends on the controller too.

Cheers!
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: shuttleteam
I'll run some tests on a system with a 100MHz i960RN XOR PPU and see what happens when a disk is removed. (besides that really loud beeping!)

Interesting theory as several people have told me that degraded R5 arrays are read similar to R0. I guess this depends on the controller too.

Cheers!

When you just think quickly (a tad too quickly, perhaps?) that seems very reasonable.
But thinking for another minute, it's true as SCSIRAID said.
And thinking even a bit more, write performance should increase, seeing as the controller can't very well write any parity information onto a degraded array, while reads should suffer, since some data will have to be recreated from the parity data?

No?

I need a beer... :beer:
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
And thinking even a bit more, write performance should increase, seeing as the controller can't very well write any parity information onto a degraded array, while reads should suffer, since some data will have to be recreated from the parity data?

No, because despite the missing drive, the controller has to pretend that the drive is still there for all writes so the array can be properly rebuilt when the dead drive is replaced. Data is evenly distributed among the drives in a RAID 5 array, so you can't have 50GB each on 2 drives and 30GB on the third drive. The controller has to rebuild data and parity data from the missing drive so that it can be properly updated on the remaining drives for any writes to the array.
 

SCSIRAID

Senior member
May 18, 2001
579
0
0
The algorithm MUST update the parity information on the array for the rebuild to work.... Two cases exist...

Edit: I left off the most important case of all.... 1a

Case 1... 3-drive R5. If the stripe being written to is missing a data component (because of the failed drive) but the data unit being written is on the surviving drive, then the algorithm must do the following:
1) read the old data
2) xor the old data with the new data to determine where old data and new data differ
3) read the old parity
4) xor the output from step 2 into the old parity read in step 3
5) write the outcome of step 4 to the parity stripe
6) write the new data to the data stripe
This keeps the stripe 'in sync' so that when the replacement drive is inserted and the rebuild occurs, the missing data can be regenerated. If you just write the new data and leave the parity alone, the data generated during the rebuild will be wrong.

Case 1a... 3-drive R5. If the stripe being written to is missing a data component (because of the failed drive) and the write is destined for the missing drive, then the algorithm must do the following:
1) read the other existing data and the parity
2) xor them together to recreate the missing old data
3) xor the new data with the old data to find out where the data patterns differ (because this is where the parity would differ)
4) xor the result from step 3 with the old parity (to flip the parity in the places where the data patterns differ)
5) write the result from step 4 into the parity stripe
The new data that would have been written to the missing drive if it were present is now 'contained' or 'represented' indirectly in the parity information. When the rebuild occurs later, the xor of the existing data with the parity will yield the 'missing' data.

Case 2... 3-drive R5. If the stripe being written to is missing the parity component (because of the failed drive), then all you do is write the new data. This is probably where the performance improvement claim comes from. It applies to only a third of the stripes in a 3-drive array and is swamped out by the other penalties due to the drive being missing.

Remember that R5 is striped parity as opposed to a dedicated parity drive. A failed drive will represent missing data in some stripes but missing parity in other stripes.
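The cases above reduce to a few XOR identities. A sketch with made-up strip contents (Python purely for illustration; one stripe, drive holding `d_b` failed):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe: d_a and d_b are data strips, p is their parity.
# The drive holding d_b has failed, so d_b can no longer be written directly.
d_a, d_b = b"\x10\x20", b"\x33\x44"
p = xor(d_a, d_b)

# Case 1: write new data over the SURVIVING strip d_a.
new_a = b"\x55\x66"
p = xor(xor(d_a, new_a), p)      # read-modify-write parity update
d_a = new_a
assert xor(d_a, p) == d_b        # a rebuild would still regenerate d_b

# Case 1a: write destined for the MISSING strip d_b.
old_b = xor(d_a, p)              # regenerate the missing old data first
new_b = b"\x77\x88"
p = xor(xor(old_b, new_b), p)    # fold the change into the parity
assert xor(d_a, p) == new_b      # new data now lives only in the parity

# Case 2: if the failed drive held this stripe's PARITY, just write the data.
```

The asserts show why skipping the parity update would corrupt the eventual rebuild: the rebuilt strip is always `surviving data XOR parity`, so the parity must track every write.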
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: Pariah
And thinking even a bit more, write performance should increase, seeing as the controller can't very well write any parity information onto a degraded array, while reads should suffer, since some data will have to be recreated from the parity data?

No, because despite the missing drive, the controller has to pretend that the drive is still there for all writes so the array can be properly rebuilt when the dead drive is replaced. Data is evenly distributed among the drives in a RAID 5 array, so you can't have 50GB each on 2 drives and 30GB on the third drive. The controller has to rebuild data and parity data from the missing drive so that it can be properly updated on the remaining drives for any writes to the array.

Aha, I thought it did that during the rebuild (split it evenly, that is).
You learn something new every day, as the saying goes
 