Linux software RAID

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
I have a Dell PowerApp server with an Adaptec AIC card (not sure of the specific model) hooked up to a Dell enclosure with 11 SCSI drives in it. 11 (drives) + 1 (the enclosure is detected as a device) + 1 (controller) is 13, so I should be good there...


The problem is creating my software RAID device. I am running the following command:
debian:~# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=10 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 --spare-devices=1 /dev/sdl1

and I get the following output

mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=355597440K mtime=Wed Oct 4 06:35:51 2006
mdadm: /dev/sdb1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sdc1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sdd1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sde1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sdf1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sdg1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sdh1 appears to be part of a raid array:
level=5 devices=11 ctime=Wed Oct 4 06:32:10 2006
mdadm: /dev/sdi1 appears to contain an ext2fs file system
size=33591880K mtime=Wed Oct 4 11:00:22 2006
mdadm: /dev/sdj1 appears to contain an ext2fs file system
size=33591880K mtime=Wed Oct 4 11:00:22 2006
mdadm: /dev/sdk1 appears to contain an ext2fs file system
size=33591880K mtime=Wed Oct 4 11:00:22 2006
mdadm: /dev/sdl1 appears to contain an ext2fs file system
size=33591880K mtime=Wed Oct 4 11:00:22 2006
mdadm: size set to 35559744K
Continue creating array? y
mdadm: SET_ARRAY_INFO failed for /dev/md0: Device or resource busy
debian:~#


Why would it see /dev/sdi as ext2?

debian:~# fdisk -l /dev/sdi

Disk /dev/sdi: 36.4 GB, 36419584000 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdi1 1 4427 35559846 fd Linux raid autodetect
debian:~#

Changing to 9 disks and removing /dev/sdi1 just moves the error to /dev/sdj1.


Default Debian install, no raidtools2. I had to create the /dev/sd{i,j,k,l} devices myself (I installed without the array connected): mknod /dev/sdi b 8 0 (IIRC). I also did mknod /dev/sdi1 b 8 1 (maybe this is what caused the problem?)


 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I don't know much about SCSI drives, and I know much less about major/minor numbers.

But I thought it was...
(with google's help)
/dev/sda 8 0
/dev/sda1 8 1
/dev/sdb 8 16
/dev/sdb1 8 17
/dev/sdc 8 32
/dev/sdc1 8 33

etc etc

I could have been confused.

How about starting over and doing:
./MAKEDEV sd*

?
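A quick sanity check for whether the nodes you made point at the right disks is to compare the major/minor pairs ls shows against what the kernel actually registered:

ls -l /dev/sd[a-l]*
cat /proc/partitions

The major/minor columns from the two should line up for every drive and partition.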

(Also I'm assuming that you're running Debian with a 2.4 kernel and no udev/devfs, right?)
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Ryoga
An 11-disk software RAID 5?

What a tremendously bad idea.

Ya I was going to mention that, but I forgot about it.

If it's just a regular computer with a 32-bit/66 MHz PCI bus, the performance is going to be crap.

Plus I would go with RAID 10, not RAID 5, with so many devices. RAID 5 just isn't very good when you get a lot of disks.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Does Linux RAID do hotspares? You could do two 5-disk RAID 5 arrays with a global hotspare and then combine them with LVM.

You'll be losing 3 drives' worth of space instead of 1, but the performance will be just as good, you'll have the added security of potentially surviving 3 disk failures, AND rebuild times will be much, much faster.

BTW, I know squat about setting up software raid from the command line because that is one of the many things I use Webmin for.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Brazen
Does Linux RAID do hotspares? You could do two 5-disk RAID 5 arrays with a global hotspare and then combine them with LVM.

Sure. Hotspares. Hotplug. Hotswap. You can grow raid arrays also.

For instance, with recent kernels and an Intel motherboard with SATA AHCI support you can hotplug SATA devices.

With SCSI hardware of course it's going to be even cooler. But still, software RAID on this scale is going to be kinda sucky, I think. He'll have to do some benchmarks to see how it holds up.

It may work out fine if it's dedicated storage, even more likely if it's using a PCI-X or PCIe bus. Still I wouldn't want to run busy applications on it.

Originally posted by: Brazen
You could do two 5-disk RAID 5 arrays with a global hotspare and then combine them with LVM. You'll be losing 3 drives' worth of space instead of 1, but the performance will be just as good, you'll have the added security of potentially surviving 3 disk failures, AND rebuild times will be much, much faster.

RAID 10. Mwahaha. With a hotspare.

No parity calculations, so...
Much higher performance.
Much quicker recovery time (the hotspare just needs to copy the mirror of the drive that went down).
And a lot cheaper than your setup in terms of drive space lost.

Although it's more expensive than RAID 5. RAID 5 is used because it's the most economical, but things like RAID 6 and RAID 10 are better depending on what you want, and disks are so cheap nowadays.
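If he did go the RAID 10 route, the md raid10 personality on a new enough 2.6 kernel makes it a one-liner; rough, untested sketch using the same device names as his post:

mdadm --create --verbose /dev/md0 --level=10 --raid-devices=10 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 --spare-devices=1 /dev/sdl1

That gives 5 disks' worth of usable space plus the hotspare.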



Originally posted by: Brazen
BTW, I know squat about setting up software raid from the command line because that is one of the many things I use Webmin for.

It's easier than it sounds/looks.
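Your two-RAID5-plus-spare layout, for example, only takes a handful of commands. Rough, untested sketch (device names borrowed from his first post, and "storage"/"home" are just made-up LVM names):

mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]1
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[g-k]1 --spare-devices=1 /dev/sdl1
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# give both ARRAY lines in mdadm.conf the same spare-group= name so
# mdadm --monitor can hand the spare to whichever array loses a disk
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -n home -l 100%FREE storage    # or pass an explicit extent count from vgdisplay
mkfs.ext3 /dev/storage/home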

It looks like he isn't using devfs or udev to create the device nodes, so he tried to do it himself and ended up assigning one of the device files he thought would be a drive in his array to what is really the drive that contains his root file system. Something weird like that.

 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: drag

And a lot cheaper than your setup in terms of drive space lost.

Er, eh? I thought RAID 10 had a 50% loss of disk space? RAID 5 only loses 1 disk per array. In my setup he loses 3 disks' worth of space; in your setup he loses 6 disks' worth of space!

It's easier than it sounds/looks.

Maybe so, but I think software raid discussions are a good time to plug Webmin. I've never had a problem using it. It's graphical, yet no GUI needs to be installed on the server. There's no nomenclature to mix up. It's as easy as click, click, boom!
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Brazen
Originally posted by: drag

And a lot cheaper than your setup in terms of drive space lost.

Er, eh? I thought RAID 10 had a 50% loss of disk space? RAID 5 only loses 1 disk per array. In my setup he loses 3 disks' worth of space; in your setup he loses 6 disks' worth of space!

Er.. Ya. You're right. Your setup is a pretty nice way to go. I guess I just had a moment of muddled vision there or something.

Maybe, though, if preserving disk space while gaining more reliability is your goal, take a look at RAID 6. RAID 6 is a bit like RAID 5, except that it does double parity. So instead of losing one disk's worth of space you lose two... but it's much more fault tolerant.

Although RAID 6 tends to perform just a bit worse than RAID 5. Extra overhead and all that.

I don't know if Linux MD supports RAID 6... I think it does, but I'm not sure.
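(If it does, the create command would be the same shape as the RAID 5 one, just with --level=6; untested sketch with the same devices as his post:

mdadm --create --verbose /dev/md0 --level=6 --raid-devices=10 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 --spare-devices=1 /dev/sdl1

Two disks' worth of space go to parity, plus the hotspare, and any two of the ten can fail without losing data.)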

Also, I was thinking RAID 10 because it would be less stressful on his computer, since he is using a rather large number of disks on just software RAID, and maybe RAID 10 would be easier on it.

It's easier than it sounds/looks.

Maybe so, but I think software raid discussions are a good time to plug Webmin. I've never had a problem using it. It's graphical, yet no GUI needs to be installed on the server. There's no nomenclature to mix up. It's as easy as click, click, boom!

Be careful though. If your root is on that array, then frequently when bad things happen (bad things bad enough to crash your computer) the kernel MD code will detect something is wrong and refuse to activate the array. So you'd have to fix it manually with a rescue CD or something if your root is on that array.

So knowing how to manage it a little bit is nice.

That's all.

Personally I don't like setting it up by hand either. I like using the Debian installer to set up software RAID and then set up LVM for me. Works pretty well.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
quick look over the posts...


It's a 64-bit PCI SCSI card... I can't seem to find any of my HW RAID cards (damn the hired help who hide things from me).

Performance isn't going to be too much of an issue; it's an FTP server, and I think the outside connection will run out of pipe before we run out of SCSI bus.

Setup is / on a single 36 GB drive (internal) and then /home mounted on the external array. I was going to go with 2 disks in RAID 1 for / and then 9 in RAID 5 for storage, but the Debian installer kept locking up during install that way (over and over and over).

It is using a 2.6.8-ish kernel IIRC (home now, headed into work in a few). I'm going to wipe that box and start over, as I'm sure it's the mknod of the actual partition, instead of just the physical disk? If all else fails (i.e. the next go-around fails) I will pull a nice RAID card from another server, but I didn't want to do that (it makes my 8 matching servers not match, although they currently aren't using their RAID cards/external arrays).
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: nweaver

It is using a 2.6.8-ish kernel IIRC (home now, headed into work in a few).

Not unless you passed an argument to the installer ("linux26" I think?). Otherwise AFAIK, Debian still uses 2.4.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: drag

Er.. Ya. You're right. Your setup is a pretty nice way to go.

*gasp* I'm gonna frame this post

Be careful though. If your root is on that array, then frequently when bad things happen (bad things bad enough to crash your computer) the kernel MD code will detect something is wrong and refuse to activate the array. So you'd have to fix it manually with a rescue CD or something if your root is on that array.

That's when I come here and say "OMG! MY LINUX BOXEN IS NOT WORKY!"

Of course, more knowledge and comfort with the command line tools is always a good thing, but so far I'm lazy in this area anyway.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: nweaver
quick look over the posts...


It's a 64-bit PCI SCSI card... I can't seem to find any of my HW RAID cards (damn the hired help who hide things from me).

Performance isn't going to be too much of an issue; it's an FTP server, and I think the outside connection will run out of pipe before we run out of SCSI bus.

Setup is / on a single 36 GB drive (internal) and then /home mounted on the external array. I was going to go with 2 disks in RAID 1 for / and then 9 in RAID 5 for storage, but the Debian installer kept locking up during install that way (over and over and over).

It is using a 2.6.8-ish kernel IIRC (home now, headed into work in a few). I'm going to wipe that box and start over, as I'm sure it's the mknod of the actual partition, instead of just the physical disk? If all else fails (i.e. the next go-around fails) I will pull a nice RAID card from another server, but I didn't want to do that (it makes my 8 matching servers not match, although they currently aren't using their RAID cards/external arrays).


Well, the only thing I can figure is this. You posted:
I had to create the /dev/sd{i,j,k,l} devices myself (I installed without the array connected): mknod /dev/sdi b 8 0 (IIRC). I also did mknod /dev/sdi1 b 8 1


The major number '8' with the minor number '0' is for /dev/sda, not /dev/sdi.
Then 8 and 1 would be for /dev/sda1, not /dev/sdi1.

So what is probably happening is that when it tries to examine your drive it's actually looking at the first SCSI drive, which I am guessing is housing your root and such.

The device numbering goes like this: with SCSI drives the major number is always going to be 8. That is the same for all SCSI hard drives until you get to /dev/sdq, where the major number becomes 65.

You can have up to 15 partitions for each drive.

The minor number is how you tell which drive or partition you're using.

Starting from zero, the minor numbers work like this:
drive number * 16 + partition number

For the whole drive there is no partition number, so that part is 0.
So for /dev/sda the major and minor numbers will be 8 0
For /dev/sda1 it will be 8 1

For /dev/sdb and /dev/sdb1 it would be 8, 16 and 8, 17 respectively.

Now for sdi through k...
brw-rw---- 1 root disk 8, 128 2006-10-05 08:27 sdi
brw-rw---- 1 root disk 8, 129 2006-10-05 08:27 sdi1
brw-rw---- 1 root disk 8, 144 2006-10-05 08:27 sdj
brw-rw---- 1 root disk 8, 145 2006-10-05 08:27 sdj1
brw-rw---- 1 root disk 8, 160 2006-10-05 08:27 sdk
brw-rw---- 1 root disk 8, 161 2006-10-05 08:27 sdk1

Now that is with current Debian unstable with static dev files.
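So if you want to fix the nodes by hand instead of reinstalling, the mknod commands (using the drive * 16 formula and counting sda as drive 0, so sdi is drive 8) would be:

mknod /dev/sdi b 8 128
mknod /dev/sdi1 b 8 129
mknod /dev/sdj b 8 144
mknod /dev/sdj1 b 8 145
mknod /dev/sdk b 8 160
mknod /dev/sdk1 b 8 161
mknod /dev/sdl b 8 176
mknod /dev/sdl1 b 8 177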

You should be able to use MAKEDEV script to help you out...
For instance:
cd /dev/
./MAKEDEV sdi
will make sdi and all of its partitions.


Maybe that is the issue?

(of course with a more modern udev setup the numbers can be anything, I believe, but they still follow devfs for convenience.)
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Originally posted by: Brazen
Originally posted by: nweaver

It is using a 2.6.8-ish kernel IIRC (home now, headed into work in a few).

Not unless you passed an argument to the installer ("linux26" I think?). Otherwise AFAIK, Debian still uses 2.4.

I am passing that argument
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
so I just started over...


I installed with / on /dev/sda (non array disk, just a normal, onboard SCSI)

On the first boot it's whining about missing disks for /dev/md0 (it only found 7 disks of the 10+1), so I think it's not creating the device nodes again.

It's a 2.6 kernel; what's the procedure for creating the device nodes automagically? Same as what was posted above?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
It's a 2.6 kernel; what's the procedure for creating the device nodes automagically? Same as what was posted above?

Using MAKEDEV should leave them there for good unless you're using udev; if you're using udev and they're not appearing automatically, something is really wrong.
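For instance (just the drives you're missing; run it from /dev):

cd /dev
./MAKEDEV sdi sdj sdk sdl

And a quick "ps aux | grep udevd" will tell you whether udev is running at all.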
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
So I used cd /dev; ./MAKEDEV /dev/sd* for each of the drives over H (I, J, K, L) and then remade the array (with the above mdadm command), and now it seems to be working (it didn't error, and /proc/mdstat shows the array).

It now mounts, but df -h looks strange... Size is 301G, Used is 33M, Available is 206G (maybe because it's still building?)
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I didn't think you should use the /dev prefix on the devices to create (i.e. it should be ./MAKEDEV sdi), but if it worked, run with it.

It now mounts, but df -h looks strange... Size is 301G, Used is 33M, Available is 206G (maybe because it's still building?)

What does /proc/mdstat say?
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Well, I gave up and yanked a RAID card from one of my other servers. Hoping it will still be happy with this...


So /etc/mdadm/mdadm.conf showed only drives b through h, not i, j, k, or l (the hotspare), even though it showed the array as needing 10 devices (so it would fail to start, then fail to mount, and freak out). After a reboot (and a Ctrl+D to continue booting) I could rebuild the array (same command as above, no mknods or new MAKEDEV commands needed) and /proc/mdstat would show the array building, and even finish. But one reboot and it was dead again. Rebuild it, and the data is still there, but still, a pain in the butt.

I thought I had a large set of disks like this working earlier (it may have just been 7 disks though). I dumped that server a while back to do something else with it, so I can't check. Oh well, as long as my RAID card will work, I'll be happy.
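(In hindsight, I suspect the clean fix would have been regenerating mdadm.conf from the running array instead of rebuilding it after every boot; something like this, untested:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

with a "DEVICE partitions" line at the top of the file so it scans everything when assembling. But the hardware card is already in, so whatever.)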
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
D'oh... spoke too soon. Now it's trying to mount the RAID array (/dev/sdb1 as /home) before loading the megaraid drivers.

Off to google!
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Google hasn't yielded much... probably the wrong terms.

How do I tell it to load the modules for my RAID card BEFORE mounting the drives (especially /home)?

Any ideas would be great
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Putting them in /etc/modules should work, although I don't remember where that file gets loaded by default; I moved mine to S03 in rcS.d so it would happen before udev started.
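In other words, something like this (sketch; megaraid is just the module name you mentioned, use whatever your card actually needs):

echo megaraid >> /etc/modules

One module name per line in that file; it should get loaded by the boot scripts before the fstab mounts happen, which is the point.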
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
I added it to /etc/modules.conf, and now I get a kernel panic.

Giving up for today, and I'm out of town next week, so I'll get back to this later.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Not modules.conf; /etc/modules, no extension. /etc/modules.conf was for the old version of insmod/modprobe and I don't think it's used any more.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: nweaver
I added it to /etc/modules.conf, and now I get a kernel panic.

Giving up for today, and I'm out of town next week, so I'll get back to this later.

*whisper* w e b m i n . . .
 