Originally posted by: SeTeS
I'm glad to have gotten a number of comments on the hardware/raid side of the question...
Also, re expansion... can a software raid 5 array under linux be expanded without rebuild, or am I asking too much?
I don't think so, at least not conveniently with standard MD software RAID (software RAID in Linux = 'md' RAID, more or less). I've seen something mentioned about the kernel 'md' manager, but if you really want to know you'd have to ask somebody with more authority.
What you can do, however, that is fairly easy, is to set up an LVM environment.
LVM stands for 'logical volume management'. It consists of 3 parts:
1. Physical volumes: the 'hard' devices. A physical volume can be a drive, a partition, or a software or hardware RAID array.
2. Volume group: a bunch of physical volumes pooled together to form one manageable hunk of storage. A sort of 'one big logical drive' thing.
3. Logical volumes: these are like logical partitions carved out of the volume group.
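Roughly, the commands map onto those three parts like this (a quick sketch; the device names and sizes are just made-up examples):

  pvcreate /dev/sda3 /dev/sdb1         # 1. turn spare partitions into physical volumes
  vgcreate myvg /dev/sda3 /dev/sdb1    # 2. pool them into a volume group
  lvcreate -L 100G -n data myvg        # 3. carve a 100GB logical volume out of the pool
  mkfs.reiserfs /dev/myvg/data         # then put a file system on it like any block device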
The nice thing about logical volumes over partitions is that they can be resized, moved, and deleted without taking the system down. With some file systems, like reiserfs, this can even be done while they are mounted, but others, like ext3, need to be unmounted for certain operations.
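Growing a volume, for instance, looks something like this (same made-up names as above):

  lvextend -L +20G /dev/myvg/data      # add 20GB to the logical volume
  resize_reiserfs /dev/myvg/data       # reiserfs can grow while mounted
  # ext3, on the other hand, wants to be offline for the resize:
  umount /mnt/data                     # wherever it happens to be mounted
  e2fsck -f /dev/myvg/data             # resize2fs insists on a clean check first
  resize2fs /dev/myvg/data
  mount /mnt/data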
It's handy enough that I like to use them even when I only have one drive.
For example, on my file server I have 3 drives (one PATA onboard, two SATA on a PCI IDE adapter) in a software RAID 5 array. Now the software RAID makes them appear as one big device, of course, but I still use LVM on top because it makes my system much easier to manage.
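Building that kind of stack is roughly this (a sketch with example device names, not my exact setup):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda1 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0                    # the whole array becomes one physical volume
  vgcreate filevg /dev/md0             # ...in its own volume group
  lvcreate -L 200G -n shared filevg    # ...which I can slice up however I like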
If I wanted to, I could have left them all separate, without a RAID 5 array or anything like that, and made them into one volume group, and then I wouldn't have had to sacrifice one full drive's worth of space to the RAID parity, but I thought the extra redundancy was worth it.
What I'm thinking is that you could set up one RAID 5 array now, and when you begin to run out of space, add an entirely new, larger RAID array. Then you could either add that second array to the volume group and expand the logical volumes to take up the extra space, or set up a separate new volume group and migrate logical volumes from one array to the other as you need more space. Something like the sketch below.
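A rough sketch of both options (made-up device names again):

  # option 1: bolt the new array onto the existing volume group and grow
  pvcreate /dev/md1                    # /dev/md1 being the new, bigger array
  vgextend filevg /dev/md1             # the volume group now spans both arrays
  lvextend -L +300G /dev/filevg/shared # then grow volumes into the new space

  # option 2: migrate everything off the old array instead
  pvmove /dev/md0                      # shuffle all its extents onto the other PVs
  vgreduce filevg /dev/md0             # then drop the old array from the group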
Combining LVM and software RAID is a very common setup for low-end Linux servers, and most Linux installers can assist in setting it up for you.
If you want to get fancy you can check out EVMS. This is 'take it to the next level' Linux stuff; it stands for 'enterprise volume management system'.
It takes existing stuff like LVM and adds new capabilities and management tools on top. You can do fancy things like high-availability and load-balancing file/database cluster servers using CLVM (cluster logical volume management) and other technologies.
This is for higher-end stuff. For example, Red Hat has people increasing the performance of their network attached storage and lowering costs by taking the NAS and connecting it to various server 'front ends' using things like iSCSI and GFS (global file system), managing it all with CLVM and EVMS-type tools. Basically you use expensive high-speed connections to hook up a bunch of servers, which then spread the database or file-serving load across different connections on a cheaper ethernet backbone to load balance for clients.
EVMS is usually available if you want to play with it. For instance, I could install it through apt-get on my Debian box right now if I felt like it.
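Something like this, if I remember the package name right:

  apt-get install evms                 # there are gui/ncurses front-end packages too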
So there is a lot of room for you to play around with...
Also, as a bonus, Linux MD RAID is very fast for software RAID. With a decent CPU you can easily outperform most hardware RAID cards. (Of course hardware RAID is still nice for its extra data-protection features, hot-pluggability, and such.)
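If you want a quick-and-dirty feel for what an array is doing (not a real benchmark, just a sanity check):

  hdparm -t /dev/md0                   # rough sequential read speed off the array
  cat /proc/mdstat                     # array health, plus rebuild speed if it's resyncing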
Although if you go the route of planning to expand with more drives, you'd probably want a system with PCIe slots: a plain 32-bit 66 MHz PCI bus tops out around 266 MB/s, so an array of as few as 4-5 drives pushing 60+ MB/s each can easily saturate it. PCIe (and multiple CPU cores) really let Linux LVM/software RAID scale much better than it used to.
Originally posted by: SeTeS
Any comments on a distro? I sure ain't no Linux aficionado, so probably one of them more 'GUI' distros would be a good idea.
tia
Well, any distro should be able to do software RAID and LVM easily enough.
I use Debian myself (be sure to install with the linux26 boot option so you get a 2.6 kernel with LVM2/device-mapper support), but it sounds like you want nice GUIs and such.
SUSE is a good choice; YaST is supposed to be a very handy configuration tool.
If you want Red Hat stuff, which is the most likely system you'd encounter in the real world, the licenses aren't too expensive (plus you get support for up to a year at no extra charge), but I doubt you really care about that. There are no-cost Red Hat clones like CentOS that are popular: all the features and tools that Red Hat provides, just minus the Red Hat trademarks and icons and such.
Ubuntu should be able to handle this no sweat. It uses the Debian installer, so it should be able to set it up. It's a bit confusing at first using the installer to set up LVM on top of software RAID, but it's not going to take more than a little struggle.
Also, if you want to move the array from one machine to another, it should work as long as you pretty much keep the same basic hard drive arrangement.
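The md superblocks on the drives carry the array info, so on the new machine it's roughly:

  mdadm --examine --scan               # see which arrays the drives say they belong to
  mdadm --assemble --scan              # assemble them (reads mdadm.conf if you have one)
  vgscan                               # then rediscover the LVM volume group(s)
  vgchange -ay                         # ...and activate them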