RAID-6 options -- H/W and Software

hasu

Senior member
Apr 5, 2001
993
10
81
I am trying out different RAID-6 setups with up to 7 (or 8) hard drives (each 3TB). I tried mdadm and ZFS and am now looking at some hardware controllers. The OS will be Linux on a separate SSD.

1. Resync is too slow on mdadm compared to ZFS.
2. mdadm can be expanded by adding drives, but ZFS cannot be.
3. ZFS has better error detection (because of data checksums), but scrubbing with bad memory can lead to data loss, hence ECC is recommended.
4. For data-loss prevention, ZFS uses copy-on-write, which can lead to severe fragmentation.
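For comparison, here is a minimal sketch of how each setup might be created. The device names, array name, and pool name below are assumptions for illustration, not from the actual setup:

```shell
# Assumed devices: /dev/sdb ... /dev/sdh (7 x 3TB drives)

# mdadm RAID-6: 7 drives, 2-drive redundancy
mdadm --create /dev/md0 --level=6 --raid-devices=7 /dev/sd[b-h]

# Rough ZFS equivalent: one raidz2 vdev in a pool named "tank"
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    /dev/sdf /dev/sdg /dev/sdh
```

Both give two-drive redundancy across the same disks; the differences listed above are about behavior after creation, not the initial layout.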

While looking at h/w RAID solutions, I noticed the LSI 9650SE and LSI MegaRAID 8888ELP are very inexpensive on eBay (used ones sold by recycling companies). Do they need TLER-enabled drives, or would they work with regular hard drives? In the event of a controller failure, if we attach all the existing drives from one controller to a new one, will it work without losing data?

Since it is not easy to change the setup later, I am trying to do it right.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
I am trying out different RAID-6 setups with up to 7 (or 8) hard drives (each 3TB). I tried mdadm and ZFS and am now looking at some hardware controllers. The OS will be Linux on a separate SSD.

1. Resync is too slow on mdadm compared to ZFS.
2. mdadm can be expanded by adding drives, but ZFS cannot be.
3. ZFS has better error detection (because of data checksums), but scrubbing with bad memory can lead to data loss, hence ECC is recommended.
4. For data-loss prevention, ZFS uses copy-on-write, which can lead to severe fragmentation.

While looking at h/w RAID solutions, I noticed the LSI 9650SE and LSI MegaRAID 8888ELP are very inexpensive on eBay (used ones sold by recycling companies). Do they need TLER-enabled drives, or would they work with regular hard drives? In the event of a controller failure, if we attach all the existing drives from one controller to a new one, will it work without losing data?

Since it is not easy to change the setup later, I am trying to do it right.

If you use a hardware RAID card, then you need to use enterprise/NAS-class drives.

If you lose a controller card, you SHOULD be able to recover your data if you replace it with the same card. Expanding a hardware RAID array will be no easier than with ZFS. Keep in mind that while you can't expand an array with ZFS, you can add another array and just expand the pool. There are even some performance benefits in doing so.
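The pool-expansion approach described above might look like this (the pool name "tank" and the device names are hypothetical):

```shell
# Add a second raidz2 vdev to an existing pool. The pool's capacity
# grows immediately, and ZFS stripes new writes across both vdevs,
# which is where the performance benefit comes from.
zpool add tank raidz2 /dev/sdi /dev/sdj /dev/sdk /dev/sdl

zpool status tank   # now lists two raidz2 vdevs under "tank"
```

Note that the new vdev carries its own redundancy; it does not borrow parity from the original one.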

What is it that you hope to get from hardware RAID that ZFS doesn't give you already? The data integrity advantages of ZFS make it a much better choice in most situations.

http://www.krenger.ch/blog/zfs-vs-hardware-raid-raid-10/
http://milek.blogspot.com/2007/04/hw-raid-vs-zfs-software-raid-part-iii.html?m=1
http://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/

ECC would be equally important with hardware RAID as it is with ZFS.
 
Last edited:

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
1. Resync is too slow on mdadm compared to ZFS.
2. mdadm can be expanded by adding drives, but ZFS cannot be.
3. ZFS has better error detection (because of data checksums), but scrubbing with bad memory can lead to data loss, hence ECC is recommended.
4. For data-loss prevention, ZFS uses copy-on-write, which can lead to severe fragmentation.

What are you actually trying to do and what specifically was it about mdadm and ZFS that don't meet your needs?

Your list of problems sounds sort of dubious. ZFS is most certainly expandable. If you're worried about scrubbing data with bad memory leading to data loss, why on earth are you not just generally worried about data loss from bad memory? ZFS takes significant steps under the hood to minimize the fragmentation that can result from COW, but if COW is causing actual performance problems, why not just turn it off on the dataset where it's causing trouble?
 

hasu

Senior member
Apr 5, 2001
993
10
81
The main reason I want to try hardware RAID now is because I can't do that once the system is in place for regular use!

Keep in mind that while you can't expand an array with ZFS, you can add another array and just expand the pool.

Sure, you can add more pools to the original one to expand. The thing is, if my main pool has two-drive redundancy, I need at least three hard drives in the new pool to have the same redundancy.

ECC would be equally important with hardware RAID as it is with ZFS.
Well, if RAM fails on the server, you will know when you see corrupted files delivered to the client. If you saved data during that time, you might also have damaged those files, but you can replace the bad RAM and recover the remaining data. If you scrub the whole zpool with bad RAM, however, it can actually corrupt good data as part of the auto-fixing (because the checksum from the data block won't match the calculated checksum). This is my understanding, but I may be wrong.

why not just turn it off on the dataset where it's causing trouble
I was under the impression that ZFS is all about copy-on-write. I did not know that you could turn off COW in ZFS. How would you do that? Can you explain, please?
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
I made a mistake: btrfs allows you to disable COW on specific datasets; a quick look at the ZFS dataset properties indicates that ZFS does not.

I still think it would be useful to hear more about your intended workload that mdadm and zfs are not performing adequately for.

Also, it sounds like there is confusion about how to expand a ZFS pool. The easiest way is to add another vdev, which adds more capacity. Alternatively, you replace the drives in a vdev one at a time with larger-capacity ones, resilvering between swaps. Once all of the drives in the vdev are upgraded, the capacity is expanded.
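The drive-replacement route might look like this (the pool name "tank" and device names are hypothetical):

```shell
# Let the pool grow automatically once all drives in a vdev are larger.
zpool set autoexpand=on tank

# Swap one drive for a larger one; wait for the resilver to finish
# before touching the next drive, or you lose redundancy.
zpool replace tank /dev/sdb /dev/sdi
zpool status tank   # shows resilver progress; repeat per drive when done
```

Only after the last drive in the vdev has been replaced and resilvered does the extra capacity become available.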
 

hasu

Senior member
Apr 5, 2001
993
10
81
I still think it would be useful to hear more about your intended workload that mdadm and zfs are not performing adequately for.

Also, it sounds like there is confusion about how to expand a ZFS pool. The easiest way is to add another vdev, which adds more capacity. Alternatively, you replace the drives in a vdev one at a time with larger-capacity ones, resilvering between swaps. Once all of the drives in the vdev are upgraded, the capacity is expanded.

Well, for home use any of these modern solutions with about 60 MB/s over SMB will be sufficient, and the amount of valuable data will be under 3TB for most people. I was just trying to learn stuff that I never did before. I use mdadm on a personal server, but read somewhere that h/w RAID might give higher throughput.

Coming back to my original question, are there any h/w raid cards that can work with consumer grade hard drives?
 
Last edited:

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
Well, for home use any of these modern solutions will be sufficient, and the amount of valuable data will be under 3TB for most people. I was just trying to learn stuff that I never did before. I use mdadm on a personal server, but read somewhere that h/w RAID might give higher throughput.

It might. There is no magic involved, though, and almost everything will depend on the relative quality of the setups. You could assemble a HW RAID setup that would outperform a poorly provisioned ZFS server, and you could strap together a junker HW RAID server that would have circles run around it by a well-provisioned ZFS server.

If you're just wanting to try something new, ServeTheHome's forums and site have some good reviews of HBAs and RAID controllers. To answer your original questions: if you move all of the disks to an identical RAID controller, in theory it should be fine. I think most HW RAID controllers are intended to be used with TLER-enabled drives.
 

gea

Senior member
Aug 3, 2014
221
12
81
Coming back to my original question, are there any h/w raid cards that can work with consumer grade hard drives?

For a h/w RAID solution, you should use TLER RAID disks. You should also use a controller with cache and a BBU to reduce the risk of a corrupted array after a power loss during writes (the write-hole problem, where RAID information is only partly updated among the disks).

With this in mind, you should also know that h/w RAID is faster than a good software RAID like ZFS only when your CPU is slow and RAM is limited.

A modern multicore CPU with several GB of fast RAM beats any h/w RAID in nearly all cases. You will never come close to the data security, and usually not to the performance, of a ZFS system with its superior caching options when using enough RAM.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
With this in mind, you should also know that h/w RAID is faster than a good software RAID like ZFS only when your CPU is slow and RAM is limited.

A modern multicore CPU with several GB of fast RAM beats any h/w RAID in nearly all cases. You will never come close to the data security, and usually not to the performance, of a ZFS system with its superior caching options when using enough RAM.

Yeah... I think this is sort of what I was trying to say more politely above.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
ZFS is generally superior when you are not willing to put the money down to buy current RAID hardware.

To clarify, I mean multi-host/multi-channel SAS using 520-byte or 4160-byte sector sizes, etc. Once you move up to that type of hardware/controllers/backplanes, ZFS will get wrecked, and you still get the sector-level checksumming/recovery/snapshots/flash cache/compression/deduplication.

However those systems basically "start" at $20k and end in multiples of millions.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Also, ZFS usually means networked file sharing. Hardware RAID can be local DAS, which is far faster!
 