Large storage solution - Software Raid 5?

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

taltamir

Lifer
Mar 21, 2004
13,576
6
76
It is only 50% efficient for your drive capacity. I cannot believe you actually asked me what I meant by that.

I am still not understanding how you are getting that nonsense figure.
The capacity depends on the RAID level and array size.
Are you assuming I am going to replace 1TB drives with 2TB drives (i.e., gaining only half the capacity purchased) and then saying it's only 50% efficient to replace rather than add? (Even though I COULD have added had I WANTED to.)

The process you described is NOT online capacity expansion, which refers to the capability to add one or more drives to an existing RAID device, and have the RAID expanded to the new drive(s).

That's ONE form of OCE; what I described is another, and adding a vdev is yet another.
And ZFS actually can do the form YOU described. Its drawback is that it cannot do online capacity shrinking in any form (no vdev removal, no parity reduction, no array shrinking)
 
Last edited:

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
I am still not understanding how you are getting that figure. The capacity depends on the RAID level and array size.

I can only assume you are incorrectly describing your zpool.

I thought you meant that you had a zpool that consisted of 6 drives, in 3 mirrored vdev pairs. In other words, if you had 6 1TB drives, then you would be able to store only 3TB of data. 3/6 = 50% efficiency.

In contrast, a 6 drive dual-parity distributed parity RAID of 1TB drives can store 4TB, so the efficiency is 66.7% (4/6)
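That arithmetic can be checked in a couple of lines (a minimal sketch; the 1TB drive size is just the example figure used above):

```python
def usable_fraction(n_drives: int, n_parity: int) -> float:
    """Usable fraction of raw capacity for an array that gives up
    n_parity drives' worth of space for redundancy."""
    return (n_drives - n_parity) / n_drives

# 6 x 1TB as three mirrored pairs: each pair stores one drive's worth,
# so it behaves like a 2-drive array with 1 drive of redundancy.
print(usable_fraction(2, 1))   # 0.5  -> 3TB usable out of 6TB raw

# 6 x 1TB as a single dual-parity array (RAID6 / RAIDZ2):
print(usable_fraction(6, 2))   # ~0.667 -> 4TB usable out of 6TB raw
```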

I cannot believe I had to explain that.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
And ZFS actually can do the form YOU described. Its drawback is that it cannot do online capacity shrinking in any form (no vdev removal, no parity reduction, no array shrinking)

Wrong again. ZFS is incapable of expanding a RAIDZx vdev.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I can only assume you are incorrectly describing your zpool.

I thought you meant that you had a zpool that consisted of 6 drives, in 3 mirrored vdev pairs. In other words, if you had 6 1TB drives, then you would be able to store only 3TB of data. 3/6 = 50% efficiency.

In contrast, a 6 drive dual-parity distributed parity RAID of 1TB drives can store 4TB, so the efficiency is 66.7% (4/6)

I cannot believe I had to explain that.

1. My zpool is 5 disk RAID6
2. The 6 drive 3 pairs of RAID1 was a hypothetical pool.
3. You said that doing an inline upgrade gives me 50%... that is, you said that taking my 5-drive RAID6 array and replacing it drive by drive from 750GB drives to new bigger drives (say 3TB) gives me 50% efficiency. Because you were too huffy to explain yourself, we both wasted our time on what is a simple grammar error (you were actually referring to the hypothetical setup being 50% efficient, not the upgrade of the RAID6 array being 50% efficient).
4. While it is true that my hypothetical setup is 50% efficient rather than 66.7%, it is far better overall because you can perform rolling upgrades 2 drives at a time. You end up paying less money to handle your growing space needs and getting more flexibility.
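The rolling-upgrade argument in point 4 can be sketched like this (hypothetical drive sizes; the point is that a pool of 2-way mirrors gains space as soon as a single pair is swapped):

```python
def mirror_pool_capacity(pairs):
    # A pool of 2-way mirror vdevs: each pair contributes the
    # capacity of its smaller member.
    return sum(min(a, b) for a, b in pairs)

pool = [(1, 1), (1, 1), (1, 1)]    # three mirrored pairs of 1TB drives
print(mirror_pool_capacity(pool))  # 3 (TB usable)

pool[0] = (3, 3)                   # rolling upgrade: swap ONE pair for 3TB drives
print(mirror_pool_capacity(pool))  # 5 -- more space without touching the other 4 drives
```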
 
Last edited:

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
3. You said that doing an inline upgrade gives me 50%... that is, you said that taking my 5-drive RAID6 array and replacing it drive by drive from 750GB drives to new bigger drives (say 3TB) gives me 50% efficiency. Because you were too huffy to explain yourself, we both wasted our time on what is a simple grammar error (you were actually referring to the hypothetical setup being 50% efficient, not the upgrade of the RAID6 array being 50% efficient).

Wrong again. I was referring to your comment about adding a mirror vdev to an existing pool with a RAIDZ2 vdev. I said that was only 50% efficient for the added capacity.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Wrong again. ZFS is incapable of expanding a RAIDZx vdev.

You know, you made errors more than twice as often as I did in this last discussion, and I didn't start every post with WRONG AGAIN...
Please try to be a little more civil here.

And yes, it seems you are right here. What ZFS can do is add parity volumes, i.e., converting RAID5 into RAID6 by adding a drive.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
4. While it is true that my hypothetical setup is 50% efficient rather than 66.7%, it is far better overall because you can perform rolling upgrades 2 drives at a time. You end up paying less money to handle your growing space needs and getting more flexibility.

But either is far worse than a snapshot RAID setup for a media server, which is what this thread is about.

Say it with me now. ZFS is a terrible choice for a media server.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
You know, you made errors more than twice as often as I did in this last discussion, and I didn't start every post with WRONG AGAIN...
Please try to be a little more civil here.

I made no errors. You made multiple errors and are now wrongly accusing me of making errors.

Please try not to post so much misinformation and nonsense here.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Wrong again. I was referring to your comment about adding a mirror vdev to an existing pool with a RAIDZ2 vdev. I said that was only 50% efficient for the added capacity.
But either is far worse than a snapshot RAID setup for a media server, which is what this thread is about.

Say it with me now. ZFS is a terrible choice for a media server.

Either you dial down the condescending superiority or we are done talking.

As far as your first quote goes, you misread my post.
You used 1 & 2 to denote your replies. But what you labeled "2" was actually a reply to an unnumbered hypothetical I posted later in that post, based on points 1 & 2, which through the wording led me to believe you were referring to my own current setup (since points 1 & 2 were discussions about my current setup). It is also annoyingly condescending to say "YOU ARE WRONG" about me stating that we had a case of MISCOMMUNICATION.

As for your second point: having a setup slightly superior to ZFS (which it might be; I need to look into that specific one more) doesn't make the second-best choice a "terrible choice".

I made no errors. You made multiple errors and are now wrongly accusing me of making errors.

Please try not to post so much misinformation and nonsense here.

You made errors. For example:
jwilliams4200 said:
With ZFS, the most you can do is add another RAID vdev to your existing zpool.
taltamir said:
1. You can do online capacity expansion on ZFS. Simply swap each drive in turn with a bigger one and then perform a resilver.
1. Who would be crazy enough to waste their time doing that? And who wants to upgrade 5 drives at a time?
taltamir said:
Not me, that is why I am not doing it. But you said IMPOSSIBLE not "inconvenient"
It happens, people make mistakes.
 
Last edited:

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
As far as your first quote goes, you misread my post.
You used 1 & 2 to denote your replies. But what you labeled "2" was actually a reply to an unnumbered hypothetical I posted later in that post, based on points 1 & 2.

...

As per your second point. Having a slightly superior setup to ZFS (which it might be, I need to look into that specific one more) doesn't make the second best a "terrible choice"

...

You made errors. For example:

Wrong again. And again. And yet again. Do you ever write anything without errors?

I did not misread your post, and I was not referring to an "unnumbered hypothetical". I was referring to your numbered 2:

2. You can add a RAID1 2 disk vdev to an existing RAID6 5 disk array, for example.

ZFS is NOT slightly inferior, it is a terrible choice for a media server, as I have explained several times, even giving the bullet points in an earlier post.

As for my statement here:

ZFS is an even worse choice for a media server than most other distributed-parity RAIDs, since most others (mdadm, most hardware RAID) allow online capacity expansion (OCE). With ZFS, the most you can do is add another RAID vdev to your existing zpool.

there is no error. No one using the term "OCE" means replacing the drives in a RAID one at a time with larger ones until all the drives are the same (larger) size. That is not OCE; that is repeatedly degrading and rebuilding your array as many times as the RAID has drives. OCE refers to adding one or more drives to your array and expanding the volume onto the new drive(s). Also, note that I used the words "distributed-parity RAIDs". I was not talking about mirrors, which are not distributed-parity. I cannot believe I have to explain this to you.

Also, you were wrong yet AGAIN when you said that you can convert a RAIDZ1 vdev to RAIDZ2. ZFS cannot add any drives to a RAIDZx vdev.
 

Vegemeister

Junior Member
May 10, 2012
13
0
66
I did a little reading about "snapshot RAID", and it seems to me that a redundant storage system that only becomes redundant after a cron job runs is a fundamentally flawed idea.

Furthermore, I don't see how online capacity expansion is that important. If you're doing upgrades on your file server that don't at least double its capacity, you're spending too much time upgrading, and possibly running your filesystems uncomfortably close to capacity. The fast way to expand a ZFS file server is to use only half of your SAS/SATA ports during normal operation, then build a new array and copy the data over when you upgrade. The old disks can then be retired to offline backup duty.

It should surprise no one that running a 10TB file server occasionally requires purchasing 10TB of disks.
 

Silenus

Senior member
Mar 11, 2008
358
1
81
Wrong again. And again. And yet again. Do you ever write anything without errors?...

jwilliams... you seriously need to take it down a notch, brother. A little more civility is NOT a lot to ask. There was a miscommunication... and a few incorrect points (on BOTH your parts)... but taltamir at least is trying to talk it out while you continue to shout him down.

In any case, you are being very over-dramatic in stating that ZFS is a "terrible" choice for a media server. Assuming we are talking about large-volume media storage, in other words a NAS unit, then it certainly is not a terrible choice. It is a different choice. The FACT is that a given solution should be chosen based on one's individual needs and preferences. Clearly online capacity expansion (in the traditional way) is something important to YOU. Ok, that's fine. I'll give you that a ZFS system doesn't do the usual online expansion... but that doesn't mean you can't expand it. You can expand it in the ways already discussed: either by adding vdevs to a pool, or by replacing all drives in a vdev one at a time with larger drives, resilvering, and letting it autoexpand when complete. The fact is, in both cases your existing data remains online and available. Whether you call that OCE or not is arguing semantics. ZFS's method of expansion is simply a caveat one must accept when choosing a ZFS-based system, and one must plan for it accordingly.
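The two expansion paths described above work out like this (a rough sketch using the 5-drive RAIDZ2 of 750GB drives mentioned earlier in the thread; the 3TB replacement size is hypothetical):

```python
def raidz_capacity(n_drives, n_parity, drive_tb):
    # Usable capacity of one RAIDZ vdev; all members are assumed the
    # same size (a mixed vdev is limited by its smallest drive).
    return (n_drives - n_parity) * drive_tb

before = raidz_capacity(5, 2, 0.75)          # 2.25 TB usable

# Path 1: add another vdev (here a 2 x 3TB mirror) to the pool.
path1 = before + raidz_capacity(2, 1, 3.0)   # 5.25 TB

# Path 2: replace all five drives with 3TB ones, resilver after each,
# then let the vdev autoexpand.
path2 = raidz_capacity(5, 2, 3.0)            # 9.0 TB
print(before, path1, path2)
```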

But there are also enormous upsides and features to ZFS-based systems, particularly the OpenSolaris variants. I've already mentioned some of them in an earlier post, but there are more. And FOR ME (and, I assume, taltamir), having those extra enterprise-class features and robustness available FAR outweighs the downside of increased expansion difficulty.

So for me ZFS is an excellent choice for a large home storage solution. It is not for you...and that's OK!
 
Last edited:

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
I did a little reading about "snapshot RAID", and it seems to me that a redundant storage system that only becomes redundant after a cron job runs is a fundamentally flawed idea.

I'm not sure why you mention a cron job. You can, of course, schedule snapraid to run using cron or other means, but you can also run it directly.

Anyway, there is certainly no flaw when used for a media server. The files on a media server are mostly static, except occasionally when new media is added. Right after new media is added, the snapshot RAID parity and checksum data is updated.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
Silenus: ZFS is certainly a TERRIBLE choice for a media server. You seriously need to stop giving people bad advice.

Your list of advantages for ZFS is hilarious. You wrote:

copy-on-write, full end to end checksumming, automatic scrubbing, instant snapshots,

COW: ooh, COW certainly provides a great benefit for a media server. Actually, it is at best neutral, and at worst, counterproductive for a media server, since it can result in filesystem fragmentation which slows reads. Besides, COW is available with other filesystems if you are crazy enough to think your media server needs it.

end-to-end checksums: this has always been a false claim of ZFS zealots; since ZFS does not allow you to manually enter a checksum for a new file you have just added, it is clearly NOT end-to-end; in reality, if your data gets scrambled before or during being copied onto ZFS, you will not know it. So the real claim is ZFS keeps checksums on all your data. Nice. Except, so does snapraid (and flexraid, and disparity, and btrfs...)

automatic scrubbing: this is trivial if checksums are already implemented, and is certainly not unique to ZFS

instant snapshots: not needed for a media server filesystem, can be useful for an OS filesystem, but of course, snapshots are hardly unique to ZFS

So, none of your reasons make ZFS better for a media server than snapshot RAID. And here are some reasons why snapraid is better for a media server than ZFS:

o new drives can be easily added, 1 drive at a time

o works with HDDs of different capacities, utilizing all space on each drive

o no data migration necessary since it just uses drives formatted with your chosen filesystem

o if you lose more drives than you have parity, you only lose the data from the dead drives

o power efficient: during movie playback, only one drive needs to spin up

o works with your existing OS, no need to change to a different OS
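The mixed-capacity point above can be illustrated numerically (a sketch with hypothetical drive sizes; snapraid's parity drive must be at least as large as the biggest data drive):

```python
def snapraid_usable(data_drives_tb):
    # snapraid: each data drive keeps its own filesystem, so all of its
    # space is usable; parity lives on a separate drive (not counted here)
    # that must be >= the largest data drive.
    return sum(data_drives_tb)

def striped_parity_usable(drives_tb, n_parity):
    # A distributed-parity stripe (RAID5/6, RAIDZ) is limited by the
    # smallest member drive.
    return (len(drives_tb) - n_parity) * min(drives_tb)

drives = [3, 3, 2, 2, 1]                 # TB, mismatched sizes
print(snapraid_usable([3, 2, 2, 1]))     # 8 -- largest drive set aside for parity
print(striped_parity_usable(drives, 1))  # 4 -- single-parity stripe of the same 5 drives
```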

Clearly, ZFS is a terrible choice for a media server.
 

Vegemeister

Junior Member
May 10, 2012
13
0
66
COW: ooh, COW certainly provides a great benefit for a media server. Actually, it is at best neutral, and at worst, counterproductive for a media server, since it can result in filesystem fragmentation which slows reads.

You are arguing performance in favor of a userspace kludge that doesn't stripe across disks?

end-to-end checksums: this has always been a false claim of ZFS zealots; since ZFS does not allow you to manually enter a checksum for a new file you have just added, it is clearly NOT end-to-end; in reality, if your data gets scrambled before or during being copied onto ZFS, you will not know it. So the real claim is ZFS keeps checksums on all your data. Nice. Except, so does snapraid (and flexraid, and disparity, and btrfs...)

Do you expect every program that writes files to be modified to support some FUSE ioctl() for passing checksums to the underlying storage? That's what ECC memory is for. Filesystems are supposed to provide abstract storage.

instant snapshots: not needed for a media server filesystem, can be useful for an OS filesystem, but of course, snapshots are hardly unique to ZFS

Why would you build a file server and only use it for media? Backups. Project directories so you can work on whatever you're doing from any machine in the house.

o new drives can be easily added, 1 drive at a time

But that means you're operating close to full capacity. I don't really see the problem with upgrading the disks all at once.

o works with HDDs of different capacities, utilizing all space on each drive

But why not just buy a bunch of disks that are the same size to start with? Some kind of array made from miscellaneous used disks will just consume more power and make more noise than a simple mirror with two large drives of current vintage.

o no data migration necessary since it just uses drives formatted with your chosen filesystem

See above.

o if you lose more drives than you have parity, you only lose the data from the dead drives

So? You still lost data. Go get the backups and put it back.

o power efficient: during movie playback, only one drive needs to spin up

Cool.

o works with your existing OS, not need to change to a different OS

That is nice, but it's not that big of a deal. Surely some of your desired services will run on OpenIndiana/FreeBSD. The GPL incompatibility of ZFS is a bit of a drag. I will welcome the day when btrfs is stable enough for production use.
 

Silenus

Senior member
Mar 11, 2008
358
1
81
I will end here. After this, jwilliams, you may continue yelling at the internet if you so desire.

I mention copy-on-write because it is fundamental to the way ZFS works. It is what allows write integrity when updating existing data; it's what makes instant snaps possible, along with cloning filesystems, zfs send/receive replication, etc. And yes... none of these are essential for simple media storage. Even so, I can still think of situations where they could be useful, even just for media storage. For that matter, none of snapraid's features are essential either, only useful and convenient. Why not just have a plain system with a single drive? That would work too. See (again)... this is part of the choice. These ZFS features are there IF you do want to use them. And someone might find uses for them if they DO have them available.

End-to-end checksumming on ZFS has always meant that data will remain consistent from the time it hits host memory, through being written to disk, to being read back from disk later. Apparently you define it differently (semantics again). Of course it can't know whether you sent it bad data; no filesystem can. But hey, if you can manually create a checksum on a file in snapraid after you've copied it... that's great. That's another +1 for snapraid then.

And you know what else? All those features you list for snapraid... are also great. It sounds like it has some good flexibility for a simple big media storage setup. But unlike you with ZFS, I might actually recommend snapraid to a friend if I determined it was a good fit for them, rather than continually dismissing it as "terrible" because it didn't fit MY needs.

For me, I will STILL trade those snapraid features for what ZFS has. Even if we limit the discussion to just a simple media storage setup... and we limit it to JUST talking about data redundancy and data integrity... I am willing to give up whatever snapraid offers, in favor of ZFS, for one reason. The redundancy from RAID, the data integrity from checksums, and the data correction if needed all happen in real time, on the fly, with no intervention or scheduling required. The scrubbing is a bonus. Yes, snapraid can do file-level redundancy via "sync", check data via "check", and fix data via "fix". But these are all manually run, separate functions! Ok, perhaps they can be scheduled to run automatically via cron jobs etc., but in no way would I consider that robust and seamless. And that is why I would honestly rather have the continuous, on-the-fly, seamless data redundancy and integrity I get from the tight integration of OS/filesystem/logical volume manager that a ZFS rig gives me. Yes, really, I would.

But now let's extend the use case just a bit further. The following is more for others reading who might be curious about what a ZFS storage setup can give you. Vegemeister already touched on some of this, but it bears repeating. Like many others here I am an enthusiast, and it stands to reason that perhaps I'd like to use my home-built storage server for just a little bit more than mostly static movie storage. If you do... or even if you think you might do more with your storage someday, here are some more features of a ZFS system that could prove VERY useful. Aside from the already-discussed robust software RAID, including single-, double-, or triple-disk parity, and on-the-fly data integrity/correction from checksums... you also have:

- a very fast native in-kernel CIFS server (in Solaris-based systems)
- nfs v3 or v4
- afp support with netatalk package
- COMSTAR iSCSI and FC/FCOE ability

- compression
- encryption (with ZFS v30 and up, currently Solaris 11 only)

- Snapshots. Instant snaps of individual filesystems regardless of I/O load. After snaps, individual files or whole folders can be restored right from Windows via Previous Versions!
- Cloning. Ability to create instant, writable, filesystem clones based on snaps which share a common set of blocks
- Replication. Robust, block-level, incremental replication of filesystems via zfs send/receive, which carries all metadata including ACLs etc.

- ARC. Caches reads in RAM, and can use as much or as little RAM as you care to give it
- L2ARC. Ability to add a large-capacity second level of caching to a pool via a fast SSD to supplement the ARC.
- ZIL caching. Ability to add a separate high-speed log device for the ZIL, which acts as a write cache to accelerate sync writes.

FreeBSD-based systems have almost all of the above except iSCSI, encryption, and the native in-kernel CIFS server (they use Samba instead). Now, some of these features are of course available in other OSes and other filesystems. For me, having this total package available, in open-source form, fits MY needs.

Anyone building a large-volume storage system, even at home, should be making an informed choice. And once you are informed there is no "terrible" choice... only choice. If I can help inform a few people about a ZFS option, I consider that a service... not "bad advice." So... if anyone else out there is building storage, take a look at the ZFS solutions. You might like them. Fortunately there are lots of other options too. Use what fits you best!
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
Silenus:

You are grasping at straws.

You say COW is not needed for a media server, but still, maybe, somehow, hopefully, someone might just possibly want it anyway, for no particular reason other than that you have a hammer and think that everything is a nail. Ha!

That is bad advice. ZFS is a terrible choice for a media server, and your wishing and hoping will not change that fact.

As for ZFS being useful for non-media storage usage, certainly it is. ZFS is a good choice for many workloads, but it is a terrible choice for a media server. It would be foolish beyond belief for someone to use ZFS for a media server just because ZFS is useful for, say, an OS filesystem. It is TERRIBLE advice to suggest someone should use ZFS for their media files just because they might want to use it for OS files. Those are two completely different jobs, and it makes no more sense to use the same thing for two different jobs than it would to hire one person to do the job of heart surgeon and also the job of programming the software for the MRI and CAT scanner.

It is sad that ZFS zealots refuse to accept the facts and continue to give bad advice.

ZFS is a terrible choice for a media server.
 