unRAID or FreeNAS?

VirtualLarry

No Lifer
Aug 25, 2001
56,450
10,119
126
I've got an unRAID 5.x (trial version) server, that I never bothered to get around to paying for the full license for. The server has six 7200RPM 2TB Hitachi drives on the ICH9R, and four 5400RPM 2TB Hitachi drives, currently on a PCI Silicon Image 4-port SATA1 controller card. Mobo is Gigabyte P35-something, with 8GB DDR2, and a Q6600, underclocked to 1.8, I think.

I'm only actually using three of the 7200RPM 2TB drives, one for parity, and two for storage.

Would like to shake things up a bit here, and maybe switch to FreeNAS and use RAIDZ2, after reading the post that 10 drives is an "ideal" number for both RAIDZ2 and 4K-sector drives. (Although my drives are 512-byte, not even 512e; they were the last 2TB drives made that worked with WHSv1, my original preferred choice of OS. New replacement drives would be 4K with 512e though.)

I am planning, at some point, to purchase a 4-port SATA6G Marvell-chipset controller card, although I could throw in two 2-port cards. (Currently already have some 2-port Marvell and ASMedia cards.)

Am I on the right track, or should I simply pay for an unRAID 6.x license, to take advantage of the virtualization stuff?
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
This for a home NAS? Well, I'd not use either if it were me.

But if you're not comfortable with the CLI and these are your only two choices, I'd probably go with FreeNAS.
 
Last edited:

dave_the_nerd

Feb 25, 2011
16,822
1,493
126
Depends what you want to do.

unRAID is, well, not RAID - this allows individual drives to spin down and stay spun down while you read data off of other drives. Performance is low (single-drive equivalent). But it'll also make maximum effective use of disk capacity if you mix and match sizes of drives in the future (although all you're mentioning ATM is 2TB drives.) Need more space? Add another drive!

Preferable behaviour for a media server, IMO.

ZFS will provide much better performance across the board, but much less disk flexibility. (All the drives have to be the same size, they stay spun up together, you can't add drives to an existing RAIDZ vdev, etc.)

FreeNAS and unRAID both rely on plugins - user-created and preconfigured containers not unlike a Docker container. (But not a Docker container - I'm not being exactingly specific here, don't send the technically-correct police after me!) unRAID (at least in version 5.x) also had (has?) interface plugins that make the WebUI and NAS featureset extensible.

This for a home NAS? Well, I'd not use either if it were me.

Yeah, "Real Men" install Linux and configure it to suit their needs.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
Yeah, "Real Men" install Linux and configure it to suit their needs.

Meh. ZFS wasn't designed for small home NAS use. If you want to put together a 50+ drive system then ZFS is what you want. Less than a dozen drives? Why incur the overhead and slowness of ZFS??

I just don't see much of a point with these "NAS OSes" for whitebox servers. They shouldn't exist. You can do everything much easier and faster with any standard Linux distro. RAID it up with mdadm and share it out via Samba. It's a 20 minute job tops!
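Roughly, and this is just a sketch with placeholder device names and share paths, the whole job is:

    # create a six-disk RAID 6 array out of whatever disks you have
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    mkfs.ext4 /dev/md0
    mkdir -p /srv/storage && mount /dev/md0 /srv/storage

    # then append a share to /etc/samba/smb.conf:
    #   [storage]
    #     path = /srv/storage
    #     read only = no
    systemctl restart smbd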

The argument of "I'd rather use a GUI" doesn't hold water with me. If you have the expertise to build a system from the ground up then learn the damn CLI.

Anyway, OP, go with FreeNAS if those are your only two choices. I agree with Dave about unRAID. I actually have an unRAID box I purchased off a friend. Its days are numbered...
 
Last edited:

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
Meh. ZFS wasn't designed for small home NAS use. If you want to put together a 50+ drive system then ZFS is what you want. Less than a dozen drives? Why incur the overhead and slowness of ZFS??

Which overhead and slowness? It's 2016. Everyone has a CPU that can handle Fletcher checksums and lz4 compression. I mean, you can certainly do dumb things like turning on de-dupe w/o understanding what that does, setting compression to gzip-9, and throwing all of your disks into one vdev (or worse, putting each in its own vdev!), but the defaults for zpool create are perfectly reasonable.
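For example, here's roughly all it takes (a sketch with placeholder device names; -o ashift=12 is the ZFS-on-Linux way to force 4K alignment, which matters for drives like the OP's eventual replacements, and FreeNAS picks it automatically):

    # ten-disk RAIDZ2 pool, 4K-aligned, with cheap lz4 compression
    zpool create -o ashift=12 tank raidz2 /dev/sd[b-k]
    zfs set compression=lz4 tank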

I just don't see much of a point with these "NAS OSes" for whitebox servers. They shouldn't exist. You can do everything much easier and faster with any standard Linux distro. RAID it up with mdadm and share it out via Samba. It's a 20 minute job tops!

The argument of "I'd rather use a GUI" doesn't hold water with me. If you have the expertise to build a system from the ground up then learn the damn CLI.

FreeNAS does this perfectly fast also. Make a pool. Make a dataset. Check the "sharesmb" box. No one is forcing you to use a GUI, and not everyone wants to use the CLI. It requires virtually zero expertise to physically build a system, so I'm not sure why that should affect whether or not someone uses GUIs or CLIs.
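And even from the CLI it's about two lines once the pool exists (a sketch with placeholder names; note that FreeNAS's own middleware manages Samba for you rather than relying on the sharesmb property directly):

    zfs create tank/media            # make a dataset
    zfs set sharesmb=on tank/media   # share it out over SMB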

OP, what are you actually planning to virtualize? If you can't actually think of anything that you want to do right now, and your main plan is a media server, I'd concur with dave_the_nerd and say just use your unRAID license. You have a bunch of drives of different speeds on different SATA controllers; since any one of them should be fine for streaming media, you don't really need to bother with the higher performance you get from striping data across multiple disks. Just let unRAID keep extra copies for redundancy.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
FreeNAS is more robust and, in general, a better file and storage system. For 95% of home users it's probably not worth the extra overhead (hardware, power consumption, etc.) over unRAID. If unRAID ever gets dual parity (maybe it has by now), there isn't a whole lot more that FreeNAS will offer a typical home user/media server.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
I've got an unRAID 5.x (trial version) server, that I never bothered to get around to paying for the full license for. The server has six 7200RPM 2TB Hitachi drives on the ICH9R, and four 5400RPM 2TB Hitachi drives, currently on a PCI Silicon Image 4-port SATA1 controller card. Mobo is Gigabyte P35-something, with 8GB DDR2, and a Q6600, underclocked to 1.8, I think.

I'm only actually using three of the 7200RPM 2TB drives, one for parity, and two for storage.

Would like to shake things up a bit here, and maybe switch to FreeNAS and use RAIDZ2, after reading the post that 10 drives is an "ideal" number for both RAIDZ2 and 4K-sector drives. (Although my drives are 512-byte, not even 512e; they were the last 2TB drives made that worked with WHSv1, my original preferred choice of OS. New replacement drives would be 4K with 512e though.)

I am planning, at some point, to purchase a 4-port SATA6G Marvell-chipset controller card, although I could throw in two 2-port cards. (Currently already have some 2-port Marvell and ASMedia cards.)

Am I on the right track, or should I simply pay for an unRAID 6.x license, to take advantage of the virtualization stuff?

Just to get this out of the way, I wouldn't bother with FreeNAS/ZFS at all unless you have ECC RAM. While there's nothing keeping you from running without it, the nature of how ZFS works means you have a significantly higher chance of data loss in the event of a bad memory read than you will with other options. Plenty of documentation, both official and unofficial, backs this up. The FreeNAS team specifically outlines this on their site.

Personally, for a home server I would avoid RAID solutions and look toward drive-pooling solutions like unRAID, mhddfs/SnapRAID, or even Windows Storage Spaces/SnapRAID. The simple reason is scaling. Any of these solutions will easily let you add storage as you need it without affecting the existing data pool, whereas you cannot add new drives to an existing RAIDZ vdev without destroying and rebuilding it. Second, with the above solutions only the drive in use spins up... ZFS or any other RAID solution means spinning up every drive even if you only need one small file, which adds wear and tear and unnecessary power usage.
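(For the pooling side of that, mhddfs is literally a one-liner: it merges already-mounted disks into a single view, and only the disk actually holding the file you touch spins up. A sketch with placeholder mount points:

    # merge two independent disks into one pooled mount point
    mhddfs /mnt/disk1,/mnt/disk2 /mnt/pool -o mlimit=4G,allow_other

The mlimit option just sets how much free space a disk must keep before writes spill over to the next one.)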

That said, ZFS can give you a performance boost that other solutions cannot, though how much that affects you depends on several factors. If your server will only serve a small handful of people, then that performance bump isn't that useful.

Ultimately, whichever solution you choose doesn't get you out of backing things up. Ideally you want two complete backups. Because of the expense of building these systems, I personally would almost rather see someone run RAID 0 with two off-system backups than RAID 5 or 6 with one backup or none. Rebuilds are incredibly tough on the remaining array, and it isn't uncommon to see a cascading failure occur simply from the additional burden.

Good luck!
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
Just to get this out of the way, I wouldn't bother with FreeNAS/ZFS at all unless you have ECC RAM. While there's nothing keeping you from running without it, the nature of how ZFS works means you have a significantly higher chance of data loss in the event of a bad memory read than you will with other options. Plenty of documentation, both official and unofficial, backs this up. The FreeNAS team specifically outlines this on their site.

This gets repeated a lot, and it is incorrect. There is a good summary here:

http://www.openoid.net/will-zfs-and-non-ecc-ram-kill-your-data/

So what does your evil RAM need to do in order to actually overwrite your good data with corrupt data during a scrub? Well, first it needs to flip some bits during the initial read of every block that it wants to corrupt. Then, on the second read of a copy of the block from parity or redundancy, it needs to not only flip bits, it needs to flip them in such a way that you get a hash collision. In other words, random bit-flipping won’t do – you need some bit flipping in the data (with or without some more bit-flipping in the checksum) that adds up to the corrupt data correctly hashing to the value in the checksum. By default, ZFS uses 256-bit SHA validation hashes, which means that a single bit-flip has a 1 in 2^256 chance of giving you a corrupt block which now matches its checksum. To be fair, we’re using evil RAM here, so it’s probably going to do lots of experimenting, and it will try flipping bits in both the data and the checksum itself, and it will do so multiple times for any single block. However, that’s multiple 1 in 2^256 (aka roughly 1 in 10^77) chances, which still makes it vanishingly unlikely to actually happen… and if your RAM is that damn evil, it’s going to kill your data whether you’re using ZFS or not.
Your non-ECC RAM actually has to be far worse than just defective. It has to be an active, hyper-cognizant agent that tries to create hash collisions for this to be a problem.

Also, this remark from an actual Sun ZFS developer:

http://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=26303271#p26303271

There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
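
For the curious, that unsupported flag is set like this on ZFS-on-Linux (a sketch; on illumos the equivalent went into /etc/system):

    echo 0x10 > /sys/module/zfs/parameters/zfs_flags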
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
This gets repeated a lot, and it is incorrect. There is a good summary here:

http://www.openoid.net/will-zfs-and-non-ecc-ram-kill-your-data/

Your non-ECC RAM actually has to be far worse than just defective. It has to be an active, hyper-cognizant agent that tries to create hash collisions for this to be a problem.

Also, this remark from an actual Sun ZFS developer:

http://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=26303271#p26303271

I concur for the most part. My problem with ZFS on non-ECC is that it's simply another possible failure point that doesn't exist in an ECC environment. Using non-ECC RAM is completely reasonable as long as the user is aware of the possible downside. It isn't that ZFS needs ECC RAM to stay healthy... it's that the checksum has to assume everything in memory is good and will eat itself alive on that assumption should something go wrong with the memory. ECC RAM greatly mitigates that risk. That said, this is a home server, and as long as the user keeps two backups... I say WTF, go for it. I'll stay consistent with my opinion that backups are more important than uptime in a non-critical environment. You just won't be able to watch Bonanza for a few hours while you rebuild.

To quote the same source you cited:

"There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS."
 
Last edited:

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
everything in memory is good and will eat itself alive on that assumption should something go wrong with the memory.
This is not at all what happens.

If zfs asks a disk for a block of data, and that block's computed checksum doesn't match the stored checksum, it will ask for a redundant copy (either a copy, or reconstructed from parity or whatever). If the copy's computed checksum matches the stored checksum, then it will re-write the original block and log the error correction. But if the copy's checksum DOES NOT match its stored checksum, and there is no additional redundancy, zfs will log an unrecoverable error and move on.

It will not "eat itself alive". The only way you could have something like that happen is if the RAM were able to commit a bit flips that maliciously created hash collisions.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
This is not at all what happens.

If zfs asks a disk for a block of data, and that block's computed checksum doesn't match the stored checksum, it will ask for a redundant copy (either a copy, or reconstructed from parity or whatever). If the copy's computed checksum matches the stored checksum, then it will re-write the original block and log the error correction. But if the copy's checksum DOES NOT match its stored checksum, and there is no additional redundancy, zfs will log an unrecoverable error and move on.

It will not "eat itself alive". The only way you could have something like that happen is if the RAM were able to commit a bit flips that maliciously created hash collisions.

My word choice was probably not ideal, but the metaphor still holds. Sun recommends ECC for ZFS, and they wrote the code. The FreeNAS guys recommend ECC for ZFS, and we are talking about FreeNAS. We can go in circles about whether ECC RAM is necessary or a luxury, but for the most part it is universally accepted that ECC RAM is preferable to non-ECC for ZFS. This opinion did not appear out of thin air, and it came from people far more knowledgeable about this subject than you or I (presuming you are, as I am, a well-informed hobbyist). If you are a professional ZFS IT specialist, then of course I'll consider your words more carefully.

Are you arguing that this opinion about ZFS/ECC is pure sensationalism or paranoia? I'm willing to bet that there are people out there using ZFS with non-ECC who have had zero problems. I'm certainly not saying that using ZFS with non-ECC guarantees problems... far from it; however, from a data-management standpoint, running ECC RAM with ZFS is superior to using non-ECC RAM. Period. Is it 1% better or 75% better? That is subjective, and I'm not here to make that case. I'm just sharing an opinion that is commonly held throughout the ZFS community as well as by the authors of ZFS themselves.

It's one thing to say that you can get by with non-ECC RAM in a budget situation, but it is another thing to discourage the use of ECC RAM by arguing that the risk is overblown. We are talking about file servers, whose job is quite literally the management of data. The risk of data corruption of any type should be mitigated where it is reasonable to do so.

But I digress; we are talking about a home server, so in that spirit I'll cede the high ground. The risk is real, albeit not as decisive here. I appreciate the responses. This has been an instructive thread.

https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
I concur with the general recommendation of using ECC RAM to store data that I care about, independently of the filesystem that data is living on. You're objectively "safer" (whether it's 1%, 10%, or 0.01% or whatever) and I'm not aware of a situation where ECC RAM would be worse for data integrity.

I don't agree that there are additional risks unique to ZFS with regards to the use of ECC RAM. That is to say, if I were to interchange a pair of ZFS mirror vdevs with an XFS filesystem on top of an mdadm RAID 10, ZFS is, in the worst-case scenario, NO WORSE than XFS, and likely to be much better because of the redundancy + end-to-end checksumming.
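Concretely, the two setups I mean would be built something like this (a sketch, placeholder devices):

    # ZFS: two mirror vdevs, striped (the ZFS analogue of RAID 10)
    zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

    # mdadm RAID 10 with XFS on top
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.xfs /dev/md0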

I don't like saying something is "sensationalism" or "paranoia", but I think the opinion that ZFS has ADDITIONAL risks above and beyond the risks of some other filesystem when it is used w/o ECC RAM comes from a misunderstanding of how ZFS scrubbing works, how checksum errors are handled, and what memory errors actually do. The thread you linked to on the FreeNAS forum is one of the sources of this misunderstanding.
 

Muadib

Lifer
May 30, 2000
17,965
854
126
I say stick to what you know, and use unRAID 6. I use it, and it took all of 5 minutes to go from 5 to 6, and most of that was spent downloading 6. Why are you thinking of going to FreeNAS?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,450
10,119
126
I say stick to what you know, and use unRAID 6. I use it, and it took all of 5 minutes to go from 5 to 6, and most of that was spent downloading 6. Why are you thinking of going to FreeNAS?

Because I'm cheap, and unRAID costs money?
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
It's just the trial version of 5.x. It's limited to one parity drive and two data drives.

If you can get a relatively recent version of Windows Server (maybe even just standard Windows 8 or higher, Pro version?), you can use Storage Spaces, which in effect offers the same thing.

But I am not at all up to speed on Storage Spaces. Not in the least, so I can't really say much.
 

SViscusi

Golden Member
Apr 12, 2000
1,200
8
81
Also check out NAS4Free; it's closer to the old FreeNAS and a little bit simpler for just NAS functions.
 

PingSpike

Lifer
Feb 25, 2004
21,733
565
126
I just felt the ZFS/FreeNAS implementation was a poor fit for my needs. Everyone recommends it and it sounds like a great solution, but it's complete overkill for what I'm doing with a home file server. I want to spin down drives to save power and build a file server using mismatched disks. I wasn't even sure I needed unRAID's parity, but it adds some basic protection against disk failure that makes using a bunch of disks less risky. They are supposedly working on dual parity for the next release.

A similar free product that I played with for a while is OpenMediaVault with the SnapRAID plugin installed. It doesn't have the built-in KVM virtualization that unRAID just added, but it's got something pretty similar for everything else. It can do disk spindown and store parity data. SnapRAID doesn't compute parity in real time, IIRC; it writes it periodically. But it does do MD5 hashing, I believe, and the parity is stored in a file instead of on an entire disk, making the setup more flexible.
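(The whole SnapRAID side boils down to one small config file plus a periodic sync; a sketch with placeholder paths:

    cat > /etc/snapraid.conf <<'EOF'
    parity /mnt/parity/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    disk d1 /mnt/disk1/
    disk d2 /mnt/disk2/
    EOF
    snapraid sync    # run periodically, e.g. from cron, to update parity

That's also why restores go through the CLI: recovery is done with "snapraid fix".)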

OMV is based on Debian; the SnapRAID plugin is just a front end to that utility. I found OMV harder to use and SnapRAID harder to set up than unRAID, which is pretty much dead simple for the task. My main complaint was that there didn't seem to be a GUI for restoring files with SnapRAID; its best feature still had to be used through the command line. And when I ran OMV as a guest in ESXi, the performance was terrible, just horrible. It ran perfectly fine on bare metal though. I've come to believe there's an ESXi NFS bug or something that I may have been running into, but I couldn't figure it out at the time and gave up.
 