You won't, without ECC, but the chances are small anyway. Hence ECC remaining a hot topic indefinitely.
Very nice explanation here:
https://pthree.org/2013/12/10/zfs-administration-appendix-c-why-you-should-use-ecc-ram/
(very informative series)
My QNAP TS-451 failed after a couple of months of usage. It was working fine, then all of a sudden at 5AM one morning I was woken up by a failed NAS making loud beeps. It turns out the motherboard went bad. I had to RMA it and pay $20 for shipping. It took about a week to get it back and everything is working again, but my confidence in the NAS's robustness is much lower than before. In fact, I purchased the NAS because my external Seagate drive died and took all of the data with it. Now I see why keeping important data in the cloud matters so much. I was planning to back up my important software projects locally to my NAS, but now I will just host them on GitHub.
Honestly, the ease of setup on these consumer NASes is really nice. Things that would take tons of configuration on Linux just plain work. However, the hardware is very cheaply made and QNAP's QTS is riddled with typos and shoddy interfaces. If this fails again I will roll my own with FreeNAS or a Linux distro.
Maybe at 1 or 2 drives but they get rather expensive for the larger capacities.
Dave said: That's about right - the motherboards are typically around $100 more, and the RAM itself is just about double the cost.
(Not trying to take issue with your post or anything, just explaining the pricing slightly differently for the benefit of those reading.)
Amazon prices:
8 bay Synology $1000 to $1399
5 bay Synology $815
4 bay Synology $583
Those are prices without disks, aren't they?
My server cost about $550 plus the cost of disks, and it'll beat the crap out of one of those Synology units all day.
I understand the appeal of the appliances - really, truly I do. But the cost/benefit is SOOOOOO in favor of BYO if you have the technical know-how to do so... yikes. It's just painful to watch.
I completely agree with you. It is certainly cost-effective to build your own NAS. More importantly, the flexibility you get on your own device is unmatched. I have my own PHP and shell scripts that I use to identify duplicate files, compare folders, sort photos, etc. (a rough sketch of the duplicate-finder idea is below). I cannot use any of those on any ready-made NAS machine (speaking from experience with QNAP).
I think it is cheaper to buy if you only want a two-drive NAS though; those go for around $100.
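For what it's worth, the duplicate-file idea can be as small as a one-liner. This is only a sketch, not the actual script, and the path is a placeholder:

    # hash every file under /data (placeholder path) and print the groups that share a checksum
    find /data -type f -exec md5sum {} + | sort | uniq -w32 -D

(GNU uniq's -w32 compares only the 32-character MD5 prefix, and -D prints every line that repeats.) Point being: you can't run that kind of thing on most appliance firmware.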
I assembled a test setup with 6x3TB/7200RPM consumer-grade drives (Toshiba and Seagate) in RAID-Z2 under Ubuntu 14.10. The setup is based on an old AMD CPU and 4GB of DDR2. With an empty pool I got 260 MB/s with no compression, and that speed remained more or less steady as the pool filled: at 96% full (400GB remaining out of 11TB) I am still getting 247 MB/s. Bandwidth was measured with the dd command, creating zero-filled files of 100GB each.
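Roughly speaking, the measurement looks like this (pool and file names are placeholders):

    # write a 100GB zero-filled file into the pool and let dd report the rate
    dd if=/dev/zero of=/tank/ddtest bs=1M count=102400 conv=fdatasync

With compression disabled the zeroes are actually written out, so dd's reported rate is a fair sequential-write number; conv=fdatasync makes dd flush before reporting.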
The CPU may be the bottleneck. Load average was around 7 on the two-core CPU.
Probably this. Although that's still faster than dual-1GbE and means ZoL is probably "fast enough" even if it's not perfect.
The other potential concern with mixed drives in a RAID is that if one is a slower model it may hold the rest back. (Not sure how well ZFS accommodates this, or if it tries at all.)
ASRock C2550D4I (up to 12 disks, overpriced)
Money invested in 'real hardware' gets you much better quality per dollar spent. Plus it's fun too, and it's easy to get excited once you get into the cool stuff.
The CPU on that board is Avoton (business-class Atom) and comes as either a quad-core or an octo-core: the C2550 is the quad-core, the C2750 the octo-core. But you pay a lot for those extra cores...
Generally, ZFS is not very CPU-intensive, but it is very RAM-intensive. Bottlenecked RAM will show up as CPU usage instead. But generally the CPU doesn't have that much to do, unless you enable features like deduplication/compression/encryption. RAM quantity determines how much performance potential you can squeeze out of your disks; ZFS basically scales with RAM quantity.
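To give an idea of the main knob involved: the ARC (ZFS's RAM cache) can be capped if you want to reserve memory for other things. The sizes below are just example values, not recommendations:

    # FreeBSD: cap the ARC at 8 GiB (example) in /boot/loader.conf
    vfs.zfs.arc_max="8G"

    # ZFS-on-Linux: same idea via a module parameter in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592    # 8 GiB, in bytes

Left alone, ZFS will simply use most of the free RAM for the ARC, which is normally exactly what you want.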
You should know that the chipset only provides 6 SATA ports (4x SATA/300 and 2x SATA/600) while the other 6 ports are provided by two Marvell controllers. Those do not provide full bandwidth, but are still good AHCI controllers if properly supported. ESXi might have problems, for example. But generally BSD and Linux should work great with it.
The power consumption will be a tad higher, since the board you quoted has IPMI, meaning you get remote management of power and console. It can be a nice feature, but the downside is that the BMC chip that makes it possible uses as much power as a whole computer system (8W). So the power consumption of such a board is doubled to 16W. If you use an ATX power supply this may grow well into the 25-30W range.
ZFS-on-Linux is not nearly as good as ZFS on a BSD operating system, however: performance will be lower, there are fewer features, and you may encounter some stability issues. But it is manageable if you really like Linux.
Interesting info in here. I've been considering a QNAP TS-451, but I have a Core 2 system on a microATX board just lying around. It doesn't have a ton of SATA ports, though. Are there any reasonably priced PCIe SATA controllers that are known to work well with ZFS?
I recommend the IBM M1015, which also has the LSI SAS2008 chip, because this controller can be flashed with IT-firmware. This means it will function in HBA mode instead of RAID mode. HBA is the term for a regular 'controller' (host adapter) without any RAID functionality. The controller will work in RAID mode as well, but has limitations and issues that the non-RAID IT-firmware does not.
For example, the LSI controllers yield I/O errors on all disks if just one disk has a bad sector. This was fixed only days ago in FreeBSD 11-CURRENT branch. Not sure about Linux.
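For those wondering what the IT-flash involves, the outline usually quoted for SAS2008-based cards looks roughly like this. Treat it strictly as a sketch: firmware file names and versions vary, the M1015 in particular often needs an extra vendor-specific erase step first, so follow a proper guide for your exact card.

    sas2flash -listall                      # identify the card and note its SAS address first
    sas2flash -o -e 6                       # erase the existing (RAID/IR) flash
    sas2flash -o -f 2118it.bin              # write the IT-mode firmware image (file name varies)
    sas2flash -o -sasadd 500605bxxxxxxxxx   # restore the SAS address you noted (placeholder value)

The erase step wipes the SAS address, so write it down before doing anything else.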
A card with an LSI chip will also consume a fair amount of power (6-8 watts) when doing nothing, whereas AHCI controllers usually sit at 0.3-0.6 watts. The Intel controller will use zero watts with DIPM enabled, which is just awesome.
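On Linux the knob for this looks roughly like the below; whether it actually engages DIPM depends on the kernel, controller and drive, so consider it a sketch:

    # show the current SATA link power policy per host
    cat /sys/class/scsi_host/host*/link_power_management_policy

    # switch every host to the most aggressive policy
    for h in /sys/class/scsi_host/host*/link_power_management_policy; do
        echo min_power > "$h"
    done

On FreeBSD the equivalent is the ahci(4) pm_level loader tunables.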
As Emulex said: these LSI cards can often be picked up cheaply, because people who buy a ready-made server frequently don't need the bundled controller and sell it off. That is why the IBM M1015 in particular is popular, I think; quite a few units were sold on eBay. Very popular card for ZFS, but not my personal favourite. I prefer AHCI controllers instead. Those often have less bandwidth per port, though.