Putting Together a NAS with FreeNAS... RAID5

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
I'm gonna put together a NAS when I get my tax refund in a few weeks, hoping that some of the old hardware I had lying around would work. Unfortunately, the dual-core P4 and the LGA 775 motherboard it was attached to failed to POST, so I tossed 'em. Worked backwards to an old Socket A board and an Athlon XP 2500+ I had lying around, but it required an AGP video card to run, so I tossed those, too. Third in line was an old Socket 478 P4 1.6. It boots and POSTs fine but just would not install FreeNAS to a flash drive or to an HDD; it kept getting stuck on unrecoverable sectors or some crap like that during the install. So I tossed that one, too.

You can skip to here
Wanting to try out FreeNAS anyway, I disconnected all the HDDs from my tower and installed FreeNAS to a flash drive there, with an old IDE drive I still had lying around as storage. It was much easier than I expected and worked fine across my network with the laptops in my household, so I was gung-ho until I discovered that it doesn't support traditional RAID 5. As media storage for BR and DVD rips I really want RAID 5-style redundancy, but a full backup just isn't necessary, since I can always re-rip in the future if rebuilding an array fails. I did a little reading, and apparently ZFS RAIDZ1 is roughly equivalent.

Question 1:
What is the difference between ZFS RAIDZ1 and RAID 5?

Question 2:
Intel Celeron G530 Sandy Bridge 2.4GHz LGA 1155 vs. AMD A4-5300 Trinity 3.4GHz FM2
Of these two 65W CPUs, is the Intel worth $14 more in a NAS setting?
http://www.newegg.com/Product/Produc...82E16819116409
http://www.newegg.com/Product/Produc...82E16819113283

Question 3:
If I use a motherboard that supports RAID 5 (yes I know it isn't true hardware RAID), can I set up the RAID 5 on the motherboard and just import that array into the volume on FreeNAS and not even worry about creating it with FreeNAS?

Question 4:
Looking for 6TB with future expansion possible, so 4x2TB or 3x3TB for a RAID setup?

Question 5:
Is adding drives with FreeNAS or motherboard RAID 5 as easy as just slapping in another HDD or two, or is it a long, complicated process?

Or, finally, maybe someone can talk me into a good 4-bay+ NAS box for $200-$300 that I can throw these HDDs into if my plan just won't work. The reason I lean BYO is so I can add more drives in the future if necessary, and I'd like to create a 2nd 1TB volume for remote backup. It just seems that as I move beyond 4 bays, the price of a box is gonna be prohibitive.
 

Zxian

Senior member
May 26, 2011
579
0
0
I can't say much about FreeNAS or ZFS since I haven't used them personally, but unless you're buying a dedicated hardware RAID card, you will not be able to expand a RAID5 array by adding a new drive to the system. Onboard RAID5 generally has very poor performance and support.

If you want to use RAID5, either get a proper controller or do it in software (mdadm or ZFS).
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
I can't say much about FreeNAS or ZFS since I haven't used them personally, but unless you're buying a dedicated hardware RAID card, you will not be able to expand a RAID5 array by adding a new drive to the system. Onboard RAID5 generally has very poor performance and support.

If you want to use RAID5, either get a proper controller or do it in software (mdadm or ZFS).

Excellent. So I am assuming that to be able to expand in the future, I would need a hardware RAID controller or a NAS system like something from Synology or QNAP. Might not be a bad idea, then, to sacrifice one of the HDDs, go with a RAID controller, and add the additional HDDs as space requires in the future.

So then, if I used a hardware RAID controller, I would set the arrays up in the RAID controller's BIOS rather than in FreeNAS, correct? FreeNAS would just see it as one volume/disk?

How do you know which add-in cards are software and which are hardware RAID?
 
Last edited:

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
http://lime-technology.com/

Using unRAID will allow you to expand your array when the need arises (assuming you either have unused SATA ports or can install a PCIe controller card). unRAID also works across a bevy of devices. Download the free version to make sure it recognizes your MB.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
unRAID looks very interesting because of the ability to upgrade. What's the function of the cache disk? Is it something I'll really want to invest an add'l $100 into?
 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
You are not supposed to use hardware RAID with ZFS. RAIDZ1 offers the same redundancy as RAID5. You cannot add more drives to a RAIDZ vdev later and keep parity, so it's best to begin with what you want, or be prepared to do a sidestep later. RAIDZ2 is recommended for important data.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
You are not supposed to use hardware RAID with ZFS. RAIDZ1 offers the same redundancy as RAID5. You cannot add more drives to a RAIDZ vdev later and keep parity, so it's best to begin with what you want, or be prepared to do a sidestep later. RAIDZ2 is recommended for important data.

Thanks for verifying that for me. That's what I was thinking as far as RAID 5 vs. RAIDZ1 goes. I dug deeper last night and I've dropped the idea of using any kind of add-in RAID card or the RAID in the MB BIOS.

After doing my research for a few hours last night, my dilemma is unRAID vs. FreeNAS. I'm really attracted to unRAID's ability to add drives in the future without having to start over from scratch. However, I'm having real trouble wrapping my head around unRAID's parity-drive/redundancy-without-striping concept, and ZFS appears to be less prone to failure in the first place.

Right now, I'm leaning towards just adding a 5th HDD and going with FreeNAS. I can't imagine that 8TB won't cover all of my storage needs for the next 4-5 years, but I'd hate to be wrong. Like I said, it won't be used for critical data, just Blu-ray rips and audio files. I'll be setting up a real backup for critical files and my music files, but if I have to re-rip DVDs and BRs, that's OK. As long as it provides some reliable redundancy, RAIDZ1 will be fine.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
http://lime-technology.com/

Using unRAID will allow you to expand your array when the need arises (assuming you either have unused SATA ports or can install a PCIe controller card). unRAID also works across a bevy of devices. Download the free version to make sure it recognizes your MB.


Don't know how much you know about unRAID. Thanks for introducing me to the option. I'm really interested in it. My hesitation is probably just based on ignorance but it leaves me with a couple of questions that I can't find the answers to.

#1 - How does the redundancy/parity drive work? I can't wrap my head around how a single drive that is no larger than any of the data drives can mirror the entire array. It seems like it's basically a technique that sums the bits at each position across the drives to EVEN, so that if a drive fails, the parity drive reads that bit position as odd and the missing drive can be rebuilt once you've swapped a new one in. Will there be a benefit to having a better-performing drive as the parity drive?
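If I've got that right, the idea in toy form would be something like this (just my mental model sketched in Python, not unRAID's actual code):

[CODE]
# Toy model of single-drive parity (my mental model, not unRAID's code).
# Each "drive" is a list of byte values; the parity drive stores the XOR
# of the data drives, i.e. "make every bit position sum to even".
drive_a = [0b1010, 0b1111]
drive_b = [0b0110, 0b0001]
drive_c = [0b1100, 0b1010]

# Parity at each position = XOR across the data drives at that position.
parity = [a ^ b ^ c for a, b, c in zip(drive_a, drive_b, drive_c)]

# Lose drive_b, then rebuild it from parity plus the surviving drives.
rebuilt_b = [p ^ a ^ c for p, a, c in zip(parity, drive_a, drive_c)]
assert rebuilt_b == drive_b  # the lost drive comes back bit-for-bit
[/CODE]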

#2 - What is the point of the cache drive? Is it basically the unRAID version of RAID 6 where the system can withstand the failure of 2 drives or is it just "keeping it warmed up in the bullpen" so it can be swapped out more quickly? How much do I really need the extra cache drive?
 
Last edited:

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
After building a couple file servers for home use, my advice is to build properly from the start. You'll spend more money getting better parts/more drives/etc, but after all is said and done you can just let it run 24/7 and not think about it.

Currently I'm running ZFS on Solaris (see link in my sig), but considering a switch over to running ZFS on Ubuntu so I can better run Plex and other apps that are a PITA in Solaris.

One thing to add - upgrading a RAIDZ is actually quite easy if you're just swapping drives out for larger ones. I currently have a 6x2TB array that, as a test, I upgraded from a 5x2TB + 1x750GB array. Once I swapped in the last 2TB drive, the array resilvered itself and upped my storage space without any loss of data. I eventually plan to do the same with 4TB drives when they hit a decent price.
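The whole dance is basically one zpool replace per disk, letting each resilver finish before touching the next. A rough sketch of the sequence (pool and device names are made up; I'd watch zpool status by hand rather than scripting the wait):

[CODE]
import subprocess

POOL = "tank"  # made-up pool name; the device names below are examples too
swaps = [("ada0", "ada6"), ("ada1", "ada7")]  # (old disk, new disk) pairs

# With autoexpand on, the pool grows once every member has been replaced.
subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

for old, new in swaps:
    # Swap one disk at a time so redundancy is never lost on two at once.
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    # ...then wait for `zpool status` to show the resilver has finished
    # before moving on to the next pair.
[/CODE]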
 
Last edited:

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
After building a couple file servers for home use, my advice is to build properly from the start. You'll spend more money getting better parts/more drives/etc, but after all is said and done you can just let it run 24/7 and not think about it.

Currently I'm running ZFS on Solaris (see link in my sig), but considering a switch over to running ZFS on Ubuntu so I can better run Plex and other apps that are a PITA in Solaris.

One thing to add - upgrading a RAIDZ is actually quite easy if you're just swapping drives out for larger ones. I currently have a 6x2TB array that, as a test, I upgraded from a 5x2TB + 1x750GB array. Once I swapped in the last 2TB drive, the array resilvered itself and upped my storage space without any loss of data. I eventually plan to do the same with 4TB drives when they hit a decent price.

So, what you're saying is that if I build a ZFS RAIDZ1 with 4x2TB drives, giving me 6TB of storage space, I could swap all the drives out for 4TB drives, replacing and resilvering one at a time, and end up with 12TB of storage space? It's funny you brought that up, because I was thinking of that exact scenario yesterday and wondering why it couldn't be done, but I figured there was some technical reason I didn't know about and dismissed the idea.

That sure leans things in favor of RAIDZ1.
 
Last edited:

ControlD

Diamond Member
Apr 25, 2005
5,440
44
91
Don't know how much you know about unRAID. Thanks for introducing me to the option. I'm really interested in it. My hesitation is probably just based on ignorance but it leaves me with a couple of questions that I can't find the answers to.

#1 - How does the redundancy/parity drive work? I can't wrap my head around how a single drive that is no larger than any of the data drives can mirror the entire array. It seems like it's basically a technique that sums the bits at each position across the drives to EVEN, so that if a drive fails, the parity drive reads that bit position as odd and the missing drive can be rebuilt once you've swapped a new one in. Will there be a benefit to having a better-performing drive as the parity drive?

#2 - What is the point of the cache drive? Is it basically the unRAID version of RAID 6 where the system can withstand the failure of 2 drives or is it just "keeping it warmed up in the bullpen" so it can be swapped out more quickly? How much do I really need the extra cache drive?

I have been running unRAID for about a year now and I will give it a big thumbs up. It is everything Windows Home Server should have been and as a plus it actually works all of the time.

(1) Honestly, I'm not exactly sure what algorithm is used to calculate the array's parity. I do know that you don't want to lose more than one disk in the array at a time if you want 100% data recovery. The parity calculation takes a good bit of time, so it's best to schedule your monthly (or more frequent) check for a time when you don't need to access the array. I don't believe a super-fast parity disk will help, because all of the drives in your array are being read for the calculation. You just need the parity drive to be at least as large as your biggest data drive.

(2) The cache drive sits outside the array. When you WRITE to the array and have a cache disk, the data first goes to the cache drive, bypassing the normal instant parity update; a script later moves the data into the array. This greatly speeds up writes. You can easily choose which shares use the cache drive if you have critical data (if the cache drive dies with data on it before the move, that data is lost). You can also install plugins and external applications to the cache drive. Many of us have our log files written to the cache as well, to save needless writes on the flash drive.
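Conceptually the mover is doing little more than this (a toy sketch with made-up paths; the real unRAID mover script is far more careful about open files, permissions, and so on):

[CODE]
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache/media")  # made-up mount points for illustration
ARRAY = Path("/mnt/user0/media")

# Walk everything sitting on the cache and relocate it into the
# parity-protected array, preserving the directory layout.
for src in sorted(CACHE.rglob("*")):
    if src.is_file():
        dest = ARRAY / src.relative_to(CACHE)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), dest)  # parity-protected only after this move
[/CODE]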

Give unRAID a shot as well as FreeNAS. I tried them both for some time before going with unRAID. The main reason was that I like being able to add any size of disk to the array without worrying about the normal RAID headaches. Setup is dead simple and it is rock solid, stability-wise. I simulated a failed disk a couple of times and my data always got rebuilt 100%, so that was good enough to make me a customer.
 
Last edited:

Khar00f

Junior Member
Jan 7, 2013
5
0
61
To add to RaiderJ's explanation: I'm in the process of putting together my own home-built NAS and found a YouTube video that's extremely useful and explains the "full" feature set of ZFS. I highly recommend you watch it; they talk about precisely your scenario at one point, as well as some other advantages.

The ZFS portion starts around the 28-minute mark.

Hope it helps

http://youtu.be/0Ug1qCXvZDg

Thank you,
Khar00f
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
So, kind of on a tangent: is there any validity to the idea that buying all the HDDs at once from the same vendor increases the chance of multiple drive failures? Sounds like a bunch of bull to me, since I would think many other factors would matter more.

Or should I just buy a WD Red from Newegg this week, another from Newegg next week, and then repeat for 2 or 3 more weeks?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
@OP
A1. RAID5 is a simple parity scheme that can be used with any filesystem, and there are many hardware implementations of it.
RAIDZ1 is a ZFS-specific implementation that avoids the RAID5 write hole (http://en.wikipedia.org/wiki/RAID_5_write_hole), a design flaw in traditional RAID5.

A list of benefits of ZFS: http://www.enterprisenetworkingplan...842741/10-Reasons-You-Need-to-Look-at-ZFS.htm

A3: You could, but you shouldn't. Let FreeNAS make a pure software array, which is far superior to what is integrated into the mobo.

A4: My biggest advice is to avoid RAID5-type parity.
Get multiple individual RAID1 arrays instead.

Make two RAID1 arrays, each from 2x3TB. When you need more space, add another RAID1 array of whatever size is appropriate and available at the time of the upgrade.
Then, as more space is needed, rotate out a pair of the smallest drives and replace them with big ones.
A single large array with many drives is going to be difficult to upgrade for more space later on.
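In ZFS terms that's just a pool of mirror vdevs. Roughly like this, with made-up pool/device names:

[CODE]
import subprocess

# Made-up pool/device names: a pool built from two mirrored pairs.
subprocess.run(["zpool", "create", "tank",
                "mirror", "ada0", "ada1",
                "mirror", "ada2", "ada3"], check=True)

# Later, when space runs low, grow the pool by adding another pair:
subprocess.run(["zpool", "add", "tank", "mirror", "ada4", "ada5"], check=True)
[/CODE]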

A5: It's easy if you are adding them as separate drives or as their own new arrays. Is that what you meant, or did you ask about expanding an existing array (i.e., going from a RAID5 array of 3 drives to a RAID5 array of 5 drives)?

A6: Wanting both good AND cheap is understandable, but at the price you're looking at, your options are to build your own or settle for just cheap.
 
Last edited:

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
Awesome info. Thanks for all of the input. My costs will be a wash either way, so it comes down to performance vs. maintenance. Anyone with experience, please chime in. The drive setups would look like:

unRAID
500GB Cache
2TB Parity
2TB Data
2TB Data
1TB Data

RAIDz1
4 x 2TB
or
3 x 3TB

#1 - Write hole/bit rot problems... how big of an issue will this really be? This is my biggest reason for considering RAIDZ1 over unRAID. I plan on using a cache drive with unRAID, so that really shouldn't be much of an issue, since the data won't actually go to the array until the scheduled nightly transfer, correct? I understand the risk of using the cache drive if it fails before the nightly transfer.

#2 - How often do arrays really fail in the real world? Regarding RAIDZ1, how long do rebuilds really take when a drive fails in an array? My RAIDZ1 will have 3-5 HDDs in it. It will be used 90% of the time for serving up (reading) video, music and photos to 3 or 4 locations throughout the home. Writing would just be when I transfer a couple of MP3s, photos or a BR/DVD rip a couple of times a week. I have a separate remote backup for critical data, so if the array fails completely it's a headache to re-rip, but not a crisis.

#3 - How long will my RAIDZ1 array be offline for weekly scrubs? Is it something that can just be set up to happen from 1am to 4am on a Wednesday, or does the array go down for an extended period of time? Does resilvering the array after a failed HDD really take a week? How often do you really see a 2nd HDD fail during the resilver?

In short, the maintenance and the rebuilding in the event of HDD failure on a ZFS RAIDZ1 setup are my only concerns. I understand that RAID is there for protection and that arrays should always be built with adequate redundancy, so the conversations I run across seem REALLY skewed toward worst-case scenarios. I just want a real-world feel for the kind of maintenance a RAIDZ1 setup would require. If it's that much headache, I'll just stick with unRAID.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
1. On HDDs it's more likely to be a bit flip than bit rot... I have experienced several random data corruptions on ZFS and on Windows; ZFS fixed them, Windows didn't (and neither will unRAID).
The cache drive still leaves a RAID5-style write hole; it just takes losing power during the sync to hit it.

2. A degraded array that is rebuilding can still be used; it's just slow and dangerous (overuse it and you slow the rebuild process too much and risk a second failure). I have yet to actually need to do a rebuild on RAIDZ.

3. Scrubs are not done offline. The array will be online and fully accessible while scrubbing; it will just be slower if you try to use it. The scrub process reads every single drive in the array, verifies the data, and fixes any errors encountered. It's basically chkdsk for ZFS.
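Kicking one off, or checking on a running one, is a one-liner each; sketch with a made-up pool name:

[CODE]
import subprocess

POOL = "tank"  # made-up pool name

# Start a scrub; the pool stays online and usable while it runs.
subprocess.run(["zpool", "scrub", POOL], check=True)

# Check progress or results at any time.
subprocess.run(["zpool", "status", POOL], check=True)
[/CODE]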
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
1. On HDDs it's more likely to be a bit flip than bit rot... I have experienced several random data corruptions on ZFS and on Windows; ZFS fixed them, Windows didn't (and neither will unRAID).
The cache drive still leaves a RAID5-style write hole; it just takes losing power during the sync to hit it.

Hadn't thought of that shortcoming of unRAID.

2. A degraded array that is rebuilding can still be used; it's just slow and dangerous (overuse it and you slow the rebuild process too much and risk a second failure). I have yet to actually need to do a rebuild on RAIDZ.

How long have you been running this array?

3. Scrubs are not done offline. The array will be online and fully accessible while scrubbing; it will just be slower if you try to use it. The scrub process reads every single drive in the array, verifies the data, and fixes any errors encountered. It's basically chkdsk for ZFS.

So, set up the scrubs to happen for a couple of hours in the middle of the night and it shouldn't be an issue, right?

Excellent, thanks.
 

Nnyan

Senior member
May 30, 2003
239
1
76
I've just gone through this process myself (I'm in the middle of building everything), so I thought I would chime in. As you've become aware, every decision has its benefits and its negatives; you just have to prioritize. Some background on why I made my decisions: I did not want any scheme where I could only lose one drive before losing data.

unRAID: typically you can lose one drive before data loss, but you can add additional parity drives as a remedy. If you do end up losing a drive, you only lose the data on that one drive, and all the other drives can be plugged into a PC and accessed.

Key Pluses:
- Will power down your drives when not in use, since data is not striped across drives.
- Ability to mix and match drives of various sizes.
- Expandable (just add a drive)
- Very minimal requirements (cpu, memory)

Negatives/Consideration:
- One developer; he's doing a great job, but it's something to keep in mind.
- No matter how many drives you have, max performance is roughly that of one drive.
- No fancy error correction/checksumming like ZFS

ZFS:
- Lots of fancy features (awesome snapshots, etc.) that may or may not matter to you.
- Checksums, so it self-heals corrupted data; add in ECC memory and you are in good shape.
- Lots of options (mirror, stripe, RAID 5/6/10 equivalents)

Negatives:
- You cannot expand a current vdev; you have to add additional vdevs of the same type (best practice) to expand your pool.
- The pool can't power down much, since any activity keeps the whole array active.
- Can be a pain to set up if you don't know Linux or get unlucky with hardware.
- Performance is equal to about one drive PER vdev. A vdev of 10 drives will perform about the same as a single drive; two vdevs of 5 drives each give you the performance of two drives.
- Recommended: 1GB of memory for every TB of data storage. If you are doing dedupe, this increases significantly.

Now, I had been under the impression that if you have a vdev in your ZFS pool and you swap in a larger hard drive, you don't get the benefit of the added capacity until ALL the hard drives in that vdev have been upgraded. Ex: with three 2TB drives in a RAIDZ1, if you replace one drive with a 3TB drive, you don't get the extra TB until the other two drives are upgraded as well. But things may have changed since I learned that. I will test this on my HP MicroServer ZFS test bed and see.

Your questions:

#1 The write hole is not an issue with ZFS, but in hardware RAID it can significantly impact write performance. The impact also depends on your RAID card; the cheaper the card, the more you'll see it. I would stay away from hardware RAID for this and other reasons.

#2 This depends on many factors (how heavy the use, RAID level, hard drives used, environment, etc.).

#2B The more data you have, the longer the rebuild; it also depends on the RAID type, your controller, etc. I've seen these things take days, and I've heard of others taking longer with crappy hardware.

#3 I have auto-scrub on and haven't really looked at this, but I haven't noticed any significant downtime (though my ZFS test bed has not been running all that long).

Before I go on, keep in mind that rebuilding an array is heavy-duty use; if any drive is going to fail, it will do so during a rebuild. During my testing I gave up on RAIDZ1 (RAID5) because of the one-drive-failure limit before you lose data and the slow rebuild times. I then went to mirrored pairs (I read the same article as blastingcap and discussed this with Constantine; he agrees that with more than 2 drives, RAIDZ2 may be better), which got me an easy upgrade path (2-drive vdevs) and fast performance (multiple vdevs), but I had several drives fail during my rebuild testing. If that had been real live data, I would have lost it.

For that reason I ended up compromising on RAIDZ2 (RAID 6). It allows me to lose two drives without data loss, and if I lose one drive and then a second during the rebuild, I'm still OK. That is safe enough for our critical data. For our media server (movies, music, etc.) I went with unRAID.
 
Last edited:

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81

Good read, but I just don't see the necessity of mirroring for my purposes. I don't need the read speed and the data isn't critical. I'm not afraid to spend an extra 30% to get some redundancy, but I don't feel that my situation really warrants investing twice as much in HDDs. The expandability is nice, but if that becomes my main concern, I'll just go unRAID. If I go ZFS, I'll just buy enough up front to fulfill my storage needs for 3-5 years, and then as $/GB comes down in the future I'll create a new array with larger drives or just utilize whatever new technology is out by then.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
I've just gone through this process myself (I'm in the middle of building everything), so I thought I would chime in. As you've become aware, every decision has its benefits and its negatives; you just have to prioritize. Some background on why I made my decisions: I did not want any scheme where I could only lose one drive before losing data.

unRAID: typically you can lose one drive before data loss, but you can add additional parity drives as a remedy. If you do end up losing a drive, you only lose the data on that one drive, and all the other drives can be plugged into a PC and accessed.

Key Pluses:
- Will power down your drives when not in use, since data is not striped across drives.
- Ability to mix and match drives of various sizes.
- Expandable (just add a drive)
- Very minimal requirements (cpu, memory)

Negatives/Consideration:
- One developer; he's doing a great job, but it's something to keep in mind.
- No matter how many drives you have, max performance is roughly that of one drive.
- No fancy error correction/checksumming like ZFS

ZFS:
- Lots of fancy features (awesome snapshots, etc.) that may or may not matter to you.
- Checksums, so it self-heals corrupted data; add in ECC memory and you are in good shape.
- Lots of options (mirror, stripe, RAID 5/6/10 equivalents)

Negatives:
- You cannot expand a current vdev; you have to add additional vdevs of the same type (best practice) to expand your pool.
- The pool can't power down much, since any activity keeps the whole array active.
- Can be a pain to set up if you don't know Linux or get unlucky with hardware.
- Performance is equal to about one drive PER vdev. A vdev of 10 drives will perform about the same as a single drive; two vdevs of 5 drives each give you the performance of two drives.
- Recommended: 1GB of memory for every TB of data storage. If you are doing dedupe, this increases significantly.

Now, I had been under the impression that if you have a vdev in your ZFS pool and you swap in a larger hard drive, you don't get the benefit of the added capacity until ALL the hard drives in that vdev have been upgraded. Ex: with three 2TB drives in a RAIDZ1, if you replace one drive with a 3TB drive, you don't get the extra TB until the other two drives are upgraded as well. But things may have changed since I learned that. I will test this on my HP MicroServer ZFS test bed and see.

Your questions:

#1 The write hole is not an issue with ZFS, but in hardware RAID it can significantly impact write performance. The impact also depends on your RAID card; the cheaper the card, the more you'll see it. I would stay away from hardware RAID for this and other reasons.

#2 This depends on many factors (how heavy the use, RAID level, hard drives used, environment, etc.).

#2B The more data you have, the longer the rebuild; it also depends on the RAID type, your controller, etc. I've seen these things take days, and I've heard of others taking longer with crappy hardware.

#3 I have auto-scrub on and haven't really looked at this, but I haven't noticed any significant downtime (though my ZFS test bed has not been running all that long).

Before I go on, keep in mind that rebuilding an array is heavy-duty use; if any drive is going to fail, it will do so during a rebuild. During my testing I gave up on RAIDZ1 (RAID5) because of the one-drive-failure limit before you lose data and the slow rebuild times. I then went to mirrored pairs (I read the same article as blastingcap and discussed this with Constantine; he agrees that with more than 2 drives, RAIDZ2 may be better), which got me an easy upgrade path (2-drive vdevs) and fast performance (multiple vdevs), but I had several drives fail during my rebuild testing. If that had been real live data, I would have lost it.

For that reason I ended up compromising on RAIDZ2 (RAID 6). It allows me to lose two drives without data loss, and if I lose one drive and then a second during the rebuild, I'm still OK. That is safe enough for our critical data. For our media server (movies, music, etc.) I went with unRAID.

Excellent. There is some great info here.

I'd read about the benefit of ECC with ZFS. This may sound newb, but is using ECC RAM as simple as just buying ECC RAM? You don't have to worry about motherboard and CPU support?

In the back of my mind, I've been toying with the idea of RAIDZ2. I'll have to do the math and see if the investment makes sense, and with which hardware choices.
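Quick napkin math on usable space, using raw drive sizes and ignoring filesystem overhead:

[CODE]
# Napkin math: usable space = (drives - parity drives) * drive size.
# Raw sizes only; real formatted capacity will come out a bit lower.
def usable_tb(drives, size_tb, parity):
    return (drives - parity) * size_tb

print(usable_tb(4, 2, 1))  # 4x2TB RAIDZ1 -> 6 TB, survives 1 failure
print(usable_tb(5, 2, 1))  # 5x2TB RAIDZ1 -> 8 TB, survives 1 failure
print(usable_tb(5, 2, 2))  # 5x2TB RAIDZ2 -> 6 TB, survives 2 failures
print(usable_tb(6, 2, 2))  # 6x2TB RAIDZ2 -> 8 TB, survives 2 failures
[/CODE]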
 

ControlD

Diamond Member
Apr 25, 2005
5,440
44
91
Excellent. There is some great info here.

I'd read about the benefit of ECC with ZFS. This may sound newb, but is using ECC RAM as simple as just buying ECC RAM? You don't have to worry about motherboard and CPU support?

In the back of my mind, I've been toying with the idea of RAIDZ2. I'll have to do the math and see if the investment makes sense, and with which hardware choices.

You will need a motherboard that supports ECC RAM. Get ready to spend some cash; there are probably some cheap ECC boards out there, but in general you will be getting into server-class hardware when talking about ECC support.
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
You will need a motherboard that supports ECC RAM. Get ready to spend some cash; there are probably some cheap ECC boards out there, but in general you will be getting into server-class hardware when talking about ECC support.

That's what I thought. After a cursory glance at Newegg for just a cheap LGA 1155 CPU, MB and memory, it'll be at least $300 more for the privilege of using ECC. That buys 2 or 3 more 2TB HDDs. RAIDZ2 would be cheaper and is starting to look very attractive.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
That's what I thought. After a cursory glance at Newegg for just a cheap LGA 1155 CPU, MB and memory, it'll be at least $300 more for the privilege of using ECC. That buys 2 or 3 more 2TB HDDs. RAIDZ2 would be cheaper and is starting to look very attractive.

The cheaper, although slightly higher-wattage, way to get ECC: get a 7-series or later ASUS AM3+ mobo (you may have difficulty finding anything less than an 880G chipset these days, though) and a Sempron 140 or 145, and bang, you have access to unregistered ECC RAM and a CPU that is heaps above Atom and can handle most home server loads without complaint. (If you need more CPU horsepower, you can get an Athlon II X2 or X4, or an -e variant.) If you already have a 7-series or later ASUS mobo, you can just repurpose that. I did, but my 7-series mobo broke after I accidentally touched it with a screwdriver, and by "touched" I mean gouged. Otherwise I would have had everything but the ECC RAM already on hand to make a ZFS NAS box.

P.S. I don't think you read the entire article; expanding a mirror, etc., is easier. Also, the guy who said you need to know Linux to run ZFS is mistaken on that part: ZFS is not native to Linux in the first place, and FreeNAS/NAS4Free/OI+Napp It are easy ways to set up a NAS without having to know much of anything about FreeBSD/Linux/etc.
 
Last edited: