YANB (Yet Another NAS Build) - this thread is not like the others, I hope!


destrekor

Lifer
Nov 18, 2005
28,799
359
126
This is where you start looking in the refurb market if you don't want to spend some cash. Enterprise hardware isn't cheap, unfortunately. You are also looking at V5, which is Skylake and basically the newest, most expensive option out right now. I'd look at V4 or V3 (Broadwell/Haswell). I just upgraded an older host in my business to an E3-1230v3. I was using the 1220 variant, but it lacked HT, so I upgraded. The 1230 with its 8 threads is now almost overkill for me. If it were me, I'd buy a yesteryear model processor used on eBay.

Unless you are building out a $10K+ SAN, you don't need 10GbE. It just isn't needed in a home networking environment. One gigabit is fine.

I was just thinking that an 8-disk RAIDZ2 array WILL saturate gigabit LAN. And if I just happened to want to transfer from two systems, now the LAN is a bottleneck. Perhaps it won't happen often, but I was just looking at it from the POV that if it's bundled on the motherboard, it certainly would be nice to take advantage of it. With a 10Gb link from NAS to switch, while the rest of the network remains gigabit, the switch can at least dish out full-capacity gigabit to multiple systems.
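
For rough numbers (the per-disk sequential rate is an assumption; real arrays vary), the back-of-envelope looks something like this:

Code:
# Sketch: why an 8-disk RAIDZ2 can outrun gigabit (per-disk rate is an assumed ~150MB/s)
awk 'BEGIN {
  data_disks = 8 - 2        # RAIDZ2 spends two drives on parity
  per_disk   = 150          # MB/s sequential, a rough figure for modern spinners
  gbe        = 1000 / 8     # 125MB/s theoretical on gigabit, ~112MB/s in practice
  printf "array ~%dMB/s vs gigabit ~%dMB/s\n", data_disks * per_disk, gbe
}'
# => array ~900MB/s vs gigabit ~125MB/s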
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
Sure, if all you're doing is moving files around from one computer to another, then you could saturate the link between them. But... will this be normal everyday behavior for your network? I'd do NIC teaming before 10GbE in a home network, and even then, getting your money's worth out of teaming would be dubious at best.

A full BD rip without transcoding will consume at worst 50Mbps of bandwidth. So on a single 1Gbps network you could watch 20 BD rips at the same time, assuming your NAS disks could keep up with the read demand.
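
If teaming ever did look worthwhile, here is a minimal sketch of LACP aggregation on FreeBSD/FreeNAS (interface names are examples, and the switch ports would need matching LACP config):

Code:
# /etc/rc.conf - two-port LACP lagg on FreeBSD (igb0/igb1 are example NICs)
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"

Note that a lagg helps multiple concurrent clients rather than a single transfer - one flow still hashes onto one physical link.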
 
Feb 25, 2011
16,823
1,493
126
Well, let's say the router is getting stressed fairly hard performing AV, my download is maxed out with large files headed to the NAS, and I'm also concurrently watching something from NAS storage that Plex is transcoding for subtitles (most of the time I should rarely ever need transcoding for downscaling or anything - I don't watch on mobile or tablets; I need my big screen!).

Think that's going to make the system crawl? With the quad-core mentioned previously? And with the 6-core Xeon D?

I figure that right there would be the heaviest use at any given time, and likely won't happen often. But I could get it set up to grab and stream through the NAS, so that's similar: full download speed + concurrently watching it.

Just not sure what kind of CPU usage we're talking about here.

A couple streams - even HD ones - and network firewall/routing duty? Should be fine. File transfers aren't all THAT demanding.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Sure, if all you're doing is moving files around from one computer to another, then you could saturate the link between them. But... will this be normal everyday behavior for your network? I'd do NIC teaming before 10GbE in a home network, and even then, getting your money's worth out of teaming would be dubious at best.

A full BD rip without transcoding will consume at worst 50Mbps of bandwidth. So on a single 1Gbps network you could watch 20 BD rips at the same time, assuming your NAS disks could keep up with the read demand.

True, it won't be anywhere near a common occurrence that I need to do max file transfers on multiple systems. I'd just have to make sure to carefully schedule any system backups and Time Machine activity to minimize overlap. More than anything, doing too much at once will more likely force the NAS to max out IOPS, and it's always preferable to minimize that.

A couple streams - even HD ones - and network firewall/routing duty? Should be fine. File transfers aren't all THAT demanding.

Well, the worry is more than just a couple of HD streams. I'm worried about heavy network/firewall/AV activity while also transcoding BD rips. What kind of CPU and memory usage will that demand?
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
This is where you start looking in the refurb market if you don't want to spend some cash. Enterprise hardware isn't cheap, unfortunately. You are also looking at V5, which is Skylake and basically the newest, most expensive option out right now. I'd look at V4 or V3 (Broadwell/Haswell). I just upgraded an older host in my business to an E3-1230v3. I was using the 1220 variant, but it lacked HT, so I upgraded. The 1230 with its 8 threads is now almost overkill for me. If it were me, I'd buy a yesteryear model processor used on eBay.

Unless you are building out a $10K+ SAN, you don't need 10GbE. It just isn't needed in a home networking environment. One gigabit is fine.

It does seem that the C236 chipset is fantastic, so there's one thing to consider. And then there's the fact that the E3-1200 v3 CPUs don't appear to be much cheaper when buying new retail, at least not that I found during my brief searches. Perhaps that's because I'm only looking at the low-power parts, but low power is still something I very much want to focus on. Maybe it won't be the most cost-effective solution in the long term, but I'd know that my electric bill will be lower and, more importantly, that components will be kept cooler, which helps everything last longer. Keeping them cool will also be easier to do with quiet fans, so I won't have to stress about the location of the rack. In the short term it'll need to look "nice" in the living room but stay out of the main eyesight, like in a rear corner or something. In the long run, whenever I get my own house, I'll build a network/AV closet of sorts.

And as I've mentioned before, and admittedly it's just a personal hangup I have, I just want to buy actual server parts brand new, unless I can get them for free from work or something so that there's literally zero investment.

I won't mind buying a nice capable managed switch for a great price, but I just want that peace of mind for the actual server.


And another thing I forgot to mention regarding 10GbE - I do get that I might not get full utilization at this time in a home environment, but I figure it's nice to have that future-proof capability, and if a board has 2x 1000BASE-T ports and 2x 10GBASE-T ports, I can in effect still treat that as 4x 1000BASE-T ports until the need arises to actually utilize 10GbE. SFP+ complicates matters, and I'd like to avoid that for sure, but if need be, I could still get a cheap small gigabit switch with gigabit SFP uplink ports and use twinax DAC between that and the SFP+ 10GbE ports - at least I think that would work.

But I am trying hard to find the right hardware combination that is most cost-effective for what I want in the end, without letting the IO connectivity be the driving factor.
 
Feb 25, 2011
16,823
1,493
126
True, it won't be anywhere near a common occurrence that I need to do max file transfers on multiple systems. I'd just have to make sure to carefully schedule any system backups and Time Machine activity to minimize overlap. More than anything, doing too much at once will more likely force the NAS to max out IOPS, and it's always preferable to minimize that.



Well, the worry is more than just a couple of HD streams. I'm worried about heavy network/firewall/AV activity while also transcoding BD rips. What kind of CPU and memory usage will that demand?

That's what I mean by "streams" (don't cross them!)

https://support.plex.tv/hc/en-us/ar...kind-of-CPU-do-I-need-for-my-Server-computer-

Plex lists a Core 2 Duo @ 2.4GHz as sufficient for a single HD transcode, or a CPU with a PassMark score of 2000. The Xeons should score in the 10k+ range on PassMark.

A really good network router/firewall has a CPU that couldn't even handle one transcode. Unless you're doing something crazy like deep packet inspection on BitTorrent traffic, your server won't even notice that.
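
By that rule of thumb the capacity estimate is simple division (the scores here are example figures, not measurements):

Code:
# Plex's guideline: ~2000 PassMark per 1080p transcode
awk 'BEGIN { printf "~%d simultaneous transcodes\n", 10000 / 2000 }'
# => ~5 simultaneous transcodes for a ~10k-PassMark Xeon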
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
But I am trying hard to find the right hardware combination that is most cost-effective for what I want in the end, without letting the IO connectivity be the driving factor.

I know I'm beating a dead horse here. You REALLY need to get over your fear of surplus gear. It simply has no basis in reality.

http://www.ebay.com/itm/HP-DL380-G6...711527?hash=item43e9773127:g:IOsAAOSwstxVQrdA

$300. That's more than enough CPU, memory, and network connectivity for everything you want to do. Sell the 146GB drives off (you should be able to get $10-$20/ea.) and you're down to a total cost of ~$250. Buy a couple of small SSDs for your ZIL, then the largest 2.5" spinners you can get for your mass storage, and a USB thumb drive to boot off of.

Done.

Want to lower the power consumption? Switch to L-series processors. They're $30-$60/ea.

I would have more trust in older surplus servers than I would in new consumer level gear. Complete failures of servers aren't that common. I've got oodles of customers with servers of that era and older that just keep chugging away. It's rare to see any failures besides drives and RAID controller batteries (which is a non-outage failure). Since you're going to run ZFS, you don't even need the battery.

I've been running surplus gear at my house for many years now and have never had a single failure. I've got a pair of L5640s in my host, I've never come anywhere near maxing it out, and I've got 5+ VMs running at all times.
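
For the ZIL suggestion above, here's a minimal sketch of the resulting pool layout under ZFS (pool and device names are examples):

Code:
# 8-disk RAIDZ2 with a mirrored SSD SLOG holding the ZIL
# (da0..da7 = spinners, ada0/ada1 = small SSDs; names are examples)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    log mirror ada0 ada1
zpool status tank

Keep in mind a dedicated SLOG only accelerates synchronous writes (NFS, iSCSI, databases); plain SMB media serving mostly won't touch it.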
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
^Off-lease/refurb is the way to go if you want to save money. And with HP you at least get iLO for remote management; you can buy iLO Advanced licenses off eBay for $25. I might still stick with 3.5" drives for a home NAS, just to give priority to storage space over storage speed, but you get the idea. There are literally thousands of off-lease servers out there. Thousands upon thousands...

Look forward to seeing whatever you decide. Fun project.
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I know I'm beating a dead horse here. You REALLY need to get over your fear of surplus gear. It simply has no basis in reality.

http://www.ebay.com/itm/HP-DL380-G6...711527?hash=item43e9773127:g:IOsAAOSwstxVQrdA

$300. That's more than enough CPU, memory, and network connectivity for everything you want to do. Sell the 146GB drives off (you should be able to get $10-$20/ea.) and you're down to a total cost of ~$250. Buy a couple of small SSDs for your ZIL, then the largest 2.5" spinners you can get for your mass storage, and a USB thumb drive to boot off of.

Done.

Want to lower the power consumption? Switch to L-series processors. They're $30-$60/ea.

I would have more trust in older surplus servers than I would in new consumer level gear. Complete failures of servers aren't that common. I've got oodles of customers with servers of that era and older that just keep chugging away. It's rare to see any failures besides drives and RAID controller batteries (which is a non-outage failure). Since you're going to run ZFS, you don't even need the battery.

I've been running surplus gear at my house for many years now and have never had a single failure. I've got a pair of L5640s in my host, I've never come anywhere near maxing it out, and I've got 5+ VMs running at all times.

The horse has been sufficiently beaten to a bloody pulp.

I've not turned my back completely on a used server yet, but it is near the bottom of my list. Saving money is great, absolutely, but peace of mind is serenity. And it's not quite like comparing used server hardware versus new consumer hardware. I'd be buying new server hardware, so there's that.

edit:
Another question about buying that kind of hardware used:
These servers are going to come with SAS backplanes. From what I gather, I can install SATA 2/3 HDDs just fine, as the connector is compatible (and I know it wouldn't be possible to plug a SAS disk into a SATA backplane). Correct?
 
Last edited:

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
destrekor said:
True, it won't be anywhere near a common occurrence that I need to do max file transfers on multiple systems. I'd just have to make sure to carefully schedule any system backups and Time Machine activity to minimize overlap. More than anything, doing too much at once will more likely force the NAS to max out IOPS, and it's always preferable to minimize that.

You would do your backups during 'off' hours - so late at night when you're asleep, or during the day at work. And if you aren't writing dozens of gigabytes to each VM per day, your backups will literally take 30 seconds if utilizing CBT (Changed Block Tracking). You don't do full backups of each VM at every backup. Veeam, for example, will do one full backup the first time and then capture only subsequent changes on the remaining backups. They are called synthetic backups.

You are overestimating your usage, to be honest. Eight 7200rpm drives in RAID 6 doing 80% reads give about 300 IOPS. You won't approach 50 IOPS in a home NAS unless you are doing heavy database work.
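
As a sanity check on that figure, here's the usual back-of-envelope (per-disk IOPS and the RAID 6 write penalty are rules of thumb):

Code:
# ~75 random IOPS per 7200rpm disk, RAID 6 write penalty of 6, 80/20 read/write mix
awk 'BEGIN {
  raw = 8 * 75                          # 600 IOPS across the array
  printf "~%d usable IOPS\n", raw / (0.8 + 0.2 * 6)
}'
# => ~300 usable IOPS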
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
You would do your backups during 'off' hours - so late at night when you're asleep, or during the day at work. And if you aren't writing dozens of gigabytes to each VM per day, your backups will literally take 30 seconds if utilizing CBT (Changed Block Tracking). You don't do full backups of each VM at every backup. Veeam, for example, will do one full backup the first time and then capture only subsequent changes on the remaining backups. They are called synthetic backups.

You are overestimating your usage, to be honest. Eight 7200rpm drives in RAID 6 doing 80% reads give about 300 IOPS. You won't approach 50 IOPS in a home NAS unless you are doing heavy database work.

And even with the ~5400rpm spindle speed of the WD Reds, you are still right. As should be obvious, I have zero experience with storage arrays and their performance. So thanks for keeping me informed!

As for backups, well, I was more concerned with system backups from my desktop and laptop when I have it powered on and on the network. But still, in terms of NAS performance, those should be exceptionally light tasks.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
And even with the ~5400rpm spindle speed of the WD Reds, you are still right. As should be obvious, I have zero experience with storage arrays and their performance. So thanks for keeping me informed!

As for backups, well, I was more concerned with system backups from my desktop and laptop when I have it powered on and on the network. But still, in terms of NAS performance, those should be exceptionally light tasks.

Under 300 IOPS with 5400rpm drives, but still, you won't utilize what your system can produce in regards to CPU or storage speeds. Seriously, you will be surprised at how efficient ESXi is at scheduling CPU cores for work. You will want to spin up more VMs just to see your Xeon work harder than 10% on any given day.
 
Feb 25, 2011
16,823
1,493
126
edit:
Another question about buying that kind of hardware used:
These servers are going to come with SAS backplanes. From what I gather, I can install SATA 2/3 HDDs just fine, as the connector is compatible (and I know it wouldn't be possible to plug a SAS disk into a SATA backplane). Correct?

Yup.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Oh, and that HP DL380 G6... it has 2.5" bays, and large-disk support is abysmal.
I'd have to scrap the P410, as it looks like it's either poorly supported or a suboptimal controller under FreeNAS.

Not sure how to turn that around into what I want to do with it. And even with low-power parts, a dual-socket system is going to have a lot of wasted energy.

I'm not sure just yet what the goal is for it, but I see a PowerEdge R710 w/ 6x 3.5" bays and an E5640 in the closet of old equipment here at work. It wasn't there recently - I'm going to have to inquire about that - but I'm not sure it's what I'd want to work with either. I'd have to get a different controller instead of the PERC 6/i and accept a 6-bay limitation instead of 8 (technically better for RAIDZ2 anyway, but... dammit, 8 disks man! ), but it could work. Swap out the single E5640 in it for a low-power model like the L5630 (40W TDP).
But then, Westmere-EP at 2.13GHz base, 2.4GHz turbo... is that really going to be enough? I'd still have to get more RAM, as it's using 6x 1GB DIMMs right now.
I don't know what kind of idle power that R710 pulls. Considering it is a dual-socket system, I'm not sure if there's wasted wattage for all the circuitry supporting the 2nd socket and everything. And I hear the Intel 5520 chipset is not exactly a power sipper.

There are a lot of reasons why a low-power system engineered to my exact specifications, while it may cost more in the end for the same performance than an off-the-shelf system, will likely use a lot less energy and run quieter. Plus, I get to make sure there isn't anything sitting there as wasted space - all that extra silicon and circuitry = more leakage and wasted power if only using a single socket.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
Yeah, I get it. Putting together a system is part of the fun!

Here is the last system (new or otherwise) I'll show ya, so I don't keep beating a dead horse: http://www.thinkmate.com/system/rax-xs8-1230v3

Choose an appropriate controller that fills up all 8 bays and is compatible with FreeNAS, and away ya go!
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Yeah, I get it. Putting together a system is part of the fun!

Here is the last system (new or otherwise) I'll show ya, so I don't keep beating a dead horse: http://www.thinkmate.com/system/rax-xs8-1230v3

Choose an appropriate controller that fills up all 8 bays and is compatible with FreeNAS, and away ya go!

If you think you have any other systems that might just be appropriate, please, feel free to beat that horse something wicked!

I hope I don't come across as abrasive when reviewing some of these candidates; please know that I absolutely love the help y'all have provided me in this and other recent threads. It's absolutely terrific.
I can be a little OCD and stubborn, and I know very much what I want, so it can be hard to beat it into me that there is something better than what I may view as the holy grail of all grails.

Keep the suggestions coming!
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Alright - back to the discussion of low-power models.

Perhaps focusing on low TDP isn't actually beneficial in my case? Yes, I'd like to ensure the system stays cool as much of the time as possible and keeps the power draw as low as possible.

But all Intel chips will idle around the same, yes? Most of the time the server should be at or near idle until there's transcoding to do. And that's where I'd want to limit the power, because transcoding will take all the CPU you give it, unless you give it more than it needs. I guess I'm wondering: will Plex on FreeNAS intelligently use multiple vCPUs and spread out the processing to keep clocks low, and will ESXi's scheduler basically ramp the CPU up to turbo frequencies, or will it also try to intelligently balance the load and minimize CPU utilization?

I realize that at almost all other times the CPU should be kept to lower clocks and thus low wattage. I'm just worried about how often the system will be allowed to climb to high clock rates for extended periods of time. For tasks that aren't real-time transcoding, I'd expect any bursts to be brief, so the CPU could return to idle faster and potentially use less power, whereas a low-TDP CPU may be forced to stay above idle for a longer period of time to make the same calculations.

Would there be a way to throttle through software (or ESXi) - say, set the NAS VM to keep a lower frequency ceiling - to help minimize the worst-case scenario?
That way other VMs, which will have vastly different CPU requirements compared to real-time transcoding, should utilize the CPU's high-end potential sparingly, if at all.
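
For what it's worth, inside the FreeNAS/FreeBSD guest itself the knob would be powerd (the flags are real powerd options, the cap value is an example, and whether a guest can meaningfully influence host P-states under ESXi is its own question):

Code:
# /etc/rc.conf - adaptive frequency scaling with a capped maximum
powerd_enable="YES"
powerd_flags="-a hiadaptive -M 2400"   # -M caps max frequency in MHz (example value)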

The one benefit to the Xeon D platform, in that case, is that the PCH is included in the on-die power budget, so a lot of the peripheral power draw will be collectively lower than on non-SoC platforms. So I still don't know whether I'd achieve a lower total idle and be better off with the Xeon D platform compared to a more full-fledged motherboard with an external PCH and fewer controllers bundled into the CPU's TDP calculation.
 

Ranulf

Platinum Member
Jul 18, 2001
2,411
1,312
136
It looks like you're no longer looking at ITX and the DS380, but if you are, keep in mind that it is $150 plus an SFX power supply - I spent $250 for mine with PSU. It is also not screwless hot-swap. I've also found it is not that quiet.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
Overthinking it.

You set up the VM with the number of vCPUs desired in ESXi. When the VM needs to "ramp up", ESXi delivers (schedules) the CPU time required to fulfill whatever the OS is asking to process.

The VM has no idea it's virtualized. It just uses the amount of CPU power it 'sees/needs' until it's done with whatever task it's working on, and then goes back to idle or whatever baseline it was at before it needed the extra power. So if your baseline was idle, then your vCPU goes back to idle.

On another note, why FreeNAS? Why not just install a Linux flavor and use mdadm in RAID 6/10?
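
For reference, that mdadm route is only a couple of commands (device names are examples):

Code:
# 8-disk RAID 6 with mdadm + ext4 (sdb..sdi are example devices)
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0
mdadm --detail /dev/md0    # watch the initial sync progress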
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Overthinking it.

You set up the VM with the number of vCPUs desired in ESXi. When the VM needs to "ramp up", ESXi delivers (schedules) the CPU time required to fulfill whatever the OS is asking to process.

The VM has no idea it's virtualized. It just uses the amount of CPU power it 'sees/needs' until it's done with whatever task it's working on, and then goes back to idle or whatever baseline it was at before it needed the extra power. So if your baseline was idle, then your vCPU goes back to idle.

On another note, why FreeNAS? Why not just install a Linux flavor and use mdadm in RAID 6/10?

Mostly just because.

And I'd like the consistency between my OS X CLI experience and FreeBSD. And I've never touched FreeBSD before, so it's new waters to test.

And if I used Linux, it would certainly be ZFS on Linux. After doing my research, ZFS is definitely what I want to utilize. And I figure if I'm going that route, I might as well take advantage of FreeNAS. And I like what FreeNAS offers in terms of interface and platform.

edit:
And yes, I am definitely overthinking things. All the damn time.

I figured ESXi scheduling was as simple as it ought to be from that perspective. So that leads me back to the question I asked: is there a way to cap one VM's max CPU frequency? Would it be some power config done within FreeNAS/FreeBSD itself, in this case? I know it would only matter in certain instances where I feel too much power would be used just because it can be.

In the case of any tool that cuts commercials from recorded video, I wouldn't mind if that just got done quicker and got the CPU back to idle as fast as possible. But in the case of on-the-fly transcoding, that will take as much CPU power as it needs during the whole program.

But I guess if it isn't feasible, it won't be significant. It might just be an hour or two at a time, on occasion, that transcoding actually needs to happen, so a relatively brief surge in power usage won't be remotely significant in the long run. If it's as uncommon as I expect, maybe it's only a nickel or dime extra per month, and shouldn't stress internal components at all.

There's still the question of total platform power use, which I suspect will be a [relatively] significant difference when comparing a standard setup vs the integrated SoC approach.

Decisions, decisions. I've recently stumbled upon servermonkey.com, so I'll be poking around there from time to time. If there's significant savings potential, even if a CPU or something were to fail, it should at least be worth it. I won't be making purchases for what I expect to be a few months, but I'm just trying to compile a fairly finalized component list with relatively accurate estimates so I can put a final budget goal in YNAB for the project. I like this newfound capability in budgeting - I've always been absolutely terrible with money, but YNAB is brilliant and already helping significantly.
And maybe with a good parts list, I might be able to watch out for any crazy deals that make me buy an individual component early. Once I narrow down which HDDs I want to go with, I'll definitely be watching for that, as I'll spread the purchases across vendors and possibly across a wider time range.
 
Last edited:

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
ZFS isn't really designed for 'production use' on Linux. At least not that I'm aware of yet. Ext4 and XFS are what you'd want for Linux if you went that route. I use Ext4 for just about everything. I've never quite understood the drooling over ZFS to be honest.

I'm getting ready to convert my home NAS box from Windows with Unraid to Linux with Samba/mdadm. Should make a fun weekend project when I find the time.

To my knowledge there is no way to 'cap' the clock speed within ESXi for a VM.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
The VM has no idea it's virtualized. It just uses the amount of CPU power it 'sees/needs' until it's done with whatever task it's working on, and then goes back to idle or whatever baseline it was at before it needed the extra power. So if your baseline was idle, then your vCPU goes back to idle.

That's not strictly true. Newer OSes do realize they are installed in a virtual machine.

And I'd like the consistency between my OS X CLI experience and FreeBSD. And I've never touched FreeBSD before, so it's new waters to test.

And if I used Linux, it would certainly be ZFS on Linux. After doing my research, ZFS is definitely what I want to utilize. And I figure if I'm going that route, I might as well take advantage of FreeNAS. And I like what FreeNAS offers in terms of interface and platform.

The only reason I stopped using FreeNAS is that I wanted Fibre Channel support. I had zero performance or reliability issues after several years of usage.

I've never quite understood the drooling over ZFS to be honest.

Because it's extremely fast and reliable when properly implemented?



destrekor: I was just using that DL380 as an example. Most HP/Dell/IBM boxes of that era are in the same price range. As you discovered, the Dell R710 is available with 3.5" bays, and HP has offerings as well (DL320 and DL180, for example). They are all going to be very close in performance and power usage, assuming similar specs. I prefer HPs because you get better remote management, and personally I find them to be better-designed systems. I've got a thread about home lab servers, as I've dealt with a LOT of them. Currently in my rack I've got 3 HPs and 2 Supermicros. I've got a Dell for sale on Craigslist and I already sold off some IBMs.

I think it was in one of your other threads where I talked about power consumption. I get the desire for low power draw, I really do. But there comes a point (IMO) where you are paying so much, either in money on the new build or in performance, that it's not worth it anymore. I have 3 identical hosts at the house; I currently only have one on 24x7 until I can replace my power black hole of a switch.

DL380 G6, 2x L5640 (6-core, HT, 2.26GHz, 60W), 64GB RAM (4x 16GB), 2x 128GB SSDs, 1x USB 3.0 boot thumb drive, 1x quad-port GbE NIC, 1x quad-port 4Gb FC HBA, 1x PCIe USB3 card. Normal-usage active power draw: 160W.

That's with 5-6 active VMs (mostly Windows-based) including a domain controller, vCenter server, vCOps server, PowerChute appliance, Sophos UTM, and a file/media server. I realize the initial response is that 160W is a lot of power. To an extent, it is. Honestly, I could take out the second L5640 and the PCIe NIC to lower it a bit if I really wanted to.

But honestly, it costs me less per year in electricity than I pay for Netflix, and I get just as much usage out of it. No, I don't need that much power or all those VMs. I don't NEED Netflix or ultimate-tier internet either. But I LIKE having all of them, and personally I'm happy to pay for the extra electricity.
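
For scale (the electricity rate is an assumption; plug in your own), a 160W 24/7 draw works out to roughly:

Code:
# 160W continuous draw -> kWh per year and cost at an assumed $0.10/kWh
awk 'BEGIN {
  kwh = 0.160 * 24 * 365
  printf "%.0f kWh/yr, ~$%.0f/yr at $0.10/kWh\n", kwh, kwh * 0.10
}'
# => 1402 kWh/yr, ~$140/yr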

It would take me something like 10 years to make up the cost difference between my server and a new one in electricity costs. I've got such an excessive amount of spare capacity available that I could spin up another dozen VMs without the slightest concern.

To that end, however, I will mention that you will be shocked by the power draw of enterprise-level switches, especially Cisco. The switch in my rack uses more power than the servers. Yeah, let that sink in for a minute. Keep that in mind before buying a surplus switch. Most of the ones with fans are also very loud - again, louder than the servers.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
What is that dd supposed to show? I can do the same thing using a RAID 10 array of SSDs on Ext4.

Don't get me wrong, ZFS is great. But for uses like a home NAS (he is talking about buying 5400rpm spinning rust for this build), the cult-like following that surrounds it within FreeNAS seems a bit silly, actually. There is nothing wrong with Ext4, XFS, or Btrfs for a slow, plodding NAS device in home use. I think people hear FreeNAS and think "oh, I'll learn FreeBSD at the same time." Well, if they really wanted to learn FreeBSD, they wouldn't actually use FreeNAS.

Anyway, I'm not saying it's wrong to use it. It works just fine, obviously, and has been around for ages and ages. I just question its usefulness for a NAS in non-production use when other options are far easier to implement (mdadm).
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
What is that dd supposed to show? I can do the same thing using a RAID 10 array of SSDs on Ext4.

Don't get me wrong, ZFS is great. But for uses like a home NAS (he is talking about buying 5400rpm spinning rust for this build), the cult-like following that surrounds it within FreeNAS seems a bit silly, actually. There is nothing wrong with Ext4, XFS, or Btrfs for a slow, plodding NAS device in home use. I think people hear FreeNAS and think "oh, I'll learn FreeBSD at the same time." Well, if they really wanted to learn FreeBSD, they wouldn't actually use FreeNAS.

Anyway, I'm not saying it's wrong to use it. It works just fine, obviously, and has been around for ages and ages. I just question its usefulness for a NAS in non-production use when other options are far easier to implement (mdadm).

Easier could be one thing, but what about relative safety? What happens in mdadm when it encounters a URE during a rebuild? And in other areas, ZFS goes a long way to ensure data integrity.
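
The integrity story behind that is checksums plus scrubs - a minimal sketch ("tank" is an example pool name):

Code:
# Every ZFS block carries a checksum; a scrub reads the whole pool,
# verifies them, and repairs bad blocks from RAIDZ/mirror redundancy
zpool scrub tank
zpool status -v tank   # the CKSUM column counts corruption caught and repaired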
 