YANB (Yet Another NAS Build) - this thread is not like the others, I hope!


XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Two things to keep in mind about the DL180 specifically.

1) It's only got Lights Out 100i rather than the Integrated Lights Out (iLO) you'd get on a DL380 (or similar). Lights Out 100i is a traditional IPMI implementation with fewer features. If you want to be able to manage the server remotely, that's a pretty big deal.

2) It's only got 2 onboard NIC ports compared to the 4 you would get on a DL380 (or similar).
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
As I'm keeping my options open, I've taken to looking into chassis again, and came across this one:
http://www.ebay.com/itm/Supermicro-...889152?hash=item33b0c2cf40:g:DFsAAOSwX~dWqoPS

Is 600W (Platinum, mind you) still going to be a bad deal? Even at a 10% load (60W) it has an 85% efficiency rating. Pretty damn stellar, really.

I also can't tell if that model has a power distribution board. It might not be a bad idea down the road to add redundant power by simply buying a second module for the system. But something tells me the Supermicro models that ship with one PSU can only physically support one PSU.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I'd try to stick with the ones that come with redundant power supplies standard just to be safe.

http://www.ebay.com/itm/Supermicro-...060407?hash=item5d5251f137:g:UyMAAOSw7ThUc50x

I saw one or two with 700W redundant supplies, but from what I gather, those PSUs aren't as efficient as the other models. Supermicro has quite a few 80 Plus Platinum and Gold PSUs, but they list the 700W model only as "efficient," with no qualifier for even Bronze or Silver, let alone better.

That worries me for a server that should be under 100W at almost all times, closer to 50W at idle. At low efficiency, the wall draw on that 50W load climbs fast.
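
For what it's worth, the efficiency math works out like this (wall draw = DC load / efficiency; the efficiency figures below are my own illustrative numbers, not the rating of any specific Supermicro unit):

Code:
# wall draw at a 50 W DC load for a few efficiency ratings
load=50
for eff in 70 85 90; do
    echo "${eff}% efficient: $(( load * 100 / eff )) W at the wall"
done
# 70% efficient: 71 W at the wall
# 85% efficient: 58 W at the wall
# 90% efficient: 55 W at the wall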

If the idea is to get a model like that and replace the PSU modules with lower-power ones, that's a thought, but damn that kills the budget. I figure it would be better, in that case, to get a Chenbro chassis and buy a redundant PSU. Definitely cheaper.
But I'd love to have Supermicro all around... I hear terrific things, and from all accounts their PSUs appear to be amazing.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
So I think I'm reconsidering the disk array size and future expandability.

I was thinking 8x 3TB HGST NAS disks, but now I think it may be better if I start with 6x 4TB HGST disks in RAIDZ2, and get a 2U chassis with 12x 3.5" SAS/SATA3 bays. Then, when I feel a need to expand storage space, I can just get 6 more 4TB disks and add those to the pool.

I guess the question is then, can I do that? The idea I had in mind would be to then turn it into a RAID60-equivalent, with 2 RAIDZ2 arrays in the pool.

Can you convert the pool like that without data loss?
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
destrekor said:
So I think I'm reconsidering the disk array size and future expandability.

I was thinking 8x 3TB HGST NAS disks, but now I think it may be better if I start with 6x 4TB HGST disks in RAIDZ2, and get a 2U chassis with 12x 3.5" SAS/SATA3 bays. Then, when I feel a need to expand storage space, I can just get 6 more 4TB disks and add those to the pool.

I guess the question is then, can I do that? The idea I had in mind would be to then turn it into a RAID60-equivalent, with 2 RAIDZ2 arrays in the pool.

Can you convert the pool like that without data loss?

No. You can always add a new array to your storage pool to make it bigger but you can't alter any existing arrays in any way.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
smitbret said:
No. You can always add a new array to your storage pool to make it bigger but you can't alter any existing arrays in any way.

I thought so, but wasn't sure.

So what would the implication be for the data then? The pool would expand, but there wouldn't be any performance improvement then, right?

And with both arrays being RAIDZ2, the pool integrity would remain so long as no more than two disks die in any given array? But the whole pool and all data would be toast if a single array went belly up, right? Or would the data on the other array remain accessible/recoverable?

And since I would only be expanding the pool, could the second array be of a different configuration and size? Different number of disks, different disk size, even a different array configuration entirely (e.g. a RAID10-equivalent as opposed to RAIDZ2)?
 

smitbret

Diamond Member
Jul 27, 2006
3,389
23
81
destrekor said:
I thought so, but wasn't sure.

So what would the implication be for the data then? The pool would expand, but there wouldn't be any performance improvement then, right?

Correct, the performance would be limited by the actual array that stores the data.

destrekor said:
And with both arrays being RAIDZ2, the pool integrity would remain so long as no more than two disks die in any given array? But the whole pool and all data would be toast if a single array went belly up, right? Or would the data on the other array remain accessible/recoverable?

If three disks went down in a RAIDZ2 then you would lose the data on that particular array. You would not lose the data on the other array(s) in the pool. The pool size would just shrink.

destrekor said:
And since I would only be expanding the pool, could the second array be of a different configuration and size? Different number of disks, different disk size, even a different array configuration entirely (e.g. a RAID10-equivalent as opposed to RAIDZ2)?

I believe that you can mix different array types in a pool. You just can't do it with nested RAID (50, 60, etc.).
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
destrekor said:
So I think I'm reconsidering the disk array size and future expandability.

I was thinking 8x 3TB HGST NAS disks, but now I think it may be better if I start with 6x 4TB HGST disks in RAIDZ2, and get a 2U chassis with 12x 3.5" SAS/SATA3 bays. Then, when I feel a need to expand storage space, I can just get 6 more 4TB disks and add those to the pool.

I guess the question is then, can I do that? The idea I had in mind would be to then turn it into a RAID60-equivalent, with 2 RAIDZ2 arrays in the pool.

Can you convert the pool like that without data loss?

smitbret said:
No. You can always add a new array to your storage pool to make it bigger but you can't alter any existing arrays in any way.

I think there may be a bit of a miscommunication here.

If you have a zpool consisting of a single raidz2 vdev, you can ALWAYS add an additional vdev to the pool. For example, you could certainly add a 2nd raidz2 vdev, and from then on, zfs would stripe data across both vdevs (biasing itself towards the one that is mostly free space)

With the notable exception of a single-disk vdev being able to be converted to a mirror, vdevs are effectively permanent, but you can always add more vdevs to a pool. This is both a nifty feature, and an opportunity to carelessly blow your foot off. A non-trivial number of people have done "zpool add" and created a new single disk vdev when they meant to do a "zpool attach" to replace a dead drive and gotten themselves in hot water.
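
To make that footgun concrete, here's a quick sketch (pool name "tank" and FreeBSD-style da* device names are just placeholders):

Code:
# growing the pool the intended way: add a second 6-disk raidz2 vdev;
# from then on writes stripe across both vdevs
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# the footgun: this adds da6 as a NEW single-disk vdev with no redundancy,
# and it cannot be removed again; ZFS only stops you with a
# mismatched-replication warning that -f overrides
zpool add tank da6

# what was usually meant: attach a disk to an existing one, turning a
# single-disk vdev into a mirror (or growing an existing mirror)
zpool attach tank da0 da6

# tip: "zpool add -n" is a dry run that prints the resulting layout
zpool add -n tank raidz2 da6 da7 da8 da9 da10 da11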

smitbret said:
If three disks went down in a RAIDZ2 then you would lose the data on that particular array. You would not lose the data on the other array(s) in the pool. The pool size would just shrink.

I think there may be a miscommunication here also. If you have a zpool that has a raidz2 vdev as one of its vdevs, and you lose 3 disks in that vdev, you've lost not only the vdev but the pool. Redundancy in zpools comes in at the level of the vdevs. Losing ANY vdev loses the pool because all writes are striped across all vdevs.
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Essence_of_War said:
I think there may be a bit of a miscommunication here.

If you have a zpool consisting of a single raidz2 vdev, you can ALWAYS add an additional vdev to the pool. For example, you could certainly add a 2nd raidz2 vdev, and from then on, zfs would stripe data across both vdevs (biasing itself towards the one that is mostly free space)

With the notable exception of a single-disk vdev being able to be converted to a mirror, vdevs are effectively permanent, but you can always add more vdevs to a pool. This is both a nifty feature, and an opportunity to carelessly blow your foot off. A non-trivial number of people have done "zpool add" and created a new single disk vdev when they meant to do a "zpool attach" to replace a dead drive and gotten themselves in hot water.



Essence_of_War said:
I think there may be a miscommunication here also. If you have a zpool that has a raidz2 vdev as one of its vdevs, and you lose 3 disks in that vdev, you've lost not only the vdev but the pool. Redundancy in zpools comes in at the level of the vdevs. Losing ANY vdev loses the pool because all writes are striped across all vdevs.

So does FreeNAS/ZFS automatically start striping across arrays if you add another array to the pool?

If that is the case, do the arrays need to match in size and configuration, or is that only a requirement if you set it up to mirror the vdevs and not just stripe?
In other words, can I add a 24TiB raidz2 vdev to the zpool when the first raidz2 vdev is only 16TiB?


I thought I remembered reading that it would stripe across vdevs when they were added, then got confused when it sounded like that didn't happen at all.


And when you add a vdev like that, does it simply start striping at that point, so some old data remains on the first vdev un-striped? I imagine it's all lost either way if the vdev dies; I'm just curious how it adapts to striping across vdevs when originally there was only one.
And does it do as suggested above, where it has variable/biased stripes depending on available space on mixed vdevs?
 
Last edited:

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
destrekor said:
So does FreeNAS/ZFS automatically start striping across arrays if you add another array to the pool?

I don't want to be pedantic, but I also don't want to misinform you, and it can get quite confusing if we don't use the correct terminology. When you say "array", you mean "vdev", correct? ZFS always dynamically stripes data writes across all vdevs that make up a pool.

destrekor said:
If that is the case, do the arrays need to match in size and configuration, or is that only a requirement if you set it up to mirror the vdevs and not just stripe?
In other words, can I add a 24TiB raidz2 vdev to the zpool when the first raidz2 vdev is only 16TiB?

Again, I'm assuming that by "arrays" you mean "vdevs". ZFS will not stop you from creating pools with all kinds of weird vdev configurations. You can have a pool of 3 vdevs with one vdev a pair of mirrored disks, one vdev a 4 disk raidz-1, and one vdev with no redundancy whatsoever. I wouldn't ever use a pool configured in such a way, but ZFS will not stop you.
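
Just to illustrate how permissive it is, that Frankenstein pool is a one-liner (placeholder device names; ZFS will grumble about mismatched replication levels, and -f overrides it):

Code:
# one mirrored pair + one 4-disk raidz1 + one bare disk, all in one pool;
# lose da6 and the whole pool is gone, so this is strictly a demonstration
zpool create -f tank mirror da0 da1 raidz1 da2 da3 da4 da5 da6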

You can certainly do what you describe: start with a single raidz2 vdev and then add a second raidz2 vdev. Your total pool capacity will be the sum of the capacities of the vdevs, and you'll likely even see a performance increase because writes will be striped across both vdevs (although maybe not a huge increase, because the 2nd vdev will be empty, the first will be somewhat full, and zfs will try to balance them in the long term).
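
The capacity math for the plan in question, roughly (raw TB, before ZFS overhead and TB/TiB conversion):

Code:
# 6-disk raidz2: 2 disks of parity, 4 disks of data per vdev
disks_per_vdev=6; parity=2; tb_per_disk=4; vdevs=2
echo $(( (disks_per_vdev - parity) * tb_per_disk * vdevs ))   # 32 TB usable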

This sort of thing is a general feature of ZFS. It has very few limits, and the limits it does have are closer to infinity than to anything you can actually do with commodity hardware on a personal-sized budget. There is a whole trove of "just because you can, should you?" questions ZFS lets you pose to yourself (should I make a pool of 10,000 disks???), and it will not stop you from executing them, even when the answer is typically "no, I should not, that is not what I actually need, and worse, it is dumb".

destrekor said:
I thought I remembered reading that it would stripe across vdevs when they were added, then got confused when it sounded like that didn't happen at all.

And when you add a vdev like that, does it simply start striping at that point, so some old data remains on the first vdev un-striped? I imagine it's all lost either way if the vdev dies; I'm just curious how it adapts to striping across vdevs when originally there was only one.
And does it do as suggested above, where it has variable/biased stripes depending on available space on mixed vdevs?

That's the idea. Data that has already been committed and not written to again will just sit on disk unstriped. Future writes will be striped. Stripe sizes should favor vdevs with more free space, but I'm not familiar with that point in close detail.
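
If you want to watch how (im)balanced the vdevs actually are after adding one, these two are handy (pool name is a placeholder; output omitted since it varies):

Code:
# per-vdev capacity, allocation, and free space
zpool list -v tank
# live per-vdev read/write activity, refreshed every 5 seconds
zpool iostat -v tank 5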
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Essence_of_War said:
I don't want to be pedantic, but I also don't want to misinform you, and it can get quite confusing if we don't use the correct terminology. When you say "array", you mean "vdev", correct? ZFS always dynamically stripes data writes across all vdevs that make up a pool.

Again, I'm assuming that by "arrays" you mean "vdevs". ZFS will not stop you from creating pools with all kinds of weird vdev configurations. You can have a pool of 3 vdevs with one vdev a pair of mirrored disks, one vdev a 4 disk raidz-1, and one vdev with no redundancy whatsoever. I wouldn't ever use a pool configured in such a way, but ZFS will not stop you.

You can certainly do what you describe: start with a single raidz2 vdev and then add a second raidz2 vdev. Your total pool capacity will be the sum of the capacities of the vdevs, and you'll likely even see a performance increase because writes will be striped across both vdevs (although maybe not a huge increase, because the 2nd vdev will be empty, the first will be somewhat full, and zfs will try to balance them in the long term).

This sort of thing is a general feature of ZFS. It has very few limits, and the limits it does have are closer to infinity than to anything you can actually do with commodity hardware on a personal-sized budget. There is a whole trove of "just because you can, should you?" questions ZFS lets you pose to yourself (should I make a pool of 10,000 disks???), and it will not stop you from executing them, even when the answer is typically "no, I should not, that is not what I actually need, and worse, it is dumb".

That's the idea. Data that has already been committed and not written to again will just sit on disk unstriped. Future writes will be striped. Stripe sizes should favor vdevs with more free space, but I'm not familiar with that point in close detail.

Yes, anywhere I wrote arrays, I meant vdevs. My bad, I'm still getting up to speed with the terminology.

So ideally, the earlier I add a second vdev to the zpool, the better performance will be?

I definitely will only utilize a 6-disk vdev at first, maybe 3TB disks to save some cash so I can get 6 more sooner rather than later for a second vdev. It'll be nice to know that I can at that point go with perhaps 6TB disks in that second vdev, and over time I can then take individual 3TB disks offline and swap in 6TB disks on the first vdev so I can eventually add that extra capacity as well.

Not sure yet, I might still start with 6x 4TB disks in the first vdev and just be happy with what I get when I eventually add a second vdev, and probably just stick to 4TB disks in that one as well. It may just be better long-term to plan for that eventual 32TB zpool, and if/when I outgrow that, perhaps add a DAS box and likely upgrade the HBA in the server at that time, or add another if I can.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
destrekor said:
Yes, anywhere I wrote arrays, I meant vdevs. My bad, I'm still getting up to speed with the terminology.

No worries, I know it reads like I'm being pedantic, but people often make assumptions that ZFS is just like md-raid or hardware RAID or whatever, and sometimes they're right, and when they're wrong it's often because there is a terminology problem.

Another thing to consider: if you use smaller vdevs, like two-disk mirrors, you can actually expand capacity by replacing the drives one at a time and rebuilding the mirror. Compared to raidz1/2/3 resilvering, mirrors typically resilver much faster. You could still do this with larger vdevs, but it will take a lot longer to get the extra capacity. So, for example, if you have a zpool of two vdevs, each a mirror of 2x 1 TB drives, your total capacity is 2 TB. You could upgrade to 12 TB by replacing the 1 TB disks with 6 TB disks one at a time in each mirror, resilvering after each swap.
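
A sketch of that one-at-a-time swap (hypothetical pool and device names; let each resilver finish before starting the next):

Code:
# let vdevs grow automatically once every disk in them is bigger
zpool set autoexpand=on tank
# swap the first 1 TB member of a mirror for a 6 TB disk and resilver
zpool replace tank da0 da4
zpool status tank    # wait for the resilver to complete
# then the second member of the same mirror
zpool replace tank da1 da5
# once both members are 6 TB, that mirror's usable size jumps from 1 TB to 6 TB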

What are your current needs for storage and how quickly do you anticipate growing?
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Will a single 8 lane SAS2 HBA be perfectly sufficient for 12 or even 16 HDDs?

I'm looking into two possibilities: either a chassis that can support 12 or 16 disks, or a first chassis with only 6 or 8 bays, with a DAS chassis adding another 6 or 8 bays down the road. I haven't decided yet between 6 and 8 disks, but 8-bay chassis are plentiful and affordable. With larger ones, I'm having trouble finding good options that support quality redundant PSUs without already coming with inefficient or overkill units. I don't think 800-900W, especially if they aren't even "80 Plus", is going to do me any favors.

That said, I'm thinking that perhaps one 8-lane HBA may actually be just fine: I could run a single 4-lane cable from the HBA to an expander backplane to support 6 or 8 disks, then take the other cable to an adapter that slots into an expansion slot and converts the internal cable to an external interface, run an appropriate external cable to a DAS, then reverse that and run it to another 6- or 8-bay expander backplane.

Is that feasible? Or is it a terrible idea and I should just invest in a 12 or 16 bay chassis up front?

I'm just trying to plan out the right motherboard, and I keep coming back to the mITX Xeon D-1528 offering from Supermicro. It has a total of 4 NICs: two gigabit, two 10GbE. The 10GbE ports, even running at just gigabit, are likely to use a little more power, but not significantly, and if I do upgrade to a switch that supports 10GbE later on, then that's even better. The reason I have to accept that is that these mITX boards offer only 1 PCIe slot, so that goes to the HBA and there's no more expansion. I'd like to just get a basic used sub-$100 LSI-based HBA like the IBM M1015 or an LSI-branded one, and be done with it.

I'd much rather have a FlexATX board, and the one I'm considering, which will cost maybe $200-300 more, has a D-1537 and an integrated 16-lane LSI 2116. The benefit of that board, aside from the integrated 16-lane SAS2 part, is that it has two PCIe x8 slots AND its M.2 PCIe x4 port doesn't put the SSD over a hot Aspeed IPMI controller like on the mITX boards, and it supports the full-length 22110 form factor, though that's not a huge deal.
It only has 2 NICs, as the 2x 10Gb connectors are SFP+. So I'd definitely want/need an Intel 2- or 4-port gigabit Ethernet expansion card.
And that 8-core part has a 1.7GHz/2.3GHz clock speed (base/turbo), whereas the D-1528 is 1.9GHz/2.5GHz.
The D-1541 is better than both, though with a 45W TDP vs the 35W TDP of the other two (yes yes, that's not everything... moving on). It's about the same price as the D-1537 part, but Supermicro hasn't announced any FlexATX boards with that part yet.

So, as you see, I'm trying to figure out the right combo while minimizing how many extra parts I need to add.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Keep in mind you are talking about spindle drives, not SSDs. I'm running a Supermicro SC836TQ-R710B with an M1015 for the primary SAN and I have zero performance issues.

In addition, since you're going ZFS, for your planned usage just make sure you have enough RAM to handle whatever you're writing to it, and it's largely moot how fast or slow your drives are anyway. You're not running a high-I/O server that needs sustained speed, so whenever you do a file copy to the server it's going to hit RAM first anyway.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
XavierMace said:
Keep in mind you are talking about spindle drives, not SSDs. I'm running a Supermicro SC836TQ-R710B with an M1015 for the primary SAN and I have zero performance issues.

In addition, since you're going ZFS, for your planned usage just make sure you have enough RAM to handle whatever you're writing to it, and it's largely moot how fast or slow your drives are anyway. You're not running a high-I/O server that needs sustained speed, so whenever you do a file copy to the server it's going to hit RAM first anyway.

True. And I forgot to include the numbers that I used to convince myself but I just wanted to get a second opinion.

8-lane SAS2 = 48 Gbps total bandwidth = 6 GB/s.
Split that up between 16 disks and that's still roughly 375 MB/s each. Roughly SATA2 speed. Plenty for any spinner.
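
Same math in shell form, this time knocking off SAS2's 8b/10b encoding overhead (my own back-of-the-envelope, not a benchmark):

Code:
links=8; gbps_per_link=6; disks=16
# 8b/10b line coding means only 8/10 of the raw 48 Gbps is payload
echo $(( links * gbps_per_link * 1000 * 8 / 10 / 8 / disks ))   # ~300 MB/s per disk
# still roughly double what a 7200 rpm spinner can sustain sequentially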
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
Will a single 8 lane SAS2 HBA be perfectly sufficient for 12 or even 16 HDDs?
It sounds like you have enough raw bandwidth with 8 PCIe lanes that you will be happy with benchmarks.

The real question is to make sure that the HBA has enough SAS links/ports. You probably want something like an LSI with "16i" in the title. It will have 4 mini-SAS connectors (SFF-8087 on modern ones) on the HBA; each host connector carries 4 SAS links and can be connected to four drives via a 4-way forward breakout cable.

I think this is an older model, but here is an example of what I'm talking about:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118142
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Essence_of_War said:
It sounds like you have enough raw bandwidth with 8 PCIe lanes that you will be happy with benchmarks.

The real question is to make sure that the HBA has enough SAS links/ports. You probably want something like an LSI with "16i" in the title. It will have 4 mini-SAS connectors (SFF-8087 on modern ones) on the HBA; each host connector carries 4 SAS links and can be connected to four drives via a 4-way forward breakout cable.

Sorry, when I said 8-lane I meant 8 SAS2 links.

Like an IBM M1015: it has 2 mini-SAS ports, with 8 SAS links total.

Which is where the bandwidth calculation comes into play when using an expander. 8 SAS2 links sound like they should have zero problem driving 16 disks through an expander, at least for spinners. Obviously SSDs are a different story.
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
Where do you stand with this project, destrekor? I'm in the process of speccing out a new setup (probably a vSAN cluster based on the Xeon D platform) and just wondered if you've purchased/configured any of your setup yet?
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
JimPhreak said:
Where do you stand with this project, destrekor? I'm in the process of speccing out a new setup (probably a vSAN cluster based on the Xeon D platform) and just wondered if you've purchased/configured any of your setup yet?

Nope, it's going to be a while. I'm currently thinking this will happen during the summer.
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
destrekor said:
Nope, it's going to be a while. I'm currently thinking this will happen during the summer.

Ahhh I see. Well I'll have my build done probably by the end of next month so I guess I'll post my results. I plan to make a build log (on a different forum) when I get all my hardware in so I'll let you know when that happens. You don't even wanna know how much money I'm spending haha.
 

Gray05

Junior Member
Dec 27, 2015
8
0
0
After reading through most of this thread I gotta ask: what about the noise? Wouldn't almost all of these used servers from eBay be way louder than what could be built?

I'm researching replacing my Synology DS415+ and Supermicro C2758 systems with one box. Power efficiency is something I get overly stuck on (kinda like the OP), partly because I don't just look at what the Kill A Watt tells me: I agonize over my A/C working harder because of an unnecessarily warm server. However, what I care about more than a few dollars' difference on my power bill is the noise. I already have a PoE switch that I need to replace because it's too loud. I couldn't bear to build a server that I quickly end up hating because I can hear it when I'm lying in bed at night.

It's my impression that the Xeon Ds will be WAY more suited to a powerful yet quiet system. A 35W TDP Xeon D seems like it would be oh so quiet in addition to cool.

I'm even tempted to wait for the Denverton chips so I can see what they offer. I'd prefer a higher-performance architecture than Atom, but for the power envelope and price, the C2758 I have now is pretty good. I'd expect the successor to be pretty good too.
 
Last edited:

JimPhreak

Senior member
Dec 31, 2012
374
0
76
I got my new server rack in yesterday. Still waiting on some other parts next week but I should have things up and running in a few weeks.

Here is my Xeon D build log along with pics for anyone interested in doing something similar.
 