"The BIG Networking Upgrade" - my home LAN, Strategies? Ideas? (Purchased 8-port 2.5GbE-T switches, cards)


abufrejoval

Member
Jun 24, 2017
39
5
41
Since my file servers aren't just for file services but serve multiple duties (desktop and Steam remote play among them), they run on Windows 2016R2. And I've never been able to get 10Gbit data rates out of them, even with FusionIO PCIe x8 SSDs on both ends, capable of what NVMe drives do these days in a slightly smaller form factor.

Even on iperf3 I don't get quite the numbers I get with CentOS on those very same boxes.

And in any case I get 600MB/s only with large files like complete VMs, not when backing up documents or source code. In those cases network speed doesn't matter at all; latency might. Those boxes run a single SATA SSD for the OS, a 4x 1TB 860 Evo JBOD or a FusionIO 2.4TB SSD cache for Steam (around 1.5GB/s), as well as 8x 4TB 2.5" HDD RAID6 (primary) and 8x 2TB 3.5" RAID5 (backup) on LSI 9260 controllers, capable of 700-800MB/s sequential speeds in this setup.
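For reference, here's the back-of-the-envelope math on what each link speed can deliver at best, in a few lines of Python (the 0.94 overhead factor is just a rough guess for framing/TCP, not something I measured):

    # Back-of-the-envelope: what each link speed can theoretically deliver.
    # The overhead factor is a rough allowance for framing/TCP, not a measurement.
    overhead = 0.94
    for name, gbit in (("1GbE", 1), ("2.5GbE", 2.5), ("10GbE", 10)):
        print(f"{name}: ~{gbit * 1000 / 8 * overhead:.0f} MB/s usable at best")
    # 10GbE comes out around 1175 MB/s, so 600 MB/s is only about half the wire.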

One rather unexpected result: My kids tend to have the same Steam games and some of them like ARK Survival Evolved are quite huge, in excess of 100GB with thousands and thousands of files. So rather than having everyone put their own copy on local SSDs I thought it a good idea to put the game files on a network share, with an SSD JBOD underneath to ensure speedy delivery.

As it turns out, it took longer to load ARK from an SSD JBOD capable of transferring 1.5GB/s locally over the 10Gbit network than to load that very same game from a single old 3.5" 2TB HDD.

The Windows overhead for opening a file over the network evidently ate every SSD or 10Gbit speed advantage, and it's somewhat peculiar to ARK. Other games with far fewer (but bigger) files did much better, but weren't nearly as popular.

But even on local storage, ARK load times are not quite what you'd expect, given how beefy my systems generally are (and how low the usage actually seemed).

And then I did one of my regular Steam-on-Linux experiments and was almost shocked at how much faster ARK loaded on Linux from a lowly SATA SSD! It seems the general overhead for opening a file on Linux is much lower than on Windows, locally and even more so over the network.
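If you want to put a number on that per-file overhead yourself, a rough Python sketch like this works; the two paths are just placeholders for a local copy of the game and the copy on the share:

    # Rough sketch: time how much of a game's load is per-file open overhead vs.
    # raw transfer. Both paths below are placeholders; point them at a folder
    # with lots of small files, once locally and once via the SMB share.
    import os
    import time

    LOCAL_DIR = r"D:\SteamLibrary\steamapps\common\ARK"   # placeholder
    SHARE_DIR = r"\\server\games\steamapps\common\ARK"    # placeholder

    def walk_and_read(root):
        files = bytes_read = 0
        start = time.perf_counter()
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                with open(os.path.join(dirpath, name), "rb") as f:
                    while chunk := f.read(1 << 20):        # read in 1 MiB chunks
                        bytes_read += len(chunk)
                files += 1
        return files, bytes_read, time.perf_counter() - start

    for label, root in (("local", LOCAL_DIR), ("share", SHARE_DIR)):
        files, nbytes, secs = walk_and_read(root)
        print(f"{label}: {files} files, {nbytes / 1e6:.0f} MB in {secs:.1f}s "
              f"({secs / max(files, 1) * 1000:.2f} ms/file, {nbytes / secs / 1e6:.0f} MB/s)")

With thousands of tiny files it's the ms/file column that blows up over the network, not the MB/s.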

Of course, the quality of the graphics on Steam for Linux was much lower for ARK, so it didn't really make the cut. Again, that varies a lot with the game engine; I have seen others that looked pretty much the same on both platforms.

10Gbit vs. 1Gbit almost never means 10x speed. You'll just have a plethora of new bottlenecks opening up to you: Enjoy, if you can!
 
Reactions: aigomorla

SamirD

Golden Member
Jun 12, 2019
1,489
276
126
www.huntsvillecarscene.com
Terrific insight into the world of 10Gb! Thank you for sharing!

I can't believe you couldn't max it out with the FusionIOs on both ends--those are beasts!

Makes me wonder what 40Gb and 100Gb really do...
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
It sounds like you only need high speed to the NAS/server, so I would address that instead of upgrading the whole infrastructure. I bought 2 used Intel 10GbE NICs off eBay for $35 each and ran a direct connection from my PC to the server; now my backups/transfers go as fast as the array can handle most of the time. Cost me around $80 including the cable.
 
Reactions: SamirD

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,879
3,230
126
You can always go the route Rifter states.
Just connect one PC directly to the other via SFP+ DAC cable and skip the switch entirely.
Disable DHCP and set up a private network that isn't in the same subnet as your internet-facing network.
Shown in this video:
That is what got me started in my 10gbe migration.
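For the addressing, a tiny Python sketch like this is enough to sanity-check the plan (all addresses here are made up; use whatever private range your router isn't already using):

    # Sketch: pick a private subnet for the direct link that doesn't overlap the
    # existing LAN. All addresses here are made-up examples.
    import ipaddress

    lan  = ipaddress.ip_network("192.168.1.0/24")   # example: existing home LAN
    link = ipaddress.ip_network("10.10.10.0/30")    # example: PC <-> server link

    assert not lan.overlaps(link), "pick a different range for the direct link"

    # A /30 gives exactly two usable host addresses, one per machine.
    pc, server = list(link.hosts())
    print(f"PC:     {pc}/{link.prefixlen}")
    print(f"Server: {server}/{link.prefixlen}")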

The only disadvantage there is that when other PCs access those machines, they will be limited by a 1GbE connection.
Personally I had it set up that way until my family members started treating my server like a SAN, sort of what abufrejoval has going:
directly mapping the shared drive onto the PC and using it as a secondary drive off the main OS.

For example, I have all downloads from Chrome/Firefox go onto the server drive so they don't eat up SSD writes.

This is why I went the switch route in the Ver.2 upgrade of my house, as the server now has a 10GbE connection to handle all the other PCs, which have 1GbE connections.
I could have LACP'd it, but meh... I'd honestly prefer having 1 cable instead of 4+.

Ver.3 is probably when I go to an entirely 10GbE network, as I have wired my house with Cat6 cable and I would honestly like to have a few more PCs on a 10GbE connection.
 
Last edited:
Reactions: SamirD

abufrejoval

Member
Jun 24, 2017
39
5
41
It sounds like you only need high speed to the NAS/server, so I would address that instead of upgrading the whole infrastructure. I bought 2 used Intel 10GbE NICs off eBay for $35 each and ran a direct connection from my PC to the server; now my backups/transfers go as fast as the array can handle most of the time. Cost me around $80 including the cable.

True, and that's what I tried to do initially some years ago, when I first started getting into 10GBase-T with some company-sponsored dual-ported NICs (around €500 apiece at the time): I set up a complex peer-to-peer mesh between four servers using 1:1 connections to avoid the cost of a switch, which was €5k-10k back then.

I am even doing something very similar now on 100Gbit in the corporate lab using Mellanox VPI adapters, which allow "host chaining", where you cable your systems pretty much like you'd cable FC-AL or Token Ring in the old days: hop-to-hop in both directions, for redundancy and potentially fewer hops depending on where your target sits on the ring. Only in this case the NICs do the forwarding automatically, with no routing required on the host: all those chained NICs appear as if they were connected to a switch (or should I say, a token ring?)!

Again, you hit bottlenecks fairly quickly and it's not a lot of fun. In the home lab with the two Windows servers, I've tried lots of things, from link aggregation using quad 1Gbit cards to cross-connected 10GBase-T NICs. Link aggregation doesn't really work unless you're able to employ multi-threaded copying (each single IP connection only ever gets 1Gbit), and 10Gbit hits bottlenecks on small files: you really need specialized software to aggregate those small files into a big stream to take advantage of the bandwidth.
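The "multi-threaded copying" part can be as simple as the sketch below (source and destination paths are placeholders). Keep in mind that whether the parallel streams actually land on different aggregated links still depends on the LACP hash policy and on SMB multichannel, so it's an illustration of the idea rather than a guarantee:

    # Sketch of "multi-threaded copying": lots of small files copied with several
    # workers so that multiple transfers are in flight at once. Whether those
    # streams actually spread across aggregated links depends on the LACP hash
    # and on SMB multichannel, so treat this as an illustration only.
    import os
    import shutil
    from concurrent.futures import ThreadPoolExecutor

    SRC = r"D:\projects"                # placeholder: local source tree
    DST = r"\\server\backup\projects"   # placeholder: destination share

    def copy_one(rel_path):
        src = os.path.join(SRC, rel_path)
        dst = os.path.join(DST, rel_path)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)          # contents + timestamps
        return rel_path

    rel_paths = [
        os.path.relpath(os.path.join(dirpath, name), SRC)
        for dirpath, _dirs, names in os.walk(SRC)
        for name in names
    ]

    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(copy_one, rel_paths))
    print(f"copied {len(rel_paths)} files with 8 parallel workers")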

With the 100Gbit Mellanox it's a Linux-only platform and, as it turns out, "host chaining" is something that Mellanox sort of pushed in 2016 but doesn't want to follow up on in 2019 (they might disable it altogether after the NVIDIA acquisition): as of today VPI host-chaining only supports Ethernet semantics, even if the silicon on the ConnectX-5 actually is dual-personality and will do Ethernet and InfiniBand (that's why I bought them...), and nobody will sell a switch that supports both (market segmentation at work!). It's not a hardware issue; much like the Aquantia switch and NIC silicon, they are really just differently sized chunks of the same base silicon architecture.

Iperf3 throughput was rather disappointing on those 100Gbit cards (Skylake Gold hosts), more like 30Gbit, even after I fiddled with the MTU sizes (4k seemed an optimum, 9k plain failed and 1500 yielded lower throughput). If data had to go another hop around the ring, performance fell to 15-20Gbit, a pretty steep cliff which again seems to indicate that Mellanox doesn't really want you to actually use the host-chaining feature they advertised. Currently running NFS and GlusterFS across these links and at least there are no glitches (lots of resends, though), even if bandwidth doesn't really jump the order of magnitude you'd expect beyond the 10GBase-T network these machines also share.

That 10GBase-T network is using an HP 48-port 10GBase-T switch that sounds like a rocket taking off: definitely something completely unusable outside a data center.

So the main reason I started upgrading my home-lab infrastructure is that Buffalo started selling 8- and 12-port NBase-T switches at around €50/port, which could be hacked to be inaudible by swapping out the fans for fixed-speed Noctuas. Those switches use Aquantia silicon, two (8 port) or three (12 port) quad-port dies, and they work just fine with Aquantia AQC107 NICs, 2.5GbE Realtek (USB3) NICs, or ordinary 10GBase-T and 100MBase-T/1GBase-T gear, at around 40 Watts TDP for the 12-port switch and effectively less than 10 Watts in my setup. Quite a feat, using four differential signalling pairs on RJ-45 with really complex modulation to achieve 10Gbit throughput at the same signal rates used for 1Gbit (you can't change the physics of those copper wires, so you have to go analog on the signals).

The SFP+ or QSFP+ direct-connect stuff will work just fine without a switch, but of course it doesn't quite give you the flexibility you may want. In my case, the Intel and Broadcom 10GBase-T NICs simply overheated and failed, because I was using noise-optimized desktop PCs in my home lab and they didn't provide the airflow that those 10-Watt PHYs require (they're really designed for rack-server airflow). And they also very much exposed the deficiencies of both Windows and the backup software I used, with effective throughput often sinking into the KByte range.

The Green-Ethernet 28nm PHYs from Aquantia make do with around 3 Watts and are therefore able to survive in a noise optimized desktop with passive cooling, just as SFP direct-connect probably would.
 
Reactions: SamirD

VirtualLarry

No Lifer
Aug 25, 2001
56,449
10,119
126
Well, the network, it's a-moving along. Picked up two of these last night:

https://www.newegg.com/asus-xg-c100c/p/N82E16833320272

Only I got mine on the open-box sale, with a 50% off promo code. Two for the price of one! Plus, open-box, so they were a little cheaper than new. Hopefully they're in working condition.

I've got two open 10GbE-T ports on my MX110 Netgear switch, meant for up-link and down-link (with 8 1GbE-T ports on it too), but I'll use those, with my 75ft Cat6 cables, to connect my host machines. Since I have the dual NICs, I might even isolate and put the NAS units on a separate LAN altogether.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,879
3,230
126
but I'll use those, with my 75ft Cat6 cables

Larry, I think you may have problems with those NICs at distances greater than 25 feet on 10GBase-T.
And those cards will most definitely overheat... I remember reading a lot of people ranting and complaining about them, which is why I would have recommended going on eBay and looking for an Intel X540.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,449
10,119
126
Weird, I was reading Amazon reviews about the Intel X540 cards overheating, and the Asus doing fine.

FWIW, the heatsink on the Asus seems more substantial than other AQ cards, though perhaps not as much surface area as the spiked-fin heatsink designs.

I'll put them through the wringer when I get them. My current NAS units only have 2x 1GbE-T anyways.

Edit: What kind of cable should I be looking for? I need like 20-25' to clear over the doorway, and then another 20-25' over to the other PC desk. Maybe 35-40' total, maybe I should spec out 50'?

Cat6? Cat6a? Shielded or unshielded?
 
Last edited:

Rifter

Lifer
Oct 9, 1999
11,522
751
126
Larry, I think you may have problems with those NICs at distances greater than 25 feet on 10GBase-T.
And those cards will most definitely overheat... I remember reading a lot of people ranting and complaining about them, which is why I would have recommended going on eBay and looking for an Intel X540.

Yeah, the Intels are cheap enough that there's no reason not to go that route. Mine are rock solid even at 600-700MB/s.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,449
10,119
126
So, I've gone in a bunch of different directions with my network upgrade plans.

Those two Asus Aquantia-chipset 10GbE-T cards? I installed one in my Asus B450-F / Ryzen R5 3600 PC, and it died a week later. Just NOTHING, no errors, doesn't show up in device manager at all. NOTHING.

Same thing happened to the onboard Intel 1GbE-T NIC on this board too, not much later.

So, I bought a couple of Asustor 2.5GbE-T USB3.x NICs. One for my Asustor NAS, and one for my main client PC. Sadly, that proved to be a bit flaky on my powered USB3.x hub (actually, been having USB3.x issues in general on this PC).

So, for now I'm using an eBay Chinese-special Realtek 1GbE-T USB3.x Type-A NIC. Every so often, the internet goes down, and I have to unplug the entire hub and plug it back in again to get the internet back. For some reason, unplugging just the USB NIC and plugging it back in doesn't reset everything in the drivers, and I have to restart Win10; sometimes I have to hit the RESET button on the case.

I have ordered, and they should be arriving soon, some PCI-E x1 Realtek-chipset 2.5GbE-T NICs from China as well. They were the cheapest that I could find, around $22 ea. shipped. The Asustor USB3.x 2.5GbE-T adapters went up in price like $10 after I ordered my pair of them, so those were a no-go, and since they have a captive Type-C connector, I had to buy some USB3.x Type-C to Type-A adapters to fit into my USB3.x hubs, and those don't make the greatest mechanical connection. (I really dislike Type-C connectors; they seem to fall out or disconnect at the faintest touch.)


I discovered these tonight, a new product I think, RealTek-chipset branded USB3.x Type-A 2.5GbE-T adapters, for $22.99, minus even a 5% off coupon. Nearly as cheap as the generic Chinese RealTek PCI-E x1 NICs, but more versatile, being USB3.x Type-A, and more reliable connection-wise than the Asustor adapters with the Type-C to Type-A mechanical adapters.

Sadly, I'm broke right now, so I can't get them until I get some money together. If I had seen those first, I would have ordered them from Amazon, instead of the generic PCI-E x1 2.5GbE-T NICs from ebay. Oh well.

At some point, in the last few months, I picked up a pair of D-Link 8-port 2.5GbE-T switches, for fairly cheap. (Now they're going for almost $500 ea., crazy!)

And this month, I also picked up one of those $130 MikroTik switches with 4 SFP+ 10GbE ports + 1 GbE-T port (management port), and a few more of the copper SFP+ 10GbE-T adapters. So I could run one of these switches, instead of the D-Link, for my client systems. (The MikroTik is like 1/2 or 1/3 the size of the D-Link, which is practically rack-mount size.) Plus, I could run 10GbE-T client NICs, which was a consideration for a while. It still is, I guess, if I can find some reliable PCI-E x4 ones. (Newegg had some Rosewill-brand 10GbE-T cards with the Tehuti chipset for $72.99, but I have heard that they don't always work well with Linux yet.)

I'm still waiting for one of the major switch mfg's to release a 5-port or 8-port 2.5GbE-T switch for "consumer prices". Not sure how long that will take. Maybe when there is a greater uptake of 2.5GbE-T NICs on the market. But they can't wait too long, or 10GbE-T/NBASE-T/multi-gig switches will drop down in price likewise, and then there would be no point in a 2.5GbE-T-only 8-port switch for the same price as a 10GbE-T switch. Maybe that's why they haven't appeared.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,879
3,230
126
, I guess, if I can find some reliable PCI-E x4 ones.

10GbE cards are all x8.
Good luck finding an x4 one.
Some might be x4 electrically if it's a single-port card, but they will be physically x8.

Can't say I didn't warn you about that Asus NIC either.
I heard a while back that they are absolute garbage and you were probably better off getting Intel X540 NICs off eBay.

Could you at least get an RMA on those ASUS nics? or is it ASUS RMA hell with those as well?
 
Last edited:

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,879
3,230
126
Are you sure, Larry?

I thought open box carried the same standard warranty as new.
I would still try and see if I could file an RMA with Asus.
The worst that could happen is them denying the replacement.

Asus is like a picky girlfriend that can either shower you with yes, or snap at you for even asking.
May as well give that lottery a try.
 
Reactions: SamirD

Rifter

Lifer
Oct 9, 1999
11,522
751
126
I got them open-box == no warranty past Newegg's replacement policy, which is long gone.

You may want to try your luck direct with the manufacturer; I've had white-box, no-warranty items repaired under warranty before. You never know till you try.
 
Reactions: SamirD