1Gbps is too slow

13Gigatons

Diamond Member
Apr 19, 2005
7,461
500
126
Why are we still stuck with such slow networking speeds?

USB 3.1 is 10Gbps and eSATA is 6Gbps; even WiFi is pushing past 1Gbps.

I'd like to see 10 or even 25 Gigabit networking speeds.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
There hasn't been enough consumer interest; buyers are currently enthralled with imaginary numbers in wireless networking. NBase-T is just getting traction, but that is going to be 2.5 or 5Gbps over existing Ethernet. We are going to need a different cable standard for higher speeds. Be thankful for GbE, because I still have to deal with "fast" Ethernet at a lot of locations.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,450
10,119
126
Why don't we (consumers) have 12Gbit SATA, like they do with SAS? Same reason, the vendors of "enterprise" equipment want to keep consumer gear in the near dark ages technologically speaking, so that they can maintain their margins on "enterprise" gear.

Intel is just as guilty. Back in the day, ALL PCs had ECC RAM. It just made sense, for reliability. But Intel saw that as a feature that they could use to differentiate their "enterprise" gear.

Edit: I wouldn't mind it too much, if consumer gear got 5Gbit/sec ethernet (backwards compatible with 1Gbit/sec ethernet), and enterprise gear stayed 10Gbit. It would at least be a step forward, and let us use existing wiring.
 

Red Squirrel

No Lifer
May 24, 2003
67,904
12,374
126
www.anyf.ca
They pretty much price 10G stuff high knowing that companies will pay for it. The same was done with gigabit for the longest time. Heck, even now a managed gigabit switch is not all that cheap. I found a good deal for mine and paid about $200. It's a Dell and not a fancier name like Cisco, but it does the job for me.

I don't really have a need for 10G myself, but when it does get cheaper I may still use it for "back of rack" applications like connectivity between servers. For a workstation, 1G is fine for my needs. Heck, the Raspberry Pi only does 10/100 and you can actually stream HD over that. Not sure about 4K, mind you, but I don't yet have a 4K TV anyway.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Why don't we (consumers) have 12Gbit SATA, like they do with SAS? Same reason, the vendors of "enterprise" equipment want to keep consumer gear in the near dark ages technologically speaking, so that they can maintain their margins on "enterprise" gear.

Intel is just as guilty. Back in the day, ALL PCs had ECC RAM. It just made sense, for reliability. But Intel saw that as a feature that they could use to differentiate their "enterprise" gear.

Edit: I wouldn't mind it too much, if consumer gear got 5Gbit/sec ethernet (backwards compatible with 1Gbit/sec ethernet), and enterprise gear stayed 10Gbit. It would at least be a step forward, and let us use existing wiring.

LOL. Just LOL. Let's take off the tin foil hat for a minute here.

A) SAS is full duplex and dual ported. SATA is half duplex and single ported. SATA will never be as fast as SAS, and it doesn't need to be.

B) Bandwidth/features don't come out of thin air. Where/how would 12Gbps SAS be implemented on a consumer system? A dual port 12Gbps SAS adapter would need 4x PCIe 3.0 lanes to be able to reliably feed it. The hardware isn't free either.

C) "Back in the day" computers were $3,000. Ignoring inflation, $3,000 will still get you a computer with ECC. But most people don't want to spend $3,000 therefore compromises have to be made.

D) "Backwards compatible" is the bane of innovation. You can get faster speeds with surplus gear for dirt cheap, but once people find out they'd need new cables and NIC's/HBA's, they rapidly lose interest. Hence why they're now creating new standards to try to get more speed out of existing hardware. We'd make a lot more progress in computing if we could just throw out backwards compatibility with every new innovation but nobody is going to go for that.

Then there's the whole supply and demand thing. Yeah, whining about 10GbE being expensive sounds cool and all, but let's be realistic here. What would you actually do with 10GbE? I know you (and most other people) don't have a storage system fast enough to actually need 10GbE. Spindle drives are going to struggle to saturate a pair of load-balanced 1GbE ports.

The cheap NAS units most consumers buy don't have the grunt to push through that kind of speed, even if the drives in them do. Most cheap NAS units can't even saturate a single GbE link regardless of drives.

12Gbps SAS/SATA? Consumer spindles don't come remotely close to needing 12Gbps. SSD's have moved to PCIe on the consumer side to get the speed, and that's a far more economical and universal choice. A PCIe slot can be used for whatever the user wants, and every computer has them. Embedded 12Gbps SATA would mean dedicating resources to storage, even if you didn't want it. Again, let's be honest here...



I can transfer a 15GB Bluray rip from my PC to the file server in 2 minutes over a single GbE link. If I had a 10GbE link, that would drop to probably around a minute and a half, because at that point the spindles can't keep up. I'm a pretty impatient guy. I mean I REALLY hate waiting. Every device I own is booting off flash storage because I hate waiting. But to get that above example under a minute, I'd have to go 10GbE (or faster) and replace 40TB worth of spindles with SSD's. Yeah, my wallet says no to that, as do most consumers'. How much money would you spend to shave seconds off network transfers?
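For scale, a rough transfer-time sketch. The rates are assumptions consistent with the figures in this thread: ~113MB/s usable payload on a saturated GbE link, and an assumed ~170MB/s spindle-limited sequential rate on 10GbE:

```python
def transfer_minutes(size_gb, rate_mb_s):
    """Minutes to move size_gb at a sustained rate of rate_mb_s."""
    return size_gb * 1000 / rate_mb_s / 60

# ~113 MB/s is roughly the usable payload rate of a saturated GbE link.
print(f"GbE:   {transfer_minutes(15, 113):.1f} min")   # ~2.2 min
# On 10GbE the spindles become the bottleneck; ~170 MB/s is an assumed
# sequential rate for a consumer spindle array.
print(f"10GbE: {transfer_minutes(15, 170):.1f} min")   # ~1.5 min
```

Which matches the "2 minutes now, maybe a minute and a half on 10GbE" numbers quoted above.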

There are also little realities that most "prosumers" don't consider. Faster interfaces use more power. More power means more heat. More heat means more cooling, which usually means fans. More fans means more noise. 10GbE NICs have pretty sizable heatsinks on them, and it's because they need it. Where are you going to physically put that on a consumer board?

Now, how about the switch? Have you looked at how much power enterprise switches use and how much heat they put out? Older Cisco 24-port GbE (not 10GbE) switches pull over 100 watts even with nothing plugged into them. My HP ProCurve ZL setup with 8x 10GbE ports and 96 GbE ports was pulling almost 400W with only a single 10GbE port and 24 or so GbE ports in use and PoE disabled on all the ports. My Cisco MDS (24-port 4Gbps Fibre Channel) uses about 120W idle as well, and it's noisy as shit. I'm talking hearing-it-through-a-wall loud. No normal person wants that in their house.

Even with GbE, once you get to the 48-port range you're pretty limited on your fanless options, and since we're talking switches, you're talking little 20mm fans, which mean a nice loud, constant, high-pitched whine. Off the top of my head, I don't recall seeing a fanless true 10GbE switch (meaning not a GbE switch with a couple of 10GbE uplinks), and a quick Google search would seem to verify that. I've seen heat issues on AC1300 wireless cards even with heatsinks on the cards.

Lastly, not all GbE devices are equal. You're accusing the manufacturers of holding back consumer gear purely to keep their margins high. Do they have better margins on enterprise gear? Yes, absolutely. But is enterprise gear better hardware? Yes, which means it's going to be more expensive.

Have you ever tried to saturate a consumer GbE switch? I have, and I can tell you I've killed a Netgear and two Zyxel switches by turning on DRS on my VMware hosts and watching them thrash the network with vMotions, tossing in some svMotions for giggles. They may have 24 GbE ports, but they don't have 24Gbps of switching capacity. As you can see, I sustained a constant 113MB/s transfer rate in the above example. I've had multiple people try to insist to me that you can't get that much speed out of a single GbE connection. Most people are "happy" with over 90MB/s. I've got Intel NICs on both ends of that connection and a Cisco switch in the middle.

Not all GbE is equal, and quality parts cost more money. You're wanting a Ferrari for Kia prices. Could Ferraris be cheaper? Sure. But they're never going to be Kia cheap. Who makes the NIC on most consumer boards? Realtek, because they're cheap. If you want Intel, you usually have to go with enthusiast or workstation boards. Have you looked for Realtek 10GbE NICs? Guess what, they don't make one. You've got to go to the big boys if you want 10GbE.

You can get speed for cheap if you really want it. I paid about $300 for my surplus Fibre Channel setup around 3-4 years ago. I've got 16Gbps connectivity between my servers. My IT epeen got huge when I did that. But guess what? That's worn off, and I'm planning on going back to iSCSI over regular GbE. Why? Because honestly 4Gbps is sufficient speed, and as mentioned above, the FC switch is obnoxious. That switch alone is costing me $150/yr in electricity, and out of the 10 devices in my rack, it's by far the loudest. The ProCurve I previously mentioned would cost me $450/yr. It's honestly not worth it.

If you want to go balls-to-the-wall fast on the cheap, you can get a surplus 40Gbps QDR InfiniBand setup for 3 systems for about $500 these days. But then you'll just end up realizing something else is too slow.

Regarding USB 3.1 and eSATA, do you know what the maximum cable length is for those technologies?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,450
10,119
126
LOL. Just LOL. Let's take off the tin foil hat for a minute here.
Tin foil hat, eh? As if enterprise gear mfgs DIDN'T like their huge margins?

A) SAS is full duplex and dual ported. SATA is half duplex and single ported. SATA will never be as fast at SAS and it doesn't need to be.
But they share a physical layer, don't they? They doubled the speed of the physical layer of SAS to 12Gbit/sec, with the same cables. They could have done that with SATA as well, but they didn't. Why? "Enterprise".
B) Bandwidth/features don't come out of thin air. Where/how would 12Gbps SAS be implemented on a consumer system? A dual port 12Gbps SAS adapter would need 4x PCIe 3.0 lanes to be able to reliably feed it. The hardware isn't free either.
I don't know, Intel's Skylake PCH chipset has plenty of connectivity. They could also always bump up the DMI link count/speed, if they really wanted to. I mean, with Kaby Lake PCH, they added another PCI-E 3.0 x4 link for a secondary NVMe SSD, but didn't boost the DMI link speed that I know of. They could have easily given us two 12Gbit/sec SATA ports.

C) "Back in the day" computers were $3,000. Ignoring inflation, $3,000 will still get you a computer with ECC. But most people don't want to spend $3,000 therefore compromises have to be made.
I'm not sure what price has to do with it. I built quite a few older rigs back in the day, that didn't cost nearly $3000, that came with parity or ECC (parity and ECC RAM are the same once you hit the 64/72-bit memory bus widths.) It was a standard feature, well, before "enterprise" took over.

Then there's the whole supply and demand thing. Yeah, whining about 10GbE being expensive sounds cool and all, but let's be realistic here. What would you actually do with 10GbE?
Yeah, yeah, we know, "640KB should be enough for everybody".

I can transfer a 15GB Bluray rip from my PC to the file server in 2 minutes over a single GbE link. If I had a 10GbE link, that would drop to probably around a minute and half because at that point the spindles can't keep up.
The point is, even consumer NAS units can saturate a 1Gbit/sec ethernet link. So, time to move up to the "next bigger" speed/size.

I mean, we didn't keep using SATAII until HDDs saturated the link, did we? No, we moved up. Well, until the chipset vendors and standards authors got bought out, and kept us at 6Gbit/sec for SATA, rather than boosting to 12Gbit/sec, necessitating things like NVMe, which take a lot more board space than a simple tiny SATA port. Maybe if we had 12Gbit/sec SATA, we wouldn't have the abomination that is SATAe on motherboards, sitting unused. Surely, SATAe support takes more bandwidth from the PCH, if actually implemented with a drive that speaks that. (Which doesn't exist yet?)

My HP Procurve ZL setup with 8x 10GbE ports and 96 GbE ports was pulling almost 400w with only a single 10GbE port and 24 or so GbE ports in use and POE disabled on all the ports. My Cisco MDS (24 port 4Gbps Fiber Channel) uses about 120w idle as well and it's noisy as shit. I'm talking hearing it through a wall loud. No normal person wants that in their house.
Again, you're using surplus enterprise gear and complaining that it doesn't fit a typical consumer lifestyle. Surely, consumer 10GbE gear would be built a bit differently, probably not with a "jet engine fans" setup. And given the timespan between the production of your enterprise gear and the production of hypothetical consumer 10GbE gear, surely they could reduce the power/heat/noise, as they would be one or several chip shrinks forward on the technology timeline.

Even with GbE, once you get to the 48 port range, you're pretty limited on your fanless options and since we're talking switches, you're talking little 20mm fans which mean a nice loud, constant, high pitched whine. Off the top of my head, I don't recall seeing a fanless true 10GbE switch (meaning not a GbE switch with a couple of 10GbE uplinks) and a quick Google search would seem to verify that. I've seen heat issues on AC1300 wireless cards even with heatsinks on the cards.
What kind of consumer deployment needs a 48-port switch of any speed? Most consumers don't have that many wired drops. For a consumer 10GbE switch, I could see four 10GbE ports and 4-8 1GbE ports on it.

Have you ever tried to saturate a consumer GbE switch? I have and I can tell you I've killed a Netgear and two Zyxel switches by turning on DRS on my VMWare hosts and watching them thrash the network with vMotions and tossing in some svMotions for giggles. They may be 24GbE ports, but they don't have 24Gbps of switching capacity.
OK, so consumer switch vendors LIE about their specs. Who's really wearing the tinfoil hat here? Every 5-port and 8-port dumb GigE switch that I've used claims to have adequate switching capacity to handle all of the ports at line speed. Are they lying? I don't know, I haven't tried to push them. I know that other than a Netgear 5-port switch that had a plastic casing, I've never had a consumer switch die from usage. (Well, I had a nice metal-casing TRENDnet GigE switch stop working after a storm where the power went out.)

Regarding USB 3.1 and eSATA, do you know what the maximum cable length is for those technologies?
3-6 feet, I think? Surely, 10GbE over Cat6A should be able to push 50 feet, which would be plenty for my small apt.

Edit: Comcast already offers a 2Gbit/sec internet connection service.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
3-6 feet, I think? Surely, 10GbE over Cat6A should be able to push 50 feet, which would be plenty for my small apt.

10G-BaseT is designed to do 100m (330 feet) over Cat6A and 55m over plain Cat6.

The problem is that it just doesn't work on Cat5, which is the vast majority of installed wiring, so at least we're getting 2.5G-BaseT soon for that use case.

Ideally we'd be able to rip it all out and replace with optical fiber, but I think even 10GbE optics are still too expensive for home use.
 

sdifox

No Lifer
Sep 30, 2005
96,171
15,776
126
Assuming 100MBps throughput, your gigabit network can transfer 8,640GB in a 24hr period. Are you mirroring alt.binaries?
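That figure checks out; a quick sanity check, using the 100MB/s sustained rate assumed in the post above:

```python
# Data moved in 24 hours at a sustained 100 MB/s over GbE.
seconds_per_day = 24 * 60 * 60              # 86,400 s
throughput_mb_s = 100                       # assumed sustained MB/s
gb_per_day = throughput_mb_s * seconds_per_day / 1000  # MB -> GB
print(f"{gb_per_day:,.0f} GB/day")          # 8,640 GB/day
```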
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
If you saw how crazy 4D-PAM5 is just to squeeze gigabit out of cat 5...

Problem is Cat 5 is rated to 100 MHz, which was matched to the 125 MBd 4B/5B baseband signaling used for 100 Mbps.

10 gig is an entirely new modulation and requires 500 MHz cable to shoehorn it onto 4-pair UTP. It's really meant for twinax or single-mode fiber.
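For reference, the line-rate arithmetic behind 4D-PAM5: 1000BASE-T keeps the 125 MBd symbol rate of Fast Ethernet but transmits on all four pairs at once, with each PAM-5 symbol carrying 2 data bits per pair (a sketch of the rate math only, not the full coding scheme):

```python
pairs = 4             # 1000BASE-T drives all four pairs simultaneously
symbol_rate = 125e6   # 125 MBd, same baud rate as 100BASE-TX
bits_per_symbol = 2   # PAM-5 carries 2 data bits (5th level aids coding)
line_rate = pairs * symbol_rate * bits_per_symbol
print(f"{line_rate / 1e9:.0f} Gbps")  # 1 Gbps
```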

I had 10Gbps years ago with InfiniBand. Nobody else did, and I did all my storage using SSDs like removable cartridges anyway. It takes RAID 0 SSDs on both ends to use it. Problem is it made transfers so fast that you no longer spent time transferring files, and then you question the worth of having something you don't really use, and that nobody else is willing to interface with, just to take nerdboner screencaps.

Blame slow home network speeds on the people who don't want to spend $50 a cable. The rest of us found 10G+ alternatives years ago. 10Gbps isn't expensive. Just 10GbE is.
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
What do most people do at home? Cruise the internet on their 30Mbps internet connection. We are a long way off from anybody needing more than 1Gbps in the home. Hell, in the enterprise, with a lot of things going into the cloud, we have the same issue and feel like we are regressing. Where does 10Gbps fit except for storage, when so many of the applications are running over our 60-100Mbps WAN link? Yeah, Wireless AC deployments in theory could use more than 1Gbps links, and some of the enterprise/commercial APs are now offering bonded 1Gbps links, but the end users will most likely never see the performance increase.
 

JimKiler

Diamond Member
Oct 10, 2002
3,559
205
106
Problem is Cat 5 is rated to 100 MHz, which was matched to the 125 MBd 4B/5B baseband signaling used for 100 Mbps.

10 gig is an entirely new modulation and requires 500 MHz cable to shoehorn it onto 4-pair UTP. It's really meant for twinax or single-mode fiber.

I have Cat 6 in my home, and unlike my friends I was not upcharged $1k for it instead of Cat 5e. However, I presume I will need new jacks if I ever go faster than 1Gb.

I thought we have had Cat 5 cables with 300+ MHz ratings for over a decade. Or are the jacks limited to 125 MHz?
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
I have Cat 6 in my home, and unlike my friends I was not upcharged $1k for it instead of Cat 5e. However, I presume I will need new jacks if I ever go faster than 1Gb.

I thought we have had Cat 5 cables with 300+ MHz ratings for over a decade. Or are the jacks limited to 125 MHz?

Cat 5/5e is rated to 100 MHz.

Any 300 MHz UTP is probably just Cat 6 that was marketed as Cat 5e before 6 became a standard.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
I have Cat 6 in my home, and unlike my friends I was not upcharged $1k for it instead of Cat 5e. However, I presume I will need new jacks if I ever go faster than 1Gb.

I thought we have had Cat 5 cables with 300+ MHz ratings for over a decade. Or are the jacks limited to 125 MHz?
10G-BaseT uses the same jacks, afaik.
 

13Gigatons

Diamond Member
Apr 19, 2005
7,461
500
126
Something not mentioned here is that the 1Gbps is shared, so 5-6 machines getting backed up to the server slow things down a lot.
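A rough sketch of that sharing penalty, assuming the server's single GbE port is the bottleneck and the clients split its ~113MB/s evenly (an even split is an assumption; real contention is messier):

```python
def per_client_mb_s(clients, link_mb_s=113):
    """Approximate per-client share of one saturated GbE server link."""
    return link_mb_s / clients

for n in (1, 3, 6):
    print(f"{n} client(s): ~{per_client_mb_s(n):.0f} MB/s each")
```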

I can transfer a 15GB Bluray rip from my PC to the file server in 2 minutes over a single GbE link.

Try moving hundreds of GB; then it starts taking hours.

Cat 5/5e is 100 MHz. Any 300 MHz UTP is probably just Cat 6 that was marketed as Cat 5e before 6 became a standard.

The Cat 5e I just installed says it's 350MHz certified. I should have just gone Cat 6, but the Cat 5e was on clearance.
 

sdifox

No Lifer
Sep 30, 2005
96,171
15,776
126
Something not mentioned here is that the 1Gbps is shared, so 5-6 machines getting backed up to the server slow things down a lot.

Try moving hundreds of GB; then it starts taking hours.

The Cat 5e I just installed says it's 350MHz certified. I should have just gone Cat 6, but the Cat 5e was on clearance.
Average homes don't do a full machine backup to a server daily. And you can always stagger backup times.

Or walk an external HDD over for your hundreds of GB.
 

master_shake_

Diamond Member
May 22, 2012
6,430
291
121
All this complaining about 1Gbps, and here I am with 10Gb InfiniBand.

You guys should check out the dirt cheap 10Gb SFP+ NICs.

The switches are where they get you.
 

bigboxes

Lifer
Apr 6, 2002
39,144
12,027
146
Something not mentioned here is that the 1Gbps is shared, so 5-6 machines getting backed up to the server slow things down a lot.

Try moving hundreds of GB; then it starts taking hours.

The Cat 5e I just installed says it's 350MHz certified. I should have just gone Cat 6, but the Cat 5e was on clearance.

I've been using CAT6 for over 10 years now. I'd go CAT6a if I were building today.
 

sdifox

No Lifer
Sep 30, 2005
96,171
15,776
126
Just looked on eBay. Hmm, maybe it is time I step up to 40Gb InfiniBand.
 
Feb 25, 2011
16,822
1,493
126
But they share a physical layer, don't they? They doubled the speed of the physical layer of SAS to 12Gbit/sec, with the same cables. They could have done that with SATA as well, but they didn't. Why? "Enterprise".

No, they don't. The wiring for SAS is nothing like SATA.

The point is, even consumer NAS units can saturate a 1Gbit/sec ethernet link. So, time to move up to the "next bigger" speed/size.

Actually, his point was that a lot (most) of them CAN'T saturate that link.

I mean, we didn't keep using SATAII until HDDs saturated the link, did we? No, we moved up. *snip*

Because of SATA expanders/multipliers.

OK, so consumer switch vendors LIE about their specs. Who's really wearing the tinfoil hat here? Every 5-port and 8-port dumb GigE switch that I've used, claims to have adequate switching capacity to handle all of the ports at line speed. Are they lying?

Yes.

I don't know, I haven't tried to push them.

You should.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Tin foil hat, eh? As if enterprise gear mfgs DIDN'T like their huge margins?

Here's a thought. How about we stick to facts here, rather than you telling us how you think things should work with nothing beyond wishful thinking to validate that opinion? You'll note I started right off by agreeing that enterprise gear has better margins. But introducing consumer gear doesn't impact the margins of their enterprise gear.

But they share a physical layer, don't they? They doubled the speed of the physical layer of SAS to 12Gbit/sec, with the same cables. They could have done that with SATA as well, but they didn't. Why? "Enterprise".

No they don't, and no they didn't. The SAS physical layer happens to be compatible with SATA, but it does not share it. SAS-3 (12Gbps) uses Mini-SAS HD (SFF-8643 and SFF-8644). More importantly, you're completely ignoring the fact that SATA is only half duplex.

I don't know, Intel's Skylake PCH chipset has plenty of connectivity. They could also always bump up the DMI link count/speed, if they really wanted to. I mean, with Kaby Lake PCH, they added another PCI-E 3.0 x4 link for a secondary NVMe SSD, but didn't boost the DMI link speed that I know of. They could have easily given us two 12Gbit/sec SATA ports.

No, it really doesn't. The DMI link back to the CPU is basically a PCIe 3.0 x4 link, which means it would have to be dedicated to the SATA controller. Otherwise, they'd be shipping boards with a "12Gbps" storage interface, and then people would complain when they realize it's only 12Gbps if there's nothing else running off the PCH.

I'm not sure what price has to do with it. I built quite a few older rigs back in the day, that didn't cost nearly $3000, that came with parity or ECC (parity and ECC RAM are the same once you hit the 64/72-bit memory bus widths.) It was a standard feature, well, before "enterprise" took over.

You're not sure what price has to do with removing features? I don't even know how to reply to that. Do you realize that there are a lot of people with ECC-capable systems who don't run ECC RAM in them because the extra demands of ECC aren't considered worth it?

Yeah, yeah, we know, "640KB should be enough for everybody".

So, when you have no actual facts to counter with, you just make up statements. Got it.

The point is, even consumer NAS units can saturate a 1Gbit/sec ethernet link. So, time to move up to the "next bigger" speed/size.

Your continuing ability to ignore information never ceases to amaze me. No, most consumer NAS units don't completely saturate a GbE link. They usually get close, but they don't actually saturate it. But even if they did, you're still going to run into the limits of your drives far before needing 10GbE. A pair of 1GbE links would resolve that problem and be far cheaper and simpler to implement.

I mean, we didn't keep using SATAII until HDDs saturated the link, did we? No, we moved up. Well, until the chipset vendors and standards authors got bought out, and kept us at 6Gbit/sec for SATA, rather than boosting to 12Gbit/sec, necessitating things like NVMe, which take a lot more board space than a simple tiny SATA port. Maybe if we had 12Gbit/sec SATA, we wouldn't have the abomination that is SATAe on motherboards, sitting unused. Surely, SATAe support takes more bandwidth from the PCH, if actually implemented with a drive that speaks that. (Which doesn't exist yet?)

Yes, we did. The early SSD's were SATAII and were completely capable of saturating the interface. Now, you bring up the port again, so in case you missed it the first time: SAS 12Gbps does not use the same port as SAS 6Gbps, and there's no reason to think SATA 12Gbps would either. Maybe you've heard of U.2 drives. If you look at their connector on the motherboard side (SFF-8639), you might notice it looks quite similar to the SFF-8643 connectors used by SAS-3 and coincidentally also supports SATA Express.

Again, you're using surplus enterprise gear, and complaining that it doesn't fit a typical consumer lifestyle. Surely, consumer 10Gbe gear would be built a bit differently, probably not with "jet engine fans" set up. And, given the timespan difference between the production of your enterprise gear, and the production of hypothetical consumer 10Gbe gear, surely they could reduce the power / heat / noise, as they would be one or several chip shrinks forward in the technology timeline.

No, again you're ignoring information provided to you in order to keep your fantasy land alive. You're being provided with facts about how the technology works, and you're countering with how you think it should work, which has no basis in reality. I assumed you were going to do something like this, which is why I even mentioned the AC1300 adapters, but naturally you ignored that.

I'll tell you what: as soon as you develop a cheap 10GbE chipset that doesn't generate heat, let me know and I'll be first in line for it. But I know you don't want to be confronted with facts; you just want to keep complaining that things don't work the way you want. New AC1300 adapters already need heatsinks, and I've seen more than one still have overheating issues even with the heatsink. You're wanting something with nearly 10x that bandwidth on the NIC side and 8x or so on the switch side. My personal equipment is old, yes. The equipment I use at work is not, and guess what? It still runs hot.

What kind of consumer deployment needs a 48-port switch of any speed? Most consumer don't have that many wired drops. For a consumer 10Gbe switch, I could see four 10Gbe ports, and 4-8 1Gbe ports on it.

*sigh*

I don't know why I'm bothering at this point. I guess I'm a sucker for punishment. At no point did I suggest a consumer deployment needs a 48-port switch. I gave you what are called data points.

More bandwidth means more heat. A 48 port GbE switch would need 96Gbps of bandwidth in order for you to be able to fully saturate all the ports. Your above mentioned hypothetical 10GbE switch would need *gasp* 96Gbps for you to be able to saturate all the ports. Therefore, logically, one might expect the cooling requirements to be similar between the two products. Which, again, would suggest fans will be needed. Which, again, given the form factor of said switches, you'd be looking at "jet engine" fans.
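Spelling that out (switching capacity counts both directions of each full-duplex port; the port mixes are the ones from the posts above):

```python
def fabric_gbps(port_speeds_gbps):
    """Non-blocking fabric required: sum of port speeds, x2 for full duplex."""
    return sum(port_speeds_gbps) * 2

print(fabric_gbps([1] * 48))             # 48x GbE            -> 96 Gbps
print(fabric_gbps([10] * 4 + [1] * 8))   # 4x 10GbE + 8x GbE  -> 96 Gbps
```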

OK, so consumer switch vendors LIE about their specs. Who's really wearing the tinfoil hat here? Every 5-port and 8-port dumb GigE switch that I've used, claims to have adequate switching capacity to handle all of the ports at line speed. Are they lying? I don't know, I haven't tried to push them. I know that other than a Netgear 5-port switch that had a plastic casing, I've never had a consumer switch die - from usage. (Well, I had a nice metal-casing TrendNet GigE switch stop working after a storm where the power went out.)

I never said they lie about their specs. A lot of consumer switches DON'T list their switching fabric bandwidth. Netgear and TRENDnet do; TP-Link and Linksys do not. That said, it wouldn't surprise me if even the former couldn't sustain that, given the number they give you is a nice round number that's exactly what you'd need to fully saturate the switch. Compare that to enterprise switches, where you get more exact numbers and that number exceeds what you'd need to saturate all your ports. For example, my previously mentioned ProCurve is rated at:

Throughput up to 480.3 million pps
Routing/Switching capacity 645.6 Gbps
Switch fabric speed 691.2 Gbps

Average homes dont do full machine backup to a server daily. And you can always stagger backup time.

Or walk an external hdd over for your hundreds of gb.

And even if they did, that brings me back to my question: how much would the average consumer spend to make that faster? At 2Gbps you're going to be maxing out your spindles, unless you're running all flash in your NAS, at which point you're not the average consumer.

Or backing up your NAS to your server. Try ten hours. (You do back up your NAS, don't you?)

No, I replicate my 40TB SAN over 16Gbps of FC bandwidth because I buy what does the job I want it to do. But guess what? It's limited by spindle speed, so it takes days to do a full backup anyway, and I also don't care how long it takes because that doesn't impact anything.
 