Why don't we (consumers) have 12Gbit SATA, like they do with SAS? Same reason: the vendors of "enterprise" equipment want to keep consumer gear in the near dark ages, technologically speaking, so that they can maintain their margins on "enterprise" gear.
Intel is just as guilty. Back in the day, ALL PCs had ECC RAM. It just made sense, for reliability. But Intel saw that as a feature that they could use to differentiate their "enterprise" gear.
Edit: I wouldn't mind it too much if consumer gear got 5Gbit/sec Ethernet (backwards compatible with 1Gbit/sec Ethernet) and enterprise gear stayed at 10Gbit. It would at least be a step forward, and it would let us use existing wiring.
LOL. Just LOL. Let's take off the tin foil hat for a minute here.
A) SAS is full duplex and dual ported. SATA is half duplex and single ported. SATA will never be as fast as SAS, and it doesn't need to be.
B) Bandwidth/features don't come out of thin air. Where/how would 12Gbps SAS be implemented on a consumer system? A dual port 12Gbps SAS adapter would need 4x PCIe 3.0 lanes to be able to reliably feed it. The hardware isn't free either.
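To put rough numbers on that, here's a quick back-of-envelope sketch. The throughput figures are assumptions based on the usual encoding overheads (8b/10b on 12Gbps SAS, roughly 985 MB/s usable per PCIe 3.0 lane), not measurements:

```python
# Rough sanity check on the PCIe math above. Assumed figures: 12Gbps SAS uses
# 8b/10b encoding (~1200 MB/s usable per port) and a PCIe 3.0 lane delivers
# roughly 985 MB/s after 128b/130b encoding and protocol overhead.

SAS_PORT_MBPS = 12_000 * (8 / 10) / 8      # ~1200 MB/s usable per 12Gbps SAS port
PCIE3_LANE_MBPS = 985                      # ~985 MB/s usable per PCIe 3.0 lane

ports = 2                                  # dual-ported adapter
needed = ports * SAS_PORT_MBPS             # ~2400 MB/s of host bandwidth required

for lanes in (1, 2, 4):
    host_bw = lanes * PCIE3_LANE_MBPS
    print(f"x{lanes}: {host_bw:.0f} MB/s -> {'enough' if host_bw >= needed else 'not enough'}")
# x1 and x2 fall short; x4 (~3940 MB/s) is the first width that reliably feeds both ports.
```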
C) "Back in the day" computers were $3,000. Ignoring inflation, $3,000 will still get you a computer with ECC. But most people don't want to spend $3,000 therefore compromises have to be made.
D) "Backwards compatible" is the bane of innovation. You can get faster speeds with surplus gear for dirt cheap, but once people find out they'd need new cables and NIC's/HBA's, they rapidly lose interest. Hence why they're now creating new standards to try to get more speed out of existing hardware. We'd make a lot more progress in computing if we could just throw out backwards compatibility with every new innovation but nobody is going to go for that.
Then there's the whole supply and demand thing. Yeah, whining about 10GbE being expensive sounds cool and all, but let's be realistic here. What would you actually do with 10GbE? I know you (and most other people) don't have a storage system fast enough to actually need 10GbE. Spindle drives are going to struggle to saturate a pair of load-balanced 1GbE ports. The cheap NAS units most consumers buy don't have the grunt to push through that kind of speed, even if the drives in them do. Most cheap NAS units can't even saturate a single GbE link regardless of drives. 12Gbps SAS/SATA? Consumer spindles don't come remotely close to needing 12Gbps. SSDs have moved to PCIe on the consumer side to get the speed, and that's a far more economical and universal choice. A PCIe slot can be used for whatever the user wants, and every computer has them. Embedded 12Gbps SATA would mean dedicating resources to storage even if you didn't want it. Again, let's be honest here...
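For a rough sense of scale, here's a hedged back-of-envelope using assumed sequential throughput numbers (real drives vary, and random I/O is far worse):

```python
# Back-of-envelope: how many consumer spindles does it take to fill a link?
# Assumed numbers (they vary by drive and drop sharply for random I/O):
#   7200rpm spindle ~ 160 MB/s sequential
#   usable GbE      ~ 118 MB/s, 2x load-balanced GbE ~ 236 MB/s, 10GbE ~ 1180 MB/s

import math

SPINDLE_MBPS = 160
links = {"1x GbE": 118, "2x GbE (balanced)": 236, "10GbE": 1180}

for name, link_mbps in links.items():
    drives = math.ceil(link_mbps / SPINDLE_MBPS)
    print(f"{name}: ~{drives} drive(s) streaming sequentially to saturate it")
# One drive can roughly fill a single GbE port, a bonded pair is already a stretch,
# and 10GbE wants the better part of an 8-drive array running flat out.
```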
I can transfer a 15GB Blu-ray rip from my PC to the file server in 2 minutes over a single GbE link. If I had a 10GbE link, that would drop to probably around a minute and a half, because at that point the spindles can't keep up. I'm a pretty impatient guy. I mean I REALLY hate waiting. Every device I own boots off flash storage because I hate waiting. But to get that above example under a minute, I'd have to go 10GbE (or faster) and replace 40TB worth of spindles with SSDs. Yeah, my wallet says no to that, as do most consumers'. How much money would you spend to shave seconds off network transfers?
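A quick sanity check on those times, using an assumed ~113MB/s over GbE and an assumed ~170MB/s spindle ceiling once the network stops being the bottleneck:

```python
# Quick check on the transfer-time claim. Assumptions: ~113 MB/s sustained over
# GbE (matches the figure quoted below) and a spindle-limited ~170 MB/s once the
# network is no longer the bottleneck on 10GbE. Both are ballpark numbers.

FILE_GB = 15
FILE_MB = FILE_GB * 1000

for label, mbps in (("1x GbE (network-limited)", 113),
                    ("10GbE (spindle-limited)", 170)):
    seconds = FILE_MB / mbps
    print(f"{label}: {seconds / 60:.1f} minutes")
# ~2.2 minutes vs ~1.5 minutes: the 10x link only buys back the part of the wait
# the network was responsible for.
```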
There are also little realities that most "prosumers" don't consider. Faster interfaces use more power. More power means more heat. More heat means more cooling, which usually means fans. More fans mean more noise. 10GbE NICs have pretty sizable heatsinks on them because they need them. Where are you going to physically put that on a consumer board? Now, how about the switch? Have you looked at how much power enterprise switches use and how much heat they put out? Older Cisco 24 port GbE (not 10GbE) switches pull over 100 watts even with nothing plugged into them. My HP ProCurve ZL setup with 8x 10GbE ports and 96 GbE ports was pulling almost 400W with only a single 10GbE port and 24 or so GbE ports in use, and PoE disabled on all the ports. My Cisco MDS (24 port 4Gbps Fibre Channel) uses about 120W idle as well, and it's noisy as shit. I'm talking hearing-it-through-a-wall loud. No normal person wants that in their house. Even with GbE, once you get to the 48 port range, you're pretty limited on fanless options, and since we're talking switches, you're talking little 20mm fans, which means a nice loud, constant, high-pitched whine. Off the top of my head, I don't recall seeing a fanless true 10GbE switch (meaning not a GbE switch with a couple of 10GbE uplinks), and a quick Google search would seem to verify that. I've seen heat issues on AC1300 wireless cards even with heatsinks on them.
Lastly, not all GbE devices are equal. You're accusing the manufacturers of holding back consumer gear purely to keep their margins high. Do they have better margins on enterprise gear? Yes, absolutely. But is enterprise gear better hardware? Yes, which means it's going to be more expensive. Have you ever tried to saturate a consumer GbE switch? I have, and I can tell you I've killed a Netgear and two Zyxel switches by turning on DRS on my VMware hosts and watching them thrash the network with vMotions, tossing in some svMotions for giggles. They may have 24 GbE ports, but they don't have 24Gbps of switching capacity. In the example above, I sustained a constant 113MB/s transfer rate. I've had multiple people try to insist to me that you can't get that much speed out of a single GbE connection. Most people are "happy" with over 90MB/s. I've got Intel NICs on both ends of that connection and a Cisco switch in the middle. Not all GbE is equal, and quality parts cost more money. You're wanting a Ferrari for Kia prices. Could Ferraris be cheaper? Sure. But they're never going to be Kia cheap. Who makes the NIC on most consumer boards? Realtek, because they're cheap. If you want Intel, you usually have to go with enthusiast or workstation boards. Have you looked for Realtek 10GbE NICs? Guess what, they don't make one. You've got to go to the big boys if you want 10GbE.
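For anyone wondering why ~113MB/s is about the ceiling, here's the frame-overhead arithmetic for a standard 1500-byte MTU (standard Ethernet/IPv4/TCP header sizes assumed; jumbo frames push it slightly higher):

```python
# Why ~113 MB/s is close to the practical limit of a single GbE link.
# Standard 1500-byte MTU; per-frame overhead on the wire:
#   preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12 = 38 bytes,
#   plus 20 bytes IPv4 + 20 bytes TCP inside the frame.

LINK_MBPS = 1000 / 8            # 125 MB/s of raw signalling
MTU = 1500
WIRE_BYTES = MTU + 38           # bytes actually occupying the wire per frame
TCP_PAYLOAD = MTU - 20 - 20     # application data per frame

efficiency = TCP_PAYLOAD / WIRE_BYTES
print(f"Efficiency: {efficiency:.1%}")                  # ~94.9%
print(f"Ceiling:    {LINK_MBPS * efficiency:.0f} MB/s") # ~119 MB/s
# A sustained 113 MB/s is ~95% of that ceiling, which cheap switches and NICs
# often can't hold.
```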
You can get speed for cheap if you really want it. I paid about $300 for my surplus Fibre Channel setup around 3-4 years ago, and I've got 16Gbps connectivity between my servers. My IT epeen got huge when I did that. But guess what? That's worn off, and I'm planning on going back to iSCSI over regular GbE. Why? Because honestly 4Gbps is sufficient speed, and as mentioned above, the FC switch is obnoxious. That switch alone is costing me $150/yr in electricity, and out of the 10 devices in my rack, it's by far the loudest. The ProCurve I mentioned earlier would cost me $450/yr. It's honestly not worth it.
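Those running costs fall straight out of the idle wattage. This sketch assumes roughly $0.14/kWh, so plug in your local rate:

```python
# Annual electricity cost from idle draw alone, assuming ~$0.14/kWh
# (substitute your local rate; it shifts these totals directly).

RATE_PER_KWH = 0.14
HOURS_PER_YEAR = 24 * 365

for device, watts in (("Cisco MDS FC switch", 120), ("HP ProCurve ZL chassis", 400)):
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{device}: {kwh:.0f} kWh/yr -> ${kwh * RATE_PER_KWH:.0f}/yr")
# ~$150/yr for the 120W FC switch and just under $500/yr for the 400W chassis at
# this rate; a slightly cheaper tariff lands on the $450 figure above.
```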
If you want to go balls to the wall fast on the cheap, you can get a surplus 40Gbps QDR InfiniBand setup for 3 systems for about $500 these days. But then you'll just end up realizing something else is too slow now.
Regarding USB 3.1 and eSATA, do you know what the maximum cable length is for those technologies?