1) Maybe, maybe not. Variance works off of a lot of different factors.
A small die is also significantly easier to bin (high leakage/low leakage) because there is simply less transistor variance across it. The funny thing is Intel has said that in the future they are going to start doing the same thing, just with small interposers. But keep fighting that fight right up until Intel does it, then just pretend you always thought it was awesome.
2) Intel historically has had absolutely insane yields on their server parts. The large server dies have an absolutely insane level of redundancy and harvestability. I think you severely underestimate the yields on Intel server parts.
Then why do they never sell 24-core Broadwell-EPs? Why is Skylake-SP only up to 28 cores? Go think about that for a minute. AMD can mix and match chips, and as you can see, so far AMD has had no problem delivering truckloads of 8-core chips.
The MCM really doesn't buy you anything over monolithic except not having to tape out a separate die. And that's the reason AMD is doing it: because they are resource constrained, not because it is better. If they weren't resource constrained, they'd be doing a larger monolithic die.
Funny how you pick and choose to ignore the stuff that invalidates your point, like the fact that neither Intel nor AMD can build what is in a 32-core Naples processor on a single monolithic die today.
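For anyone who wants to see why four small dies beat one big one, here is a toy Poisson yield model. The defect density and die areas are assumptions I picked for illustration, not foundry data:

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Probability that a die has zero random defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.2                      # defects/cm^2 -- assumed, not foundry data
zeppelin = 2.13              # ~213 mm^2 per Zeppelin die, approximate
monolithic = 4 * zeppelin    # what a single-die 32-core would roughly need

print(f"per-Zeppelin yield: {poisson_yield(zeppelin, D):.0%}")    # ~65%
print(f"monolithic yield:   {poisson_yield(monolithic, D):.0%}")  # ~18%
# and a defective small die costs you one die, not a whole ~850 mm^2 chip
```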
3) And no one would want to. 10GbE? It is all but dead in the server market. New deployments are being done with 40GbE/25GbE/50GbE/100GbE.
Care for a signature bet?
And what is your source, dare I ask?
P.S. By the way, seriously, wouldn't 10GbE be pretty outdated at this point?
Now I know you guys have no idea what you're talking about.
10GbE = 1x 10.3125G SerDes
40GbE = 4x 10.3125G SerDes
25GbE = 1x 25.78125G SerDes
50GbE = 2x 25.78125G SerDes
100GbE = 4x 25.78125G SerDes
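Those line rates aren't magic numbers, they're just the data rate times the 64b/66b line coding overhead. A quick sanity check in plain Python:

```python
# SerDes line rate = Ethernet data rate x 66/64 (64b/66b line coding)
OVERHEAD = 66 / 64

for data_gbps, lanes in [(10, 1), (40, 4), (25, 1), (50, 2), (100, 4)]:
    lane_gbps = data_gbps / lanes * OVERHEAD
    print(f"{data_gbps}GbE = {lanes}x {lane_gbps:.5f}G SerDes")
```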
Now, as of right this second, the two most common ToR ports found in switches are either:
1. 10/25GbE switchable SFP+
2. 10/25/40GbE switchable QSFP
So can you guess what 16x 10GbE interfaces can be physically pinned out as? That's right: 4x 40GbE. Even if for some reason there is a limitation that doesn't allow Zeppelin to create a 40GbE interface at layer 1, it still isn't a problem, because QSFP ports can break out into 4x 10GbE and then you can just use LACP. If you really need to, you can LACP 8x 10GbE into ToR 1 and the other 8x into ToR 2. Now you're done.
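To make that concrete, here is a toy sketch of the two arrangements. The lane names and grouping are mine, not anything out of AMD's documentation:

```python
# 16 SerDes lanes at 10.3125G on a hypothetical Zeppelin-style part
lanes = [f"lane{i}" for i in range(16)]

# Option A: bond groups of 4 lanes at layer 1 -> four 40GbE QSFP ports
qsfp = [lanes[i:i + 4] for i in range(0, 16, 4)]
print("4x 40GbE:", qsfp)

# Option B: keep 16x 10GbE and aggregate at layer 2 with LACP,
# split across two ToR switches for redundancy
print("ToR 1 LAG:", lanes[:8])   # 8x 10GbE bundle
print("ToR 2 LAG:", lanes[8:])   # 8x 10GbE bundle
```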
My second point is that no one is using 100GbE as access ports to servers; 100GbE is uplink only. Even the biggest switch ASICs I know of, ASE-2/Tomahawk, are "only" 3.2/6.4 Tbps of throughput, so there isn't enough bandwidth for it, and then there is the fact that you still have to get that traffic east/west/north.
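The arithmetic behind that is trivial. Assuming a typical 48-server rack (my number, purely illustrative):

```python
asic_tbps = 3.2          # Tomahawk-class ToR ASIC (bigger parts: 6.4)
servers = 48             # a typical rack -- my assumption
access_gbps = 100

demand_tbps = servers * access_gbps / 1000
print(f"access demand {demand_tbps} Tbps vs ASIC {asic_tbps} Tbps")
# 4.8 Tbps of access alone already oversubscribes the ASIC, before a
# single port of uplink is allocated to carry traffic east/west/north
```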
Next time you want to try to make a BS point like that, don't do it to a guy who designs high-end networks for a living.
Edit: also, the most common server connectivity sold to enterprise is 10GbE, by a very long way.
USB 3.1 in the server space??? 128 lanes of PCIe is simply unneeded in this space, or pretty much any other, per socket (and before you scream about NVMe, no one wants to route that many lanes from the socket to a backplane; real NVMe deployments are done with PCIe switch chips).
So they're on PCIe switch chips only because they have no other option. What you just said amounts to "everyone loves being bottlenecked."
So the funny thing is, I have been talking to Nutanix engineers, and they are really excited about what a 24/32-core Naples with 128 lanes will mean for them. I'd better go inform them that they are wrong, because apparently they don't understand hyper-converged.
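If you want to see why the hyper-converged crowd cares, run the lane budget. Drive counts and lane widths below are my illustrative assumptions, not a Nutanix BOM:

```python
nvme_drives = 24         # drives per node -- illustrative assumption
lanes_per_drive = 4      # x4 per NVMe device
nic_lanes = 2 * 8        # two x8 NICs

needed = nvme_drives * lanes_per_drive + nic_lanes
print(f"lanes needed: {needed}")   # 112
print("Naples socket: 128, typical Xeon socket: 40-48")
# with 40-48 lanes every drive hangs off an oversubscribed PCIe switch;
# with 128 you can direct-attach all 24 drives and still have lanes left
```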
And I'm pretty sure that Xeons will support more memory per socket.
Based on what? You're ignoring the fact that even if it can, it will be at significantly lower throughput (two fewer channels, lower max frequency, and if they go 18 DIMMs per processor it will run even slower).
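The throughput gap is easy to put numbers on. DIMM speeds here are illustrative; actual speeds depend on SKU and population:

```python
BYTES_PER_TRANSFER = 8   # 64-bit DDR4 channel

def bw_gbs(channels, mt_s):
    return channels * mt_s * BYTES_PER_TRANSFER / 1000

print(f"8ch @ 2400 MT/s: {bw_gbs(8, 2400):.1f} GB/s")  # Naples-style socket
print(f"6ch @ 2400 MT/s: {bw_gbs(6, 2400):.1f} GB/s")  # Skylake-SP-style socket
# and populating 3 DIMMs per channel for capacity typically forces the
# memory clock down further, widening the gap
```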