In theory at least, with EMIB in the mix Intel could keep the CPU tiles pretty tiny. That would help mitigate the yield problems. And then do the uncore and everything else on some other node.
Sapphire Rapids is 7nm.
Are they just going to leapfrog 10nm and go straight to 7nm?
Who says they have yield issues?
They just launched their biggest improvement to their line-up in a decade on the same process, why would they need 10nm yet if they can milk 14nm for another year?
Intel needs better uarch designers and management far more than it needs process tech, period. I don't know how Intel execs keep a straight face when their 2017 GT2 iGPU is barely faster than their 2013 one, while Apple's got 6x faster over the same period. If this isn't a clear sign of utter mismanagement, I don't know what is.
At this point I would be concerned about latency issues with cores connected via EMIB.
It is unlikely that the CPU cores in a client product would be connected via EMIB (though in server this is a good way to add lots of cores). Instead, the CPU cores would be in their own complex complete with cache.
I really am curious to see the first EMIB processors come out because it's just not clear how everything will be structured/pieced together.
The interesting question, though, would be how the memory controller is connected. Since the memory controller would be used by all of the major blocks on the SoC, I wonder if it would be part of a discrete I/O block (basically going back to the old way of having the memory controller integrated into the chipset, but this time the chipset is close & connected via EMIB), or if it would just be in the CPU complex.
If EMIB isn't great for desktop users, what does that mean for Intel's future HEDT line up?
A client product would likely have only one CPU tile, although given how bad yields are, I wouldn't rule out multiple tiles as a possibility.
The Sea-Of-Cores patent shows one possibility, with the CPU/GPU/FPGA(?) cores in a top die and everything else in a bottom die. Presumably they would have multiple tiles in both dies.
Future HEDT will probably be SKUs with more CPU tiles stitched on and the GPU/other stuff left off.
Is Sapphire Rapids going to be 14nm+++? I've been told 10nm++, but I don't even see 10nm+ or 10nm yet. What about Icelake? Tigerlake?
And this all depends on the actual max per pin data rates that can be sustained over the inter-complex traces in the EMIB substrate.
12:01PM EDT - AIB a small sliver for communication and data streaming at 1 Tbps
12:09PM EDT - 20K EMIB connections up to 2 Gbps each
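Back-of-the-envelope on the live-blog figures above (the link counts and per-link rates are from the presentation; the aggregate totals are just my arithmetic, and they're raw signaling rates, not usable bandwidth after encoding/protocol overhead):

```python
# Aggregate EMIB bandwidth from the numbers quoted in the live blog:
# 20K connections at up to 2 Gbps each.
emib_links = 20_000
gbps_per_link = 2  # peak per-link rate quoted; real sustained rate may be lower

total_gbps = emib_links * gbps_per_link  # 40,000 Gbps
total_tbps = total_gbps / 1_000          # 40 Tbps aggregate, raw
total_gbytes_per_s = total_gbps / 8      # ~5,000 GB/s, before any overhead

print(total_tbps, "Tbps aggregate,", total_gbytes_per_s, "GB/s raw")
```

For comparison, that 40 Tbps aggregate dwarfs the 1 Tbps quoted for the AIB sliver, which is why the per-pin sustained rate (the concern a few posts up) matters so much: the headline number assumes every connection runs at its 2 Gbps peak.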
After Cascade Lake, it's Icelake. Then it's Sapphire Rapids on 10nm. There's a pretty good chance Icelake server would end up being the first 10nm++ part.
It's really high.
https://www.anandtech.com/show/1174...-stratix-10-fpga-live-blog-845am-pt-345pm-utc
Can't wait to see CPUs implemented this way.
It's obviously not the cheapest way of doing it. It may be the cheapest way of doing very high-bandwidth interconnects. That may not be true on the client side for a while, however.
I can't see where EMIB would make sense on the client side besides adding a mid-range discrete-class iGPU. Everywhere else they are better off with monolithic or MCM packaging. They need further monolithic integration with the PCH so they can bring Core-based platforms down to the power levels Atom and ARM chips are at.
The Lenovo leak mentioned "Ice Lake Refresh", which I take to mean Tigerlake. And Tigerlake (well, the CPU cores anyway) is really just Icelake on 10nm++.
No reason to think SR isn't 7nm at this point. But it would likely be only the CPU cores themselves on 7nm.
Ice Lake Refresh refers to a refresh of the platform to include an Ice Lake-based processor family.
Source?

"And Tigerlake (well the CPU cores anyway) is really just Icelake on 10nm++."