Yeah, Intel says 10nm++ will have a rearchitected metal stack, so maybe (I am just guessing here) they are backing off from aggressive cobalt use, along with a slight density relaxation back to MMP = 40nm.
So Intel finally had to concede that 10nm HVM is now only in 2019. What's scary is they did not want to commit to H1 2019. This makes it look like a nightmare scenario: if Intel cannot ship 100-150 sq mm client chips by mid-2019, the outlook for a 400 sq mm ICL-SP in late 2019 is really bad. BK finally accepted that Intel went too aggressive on density.
https://www.cnbc.com/2018/04/26/intel-execs-on-10nm-chip-delays-bit-off-a-little-too-much.html
I never understood what Intel gained by going from 40nm MMP to 36nm MMP. A 10% scaling gain was not really worth the added risk, given that it forced SAQP on the lowest two metal layers. I also think the decision to go with cobalt was linked to the same decision to go with MMP = 36nm.
https://www.semiwiki.com/forum/content/7191-iedm-2017-intel-versus-globalfoundries-leading-edge.html
"
The specific linewidth where cobalt becomes a lower resistance interconnect solution depends on several factors but is right around the linewidths being utilized here. My belief is that Intel used cobalt because they have a 36nm MMP and it made sense for them to do so. GF published a paper on 7nm process development with IBM and Samsung at IEDM in 2016 and that process had 36nm MMP and used cobalt for one level of interconnect. My belief is that with a 40nm MMP in the GF 7nm process cobalt wasn't needed and it is more expensive than copper, so GF didn't use it. Cobalt also offers higher electromigration resistance than copper and GF did use cobalt liners and caps around their copper lines to meet their electromigration goals.
The bottom line is Intel used cobalt because it makes sense for their process and GF didn't because it didn't make sense for their process. As we move to foundry 5nm and below processes I do expect to see more cobalt use and eventually ruthenium."
Interestingly, TSMC and GF both went with a DUV-only first-gen 7nm process, 40nm MMP, and SADP for the metal layers. Samsung went with 7nm EUV and a 36nm MMP, which avoided SAQP on the lowest metal layers; Samsung is brute-forcing its way to EUV on the strength of its DRAM/NAND cash machine. Intel is caught in no man's land, having chosen what now seems, in hindsight, to be the route with the most risk, and it backfired badly. This experience should be a hard lesson not only for Intel but for all foundries: when developing a leading-edge process node, it is very important to manage risk by making pragmatic technology choices.
....OK, I looked at all this again and want to update my guess, FWIW. I'm thinking now AMD plays it safe on ZEN 2 and goes 12 cores, doubles the L3 cache, and adds the AVX-512 extensions. I didn't realize ZEN 3 is only a year behind, so now I think they go 16 cores on ZEN 3; I doubt they had firm enough conviction in the 7nm process (to risk 16 cores) when they committed to the 12-core design. Again, working from total power P(total) ≈ ½·f·C·V² + V·I(leak):
I'm assuming no voltage change at 7nm. I'm also assuming they want to increase f, since the new design is probably targeted at higher nominal frequency. Now, their power-vs-frequency curves for 7nm would indicate they could double the core count at the same V and f and keep the power about the same. But if they want to increase f by at least 15% and add AVX, I think they can only go to 12 cores safely while maintaining a power level around 200W in a 4-chip MCM, at perhaps 2.6/3.1/3.7 GHz base/all-core/max for the 48-core flagship MCM. I don't know how much area the AVX-512 extensions take (anyone?), but without them, and if they only go 12 cores, I would expect a smaller die of maybe 180 mm² each, so the AVX extensions will add to that. I also have no feel for the added power when running AVX (anyone?).
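To make that guess concrete, here is a minimal sketch of the P ≈ ½·f·C·V² bookkeeping. Every number in it is my assumption, not AMD's: V held constant, total switching C proportional to core count, and a ~40% per-core capacitance cut at 7nm, picked only to be consistent with the "double the cores at the same power" reading of AMD's curves.

```python
# Dynamic-power bookkeeping for the ZEN 2 guess above.
# Assumptions (mine, not AMD figures): V unchanged at 7nm, total
# switching C scales with core count, and 7nm cuts per-core C ~40%,
# a guess consistent with "2x cores at the same f, V, and power".

def dynamic_power_ratio(core_ratio, freq_ratio, volt_ratio=1.0,
                        c_per_core_ratio=1.0):
    """Ratio form of P ~ 1/2 * f * C * V^2 (leakage term ignored)."""
    return core_ratio * c_per_core_ratio * freq_ratio * volt_ratio ** 2

# 8 -> 12 cores per die, +15% frequency, ~40% less C per core at 7nm:
r = dynamic_power_ratio(core_ratio=12 / 8, freq_ratio=1.15,
                        c_per_core_ratio=0.6)
print(f"power vs. an 8C Zen+ die: {r:.2f}x")
```

The ratio comes out roughly flat (~1.0x), which is what a ~200W 4-die MCM at those clocks would require.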
Anyway, that's my best guess and my reasons. Comments and your predictions, anyone? And in regards to Intel:
I think these are the relevant comparisons to make my point:
Chip           Cores/Threads  Base     Boost    L3     TDP    Process
Core i7 8700K  6/12           3.7GHz   4.7GHz   12MB   95W    14nm++
Core i7 7700K  4/8            4.2GHz   4.5GHz   8MB    91W    14nm+
R7 2700X       8/16           3.7GHz   4.3GHz   16MB   105W   12nm
R5 1600X       6/12           3.6GHz   4.1GHz   12MB   95W    14nm
Let's look at what they did in going from 14nm+ to 14nm++. Intel claims another 10% of drive current at 14nm++, from a purported 4nm increase in fin height (42nm to 46nm) and maybe a small further optimization of short-channel effects or strain. They also increased the contacted poly pitch from 70nm to 84nm. Why the increase? For one thing (among others), it decreases the PC-to-contact capacitance in all the logic blocks, which is apparently significant. So, working from active power ~ ½·f·C·V²: they added a lot of C by adding two more cores (maybe 30% more on a total-chip basis), but reduced C a bit in all the logic blocks by relaxing the contacted poly pitch. They dropped the all-core frequency SIGNIFICANTLY, from 4.2GHz to 3.7GHz (a 12% decrease), even though the device improvement alone could have raised all-core frequency perhaps 6% had they not added two more cores (it went up 4.5% on single core). Finally, they increased power to 95W (5%). Don't get hung up on the exact numbers, but everything they did goes in the right direction to support my contention that power/core is the REAL ISSUE. It is not yield alone, because 14nm yield is very good.
At those yields, Intel could easily have gone with a larger chip, 8 cores instead of 6, IF they could have maintained 3.7GHz all-core at about 100W. THEY COULD NOT. AMD does 8 cores at 3.7GHz all-core while Intel can only do 6 at 3.7GHz all-core, and we know that if Intel went to 8 cores, the all-core frequency would have to come down significantly AGAIN, as it did going from 4 cores to 6. Also note that the power/core issue bites Intel hard in the high-core-count / high-frequency / high-performance regime, but is masked in low-power and mobile, where I bet they drop the supply voltage significantly (remember the V² dependence of power). This 14nm++ transistor IS the 10nm device; there will be NO additional device improvement going to 10nm, so any power/core reduction must come from scaling alone. But 10/14 linear scaling is only 71%, and a ~30% cut in C does not let them double the core count, because C must be cut in half at the same f and voltage to hold roughly the same power. They do scale the LOGIC quite a bit more by adding two process features at 10nm, the single dummy gate and contact over active gate, which could improve logic density by maybe 15% beyond the plain 10/14 linear scaling (roughly equivalent to a further 8% linear reduction in the logic). Even so, if EVERYTHING worked at 10nm, Intel can't get close to doubling the number of cores, while based on the power-vs-frequency curves AMD showed, AMD at least could get close. But I now think AMD plays it safe and only goes to 12 cores per chip, with a higher-frequency base design (ZEN 2), increasing f perhaps 15%. Those power-vs-frequency curves must have had Intel dropping turds everywhere. Does some/all/any of this make sense to you guys? Comments please. You guys hiding??
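The two pieces of arithmetic above can be sanity-checked with the same P ~ ½·f·C·V² ratio argument. All the inputs here are the estimates from the post (the ~30% C increase, the 10/14 linear C scaling, the ~8% extra effective shrink) plus my own simplifying assumption that V is constant in both comparisons; none of these are measured Intel figures.

```python
# Sanity check of the 14nm++ and 10nm arithmetic above.
# Assumption (mine): V constant, so P ~ 1/2*f*C*V^2 reduces to
# P-ratio = C-ratio * f-ratio.  Input ratios are the post's estimates.

def power_ratio(c_ratio, f_ratio, v_ratio=1.0):
    """Dynamic-power ratio from P ~ 1/2 * f * C * V^2."""
    return c_ratio * f_ratio * v_ratio ** 2

# --- 7700K (14nm+) -> 8700K (14nm++), direction-of-change check ---
# ~30% more switching C from two extra cores; all-core f 4.2 -> 3.7 GHz.
pred = power_ratio(c_ratio=1.30, f_ratio=3.7 / 4.2)
print(f"predicted power ratio: {pred:.2f}x")      # ~1.15x
print(f"actual TDP ratio:      {95 / 91:.2f}x")   # ~1.04x
# Same direction; the remaining gap could be a small V drop or the C
# saved by relaxing the contacted poly pitch.

# --- 14nm -> 10nm: can Intel double the core count? ---
# Assume per-core C scales with linear dimension (10/14), plus ~8%
# extra effective linear shrink from the single dummy gate and
# contact over active gate (~15% logic density).
c_per_core = (10 / 14) * 0.92
doubled = power_ratio(c_ratio=2 * c_per_core, f_ratio=1.0)
print(f"total C (and power) at 2x cores, same f and V: {doubled:.2f}x")
# ~1.31x the power budget -- doubling cores at the same f and V does
# not fit, which is the point argued above.
```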