Raghu,
Many hypothesized that AMD's very high transistor/mm² density prevented its GPUs from reaching high clock speeds.
There are 5 key areas you need to consider.
The first is that not all of the building blocks of a GPU scale linearly with a node shrink, since not all types of transistors scale the same. The GTX 480 -> GTX 580 transition taught us that the GPU die isn't made up of one uniform type of transistor. Moving from Hawaii to Fiji brought a 45% increase in shaders and TMUs, but the number of ROPs stayed the same despite a 36% increase in die size.
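As a quick sanity check on those Hawaii -> Fiji percentages, a few lines of arithmetic (the unit counts and die sizes below are the commonly cited spec figures; treat them as approximate):

```python
# Unit-count scaling vs die-area scaling, Hawaii -> Fiji.
# Commonly cited specs; approximate, for illustration only.
hawaii = {"shaders": 2816, "tmus": 176, "rops": 64, "die_mm2": 438}
fiji   = {"shaders": 4096, "tmus": 256, "rops": 64, "die_mm2": 596}

for key in hawaii:
    growth = (fiji[key] - hawaii[key]) / hawaii[key] * 100
    print(f"{key:8s}: {growth:+.0f}%")
# shaders/TMUs land at ~+45%, ROPs at 0%, die area at ~+36%
```

The mismatch between those growth rates is the whole point: different blocks scale at different rates, so die size alone tells you little.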
The second is that transistor speed/frequency may negatively impact density (read the first paragraph under the SRAM graph in the ExtremeTech article below). Packing more transistors into tinier areas causes hot-spot formation, which raises GPU temperature and power usage while hampering maximum sustained clock speeds. It's entirely possible that the move from 1.05GHz clocks on Fiji to 1.5-1.55GHz clocks on Vega 10 would require a hit to AMD's transistor density and thus die size. Fiji also had what, barely 5-8% overclocking headroom? We don't yet know if Vega will suffer the same fate.
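To make the clocks-vs-power tension concrete, here's a first-order CMOS dynamic-power sketch (P ≈ C·V²·f). The 10% voltage bump is my own illustrative assumption, not a known Vega figure:

```python
# First-order CMOS dynamic-power model: P ~ C * V^2 * f.
# A ~43% clock bump usually needs a voltage bump too, so power
# grows much faster than frequency alone would suggest.
def relative_power(f_ratio, v_ratio):
    return f_ratio * v_ratio ** 2

f_ratio = 1.50 / 1.05   # ~43% higher clock (Fiji -> Vega 10 target)
v_ratio = 1.10          # assumed 10% voltage increase (illustrative)
print(f"power scales by ~{relative_power(f_ratio, v_ratio):.2f}x")
# -> roughly 1.7x the dynamic power for ~1.4x the clock
```

That superlinear relationship is why hitting 1.5GHz+ may force trade-offs in density, cooling, or both.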
The third is that a bigger die (i.e., a higher transistor count) does not always correlate linearly with more performance, because some transistors are spent on logic unrelated to gaming. Transistors are often used for power gating and other building-block functions of the GPU (such as the media engine, or 8-bit and 16-bit floating-point operations) that don't impact PC gaming performance.
https://www.extremetech.com/computi...unts-theyre-a-terrible-way-of-comparing-chips
If Vega comes in at 225-240W of power, that's a significant reduction from the Fury X's 280W:
https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1080_Amp_Extreme/27.html
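The arithmetic on that reduction, for the two ends of the rumored range:

```python
# How big a cut is 225-240W relative to the Fury X's 280W?
fury_x = 280
for vega in (225, 240):
    print(f"{vega}W is {(fury_x - vega) / fury_x:.0%} below {fury_x}W")
# -> roughly a 14-20% cut in board power
```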
I personally don't care if the GPUs use even 700-800W (I have owned R9 295X2 + R9 390 TriFire), but far more people are obsessed with power usage, which may have forced AMD to move away from 280-300W flagship chips. For me this is a huge step backwards for the desktop, since it means possibly 15-25% of extra performance is left on the table just to save $2-3 a month in electricity.
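For what it's worth, a back-of-envelope check on the electricity side (the 60W delta, 3 hours/day of gaming load, and $0.13/kWh rate are all my assumptions, not figures from anywhere):

```python
# Back-of-envelope: monthly cost of the power a 300W flagship
# would burn over a hypothetical 240W card. All inputs assumed.
watts_saved = 300 - 240   # assumed delta between flagships
hours_per_day = 3         # assumed gaming load per day
rate_per_kwh = 0.13       # assumed electricity rate, $/kWh

kwh_per_month = watts_saved / 1000 * hours_per_day * 30
cost = kwh_per_month * rate_per_kwh
print(f"~{kwh_per_month:.1f} kWh/month, ~${cost:.2f}/month")
```

Depending on usage and local rates, that lands anywhere from well under $1 to a few dollars a month, which only reinforces how trivial the savings are next to 15-25% of performance.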
The fourth is that AMD is designing an NGCN architecture on a 14nm/16nm node that is still relatively new to them. Since Polaris 10 is not an NCU design, the team that designed Vega is unlikely to have benefited from many lessons learned on Polaris. The architectural changes between Polaris 10 and Vega 10 are far greater than those between Maxwell and Pascal. We should not expect AMD's best NCU design out of the gate; rather, we should expect the 2nd or 3rd iteration of the design to maximize transistors/mm² efficiency (for example, when AMD shifts Vega to 7nm).
The fifth point is controversial and purely my own opinion. Looking at Steam's user base, and the previous successes (or lack thereof) of ATI's/AMD's 9800XT, X800XT/X850XT, X1800XT, X1950XTX, HD4890, HD5870, HD6970, HD7970/GHz, R9 290X, and Fury X, how much more evidence do we need that the competitor's flagship will always outsell ATI's/AMD's flagship, no matter what? If you were AMD, how much effort and how many resources would you put into Vega 10 for the sake of flagship sales? Not much. The reason AMD MUST design chips like Vega is to iterate on them as next-generation mid-range parts, or to use them as a foundation for future designs. They also need a fast card for workstation, neural-network, deep-learning, and professional use this generation; they can't afford to wait 2-3 years between such chips.
AMD cannot be naive enough to think that 70-80% of high-end users will buy their cards if they trail in performance by even 5%. AMD isn't blind to the fact that a lot of high-end buyers are already locked into the G-Sync ecosystem. AMD already knows they will have missed out on a whole year of high-end sales, with consumers buying 1080s and Titan XPs. AMD isn't naive enough to think a 1080 Ti won't launch by March-April. When all of that is considered, chasing the flagship crown at all costs (a 300W flagship) is probably not even worth it; the market simply isn't there. It's amazing AMD even keeps bothering, since their cards were far more popular during the HD4870/5870/6970 days. Almost no one bought the Fury X, a chip that beats a stock 980 Ti at 1440p/4K:
https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1080_Amp_Extreme/29.html
The Fury X is one of the most hated cards on the market, but recall the FX5800U/5950U, 6800U, 7900GTX, GTX680, and GTX780/780Ti. Those cards lost to ATI/AMD's flagships and still sold as well as, or outsold, them. Notice the disproportionate hate and sales penalty for AMD's flagships versus the competitor's?
The point of this last rant is context, and AMD lost sight of it during the last generation. They cannot price a flagship at the same level as the competitor's and expect it to sell well, since most high-end consumers buy the competitor's card by default, while even objective gamers found the 980 Ti's extra VRAM and 20-25% overclocking headroom superior. Those factors weren't in place during the HD7970/R9 290X generations, which is why those same objective customers bought those cards.
Vega 10 can be successful even if it's slower than the competition as long as it's priced correctly.
*** As noted above, Vega 10 is not 550mm2.