PC Polaris has fewer features than PS4 Polaris, and at the same time AMD delayed Vega until after the PS4 Pro launch.
Show me the error in my reasoning. (Saying it's a custom APU is not valid, and neither is blaming it on DX11/12.)
1. Vega supposedly comes with HBM2, which may be supply constrained and/or may have been too expensive to manufacture in volume during most of 2016.
2. It's also possible AMD needs to hit specific yield and profitability targets when weighing die size against wafer cost. Almost no one will purchase a single-chip Vega 10 card for $899-$1200. AMD doesn't have a large userbase of PC gamers willing to throw whatever money it takes at having the latest hardware. Historically, ATI/AMD flagship cards priced above $549 don't sell well, and Fury X was no exception at $649.
The first 2 points are probably the critical ones, since AMD finished the Vega 10 design long ago.
3. Process node optimization for higher GPU clocks - this is probably another major factor delaying Vega 10's launch. If the RX 480 had launched with 1400-1450MHz clocks, it would have easily outperformed the GTX 1060. Vega 10 would be a lot more competitive for AMD if it launches at 1400-1450MHz rather than 1266MHz. Hopefully AMD has learned something from the Polaris 10/11 launch and can raise Vega's GPU clocks.
As for Polaris 10 not having some features that PS4 Pro's GPU does, do you realize that the RX 480's specs are far more powerful than the GPU in the PS4 Pro? Polaris 10 launched well before the PS4 Pro, and if AMD had incorporated those features into the RX 480, it could have enlarged the die and delayed the launch. Sony requires certain things that RX 480 users will never care about -- like 4K checkerboard upscaling. Since the CPU in current-gen consoles is so underpowered, the console GPUs have to do more heavy lifting, which isn't required of the RX 480. Your comment is strange because the GPU in the Xbox 360 was more advanced than the X1800/X1900 series of its time, too. Are you suggesting ATI screwed over PC gamers during the X1900 era as well? It's common for console GPUs to be more advanced than the consumer variants, since MS/Sony/Nintendo can and do pay for custom designs.
Considering NV launched the FE GTX 1060 at $299 less than 6 months ago, and it's already possible to buy an RX 480 4GB for $155-180, AMD delivered huge price/performance for PC gamers. I don't understand how AMD "screwed" PC gamers -- even more so when, in the latest titles such as Infinite Warfare and Titanfall 2, the RX 480 is beating the GTX 1060.
Unfortunately, this is the same architecture as Maxwell, with a few tweaks. And remember, Maxwell was originally supposed to be a 20nm architecture, but because that process failed it had to be back-ported to 28nm, with some features cut from the uarch.
Paxwell can do some concurrent async compute + graphics, though, while Maxwell/Kepler cannot.
Compare AIB 970 vs. AIB 780Ti vs. AIB 1070 in 3DMark Fire Strike Extreme vs. TimeSpy:
Fire Strike Extreme
MSI Gaming 780Ti is 5.5% faster than MSI Gaming 970
Asus Strix 1070 is 65.9% faster than MSI Gaming 970
TimeSpy
MSI Gaming 780Ti is 0.6% slower than MSI Gaming 970
Asus Strix 1070 is now 74.7% faster than MSI Gaming 970
The G1 Gaming 1080 is 27.7% faster than the EVGA 980Ti SC+ under Fire Strike Extreme, but that grows to 36.8% under TimeSpy. The AIB 1070 is almost 15% faster than the AIB 980Ti under TimeSpy.
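For clarity, the "X% faster" deltas above are just ratios of the cards' graphics scores. A minimal sketch of the arithmetic, using made-up placeholder scores (the actual results are in the hardware.fr link below, not these numbers):

```python
# Hypothetical 3DMark graphics scores -- placeholders only, chosen so the
# 780Ti comes out ~5.5% ahead of the 970, as in the Fire Strike comparison.
scores = {
    "MSI Gaming 970": 5000,
    "MSI Gaming 780Ti": 5275,
    "Asus Strix 1070": 8295,
}

def percent_faster(card_a: str, card_b: str, scores: dict) -> float:
    """How much faster card_a is than card_b, in percent."""
    return (scores[card_a] / scores[card_b] - 1.0) * 100.0

for card in ("MSI Gaming 780Ti", "Asus Strix 1070"):
    delta = percent_faster(card, "MSI Gaming 970", scores)
    print(f"{card}: {delta:+.1f}% vs. MSI Gaming 970")
```

A negative result (as with the 780Ti under TimeSpy) simply means the card scores below the baseline.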
http://www.hardware.fr/articles/952-13/benchmark-3dmark-fire-strike-time-spy.html
It's more reasonable to call Pascal Maxwell+, since it's theoretically better at handling DX12/async compute. The fact that NV introduced dynamic scheduling into Pascal suggests that if they were to scale up their CUDA core counts, they would also run into the shader under-utilization GCN faces. That's probably why NV is building more parallel concurrency into its future GPU architectures: they know DX12 is the future and DX11-style serial workloads will be outdated at some point. It stands to reason that the more shaders there are, the greater the chance of shader under-utilization.
It's logical that Vega's architecture will be much more advanced than Pascal's for DX12, since rumour has it AMD will shrink Vega 10 in 2018 and keep it in the line-up as some RX 500/600-series card. That means AMD designed Vega to compete not only with 2017 Pascal cards but also with some 2018 Volta cards. For AMD's plan to work, though, it would require even more DX12 games with async compute, even more AMD GE titles, and a bet that the shrunk Vega's higher GPU clocks will be enough to compensate for Volta's newer architecture and clocks (the risk being that Volta may be even better suited for DX12).
I think AMD would benefit even more from releasing faster RX 465/475/485 (or RX 560/570/580) cards than from a $550-650 Vega, and from getting back into competitive mobile dGPUs. All that effort on Vega targets maybe 2-3% of the entire dGPU market in AMD's case, considering most $500+ buyers simply buy NV anyway. They'd be better off figuring out how to make more power-efficient GCN SKUs for the mobile market. I personally feel the market for flagship $500+ cards is so unrealistically biased in favour of the competitor that Vega's true benefit will only come into play once it's shrunk in 2018 and priced at $299-399. The HD 7870, HD 7950, R9 290 and RX 480 are far more beneficial for AMD than a $600 flagship.