Beyond the glitz and glamor of the often over-hyped, over-marketed proprietary approaches to speeding up gaming performance from both AMD and Nvidia lies raw performance. On raw performance, Vega is a quite compelling compute platform that matures by the day as AMD gets features working and drivers refined.
https://www.techpowerup.com/gpudb/b4893/powercolor-rx-vega-56
FP32: 11 TFLOPS
FP16 (2:1): 21 TFLOPS
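Those figures fall straight out of the shader count and clock. A minimal sketch of the arithmetic, assuming Vega 56's 3584 stream processors and a ~1.47 GHz boost clock (the exact TFLOPS number depends on which clock you plug in):

```python
# Theoretical peak throughput: each ALU retires one FMA (2 FLOPs) per cycle.
# 3584 stream processors and ~1.47 GHz boost are assumptions for Vega 56.
def peak_tflops(alus, clock_ghz, fma_flops=2, pack=1):
    """pack=2 models Vega's 2:1 rapid-packed-math FP16 rate."""
    return alus * clock_ghz * fma_flops * pack / 1000.0

fp32 = peak_tflops(3584, 1.471)          # ~10.5 TFLOPS FP32
fp16 = peak_tflops(3584, 1.471, pack=2)  # ~21.1 TFLOPS FP16
```

Landing near the quoted numbers; a factory-OC'd card with a higher boost clock gets you to the full 11 TFLOPS.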
Find a comparable Nvidia card w/ this performance. AMD goes after raw performance while shaping open-source paradigms around it; Nvidia goes for non-standard meme cores and shapes proprietary paradigms around them.
http://blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_bench_on_tf13/
http://blog.gpueater.com/en/2018/03/07/00003_tech_flops_benchmark_1/
It's the Intel/AMD paradigm.
I'm quite confident that an opening was created way back when Vega was introduced. The drivers, the open-source ecosystem, and Vulkan are evolving around it.
At these prices, Nvidia has brought negative attention to itself, and people will now begin picking apart its meme architecture looking for flaws. It was exactly the wrong thing to do for a new and unproven architecture. In it, they will find many failed preparatory standards and a large footprint of hardware locked behind Nvidia's new cloud-services model. By comparison with AMD, people will discover similar raw compute capability w/o the proprietary nonsense. AMD's problem is the drivers and tooling, which do seem to be improving with time; that was of course always going to take longer than a completely proprietary, in-house approach with a much more focused micro-architecture.
Take ray tracing, for instance. I hope people don't actually believe the new proprietary ray tracing section of Nvidia's GPU pipeline is doing all of the ray tracing compute completely in parallel. Obviously it isn't, which is why you take a performance hit when you enable it. Why? Because the ray tracing cores only handle a portion of the ray tracing work, while a large amount is still emulated in the traditional pipeline.
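For a sense of what's being split: BVH traversal boils down to enormous numbers of ray/box and ray/triangle tests. A toy CPU sketch of the ray/AABB slab test (illustrative only, not Nvidia's implementation) — this is the inner-loop work an RT core accelerates, while shading the hit points still runs on the ordinary shader cores:

```python
# Toy ray/AABB slab test -- the kind of inner-loop work a hardware RT
# unit accelerates during BVH traversal. Shading/denoising the results
# is separate work that stays on the regular shader pipeline.
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax  # interval non-empty => ray crosses the box

# Ray from (0,0,-5) along +z vs a unit box around the origin.
# inv_dir holds 1/direction; 1e30 stands in for 1/0 on the x and y axes.
hit = ray_hits_aabb((0, 0, -5), (1e30, 1e30, 1.0), (-1, -1, -1), (1, 1, 1))
```

One such test per BVH node, per ray, per bounce, millions of rays per frame: that is the volume of work in question, and the RT cores cover only this traversal/intersection slice of it.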
Then you have DLSS, which is literally a meme feature on top of meme tensor cores. The new architecture is like an eternal beta-test platform. None of the new features are standalone pipeline compute regions; they're teaser compute units: just enough to call it a new feature while not achieving much. The real question is what they did in the SMs to speed up performance. That's all that really matters, and in that there has always been an opening.
7nm is where the real comparison between Nvidia and AMD starts. It's the point at which both will have had enough time to start locking in the makings of a new and bold platform outside of meme beta feature sets.
Ray tracing belongs on a completely separate die imo, w/ a high-level, low-latency copy-coherent BVH tying it to the main GPU pipeline. If you have any awareness of what it takes, computationally and memory-wise, to do more proper ray tracing, you understand that any reference to doing it on a monolithic die is a meme.
As such, everyone is on the same footing: gimmick emulated speed-up features. Tensor cores are memes for AI, and ray tracing cores are memes for ray tracing. Everyone is going to have to go MCM to be competitive, and I don't see this in GPUs yet. On top of that, Intel is coming into the game hard, and many others are making dedicated accelerators, just as I indicated. The future is anyone's; we're in a big hardware boom/wave. I'm not betting on any particular horse, nor am I becoming indebted to someone's dev-board-level premature architecture.