"Over-tessellation" -- that's hilarious. I guess NVIDIA has too much image quality. Too much performance.
No, it's not hilarious, it's a stupid waste of GPU horsepower. I had GTX 470 Tri-SLI back then instead of 5850 Tri-Fire, and even then I thought over-tessellation was wasteful.
Jersey barrier in DirectX 9
Enhanced, bionic Jersey barrier in DirectX 11
http://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing/3
http://www.extremetech.com/extreme/...rps-power-from-developers-end-users-and-amd/2
Tessellation has failed to live up to the hype, since the entire premise behind it was adaptive tessellation: making objects far more detailed up close without unnecessarily wasting GPU resources. The way tessellation is used in GameWorks titles is the opposite of that.
What was promised about tessellation in 2010, when Fermi dominated the HD 5800 series, never came to fruition in modern games.
Seamless Level of Detail
"In games with large, open environments you have probably noticed distant objects often pop in and out of existence. This is due to the game engine switching between different levels of detail, or LOD, to keep the geometric workload in check. Up until this point, there has been no easy way to vary the level of detail continuously since it would require keeping many versions of the same model or environment. Dynamic tessellation solves this problem by varying the level of detail on the fly. For example, when a distant building first comes into view, it may be rendered with only ten triangles. As you move closer, its prominent features emerge and extra triangles are used to outline details such as its window and roof. When you finally reach the door, a thousand triangles are devoted to rendering the antique brass handle alone, where each groove is carved out meticulously with displacement mapping. With dynamic tessellation, object popping is eliminated, and game environments can scale to near limitless geometric detail."
http://www.nvidia.ca/object/tessellation.html
Also, unlike tessellation, Async Compute is built from the ground up to be a performance-enhancing feature on async-capable GPU architectures. In other words, DX12 + Async Compute is a win-win: architectures with poor Async Compute support suffer minimal to no performance loss vs. DX11, while more advanced architectures gain performance. Tessellation, by contrast, always costs performance. Not only that, but under DX12, CPU bottlenecks are lifted, allowing lower- and mid-range CPUs to perform even better with DX12 GPUs. The easiest way to think about it: tessellation increases geometric complexity, while Async Compute runs more tasks in parallel -- aka multi-threading.
The fact that you compare them as like-for-like either shows you have no clue how Async Compute works or you are just trolling and defending your brand as usual.