This seems to be one of the most bizarre ways to compare videocards. In modern games, the performance of a videocard is impacted by many factors, not just one spec in particular (a quick sketch of where the spec-sheet TFLOPs number comes from follows this list):
1) Shader throughput (ALUs/shaders)
2) Texture fill-rate (TMUs)
3) Memory bandwidth/compression technology
4) Pixel fill-rate and rasterization (ROPs)
5) Compute (DirectCompute, async compute, pre-emption, etc.)
6) Cache (L2 cache sizes keep growing)
7) Memory amount (VRAM bottlenecks are becoming more prominent as we enter the second half of the current-gen console cycle)
8) Geometry performance (modern GPUs handle tessellation better than older ones, but it remains a key component of GPU performance)
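To see how narrow the TFLOPs spec really is, here is a minimal sketch of where that number comes from. It is just factor (1) above, ALU count times clock speed, using the standard convention of 2 FLOPs per ALU per clock (one fused multiply-add); the shader counts and clocks below are the public specs of the R9 290X and Fury X:

```python
# A spec-sheet TFLOPs number is derived from factor (1) alone:
# peak FP32 TFLOPs = 2 FLOPs (one fused multiply-add) * ALUs * clock.

def theoretical_tflops(alus: int, clock_mhz: float) -> float:
    """Peak single-precision throughput. Ignores TMUs, bandwidth,
    ROPs, cache, VRAM amount, and geometry, i.e. factors (2) to (8)."""
    return 2 * alus * clock_mhz * 1e6 / 1e12

# Public paper specs: (shader count, clock in MHz)
print(theoretical_tflops(2816, 1000))  # R9 290X: 5.632
print(theoretical_tflops(4096, 1050))  # Fury X:  8.6016
```

Those outputs match the 5.63 and 8.6 TFLOPs figures quoted further down, which is exactly the point: the headline number encodes one of the eight factors and nothing else.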
I am not an electrical/GPU engineer, and I can only imagine how complex balancing all of these areas must be when designing a new chip.
Discarding all of the facets that make for a balanced GPU design and focusing entirely on ALUs (TFLOPs) is like judging how advanced a modern camera is strictly by its megapixel marketing label. While a higher-megapixel camera can often be superior (assuming a balanced design with great optics), that doesn't mean there is a direct correlation between camera quality and megapixel rating.
Then why do some insist on ranking and comparing videocards based on TFLOPs?
GTX 580 = 1.58 TFLOPs
GTX 680 = 3.25 TFLOPs
OR
R9 290X = 5.63 TFLOPs
Fury X = 8.6 TFLOPs
OR
GTX 780 Ti = 5.35 TFLOPs
GTX 980 Ti = 6.06 TFLOPs
Just because Perf/TFLOPs may scale nearly linearly in some cases doesn't mean that TFLOPs by itself is an accurate measure of how fast a videocard is without looking at all the other factors. It has become clear that even memory bandwidth, ROPs, and shaders/CUDA cores can no longer be directly compared across architectures with any reasonable degree of accuracy. There are no shortcuts left for gauging how fast a new videocard will be from paper specs.
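To make the mismatch concrete, here is a quick ratio check on the three pairs above (a sketch using only the paper TFLOPs figures already listed):

```python
# If performance scaled 1:1 with TFLOPs, the paper ratio would
# predict the real-world gain. It doesn't, and the three pairs
# below miss in different directions.

pairs = [
    ("GTX 580",    1.58, "GTX 680",    3.25),
    ("R9 290X",    5.63, "Fury X",     8.60),
    ("GTX 780 Ti", 5.35, "GTX 980 Ti", 6.06),
]

for old_name, old_tf, new_name, new_tf in pairs:
    gain = (new_tf / old_tf - 1) * 100
    print(f"{new_name} vs {old_name}: +{gain:.0f}% TFLOPs on paper")
```

On paper that predicts a 106% jump from GTX 580 to GTX 680 but only a 13% jump from 780 Ti to 980 Ti; anyone who followed the benchmarks of those launches knows the real-world deltas looked nothing like those ratios.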
We saw as far back as the HD 7970 vs. HD 6970 comparison that relying on paper specs before real-world results can lead to erroneous conclusions:
"Even with the
same number of ROPs and a similar theoretical performance limit (29.6 vs 28.16), 7970 is pushing 51% more pixels than 6970 is."
In the real world, despite having the same number of ROPs as the 6970, the 7970 crushed it.
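Those 29.6 vs 28.16 figures fall straight out of the textbook theoretical fill-rate formula, ROPs times core clock. A quick sketch using the public specs of both cards (32 ROPs each, 925 MHz vs 880 MHz):

```python
# Theoretical pixel fill-rate = ROPs * core clock.
# Both cards have 32 ROPs, so on paper only the clock differs.

def fill_rate_gpix(rops: int, clock_mhz: float) -> float:
    return rops * clock_mhz / 1000  # GPixels/s

print(fill_rate_gpix(32, 925))  # HD 7970: 29.6
print(fill_rate_gpix(32, 880))  # HD 6970: 28.16
```

Nearly identical on paper, yet the 7970 pushed 51% more pixels in practice. The formula simply cannot see the architectural improvements that made the difference.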
We can just as easily use another example to prove the opposite. Synthetics show that the Fury X and GTX 980 absolutely demolish the R9 290X in pixel fill-rate. Could we approximate real-world gaming performance from the pixel-pushing power of the GTX 980 and Fury X relative to the R9 290X? No: in real-world games the R9 290X trades blows with the GTX 980, and the Fury X is nowhere close to 2.38X faster.
More than ever, we absolutely require benchmarks in real-world games. Paper specs alone are proving very unreliable given how modern game engines and graphics cards have evolved.
I am actually surprised so many people insist on comparing graphics cards using TFLOPs. Even the recently launched GTX 1070/1080 cards show that no tight correlation needs to exist: the GTX 1080 has a 37.3% higher TFLOPs paper spec, but in the real world it is only 21-22% faster.
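For the record, that 37.3% figure falls out of the same TFLOPs arithmetic; a quick sketch using NVIDIA's public CUDA core counts and boost clocks:

```python
# Paper TFLOPs = 2 * CUDA cores * boost clock.

gtx1080 = 2 * 2560 * 1733e6 / 1e12  # ~8.87 TFLOPs
gtx1070 = 2 * 1920 * 1683e6 / 1e12  # ~6.46 TFLOPs

print(f"Paper gap: +{(gtx1080 / gtx1070 - 1) * 100:.1f}%")  # +37.3%
```

Against a real-world gaming gap of roughly 21-22%, the paper spec overstates the difference by about 15 percentage points, and that is between two cards of the very same architecture and generation.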
In conclusion, Performance/TFLOPs seems like another questionable (made-up?) metric to add to the list, because it simply isn't reliable across architectures or generations.