It's OC'd to hell and beyond: +150MHz on the core, +150MHz on the HBM2, probably pulling close to 500W! The CPU is OC'd a little bit as well. The 1080 Ti is not OC'd, though; it can easily reach 2050MHz at a much lower ~350W. So yeah, not a fair comparison at all. Despite all that, it still couldn't beat the 1080 Ti.
I'm guessing RX Vega will ship at 1650MHz for the watercooled version, and less than that if they plan on an air-cooled one.
Gamers Nexus hit 1700MHz core and 1100MHz memory at 400W with their custom water build for the FE. It's possible Vega could run at 500W with the proper pin arrangement and BIOS tweaks, as the cards appear to be thermally limited, not necessarily architecturally limited. I'm not sure why someone would want 500W, but it might be possible.
RX Vega could possibly clock much higher for less wattage if HBM2, and building around it, actually turned out to be the issue. While it should technically take less power and produce less heat, a design could be flawed or use improper voltages, resulting in spiking temperatures and power draw. Since RX Vega will use one stack of 8 rather than two, there is a scenario where that could benefit clocks, power draw, and performance.
A 10% increase in clocks results in a ~33% increase in graphics test scores?
I'm confused, unless those are full and cut GPUs.
Vega FE throttles hard. The lower score was likely from the typical ~1400MHz rather than its rated 1600MHz.
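That throttling gap goes a long way toward explaining the weird scaling. A rough sketch of the arithmetic (the 1760MHz figure is just a hypothetical "+10% OC" over the rated clock, not a measured value):

```python
# Hedged sketch: how throttling can inflate apparent clock scaling.
# The 1400MHz throttle figure comes from the discussion above; the
# 1760MHz "OC" value is illustrative, not measured.
rated_clock = 1600      # MHz, what the spec sheet claims
throttled_clock = 1400  # MHz, what Vega FE reportedly sustains
oc_clock = 1760         # MHz, a nominal 10% OC over the rated clock

sticker_gain = oc_clock / rated_clock - 1       # what the labels imply
actual_gain = oc_clock / throttled_clock - 1    # what the silicon sees

print(f"Apparent clock gain: {sticker_gain:.0%}")  # 10%
print(f"Actual clock gain:   {actual_gain:.0%}")   # 26%
```

So a "10%" overclock on paper can be a ~26% clock increase over what the throttled card actually sustains, which accounts for a big chunk of the score gap even before other factors.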
----------------------------------------------------------------
Vega's design issue has more to do with Pascal's HUGE jump, and less to do with AMD's likely initial performance target. AMD probably expected 1070-level or slightly greater performance to be a rough approximation of Nvidia's high end when they began the design (pre-release of either architecture).
Using standard OC vendor cards (which typically OC within a certain TDP envelope, not to max potential) with traditional boost or "one and done" auto-clocking disagrees with the common assumption that Maxwell was the biggest architectural leap and Pascal was poor, at least as far as gaming is concerned. A site whose numbers I find match real performance TIME and TIME again, whether it's SSD/HDD/RAM/CPU/GPU, is http://cpu.userbenchmark.com/. When I average reviews together for hardware, the results tend to match that site's figures closest. It also shows direct comparisons.
Let's use the 80 editions as an example (this way we avoid the various Titans and their differing/nonexistent placements depending on release):
Nvidia recent history:
- http://gpu.userbenchmark.com/Compare/Nvidia-GTX-580-vs-Nvidia-GTX-480/3150vs3157 (20%)
- http://gpu.userbenchmark.com/Compare/Nvidia-GTX-580-vs-Nvidia-GTX-680/3150vs3148 (40%)
- http://gpu.userbenchmark.com/Compare/Nvidia-GTX-780-vs-Nvidia-GTX-680/2164vs3148 (35%)
- http://gpu.userbenchmark.com/Compare/Nvidia-GTX-780-vs-Nvidia-GTX-980/2164vs2576 (30%)
- http://gpu.userbenchmark.com/Compare/Nvidia-GTX-1080-vs-Nvidia-GTX-980/3603vs2576 (65%!!)
I also searched through some recent comparisons and jumped to some old review pages just to make sure. In terms of average gaming performance, the jumps appear to MATCH UserBenchmark's suggestions.
I do not understand where this idea that Pascal was a poor jump or architectural change came from, but it's not true. It might not have made any real IPC improvements per clock or introduced major architectural additions; however, the typical, easy-to-gain boost clocks and the improvements in perf/watt were huge.
Someone custom water cooling and greatly increasing the power envelope of a 980 (or equivalent 900-series card) might be able to push the maximum headroom beyond the average effective speeds in the numbers above, but the vast, vast majority of owners will not do this (or be able to do this), and the resulting power draw would be enormous compared to Pascal.
This is the main reason AMD is so far behind (aside from the money that went into Zen instead). AMD likely expected something around a 35% effective jump during the initial design stages. Pascal, however, made a DOUBLE generational jump in terms of gaming performance (again, see above). If Pascal had been only a single ~35% generational jump, Vega would be performing to expectation. This also leads to the other often-repeated lie that the 60s become 80s and the 70s become Titans. No; generally speaking, GTX cards replace the card directly above them, plus or minus 10%. The 60s become 70s, the 70s become 80s, and the 80s become Titans. ONLY PASCAL truly broke this trend.
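The "double generational jump" claim can be sanity-checked with quick arithmetic: two typical ~30% jumps compound to roughly the 65% that UserBenchmark shows for the 980 to 1080 (the 30% per-generation figure is an approximation taken from the comparison list above):

```python
# Sanity check: do two "typical" generational jumps compound to
# roughly Pascal's single-generation gain? Percentages are rough
# values from the UserBenchmark comparisons linked above.
typical_jump = 0.30   # ~30-35% per generation historically
pascal_jump = 0.65    # 980 -> 1080 in one generation

two_generations = (1 + typical_jump) ** 2 - 1
print(f"Two compounded typical jumps: {two_generations:.0%}")  # 69%
print(f"Pascal's single jump:         {pascal_jump:.0%}")      # 65%
```

Two compounded historical jumps land at ~69%, right in line with Pascal's actual ~65%, so "double jump" is a fair description rather than hyperbole.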
Just for the sake of argument:
http://gpu.userbenchmark.com/Compare/Nvidia-GTX-1080-vs-AMD-Vega-Frontier-Edition/3603vs3929
Similar in performance, but generally speaking the FE performs slightly worse. There aren't many benchmarks for it, so it's hard to gauge its exact standing (also consider its tendency to throttle), but this was reflected in the reviews: the poorest performers (the throttlers) landed about 10% above the 1070, and the best performers were on equal footing with the 1080.
Why AMD chose to continuously delay the release for a year is odd, but now the question is whether they can play with the power envelope and the one stack of 8 HBM2 for the RX, along with driver support, to drive the card to its maximum (though power-hungry) performance, and hope they can manage 10-15% above the 1080 when all is said and done. I see no scenario where it matches a 1080 Ti (that would require a three-generation jump for AMD), so they might as well push the limits of the card as much as possible.