Polaris and Pascal tested in 16 2016 Titles [HardwareUnboxed]


Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Didn't hear about the Founder's Edition then I take it?

I heard about the early 1080 and 1070 owners getting hosed with Founders Edition cards, but not the 1060.

I just checked all the popular Canadian sites and none of them have any Founders Edition 1060s: Newegg, NCIX, Amazon, Best Buy. Perhaps I'm not looking in the right spot, or maybe they no longer sell them?

If they can't be bought, they probably shouldn't be used for power consumption comparisons.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I think it does have ramifications that matter to consumers. The fact that NVidia has such a large lead in performance per watt/mm2, also practically guarantees that NVidia will always have the faster hardware. NVidia hardware does more with less, and since there are limitations on what consumers think is acceptable in regards to power consumption, AMD can only make their GPUs so large and power hungry before people say, "WTF!"

Fury X vs 980 Ti is a perfect example of this. Pascal only compounds the problem for AMD, so if Vega doesn't come with a very significant efficiency improvement, then AMD is toast.

The latest gaming tests (December 2016-January 2017) have the Fury Nano (175W TDP) on the heels of the GTX 980 Ti (250W TDP) and the Fury X faster at 1440p/4K. People haven't realized yet how much more efficient Fiji has become; today the Fury Nano is the king of perf/watt among high-end 28nm cards.

Also, as I have said before, I would like to see the NVIDIA cards tested with a locked dual/quad-core CPU under DX11 vs the AMD cards under DX12, and see who does more with less hardware.
NVIDIA cards do very well under DX11, but people here keep forgetting/ignoring that DX12's main advantage TODAY is CPU performance. Up until last year everyone was talking about AMD's CPU overhead; today nobody talks about CPU performance in new games. Yes, most people on this forum have CPUs overclocked above 4GHz, and that is fine, but there are millions of people who don't have overclocked 4.5GHz CPUs or 8-12-18 threads.

Also, GPU power consumption alone is one thing, but total system power consumption under gaming is what we should care about. I have seen lately that NVIDIA GPUs use more CPU power, and the total system consumption difference between the GTX 1060 and RX 480 is not higher than 20-30W tops, and it gets lower in DX12 games. This is also something reviewers haven't investigated yet: power consumption in DX12 titles.
 

zinfamous

No Lifer
Jul 12, 2006
110,810
29,564
146
It's quite clear the RX 480 has the upper hand in DX12 (but not an outright advantage) and the GTX 1060 has the upper hand in DX11. However, right now and for upcoming titles, it seems like there are no IQ benefits from using DX12. DX12 right now only seems to either improve performance (mostly on AMD cards and sometimes on nVIDIA cards) or regress performance and/or introduce graphical glitches.

That being said, DX11 is here to stay; it's far more stable and, on most titles, provides more performance than DX12. So I think it's a wash between the cards, because if I were a GTX 1060 owner I'd run games mostly in DX11, which might be faster than an RX 480 in DX12 on the same game, and vice versa.

Some might buy the RX 480 because of the DX12 mileage, but I'm thinking that by the time DX12 actually provides benefits beyond performance (I'd actually be more concerned with game stability than performance at this point in time), there will be newer, faster cards out.

Sure, but you have to realize that the people who buy these cards primarily for that "DX12 mileage" aren't going to be buying those new cards 1 to 3 years from now that are better (maybe even "just as good" with regard to DX12 efficiency?) with a more mature DX12. Those who buy for longevity tend to skip a generation, maybe even two (plenty of people out there are still rocking 7870s and such), so the new faster card of the day isn't really on their map. Just look at how the R9 series has aged extremely well over the last several years, with owners getting great DX12 performance now, even compared to the newer 480 or even nVidia's comparable or higher-performance options.

This tends to be the AMD buyer's thinking, anyway, and it makes sense when they see their 3-year-old cards getting faster every year, while two successive generations of cards from the competitor have been consigned to the dustbin during that same time (not that they are bad and don't work--they just don't have the same longevity). Again, it's just different consumer targeting from both companies. nVidia won't have as good a DX12 implementation until Volta, not to say that Pascal is shabby, because the raw performance is there, but as we see time and time again on these forums with the "DX12 doesn't matter!" comments, people don't buy nVidia thinking that the card will be cutting edge a year from now. It will still perform very well, but they have a history of planned obsolescence (that is too strong a word, but you know what I mean).

Now, at the same time, AMD has a problem with all these wonderful, fancy DX12 designs in their cards: they simply go unused. From what I recall reading recently, the on-paper TFLOPS advantage that the 480 has over the 1060 is mostly unrealized, because much of the core goes underutilized unless developers design for those benefits--which they really aren't doing. We see it mentioned that the 1060 is on par with the 480 despite the 480's significant TFLOPS advantage... but it appears that the real truth is that the 480 is leaving much of that TFLOPS advantage on the table because AMD is, again, ahead of the curve relative to software developers. But that's what market dominance gets you, not to say that DX12 being a bit more complex in taking advantage of those designs doesn't add constraints for developers... I'm assuming.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The latest gaming tests (December 2016-January 2017) have the Fury Nano (175W TDP) on the heels of the GTX 980 Ti (250W TDP) and the Fury X faster at 1440p/4K. People haven't realized yet how much more efficient Fiji has become; today the Fury Nano is the king of perf/watt among high-end 28nm cards.

The Fury Nano is a very selectively binned card, which I don't think is representative of the Fury lineup, to put it mildly. NVidia could do something similar if they wanted to make a point as well. As for the Fury X being faster, that's only compared to reference-clocked models. I've always said it: comparing a water-cooled Fury X to an air-cooled, reference-clocked 980 Ti is a dishonest comparison, as the Fury X is already running near the limit of its potential (solely due to water cooling) whereas the 980 Ti has loads of untapped performance. Elite-model 980 Tis can hit and sustain 1500/8000 clock speeds on air and blow the doors off a Fury X, much less water-cooled setups.

That said, I do agree that AMD has made significant strides in improving the efficiency of their drivers recently, which has increased Fury's performance. However, it's the same story with AMD as it's always been..

Johnny come lately.

NVIDIA cards do very well under DX11, but people here keep forgetting/ignoring that DX12's main advantage TODAY is CPU performance. Up until last year everyone was talking about AMD's CPU overhead; today nobody talks about CPU performance in new games. Yes, most people on this forum have CPUs overclocked above 4GHz, and that is fine, but there are millions of people who don't have overclocked 4.5GHz CPUs or 8-12-18 threads.

DX12 dramatically reduces CPU overhead, but when the game still runs slower in DX12 mode than it does in DX11, it doesn't seem like much of an advantage. There are just a small handful of games that run better in DX12 for both vendors, but that list is growing as developers get more skilled and confident in using it.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
DX12 dramatically reduces CPU overhead, but when the game still runs slower in DX12 mode than it does in DX11, it doesn't seem like much of an advantage. There are just a small handful of games that run better in DX12 for both vendors, but that list is growing as developers get more skilled and confident in using it.

We haven't seen any DX12 vs DX11 comparison with slower, locked dual/quad-core CPUs; you only see the games run faster in DX11 because reviewers use 4.5GHz or faster Core i7s.
I have a feeling the games will run faster in DX12 even with the NVIDIA cards when we use slower CPUs. Use a Core i3 6100 with a GTX 1060 and measure the same games in DX11 and DX12, and I'm sure most of the time DX12 will be the faster API, even with NVIDIA Pascal cards.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Last edited:

Spjut

Senior member
Apr 9, 2011
928
149
106
We haven't seen any DX12 vs DX11 comparison with slower, locked dual/quad-core CPUs; you only see the games run faster in DX11 because reviewers use 4.5GHz or faster Core i7s.
I have a feeling the games will run faster in DX12 even with the NVIDIA cards when we use slower CPUs. Use a Core i3 6100 with a GTX 1060 and measure the same games in DX11 and DX12, and I'm sure most of the time DX12 will be the faster API, even with NVIDIA Pascal cards.

I've seen plenty of users on different forums with older/slower CPUs who say DX12 runs better in, for example, Rise of the Tomb Raider. Users with SLI have also reported DX12 multi-GPU to be superior.

The only DX12 game I've played personally is Hitman. Benchmarks painted a negative picture of DX12, and while I haven't done any thorough testing, I noticed right away that DX12 performance was more consistent for me. DX11 clearly had parts that felt slow that weren't there in DX12. That's on the original GTX Titan and an i7 4770K.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,362
5,025
136
It's amazing this still gets posted when AMD themselves have said they didn't bin the Nano.

You can actually get close to Nano efficiency on a Fury/X if you have a sample capable of undervolting by -90mV or so and you reduce the maximum frequency to 900-950 MHz. That would indicate the Nano's PCB design and BIOS are effectively "throttling" the card relative to a Fury/X to keep it closer to peak efficiency.
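
To put rough numbers on that idea, here is a minimal back-of-the-envelope sketch (Python) assuming dynamic power scales roughly with f * V^2; the stock power, voltage, and undervolt figures are illustrative assumptions, not measurements of any particular card:

# Rough estimate of how an undervolt/downclock moves a Fury X toward
# Nano-like power, assuming dynamic power scales roughly with f * V^2.
# All figures below are assumptions for illustration, not measured values.

def scaled_power(base_power_w, f_base_mhz, v_base, f_new_mhz, v_new):
    """Scale a baseline board power by the f * V^2 dynamic-power approximation."""
    return base_power_w * (f_new_mhz / f_base_mhz) * (v_new / v_base) ** 2

stock_power_w = 275.0     # assumed Fury X typical gaming board power (W)
stock_clock_mhz = 1050.0  # Fury X reference clock
stock_voltage_v = 1.20    # assumed stock core voltage

uv_clock_mhz = 950.0      # reduced maximum frequency, per the post
uv_voltage_v = 1.11       # roughly a -90 mV undervolt

estimate_w = scaled_power(stock_power_w, stock_clock_mhz, stock_voltage_v,
                          uv_clock_mhz, uv_voltage_v)
print(f"Estimated board power after undervolt/downclock: {estimate_w:.0f} W")
# ~213 W with these toy numbers -- in the same ballpark as Nano-like behavior,
# though leakage, power limits, and real silicon will shift the result.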
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I love seeing the same exact debate play out over and over. "Efficiency is number 1!" -> "On a purely numerical basis DX12 Fury Nano was the most efficient in 28nm." -> "Uh... Nano doesn't count."

Can we keep this on Polaris and Pascal per the thread title?
 

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
The R9 Nano actually consumes 183W and the 980 Ti 211W. Not nearly so big a difference as the paper TDPs suggest, and the Nano is barely more efficient than the Fury (Fiji Pro), as the 14W less power usage is offset by the slightly lower performance.

https://www.techpowerup.com/reviews/AMD/R9_Nano/28.html

The Nano may be slightly ahead of the 980 Ti in efficiency (I don't think any recent tests have been done), but neither can match the 980 (which actually consumes less than the 970 somehow).
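
As a quick sanity check of how close that gap really is, here is a small worked calculation (Python) using the measured gaming power figures quoted above; the relative-performance number for the Nano is an assumption for illustration, not a benchmark result:

# Perf/W comparison from measured gaming power (TechPowerUp: R9 Nano ~183 W,
# GTX 980 Ti ~211 W). The relative-performance figure is assumed, not measured.

cards = {
    # name: (measured gaming power in W, relative performance, 980 Ti = 1.00)
    "GTX 980 Ti": (211.0, 1.00),
    "R9 Nano": (183.0, 0.90),  # assumed ~90% of a reference 980 Ti
}

base_power_w, base_perf = cards["GTX 980 Ti"]
base_efficiency = base_perf / base_power_w
for name, (power_w, rel_perf) in cards.items():
    efficiency = rel_perf / power_w
    print(f"{name}: {efficiency / base_efficiency:.2f}x the 980 Ti's perf/W")
# With these assumptions the Nano comes out only a few percent ahead, which is
# why measured power, not paper TDP, is the number that matters.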
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's amazing this still gets posted when AMD themselves have said they didn't bin the Nano.

https://community.amd.com/thread/187797#2671010

Wow, if the chips aren't binned, then someone in AMD's marketing department screwed up, because several reviewers who previewed/reviewed the Fury Nano mentioned that they were binned, including our very own Ryan Smith:

In order to achieve this AMD has turned to a combination of chip binning and power reductions to make a Fiji card viable as the desired size. The Fiji GPUs going into the R9 Nano will be AMD’s best Fiji chips (from a power standpoint), which are fully enabled Fiji chips that have been binned specifically for their low power usage. Going hand in hand with that, AMD has designed the supporting power delivery circuitry for the R9 Nano for just 175W, allowing the company to further cut down on the amount of space required for the card.

Now there is a catch here, and that these efficiency improvements come from chip binning and a careful crafting of the product specifications, and not from an architectural improvement. On the whole Fiji still needs to draw quite a bit of power to achieve its peak performance and R9 Nano doesn’t change this. Instead with R9 Nano AMD makes a deliberate and smart trade-off to back off on GPU clockspeeds to save significant amounts of power. The last 100MHz of any chip is going to be the most expensive from a power standpoint, and as we’ve seen this is clearly the case for Fiji as well.

It's hard for me to believe that they aren't binned, and to be honest, comments from AMD's marketing department need to be taken with a mound of salt. If AMD is using the same process for the Fury Nano that it does for the rest of the Fiji GPUs, then why on Earth couldn't they deliver a fully featured, air-cooled version with 1050MHz clock speeds? Is Fiji that terrible that it needs an extra 100W plus water cooling to reliably hit 1050MHz?

And what's the power usage on those?

Not as much as you'd think. My GTX 980 Ti Amp Extreme could hit 1500/8000 on air with no voltage adjustments and still stay within the 275W limit in the majority of my games. Only particularly GPU-intensive games like The Witcher 3 (with HairWorks on and shadows on ultra) and Crysis 3 would cause the GPU to throttle. A water-cooled card, though, would likely sustain the 1500/8000 clock speeds even in GPU-intensive scenarios.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I love seeing the same exact debate play out over and over. "Efficiency is number 1!" -> "On a purely numerical basis DX12 Fury Nano was the most efficient in 28nm." -> "Uh... Nano doesn't count."

Right, and you honestly don't believe that the Nano is binned specifically for low power? If it's not binned, then why on Earth couldn't AMD deliver an air-cooled Fury X within a 275W TDP envelope?
 
Reactions: tviceman

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Anyway, going back to Polaris and Pascal, it's evident that AMD still has a long way to go to match the efficiency of Pascal, but Polaris was definitely a step in the right direction. We shall see how close they get with the Vega architecture. Either way, it's unfortunate that neither the Pascal nor the Vega cards will be able to deliver 4K/60 FPS with all the eye candy this generation. I guess those of us holding out for proper 4K performance will likely have to wait for Volta/Navi.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
Just stop using the perf/W metric. Consumers don't care about perf/W.
We know AMD can't reach NV's level, so why focus on the negative?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Just stop using the perf/W metric. Consumers don't care about perf/W.
We know AMD can't reach NV's level, so why focus on the negative?

But you're wrong. Consumers do care about perf/w. It matters in notebooks, it matters in SFF cases, it matters with single slot cards, it matters with people who want quiet rigs, it matters with people who aren't trying to deforest the Amazon. Whoever is getting killed in the perf/w battle says it doesn't matter. People in the Nvidia camp said the same thing when Fermi flopped out of the gate, and now that the situation is ENTIRELY reversed with Polaris failing to move the meter forward people in the AMD camp are acting ignorant. What is ironic, though, is that it matters more NOW than it did THEN with Fermi. Thin/light yet powerful laptops are plentiful now but were not 6+ years ago. SFF designs are plentiful now, but were fairly scarce 6+ years ago. People can package extremely capable rigs today with a GTX 1070 and i5 CPU using less power than one GTX 480.

So yes, it does matter.
 
Reactions: Carfax83

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,542
2,542
146
While performance per watt does matter, to some people, it is only one metric to go by. Both of you are entitled to your opinion on this, but please do not argue about it in this thread, which is about Polaris and Pascal performance in games.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
The R9 Nano actually consumes 183W and the 980 Ti 211W. Not nearly so big a difference as the paper TDPs suggest, and the Nano is barely more efficient than the Fury (Fiji Pro), as the 14W less power usage is offset by the slightly lower performance.

https://www.techpowerup.com/reviews/AMD/R9_Nano/28.html

The Nano may be slightly ahead of the 980 Ti in efficiency (I don't think any recent tests have been done), but neither can match the 980 (which actually consumes less than the 970 somehow).

Take note that TechPowerUp uses Metro: Last Light for the gaming tests to measure power; if they used a different (2016) game, things would change in favor of the Fiji cards. Not to mention that Metro is an NV-optimized game; it's like using Hitman for AMD cards.
I would really like to see perf/watt in a neutral, recent game like BF1: no extra advantage for either AMD or NV, the game is optimized for both.

Also, to add again: measuring GPU-only power usage (what TPU does) is one thing; total system energy consumption is another. TPU measures only the GPU power, which is fine, but we also need to see total system power use. One card may need more CPU performance than the other, and that will increase or decrease total system power, and in the end that is what we need to know, not the GPU alone.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
But you're wrong. Consumers do care about perf/w. It matters in notebooks, it matters in SFF cases, it matters with single slot cards, it matters with people who want quiet rigs, it matters with people who aren't trying to deforest the Amazon. Whoever is getting killed in the perf/w battle says it doesn't matter. People in the Nvidia camp said the same thing when Fermi flopped out of the gate, and now that the situation is ENTIRELY reversed with Polaris failing to move the meter forward people in the AMD camp are acting ignorant. What is ironic, though, is that it matters more NOW than it did THEN with Fermi. Thin/light yet powerful laptops are plentiful now but were not 6+ years ago. SFF designs are plentiful now, but were fairly scarce 6+ years ago. People can package extremely capable rigs today with a GTX 1070 and i5 CPU using less power than one GTX 480.

So yes, it does matter.

Consumers don't care about perf/watt (except perhaps for environmentalists); OEMs/system builders and server builders care about perf/watt. In my almost 20 years in IT/PC retail and e-tail, not even one person has ever asked about perf/watt for a desktop build. Not even in the recent recession years, when incomes have fallen and electricity costs have risen. And the power difference between Polaris 10 (RX 480) and Pascal GP106 (GTX 1060) is not big enough for people to really care about: not for the electricity cost, not for use in SFF cases, not even for people who want to save the Amazon.
I'm not saying that power consumption is not a concern in general; for example, everyone here will agree that it's better to have the RX 480 at 150W TDP vs the R9 390 at 250W TDP at the same performance. But the power difference between Polaris and Pascal is not what we had between Hawaii and Maxwell, so it's not really worth talking about. Polaris vs Pascal is not the same as Cypress (HD 5870) vs Fermi (GTX 480).

Things change in laptops; this is perhaps the only area where perf/watt is essential for consumers, even if they don't know it. And that is because most people use their laptops plugged into the wall. Perf/watt is essential for laptops because of the tight thermal constraints of the design and because of battery life, not because of power usage in the sense we mean on desktops. But since the vast majority of laptop gamers play with the laptop plugged into the wall rather than on battery, perf/watt in this case is only a matter of the thermal constraints of making a thinner laptop; energy consumption again doesn't matter to the consumer.
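
For a sense of scale on the running-cost side of that argument, here is a back-of-the-envelope calculation (Python); the gaming hours and electricity price are assumptions for illustration:

# Annual electricity cost of a ~30 W gaming-power gap between two cards.
# Hours per day and the electricity price are assumed for illustration.

delta_w = 30          # assumed average gaming power difference (W)
hours_per_day = 3     # assumed daily gaming time
price_per_kwh = 0.15  # assumed electricity price ($/kWh)

kwh_per_year = delta_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year, roughly ${cost_per_year:.2f}/year")
# About 33 kWh and ~$5 per year under these assumptions -- small enough that
# running cost alone rarely decides a desktop purchase.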
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
OK, I'm sorry I posted about perf/watt; I just thought it would be relevant to the conversation. If the admins want, I will delete my two posts above.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Right, and you honestly don't believe that the Nano is binned specifically for low power? If it's not binned, then why on Earth couldn't AMD deliver an air-cooled Fury X within a 275W TDP envelope?
I literally never said that. Don't put words in my mouth. Stop with this garbage about Nano. It has nothing to do with Polaris and Pascal. Stop derailing the thread.
 
Reactions: Bacon1

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Sorry, one more thing about the power comparison (this is about Pascal vs. Polaris on the desktop), but feel free to delete this if you think it's derailing.

I recently bought a Pascal-based 1050 Ti over an RX 460 due to the restrictions of a very small custom-designed case (OSMI). It's 8.1 litres in size and shaped like a trash can, with the video card sitting flat on the bottom, so the amount of heat output is very important to manage. While the 1050 Ti isn't the best price/performance card (that probably goes to the regular 2GB 1050 or the 4GB RX 470), it's the fastest card per square inch with the lowest heat output. However, it's more about managing heat output than the amount of power drawn (although the two are obviously related, heat output is not necessarily as impactful, since it depends on how that heat is radiated out of the case).

Now, if you've got a typical-size case with decent airflow that can handle at least a 10.5" card (most decent cases these days fit in this category, even cheaper ITX cases), the power and heat argument kind of goes out the window when we're comparing the RX 480 to the 1060 (or the 4GB 470 vs the 3GB 1060), as the power differences are pretty low.

To put it simply, ask yourself the following question.

"Do I really need a bigger power supply or better case cooling to handle the 480 over the 1060?".

Once you add up total system power consumption, you realize the 30-40W difference the video card makes under heavy loads has very little bearing on your choice of the rest of your system components, and unless you're facing a corner case like the one I mention above, it's really a non-issue.
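
To make that question concrete, here is a rough sketch (Python) of the PSU math; all component figures are assumptions for illustration, not measurements:

# Back-of-the-envelope PSU check for the RX 480 vs GTX 1060 question.
# All component figures are assumed for illustration.

def system_load_w(gpu_w, cpu_w=90, rest_of_system_w=50):
    """Estimated peak DC load for a typical single-GPU gaming box."""
    return gpu_w + cpu_w + rest_of_system_w

def recommended_psu_w(load_w, headroom=1.3):
    """Simple rule of thumb: ~30% headroom over peak load."""
    return load_w * headroom

for name, gpu_w in [("GTX 1060", 120), ("RX 480", 160)]:  # assumed gaming draw
    load_w = system_load_w(gpu_w)
    print(f"{name}: ~{load_w:.0f} W peak load, "
          f"~{recommended_psu_w(load_w):.0f} W PSU recommended")
# Both builds land comfortably under a common 450-500 W unit, so the ~30-40 W
# gap rarely changes the PSU or case-cooling decision on its own.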
 