The technical merits: Polaris vs. Pascal


ocre

Golden Member
Dec 26, 2008
1,594
7
81
GCN 4 is solid, forward-thinking, future-proof. Pascal is just Maxwell on speed. It's not going to hold up in DX12/Vulkan workloads, which will be omnipresent by next year. VR is a bit of a toss-up; not sure how that will pan out.

GP104 is much better manufactured. They have really dialed in TSMC 16nm, pushing clocks and power efficiency to uncharted heights. AMD have struggled with GloFo 14nm, which is not surprising, considering this is a brand-new architecture on a brand-new process that has never had a discrete GPU built on it, at a fab that has never made a discrete GPU. As a result, it clocks very, very low. The optimum clocks for Polaris 10 are probably in the ~1 GHz range, while GP104's are in the ~1.5 GHz range. That's an unbelievable 50% advantage. Yes, at 28nm Nvidia had a clock speed advantage, but it was more like ~20%.

14nm never brought the expected advantage in density either. In fact, on 28nm AMD was more than 7.5% denser than Nvidia, so they have actually lost ground at 14nm.

On the other hand, by forgoing perf/W with the 480, AMD have been able to push performance while bringing the cost way down and retaining availability. They have turned off power management features and pushed voltages high to cover the lowest common denominator - to qualify as many P10 dies at 480 clocks as possible. So AMD can afford to sell P10 at $200 with relatively wide availability. The 470 will get perf/W back in line.

So it's a bit of a mixed bag. P10 would have been much better on TSMC 16nm, but the move to GloFo had to happen. It'd be reasonable to assume they'll work it out by Vega time. On pricing, though, $200 is still great for the 480, and I imagine the 470 will be even better.

Perhaps it's time to put down the Kool-Aid. We heard all that stuff about how AMD made such huge strides with major architecture changes while Nvidia just relied on the node. It's ridiculous.

You know, Nvidia and AMD used the exact same 28nm node last gen, and Nvidia built a design with amazingly low power consumption. That was 100% a result of a design built for and around efficiency.
Pascal has many changes, but AMD fans completely write them off. All of them.
Pascal's great efficiency is the result of a lot of effort; it's a result of Nvidia's focus. If you took a look, you would see that power consumption was a major focus and Nvidia built a phenomenal level of capability there. Since you disregard all the other improvements in Pascal, you should at least know that the high clock speeds and efficiency are only possible because of the time and effort put into Pascal. That's how they achieved these things; it's not some accident.
 
Feb 19, 2009
10,457
10
76
It's been really tiring to see so many attacks on Nvidia and their CEO over the Pascal 1080 event. It was an absolute bashing party around here. Of course there were statistics and figures that were best-case scenarios, but I bet I read 10,000 posts of negative blasting and claims that Nvidia lied. It was ridiculous to read, people going on and on about things like the 1080 being faster than 980 SLI. Ridiculous because there were plenty of cases where the 1080 was faster than 980 SLI - so many, in fact, that several reviews showed the 1080 was on average faster than 980 SLI. But even then it was blasted: Nvidia lied, it's only a few percent faster so it doesn't count. Or: well, it's only because of SLI scaling, so that means Nvidia lied. Best-case scenarios have been used forever. I remember slides from AMD of a bunch of games at truly obscure and bizarre settings, used to prop up the appearance that their card was faster than an Nvidia card - only when that card launched, reviews found a completely different result. Maybe at those obscure settings, with no AF, on a strange set of test systems, they really had found some rare special cases where their card was faster than that Nvidia card. But for that, which had very different results from reviewers, they got very little flak.

The point, my point... things like the 1080 being faster than 980 SLI were not just some obscure, super-rare case set up specially for a 1080 win. Reviewers found the 1080 faster than 980 SLI on many counts, and even faster on average... but "Nvidia lied" because it was only a few percent.

I am honestly surprised by the 480's power consumption. With AMD's claims of 2.8x perf per watt, nearly every single person here was positively sure that AMD was going to be so far ahead of Pascal in perf per watt that Nvidia would be in serious trouble. Surely you were convinced that the 480 was going to be more efficient than GP104 too.

And yet there are crickets when it comes to that. Really, let's try not to be so critical of one company while ignoring some seriously way-out-there stuff that comes from the other.

What do you mean, crickets? Please excuse yourself, because I've been quite vocal about how crap Polaris is due to its power usage. I posted several long posts on this topic and actually got accused of being a negative troll against AMD. I kid you not. Someone mistook me for an NV shill...
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
And yet there are crickets when it comes to that. Really, let's try not to be so critical of one company while ignoring some seriously way-out-there stuff that comes from the other.

If by crickets you mean a whole lot of people loudly denouncing it - to the point where it has largely drowned out the people saying it's actually pretty good price/perf, and where I had to explain that people were as mad as they were partly because they're worried about how it will scale - then sure.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,269
12
81
There are no crickets about the power consumption. It's too high. Less than 110 W was definitely a dream, but the 480 is going to be positioned in this generation the same way the 7850/7870 were positioned in theirs, and as such I would have expected the 480 to draw no more than 130 W and definitely not exceed its stated 150 W TDP. The 7800 series was the second performance tier of its generation and operated between 100 and 140 W, not above that.

Its performance/watt is definitely an improvement over the 7970, 280X, and 390 cards, but it should be better considering this is a mid-range card.

It'll be interesting to see if the 480's power consumption improves over time as the process matures, or if AIB cards with better coolers and power delivery can help the cause. Looks like the silicon lottery is also real, so AIBs have a chance to put golden chips in their high-end cards.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
RX 480 (full Polaris 10)
232 mm2
5.7b transistors
Density: 24.57 million transistors /mm2

GTX 1080 (full GP104)
315mm2
7.2b transistors
Density: 22.86 million transistors /mm2


The Technical and Performance Metrics
GP104 is a 36% larger die.
GP104 has 26% more transistors (equivalently, Polaris 10 has 21% fewer).
Polaris 10 is 7.5% more dense than GP104.
GTX 1070 and 1080 are 70-80% more efficient depending on 1080p or 1440p.
GTX 1080 is 75-85% faster depending on 1080p or 1440p.
GTX 1070 is 50% faster at every resolution.

Perf/$
RX 480 8GB is 66% more cost effective (perf/$) than GTX 1080 (current prices).
Crossfire RX 480 8GB is 17% more cost effective than GTX 1080.
RX 480 8GB is 25% more cost effective than GTX 1070.
GTX 1070 is 9-14% more cost effective than Crossfire RX 480 8GB.
RX 480 4GB is 50% more cost effective than GTX 1070 and twice as cost effective as a GTX 1080.



I based these results on techpowerup's RX 480 review graphs between the 1440p and 1080p resolutions, and also Ryan Smith's summary on the last page of the RX 480 preview. It will be interesting to make these comparisons again when GTX 1060 comes out. I will update this thread when it happens if people are interested in continuing a discussion.


Good info.

This means every single person who recommended people get a 960 4GB instead of spending $50 more on an R9 290 that was more than 50% faster will insist that no one increase their $250 RX 480 budget to $400 for a 1070, right?
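
For anyone who wants to sanity-check the density and perf/$ figures quoted above, here is a minimal sketch of the arithmetic. The die sizes, transistor counts and relative performance numbers are the ones from the quoted post; the prices are my assumption (rough street prices at the time: RX 480 4GB ~$199, 8GB ~$239, GTX 1070 ~$449, GTX 1080 ~$699), so treat the perf/$ output as illustrative rather than definitive.

Code:
# Density from the quoted die sizes and transistor counts
polaris10 = {"area_mm2": 232, "transistors": 5.7e9}
gp104     = {"area_mm2": 315, "transistors": 7.2e9}

dens_p10   = polaris10["transistors"] / polaris10["area_mm2"] / 1e6   # ~24.57 Mxtors/mm2
dens_gp104 = gp104["transistors"] / gp104["area_mm2"] / 1e6           # ~22.86 Mxtors/mm2

print(f"GP104 die is {gp104['area_mm2'] / polaris10['area_mm2'] - 1:.0%} larger")               # ~36%
print(f"GP104 has {gp104['transistors'] / polaris10['transistors'] - 1:.0%} more transistors")  # ~26%
print(f"Polaris 10 is {dens_p10 / dens_gp104 - 1:.1%} denser")                                  # ~7.5%

# Perf/$ using the relative performance from the post (RX 480 = 1.00, GTX 1070 = 1.50,
# GTX 1080 = 1.80 at 1440p) and the assumed street prices noted above
cards = {
    "RX 480 4GB": (1.00, 199),
    "RX 480 8GB": (1.00, 239),
    "GTX 1070":   (1.50, 449),
    "GTX 1080":   (1.80, 699),
}
ref_ppd = cards["GTX 1080"][0] / cards["GTX 1080"][1]
for name, (perf, price) in cards.items():
    print(f"{name}: {(perf / price) / ref_ppd:.2f}x the perf/$ of the GTX 1080")

With those assumed prices the output lands close to the quoted figures (RX 480 8GB ~1.6x the perf/$ of the GTX 1080, RX 480 4GB ~2x, RX 480 8GB ~25% ahead of the GTX 1070).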
 

know of fence

Senior member
May 28, 2009
555
2
71
[...]
14nm never brought the expected advantage in density either. In fact, on 28nm AMD was more than 7.5% denser than Nvidia, so they have actually lost ground at 14nm.

On the other hand, by forgoing perf/W with the 480, AMD have been able to push performance while bringing the cost way down and retaining availability. They have turned off power management features and pushed voltages high to cover the lowest common denominator - to qualify as many P10 dies at 480 clocks as possible. So AMD can afford to sell P10 at $200 with relatively wide availability. The 470 will get perf/W back in line. [...]

So the RX 480 equals the GTX 970 (145 W TDP, 2x 6-pin connectors) in performance as well as performance per watt. Both are the minimum spec for VR.

Which quite officially still puts AMD's graphics cards one generation behind. But now they are cheap, probably because of the horrible binning leeway mentioned above.
After the Apple A9 release we also knew that TSMC 16FF chips were superior to their Samsung 14nm counterparts. Maybe not 43%-better-power-efficiency kind of superior, though.

Assuming Nvidia's answer will consume 100 W and cost 300 USD, I wonder how fast that $100 difference will break even on electricity cost... Hmm, about 10,000 hours at $0.20/kWh. If it weren't for the high idle consumption (caused partially by the 8 GB of RAM) it might still be a toss-up, especially for casual gamers in cold rooms.
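
A rough sketch of that break-even arithmetic, with the assumptions spelled out (a hypothetical 100 W Nvidia card at $300 versus a roughly 150 W RX 480 at $200, and $0.20/kWh; these are the post's assumptions, not measurements):

Code:
# Hypothetical comparison card vs. RX 480, per the assumptions above
price_delta_usd  = 300.0 - 200.0              # $100 price gap
power_delta_kw   = (150.0 - 100.0) / 1000.0   # ~50 W extra draw under load
electricity_rate = 0.20                       # $/kWh

extra_cost_per_hour = power_delta_kw * electricity_rate      # $0.01 per hour of gaming load
break_even_hours    = price_delta_usd / extra_cost_per_hour  # 10,000 hours
print(f"Break-even after about {break_even_hours:,.0f} hours under load")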
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Gibbo shipped approximately as many 480s on day one as 1080s in total.
Technical merit is fine, but driving a business forward to profitability is crucial, and that's where AMD has been lacking.
Selling a few huge 390/Fury dies from TSMC and getting fined under the WSA is nothing compared to loading up the mainstream market through GF.
It's a world of difference.
The finer technical details are important and interesting, but in this context they are minuscule.
 

Snarf Snarf

Senior member
Feb 19, 2015
399
327
136
After the Apple A9 release we also knew that TSMC 16FF chips were superior to their Samsung 14nm counterparts. Maybe not 43%-better-power-efficiency kind of superior, though.

Assuming Nvidia's answer will consume 100 W and cost 300 USD, I wonder how fast that $100 difference will break even on electricity cost... Hmm, about 10,000 hours at $0.20/kWh. If it weren't for the high idle consumption (caused partially by the 8 GB of RAM) it might still be a toss-up, especially for casual gamers in cold rooms.

Do keep in mind that while GF is licensing Samsung's process, this is the first major release for GF on 14nm, and I wouldn't expect the two to be one and the same in terms of quality. I may be proved wrong after the process matures a bit, but I have very little faith in GF's ability to perfectly replicate what Samsung was able to do with the same process.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
So in short the NVIDIA cards are better, but AMD lowered their prices to remain competitive.

Not quite.

The nVidia cards are faster. They are in a higher market tier. They also cost a lot more. Very doubtful AMD "lowered prices." They likely targeted $200 from the beginning of the design phase.
 

Mikeduffy

Member
Jun 5, 2016
27
18
46
How about we look at DX12 only, since the vast majority of games going forward will be using that API.

From the HardwareCanucks review we see that the RX480 literally smashes everything Nvidia has - or will have - in performance/$. It's a huge embarrassment for Nvidia - just look at this:
Moving on to DX12 and we see AMD’s new architecture really coming into its own against the NVIDIA cards. It absolutely demolishes the GTX 970 across the board (even in NVIDIA-friendly games like Tomb Raider) and even manages to run circles around that once-expensive GTX 980. These tests show Maxwell’s performance in current DX12 applications is nothing short of embarrassing and proves this architecture simply wasn’t designed with these types of workloads in mind. How this translates to Pascal or upcoming DX12-based games is unknown at this point (remember, our sample size is quite small here) but something drastic needs to be done if NVIDIA’s mid-tier competitors are to have any hope against Polaris.

Polaris 10's DirectX 12 performance will only get better - the 1060 is screwed because they'll have to clock it to the moon to get DX12 parity with a 480.

I find it so funny that the enthusiasts on this forum want newer hardware, but they'd rather use an old API.
 

Irenicus

Member
Jul 10, 2008
94
0
0
Gibbo shipped approximately as many 480s on day one as 1080s in total.
Technical merit is fine, but driving a business forward to profitability is crucial, and that's where AMD has been lacking.
Selling a few huge 390/Fury dies from TSMC and getting fined under the WSA is nothing compared to loading up the mainstream market through GF.
It's a world of difference.
The finer technical details are important and interesting, but in this context they are minuscule.




This is really important, and it looks like it might be undercut sooner than expected with the Nvidia 1060 launch being moved up. AMD needs more market share, and to do that they need to offer more performance for less money than the competition, even with the increased power usage compared to Pascal (which is to be expected to some extent; GCN has hardware schedulers that don't exist in the same form in Pascal and draw additional power).

AMD needs more market share so that the install base of GCN cards makes it more worthwhile to design games with the kinds of optimizations a game like Hitman has versus, say, some GameWorks title.

Had they had the floor to themselves for a longer period, I think it would have worked and achieved its ends, but Nvidia is looking to cut that short. They don't want to allow AMD to increase market share too much, and they don't want it to become more attractive to design games with the kinds of optimizations that allow a Hitman-type game or Total War: Warhammer to excel on GCN.


Market share matters because this is the exact argument you see from Nvidia loyalists when they say game devs should not bother incorporating anything other than GameWorks, since Nvidia is so dominant on the PC gaming side.

This is where I hope GPUOpen comes into play. AMD needs to make it trivially easy to shift the kinds of optimizations used on consoles over to the PC side in DX12, so they happen by default. AMD needs more Total Wars and fewer Tomb Raiders.
 

deanx0r

Senior member
Oct 1, 2002
890
20
76
So in short the NVIDIA cards are better, but AMD lowered their prices to remain competitive.

Ding ding ding.

They've been doing the same thing with their CPUs. I hope Zen and Vega won't suffer the same fate.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
What do you mean, crickets? Please excuse yourself, because I've been quite vocal about how crap Polaris is due to its power usage. I posted several long posts on this topic and actually got accused of being a negative troll against AMD. I kid you not. Someone mistook me for an NV shill...

Just to clarify, the "lie" in your post is what inspired my response. It was meant to be more generally oriented and not so much specifically towards you.

Posting on my phone at work in a hurry, so I didn't/don't have a lot of time.

But regardless, it's not that no one is bringing up power consumption; they are. What I mean about crickets... Nvidia was bashed and attacked for every single thing in their Pascal reveal - lies, lies, lies, Jen-Hsun Huang this and that.
I don't see these same people holding AMD to anywhere near the same standard. It's clear that power consumption has been talked about, but there's no huge thread calling AMD out, naming them liars, roasting them to dust.

It's not that I want this to happen; I just think it should be noted how badly Nvidia was burned over claims that, versus the outcome, were nowhere near as outrageous.

Basically, a little chilling out when it comes to calling Nvidia liars... your post was over a $20 difference.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
What do you mean, crickets? Please excuse yourself, because I've been quite vocal about how crap Polaris is due to its power usage.

It's not just power usage though; it's getting beaten badly in perf/transistor and perf/mm2, the latter despite having a denser design. The $199 price for the 4GB is undeniably good for the performance, but we're looking at a gulf between Nvidia and AMD resembling the gulf between AMD and Intel, which is a more unfavorable position considering Nvidia and AMD are on roughly equal footing process-wise.

The truer comparison to Polaris is obviously GP106; that should paint a much clearer picture of how this generation is shaping up.
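
To put rough numbers on that perf/transistor and perf/mm2 point, here is a quick sketch using the figures posted earlier in the thread (232 mm2 / 5.7B transistors for Polaris 10, 315 mm2 / 7.2B for GP104, and the ~80% average performance lead quoted for the GTX 1080). The exact percentages shift with whichever benchmark average you pick, so treat these as ballpark only.

Code:
# Thread numbers: RX 480 as the baseline (perf = 1.00), GTX 1080 ~1.80
p10_area, p10_xtors, p10_perf    = 232, 5.7e9, 1.00
gp104_area, gp104_xtors, gtx1080 = 315, 7.2e9, 1.80

perf_per_mm2_gap  = (gtx1080 / gp104_area)  / (p10_perf / p10_area)   # ~1.33
perf_per_xtor_gap = (gtx1080 / gp104_xtors) / (p10_perf / p10_xtors)  # ~1.43

print(f"GTX 1080 leads in perf/mm2 by about {perf_per_mm2_gap - 1:.0%}")
print(f"GTX 1080 leads in perf/transistor by about {perf_per_xtor_gap - 1:.0%}")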
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
I am personally pretty impressed with the RX 480 as a GPU, just not the reference card. AIBs should be pretty compelling.

Large GPUs have a perf/watt advantage; just compare the 980 to the 960. That's because the non-compute components take up a similar area on big chips as they do on small ones, so that overhead is proportionally lower on big GPUs.

So by that token, if you compare the RX 480 to its last-generation cousin the R9 380, the RX 480 shows almost a 2x perf/watt improvement. It is also better than the GTX 960 in perf/watt. And these are conservative numbers based on TechPowerUp's chart, since the RX 480 has 8 GB of GDDR5 VRAM which consumes around 40 watts, and I don't think TPU normalized for this discrepancy.

It is significantly behind Pascal, but those are bigger chips, and we know Pascal/Maxwell don't have GCN's command processor and ACEs, for instance, and have fewer FP resources, etc.

So when I consider all these points, I think the RX 480 falls within the hyped expectations for the card; AMD touted a 2x improvement in the past, which is pretty much what they delivered.
 

biostud

Lifer
Feb 27, 2003
18,392
4,962
136
Are there any signs that show Nvidia will tank in DX12? So far it just seems that AMD can use it to catch up to Nvidia. Same goes for async compute. The theoretical advantages are one thing, but will there be any significant advantages in real life?

To me it seems that Nvidia simply has the most efficient technology, and in 2-3 years, when DX12 is the dominant API, Nvidia will have made new GPUs that can handle async compute much better than they do now.

I would really like AMD to beat Nvidia, but it doesn't seem to happen.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
Are there any signs that show Nvidia will tank in DX12? So far it just seems that AMD can use it to catch up to Nvidia. Same goes for async compute. The theoretical advantages are one thing, but will there be any significant advantages in real life?

To me it seems that Nvidia simply has the most efficient technology, and in 2-3 years, when DX12 is the dominant API, Nvidia will have made new GPUs that can handle async compute much better than they do now.

I would really like AMD to beat Nvidia, but it doesn't seem to happen.
Do you think Nvidia can add all these DX12 features to their GPU without compromising some of the pure graphics workload efficiency? I doubt it.
 

biostud

Lifer
Feb 27, 2003
18,392
4,962
136
I am personally pretty impressed with the RX 480 as a GPU, just not the reference card. AIBs should be pretty compelling.

Large GPUs have a perf/watt advantage; just compare the 980 to the 960. That's because the non-compute components take up a similar area on big chips as they do on small ones, so that overhead is proportionally lower on big GPUs.

So by that token, if you compare the RX 480 to its last-generation cousin the R9 380, the RX 480 shows almost a 2x perf/watt improvement. It is also better than the GTX 960 in perf/watt. And these are conservative numbers based on TechPowerUp's chart, since the RX 480 has 8 GB of GDDR5 VRAM which consumes around 40 watts, and I don't think TPU normalized for this discrepancy.

It is significantly behind Pascal, but those are bigger chips, and we know Pascal/Maxwell don't have GCN's command processor and ACEs, for instance, and have fewer FP resources, etc.

So when I consider all these points, I think the RX 480 falls within the hyped expectations for the card; AMD touted a 2x improvement in the past, which is pretty much what they delivered.

It delivers roughly the same performance/watt as the GTX 970, a two-year-old 28nm design. Not really that impressive, IMHO.

https://www.techpowerup.com/reviews/AMD/RX_480/25.html
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
It delivers roughly the same performance/watt as the GTX 970, a two-year-old 28nm design. Not really that impressive, IMHO.
There are quite possibly bottlenecks that make this an unbalanced design, similar to Fiji, thereby delivering less perf/mm2 and even less perf/W than was expected. So, compared to Hawaii we have 4 fewer CUs, half the number of ROPs, and roughly two-thirds the bandwidth; something similar went on going from Hawaii to Fiji.

We'll get a better idea with future, more balanced designs, but one thing we've seen is that GCN is more prone to such anomalies than Maxwell or Pascal, since it can't clock as high.

As for the release, it is disappointing, but if the power usage were taken care of it'd be the card to own for anyone whose budget is $250 or thereabouts.
 

know of fence

Senior member
May 28, 2009
555
2
71
I am personally pretty impressed with the RX 480 as a GPU, just not the reference card. AIBs should be pretty compelling.

Large GPUs have a perf/watt advantage; just compare the 980 to the 960. That's because the non-compute components take up a similar area on big chips as they do on small ones, so that overhead is proportionally lower on big GPUs.

So by that token, if you compare the RX 480 to its last-generation cousin the R9 380, the RX 480 shows almost a 2x perf/watt improvement. It is also better than the GTX 960 in perf/watt. And these are conservative numbers based on TechPowerUp's chart, since the RX 480 has 8 GB of GDDR5 VRAM which consumes around 40 watts, and I don't think TPU normalized for this discrepancy.

It is significantly behind Pascal, but those are bigger chips, and we know Pascal/Maxwell don't have GCN's command processor and ACEs, for instance, and have fewer FP resources, etc.

So when I consider all these points, I think the RX 480 falls within the hyped expectations for the card; AMD touted a 2x improvement in the past, which is pretty much what they delivered.

In TechPowerUp's charts the 980 is significantly ahead of the 980 Ti, which is a much bigger chip. Performance per watt suffers when these things inevitably run into a bandwidth bottleneck, and all flagship cards do. So the size argument is dubious.

With Maxwell, Nvidia did a similar thing: they released the small 750 Ti first, instantly denying whatever claim AMD held over the budget market. Now, jumping a node and a half, we finally get a card from AMD with the same (load) power efficiency as the 750 Ti. But it's pretty clear that AMD has yet to implement the kind of fine-grained power management and power gating that allowed Maxwell to basically jump a generation without a node shrink. This isn't a matter of die sizes.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
In TechPowerUp's charts the 980 is significantly ahead of the 980 Ti, which is a much bigger chip. Performance per watt suffers when these things inevitably run into a bandwidth bottleneck, and all flagship cards do. So the size argument is dubious.

With Maxwell, Nvidia did a similar thing: they released the small 750 Ti first, instantly denying whatever claim AMD held over the budget market. Now, jumping a node and a half, we finally get a card from AMD with the same (load) power efficiency as the 750 Ti. But it's pretty clear that AMD has yet to implement the kind of fine-grained power management and power gating that allowed Maxwell to basically jump a generation without a node shrink. This isn't a matter of die sizes.
I don't think it's the power gating; I think it's simply that Maxwell stripped out a lot of Kepler's compute resources in order to achieve the efficiency it gets in purely graphical workloads.

Example:


If you calculated perf/watt in these types of workloads, Maxwell's efficiency would be abysmal compared to Kepler and GCN.
 
Last edited: