[Benchlife] R9 480 (Polaris 10 >100w), R9


swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
How it will achieve this, can you tell me?

I think the top end Polaris 10 of this year will wind up between a 390X and the FuryX.

I think it will clock at least 25% higher, dynamically boost the parts of the chip that need it, include a primitive discard accelerator, and have much-improved schedulers. I'm no GPU engineer, so I have no idea what all of this adds up to. What I can say is that I'm a business professional whose job is heavily focused on costs and market prices.

I have several years in supply chain, on both the purchasing and the operations sides, and while I can't read the minds of the guys at AMD, I can evaluate what I think they aim to do based on the current business climate.

It just makes sense, especially based on their statements, that they want to bring 390X+ performance down to ~$300. I think this will end up highly case dependent, as I think many of the tweaks found in Polaris will drive DX12 performance compared to older DX11 titles. I think the trend we see with console GCN synergies will continue, and GCN will keep performing better than its Nvidia counterparts for the next few years.

I will say that I think Polaris 10 will be within 10% of FuryX at 1440P. Let that go on the "record" :sneaky:

With no timely GDDR5X or HBM2, Polaris will have to rely on highly clocked GDDR5 through a 256-bit bus; however, I think we will see multiple hardware/software changes that keep it from tanking at 4K. Remember, even the 980 with its 256-bit bus didn't do terribly at 4K; it just couldn't scale as well as the 290X with its much wider memory interface.

It just makes no sense to me that AMD would release a drastically updated architecture on an extremely improved node just to give 390 performance at $300. I mean, we already have that! AMD knows we already have that. They have stated "new performance/price levels," so why not take them at face value? To me that means ~Fury X (and we all know if P10 hits 48 FPS and Fury X is 51 FPS then this goal is met) @ $300.
 

Adored

Senior member
Mar 24, 2016
256
1
16
They'll target the sweet spot on the perf/Watt curve, but chances are they could fit Polaris 10 into a range between the 390 and Fury X; there's only 30% between them anyway. At 390 level it'll have staggering perf/Watt, maybe even 2.5x, but at Fury X performance level perhaps not so much.

Some rumours I don't really like though, for example when was the last time AMD didn't have a dual 6-pin midrange card? If Polaris 10 is 135W max then a single 6-pin would be enough.

Giving up on 15-20% performance to save 40W of power doesn't make a lot of sense to me - unless it really is bandwidth constrained. I still think Polaris 10 will be on a level with Fury X and Polaris 11 will be around 380X.
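A quick sanity check on that 2.5x figure. This is a rough sketch in which the R9 390's ~275 W typical board power and a ~110 W Polaris 10 are my assumptions, not AMD numbers:

```python
# Perf/W gain at equal performance is just the ratio of the two power draws.
r9_390_power = 275.0     # W, typical R9 390 board power (assumption)
polaris10_power = 110.0  # W, rumoured Polaris 10 figure (assumption)

perf_per_watt_gain = r9_390_power / polaris10_power
print(f"perf/W improvement at 390-level performance: {perf_per_watt_gain:.1f}x")
```

With those inputs the ratio lands at exactly 2.5x. At Fury X performance the chip would be clocked harder and draw more, so the ratio would shrink, which matches the caveat above.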
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Profits, nonexistent.....

What? AMD already sells a 438mm2 8GB GDDR5 390 for $325. Why would they make no $ selling a 232mm2 14nm card for $249?

NV's $199 960 ~ $499 680.

Do you not remember $299 660Ti ~ $499 580, or $549 7970 -> $299 280X 2 years later?

$700 780Ti -> $330 970 alone suggests we should in theory get a $349 14nm/16nm card that's at least as fast as a 980Ti/Fury X. If AMD/NV don't, it's more because they are milking the FinFET generation...

Giving up on 15-20% performance to save 40W of power doesn't make a lot of sense to me - unless it really is bandwidth constrained.

I agree. Make the fastest 75W card and the fastest 150W card. Those are key market segments for power conscious users/OEMs. When you are that close, why make a 60W and a 110W card? Seems like you are leaving a lot of performance on the table while not maximizing those key TDP segments.

They can do:

R9 470 = 50-60W
R9 470X = 70-75W (Polaris 11)

R9 480 = 110-120W
R9 480X = 140-150W (Polaris 10)

I'd do that instead.

It just makes no sense to me that AMD would release a drastically updated architecture on an extremely improved node just to give 390 performance at $300. I mean, we already have that! AMD knows we already have that. They have stated "new performance/price levels," so why not take them at face value? To me that means ~Fury X (and we all know if P10 hits 48 FPS and Fury X is 51 FPS then this goal is met) @ $300.

960 peaks at almost 120W.



A $300 price is way too high then, though. If AMD brings 390 performance into the 110-120W space at $199-219, that's 72% higher graphical performance over what we have now with the GTX 960 (relative performance 62% vs. 36%; 62/36 ≈ 1.72).

I guess if you already have 970/290/290X/390, this doesn't impress in the least (i.e., GTX480 owners don't really care that 750Ti is almost as fast with 1/4th the power usage since they moved on to the 980Ti by now). But for people who spent $150-200 this gen on low end 950/960/380 level cards, this is a true generational leap in a 110-120W space.

The R9 390X is 83% faster (66/36 ≈ 1.83) than a GTX 960 at 1440p. That means if NV improves perf/watt 1.83x for that market segment with GP106, it gets you a ~118W card with the R9 390X's performance. A 110-120W Polaris 10 would be a direct competitor?
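Spelling out the arithmetic in this post: the 36/62/66 relative-performance figures, the 1.83x perf/W factor, and the ~120 W GTX 960 peak are all the post's own inputs, reused here as assumptions.

```python
# Relative performance on a common scale (figures quoted above).
gtx960, r9_390, r9_390x = 36.0, 62.0, 66.0

gain_390 = r9_390 / gtx960 - 1    # ~72% higher than a GTX 960
gain_390x = r9_390x / gtx960 - 1  # ~83% higher than a GTX 960

# Hypothetical GP106: 1.83x the 960's perf/W, delivering 390X-level
# performance. Power = 960 power * (perf ratio) / (perf-per-watt ratio).
gtx960_power = 120.0  # W, "peaks at almost 120W" per the post
gp106_power = gtx960_power * (r9_390x / gtx960) / 1.83

print(f"390 vs 960: {gain_390:.0%}, 390X vs 960: {gain_390x:.0%}")
print(f"hypothetical GP106 power: ~{gp106_power:.0f} W")
```

That comes out to roughly 120 W, in line with the ~118 W figure in the post.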
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I am not, but you forget that because it will (most likely) be a 256-bit GPU, it will have... 32 ROPs. Unless AMD increases the ratio of ROPs to memory controllers.

Since Tahiti, the ROPs are decoupled from the memory controllers.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
In my opinion, 125W will be the max TDP for the full Polaris 10 chip.

If AMD's efficiency estimates for Polaris are accurate, we are looking at a 2560-core GCN4 GPU with a 100W TDP, or a 3072-core GCN GPU at 125W.
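For what it's worth, that estimate implies roughly linear power scaling with core count, a simplification on my part, since real TDP also covers memory controllers, I/O, and uncore:

```python
# Per-core power implied by the 2560-core / 100 W estimate above.
watts_per_core = 100.0 / 2560

estimates = {cores: round(cores * watts_per_core) for cores in (2560, 3072, 6144)}
for cores, watts in estimates.items():
    print(f"{cores} cores -> ~{watts} W")
```

3072 cores comes out to ~120 W, in the ballpark of the 125 W estimate; extrapolating the same rate to a 6144-core chip would imply ~240 W, before any memory-power savings from HBM2 are considered.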
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
In my opinion, 125W will be the max TDP for the full Polaris 10 chip.

If AMD's efficiency estimates for Polaris are accurate, we are looking at a 2560-core GCN4 GPU with a 100W TDP, or a 3072-core GCN GPU at 125W.

So that would mean a 6144 GCN GPU would fit inside 275W-300W TDP?

Using your logic, a 2560-core Polaris 10 covers the mid-range market segment, but flagship cards will have 5000-6000 GCN cores?

HD7970 (high-end) = 2048 SPs (60% more) vs. HD7870 (mid-range) = 1280 SPs

Polaris 10 at 2560 SPs (mid-range) plus 60% implies a 4096 SP Vega at ~250W, yet you have 3072 GCN cores inside a 125W TDP? Do you see the flaw in that logic?
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,670
136
Yes, they will, but the TDP will be lower because of HBM2.
 
Feb 19, 2009
10,457
10
76
DICE @ GDC this year gave a presentation about some cool features in their game engine, Frostbite. One of the features they talked about was a software-based primitive discard, where they run compute on the shaders during next-scene setup to determine what to discard and skip rendering altogether.

They claimed it gains ~20% in performance, despite taking some shader time to run that stage.

Think about what a hardware feature that handles this step would do for IPC, as well as for reducing bandwidth requirements.

i.e. If you are not rendering 1/3 of the scene because it's outside the FOV or hidden, you are not only throwing away all those polygons, you also bypass the textures for those objects and any shadow or shader post-processing.
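As a toy illustration of the idea (not DICE's actual compute pass, just a minimal CPU-side sketch of discarding primitives before they reach the rasterizer, here using a back-face test):

```python
# Discard triangles that face away from the camera: they can never be
# visible, so skipping them saves rasterization, texturing and shading work.
def is_front_facing(tri, view_dir):
    """Back-face test: triangle normal (edge cross product) vs. view direction."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    # Front-facing if the normal points back toward the viewer.
    return nx * view_dir[0] + ny * view_dir[1] + nz * view_dir[2] < 0

view = (0.0, 0.0, 1.0)  # camera looking down +Z
tris = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),  # normal +Z: faces away, discarded
    ((0, 0, 0), (0, 1, 0), (1, 0, 0)),  # normal -Z: faces camera, kept
]
visible = [t for t in tris if is_front_facing(t, view)]
print(f"kept {len(visible)} of {len(tris)} triangles")
```

A real discard pass would also cull primitives outside the view frustum and ones too small to cover a sample; the payoff is that everything downstream (textures, shadows, shader post-processing) is skipped for the discarded set.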

Polaris also has a better command processor, instruction pre-fetch, improved cache, SIMD level power gating and turbo...

Hard to see it not having a huge performance improvement per SP.
 

el etro

Golden Member
Jul 21, 2013
1,581
14
81
Will post my predictions too!


Polaris 11 cut: R9 380 performance, 50W average power consumption.

Polaris 11 Full: a bit faster than the R9 380X, average power consumption just over 50W. Top perf/W of the Polaris series, tied with Polaris 10.

Polaris 10 cut: as fast as Fury vanilla, average power consumption ~100W.

Polaris 10 Full: faster than Fury X, consumption over 100W, notebook-oriented design.


On prices I don't have a good bet, but for top Polaris I would try $400 or $450.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
DICE @ GDC this year gave a presentation about some cool features in their game engine, Frostbite. One of the features they talked about was a software-based primitive discard, where they run compute on the shaders during next-scene setup to determine what to discard and skip rendering altogether.

They claimed it gains ~20% in performance, despite taking some shader time to run that stage.

Think about what a hardware feature that handles this step would do for IPC, as well as for reducing bandwidth requirements.

i.e. If you are not rendering 1/3 of the scene because it's outside the FOV or hidden, you are not only throwing away all those polygons, you also bypass the textures for those objects and any shadow or shader post-processing.

Polaris also has a better command processor, instruction pre-fetch, improved cache, SIMD level power gating and turbo...

Hard to see it not having a huge performance improvement per SP.
Primitive discard is a double-edged sword. They have been implementing it since BF3... results were mixed. Low-poly objects would need to be decomposed to dodge pop-in: situations where an object crosses the threshold below which it's no longer rendered, even though some small part of it would still have been visible, thus causing the pop-in.

Sent from my XT1040 using Tapatalk
 
Feb 19, 2009
10,457
10
76
Primitive discard is a double-edged sword. They have been implementing it since BF3... results were mixed. Low-poly objects would need to be decomposed to dodge pop-in: situations where an object crosses the threshold below which it's no longer rendered, even though some small part of it would still have been visible, thus causing the pop-in.

Yes, there are pros and cons, and DICE talked about improving it in Battlefront. They iterate on it to get it better.

I'd imagine they would test it with a threshold value: set it higher and more objects get culled from rendering, but with a greater risk of pop-in. It will have to be fine-tuned for the best result.

But as games increase in scene complexity, there's a lot of wasted performance that you as a gamer will never see.

Prime example: Hitman, standing outside that boat but looking at it. There are thousands of NPCs being rendered, but you aren't seeing them. Take a current GPU to that scene and your FPS will drop, even though most of the scene is just a large boat of no complexity! There's a reason AMD demoed that particular part: they know full well their Polaris uarch will be much more efficient with that scene, as it discards most of the NPCs from the rendering pipeline. (Cheating, hehe.)
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
tell us how you really feel.

After-market 980Ti vs. 980
28% faster at 900p
35% faster at 1080p
49% faster at 1440p
56% faster at 4K
http://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_XtremeGaming/23.html

Conclusion: if you are not using DSR/VSR/SSAA/mods at 1080p or not gaming at 120-144Hz, you are CPU limited at 1080p in many modern games, even with a Fury X/980Ti and a 6700K OC.

It's even worse for AMD under DX11. Literally CPU limited with an i7 4790K @ 4.9Ghz (!).

That's why the context of 1080P 60Hz gaming is very important. NV's 980Ti doesn't magically go from 28% faster than 980 to 56% faster. It's because at lower resolutions, the GPU load is nowhere close to 99-100% since the Skylake CPU isn't fast enough.
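A simple max(CPU, GPU) frame-time model makes the effect obvious. The numbers below are illustrative, not benchmark results:

```python
# A frame ships when both the CPU (draw-call submission) and the GPU
# (rendering) are done, so the slower of the two sets the frame rate.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_frame_ms = 10.0  # CPU prepares a frame in 10 ms (~100 fps ceiling)

# Hypothetical GPU render times (ms) at two resolutions:
gpu_frame_ms = {"980":    {"1080p": 11.0, "4K": 33.0},
                "980 Ti": {"1080p": 7.5,  "4K": 22.0}}

for res in ("1080p", "4K"):
    lead = (fps(cpu_frame_ms, gpu_frame_ms["980 Ti"][res])
            / fps(cpu_frame_ms, gpu_frame_ms["980"][res]) - 1)
    print(f"{res}: 980 Ti lead = {lead:.0%}")
```

With these made-up times the 980 Ti's raw GPU advantage is ~47-50% at both resolutions, but the 10 ms CPU floor compresses its measured lead to 10% at 1080p while the full 50% shows at 4K, the same pattern as the TPU numbers above.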

It wouldn't be shocking if Polaris 10 is 90% of Fury X's performance at 1080p, but what about 1440P and 4K?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
After-market 980Ti vs. 980
28% faster at 900p
35% faster at 1080p
49% faster at 1440p
56% faster at 4K
http://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_XtremeGaming/23.html

Conclusion: if you are not using DSR/VSR/SSAA/mods at 1080p or not gaming at 120-144Hz, you are CPU limited at 1080p in many modern games, even with a Fury X/980Ti and a 6700K OC.

It's even worse for AMD under DX11. Literally CPU limited with an i7 4790K @ 4.9Ghz (!).

That's why the context of 1080P 60Hz gaming is very important. NV's 980Ti doesn't magically go from 28% faster than 980 to 56% faster. It's because at lower resolutions, the GPU load is nowhere close to 99-100% since the Skylake CPU isn't fast enough.

It wouldn't be shocking if Polaris 10 is 90% of Fury X's performance at 1080p, but what about 1440P and 4K?

A default GTX 980Ti is only 26% faster than a default GTX 980 at 4K according to the TPU link above.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
It just makes no sense to me that AMD would release a drastically updated architecture on a extremely improved node just to give 390 performance at $300. I mean.. we already have that! AMD knows we already have that. They have stated "new performance/price levels" so why not take them at face value? To me that means ~FuryX (and we all know if P10 hits 48FPS and FuryX is 51FPS then this goal is met) @ $300.

AMD would be stupid not to. Make it a bit cheaper than the 390(X) at similar performance but with a higher margin for them. Use marketing to sell on performance/watt. Plus your total system cost will be lower: a smaller PSU is enough and you will draw less power (= cheaper VR).

And don't forget mining. If it delivers on that front (same performance as a 290/390 at half the power), it will sell like hot cakes. In fact, AMD should do everything to make mining work at launch and factor that into the price. I doubt they will do so, however.

Once Vega is released they can lower the price.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
AMD would be stupid not to. Make it a bit cheaper than the 390(X) at similar performance but with a higher margin for them. Use marketing to sell on performance/watt. Plus your total system cost will be lower: a smaller PSU is enough and you will draw less power (= cheaper VR).

And don't forget mining. If it delivers on that front (same performance as a 290/390 at half the power), it will sell like hot cakes. In fact, AMD should do everything to make mining work at launch and factor that into the price. I doubt they will do so, however.

Once Vega is released they can lower the price.

Stupid or not, AMD has already said that they will be offering 390/970 level performance for less than what it's currently selling for, so somewhere around $250 (or lower) is the only possibility really.
 
Feb 19, 2009
10,457
10
76
Stupid or not, AMD has already said that they will be offering 390/970 level performance for less than what it's currently selling for, so somewhere around $250 (or lower) is the only possibility really.

Pretty sure Raja has emphasized Polaris enough times. It's designed to bring the best leap in perf/W and perf/$ AMD has ever done.

AMD has also stated several times that the architecture itself is designed and optimized for 14nm FinFET (for those still clinging to the next-console APU being 28nm!!).
 

Mopetar

Diamond Member
Jan 31, 2011
8,024
6,480
136
Some rumours I don't really like though, for example when was the last time AMD didn't have a dual 6-pin midrange card? If Polaris 10 is 135W max then a single 6-pin would be enough.

My guess is that they'll be binning some of their best chips so that when GDDR5X becomes more available they can release versions of the cards with the better memory and higher clocks under the 480X and 490X names.

This gives them a good performance bump and the ability to fill in different price points that they may not have originally hit. As much as they talk about having the best performance per dollar, they're still going to want a somewhat better card with a ~$75 price bump to tempt people into upgrading.

I think that's where the dual 6-pin cards will come into play.
 

MrTeal

Diamond Member
Dec 7, 2003
3,587
1,748
136
I really wish wccftech would stop just making up their own slides and presenting them like they're official AMD ones.
That's even outside the fact they apparently have no idea what AMD uses "XT" to denote in their product stack.
Further up ahead, we have the C98XXX board, which is without a doubt, the Polaris 10 GPU. Shipping data from Zauba has actually identified this particular GPU as Baffin XT. Which means we are looking at the cut down variant of Polaris 10.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
A default GTX 980Ti is only 26% faster than a default GTX 980 at 4K according to the TPU link above.

Boom. You walked straight into the trap I laid.

After-market 980Ti vs. 980
28% faster at 900p
35% faster at 1080p
49% faster at 1440p
56% faster at 4K

You just said a reference 980Ti is only 26% faster than a reference 980 at 4K. I also showed that a 20%+ factory pre-overclocked 980Ti is only 35% faster at 1080p, but that lead grows to 56% at 4K.

There you go, you yourself just proved that 980Ti is CPU bottlenecked at 1080p. That's why 1080p 60Hz benchmarks are often misleading for comparing GPU speed - what you are comparing is the extent to which flagship cards are CPU limited with modern CPUs and how efficient/inefficient AMD/NV DX11 driver overhead is. How much more proof is needed? You want video proof? :thumbsup:

Using CPU limited scenarios/peasant gaming resolutions is NOT a proper GPU comparison that shows just how capable flagship cards really are. If and when next generation games become so GPU intensive that 980Ti/Fury X are fully maxed out at 1080p, then we'll revisit.

This is why for next generation cards, you will see Vega and Big Pascal/GP104 grow their GPU lead against current gen cards as we go to 1440P and again to 4K. Their lead at 1080p will be smaller due to CPU bottlenecks (until more GPU-intensive next gen games are added to the test suites such as BF5, Mirror's Edge Catalyst, Deus Ex Mankind Divided, The Division, etc.).

AMD would be stupid not to. Make it a bit cheaper than the 390(X) at similar performance but with a higher margin for them. Use marketing to sell on performance/watt. Plus your total system cost will be lower: a smaller PSU is enough and you will draw less power (= cheaper VR).

It's not that simple since NV is rumored to be launching 3 GP104 SKUs. Pricing will also be dependent on where Polaris 10 stacks up against 1060Ti/1070 level cards. It's not really going to matter that today R9 390 costs $329.99 because 14nm/16nm cards will be competing against each other, not against 390/970 as those will be EOL.

It's hard to imagine any scenario where 390 level of performance is not available at $249.99-$269.99. Otherwise, there would be almost no progress made other than lower power usage.

Less than 2 years after $549 7970/$499 7970Ghz launched, AMD delivered 280X for $299.

Actually, we should have ~ 90-95% of Fury X level of performance in a mid-range $299-349 card if you ask me.
 

funks

Golden Member
Nov 9, 2000
1,402
44
91
I agree. Make the fastest 75W card and the fastest 150W card. Those are key market segments for power conscious users/OEMs.

Somebody give this guy a samwich. I personally believe they should only have 3 tiers.

Bottom Tier - a card like the 750Ti, with no external power cable required (350W PSU), the fastest Polaris 11 they can make (less than $100)

Middle Tier - a card that basically goes up to 150W, one 8-pin power connector required (550W PSU), the fastest Polaris 10 they can make (less than $250)

Balls-Out Tier - a card that basically goes up to 300W, two 8-pin power connectors (for the ballers with their 750W-1000W PSUs) - Vega (less than $500)

Anything lower than the bottom tier needs to go to integrated graphics. It doesn't make sense to have as many different types of cards as they do now; pretty sure developers would love it too, and it makes things a lot less confusing for consumers.

Considering all 3 consoles will be/are running GCN (Nintendo, Sony, Microsoft), they have the advantage in game porting for the foreseeable future. Additionally, it helps when developers don't have too many targets, so they can test/optimize their presets for those tiers to give the consumer the best experience out of the box.
 