[Rumor (Various)] AMD R7/9 3xx / Fiji / Fury


Azix

Golden Member
Apr 18, 2014
1,438
67
91
In CCC, if you bring the TDP down to, let's say, 50%, Hawaii will run at 145W TDP at around 85-90% of base performance.

http://forums.macrumors.com/threads/nmp-2015-where-is-it.1849662/page-8#post-20880421 The quote below is the most important part.

Power efficiency is an oft-used negative against the large-die Hawaii chips, but I've been playing with powertune settings and Furmark recently as an experiment to fit a "hot and noisy" AMD card into an SFF with limited cooling.

Actually, I stand by an earlier post I made that says I think AMD pushed Hawaii silicon too far.
With both GPU-Z and Furmark able to report power consumption, I can see a 100W reduction in power draw on 290X cards for as little as 5% performance loss.

If you have a Hawaii card, I urge you to crank the power limit down in the Overdrive tab of CCC and see what the resulting clockspeed is under full load. Even in a worst-case scenario, I'm seeing a typical clockspeed of 850MHz with the slider all the way to the left at -50%.

That means that Hawaii (the two samples I personally own, at least) can run at 850+MHz on only 145W (half the 290W TDP). As mentioned, that's a worst-case scenario using a power virus like Furmark. Under real gaming situations (I was messing around with Alien Isolation at 1440p ultra settings) the clocks averaged about 925MHz, yet my PC was inaudible; fans that normally hum along at 55% were barely spinning at 30% during my gameplay.

As Nvidia has proved, you can make a 28nm chip run efficiently. I think the design of Hawaii holds up very well under vastly reduced power constraints - AMD just pushed it outside its comfort zone in order to get the most out of it.

In saying that, the "underpowered" 290X is around the same performance as my GTX 970 and also around the same cost - both significantly higher than a GTX 960 4GB. I don't know if die-harvested 290 cards handle power limit caps as well as the cherry-picked 290X cards do.


from http://techreport.com/news/27996/4gb-gtx-960s-trickle-into-retail-channels?post=893388#893388



This actually means they could have gotten massive energy savings if they had gone with GlobalFoundries' 28nm, doesn't it? Higher clocks at lower power on an already really efficient architecture. Too bad they invested in 20nm. We might have had a better 28nm chip for the midrange.
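For illustration, here's a minimal back-of-the-envelope sketch (Python) of the perf/W implication of the quoted figures; the 290W board power and the "85-90% of performance at 145W" scaling are the quoted poster's numbers, not official AMD measurements:

```python
# Back-of-the-envelope perf/W comparison for Hawaii at a reduced PowerTune
# limit, using the figures quoted above (not official AMD data).

scenarios = [
    # (label, board power in W, performance relative to stock)
    ("stock (0%)",       290.0, 1.000),
    ("-50% power limit", 145.0, 0.875),  # "85-90% of base performance"
]

stock_perf_per_watt = scenarios[0][2] / scenarios[0][1]

for label, power_w, rel_perf in scenarios:
    gain = (rel_perf / power_w) / stock_perf_per_watt - 1
    print(f"{label:18s}: {power_w:5.0f} W, {rel_perf:5.1%} perf, "
          f"perf/W {gain:+.0%} vs stock")
```

At the quoted numbers, halving the power limit would raise perf/W by roughly 75%, which is the crux of the "pushed outside its comfort zone" argument.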
 
Last edited:

The Alias

Senior member
Aug 22, 2012
647
58
91
My 4770k was not OC'd when I ran that benchmark on the 390x. Stock 4770k and 32GB Mushkin DDR3.
So that means your 390X has to have something else going for it, since your 3.5-3.9 GHz 4770K is going against a 4.4 GHz 4770K of the same architecture, and your 390X only has a 20 MHz advantage over HEXUS's 1030 MHz 290X, yet you're still beating them by 200 points. PCB shots and leaks may indicate rebrands, but the numbers are irrefutable. Something else has to be going on with the 390X.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
But still an issue at this stage? That would be an outright catastrophe.

Remember this slide that Maddie posted a while back? Not sure where it came from, but it discusses two GPU tapeouts on "28HPM": one ">500sq.mm" and one ">350sq.mm". The GPU with a size of >350 sq. mm. could easily be Tonga; though I don't think AMD released an official figure, some sites estimated it at 359 sq. mm., presumably using calipers. The thing is, a Google search shows that 28HPM is a TSMC mobile process. That would explain a lot... yield problems (if it's a different process than AMD was using on their previous GPUs), the higher transistor density, and why the R9 285 is so much less efficient than the R9 M295X... because the desktop version is being clocked outside its comfort zone. (Note that the 918 MHz clock speed on the R9 285 is lower than most desktop GCN parts.)

I suspect the only reason AMD made this part at all was because Apple wanted it. And since Apple was in the driver's seat, they got a part optimized for their application. This still doesn't explain why they left in junk silicon such as the extra memory controller blocks, unless that was to increase yields by allowing for redundancy. Or unless they started with Tahiti as a base and did a rush job making as few changes as possible. I still think Tonga was supposed to be on 20nm, and they had to hurriedly back-port it to meet Apple's deadline. If it wasn't for the Retina iMac contract, I bet it would have been cancelled entirely.

That still leaves open the question of what the >500 sq. mm. chip on a mobile process could be. Surely not Fiji... why would they even consider 28HPM for that?
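As a rough sanity check on the density point, here's a small sketch (Python) using commonly cited contemporary figures - roughly 4.31B transistors in ~365 sq. mm. for Tahiti versus ~5.0B in ~359 sq. mm. for Tonga; treat both the numbers and the conclusion as approximations, not official data:

```python
# Rough transistor-density comparison, Tahiti vs. Tonga.
# Transistor counts and die sizes are commonly cited estimates of the
# era, not official die measurements, so treat the output as approximate.

chips = {
    "Tahiti": {"transistors_billions": 4.31, "die_mm2": 365.0},
    "Tonga":  {"transistors_billions": 5.00, "die_mm2": 359.0},
}

density = {}
for name, c in chips.items():
    # Millions of transistors per square millimetre
    density[name] = c["transistors_billions"] * 1000 / c["die_mm2"]
    print(f"{name}: ~{density[name]:.1f} MTr/mm^2")

print(f"Tonga is ~{density['Tonga'] / density['Tahiti'] - 1:.0%} denser, "
      f"consistent with a denser (mobile-oriented) process like 28HPM")
```

That works out to roughly 18% higher density for Tonga, which is at least consistent with the 28HPM theory.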
 

BryanC

Junior Member
Jan 7, 2008
19
0
66
Remember this slide that Maddie posted a while back? Not sure where it came from, but it discusses two GPU tapeouts on "28HPM": one ">500sq.mm" and one ">350sq.mm". The GPU with a size of >350 sq. mm. could easily be Tonga; though I don't think AMD released an official figure, some sites estimated it at 359 sq. mm., presumably using calipers. The thing is, a Google search shows that 28HPM is a TSMC mobile process. That would explain a lot... yield problems (if it's a different process than AMD was using on their previous GPUs), the higher transistor density, and why the R9 285 is so much less efficient than the R9 M295X... because the desktop version is being clocked outside its comfort zone. (Note that the 918 MHz clock speed on the R9 285 is lower than most desktop GCN parts.)

I suspect the only reason AMD made this part at all was because Apple wanted it. And since Apple was in the driver's seat, they got a part optimized for their application. This still doesn't explain why they left in junk silicon such as the extra memory controller blocks, unless that was to increase yields by allowing for redundancy. Or unless they started with Tahiti as a base and did a rush job making as few changes as possible. I still think Tonga was supposed to be on 20nm, and they had to hurriedly back-port it to meet Apple's deadline. If it wasn't for the Retina iMac contract, I bet it would have been cancelled entirely.

That still leaves open the question of what the >500 sq. mm. chip on a mobile process could be. Surely not Fiji... why would they even consider 28HPM for that?


Apple does not care about this Tonga part. They don't even carry it in their stores (the ones they actually sell are all Pitcairn based). Very very low volume - there's no way AMD could recoup the costs of designing Tonga if they are relying on such a niche product. I can't believe the iMac had anything to do with AMD's Tonga strategy.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Maybe all of the evidence isn't in yet?

Indeed not, and it seems the R9 390 uses WAY less power (at least under Furmark) than the R9 290.

R9 390: [GPU-Z/Furmark screenshot]

R9 290 (same clocks): [GPU-Z/Furmark screenshot]

Also, reviews show the same range of power draw for the R9 290 under Furmark:

http://www.anandtech.com/show/7601/sapphire-radeon-r9-290-review-our-first-custom-cooled-290/4



Subtract the 80W idle - with about 11W of that going to the GPU - and the R9 290 OC uses about 340W!!... while the R9 390 seems to be using about 100W less!

Of course, this is just Furmark, and the BIOS could be regulating power differently...
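For what it's worth, here's the arithmetic behind that estimate as a small sketch (Python); the 80W/11W idle figures are the estimates above, and the system-load readings are back-solved from the ~340W conclusion rather than taken from the review:

```python
# Estimating GPU-only power from wall (whole-system) measurements,
# following the method above:
#   GPU load power ~= system load - system idle + GPU idle share.
# All figures are the thread's estimates, not measured values.

SYSTEM_IDLE_W = 80.0   # whole-system idle draw (estimate above)
GPU_IDLE_W = 11.0      # portion of idle attributable to the GPU

def gpu_load_power(system_load_w: float) -> float:
    """Approximate GPU draw under load from a wall-power reading."""
    # Subtracting system idle also removes the GPU's own idle share,
    # so add it back to get the GPU's total draw.
    return system_load_w - SYSTEM_IDLE_W + GPU_IDLE_W

# A ~409W wall reading under Furmark implies ~340W for the R9 290 OC:
print(gpu_load_power(409.0))   # 340.0
# A reading ~100W lower would put the R9 390 at roughly 240W:
print(gpu_load_power(309.0))   # 240.0
```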
 

DiogoDX

Senior member
Oct 11, 2012
746
277
136
Furmark is useless because of the limits that AMD and Nvidia put on the cards. Ask the guys to run 3DMark or some other canned benchmark and compare.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
FWIW, the Furmark GPU-Z screenshot was originally posted here by someone who bought the card at Best Buy.

Maybe it's a new stepping, sort of like Godavari was over Kaveri. Or maybe AMD just lowered the power limit in the BIOS, which would stop FurMark from guzzling so much power, but would have little effect on power usage during gaming. It will be interesting to see what TechPowerUp's review shows when the time comes. As a point of comparison, the Sapphire R9 290X Tri-X OC peaks at 253W during gaming (Crysis 2), 316W in FurMark.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
So that means your 390X has to have something else going for it, since your 3.5-3.9 GHz 4770K is going against a 4.4 GHz 4770K of the same architecture, and your 390X only has a 20 MHz advantage over HEXUS's 1030 MHz 290X, yet you're still beating them by 200 points. PCB shots and leaks may indicate rebrands, but the numbers are irrefutable. Something else has to be going on with the 390X.

What?

From system to system, the score varies. Everything from different memory modules to background apps will have an effect on the score. Heck, take the same CPU, GPU, RAM, and HDD, put them in a different motherboard, boot the same OS, and you will have a different result.

Yes, even the motherboard will impact the results.

This is the issue with not having a real review site compare results in a controlled environment. Taking some random guy's result and comparing it with an entirely different system is wrong, wrong, wrong.

Review sites make sure the comparison is fair and things are equal.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Keep in mind that the 390/390X have faster GDDR5 clocks (now 1500 MHz, when the original 290/290X used 1250 MHz as its default). Now, if you take an existing R9 290/290X card and overclock its RAM to 1500 MHz, improvements will be minimal. But that doesn't mean the same would necessarily be true for the 390/390X. This is because overclocking RAM on AMD cards can sometimes actually result in worse performance, due to the way the BIOS handles memory controller timings. These are often optimized for what the card ships at.

During the mining craze, The Stilt designed an optimized BIOS for Hawaii cards that allowed lower latencies at higher memory clocks, and added some power efficiency as well. I don't think this would have worked well for gaming because it wasn't optimized for that, but there's every reason to think that 1500 MHz GDDR5 with a better BIOS could offer ~10% performance improvement at the same core clocks. That's about the improvement that the R9 270X (1400 MHz memory) offered over the 7870 (1200 MHz memory).
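The raw bandwidth math behind the memory-clock bump is straightforward (a quick sketch; GDDR5 transfers four bits per pin per memory-clock cycle, and Hawaii's 512-bit bus is assumed):

```python
# Peak theoretical GDDR5 bandwidth from memory clock and bus width.
# GDDR5 is quad-pumped: 4 bits per pin per memory-clock cycle.

def gddr5_bandwidth_gb_s(bus_bits: int, mem_clock_mhz: float) -> float:
    """Peak GDDR5 bandwidth in GB/s (decimal)."""
    effective_gbps_per_pin = mem_clock_mhz * 4 / 1000
    return bus_bits / 8 * effective_gbps_per_pin

r290x = gddr5_bandwidth_gb_s(512, 1250)   # 320 GB/s
r390x = gddr5_bandwidth_gb_s(512, 1500)   # 384 GB/s
print(f"290X: {r290x:.0f} GB/s -> 390X: {r390x:.0f} GB/s "
      f"(+{r390x / r290x - 1:.0%} raw bandwidth)")
```

A ~20% raw-bandwidth bump translating into roughly ~10% real-world gain would be in line with the 270X-over-7870 comparison above.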
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Fury is looking like, at the company level (not consumer level), a major disaster. HBM1 is troublesome and yields are low and costs are high.


HBM1 yields are apparently exactly as expected. HBM2 yields, however, are possibly more problematic - which could be where the rumors have arisen from: HBM2 concerns would delay the next generation (for AMD and Nvidia).

Quite literally every company involved has stated that there are no HBM1 production concerns. And there shouldn't be, all the technologies are established to at least some degree and production has been ongoing for quite some time already.
 

chimaxi83

Diamond Member
May 18, 2003
5,649
61
101
Furmark still being used as a measure of anything useful... more clueless people to disregard completely.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
HBM1 yields are apparently exactly as expected. HBM2 yields, however, are possibly more problematic - which could be where the rumors have arisen from: HBM2 concerns would delay the next generation (for AMD and Nvidia).

But a delay on HBM2 (or FinFET+) will hurt AMD a lot more than it will hurt Nvidia. AMD needs HBM2 and GloFo 14nm FinFET+ in order to bring the next generation to market; until that happens, they're stuck selling grossly uncompetitive products, many of them dating back to 2012. Nvidia, in contrast, doesn't need to worry nearly as much because they already invested in 28nm Maxwell; if anything, they benefit from being able to get a longer life out of those successful designs.

Basically, Nvidia had the R&D budget and the foresight to make a contingency plan for the failure of Moore's Law, and AMD didn't.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
They did that before. Nothing changed but the name.

When have they done this before? Every time they have changed the name, there has been some sort of change to the silicon. There have been times where it has been a rehash of an existing one, but never an identical part with a different name to my knowledge.
 

Wreckem

Diamond Member
Sep 23, 2006
9,459
987
126
Apple does not care about this Tonga part. They don't even carry it in their stores (the ones they actually sell are all Pitcairn based). Very very low volume - there's no way AMD could recoup the costs of designing Tonga if they are relying on such a niche product. I can't believe the iMac had anything to do with AMD's Tonga strategy.

You are thinking about currently released products.

Apple will likely be refreshing both iMac lines and the Mac Pro line in September. It is very likely that Tonga Pro and Tonga XT will be in the bulk of their desktops (iMacs and Mac Pros) by year's end.

With Apple moving completely away from Nvidia, Tonga Pro and Tonga XT are their only logical choices for their desktop lines going forward this year and next. We will know in September.
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
When have they done this before? Every time they have changed the name, there has been some sort of change to the silicon. There have been times where it has been a rehash of an existing one, but never an identical part with a different name to my knowledge.

The 7870 used the Pitcairn GPU. The R9 270/270X used a GPU that is codenamed Curacao, but as far as anyone can tell, is 100% identical to Pitcairn.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Furmark still being used as a measure of anything useful... more clueless people to disregard completely.

Yeah, I had to download it just to make the comparison with the posted information :'(.

Furmark is useful for finding maximum power draw, though, so...

That said, the R9 390 was running at the same clocks, so some type of efficiency improvement HAS occurred. It can't just be a lower PowerTune limit - otherwise the clocks would be lower to reduce the power draw.

Our options:

1. Good binning for the 390[x] GPUs
2. Tweaked layout from TSMC
3. New Stepping

Any of these could make the 390/x worthwhile over a 290/x if you're in the market for either. :thumbsup:

The GPU shot we have suggests a very recent manufacturing date versus the newest 290/x GPU shots I could find on Google Images; T1R*~T2* seem to be 290/x GPUs and T3V* seems to be 390/x... but there is nowhere near enough data available to determine that adequately.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The GPU shot we have suggests a very recent manufacturing date versus the newest 290/x GPU shots I could find on Google Images; T1R*~T2* seem to be 290/x GPUs and T3V* seems to be 390/x... but there is nowhere near enough data available to determine that adequately.

Unfortunately, on the picture I saw, the model number on the GPU was too small to read. Hawaii had a model number of 215-0852000. I would be interested to know if these are the same or different on the 390/390X cards. FWIW, the 270/270X GPU had a different model number than the 7870, but the 280X had the same model number as the 7970 Boost GPU. This corresponds with the fact that (for whatever reason) AMD saw fit to rename Pitcairn to Curacao with the advent of the 200 series, but left the Tahiti codename as-is.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
But a delay on HBM2 (or FinFET+) will hurt AMD a lot more than it will hurt Nvidia. AMD needs HBM2 and GloFo 14nm FinFET+ in order to bring the next generation to market; until that happens, they're stuck selling grossly uncompetitive products, many of them dating back to 2012. Nvidia, in contrast, doesn't need to worry nearly as much because they already invested in 28nm Maxwell; if anything, they benefit from being able to get a longer life out of those successful designs.

Basically, Nvidia had the R&D budget and the foresight to make a contingency plan for the failure of Moore's Law, and AMD didn't.

You're thinking and typing with your butt again.

This stuff makes my head hurt because I can't figure out where it comes from. You and that other guy just throw this stuff around. Both companies need better processes. HBM1 could suffice if it had to. Nvidia has cut enough out of their chips to get where they are; I'm not sure they want to cut more.
 

The Alias

Senior member
Aug 22, 2012
647
58
91
What?

From system to system, the score varies. Everything from different memory modules to background apps will have an effect on the score. Heck, take the same CPU, GPU, RAM, and HDD, put them in a different motherboard, boot the same OS, and you will have a different result.

Yes, even the motherboard will impact the results.

This is the issue with not having a real review site compare results in a controlled environment. Taking some random guy's result and comparing it with an entirely different system is wrong, wrong, wrong.

Review sites make sure the comparison is fair and things are equal.

If anything was leeching performance, it would be on the random guy's system, not HEXUS's - the professional review site's. Not to mention HEXUS's CPU was overclocked. You bring up intangibles, but the tangible differences far outweigh the intangible ones, yet the 390X still overcame them.
 
Feb 19, 2009
10,457
10
76


Unless the new series throttles Furmark like in the old days, that's a massive efficiency gain. But the clockspeed is 980 MHz flat (and GPU load maxed), so it doesn't look like major throttling is occurring?

As said, and probably something most of us can agree on: for the rumored price, it's worth it if it's 10% faster with some efficiency gains; otherwise it's a major fail. Here's hoping AMD is competitive.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
So they're finally segmenting the VCG section? Hope it works.

On Topic:
One day away for some legit answers, I hope. The above info for 390 power consumption looks amazing, if I'm reading it right.

I'm putting all my chips on Fury. I'll most likely delay my 4K upgrade for Skylake-K and 14nm cards.

Right now I just want a cheap 1440p-capable card. Seeing the price cuts on 980s at Newegg, perhaps they know 390X performance is gonna be nipping at them, if not passing them.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
About those power consumption figures: am I reading something wrong, or do the new GPUs draw around 250W in Furmark? In gaming scenarios that would be what... 190-220W?

If that's the case, I think the first benchmarks of the "Captain Jack" card on ChipHell are getting more and more plausible...
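One way to sanity-check that guess (a rough sketch assuming the gaming/Furmark ratio of the Sapphire 290X Tri-X OC quoted earlier in the thread - 253W gaming vs. 316W Furmark - carries over to the new cards):

```python
# Rough gaming-power estimate from a Furmark reading, assuming the
# gaming/Furmark ratio of the Sapphire 290X Tri-X OC quoted earlier
# (253W gaming vs. 316W Furmark) carries over. Illustrative only.

GAMING_W, FURMARK_W = 253.0, 316.0
ratio = GAMING_W / FURMARK_W           # ~0.80

furmark_reading = 250.0                # the ~250W Furmark figure above
print(f"Estimated gaming draw: ~{furmark_reading * ratio:.0f} W")
# -> ~200 W, right in the middle of the 190-220W guess
```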
 