The 480: power consumption, PCI-E power draw


antihelten

Golden Member
Feb 2, 2012
1,764
274
126
You mean like I already have multiple times such as the Strix 960?

Huh!? Your argument was never that the Strix 960 was bad; your argument was that it was disingenuous to claim that only the Strix 960 was bad. You have so far completely failed to back up this claim.

Then you started mentioning the 750 Ti being bad, which once again you have completely failed to back up.

For the millionth time, there was no average power draw from the PCI-E slot for the Strix established...it is simply not mentioned anywhere in Tom's testing.

And for the millionth time, read the freaking article! This is the third time that you are just dead wrong about what it says.

Take a wild guess as to what the yellow dotted line in this graph represents (hint: it's spelled out right in the graph itself):
 
Last edited:

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Normally the connectors or pins etc. can get damaged by excessive heat, which is due to the continuous current flowing in them. Temperature changes aren't instantaneous, so normally the specification will have a specified maximum continuous current and a maximum pulse current (the latter will be much higher than the former).

In this case, the PCI-E slot is clearly drawing a lot of continuous current above its specification. This could potentially lead to damage to the pins or connector.

Short momentary pulses are OK most of the time (as long as they are within spec), but continuous currents above specification could be bad news.
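
To put rough numbers on that, here is a minimal first-order thermal sketch. The contact resistance, thermal resistance and time constant are made-up placeholders (none of them come from the PCI-E spec), chosen only to show why sustained over-spec current heats a pin far more than short spikes:

[CODE]
R_CONTACT = 0.005   # ohm, assumed contact resistance of a single pin
R_THERMAL = 40.0    # K/W, assumed thermal resistance pin-to-ambient
TAU = 30.0          # s, assumed thermal time constant of the pin
DT = 0.01           # s, simulation time step

def peak_temperature_rise(current_samples):
    """First-order thermal model: TAU * dT/dt = P * R_THERMAL - T."""
    t_rise, peak = 0.0, 0.0
    for i in current_samples:
        power = i * i * R_CONTACT                       # I^2 * R heating in the contact
        t_rise += DT * (power * R_THERMAL - t_rise) / TAU
        peak = max(peak, t_rise)
    return peak

seconds = 60
n = int(seconds / DT)
# Case 1: steady 7 A, above a hypothetical 5.5 A continuous rating.
continuous = [7.0] * n
# Case 2: 1 A baseline with a 10 ms, 12 A spike once per second.
pulsed = [12.0 if k % int(1 / DT) == 0 else 1.0 for k in range(n)]

print(f"steady 7 A  -> peak rise ~{peak_temperature_rise(continuous):.1f} K")
print(f"12 A spikes -> peak rise ~{peak_temperature_rise(pulsed):.1f} K")
[/CODE]

The steady 7 A case ends up far hotter than the spiky one even though its peak current is lower, which is exactly why connector specs quote both a continuous rating and a much higher pulse rating.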
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The more I consider this situation, the more I think that GloFo 14LPP yields are still very poor and that AMD is using silicon that really should have been cut down to a lower SKU or discarded in the RX 480 reference cards. I suspect the best bins are being held back for mobile/AIO products (especially for Apple). And I wouldn't be at all surprised if the next tier down was sold to AIBs, with the reference cards getting the worst, most marginal chips.

What I can't piece together is why many of the cards out there don't seem to be running optimally. The release-day slide deck made a big deal of features like adaptive clocking and boot-time power supply calibration. I was under the impression that this was supposed to eliminate the need for AMD to overvolt everything out of the box, but that doesn't seem to be the case; multiple users report substantial power reductions (and less throttling!) with no loss of stability by undervolting in the new WattMan software. But why isn't this being done automatically using the new features? There would still be inconsistency if they want to maximize yields, but at least the cards that are physically capable of better economy would be able to achieve it out of the box. Could the new features simply be broken or inoperable? That would be incredibly embarrassing for AMD if true, that such an oversight could get past pre-release quality control, but at least it means that the existing cards could be made much more efficient with BIOS and/or driver updates.

GloFo should improve their process in time - everyone does, and they've been able to eventually get the 32nm and 28nm processes well-refined despite some early stumbles. By the time AMD rebrands P10 next year, parametric yields should have improved enough to be able to provide a decent boost to efficiency, performance, or both.

Various tests indicate that RX 480 is bottlenecked by memory bandwidth, so I'm not quite clear why it was set to 1266 MHz. Comparing with rumors and early benchmarks, it looks like clocks were increased fairly late in the production process. Why? Obviously the cooler and board were designed for something closer to 110-120W than 150-160W. It would be interesting to see how the card does if it was downclocked to, say, 1150 MHz, with concomitant voltage reduction. I bet that perf/watt would go way up.
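
A quick back-of-envelope on that, treating dynamic power as roughly proportional to f*V^2. The clocks are the ones mentioned above; the voltages and the 150 W / 30 W power split are assumed placeholders, not measurements:

[CODE]
f_stock, v_stock = 1266, 1.15   # MHz, V (the 1.15 V figure comes up later in the thread)
f_down,  v_down  = 1150, 1.05   # MHz, V -- assumed achievable undervolt

board_power_stock = 150.0       # W, rough reference-card figure
static_power = 30.0             # W, assumed non-scaling share (memory, fan, losses)

dyn_stock = board_power_stock - static_power
dyn_down = dyn_stock * (f_down / f_stock) * (v_down / v_stock) ** 2

power_down = static_power + dyn_down
perf_ratio = f_down / f_stock                    # crude: performance ~ clock
ppw_gain = (perf_ratio / (power_down / board_power_stock) - 1) * 100

print(f"estimated board power at {f_down} MHz / {v_down} V: ~{power_down:.0f} W")
print(f"estimated perf/W gain: ~{ppw_gain:.0f}%")
[/CODE]

Under these assumptions you give up under 10% of clock for close to a 20% cut in board power, which is the perf/watt direction the post is suggesting.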
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
The more I consider this situation, the more I think that GloFo 14LPP yields are still very poor and that AMD is using silicon that really should have been cut down to a lower SKU or discarded in the RX 480 reference cards. I suspect the best bins are being held back for mobile/AIO products (especially for Apple). And I wouldn't be at all surprised if the next tier down was sold to AIBs, with the reference cards getting the worst, most marginal chips.

Well, Raja did pretty much say that the Polaris architecture was built around Polaris 11, so it does look like Polaris 10 may have pushed the sweet spot a bit too far, relative to both the architecture and the process.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well, it makes no sense to send any cards to reviewers that might have this problem.

One would think that AMD would check the cards before giving them out to make sure there are no such surprises.

So, I have to conclude that somehow AMD was unaware of the possibility of some cards/chips being out of spec.

Looking at past release reviews of reference cards there is no indication that AMD hand picks the cards. Thus the early issues that are sometimes discovered.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Has AMD commented on this whole power issue beyond "we are working on it"? It just kind of seems like they're:


What would you like them to say? They've been informed and are checking to see if they can determine the issue in these instances. As someone else stated, if your card is drawing more than 75W from the mobo RMA it.
 

A_Skywalker

Member
Apr 9, 2016
79
4
71
What would you like them to say? They've been informed and are checking to see if they can determine the issue in these instances. As someone else stated, if your card is drawing more than 75W from the mobo RMA it.

How can I even check from where it draws the power?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Their cards were tested and approved at PCI-SIG's lab; at this stage I expect they're thinking it's a firmware issue. It's nowhere near as big a controversy as people are making it out to be. Everyone is posting Tom's results but ignoring the fact that Tom's still recommended the dang card. Talk of it frying a motherboard is just nonsense.

If it's all you've got to complain about, you scream it really loud.

No coil whine, no noisy pump. Can't complain about heat, noise, or throttling, because the finger will get pointed straight at nVidia too. What's left? Some cards drew over PCI-E spec. Let's make a lot of noise about it. lol
 
May 11, 2008
20,041
1,289
126
I thought tin whiskers were limited to pure tin solder? Isn't the point of the solder additives to prevent tin related issues (especially the expansion of tin under sub zero temperatures).

Solder additives are indeed the way to reduce the effect.
Lead used to prevent tin whiskers, but it is no longer allowed for consumer products. (Tin whiskers are great for a consumer society; they guarantee that you will have to buy a new product in time.)
(I am not really too happy about it.)

Some manufacturers seem to have used pure tin for a short time.
But that is not a good idea at all.

Copper, silver and sometimes antimony are used as additives for lead-free solder.


http://www.eetimes.com/document.asp?doc_id=1279227


EDIT:
Just thought about it, but another way to reduce the effects of broken-off whiskers that float around on the PCB is to use HumiSeal.
But this will not fly for consumer products: HumiSeal is a conformal coating, and I doubt it is environmentally friendly.
Everything that is meant to last is automatically toxic. It's the only way to prevent nature from taking back what we took from nature.
With HumiSeal you do not have to worry about foreign objects making potential short circuits.
Of course, tin whiskers that are not broken off and continue to grow could theoretically pierce the HumiSeal conformal coating and still make a short circuit.
Or the tin whiskers could grow underneath the HumiSeal conformal coating and make a short. But I am not sure about this.
Really something that HumiSeal could answer.

http://www.humiseal.com/
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
We don't know what the component ratings are.

Peak ratings are often quite high for components.

It may be that 200W peaks are okay, but the 70W average isn't, for a given component.

Overall, I would think the average number is the important one.

Average takes highs and lows into account. It's important, but not the end-all. Peak is important but is often much, much higher. The spec you would worry about is drawing more than 75W "continuously". That's very hard for a device to do with any kind of dynamic load, like a game.
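
As a toy illustration of average vs. peak with a spiky, game-like load (all numbers here are synthetic):

[CODE]
import random

random.seed(0)
# Synthetic per-millisecond slot-power samples: ~60 W baseline with brief spikes.
samples = [60 + random.uniform(-5, 5) + (90 if random.random() < 0.02 else 0)
           for _ in range(10_000)]

print(f"peak:    {max(samples):6.1f} W")
print(f"average: {sum(samples) / len(samples):6.1f} W")
[/CODE]

The peak lands well above 140 W while the average stays around 62 W, which is why peak numbers alone say little about sustained draw.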
 
May 11, 2008
20,041
1,289
126
The current spikes aren't an issue for the connector. Voltage spikes could be as high voltage can cause arcing, but we're well away from the kinds of voltages where you'd worry about that. Current spikes can cause issues with electronics, but this isn't one of those times. Here you're looking at heat causing an issue, and that's all determined by average power.

I would think so too when thinking of burned connectors. Maybe the reviewers should include not just an average value of the power but also an RMS value of the current that the card draws. That would be directly in line with the allowed DC current through the connector, and it would show whether it is less than the maximum current allowed by the connector specification.
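
Something like this is all it would take, given per-sample current measurements on the +12V slot pins. The sample values here are made up, and the 5.5 A figure is the commonly cited continuous +12V allowance for an x16 graphics slot:

[CODE]
import math

# Hypothetical per-millisecond current samples (amps) on the +12 V slot pins.
samples = [5.8, 6.1, 5.5, 7.9, 5.6, 5.9, 6.3, 5.4, 8.2, 5.7]

i_avg = sum(samples) / len(samples)
i_rms = math.sqrt(sum(i * i for i in samples) / len(samples))

SLOT_LIMIT_A = 5.5  # A, commonly cited +12 V continuous allowance for an x16 graphics slot

print(f"average current: {i_avg:.2f} A")
print(f"RMS current:     {i_rms:.2f} A  (heating scales with I_rms^2)")
print(f"within +12 V slot allowance: {i_rms <= SLOT_LIMIT_A}")
[/CODE]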
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Maybe 480 owners can downclock their cards until a solution shows up. Which brings me to a theory... I wonder if the 480 was meant to be released at 1080mhz stock clocks (as per initial leaks) which may not have had any power issues. And at a later stage, too close to manufacturing deadlines, bumped it to its current 1120mhz clocks without thorough testing.

Initial leaks were 850MHz, IIRC.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
How can I even check from where it draws the power?

Personally, I wouldn't worry about it. Honestly, ~10W is nothing to worry about. Look at the overclocking people do and the resulting power usage. This is really a non-issue for consumers.

Just curious, have any of these sites reported PCI-E power usage before? It's not a test I recall seeing.
 

ZGR

Platinum Member
Oct 26, 2012
2,054
661
136
Any confirmed reports of this thing killing motherboards or causing damage of any kind? I'm curious to know how many older cards may draw more than 75W from the PCIe slot. I do agree that an extra 10W is most likely harmless on most motherboards.
 
May 11, 2008
20,041
1,289
126
The more I consider this situation, the more I think that GloFo 14LPP yields are still very poor and that AMD is using silicon that really should have been cut down to a lower SKU or discarded in the RX 480 reference cards. I suspect the best bins are being held back for mobile/AIO products (especially for Apple). And I wouldn't be at all surprised if the next tier down was sold to AIBs, with the reference cards getting the worst, most marginal chips.

I agree. That is exactly what I was thinking too. They have a lot of clients that need chips. I would almost think these chips are made with LPE and the next chips would be LPP, but I think only the early versions for in-house testing were made with 14nm LPE (if that is even possible; it has some limitations when it comes to SRAM).

http://www.samsung.com/semiconductor/foundry/process-technology/14nm/

According to this article, Samsung's 14nm LPP was ready for mass production in early January 2016.
http://www.anandtech.com/show/9959/samsung-announces-14lpp-mass-production
http://news.samsung.com/global/sams...-14-nanometer-finfet-logic-process-technology

At that time, a Polaris demo had already been shown.
http://www.anandtech.com/show/9886/amd-reveals-polaris-gpu-architecture

So that might have been on 14nm LPE. GF has an alliance with Samsung, licensed 14nm LPP, and started production as soon as possible. I am sure the next batch will be a lot better.
 
Last edited:

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Yes to damage, it seems. Not many cases, but this sort of card will get put on cheap motherboards. It's also more likely to just cause long-term damage/shorten the lifespan than to immediately brick something, you'd imagine.

The other issue is if anyone does overclock/CrossFire them. Tom's immediately backed off long-term overclocking tests for fear of damaging their test bed when the draw on the PCIe slot went up to 100W.

It sounds like Tom's and other places have checked this for a bit. It's probably mostly accidental, because there's no even halfway sane reason to break this specification: even if you go over your combined PCIe slot + x-pin connector budget, it sounds like the 6/8-pin connectors are much more robust to going over specification.

The only halfway sane explanation is that they were fairly 'sure' it'd only be a ~12/30w chip and didn't rethink when it turned out some of them were much worse (see the GF stuff immediately below).
 

ZGR

Platinum Member
Oct 26, 2012
2,054
661
136
Yes to damage, it seems. Not many cases, but this sort of card will get put on cheap motherboards. It's also more likely to just cause long-term damage/shorten the lifespan than to immediately brick something, you'd imagine.

The other issue is if anyone does overclock/CrossFire them. Tom's immediately backed off long-term overclocking tests for fear of damaging their test bed when the draw on the PCIe slot went up to 100W.

It sounds like Tom's and other places have checked this for a bit. It's probably mostly accidental, because there's no even halfway sane reason to break this specification: even if you go over your combined PCIe slot + x-pin connector budget, it sounds like the 6/8-pin connectors are much more robust to going over specification.

The only halfway sane explanation is that they were fairly 'sure' it'd only be a ~12/30w chip and didn't rethink when it turned out some of them were much worse (see the GF stuff immediately below).

We need reviewers to try to kill test motherboards with the RX 480. The cheaper, the better! :twisted:
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
What would you like them to say? They've been informed and are checking to see if they can determine the issue in these instances. As someone else stated, if your card is drawing more than 75W from the mobo RMA it.

Still, I would at least have expected some kind of damage control from them. They usually respond within hours when something like this comes out. Even if they don't yet know the details, they could put out a PR message saying that a "small number of reference boards" might draw more power than allowed "under certain conditions", etc., to try and lessen the impact. This is their latest GPU that's going to make up the backbone of their GPU sales for the foreseeable future, and people are scared to install the cards because they might damage their mobos. It's a PR nightmare, for sure.
 
Feb 19, 2009
10,457
10
76
AMDJoe - Today at 10:41 AM As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).

TL;DR: fix in upcoming driver



https://www.techpowerup.com/223833/official-statement-from-amd-on-the-pci-express-overcurrent-issue

Their power controls in WattMan need improving as well; the cards are running 1.15V when it's not required. Lots of users on Reddit are now reporting major undervolting WHILE overclocking. Clearly it's not optimized properly.
 
May 11, 2008
20,041
1,289
126
For those interested in ASIC quality:

Download GPU-Z, hover over the top of the program, and select Settings.
Then a tab with ASIC quality is shown.

The next new feature is ASIC quality, designed for NVIDIA Fermi (GF10x and GF11x GPUs) and AMD Southern Islands (HD 7800 series and above), aimed at advanced users, hardware manufacturers, and the likes. We've found the ways in which AMD and NVIDIA segregate their freshly-made GPU ASICs based on the electrical leakages the chips produce (to increase yield by allotting them in different SKUs and performance bins), and we've found ways in which ASIC quality can be quantified and displayed. Find this feature in the context menu of GPU-Z. We're working on implementing this feature on older AMD Radeon GPUs.

At TPU there is a huge forum thread about posting ASIC quality:

https://www.techpowerup.com/forums/threads/post-your-gpus-asic-quality.189077/

Unfortunately, for the A10-6700 that I have, there is no ASIC quality value.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Normally the connectors or pins etc. can get damaged by excessive heat, which is due to the continuous current flowing in them. Temperature changes aren't instantaneous, so normally the specification will have a specified maximum continuous current and a maximum pulse current (the latter will be much higher than the former).

In this case, the PCI-E slot is clearly drawing a lot of continuous current above its specification. This could potentially lead to damage to the pins or connector.

Short momentary pulses are OK most of the time (as long as they are within spec), but continuous currents above specification could be bad news.

You make it very hard to believe you are an electrical engineer.

First, good luck burning a connector due to excessive current. The first thing that would go pop would be traces or power regulators.

Short spikes are what kill connectors/pins.

The reason for burned connectors is most of the time human error, or sometimes a damaged connector. To burn the pins like in the photos above, you don't have to draw any excessive currents beyond specifications.

Bad contact between two pins and even a light load will make it burn. Add current spikes to that, and you have a recipe for disaster.

I've seen a 10m 1.5mm² cable at 220V go pop at the connector with a <1kW load on it. Bad contact, sparks, smoke, tears.

On the other hand, I've used a 40-meter 1.5mm² cable at 220V with a 2kW+ welder plugged in. The welder had problems because of the voltage drop, the cable got pretty warm, and the 16A fuse tripped from time to time. But in this example the cable had no connectors other than your regular socket/plug.
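
For what it's worth, the numbers in that last example roughly check out with a quick calculation (standard copper resistivity; the cable dimensions and 2 kW load are as described above):

[CODE]
RHO_CU = 1.72e-8      # ohm*m, copper resistivity at ~20 C
LENGTH = 40.0         # m, one-way cable length
AREA = 1.5e-6         # m^2, 1.5 mm^2 conductor
VOLTAGE = 220.0       # V
LOAD_W = 2000.0       # W, welder

r_loop = RHO_CU * (2 * LENGTH) / AREA          # both conductors carry the current
i_load = LOAD_W / VOLTAGE                      # ~9 A
v_drop = i_load * r_loop
p_cable = i_load ** 2 * r_loop                 # heat dissipated along the cable

print(f"loop resistance: {r_loop:.2f} ohm")
print(f"voltage drop:    {v_drop:.1f} V")
print(f"cable heating:   {p_cable:.0f} W spread over {LENGTH:.0f} m")
[/CODE]

Roughly 8 V of drop and ~75 W of heat spread along 40 m of cable: a warm cable and a struggling welder, which matches the description.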
 

renderstate

Senior member
Apr 23, 2016
237
0
0
May 11, 2008
20,041
1,289
126

nenforcer

Golden Member
Aug 26, 2008
1,767
1
76
The Strix 960 is the only problematic model, but at least its average draw doesn't exceed PCI-E specs. Having an RX 480 reference model fail is clearly a lot worse.

An AIB model failing is worse, IMO. The AIB models are the very ones people are looking to to FIX the 480 problem. AIBs always have better quality and components than reference; that's why they exist, whereas reference is the lowest-priced version. Most people expect an AIB 480 to ship with a minimum 8-pin connector, for example, whereas the reference ships with a weaker 6-pin connector.



The PowerColor RX 480 Devil is already confirmed to have an 8-pin power connector, although I can't tell from that picture; it looks like only a 6-pin is plugged in, with 2 pins hanging out. Most people will have no problem with an AIB RX 480 having an 8-pin power connector; in fact, they will expect it, to allow overclocking headroom. It's the same difference as the connectorless 750 Ti versus those with the 6-pin connector, which at least let you attempt some overclocking if you wanted to.
 