That's not true. I'm usually very favorable towards AMD but this is a failure. A small chip on 14nm FF should not be chewing 150W during gaming.
Compare it to its predecessors in this class; anyone remember Pitcairn, the 7850 and 7870?
Polaris 10 is worse when factoring in the node jump.
That tells me GloFo failed them. Call it what it is, but FinFET should NOT be struggling to hit 1.26GHz without massive power consumption. The entire point of the FinFET transistor is to minimize current leakage, allowing it to operate at higher clock speeds and with better voltage tolerances.
Now, I would have thought the RX 480 a much better product if its gaming load were ~100-110W. That would imply AMD low-balled clocks to get perf/W, leaving more performance on the table for overclockers or custom cards. But it's right at the edge.
As a gamer, I still think it's a great GPU at the price.
* I bought 2x RX 480 8GB, $379 AUD each.
GTX 970 3.5GB & AMD 390 8GB are ~$449 AUD. GTX 980 4GB is $629 AUD (got a price cut last week from its usual $749!!).
390X 8GB is $529 AUD. 1070s are ~$779 here and 1080s are $1199, ridiculous prices.
Logically, you can't say RX 480 is a bad GPU for the price. It is good for gamers to have that performance class down at mainstream prices.
But as a tech enthusiast, I am very disappointed to see such a small FinFET chip suck down that much power. To me, that's a failure, most likely GloFo's, but in the final analysis AMD takes the blame because they should have known better and been more honest about expectations.
You don't get to stand there and claim 2.8x perf/W and talk about all this efficiency and coolness you get from 14nm FF when the card runs at 82C and at the limits of its PCB's power delivery.
I can tell you right now, with facts, that 1.26GHz is beyond the optimal clocks for this process. Why? Look here:
1.4GHz OC with an aftermarket cooler:
http://oc.jagatreview.com/2016/06/t...deon-rx480-ke-1-4ghz-dengan-cooler-3rd-party/
Power usage jumps to 183W, which is insane for such a small clock speed bump.
All this screams that AMD was forced to clock it outside its optimal zone because the node is giving them such a bad result.
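That disproportionate jump is what you'd expect from the standard dynamic-power rule of thumb, P ∝ f·V²: once you're past the efficient part of the voltage/frequency curve, a small clock bump also needs a voltage bump, and power blows out. A rough sketch (the voltage figures are illustrative assumptions, not measured RX 480 values):

```python
# Rough dynamic-power scaling sketch: P scales linearly with frequency
# and quadratically with voltage. Voltages below are guesses for
# illustration only, not measured RX 480 numbers.

def scaled_power(p_base, f_base, v_base, f_new, v_new):
    """Scale dynamic power with frequency (linear) and voltage (quadratic)."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Stock: ~150 W at 1266 MHz, assumed ~1.15 V.
# OC: 1400 MHz, assumed to need ~1.25 V to hold the clock.
p_oc = scaled_power(150, 1266, 1.15, 1400, 1.25)
print(round(p_oc))  # ~196 W: a ~10% clock bump costs ~30% more power
```

The exact numbers depend on the real voltage curve, but the shape of the result matches the ~150W-to-183W jump reported in the link above.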
I raised these points in the other thread and some of you accused me of being negative on AMD (falsely, even). But AMD don't get to move to a new node AND HYPE UP efficiency gains and talk about 2.8x perf/W while being so far behind Pascal on perf/W.
This is what my logic tells me, I don't need to sugar coat the analysis because I am not a blind fanboy.
100% agree. The 480 brings great value to the $200-230 price point, replacing overpriced turds like the 960 and, to a lesser extent, the 380/380x, and it becomes my immediate recommendation for anyone on a budget. But as a product in itself it's quite the failure for the enthusiast type of user on a budget: no overclocking potential, and power consumption goes through the roof if you try. No aftermarket cooling is going to fix that, unlike Hawaii. The GPU itself is to blame here.
GF has failed hard. Honestly, I don't think anyone expected such a level of failure from them, even considering they had help getting their 14nm node running.
Polaris as an architecture certainly is good and a nice upgrade over previous GCN versions: far fewer resources (ROPs and memory bandwidth in particular) plus a small clock speed bump, yet it equals a 390/390x in most cases while halving power. That is what was expected on the performance front, yet the final product is lacklustre because the power characteristics aren't what you would expect from a 14nm-class product (compared to Intel's and TSMC's versions of the node and their products).
It's even worse when you consider that this very foundry is going to make Vega/Navi and Zen. Logic says one could view P10 as a pipe cleaner to refine the node for the products that follow, because as it stands it's not viable; there's too much variance, and that's unacceptable. Even the HD4770, as a pipe cleaner for TSMC's 40nm node, wasn't this bad for one of the first products out of the gate at the time.
Vega family chips had better come out of the gate with these underlying problems fixed... I wonder how P10 would have turned out on TSMC's 16nm process; I'm pretty sure power wouldn't jump to 183W for a paltry ~150MHz overclock.