where did you pull those prices from?
I can make up numbers too. How about $200 vs $550? ahahaha
That's not true. I'm usually very favorable towards AMD, but this is a failure. 14nm FinFET, on such a small chip, should not be chewing through 150W during gaming.
Compare it to its predecessors in this class: anyone remember Pitcairn, the 7850 and 7870?
Polaris 10 is worse when factoring in the node jump.
That tells me GloFo failed them. Call it what it is: FinFET should NOT be struggling to hit 1.26GHz with massive power consumption. The entire point of the FinFET transistor is to minimize current leakage, allowing it to operate at higher clock speeds and with higher voltage tolerances.
Now, I would have thought the RX 480 a much better product if its gaming load were ~100-110W. That would imply AMD low-balled clocks to get perf/W, leaving more performance on the table for overclockers and custom cards. But it's right at the edge.
As a gamer, I still think it's a great GPU at the price.
* I bought 2x RX 480 8GB, $379 AUD each.
GTX 970 3.5GB & AMD 390 8GB are ~$449 AUD. GTX 980 4GB is $629 AUD (got a price cut last week from its usual $749!!).
390X 8GB is $529 AUD. 1070s are ~$779 here and 1080s are $1199, ridiculous prices.
Logically, you can't say RX 480 is a bad GPU for the price. It is good for gamers to have that performance class down at mainstream prices.
But as a tech enthusiast, I am very disappointed to see such a small FinFET chip suck down that much power. To me, that's a failure, most likely GloFo's, but in the final analysis AMD takes the blame, because they should have known better and been more honest about expectations.
You don't get to stand there and claim 2.8x perf/W and talk about all this efficiency and coolness you get from 14nm FF when the card runs at 82C and at the limits of its PCB's power delivery.
I can tell you right now, with facts, that 1.26GHz is beyond this process's optimal clocks. Why? Look here:
A 1.4GHz OC with an aftermarket cooler:
http://oc.jagatreview.com/2016/06/t...deon-rx480-ke-1-4ghz-dengan-cooler-3rd-party/
Power usage jumps to 183W, which is insane for such a small clock speed bump.
All this screams that AMD was forced to clock it outside its optimal zone, because the node is giving them such a bad result.
I raised these points in the other thread and some of you accused me of being negative on AMD (falsely, even). But AMD doesn't get to go to a new node AND hype up efficiency gains, talking about 2.8x perf/W, and then end up so far behind Pascal on perf/W.
This is what my logic tells me; I don't need to sugar-coat the analysis, because I am not a blind fanboy.
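The scale of that jump is easy to sanity-check. Here's a rough sketch in Python, assuming the standard CMOS approximation that dynamic power scales as f·V² and ignoring static/leakage power entirely, so the numbers are illustrative only (the 150W, 183W, and clock figures are the ones quoted above):

```python
# Rough sanity check of the 150 W -> 183 W jump for a 1.26 -> 1.4 GHz overclock.
# Assumes dynamic power scales as P ~ f * V^2 (standard CMOS approximation)
# and ignores leakage, so treat the results as ballpark figures.

p1, f1 = 150.0, 1.26   # stock: watts, GHz
p2, f2 = 183.0, 1.40   # overclocked (per the jagatreview test above)

# Power expected from the frequency bump alone, at constant voltage:
p_freq_only = p1 * (f2 / f1)
print(f"expected from clocks alone: {p_freq_only:.0f} W")        # ~167 W

# Voltage increase implied if the whole jump were dynamic power:
v_ratio = ((p2 / p1) / (f2 / f1)) ** 0.5
print(f"implied voltage increase: {(v_ratio - 1) * 100:.1f} %")  # ~5 %
```

An 11% clock bump should only cost ~17W at constant voltage; the extra ~16W on top of that suggests the voltage had to rise too, which is exactly what you'd expect if the chip is already past the knee of its voltage/frequency curve.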
mine is never off, what about yours?

It's fairly easy to see where those numbers came from; I'm sure you can put your thinking cap on and figure it out.
Absolutely not true, if you have any memory of history.
Pitcairn was very efficient.
The 7950 and 7970 were very efficient too, actually almost on par with Kepler.
It didn't get bad until the GHz Edition and Hawaii came along.
Though even Hawaii, if you compare it to its competitor, big Kepler, was actually close. Now Hawaii destroys big Kepler for similar power usage.
With Maxwell, NV made a big leap forward, leaving AMD behind. But Fury & Nano actually brought them back very close. Depending on the review, the Asus Fury & Nano beat Maxwell on perf/W.
Even the Fury X vs the 980 Ti: similar power usage (~235W vs 250W), similar performance. These were all 28nm at TSMC, the same as NV's GPUs.
AMD went to Polaris, an enhanced GCN designed for perf/W, and they failed. I don't think it's the architecture when they've shown in the past they can do it. The difference this time is the node.
It's further disappointing, because if a 232 mm² Polaris chip is pulling 150W+, there's no hope for AMD to be competitive with Vega unless something changes.
I couldn't have said it better. I'll root for AMD for no other reason than I tend to like an underdog, but this release is unacceptable in my opinion.
Multiple reviewers have had cards that draw power above spec in a way that could be dangerous.
It's further disappointing, because if a 232 mm² Polaris chip is pulling 150W+, there's no hope for AMD to be competitive with Vega unless something changes. I hope it's just an issue with GF and that AMD can sort it out, either by switching to a different fab or just waiting for the process to mature.
http://pcpartpicker.com/products/video-card/#c=369&sort=a8&page=1
$450 for the cheapest 1070.

Maybe you should get a new one then...
Post #456.
This is all 100% true. I said the same myself: at $199 it's a good buy, but the chip itself is an epic engineering failure vs. the competition. AMD would have been best served to have scrapped Vega long ago and concentrated on making Polaris 10 and Polaris 11 much more efficient than they are now.
It's been maybe 6 years since I built a computer, and I was ready this summer to build around this 480, but the performance is a little underwhelming. For someone building new, we're stuck in a hard spot: the 480 is not faster than a 290X/390X, and the 1070/1080 are not easily obtainable, price- or supply-wise. I want to get a 2K monitor, but that leaves me with no choice but to buy a last-gen Fury X (which I don't want, since it sucks so much power) or a 980/980 Ti. If AMD were to launch, or at least announce, the 490, I would not be as disappointed, but for now all I can do is wait and see how the 1060 turns out.
Well, I'm thinking Vega's improved perf/W will primarily come from HBM2; GCN5 will just be further minor tweaks. That's my expectation for it.
on performance parts? nope. GPUs have gotten way too power hungry in the last 5 years.

Does anyone know if either AMD or Nvidia is releasing any passively cooled GPU around now?
I said performance parts. Pitcairn is not a performance part.
I could dig up the numbers, but we all know that when the GTX 680 (Kepler) came out, it was not only faster in the games of the day but also consumed less power than the 7970 (Tahiti).
I agree, something isn't right with the GloFo 14nm LPP process.
FinFETs were supposed to bring a large decrease in power, so what caused AMD to have to pump so much voltage into the chips?
If you look at the GPU-Z shots, most of the cards are revision C7, so it looks like AMD went through seven revisions trying to get the power monster under control (I doubt it was anything else).
Raja said they had Polaris 11 up and running first, then Polaris 10, so I think it's safe to assume Polaris 11 used the 14nm LPE process to hit the power requirement they needed for the OEM deadline (which has already passed), with 14nm LPP for Polaris 10.
To make this easier to understand for people who love to flame: the issue we're talking about here doesn't mean these cards use more power than the cards they replaced. They don't; it's lower.
The issue is that the 14nm node was supposed to bring greater power savings overall, and we aren't seeing as big a power saving from GloFo's 14nm LPP process. In fact, it looks like the same power curve as the 28nm process AMD's CPUs are on. Which is why this is puzzling.
The 480 is still a good card for its target audience, but it should have been drawing even less power than what we're seeing.
Samsung has made products on both 14nm LPP & LPE, and from what I've read they aren't having the same power issues. Then again, they don't make chips as big as what AMD is doing.
The question is why. What is it that needs so much more power than expected from the new 14nm node?
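One way to frame the gap is a back-of-envelope sketch: assume the RX 480 performs roughly on par with the R9 390 (as other posts in this thread suggest) and take the 390's ~275W board power as the baseline. Both figures are assumptions for illustration, not measurements:

```python
# Back-of-envelope: what the "2.8x perf/W" claim would imply, assuming
# RX 480 performance roughly matches the R9 390 (per posts in this thread)
# and taking the R9 390's ~275 W board power as the baseline.

baseline_watts = 275.0   # R9 390 board power, assumed baseline
claimed_gain = 2.8       # AMD's marketing figure for Polaris

expected_watts = baseline_watts / claimed_gain
print(f"expected at 2.8x perf/W: ~{expected_watts:.0f} W")       # ~98 W

measured_watts = 150.0   # gaming load reported in reviews
actual_gain = baseline_watts / measured_watts
print(f"actual perf/W gain at equal perf: ~{actual_gain:.1f}x")  # ~1.8x
```

Under those assumptions, the marketing claim would put the card around 100W, which is exactly the ~100-110W figure earlier posts say would have made it a great product; the measured ~150W works out to roughly a 1.8x gain instead.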
LOOOL... but the 480 is about to fight the 1060... however, I can't see the 1060 winning, since it's a 50% cut of the 1080. That's a deeper cut than the 980 was!
PS: Is it just me, or has Intel Iris Pro faded into eternal oblivion?