It should be a minor upgrade at the very least (wait for benches to confirm).
But by pure maths it should be at least a 390X, and with the added architectural improvements perhaps faster. I'm expecting this to benefit games more than synthetic benchmarks.
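The "pure maths" here is just shader count × 2 ops per clock (FMA) × frequency. A quick back-of-the-envelope sketch, assuming the leaked 2304-shader / ~1266MHz figures for the RX 480 (unconfirmed until launch):

```python
# Peak FP32 throughput: shaders * 2 ops/cycle (FMA) * clock.
# The RX 480 numbers are the leaked/assumed specs, not confirmed.
def tflops(shaders, clock_ghz, ops_per_cycle=2):
    """Theoretical single-precision TFLOPS."""
    return shaders * ops_per_cycle * clock_ghz / 1000.0

cards = {
    "RX 480 (assumed)": (2304, 1.266),
    "R9 390X":          (2816, 1.050),
}
for name, (shaders, clock) in cards.items():
    print(f"{name}: {tflops(shaders, clock):.2f} TFLOPS")

# RX 480 (assumed): 5.83 TFLOPS
# R9 390X:          5.91 TFLOPS
# -> roughly 390X level on paper; any per-clock gains push it past.
```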
I made a graph guesstimating the real-world power draw of an RX 480 based on the link ShintaiDK provided.
And I think what everybody can agree on is that the market has been starved of real improvement. 14nm and 16nm are a godsend to gamers. There are market constraints though: anything beyond $250-$300 is a really tough sell to most people. Most of my gaming friends (even the ones who aren't poor and are ~30 years old) would never buy a >$300 graphics card.
AMD's reference cards (even most custom cards) have gaming power usage much lower than the rated board power. The rated TDP is only met in workloads like mining or HPC.
You can look at many of their previous-gen cards and the trend holds.
An RX 480 with 150W board power and a ~100W gaming load is about right.
If 1.5GHz+ cards need a voltage bump to pass validation, say goodbye to perf/watt.
Those cards may get close to GTX 1070 performance, but at increased power consumption, perhaps even higher than a GTX 1080's.
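For a rough sense of why: dynamic power scales about linearly with clock but quadratically with voltage (P ∝ C·V²·f). A sketch with illustrative voltages (the 1.00V/1.15V operating points are assumptions, not measurements):

```python
# Dynamic power scales ~linearly with clock and ~quadratically with voltage:
# P_dyn ∝ C * V^2 * f. Voltages below are illustrative assumptions.
base_clock, base_volt = 1.266, 1.00   # assumed stock operating point (GHz, V)
oc_clock, oc_volt     = 1.500, 1.15   # assumed 1.5GHz validation point

perf_gain  = oc_clock / base_clock
power_gain = (oc_clock / base_clock) * (oc_volt / base_volt) ** 2

print(f"performance:   +{(perf_gain - 1) * 100:.0f}%")        # ~+18%
print(f"dynamic power: +{(power_gain - 1) * 100:.0f}%")       # ~+57%
print(f"perf/watt:     {perf_gain / power_gain:.2f}x stock")  # ~0.76x
```

Roughly 18% more performance for ~57% more dynamic power, i.e. perf/watt falls to about three quarters of stock.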
I would actually hope that gaming power usage will be closer to the maximum power usage with the updates to GCN. I don't necessarily think it's a good thing that the early GCN cards used much less power in games than when running specialised compute code.
All cards are like this. Games don't push cards 100% of the time because the workload varies from scene to scene. Walking through a field will likely use less power than fighting 300 people while casting spells.
Things like mining can push cards harder because they're doing the same thing over and over, non-stop.
Nvidia lists a gaming TDP and AMD lists a highest-utilization TDP.
He wasn't asking for a change in how TDP is presented. What he wanted was for the card to be fully utilized during gaming, which by the nature of games won't happen.
Not sure you are grasping how nVidia and AMD rate their TDPs.
AMD = MAX power draw, i.e. actual TDP.
nVidia = GAMING power draw. They often go over that when doing things that heavily utilize the card.
I for one would NOT want my card to utilize every available watt when gaming.
I don't think most people realize the difference in how each company lists its TDP, so they're surprised when a card's real-world power draw is lower than the spec.
I don't expect the cards to be fully utilized, but AMD has at least recently shown lower gaming performance per FLOP than the competition, as well as a larger delta in power consumption between gaming and compute. Case in point: 3.8 TFLOPS for a 7970 vs 3.1 for a GTX 680, or ~5.9 TFLOPS for a 390X vs 4.6 for a 980.
Obviously compute workloads vary (Bitcoin mining would draw less power than some games, for example, due to essentially zero memory-controller load), but a shrinking delta between compute and graphics workloads would be welcome if it were caused by better gaming utilisation of the GPU's resources. Hopefully, as developers come to grips with DX12 and more games are released, this starts to happen more.
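To put rough numbers on that per-FLOP gap: if each pair delivered broadly comparable gaming performance (the implication above, and an assumption here rather than a benchmark result), the implied gaming perf per theoretical TFLOP works out like this:

```python
# Implied gaming performance per theoretical TFLOP, assuming each pair
# delivered roughly equal gaming performance (an assumption, not a benchmark).
pairs = [
    ("HD 7970", 3.8, "GTX 680", 3.1),
    ("R9 390X", 5.9, "GTX 980", 4.6),
]
for amd, amd_tf, nv, nv_tf in pairs:
    # With ~equal gaming perf, relative perf/FLOP is the inverse TFLOPS ratio.
    print(f"{amd}: ~{nv_tf / amd_tf:.0%} of the {nv}'s gaming perf per FLOP")

# HD 7970: ~82% of the GTX 680's gaming perf per FLOP
# R9 390X: ~78% of the GTX 980's gaming perf per FLOP
```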
If you lock FPS, then yes, you want to make sure that the card can meet the FPS target. But we're moving away from locking FPS with the sync schemes. Games vary; compute loads usually don't. If a game is maxing out a card during moderate scenes, then it won't have any headroom left for scenes where more is going on. It's true that you want the drivers to be as efficient as possible, but wanting the delta to be reduced is the wrong way to look at it.
If you lock FPS, then yes, you want to make sure that the card can't meet the FPS target. But we're moving away from locking FPS with the sync schemes.
If you lock FPS, then yes, you want to make sure that the card can meet the FPS target. But we're moving away from locking FPS with the sync schemes.
He really did type that, didn't he.....