Are those cards going to be as efficient as NVIDIA's offering?
The GTX 1070 pushes ~6.5 TFLOPS at 150W and requires an 8-pin PCIe power connector.
The RX 480 pushes 5.5 TFLOPS at 150W and requires a 6-pin PCIe power connector.
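For a rough sense of scale, compute per watt from just those quoted figures works out like this; a back-of-envelope sketch using the stated board power, not measured gaming draw:

```python
# Back-of-envelope compute efficiency from the quoted spec-sheet figures.
# Uses the stated 150W board power for both cards, not measured gaming draw.
cards = {
    "GTX 1070": {"tflops": 6.5, "watts": 150},
    "RX 480":   {"tflops": 5.5, "watts": 150},
}

for name, spec in cards.items():
    gflops_per_watt = spec["tflops"] * 1000 / spec["watts"]
    print(f"{name}: {gflops_per_watt:.1f} GFLOPS/W")

# GTX 1070: ~43.3 GFLOPS/W
# RX 480:   ~36.7 GFLOPS/W (about 15% behind on paper at the same board power)
```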
Almost no one who took the time to carefully study AMD's statements and the Polaris 10 specs expected Fury X, and especially not Fury X+, levels of performance out of a <240mm2 die with 2304 shaders, a 256-bit bus, then-rumoured 1.05GHz-1.1GHz clocks, and regular GDDR5. The only people who did were those who never followed the leaks/news carefully, or the dreamers. Even then, I would say some of them were being realistic and expected the higher-end 175W 2560-shader Polaris 10 to be Fury X+. Very few people made such statements about a cut-down P10. The performance is exactly where many of us predicted it would land. The price is the surprising part.
Last time we had a node shrink, AMD launched the 7870/7850, which performed at the level of the previous generation's top-end cards, and Pitcairn was 212mm^2. It also had fewer shaders than Cayman, but even with launch drivers that were poorly optimised for GCN it still outperformed the 6970 by a good margin.
I do not see any reason to think that the performance profile for P10 will be much different from the last node shrink, which would put it in the 390X - Fury X performance category for the cut-down and full die respectively.
The big question will be prices: they could do $249 and $349 again, like with the 7870/7850, or they could be more aggressive and do something like $229/$299.
It is important because:
One company charges $700+ for a 300mm2 die GFX card and no one bats an eyelid. Why are people obsessed with a few watts (cents per day) of efficiency? The difference is HUNDREDS of dollars upfront. So bizarre.
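For what the "cents per day" actually amounts to, here is a quick sketch; the wattage gap, hours of use and electricity rate are assumptions picked for illustration, not figures from this thread:

```python
# Rough cost of a power-draw gap between two cards. All three inputs are
# assumptions for illustration only.
watt_gap = 50          # assumed difference in power draw (W)
hours_per_day = 4      # assumed gaming time per day
price_per_kwh = 0.12   # assumed electricity rate (USD/kWh)

kwh_per_day = watt_gap / 1000 * hours_per_day
cost_per_day = kwh_per_day * price_per_kwh
print(f"~{cost_per_day * 100:.1f} cents/day, ~${cost_per_day * 365:.0f}/year")
# -> ~2.4 cents/day, roughly $9/year; a price gap of hundreds of dollars
#    takes many years of gaming to claw back through efficiency alone.
```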
Here is what I said in the Guru3D rumour thread.
I did not anticipate a $199 4GB part, but it looks like I am pretty close to the mark on perf/price.
I do wonder about a $299 part though; perhaps they are planning a stealthier launch of that to stop NV doing a pre-emptive price drop on the 1070, or perhaps it is a similar situation to Tonga, where Apple had all the full-die parts.
You are right. An investor losing money is still called an investor, and that is probably her main audience.
It is impossible to defend her after this
You can't expect something valuable out of that article after reading this.
One company charges $700+ for a 300mm2 die GFX card and no one bats an eyelid. Why are people obsessed with a few watts (cents per day) of efficiency? The difference is HUNDREDS of dollars upfront. So bizarre.
As an enthusiast I am not really qualified to criticise AMD's design choices, but damn! Why not target a 300mm2 die size for the mid-range Polaris 10? Polaris 11 could still have been kept the same for laptops, but this chip is going into desktops, and GloFo was already getting good yields based on Samsung's results. I just think AMD undershot the sweet spot here. A ~3200sp part with the same 8-pin connector as the 1070 would have been a sweet card, and they could easily have charged $400 for it and still undercut NV.
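Very rough numbers behind that wish, scaling shader count linearly with die area from the figures in this thread; real chips do not scale linearly (memory controllers, display, multimedia blocks), so treat it as a ballpark sketch only:

```python
# Naive linear scaling of shader count with die area, using the ~240mm2 /
# 2304-shader Polaris 10 figures mentioned earlier in the thread.
p10_area_mm2 = 240
p10_shaders = 2304
target_area_mm2 = 300

scaled_shaders = p10_shaders * target_area_mm2 / p10_area_mm2
print(f"~{scaled_shaders:.0f} shaders")
# -> ~2880 from pure linear scaling; ~3200 would need a slightly bigger die
#    or a denser layout, so the idea is in the right ballpark.
```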
Seeing a lot of disappointment on different forums. That has to do with the huge expectations around this card, with many expecting Fury X+ all-around performance. Of course we have yet to see numbers, but GeForce GTX 970/980 performance (told to the press at the Macau event, according to Videocardz) is not out of reach for GP106, especially if rumours of a 192-bit 6GB card replacing GM204 are correct.
Only 15 posts and you have already caught on? That is a pretty fast learning rate, bravo.
One company charges $700+ for a 300mm2 die GFX card and no one bats an eyelid. Why are people obsessed with a few watts (cents per day) of efficiency? The difference is HUNDREDS of dollars upfront. So bizarre.
Historically, Nvidia has usually understated its TDP while AMD has usually overstated it.
Are those cards going to be as efficient as NVIDIA's offering?
The GTX 1070 pushes ~6.5 TFLOPS at 150W and requires an 8-pin PCIe power connector.
The RX 480 pushes 5.5 TFLOPS at 150W and requires a 6-pin PCIe power connector.
Am I the only one who feels the TDP for the RX 480 is a bit high, or at least unexpectedly high, for this card and performance level? If its performance in DX11 and DX12 games is roughly between a 970 and a 980, as has been reported, then AMD is only just catching up to, or barely surpassing, Maxwell's perf/W with the jump to 14nm FinFET. Moreover, add 15-30W and you have a stock 1080, which is much, much faster than a 980. What am I missing? Because AMD is touting enormous performance/watt gains (presumably against Hawaii, I suppose).
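To put that worry into numbers: a tiny sketch assuming the "between a 970 and a 980" rumour, with placeholder performance indices chosen only for illustration (the TDPs are the rated figures; the indices are not benchmark results):

```python
# Perf-per-watt sketch for the scenario described above. Performance indices
# are placeholders (970 = 100), not measurements; TDPs are rated values.
cards = {
    "GTX 970": (100, 145),
    "GTX 980": (115, 165),
    "RX 480 (if between 970 and 980)": (108, 150),
}

for name, (perf, tdp) in cards.items():
    print(f"{name}: {perf / tdp:.2f} perf/W")
# ~0.69, ~0.70 and ~0.72 respectively -- only a marginal perf/W lead over
# Maxwell if the rumoured positioning holds, which is the point of the post above.
```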
What we're missing is actual power usage figures.
Historically, Nvidia has usually understated its TDP while AMD has usually overstated it.
The main targets for Polaris 10 and 11 would appear to be GP106 and GP107, but even so AMD may still intend to compete to some degree with the GTX 1070. It all depends on whether or not a higher-binned version of Polaris 10 exists (Lisa Su's comment about covering the entire segment from $100-300 would seem to suggest so), and how that version performs.
I don't expect a $300 Fully unlocked Polaris 10 to beat or even match a GTX 1070, but if it can get close enough (say within 10%), then they would effectively be competing with each other.
The GTX 1080 is of course out of the question and will probably have free rein until GP102 and Vega.
It's really not all that different from the 28nm launch, where 7850s were around $200 and overclocked to match an overclocked GTX 580. Definitely not the most ridiculous deal in GPU history, but certainly not a bad one.
That's not correct. Plenty of AMD cards go above their TDP. Just look at TechPowerUp.
Want to see VR explode? Supply VR equipment to the top 3 porn producers. Make them sign an NDA where the equipment came from, of course. :sneaky:
Regarding the AotS bench shown at the presentation, there is a statement on Reddit by an AMD spokesperson.
https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/
AMD_Robert said:
System specs:
In Game Settings for both configs: Crazy Settings | 1080P | 8x MSAA | VSYNC OFF
- CPU: i7 5930K
- RAM: 32GB DDR4-2400Mhz
- Motherboard: Asrock X99M Killer
- GPU config 1: 2x Radeon RX 480 @ PCIE 3.0 x16 for each GPU
- GPU config 2: Founders Edition GTX 1080
- OS: Win 10 64bit
- AMD Driver: 16.30-160525n-230356E
- NV Driver: 368.19
Ashes Game Version: v1.12.19928
Benchmark results:
2x Radeon RX 480 – 62.5 fps | Single Batch GPU Util: 51% | Med Batch GPU Util: 71.9% | Heavy Batch GPU Util: 92.3%
GTX 1080 – 58.7 fps | Single Batch GPU Util: 98.7% | Med Batch GPU Util: 97.9% | Heavy Batch GPU Util: 98.7%
The elephant in the room:
Ashes uses procedural generation based on a randomized seed at launch. The benchmark does look slightly different every time it is run. But that, many have noted, does not fully explain the quality difference people noticed.
At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title.
The content being rendered by the RX 480--the one with greater snow coverage in the side-by-side (the left in these images)--is the correct execution of the terrain shaders.
So, even with fudgy image quality on the GTX 1080 that could improve their performance a few percent, dual RX 480 still came out ahead.
As a parting note, I will mention we ran this test 10x prior to going on-stage to confirm the performance delta was accurate. Moving up to 1440p at the same settings maintains the same performance delta within +/-1%.
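Taking the quoted numbers at face value, the delta works out like this (just arithmetic on the figures above):

```python
# Lead of the dual RX 480 over the GTX 1080 in the quoted AotS run.
dual_rx480_fps = 62.5
gtx1080_fps = 58.7
lead_pct = (dual_rx480_fps / gtx1080_fps - 1) * 100
print(f"Dual RX 480 lead: {lead_pct:.1f}%")   # -> ~6.5%

# Quoted GPU utilisation (single / medium / heavy batches):
rx480_util   = (51.0, 71.9, 92.3)   # well short of fully loaded
gtx1080_util = (98.7, 97.9, 98.7)   # essentially maxed out
```

So the two 480s were ahead by about 6.5% while sitting well below full utilisation in the lighter batches, which is what the statement is pointing at.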
Looking at TPU's latest review (the 1080 review), the only card that goes above its TDP in typical gaming is the 970 (11W over).
When looking at peak values, the 780 Ti (18W over), 970 (46W over), 980 (19W over), 1080 (4W over), 285 (10W over), 290X (4W over), 390 (5W over) and Fury X (5W over) all go above TDP. Of course, peak values don't necessarily indicate anything on their own, since they may or may not correspond to a thermally significant time period (which is the relevant metric for TDP).
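To put those peak overages in relative terms, here is a small sketch; the rated TDPs below are from memory and should be treated as approximate:

```python
# Peak overage from the post above expressed as a percentage of each card's
# rated TDP. TDP values are approximate, from memory.
peak_over = {   # card: (rated TDP in W, peak overage in W)
    "GTX 780 Ti": (250, 18),
    "GTX 970":    (145, 46),
    "GTX 980":    (165, 19),
    "GTX 1080":   (180, 4),
    "R9 285":     (190, 10),
    "R9 290X":    (250, 4),
    "R9 390":     (275, 5),
    "Fury X":     (275, 5),
}

for name, (tdp, over) in peak_over.items():
    print(f"{name}: +{over}W peak = {over / tdp * 100:.0f}% over its {tdp}W rating")
# The 970 stands out at roughly 32% over; most of the others are in the low
# single digits.
```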
Historically, Nvidia has usually understated its TDP while AMD has usually overstated it.
That's not correct. Plenty of AMD cards go above their TDP. Just look at TechPowerUp.
The 290X is over as well.
290X 44W over.
RX 480 isn't rated at 150W TDP because it uses much less.
It's the same myth and excuse that was used in the CPU section for a long time, one that is baseless at best.