RussianSensation
Your summary conveniently doesn't include any compute tests, where the 7970 mostly beats the 6990 and holds closer to a 70-80% advantage over the 6970, and GPGPU was the driving force behind the architecture changes.

You are missing a key difference ==> the HD5870 destroyed the HD4870 in the most demanding games, often matching the HD4870 X2 and GTX295. The HD5870 had no problem doubling the HD4870's performance in demanding games at the time.
I did a summary of GTX590 vs. HD7970 vs. GTX580 based on Anandtech's review. GTX590 is on average 32% faster than HD7970. HD5870 was nearly as fast as an HD4870 X2 in at least 30-40% of the benchmarks.
Also, I believe they have a huge amount of performance to gain by fixing DX9 titles on GCN, because the numbers for old DX9 games make less than no sense at the moment.
Also, I looked at AnandTech's 5870 review, and the 5870 matching or beating the 4870 X2 pretty much happened only when CrossFire didn't scale all that well, which seems to have happened more often back then.
After a couple of years of driver improvements for SLI/CF, I'd say it has become harder for single GPUs to compete against the X2 of the previous gen...
As for your compilation of unplayable games: in two of them the GTX 580 is under 20 fps, and since 45 fps isn't playable by your standards, I guess 50+ is?
Are you expecting a 150% improvement for Kepler over the GTX 580?
Even for the other two games, Kepler will need a 100+% improvement to meet your "playable" rating.
The move to GPGPU or SIMD is more or less a mandated course given deferred rendering in games and other compute functions. Neither AMD nor NVIDIA has a choice in the matter if they want to stay relevant.
Watch the DICE BF3 presentation and you can get an idea of what is going on. They are dividing the rendering pipeline into large sets of simpler functions calculated in the first pass (color, spec, surface normal, ambient occlusion) and then solving for multiple light sources later. The sheer number of conditionals and dependencies requires powerful GPGPU-style schedulers (a toy sketch of the idea follows below).
As long as deferred rendering is in vogue for future game engines, older-style VLIW plug-and-chug architectures are not an option to pursue. The added benefit of being able to do HPC compute and synergize with CPU development is just gravy as far as AMD's motivation to go GPGPU is concerned.
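To make the two-pass idea concrete, here is a minimal toy sketch in Python (purely illustrative, not DICE's actual pipeline; every name in it is made up). The first pass stores per-pixel surface attributes in a G-buffer instead of shading; the second pass loops over the lights for every pixel, which is exactly the kind of wide, branch-heavy, data-parallel work a compute-style scheduler is built for:

```python
# Toy deferred renderer: illustrative only, all names are hypothetical.

LIGHTS = [
    {"pos": (0.0, 5.0, 0.0), "color": (1.0, 0.9, 0.8)},
    {"pos": (3.0, 1.0, 2.0), "color": (0.2, 0.3, 1.0)},
]

def geometry_pass(width, height):
    """Pass 1: store surface attributes per pixel instead of shading."""
    gbuffer = []
    for y in range(height):
        for x in range(width):
            gbuffer.append({
                "albedo": (0.5, 0.5, 0.5),            # base color
                "normal": (0.0, 1.0, 0.0),            # surface normal
                "pos":    (float(x), 0.0, float(y)),  # world position
                "ao":     1.0,                        # ambient occlusion term
            })
    return gbuffer

def lighting_pass(gbuffer):
    """Pass 2: solve lighting once per pixel per light, no geometry needed."""
    out = []
    for px in gbuffer:
        r = g = b = 0.0
        for light in LIGHTS:
            # Crude distance falloff as a stand-in for a real BRDF.
            d = sum((a - b_) ** 2 for a, b_ in zip(px["pos"], light["pos"])) ** 0.5
            atten = px["ao"] / (1.0 + d * d)
            r += px["albedo"][0] * light["color"][0] * atten
            g += px["albedo"][1] * light["color"][1] * atten
            b += px["albedo"][2] * light["color"][2] * atten
        out.append((r, g, b))
    return out

frame = lighting_pass(geometry_pass(4, 4))
print(frame[0])  # shaded color of the first pixel
```

The point is that lighting cost now scales with pixels × lights instead of geometry × lights, but every pixel effectively becomes a small compute kernel, which is why scheduler strength matters.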
Their improvement is due to a massive shift from 40nm to 28nm. An architecture that sacrifices performance in one field (gaming) will still come out ahead with such a huge process improvement.
It's your right to ignore all the architecture improvements that you as a gamer don't find useful and label them meaningless, but what ultimately matters for AMD is the revenue/market success of the product, and I don't agree that they are insignificant looking at the overall picture. I also doubt that nVidia doesn't care about GCN's GPGPU improvements...

My summary took every single game benchmark from Anandtech's review. I didn't include compute because I feel that's meaningless for consumers at the moment. I don't do any "professional compute tasks" on the desktop. Do you? If people cared about compute, not a single HD5xxx-series card would have sold in the consumer space while Fermi was available for 2 years. Feel free to tell me which programs you use that take advantage of the added compute performance.
When we can use the GPU to accelerate common tasks, then I think it'll be a more exciting development for consumers.
How much better did nVidia get 14 months later?

I agree, but that doesn't mean I am going to give it a pass, especially when AMD thinks it's impressive to beat the GTX580 by just 25% on a new 28nm process, 14 months later.
I'm not judging your standards, I was just using the numbers to derive the conclusion that it's pretty much a given that Kepler will fail to make those games playable by the same standards.

For racing games, I want a 45 fps minimum.
For FPS games, I want 60 fps average, with 40 fps minimums.
If I were OK with 20 fps minimums and a 45 fps average, I'd either write for [H] or play my PS3 @ 30 fps without spending a dollar to upgrade my PC hardware.
I bought a 4870 3 years ago, and I'm still waiting for a card that *I* would find a meaningful upgrade. The GTX 580 is about the minimum performance-wise, but at $500 it's pricier than I'd care for, not to mention ~60% more than I paid for the 4870. The 7970 is a better value, but also more than I'm willing to pay. But I don't think of either of them as lackluster just because they aren't for me.

150% more? No. But at least 50% more over the GTX580, not the 25% that the HD7970 delivered.
In 2 years' time I purchased an HD4890 ($175), a GTX470 ($190) and an HD6950 unlocked to a 6970 ($230). Those 3 cards in total cost $595 without resale. Going from the HD4890 --> HD6970 netted me at least a 75-80% increase. Since the HD7970 is only 40% faster than my HD6970, my upgrade path involves paying $550 to get just 40% more performance? I was hoping for a much higher performance increase from the factory, so that the HD7950 might have been a viable upgrade, but now it might only be 25-30% faster than my card for $450.
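For comparison, here is that upgrade arithmetic laid out as a quick back-of-the-envelope script (the figures are taken straight from the post above; "gain per dollar" is just an illustrative metric, not an official one):

```python
# Upgrade-path arithmetic using the figures quoted in the post above.
past_spend = 175 + 190 + 230          # HD4890 + GTX470 + HD6950-to-6970
past_gain  = 0.775                    # midpoint of the quoted 75-80% increase
new_spend  = 550                      # HD7970 launch price
new_gain   = 0.40                     # HD7970 over HD6970, per the post

print(f"Past path: {past_gain:.0%} gain for ${past_spend}"
      f" -> {past_gain / past_spend * 100:.3f}% per dollar")
print(f"HD7970:    {new_gain:.0%} gain for ${new_spend}"
      f" -> {new_gain / new_spend * 100:.3f}% per dollar")
```

By that rough metric, the HD7970 upgrade delivers only a little over half the performance gain per dollar of the previous upgrade path.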
Fermi was hyped by some as the second coming of Jesus: a huge chip arriving 6 months later, so it had to rock, right? And it turned out to be ~10-15% faster than the 5870 and a lot hotter. I could be wrong of course, but there's a certain déjà vu feeling here. AMD can already easily come out with ~10% faster clocks, add another 5% in driver improvements, and Kepler will need to be ~60% faster just to get to the same position they were in with the 580 vs. the 6970, and they will be late. I'll be very impressed if they can pull off a 70+% improvement in Q2. That 1024-core beast supposedly coming in Q4 will be coming very close to SI's successor, methinks.

Agreed. There isn't a double standard on my part, trust me. If the GTX680 is only 40% faster than the GTX580, I won't be impressed either and will skip it too.
The 5870 doubled the SP count of the 4870. The 7970 adds 33% to the 6970. A ~doubling of performance isn't a reasonable expectation.
They had to do it sometime, just like Nvidia had to do it sometime. GPGPU is the future, and a lot of money can be made on it.
GPGPU: In Q3'2011 NVIDIA's Professional Solutions Business (Quadro + Tesla) had an operating income of $95M on $230M in revenue (a ~41% operating margin). Their (consumer) GPU business had an operating income of $146M, but on a much larger $644M in revenue (~23%).
If you look at the 33% shader increase (1536 → 2048), it doesn't look like they sacrificed any performance. It appears that GCN is no better at gaming than VLIW4, except for geometry throughput (a side benefit of the improved compute performance), but it does improve compute performance.
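A quick sanity check of the shader-count claims made in the last two posts, using the well-known stream-processor counts for these cards:

```python
# Stream-processor counts for the cards discussed above.
cards = {"HD4870": 800, "HD5870": 1600, "HD6970": 1536, "HD7970": 2048}

print(f"4870 -> 5870: {cards['HD5870'] / cards['HD4870'] - 1:+.0%}")  # +100%
print(f"6970 -> 7970: {cards['HD7970'] / cards['HD6970'] - 1:+.0%}")  # +33%
```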
I never said HD7970 should double HD6970's performance, but 50-60% would have been nice. Also, relative to GTX580, it's only 25% faster after 14 months. I expected more -- here is why.
A) Comparing HD7970 only to the HD6970 misses the point since HD7970 is not a $370 card anymore, but a $550 card. Normally, we get more performance at the same price, or similar performance at a far lower price. In this case, what kind of a performance boost should be expected in the graphics card industry at the $500-550 level given the timeframe? Do you think 25% is satisfactory in 14 months?
B) Would we have been satisfied if each previous generation was only 25% faster? So in that case, GTX480 only needed to be 25-30% faster than GTX285/HD4890 and GTX285 only needed to be 25-30% faster than 8800GTX/HD3870. I think the performance increase should be measured from a generational perspective, in which case 25% more misses that mark by miles. Why is HD7970 exempt from historical standards of expected performance improvement vs. the previous generation high-end card (regardless if it's from NV or AMD)?
C) If for a moment we assume that 25% is satisfactory after 14 months of waiting from the highest performing card 14 months ago, then by those standards, we should expect no more than a 25% increase 14 months from now as well. If we would be unhappy with the fastest card being only 25% faster than HD7970 by May 2013, then logically we should expect a far greater increase than 25% in 14 months, or that performance level achieved in a much shorter period of time.
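To see why accepting 25% per generation compounds badly, compare two hypothetical cadences over a few 14-month cycles (this is pure arithmetic on the percentages argued above, not a prediction):

```python
# Compound performance growth at two per-generation improvement rates.
for rate in (0.25, 0.50):
    perf = 1.0
    for gen in range(1, 4):                      # three 14-month cycles
        perf *= 1 + rate
        print(f"{rate:.0%}/gen, after {gen * 14} months: {perf:.2f}x")
```

After three cycles (~3.5 years), the 25% cadence delivers under 2x the starting performance, while the 50% cadence delivers almost 3.4x.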
I would have been ecstatic if the HD7970 was $379, but it isn't.
There are some significant implications from this:
1) Normally, current high-end cards would now drop to $200-250 levels as clearance. This is unlikely to happen now because the HD7970 didn't beat the GTX580 by enough. Since AMD and NV gave up on improving performance at the sub-$200 level, gamers on a budget will find it very difficult to raise their performance level by a lot without dropping a lot of $$.
2) If both AMD and NV go back to $500-600 price levels, then we are back to the previous decade of pricing. I have no problem with that; however, in that era the $500-600 price levels for new generations were accompanied by 50-100% performance boosts over the previous fastest card from either camp.
If we as gamers become satisfied paying $500-600 for just a 25% performance increase every 14-15 months, what kind of a message are we sending to AMD and NV? They'll start giving us 25% performance increases instead of the 40-50% we usually expected during the transition from the previous high-end card from either camp to a brand new generation. Just my 2 cents.
** My post is from a gamer's point of view, not the AMD point of view. I already said that from AMD's perspective, the $550 is more than justified, since it's 25% more performance for a 10% higher price than the GTX580. But for us gamers, this is a huge step back in terms of the expected performance increase from one generation to the next. ** If this is what the future holds, I am disappointed.
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review
So, I am looking at the AnandTech review of AMD's new graphics architecture and all I can see is Fermi.
It outright says that this is not as good for gaming but better for compute, and so out of necessity they are going that way, providing an absolutely minimal boost in gaming performance over the current gen due to dumping VLIW4 for SIMD.
I have to say that was not at all what I expected. I thought we would see nVidia backpedal on Fermi rather than AMD embrace it.
I don't do any "professional compute tasks" on the desktop. Do you?
I am surprised that you are surprised by this.
GPGPU is here to stay and has become more and more important.
And like I have predicted before, we will see a change of stance on this forum.
The "argument" will not be that GPU physics is a turd anymore.
It will be changed to NVIDIA's GPU physics is a turd...but AMD's is good.
Wait and see.
Another thing is to look at NVIDIA's margins in the HPC segment.
AMD really wants some of the green.
The last few generations they tried to lower their prices to keep market share (and little did that help them), but that strategy hurt their bottom line.
Now they have new management; they need to make money...and the way to do that is to follow the course that NVIDIA has followed since the G80.
But I am glad that "perf/watt" will die off...and performance will again be the metric of choice.
That will stop a lot of irrelevant noise.
Err, no. The "argument" is that GPU physics in its current form alienates consumers, and because of this, adoption isn't going to happen.
And it will remain so if AMD's take also alienates consumers and isn't widely adopted. This has very little to do with brand and a lot to do with the -fact- that developers aren't going to waste resources on features that only apply to a fraction of the market. (Remember that PhysX has a performance tax as well, so it's not only not applicable to AMD buyers, it's only being used by Nvidia users with hardware that can handle it!)