I've been following this argument and I can't say I agree, especially when you look at it from several perspectives...
Which part? The part where there is only a 50-55W difference in power between high-end cards? You think that's material when it comes time to buy a high-end graphics card, in the context of the overall system power consumption, and in the context of us enthusiasts who add 100-150W to the CPU through overclocking alone?
Your entire section of that post addressed how the HD6950 and HD6970 are better values for the $$ vs. the 580, etc. How that came up, I have no idea... because the only thing actually being talked about was that a total system with a GTX580 consumes about 15% more power than a similar system with an HD6970. You can't run a graphics card without a CPU, motherboard, RAM and hard drives, can you? So all in all, arguing about 50-60W of power difference between top GPUs is very strange to me. If I were sweating that hard over 50-60W, I'd disconnect all the lights in my house and live in total darkness, never put up a Xmas tree, hang-dry all my laundry, bike to 7/11, never use a blow dryer, etc. The total system is already using 350-400W. People who care that much about power consumption are probably using i3s and gaming on HD5770s.

For example, if the HD7970 used 250W of power instead of 200W, but came out clocked at 1.1GHz from the factory, wouldn't you be more impressed? That would be FAR more impressive to me.
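Just to put rough numbers on that point (a quick sketch; the wattages are the ballpark figures from this thread, not measurements):

```python
# Rough arithmetic behind the "50-60W doesn't matter" point.
# All numbers are the ballpark figures quoted in this thread, not measurements.

gpu_delta_w = 50        # assumed GPU-only load difference between top cards
system_base_w = 350     # assumed total system draw with the lower-power card

system_with_delta = system_base_w + gpu_delta_w
pct_increase = gpu_delta_w / system_base_w * 100

print(f"Total system: {system_base_w}W vs {system_with_delta}W "
      f"(+{pct_increase:.0f}% at the wall)")
# -> Total system: 350W vs 400W (+14% at the wall)
```

That 50W GPU delta works out to roughly a 14-15% difference at the wall, which is exactly the figure that was being discussed.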
This is why the 7970 is that much more impressive. An average consumer might look at it and say, "oh, it's only 20-25% faster than the GTX 580, that's not much." But once you start analyzing different metrics and become a more informed consumer, it really shines.
No one is arguing that the GTX580 is a better buy than the HD7970. I don't know why you keep defending it, because obviously the 7970 is better. I am just shaking my head at "enthusiasts" who think there are *large* differences in power consumption among high-end GPUs, because 50W is minor considering most of us here are running overclocked CPUs, etc. From an engineering perspective it makes the 7970 more impressive, and it should, considering it's a 28nm GPU.
I think what you are saying is that because the GTX580 has higher power consumption to begin with, the gap will grow well beyond 50W once both cards are overclocked, since the 28nm process is more power efficient. Agreed.
Any consumer, regardless of technical knowledge, can do a quick price/performance comparison and say, "wait, that's 20-25% extra performance for only $50 more at the enthusiast level - that's unheard of, and it has double the VRAM!"
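Putting rough numbers to that comparison (assumed ~$500 / ~$550 street prices and a ~22% average performance lead, purely for illustration; actual prices and game-by-game results vary):

```python
# Quick price/performance check using the rough figures from this thread.
# Prices and the performance index are assumptions for illustration only.

cards = {
    "GTX 580": {"price": 500, "perf": 1.00},  # baseline performance index
    "HD 7970": {"price": 550, "perf": 1.22},  # ~20-25% faster, ~$50 more
}

for name, c in cards.items():
    perf_per_dollar = c["perf"] / c["price"]
    print(f"{name}: {perf_per_dollar * 1000:.2f} performance units per $1000")
# -> GTX 580: 2.00 ... HD 7970: 2.22 (better perf/$ despite the higher price)
```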
HD7970 > GTX580. No argument here.
Then, if that consumer is an enthusiast, they can look at the technical merits of the chip itself, such as the lower power consumption and overclockability, and the picture just gets better and better. As I stated above, if you crank the 7970 up to a GTX 580's power consumption level, it will slaughter it.
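A back-of-the-envelope way to see that claim (the power figures and performance index below are assumptions, and linear perf/power scaling is optimistic):

```python
# Perf-per-watt sketch; the load wattages and performance index are assumed
# numbers, and scaling performance linearly with power is a simplification.

gtx580_perf, gtx580_power = 1.00, 250   # performance index, assumed load watts
hd7970_perf, hd7970_power = 1.22, 210

for name, perf, watts in [("GTX 580", gtx580_perf, gtx580_power),
                          ("HD 7970", hd7970_perf, hd7970_power)]:
    print(f"{name}: {perf / watts:.4f} perf/W")

# If the 7970 kept its stock efficiency while drawing the 580's power budget:
scaled = (hd7970_perf / hd7970_power) * gtx580_power
print(f"HD 7970 at {gtx580_power}W: ~{scaled:.2f}x a stock GTX 580")
```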
Again, you are talking about 200-250W videocards. But like I said, you can't use a videocard by itself, so at the end of the day it's still a 400 or 450W system. If, hypothetically, a GTX580 system consumed 300W but a 7970 system consumed 400W, would you buy the GTX580 instead of the faster HD7970? No, you wouldn't. The lower power consumption of the HD7970 over the GTX580 is a nice bonus, but in the global context it's meaningless for enthusiasts who buy $500 graphics cards. They are going to buy the faster card, assuming it has a reasonably quiet cooler. In the context of high-end GPUs, the HD6970, GTX580 and HD7970 are all power-hungry cards. To me, anything approaching 200W or more is a power-hungry GPU. If I could buy a high-end GPU with a 50-75W load TDP, then I'd consider that a material difference worth talking about.
With the 7xxx series, I think you'll see all those advantages disappear.
I don't think so, because the HD7970 isn't much faster than the GTX580 in Dragon Age 2, Crysis 2, Metro 2033 and BF3. It doesn't make sense that the HD7970 is barely faster than the GTX580 in the latest games. That means something fundamental is holding it back (GPU clocks, or something else).
My comment was addressing an implication made by another poster that certain games shouldn't count because they are NV-sponsored, and therefore the performance differences in them are less relevant. And again you went on about the poor tessellation implementation in Crysis 2, etc., none of which had anything to do with my post. My point is simple: any game, whether it's NV- or AMD-sponsored, should count, especially if it's a popular game.
If there are 20 games that run better on AMD cards, I want to know that, so it makes my buying choice easier next time I upgrade. From where I am standing, the HD7970 is barely better than the GTX580 in BF3, Crysis 2 and Metro 2033. It really doesn't matter to me as a gamer that NV spent $ ensuring that those games work better on its hardware, because AMD could have done that too. I hope AMD spends more $ and works more closely with developers to optimize game code for their hardware as well.

The point still stands that the HD7970 does poorly with deferred antialiasing compared to the GTX580 in BF3. That's a red flag to me (not because I play BF3, but because future games will use Frostbite 2). If deferred AA is the direction of future game engines, then I am going to wait and see how the Kepler generation does, because I am not going to drop $500 on a new generation of graphics card that shows less-than-stellar gains in the most demanding games over two-year-old Fermi tech. For gamers who can afford to upgrade often, this isn't even an issue. They'll sell the 7970 and move on to the next best thing (HD7980, GTX680, etc.).