Yes, you are. Just like you confused how a CPU runs in a game. See a pattern forming?
I am not confused at all. C-states and other power-saving technologies allow the CPU to run below its nominal frequency in idle states. Since none of these technologies affect operating frequency under load, a 2500K @ 3.4GHz will always consume less power at load in games than a 2500K @ 5.0GHz, regardless of C-states, EIST, etc. Power-saving technologies do not help a processor while a game is loading it, if the game actually needs the CPU's resources. You can try to minimize power consumption at load by setting the overclock via per-core Turbo multipliers, but once the game loads a core, that core runs at 5.0GHz, which means your CPU will always consume more power at load than a stock 2500K.
More to the point, you already stated that 30-36 fps is perfectly playable for you in an FPS game. You then cited Skyrim as the game where you actually *needed* a 2500K @ 5.0GHz. But if a 2500K/2600K can easily achieve that without overclocking, why would you overclock your CPU to 5.0GHz and incur the extra power consumption penalty if you care about power consumption? Surely for someone who absolutely cares about 50-55W of power, this would be prohibitive? You said yourself you don't care to game at 60 or even 120 fps, where CPU performance actually makes a huge difference in a game like Skyrim.
Perhaps you believe that your overclocked CPU uses only 5-10W more power at load when it's running a game at 5.0GHz?
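The back-of-envelope physics says otherwise: dynamic CPU power scales roughly with frequency times voltage squared (P ≈ C·V²·f), so a 5.0GHz overclock that also raises the core voltage costs far more than that. A minimal sketch, with illustrative voltages as assumptions (roughly 1.20V at stock and roughly 1.45V at 5.0GHz; these are not measured values):

```python
# Rough scaling of dynamic CPU power: P ~ C * V^2 * f.
# Voltages below are illustrative assumptions, not measurements.
def relative_dynamic_power(f_base, v_base, f_oc, v_oc):
    """Ratio of dynamic power at the overclocked state vs. the stock state."""
    return (f_oc / f_base) * (v_oc / v_base) ** 2

ratio = relative_dynamic_power(f_base=3.4, v_base=1.20, f_oc=5.0, v_oc=1.45)
print(f"~{ratio:.2f}x the stock dynamic power at load")  # ~2.15x
```

Under those assumptions, the overclocked chip burns more than twice the stock dynamic power under load, nowhere near a 5-10W delta.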
You don't understand the argument. Ahhh, I've finally broken you. Why didn't you just save face and admit your intent from the start instead of having me drag you through the mud for the last three hours?
I didn't have any intent; it's the only logical conclusion left after 3-4 pages of seeing your side. You thought the GTX480's 15-20% performance advantage over the HD5870 was "meaningless". You thought the GTX580's 20% performance advantage over the HD6970 was unimportant, even though the GTX580 held that lead for 14 months. You argued that the HD6970 can be overclocked to GTX580 speeds, but when a fully overclocked GTX580 was put head-to-head against an overclocked HD6970 and crushed it, you claimed the GTX580's overclocked performance advantage didn't add to playability, when in fact it easily cleared 30 fps in games where the HD6970 could not.
If all you want is 30 fps in PC games, then the GTX580 was perfectly fast for that for 14 months. Suddenly, the HD7970 is a must-buy at $550 with only 20% more performance than the GTX580. Really?
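To put that 20% in concrete terms, here is a trivial sketch using the 30 fps playability threshold this thread keeps coming back to (the frame rates are illustrative, not benchmark results):

```python
# What a flat 20% performance lead means in raw frame rates,
# measured against the 30 fps playability bar discussed above.
for base_fps in (25, 30, 36, 50):
    print(f"{base_fps} fps -> {base_fps * 1.20:.0f} fps on a card that is 20% faster")
```

A 20% lead only changes playability in the narrow band where the slower card sits just under the bar.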
I understand the argument clearly. In every situation where you cannot logically defend your view, you attack the poster (Notty22's statistical knowledge, my monitor, neither of which is relevant), or dismiss any enthusiast's desire to game at 50-60 fps in an FPS or a racing game as not being the norm. You dismiss most websites anyone links, unless they show what you want to see (only H and TPU, apparently), as not counting because they run "canned benchmarks". Ironic, because you completely ignore that GameGPU.ru and Bit-Tech.net run manual benchmarks, and in your own mini HD6950-to-HD7970 review you plan to use canned/in-game benchmarks yourself. Really?
You make a big deal about 50-55W of power consumption while ignoring that the HD7970 is already a very power-hungry card to begin with.
The best part is that you discuss how important power consumption is while coming from a previous-generation i5 760 @ 4.1GHz and an HD5850 @ 1000MHz+.
I think this graph speaks for itself:
Source
Everyone knows that I am a price/performance guy, and I don't hide it. People might not agree with me, and I am 100% OK with that. At least I consistently ripped the GTX580 for being overpriced versus the GTX570, and the HD7970 for costing $550 while bringing so little over the 580 after 14 months at 2560x1600, where enthusiasts on 30-inch monitors actually needed the extra performance. I also said power consumption wasn't a big deal for high-end cards like the GTX480/HD6970 because, in the context of overall system power consumption (with overclocked CPUs), it's a drop in the bucket. Even if people didn't agree with this view, I was consistent.
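For what it's worth, the price/performance arithmetic is easy to check. A minimal sketch, treating the launch prices as approximate assumptions (GTX580 around $499, HD7970 around $549) and using the 20% figure cited above:

```python
# Perf-per-dollar sketch; prices are approximate launch MSRPs (assumptions).
cards = {
    "GTX 580": {"price": 499, "perf": 1.00},  # normalized baseline
    "HD 7970": {"price": 549, "perf": 1.20},  # ~20% faster, per this thread
}
for name, c in cards.items():
    print(f"{name}: {c['perf'] / c['price'] * 100:.2f} perf units per $100")
```

By that math, the HD7970's perf-per-dollar edge over a 14-month-old card is under 10%, which is exactly the gap I am calling out.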
In your case, you throw your arms in the air over 50-55W of power in some cases, and somehow try to correlate a videocard's power consumption, GPU temperature, and "real" TDP with its overclocking headroom (completely ignoring the node, the power circuitry, and how much clock headroom was left on the table at a given voltage for that node). Yet in other cases (i5 760 @ 4.1GHz, HD5850 @ 1000+ MHz), power consumption isn't really important because the card was a great price/performance candidate. So now price/performance trumps power consumption. You seem all over the place.
And in other cases 30 fps is fast enough for enthusiast PC gamers, yet you seem to think the HD7970 provides a whole new world of playability when the card can't even break 30 fps in Metro 2033, Dragon Age 2, or Crysis 2 with AA at 2560x1600, your monitor's resolution. Really?
Notice a pattern of inconsistency?