The Itanic sank long ago, IDC. Intel has been making sure of that by slowly poking holes in the hull
Not disagreeing with you; my point was just that if you look at Intel's multi-faceted microarchitecture efforts, it isn't like they aren't trying.
Intel's (and AMD's) turbo-core/boost approach would have really delivered something customers want if they had actually implemented in silicon what they all talked up in PowerPoint:
If we really got a single-core that clocked itself so high as to truly command the entire available TDP headroom then we'd be feeling far more "enthusiastic" about these YoY CPU rollouts IMO. :\
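To put a rough number on that "entire TDP headroom" idea, here's a back-of-envelope sketch. Dynamic power scales roughly as C·V²·f, and in the boost range voltage tends to rise roughly linearly with frequency, so per-core power grows roughly with f³. All the specific numbers below (4 cores, 3.5 GHz base, 77 W TDP) are illustrative assumptions in the Sandy/Ivy Bridge ballpark, not actual Intel specs, and uncore power and leakage are ignored for simplicity:

```python
# Back-of-envelope: how high could one core boost if it were allowed to
# consume the whole package TDP? Assumes dynamic power ~ f^3 (since
# V scales roughly with f in the boost range). Numbers are illustrative.

BASE_CORES = 4        # cores active at the all-core base clock (assumed)
BASE_CLOCK_GHZ = 3.5  # all-core base clock in GHz (assumed)
TDP_W = 77            # package TDP in watts (assumed)

# Power of one core at base clock, ignoring uncore and leakage
per_core_base_w = TDP_W / BASE_CORES

def single_core_boost_ghz(tdp_w, per_core_w, base_ghz):
    """Frequency one core could reach if power ~ f^3 and it owned the TDP."""
    return base_ghz * (tdp_w / per_core_w) ** (1 / 3)

boost = single_core_boost_ghz(TDP_W, per_core_base_w, BASE_CLOCK_GHZ)
print(f"Idealized single-core boost: {boost:.2f} GHz")  # ~5.56 GHz
```

Even under these generous assumptions the cube-root relationship means a 4x power budget only buys about a 1.6x clock bump, which is part of why real turbo bins are so much more modest than the marketing slides implied.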
Because temperature.
Temperatures since Sandy Bridge have been steadily rising, most likely due both to higher performance and to Intel cutting corners.
There is a graph around here somewhere (I forget who made it, otherwise I'd search for it) showing the extrapolation of drive current versus operating voltage for both 32nm and 22nm; the 22nm curve flattens out in a way that suggests we ought to expect 32nm to be superior to 22nm in the high-clock regime.
And that is what we see. Even with delidded IBs put under vapor phase or LN2, where temperature is not the issue, getting an IB to clock as high as an SB is basically a no-go unless you got yourself a dud of an SB or a golden IB sample.
Yeah, Intel spent a long time perfecting its FinFET tri-gate process, but clearly they were optimizing for lower power consumption (even with the HP process) and not higher clocks (Mark Bohr already stated this, IIRC) - so this is what they get. With no competition, this is also what we get.
Again, as has been mentioned (in various threads), 32nm may well be the pinnacle of clock speeds for modern processors for some time.
Definitely engineered for the goals Intel had: a die shrink that let them economically pack more cores onto their Xeon chips, while lowering power consumption enough to get them into the mobile markets.