I thought 22nm didn't improve the high end compared to 32nm because of the switch from planar to finfets? Something about the frequency/voltage curve being a different shape, so that the increase was huge at low voltages but nonexistent at high voltage. Since the move to 14nm doesn't have the sort of fundamental structural differences that the move to 22nm had, we should see improvements again at the high end...
We'll know for sure when somebody gets their hands on an unlocked 14nm part.
Not really. I mean, the observations are real, but the apparent cause-and-effect is not.
It is true that 22nm saturates at a lower voltage than 32nm. And it is true that 22nm uses finfets whereas 32nm uses planar transistors. But those two facts are unrelated as far as the voltage/clockspeed curves are concerned.
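Roughly, the curve-shape difference people observed looks something like this toy model. Every number below is made up purely to illustrate the shape, not actual Intel silicon data: the "22nm-like" curve climbs much faster at low voltage but flattens near the same ceiling, so the low end gains a lot while the top end barely moves.

```python
# Toy model only -- all coefficients are made up for illustration, not Intel data.
import math

def f_ghz(v, f_max, v_min, v_sat):
    """Hypothetical saturating frequency/voltage curve: f_max * (1 - exp(-(v - v_min) / v_sat))."""
    return f_max * (1.0 - math.exp(-(v - v_min) / v_sat))

for v in (0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4):
    f32 = f_ghz(v, f_max=4.6, v_min=0.65, v_sat=0.25)  # "32nm planar"-like: keeps gaining at high voltage
    f22 = f_ghz(v, f_max=4.6, v_min=0.55, v_sat=0.22)  # "22nm finfet"-like: saturates earlier
    print(f"{v:.1f} V -> 32nm-ish {f32:.2f} GHz | 22nm-ish {f22:.2f} GHz")
```

Run that and you'll see the 22nm-ish curve way ahead at 0.7-0.9 V but essentially tied by 1.3-1.4 V, which is the same pattern the overclocking crowd saw, just with fake numbers.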
Intel could just as easily have directed their 32nm R&D group to tune the 32nm planar transistors toward the same voltage-response profile that they directed the 22nm team to develop.
The voltage profile comes down to one thing: Intel wanted to go after the mobile markets more than they wanted to go after high clockspeeds.
If they had doubled the 22nm xtor team's budget, then that team could have done both. But if you (as in Intel's budgetary decision makers) don't double the resources, then the engineers have to prioritize and make trade-offs in engineering scope.
So they targeted tuning the 22nm finfets to enable much lower-voltage operation at the same or higher clockspeeds relative to 32nm, while pegging the top end of the voltage/frequency curve to be comparable to 32nm. Maintain the high end, gain ground on the low end, all while staying within the development budget they were allocated.
22nm (or 14nm) could have just as easily gone the other direction, on the same budget, at the expense of giving up the frequency/power/voltage scaling at the low-end (and thus giving up on any business plans for mobile products).
14nm could quite easily become a repeat experience for the enthusiast desktop crowd if Intel directed the 14nm xtor R&D team to push even harder on tuning for ever lower operating voltages and power consumption while not giving them the budget (or the prioritization) to improve voltage/frequency scaling at the top-end.
I think what happened was that the finfet transition just so happened to coincide with the shift in priorities from desktop frequency profiling to mobile frequency profiling, and to the layperson it can appear as if one required the other (or vice versa) when really it was just a coincidence that the two transitions happened at the same time.
But it really just comes down to R&D priorities and directives. When Apple is banking $18B in profit in 90 days selling mobile phones, that tells Intel it is missing out on a huge profit opportunity by not having allocated enough of its own R&D resources to developing lower-power process nodes (and products).
At the R&D level that filters down as "stop worrying about improving the top end of the frequency curve; focus on optimizing the other end of the curve, because that is where customers are currently willing to throw their disposable income!"
If you were Intel, which would you rather have:
$3.7B in quarterly profits or
$18B in quarterly profits?