Again, it's like you're not reading what I'm writing.
Let's do a virtual experiment.
Let's say you're part of the team at Intel that tests CPUs for the binning process (there is such a team, a pretty big one actually; they write the software that automates the process).
Let's say you're working on a CPU manufactured by Intel on the xx process (the name doesn't matter).
Now you build a histogram of the speed and the power consumed by each CPU coming out of the fab, for each month.
You do it for 4 years.
What do you expect to see?
I will tell you what you will see.
Let's assume the distribution is approximately Gaussian.
Hence you can describe it by its mean and variance.
Over time, the mean speed will rise, the mean power (per given speed) will go down, and the variance will shrink.
So far this has happened (I can promise you that) on each and every process Intel has used. Every single one!
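A toy simulation of that drift might look like the following. Everything here is an invented illustration, not real fab data: the drift rates in `monthly_sample` are pure assumptions chosen only so the trend is visible.

```python
import random

random.seed(0)

# Toy model of process maturation: each month the achievable mean clock
# rises slightly and the spread tightens. All numbers are made up.
MONTHS = 48            # 4 years of monthly histograms
CHIPS_PER_MONTH = 1000

def monthly_sample(month):
    """Sample clock speeds (GHz) for one month's production."""
    mean = 4.0 + 0.005 * month    # mean drifts up as the process matures
    sigma = 0.30 - 0.003 * month  # spread shrinks as yields stabilize
    return [random.gauss(mean, sigma) for _ in range(CHIPS_PER_MONTH)]

def stats(sample):
    """Return (mean, standard deviation) of a sample."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    return mean, var ** 0.5

first = stats(monthly_sample(0))
last = stats(monthly_sample(MONTHS - 1))
print(f"month  1: mean={first[0]:.3f} GHz, sigma={first[1]:.3f}")
print(f"month 48: mean={last[0]:.3f} GHz, sigma={last[1]:.3f}")
```

Run it and the last month's histogram is faster and tighter than the first month's, which is exactly the drift the binning team would see.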
You saw it once with Haswell, but only on a small marketing scale.
Until now, Intel never used it as a "feature" since they had better things to show.
But now, when they have little to show (process-wise), they use it as a marketing feature.
That's what I'm saying, the improvement is there.
It has always been there in some ways.
When you play cards you can only play with the ones you have.
Intel currently doesn't have a new process to show, so it markets the regular over-time improvements as a feature.
The funny thing is that some users here sell it even better than Intel does, as if it were the best thing ever.
The next step is: we never want to move to 10nm, we want 14nm+++ and then 14nm++++.
This is just silly.
Intel markets what it can at this time.
It wished it could market better things like a real new process with much better performance.
Your argument falls apart because it's fundamentally wrong. Again, with 14nm+ it wasn't just a question of the process maturing; there were changes to the process, such as the fin height, that you simply can't make and expect to keep manufacturing the same chips.
As far as your claim that "we want 14nm+++ and then 14nm++++" goes, there is actually some truth to that. If all you care about is raw transistor performance/frequency, then you actually want to avoid shrinking: while you gain some performance with next-generation transistors, you have to fight the increased parasitic capacitance that comes from pushing everything closer together.
By adding more "+" signs, Intel is improving transistor performance and not having to deal with the challenges that come from making things smaller. You also see this with other foundries, like Samsung and TSMC, who keep improving their older, more cost-effective processes.
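A back-of-envelope way to see that trade-off is the classic gate-delay approximation t ~ C*V/I: delay scales with load capacitance and supply voltage, and inversely with drive current. The numbers below are invented for illustration only; they are not real process parameters for any node.

```python
# Gate-delay approximation: t ~ C * V / I_drive.
# With C in fF, V in volts, and I in microamps, C*V/I comes out in
# nanoseconds, so multiply by 1000 to report picoseconds.
def gate_delay_ps(c_ff, v_volts, i_ua):
    """Approximate switching delay (ps) for load C, supply V, drive I."""
    return 1000.0 * c_ff * v_volts / i_ua

# Baseline node (all numbers are made-up assumptions).
old = gate_delay_ps(c_ff=1.0, v_volts=0.8, i_ua=50.0)

# "+" revision of the same node: ~20% more drive current, same spacing,
# so the load capacitance stays put and the full gain shows up as speed.
plus = gate_delay_ps(c_ff=1.0, v_volts=0.8, i_ua=60.0)

# Hypothetical shrink: ~15% more drive current, but parasitic capacitance
# rises because everything is packed closer together, eating the gain.
shrink = gate_delay_ps(c_ff=1.1, v_volts=0.8, i_ua=57.5)

print(f"baseline {old:.2f} ps, '+' revision {plus:.2f} ps, "
      f"shrink {shrink:.2f} ps")
```

Under these toy assumptions the "+" revision beats the shrink on raw delay, which is the whole point: more "+" signs buy frequency without fighting the capacitance penalty of packing things tighter.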
The reason companies want to shrink is purely a question of economics. For most products these days you want to stuff in more, like bigger graphics and more cores, so you shrink to keep adding all of that without the die size and cost becoming prohibitive.
Enthusiasts would probably love it if Intel did a 14nm+++ or a 14nm++++ and kept pushing frequencies up, maybe introducing new CPU architecture features too. But that would be bad for mobile and mainstream desktop, since you want to keep adding features, which at some point is not feasible without a shrink.