This might help drive your point further home, IDC
Although Intel will be improving performance over the 22nm FinFET node at 14nm FinFET, as well:
Excellent :thumbsup: And yes, for the foreseeable future Intel will be doing "all of the above" with their node shrinks.
My point was just to say these things (these process nodes) are the products of engineering.
If you want your next node to give you all the same performance metrics as today's node, but you want the resultant chips to cost 50% less, or to have half the environmental footprint (the consumable waste stream from their production), or something else, then it really is as simple as correctly defining the project scope at the start of the node's formation (the first year of development) and ensuring your development engineers do their job.
I can understand how it might appear to an outsider as if these process nodes are stuck in a one-rut track that only goes in a single direction, but that really isn't the case.
Cost per transistor is nowhere near that simplistic. The real test will be whether TSMC/GF 20nm provides only a shrink and not a perf increase. Let's see who shrinks what, and when.
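To illustrate why it isn't simplistic, here is a back-of-envelope cost-per-transistor model. Every number in it is a made-up illustrative assumption, not a real foundry figure: the point is just that a denser node only lowers cost per transistor if the wafer-cost and yield penalties don't eat the density gain.

```python
import math

# Rough cost-per-transistor model. All numbers below are illustrative
# assumptions, not real foundry figures.

def cost_per_transistor(wafer_cost, die_area_mm2, transistors, yield_frac,
                        wafer_diameter_mm=300):
    """Rough $/transistor, ignoring edge loss, test, and packaging."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area / die_area_mm2
    good_dies = gross_dies * yield_frac
    return wafer_cost / (good_dies * transistors)

# Hypothetical mature 28nm: $5,000 wafer, 100 mm^2 die, 1e9 transistors, 90% yield
c28 = cost_per_transistor(5000, 100, 1e9, 0.90)

# Hypothetical early 20nm shrink: ~2x density (same chip in 50 mm^2), but a
# pricier wafer and lower early yield can eat most of that gain.
c20 = cost_per_transistor(8000, 50, 1e9, 0.70)

print(f"28nm: {c28:.2e} $/transistor")
print(f"20nm: {c20:.2e} $/transistor")
```

With these (assumed) inputs the shrink actually comes out slightly *more* expensive per transistor, which is exactly the scenario where a foundry customer would wait for wafer prices and yields to mature before moving.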
In general, the foundries field process nodes which are not exactly "aggressive" in terms of electrical parametrics (Idrive, etc.). Historically that has been fine, because anyone needing super-duper ludicrous speed out of their ICs usually had their own fabs and could juice up their own nodes to accomplish the goal.
What that means for TSMC, though, is that there is essentially always room for improvement left on the table at any given node.
So even if they scale planar CMOS to 20nm, I still expect there to be performance benefits from it in comparison to the same foundry's 28nm. (TSMC 20nm will be better than its own 28nm, but their 20nm may still perform poorly, electrically speaking, compared to Intel's 22nm.)
I wonder at what point we become I/O bound on the "typical" phone/tablet SOC.
The bandwidth requirements are way too low for it to become a practical issue anytime soon (next 10yrs).
It is a form-factor reality. They can't shove a chip into that form factor which is going to be silly-high performance like a discrete GPU that might need a quad-channel GDDR5 interface.
We are a long ways off from being I/O limited on a power-miser phone/tablet SOC.
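A quick peak-bandwidth comparison shows the gap. The interface widths and data rates below are illustrative assumptions for parts of roughly this era, not figures for any specific product:

```python
# Peak memory bandwidth = (bus width in bytes) x (transfers per second).
# Widths and data rates are illustrative assumptions, not specific products.

def peak_bandwidth_gbs(bus_width_bits, data_rate_mtps):
    """Peak bandwidth in GB/s from bus width (bits) and data rate (MT/s)."""
    return (bus_width_bits / 8) * data_rate_mtps / 1000

# Hypothetical phone SOC: dual-channel 32-bit LPDDR3 at 1600 MT/s
phone = peak_bandwidth_gbs(64, 1600)

# Hypothetical discrete GPU: 256-bit GDDR5 at 6000 MT/s
gpu = peak_bandwidth_gbs(256, 6000)

print(f"phone SOC: {phone:.1f} GB/s")   # ~12.8 GB/s
print(f"discrete GPU: {gpu:.1f} GB/s")  # ~192.0 GB/s
```

An order-of-magnitude gap like that is why the wide, power-hungry GDDR5-class interface simply doesn't fit a phone's power and form-factor budget, and why the SOC's compute, not its I/O, stays the binding constraint for now.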
(Just want to point out one caveat: I/O costs money, as in it costs money to implement more sophisticated I/O that also has less areal impact. So don't be surprised if you see folks lamenting I/O on phone SOCs, but that is a different argument; they are lamenting the limitations of silly-cheap budget I/O, not wanting to spend an extra nickel per SOC chip to go to the next level of I/O tech. That is an accounting barrier, not a technology barrier.)