While I'd agree that Intel does good R&D and has excellent manufacturing (mainly because its huge profitability lets it afford both), and that this means it can continue to pour vast sums into 22nm and beyond, what I'd question is whether Moore's law can continue for much longer just because money gets poured into process R&D. Money is good, but some problems can't be solved with money alone.
Hehe, yeah I made a decent living because it costs money to hire people who can solve the problems of scaling.
Moore's law has always been a matter of economics. The very reason the transistor itself was adopted in the marketplace was cost, and we started wiring transistors up on monolithic substrates (creating ICs) for cost reasons, and we shrank them for cost reasons, and we have ever since.
Moore's law isn't about shrinking, or doubling; it's about capturing the rate at which one can reasonably manage the process of reducing manufacturing costs on a per-component basis.
But business is not motivated by cost-management alone, it is motivated by profits. And profit is the difference between selling price and cost.
So, while Moore's law captures the cost-management opportunities, it also enables the other side of the gross-margin equation: the opportunity to develop products that command higher ASPs (or resist ASP erosion).
I can understand that it is easy to get hung up on thinking that shrinking is a physics problem, not an economic one, because that is what we are told to think. And it makes intuitive physical sense: of course you can't physically shrink things below some fundamental limit involving the dimensionality of atoms and so forth.
But the folks that do work on node shrinks (me, as an example) see opportunities to shrink stuff in new ways on a daily basis and the sole reason those opportunities are not pursued comes down to money.
Pure and simple economics: shrinking is not the challenge. The challenge is building an integration process that accomplishes the shrink profitably, and more profitably than simply staying with your prior node.
It comes down to cost-management and gross-margin opportunity. If the shrink doesn't produce superior numbers to the node it is supposed to supplant, then you have a problem in the boardroom, not in the lab and not in the physics textbook.
For the folks who worry about "how can one shrink below a single atom", all I can say is "reciprocal space".
In physics, the reciprocal lattice of a lattice (usually a Bravais lattice) is the lattice in which the Fourier transform of the spatial function of the original lattice (or direct lattice) is represented. This space is also known as momentum space or, less commonly, k-space, due to the relationship between the Pontryagin duals momentum and position. The reciprocal lattice of a reciprocal lattice is the original or direct lattice.
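For concreteness, here is a small sketch of that construction (my own illustration, nothing to do with any particular fab flow): the reciprocal lattice vectors follow directly from the direct lattice vectors.

```python
# Reciprocal lattice vectors b_i from direct lattice vectors a_i:
#   b1 = 2*pi * (a2 x a3) / (a1 . (a2 x a3)), and cyclic permutations,
# which guarantees b_i . a_j = 2*pi*delta_ij.
import math

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def reciprocal_lattice(a1, a2, a3):
    vol = dot(a1, cross(a2, a3))  # signed cell volume
    b1 = [2*math.pi*c/vol for c in cross(a2, a3)]
    b2 = [2*math.pi*c/vol for c in cross(a3, a1)]
    b3 = [2*math.pi*c/vol for c in cross(a1, a2)]
    return b1, b2, b3

# Simple cubic lattice with spacing a; its reciprocal is simple cubic
# with spacing 2*pi/a.  (0.543nm is silicon's conventional cell edge.)
a = 0.543
b1, b2, b3 = reciprocal_lattice([a, 0, 0], [0, a, 0], [0, 0, a])
```

Note the inverse relationship baked into the math: the smaller the direct-lattice spacing, the larger the reciprocal-lattice vectors.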
For the same reasons we can make 193nm-wavelength photons print structures that are a mere 35nm wide (i.e. the dimensionality of the tool is not the sole determining factor in the resultant product), the fact that the physical atomic dimensionality is 1-3nm doesn't mean the electrical manifold within which the circuit itself operates need be so limited.
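That 193nm-to-35nm figure falls out of the Rayleigh resolution criterion; the numbers below are typical published values for water-immersion scanners, not figures from the comment above.

```python
# Rayleigh criterion for optical lithography: CD = k1 * lambda / NA.
# With 193nm light, immersion optics (NA ~ 1.35), and resolution-enhancement
# techniques pushing k1 toward its ~0.25 single-exposure limit, the printable
# half-pitch lands right around 35nm.
wavelength_nm = 193.0
numerical_aperture = 1.35   # typical water-immersion scanner
k1 = 0.25                   # near the single-exposure physical limit
cd_nm = k1 * wavelength_nm / numerical_aperture
# cd_nm comes out to roughly 35.7nm
```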
The problem with discussing reciprocal space with laymen, and that includes the journalists who write the "end of Moore's law" stories we get exposed to, is that they are not really steeped in the mathematics necessary to conceptualize how it works, let alone how it will someday be leveraged to work for the continuation of increasing the density of electrical circuits. (In theory the physical limit in reciprocal space is the Planck length, which is about 10^20 times smaller than the width of a proton.)
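That scale comparison is easy to check with standard (rounded) values:

```python
# Rough scale comparison: Planck length vs. proton radius.
planck_length_m = 1.616e-35   # meters (CODATA value, rounded)
proton_radius_m = 8.4e-16     # meters (proton charge radius, ~0.84 fm)
ratio = proton_radius_m / planck_length_m
# ratio is roughly 5e19, i.e. on the order of 10^20
```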
It is easy for laypeople to relate to the concept of an atom, and since today's circuits exist within the engineering realm of discrete atoms, it becomes easy to convey the notion that shrinking the circuit is limited by the ability to shrink down to an atomistic length scale.
The problem, of course, is that this is just not the reality of "the end of the road for the physics of circuits"; it is just the end of the road for leveraging discrete atoms as proxies for those circuits. We cross the same conceptual boundary line when we start talking about MLC NAND flash technology and so forth.
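MLC NAND is a good illustrative case of that boundary crossing: density grows without shrinking the physical cell, because one cell encodes multiple bits as distinguishable charge levels. A minimal sketch of the idea (the threshold voltages and Gray coding here are hypothetical illustrations, not any vendor's actual values):

```python
# MLC NAND idea: one physical cell stores 2 bits by distinguishing four
# threshold-voltage states, rather than by adding more (or smaller) cells.

# Hypothetical voltage boundaries separating the four states.
THRESHOLDS = [1.0, 2.0, 3.0]
# Gray-coded state order: adjacent states differ by a single bit,
# so a one-level sensing error corrupts only one bit.
GRAY_CODES = [0b11, 0b10, 0b00, 0b01]

def read_cell(voltage):
    """Map a sensed cell voltage to the 2-bit value it stores."""
    state = sum(voltage >= t for t in THRESHOLDS)  # count thresholds crossed
    return GRAY_CODES[state]
```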
When we reach the end of the road in terms of the economic viability of shrinking circuits physically by way of atoms, there is a multitude of avenues for materials scientists and physicists to keep increasing circuit density at lower and lower cost per circuit.
(how can I be so confident in this? My credentials are: B.S. in Materials Science and Engineering, minors in Mathematics and Chemistry, Ph.D. in Chemical Physics, Adjunct Professor at UNT, R&D Process Node Development engineer at Texas Instruments, worked on nodes and their shrinks from 0.5um to 32nm, and NSF R&D grants for exploratory research on material shrinks to <10nm.)