Originally posted by: Idontcare
Originally posted by: DSF
In terms of silicon we're going to hit the limit pretty soon as far as process size goes.
I'd argue it's not quite like that. There is a physical limit to scaling process technology, and it's impractical to spend what it would take to approach it just for the desktop market segment, but the limit we are approaching faster still is the financial limit.
The cost of developing successive process technology nodes becomes exquisitely prohibitive as you go below 45nm. For one, the materials of choice become more and more exotic (as far as the industry is concerned), which means elevated risk, which in turn means elevated costs to quantify and characterize that risk, etc etc.
This is why you saw R&D efforts consolidate in the form of the Crolles Alliance and the IBM Ecosystem (aka the fab club) at the 90nm and 65nm nodes. The situation only gets more dire at 45nm and beyond.
Intel has the revenue stream to justify the R&D cost structure necessary to fund 22nm and 16nm node development. But do AMD and the associated IBM Ecosystem? Yes, but not at a cadence of 2yrs/node... they will be forced either to throw in the towel (à la Texas Instruments) or to slow their process technology cadence to something whose annual R&D commitment their revenue stream can cost-justify.
The economic limitations will dominate process technology cadence for everyone but Intel going forward (beyond 45nm) more so than the technology challenges of scaling towards atoms.
That's not to say the technology isn't a challenge; the money is what buys the tools needed to solve those challenges. EUV at $180M per tool is a barrier to entry for developing 16nm process technology for any company whose annual sales volume is <$10B.
Originally posted by: Foxery
Also note that Moore's Law is sometimes misquoted as "CPUs double in speed" every 2 years, but it's actually "CPUs double in transistors."
And even that is an interpretation adopted some time after the seminal paper was published.
Gordon Moore wrote in his original article that "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year."
What Moore was talking about was the cost structure of ICs: the number of integrated components in an IC has a cost structure with a minimum (too few components and the cost per component is high due to the fixed overhead costs of the business itself; too many components and the cost per component is high due to reduced yields).
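That cost minimum can be sketched with a toy model. All of the numbers below are illustrative assumptions (the yield model, defect density, overhead, and unit volumes are made up for the sake of the shape of the curve, not taken from industry data):

```python
import math

def cost_per_component(n_components, fixed_overhead=1_000_000.0,
                       units=100_000, wafer_cost_per_die=10.0,
                       area_per_component=1e-6, defect_density=50_000.0):
    """Toy model of Moore's cost-per-component curve.

    Cost per die = amortized fixed overhead + silicon cost inflated by
    yield loss; yield follows a simple Poisson model Y = exp(-D * A),
    so it falls off as the die integrates more components.
    All parameter values are hypothetical.
    """
    die_area = n_components * area_per_component
    yield_frac = math.exp(-defect_density * die_area)
    die_cost = fixed_overhead / units + wafer_cost_per_die / yield_frac
    return die_cost / n_components

# The curve is U-shaped: sweeping the integration level finds a sweet spot
# between "too few components" (overhead-dominated) and "too many" (yield-dominated).
best_n = min(range(1, 201), key=cost_per_component)
print(best_n, cost_per_component(best_n))
```

With these made-up parameters the minimum lands at a few dozen components per die; the point is only that both extremes are more expensive per component than the middle, which is exactly the minimum Moore's "complexity for minimum component costs" refers to.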
Mainstream media subsequently reinterpreted Moore's Law, replacing the definition's "components" with similar (but not identical) terms of transistors, clockspeed, or performance "doubling every X months," where X has gone from 12 to 18 to 24 months through the decades since the paper's publication.
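Those three values of X imply very different growth rates, which is why the misquotes matter. A quick bit of arithmetic (purely illustrative, no claim about which reading is "correct"):

```python
# Implied compound annual growth factor for each popular
# "doubling every X months" reading of Moore's Law.
def annual_growth_factor(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

for months in (12, 18, 24):
    print(f"doubling every {months} months -> "
          f"x{annual_growth_factor(months):.2f} per year")
```

Doubling every 12 months means 2x per year, every 18 months about 1.59x, and every 24 months about 1.41x, so over a decade the three readings diverge by orders of magnitude.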