<< I found them informative, but at the same time, I remain a bit skeptical. I believe that based on the process described by Patrick, cpu releases, relating to their speed/date released, would not follow such a linear pattern. >>
I'm not sure whether or not to be offended by this statement. jmitchell, whether or not you are skeptical, this is the way things work; this is the way things really are. I have no motivation to lie about my work - Intel doesn't pay me to post here.
Clock speeds very rarely jump substantially because everyone does their job correctly. I described the process of speed path debug... for there to be a huge jump in frequency, it would have to mean that one (or just a few) circuits were holding back the rest of the chip from a substantially higher frequency. For this to occur, one of the designers would have had to really goof. And in the rare cases where this does happen, they are caught during the silicon debug process prior to release. Because everyone is designing to a target frequency, all of the circuits should run at approximately the same speed. There may be a few that don't due to modelling issues or possibly minor issues in the way the design was created, and these can be tweaked up with small changes.
What you find with speed path debug is that you get initially large jumps with very little effort (correcting mistakes), but as time goes on the gains gradually get harder and require more time and effort for lower return, until eventually the gains are not worth the effort. Somewhere after the initial easy gains, the chip is released to manufacturing and from that point on, the design is merely tweaked. The gains get more gradual until at some point there's no point in further tweaking and you wait for the next design to appear, which may be an architectural revision, a compaction, or a process shrink.
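You can see this diminishing-returns curve even in a toy model. The sketch below is purely illustrative (the path counts, delay spread, and the handful of "goofed" slow paths are all made-up numbers, not real silicon data): the chip's fmax is set by its single slowest path, so fixing the few outliers gives big early jumps, after which fixing the next-slowest path barely moves the needle because thousands of others sit right behind it.

```python
import random

random.seed(42)
TARGET_GHZ = 2.0
TARGET_DELAY = 1.0 / TARGET_GHZ  # cycle time (ns) at the design target

# Most paths were designed right at (or slightly faster than) the target...
delays = [TARGET_DELAY * random.uniform(0.97, 1.00) for _ in range(10_000)]
# ...plus a handful of "goofed" paths that are noticeably too slow.
delays += [TARGET_DELAY * random.uniform(1.05, 1.25) for _ in range(5)]

def fmax(paths):
    # The chip can only run as fast as its slowest path allows.
    return 1.0 / max(paths)

fmax_history = [fmax(delays)]
for step in range(10):
    # Speed path debug: find the slowest path and "fix" it to meet target.
    worst = delays.index(max(delays))
    delays[worst] = TARGET_DELAY * 0.98
    fmax_history.append(fmax(delays))

for step, f in enumerate(fmax_history):
    print(f"after {step:2d} fixes: fmax = {f:.3f} GHz")
```

Running it, the first five fixes (the outright mistakes) recover most of the frequency, while the next five together gain almost nothing: exactly the "easy gains first, then tweaking" shape described above.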
The same occurs with manufacturing process development. Companies generate new process components for several years before the process is released for manufacturing, and the initial parts tend to be slower than the older process that they are replacing. So a 0.25um process will initially be quite a bit slower than a 0.35um process that it will eventually replace. And then gradually it is tweaked. The big gains come early - before the process is even used commercially. After that it's just gradual improvements.
For big jumps to occur, either people have to be making mistakes or you have to have a new paradigm. And engineers are well educated and there is a lot of peer review so the former is fortunately fairly rare, and the latter has not happened for as long as we've been using silicon. Silicon is evolutionary not revolutionary.
<< For example, if problems limit current speeds to 2.2ghz, is it not possible that one single weak point is causing that limitation, and that alleviating that problem could instantly allow the chip to reach 3ghz? >>
Not unless someone messed up badly. The way this works is that you have a process team designing a new process technology, and they have a target for transistor parameters. The design team uses this data to set a goal for the design, say 2GHz. So, they design using the transistor parameters to run at 2GHz, and all circuitry on the chip needs to be designed to run at 2GHz or faster. When the chip is finished it should run at 2GHz. And if you want it to run at 3GHz you need to redesign all of the circuitry that is slower than 3GHz. But that should be nearly every major circuit on the chip, because designing circuitry to run faster than the target just wastes power. So since engineers always try to pad their designs slightly, there may be only a few circuits holding the design back from 2.1GHz, and you can figure out what they are and fix them. But there will be a lot more holding it back from 2.2GHz, and you can maybe figure out these and with a lot of effort fix them. But at some point you will hit the limit where practically every circuit on the chip is holding you back, and there's no way you can fix them all, or if you could then it's easier just to redesign the chip from scratch.
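A quick back-of-the-envelope model makes this concrete. Assuming (hypothetically) a chip with 10,000 timing paths, each designed to the 2GHz target with a modest random safety margin, you can count how many paths would need redesigning at each higher frequency; the specific distribution and counts are invented for illustration, but the shape of the result is the point:

```python
import random

random.seed(7)
TARGET_GHZ = 2.0
CYCLE = 1.0 / TARGET_GHZ  # ns available per clock at the 2GHz target

# Hypothetical chip: 10,000 timing paths, each padded slightly below the
# target cycle time (roughly 8% average margin, with some spread).
paths = [CYCLE * (1.0 - max(0.0, random.gauss(0.08, 0.015)))
         for _ in range(10_000)]

def paths_blocking(freq_ghz):
    """Count paths too slow for the given frequency (delay > cycle time)."""
    return sum(delay > 1.0 / freq_ghz for delay in paths)

for f in (2.0, 2.1, 2.2, 2.5, 3.0):
    print(f"{f:.1f} GHz: {paths_blocking(f):5d} of {len(paths)} paths too slow")
```

The output shows the cliff: nothing blocks the 2GHz target, a small and fixable number of paths block 2.1GHz, thousands block 2.2GHz, and every single path blocks 3GHz, which is the "redesign the chip from scratch" regime.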
<< In such a case, I am certain we would not see a 3ghz chip... but rather the standard 2.3 and 2.4 that we expect... >>
Why? What possible reason would you have to do this? It will cost you no more to make this hypothetical 3GHz chip than a 2.4GHz one, and at 3GHz, due to this jump that you described, you'd be miles ahead of the competition. You could charge a huge amount of money for it, and people would buy it. For data centers, space is money and you buy the fastest thing that you can, whatever it costs. And if you are fast enough, you can start to move into the high-end space occupied by the $50k+ products (like PA-RISC, Alpha, Power4, UltraSPARC) and charge even more money, because you are so much faster than these high-end workstation designs. Why would you trickle the release? Why would you deliberately let everyone catch up with you?