Netburst looking back

cytg111

Lifer
Mar 17, 2008
23,531
13,104
136
So Intel did NetBurst. It didn't work out.
If I have understood this move correctly, it is because Intel envisioned chips running at 10GHz+ speeds.
So I am just wondering, what kind of physics caught up with Intel that couldn't be mitigated?
Intel obviously had a vision at some point, so what didn't pan out, what research failed to deliver? And following that, is it crazy to pursue a return to the GHz race? Are we wiser now than then?
My question is obviously targeted at our tech- and history-inclined members.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,814
4,103
136
It's all been covered; a good Google search should give you plenty of info. Basically, power and heat. The P4 had a long (for the time) pipeline of 20 stages. It was able to hit 3.2GHz. With Prescott they lengthened it again to an absurd 31 stages. That, and a die shrink, only led to 3.8GHz. The P4 was a very complex chip, and all of those stages used a lot of power. That turned into a heat problem.

Today we're back around 20 pipeline stages and hitting close to 5GHz, so yes, we are a bit wiser now. I don't see a return to the GHz race anytime soon. We need new materials/technology, really. I would recommend searching for "Why Prescott failed" or "Why Netburst failed". This should help get you started:

http://www.anandtech.com/show/1611/6
 
Reactions: cytg111

Bouowmx

Golden Member
Nov 13, 2016
1,140
550
146
https://software.intel.com/en-us/blogs/2014/02/19/why-has-cpu-frequency-ceased-to-grow

For standard users, the section "To flip the switch" applies. Increasing frequency requires too much power and cooling.

For world-record overclocking, the section "The conveyor" applies. Longest pipeline stage limits frequency. The author says that reducing stage length is difficult (keeping other factors like pipeline length equal?): one possibility is reducing the component's physical size, so that electrons need to travel shorter distances.
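To put rough numbers on the "longest pipeline stage limits frequency" point, here is a minimal sketch (the delay figures are made-up assumptions for illustration, not values from the article):

```python
# Minimal sketch: the clock period cannot be shorter than the slowest pipeline
# stage plus latch/flip-flop overhead, so f_max ~ 1 / t_slowest_stage.
# All delay numbers here are invented for illustration.

def max_frequency_ghz(total_logic_delay_ps, n_stages, latch_overhead_ps=50):
    """Split a fixed amount of logic evenly into n stages and return the
    highest clock (GHz) the slowest (here: every) stage allows."""
    stage_delay_ps = total_logic_delay_ps / n_stages + latch_overhead_ps
    return 1000.0 / stage_delay_ps  # a 1000 ps period = 1 GHz

for stages in (10, 20, 31):
    print(f"{stages:2d} stages -> ~{max_frequency_ghz(5000, stages):.1f} GHz")

# More stages shorten each stage and raise the achievable clock, but the fixed
# latch overhead (and the branch-misprediction penalty) grows relative to the
# useful work per stage, which is the NetBurst trade-off in a nutshell.
```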

Depending on your background, you may already know the above.
 
Reactions: cytg111

cytg111

Lifer
Mar 17, 2008
23,531
13,104
136
It's all been covered; a good Google search should give you plenty of info. Basically, power and heat. The P4 had a long (for the time) pipeline of 20 stages. It was able to hit 3.2GHz. With Prescott they lengthened it again to an absurd 31 stages. That, and a die shrink, only led to 3.8GHz. The P4 was a very complex chip, and all of those stages used a lot of power. That turned into a heat problem.

Today we're back around 20 pipeline stages and hitting close to 5GHz, so yes, we are a bit wiser now. I don't see a return to the GHz race anytime soon. We need new materials/technology, really. I would recommend searching for "Why Prescott failed" or "Why Netburst failed". This should help get you started:

http://www.anandtech.com/show/1611/6

I know these facts. What I am after is why Intel chose to pursue this avenue, given that with the tech available at the time this was a foreseeable consequence. So I am thinking that Intel made a bet on some research that didn't pan out... Wondering what said research was and whether it would be worth investigating again...
 

cytg111

Lifer
Mar 17, 2008
23,531
13,104
136
https://software.intel.com/en-us/blogs/2014/02/19/why-has-cpu-frequency-ceased-to-grow

For standard users, the section "To flip the switch" applies. Increasing frequency requires too much power and cooling.

For world-record overclocking, the section "The conveyor" applies. Longest pipeline stage limits frequency. The author says that reducing stage length is difficult (keeping other factors like pipeline length equal?): one possibility is reducing the component's physical size, so that electrons need to travel shorter distances.

Depending on your background, you may already know the above.

Thanks for the link.

P ≈ Cdyn · V² · f
(P = power, Cdyn = dynamic capacitance, V = voltage, f = frequency)

So unless you have the power to bend fundamental laws of the universe, that leaves capacitance. To me that reads as: Intel was confident its fabbing capabilities could reach those heights, and either the research into fabbing turned up cold OR the investment in fabbing was redirected toward "mobile" instead of absolute performance. If the latter is true, that means there is an uninvestigated research field that may yet yield us 10GHz+ super chips. Wrong?
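A quick back-of-envelope with that formula, just to see the shape of the problem (the Cdyn and voltage values below are assumptions for illustration, not real chip figures):

```python
# Dynamic power: P ~= Cdyn * V^2 * f.
# Cdyn in nF and f in GHz conveniently gives Watts (the 1e-9 and 1e9 cancel).
# All values are invented for illustration.

def dynamic_power_w(cdyn_nf, v_volts, f_ghz):
    return cdyn_nf * v_volts**2 * f_ghz

base = dynamic_power_w(cdyn_nf=20, v_volts=1.2, f_ghz=3.8)   # Prescott-ish clock
fast = dynamic_power_w(cdyn_nf=20, v_volts=1.5, f_ghz=10.0)  # hypothetical 10GHz part

print(f"{base:.0f} W at 3.8 GHz / 1.2 V")
print(f"{fast:.0f} W at 10 GHz / 1.5 V")
print(f"ratio: {fast / base:.1f}x")

# Pushing frequency usually also means pushing voltage, so power grows much
# faster than linearly with clock speed, even before leakage is counted.
```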
 

Thunder 57

Platinum Member
Aug 19, 2007
2,814
4,103
136
What I am after is why Intel chose to pursue this avenue, given that with the tech available at the time this was a foreseeable consequence. So I am thinking that Intel made a bet on some research that didn't pan out...

I'm not so sure Intel knew they were going to have a power problem. Die shrinks had kept power in check through lower voltages for many years, and clock speeds continued to increase. I don't believe Intel intended to lengthen the pipeline either, but was forced to. I really wonder what Tejas was supposed to look like. AFAIK not much ever leaked out about it.

Perhaps the answer to your question lies here?

Since around 2005–2007 Dennard scaling appears to have broken down. As of 2016, transistor counts in integrated circuits are still growing, but the resulting improvements in performance are more gradual than the speed-ups resulting from significant frequency increases.[2][5] The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges, and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs.[2][5]
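For reference, the idealized Dennard scaling rules the quote refers to look roughly like the sketch below; leakage is exactly the term this textbook model leaves out (illustrative only):

```python
# Idealized Dennard scaling: shrink linear dimensions by k and scale voltage by
# 1/k, and power density stays constant. Leakage is what this model ignores.
k = 1.4  # roughly one full node shrink (~sqrt(2))

scaling = {
    "transistor area":      1 / k**2,
    "capacitance":          1 / k,
    "voltage":              1 / k,
    "frequency":            k,
    "power per transistor": (1 / k) * (1 / k)**2 * k,  # C * V^2 * f
    "transistors per area": k**2,
}
scaling["power density"] = scaling["power per transistor"] * scaling["transistors per area"]

for name, factor in scaling.items():
    print(f"{name:20s} x{factor:.2f}")

# Power density comes out x1.00: the classic free lunch. Once threshold voltage
# (and with it supply voltage) stops scaling and leakage grows at small
# geometries, that constant-power-density assumption breaks down.
```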
 
Reactions: cytg111

cytg111

Lifer
Mar 17, 2008
23,531
13,104
136
I'm not so sure Intel knew they were going to have a power problem. Die shrinks had kept power in check through lower voltages for many years, and clock speeds continued to increase. I don't believe Intel intended to lengthen the pipeline either, but was forced to. I really wonder what Tejas was supposed to look like. AFAIK not much ever leaked out about it.

Perhaps the answer to your question lies here?

So when breaking in new research at the physical level, leakage turned out to be a... cold mistress. So, reading this right, leakage got the better of Intel?
 

Thunder 57

Platinum Member
Aug 19, 2007
2,814
4,103
136
So when breaking in new research at the physical level, leakage turned out to be a... cold mistress. So, reading this right, leakage got the better of Intel?

That's how I understand it. This is why so much work has been done with SOI and FinFET. There seems to be some hope with FDSOI, but I'm not sure we will see it in high end chips. It seems to be more for lower power devices.

EDIT

I would have to go back and look, but I'm pretty sure Intel has said most of the benefit from 32nm to 22nm came from FinFET, and not the die shrink. This makes sense, as 20nm bulk seemed to provide very little if any benefit over 28nm. That is why we never saw 20nm GPUs.
 
Reactions: cytg111

Mopetar

Diamond Member
Jan 31, 2011
8,005
6,449
136
I know these facts. What I am after is why Intel chose to pursue this avenue, given that with the tech available at the time this was a foreseeable consequence. So I am thinking that Intel made a bet on some research that didn't pan out... Wondering what said research was and whether it would be worth investigating again...

To some degree I think it was related to marketing. At that time, processors sold on MHz more than anything. Having bigger numbers made it easier to sell a processor, because even if you don't know much about how they work, one with bigger performance numbers is going to be better.

On the other hand, what was Intel's alternative? They wanted to build a new architecture instead of continuing to refine the existing one. I'm not even sure if someone had tried a design quite like NetBurst before so it's only easy to see all of the problems in hindsight.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,814
4,103
136
To some degree I think it was related to marketing. At that time, processors sold on MHz more than anything. Having bigger numbers made it easier to sell a processor, because even if you don't know much about how they work, one with bigger performance numbers is going to be better.

On the other hand, what was Intel's alternative? They wanted to build a new architecture instead of continuing to refine the existing one. I'm not even sure if someone had tried a design quite like NetBurst before so it's only easy to see all of the problems in hindsight.

Oh, I'm sure part of it was marketing. Think about it: "Hey, we can lengthen the pipeline some, hopefully make up for the lost IPC with better branch prediction, and then sell CPUs at a significantly higher frequency!" Not necessarily a bad idea either.

It's very true that it was difficult to convince someone that a 1.4GHz Athlon was competitive with a 2.0GHz P4. I had this problem even among friends who were into computers. I remember one said something along the lines of, "They measure MHz differently". Obviously not the case but even educated people had a hard time understanding why the Athlon was faster per MHz than the P4.
 

DrMrLordX

Lifer
Apr 27, 2000
21,802
11,157
136
Intel apparently did have Netburst chips in the lab reaching high clockspeeds. What they couldn't account for were the leakage issues they had with their (generally terrible) 90nm process.
 
May 11, 2008
20,041
1,288
126
Intel also introduced strained silicon with the 90nm process and has used it ever since.

http://www.intel.com/pressroom/kits/advancedtech/doodle/ref_strain/strain.htm

http://www.eetimes.com/document.asp?doc_id=1145018
Mark Bohr, director of process integration and architecture at Intel's development facility in Hillsboro, said Intel has developed a form of strained silicon that enhances drive current by 10 to 20 percent but that adds only 2 percent to the cost of a processed wafer. Intel will be the first company to move strained silicon into manufacturing, Bohr said.



Bohr said the addition of strained silicon, and a gate oxide only 1.2 nm thick, will support the faster circuits. The 90-nm NMOS transistors have a drive current of 1.2 milliamps/micron; the PMOS devices are rated at 0.6 mA/micron. That's a significant increase over the 130-nm process.



Intel also scaled the maximum operating voltage down to 1.2 volts, from 1.4 V in the 130-nm process. Bohr said scaling the operating voltage has become more difficult with each process generation, largely because there are limits to how much the threshold voltage can be reduced. Reducing the operating voltage is key to keeping power consumption under control, and Intel is likely to operate its mobile processors and other power-sensitive products at around 1 V at the 90-nm node.

If I am not mistaken, the later-developed FinFET technique is for lowering the threshold voltage with less leakage, thanks to more gate control.
 

SPBHM

Diamond Member
Sep 12, 2012
5,058
410
126
Intel apparently did have Netburst chips in the lab reaching high clockspeeds. What they couldn't account for were the leakage issues they had with their (generally terrible) 90nm process.

Well, they made 65nm P4s that ran as high as 3.6GHz with a 65W TDP, so that (leakage) was probably going down quite a bit?
 
Reactions: cytg111

NostaSeronx

Diamond Member
Sep 18, 2011
3,688
1,222
136
P68(Netburst) and P7(Itanium) were closely tied together.

P68 was targeting instruction execution speed.
P7 was targeting instruction execution density.

Eventually, the two would have been fused into the eventual P8. Except both designs went different ways, and P8 was discarded in favor of an improved P6 via Core.

https://www.google.com/patents/US6629232
https://www.google.com/patents/US20020004895

There was something else P8 was supposed to bring.
https://www.google.com/patents/US20030163669
*cough* https://patentimages.storage.googleapis.com/US20030163669A1/US20030163669A1-20030828-D00005.png *cough* //read title and summary, then look at this image and patent #2.
 
Last edited:
Reactions: cytg111

cytg111

Lifer
Mar 17, 2008
23,531
13,104
136
P68(Netburst) and P7(Itanium) were closely tied together.

P68 was targeting instruction execution speed.
P7 was targeting instruction execution density.

Eventually, the two would have been fused into the eventual P8. Except both designs went different ways, and P8 was discarded in favor of an improved P6 via Core.

https://www.google.com/patents/US6629232
https://www.google.com/patents/US20020004895

There was something else P8 was supposed to bring.
https://www.google.com/patents/US20030163669
*cough* https://patentimages.storage.googleapis.com/US20030163669A1/US20030163669A1-20030828-D00005.png *cough* //read title and summary, then look at this image and patent #2.

So they had "morphcore" on the table allready back then? Will it ever materialize though?
Damn I sort of wanted Intels Itanium plan to pan out.

Still the point stands, as SPBHM pointed out they got leakage under control, we have 4-5Ghz procs now with ~60watts tdp .. Then why dont we have 10Ghz 120watt cpus? - And I know the graphs, its an exponential growth curve that kills anything above 5 at the moment... But why? Is it because the proces nodes are developed for and targeted mobile? Could 14nm be reengineered towards much higher frequencies? (i want my cake dammit)
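One way to see why the curve blows up above ~5GHz: near the top of the frequency range, voltage has to rise roughly in step with the clock, so P ≈ Cdyn·V²·f behaves more like f³. A simplified sketch (the baseline numbers are assumptions, not measured values):

```python
# Simplified model: if supply voltage must rise roughly in proportion to the
# target clock, then P ~ Cdyn * V^2 * f ~ f^3.
# The baseline figures below are assumptions for illustration only.

base_f_ghz, base_power_w = 5.0, 100.0  # hypothetical chip at its design point

for target_f in (5.0, 6.0, 8.0, 10.0):
    power = base_power_w * (target_f / base_f_ghz) ** 3
    print(f"{target_f:4.1f} GHz -> ~{power:4.0f} W (same design, same node)")

# 10 GHz lands around 8x the power of 5 GHz under this model, before counting
# the extra leakage that comes with higher voltage and temperature.
```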
 

littleg

Senior member
Jul 9, 2015
355
38
91
Northwood was a damned good core in its day, they just took it too far going for clockspeed with Prescott. The unexpected emergence of A64 changed the game.
 

cytg111

Lifer
Mar 17, 2008
23,531
13,104
136
Northwood was a damned good core in its day, they just took it too far going for clockspeed with Prescott. The unexpected emergence of A64 changed the game.

As in AMD64? Yeah, Intel should probably have cut AMD in on the new ISA. Instead it had its lunch completely ruined.
 

KTE

Senior member
May 26, 2016
478
130
76
Pentium Pro->Pentium M happened

The M mobile chips ran cooler and faster at 1GHz lower. Intel couldn't get anywhere near this performance scaling the P4. The next challenge was process prowess, to scale this mobile design in frequency.

Research and test vehicles do not necessarily account for real HVM results, and with the move to P4, they didn't. Power blew up far above what was expected. Most forward-looking technology predictions were completely wrong, as they were missing then-unknown transistor factors and behaviors. Intel realized the design was going nowhere due to low IPC and leakage, hence low-efficiency problems.

Just remember that these innovative engineers are not allowed free rein, and you don't get to see their genius except in some filtered, limited form. These are corporations with a lot of internal politics, and management calls the shots. Many choices the techies disagree with are still made, and they are ignored. Save maybe the person who is tight with the project manager. That's humans 101 for ya.

Looking back, I think Intel was following IBM's high-level design choices a bit too closely in those periods, but that is when they decided to part ways in some aspects.

I don't think a return to the GHz race is back on just yet. I suspect something is going to throw a spanner into the works once again, and companies will be back to the drawing board as to increasing IPC. Maybe even a new architecture.

I am sure others will have answered in better depth and breadth, so this is just a high-level overview.

Sent from HTC 10
(Opinions are own)
 
Reactions: cytg111