Why does CPU design take so long?

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
And what, some AI handles everything?

It takes so long because it's so complex, and validation takes such a long time to ensure it's relatively bug-free. Unlike software, which is a complete joke.
 

kimmel

Senior member
Mar 28, 2013
248
0
41
Have you ever written Verilog or VHDL? Have you ever had to take Verilog from someone who thinks #1ps is a synthesizable construct, or who enjoys writing Verilog that looks like if(foo==1'bx)? Have you ever used EDA tools from Synopsys and Cadence? Have you ever had someone look baffled when you tell them they just blew through the entire area budget for their design when all they did was add one line of Verilog (which created a giant array of cells)?

All of these are why the design takes a while. We're not even including actual manufacture, the subsequent debug time, and the fab turnaround times.

People who complain about C and C++ don't realize how good they have it: a relatively sane language that is somewhat standard.
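
For the uninitiated, here is a hypothetical sketch of the constructs being complained about above (not from any real design; the module and signal names are invented):

Code:
// Hypothetical illustration only - not from any real design.
// (SystemVerilog is assumed for the #1ps time literal.)
module bad_habits (
    input  wire         clk,
    input  wire         foo,
    input  wire [63:0]  a, b,
    output reg          q,
    output reg  [127:0] big
);
    always @(posedge clk) begin
        // #1ps is a simulation-only delay: synthesis tools ignore or
        // reject it, so it buys you nothing in real silicon.
        #1ps q <= foo;

        // 1'bx exists only in simulation (a real wire is 0 or 1), and
        // foo == 1'bx evaluates to x (never true), so this branch is
        // dead in simulation and meaningless to synthesis.
        if (foo == 1'bx)
            q <= 1'b0;
    end

    // The innocent-looking single line that blows the area budget:
    // a 64x64 multiply infers a giant array of cells in one stroke.
    always @(posedge clk)
        big <= a * b;
endmodule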
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
kimmel said:
Have you ever written Verilog or VHDL? Have you ever had to take Verilog from someone who thinks #1ps is a synthesizable construct, or who enjoys writing Verilog that looks like if(foo==1'bx)? Have you ever used EDA tools from Synopsys and Cadence? Have you ever had someone look baffled when you tell them they just blew through the entire area budget for their design when all they did was add one line of Verilog (which created a giant array of cells)?

All of these are why the design takes a while. We're not even including actual manufacture, the subsequent debug time, and the fab turnaround times.

People who complain about C and C++ don't realize how good they have it: a relatively sane language that is somewhat standard.


I'm certain that if I knew all that, I wouldn't have tried to start a discussion about it. Perhaps it's a bit too low-level for my fellow forumers and me here at AT.
 

el etro

Golden Member
Jul 21, 2013
1,581
14
81
I guess Dresdenboy could help answer part of it for you.

Anyway, I think designing a processor is as complex as designing a car engine...
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Why does CPU design take so long?

The answer is the project management triangle.

It takes a long time (in our perception of time) because the companies making the CPUs don't want to spend an insane amount of money developing the chip in order to bring it to market faster, and they aren't willing to sacrifice quality or scope (features such as performance, power consumption, instruction sets, iGPU, etc.) to get there either.

If 90% of the market were willing to pay 3x more for the same product with the same features, provided it arrived on the market in just 2 yrs instead of 3 yrs, then you can bet companies would be throwing even more money at developing the chips on a faster timeline.

Why don't they just write a program to design a CPU?

Going back to the project management triangle, you can do what you propose, but not without sacrificing scope (features), quality (bugs; validating a computer-developed CPU is difficult), or development budget.

But in the end, since these products are born in a semi-capitalistic/semi-free-market environment, the answer to pretty much any question you can come up with regarding the scope or timeline of the product is that the economics don't work out unless it is done the way they currently do it (i.e., it's the money).
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Idontcare said:
The answer is the project management triangle.

It takes a long time (in our perception of time) because the companies making the CPUs don't want to spend an insane amount of money developing the chip in order to bring it to market faster, and they aren't willing to sacrifice quality or scope (features such as performance, power consumption, instruction sets, iGPU, etc.) to get there either.

If 90% of the market were willing to pay 3x more for the same product with the same features, provided it arrived on the market in just 2 yrs instead of 3 yrs, then you can bet companies would be throwing even more money at developing the chips on a faster timeline.

Going back to the project management triangle, you can do what you propose, but not without sacrificing scope (features), quality (bugs; validating a computer-developed CPU is difficult), or development budget.

But in the end, since these products are born in a semi-capitalistic/semi-free-market environment, the answer to pretty much any question you can come up with regarding the scope or timeline of the product is that the economics don't work out unless it is done the way they currently do it (i.e., it's the money).


Thank you for the response. A few follow-up questions, if I may?
How far removed are the automation tools from what I posited in the OP?
How would hand-made CPU schematics be quicker to develop than having the design simulated?

If you do follow up, please keep your response at a high level of abstraction. Maybe think of it as a challenge - hehe, maybe try your world-famous car analogies.
 

gdansk

Platinum Member
Feb 8, 2011
2,488
3,377
136
There are shitloads of programs and code that go into the process of making a processor. Almost everything that can be automated is automated - even the testing, verification, placement, and routing. It's still very complicated after all that automation. Some areas of the design are extremely complex (branch predictors) while others are (usually) simpler, such as the cache. Modern processors are so complicated that they must have AES, power control & monitoring, and so much more in hardware. Verification, which is the most automated step, takes the longest because some errors are not found until other errors are fixed first.

tl;dr: they're huge and do too many things.
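
As a toy illustration of what automated testing means at the smallest possible scale, here is a made-up self-checking testbench; real flows layer UVM, constrained-random stimulus, and coverage collection on top of this basic idea:

Code:
// Toy example only - a minimal self-checking testbench.
module adder (input [7:0] a, b, output [8:0] sum);
    assign sum = a + b;  // 9-bit context keeps the carry bit
endmodule

module tb;
    reg  [7:0] a, b;
    wire [8:0] sum;
    integer i, errors;

    adder dut (.a(a), .b(b), .sum(sum));

    initial begin
        errors = 0;
        for (i = 0; i < 1000; i = i + 1) begin
            a = $random; b = $random;
            #1; // let the combinational adder settle
            if (sum !== a + b) begin
                errors = errors + 1;
                $display("MISMATCH: %0d + %0d -> %0d", a, b, sum);
            end
        end
        $display("%0d errors in 1000 random vectors", errors);
        $finish;
    end
endmodule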
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
From a simple enthusiast point of view, I see processors as some of the most complex and amazing machines mankind is able to make, requiring collaboration from a wide range of disciplines (chemistry, engineering in many of its variants, physics, computing obviously, etc.), and each year that passes, the complexity gets even more mind-boggling. I mean, from simple calculators with a few transistors, to a 486 with a million transistors not that long ago, we're now into the billions! Breathtaking stuff.

I can easily see how millions of man-hours go into these little wonders we buy and use; it's amazing to think of all the knowledge that's been put into practice to make these a reality. I love forums for this very reason: the possibility to see people in their respective fields give their take on the matter.

I can't even begin to think what would happen when a technology that does to the transistor what the transistor did to the vacuum tube gets invented. Not to mention if it has the ability to scale and progress over the years just as much as the transistor did and does...
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
What are you guys all talking about with AMD using a program to automatically design a CPU? The synthesis that can be automated is very far from all of CPU design.

The question isn't really that much different from asking why Microsoft doesn't just write a program to design the next version of Windows.
 

Borealis7

Platinum Member
Oct 19, 2006
2,914
205
106
because even for Intel's top scientists, reverse-engineering a derelict alien spaceship on the far side of the moon takes a lot of time.
but you didn't hear it from me.
 
Dec 30, 2004
12,554
2
76
Borealis7 said:
because even for Intel's top scientists, reverse-engineering a derelict alien spaceship on the far side of the moon takes a lot of time.
but you didn't hear it from me.

They reverse-engineer AMD chips. That's how they write compilers that make AMD worse.

And do more studies into the Schottky diode.
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
Why don't they just write a program to design a CPU?

ALL the damn stages are complex:

1) IOS
2) Design (RTL)
3) Functional Verification (UVM, etc.)
4) Frontend (synthesis, STA)
5) Backend (physical design, DFT)
6) Chip finishing

Hell, I work on just synthesis, STA, and physical design, and I already see plenty of issues to keep me busy (is your UPF/CPF right? why by Odin's nutsack did you add 500ps of uncertainty AND 10% of timing derates? why do you have dummy 1'b0/1'b1, etc.?)

While it's true that we do have CAD programs to help out (Virtuoso, PrimeTime/ETS, EDI/ICC/AtopTech, DC/RC, etc.), it's a FAAAAAR cry from just typing in "make silicon" and being done with it.
If it were, you can be damned sure management wouldn't pay a bunch of neckbeards $100K+.

One more thing - P ?= NP. Problems like placement and routing are NP-hard, so all CAD programs have to resort to heuristics. Something as "simple" as placement (especially when you have 100s of macros, like in a complex CPU) is already a PITA, let alone CTS, routing, timing closure, power/leakage optimization, etc.
Add the fact that design is generally an over-constrained optimization problem, and it just makes things much more complicated.

PHB: This block must fit in 1um^2, consume only 1nW, run at 10GHz, have 100% test coverage and OCC, and implement all customer functionality.
Me: No problem. You want fries with that? :biggrin:
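
On the dummy 1'b0/1'b1 point: a hypothetical sketch (invented module and port names) of how constants wired into ports become real tie cells and dead logic that the tools then have to sweep up:

Code:
// Hypothetical illustration - module and port names are invented.
module alu (
    input  wire        clk,
    input  wire [31:0] a, b,
    input  wire        mode,
    output reg  [31:0] y
);
    always @(posedge clk)
        y <= mode ? (a + b) : (a - b);
endmodule

module wrapper (
    input  wire        clk,
    input  wire [31:0] a,
    output wire [31:0] y
);
    // Tying ports to constants looks free in RTL, but each 1'b0/1'b1
    // becomes a tie-low/tie-high cell, and the whole subtractor path
    // inside u_alu is now dead logic that synthesis must discover and
    // sweep away (or, on a bad day, fail to).
    alu u_alu (
        .clk  (clk),
        .a    (a),
        .b    (32'b0),  // dummy tie-low
        .mode (1'b1),   // dummy tie-high
        .y    (y)
    );
endmodule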
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Exophase said:
What are you guys all talking about with AMD using a program to automatically design a CPU? The synthesis that can be automated is very far from all of CPU design.

The question isn't really that much different from asking why Microsoft doesn't just write a program to design the next version of Windows.

And write a program to write that program....
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
I won't add to the detailed answers about such a project's complexity, the areas where automation is being used, etc. But there is an interesting twist to the OP. Since I work with machine learning, I also know the possibilities of evolutionary computing, which can be used to create electronic circuits. Genetic algorithms, and even more so genetic programming, have been used to build some digital or analog circuits (made of at most tens of components) which are as good as or even better than patented ones. But since there is a computational limit, the complexity of evolvable logic is also limited. A 20-bit multiplexer is a typical benchmark.

A more interesting option is to do an overall optimization using GAs with microarchitectural simulations, including estimations for power, delays, etc. This is an old idea of mine, but I have also found some recent papers describing such possibilities.

If I find them, I'll post the links. Somewhere I also have presentations covering the PPro and POWER4 design steps, which I could also provide here.
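
For a sense of scale, assuming the benchmark meant is the classic 20-input multiplexer from the genetic programming literature (4 address bits selecting among 16 data bits), the entire evolved target fits in a few lines of Verilog:

Code:
// The whole "hard" evolutionary benchmark, written by hand:
// 4 address bits pick one of 16 data bits (20 inputs total).
// Evolving this gate by gate is near the practical limit, while a
// CPU contains millions of blocks of at least this complexity.
module mux20 (
    input  wire [15:0] data,
    input  wire [3:0]  addr,
    output wire        y
);
    assign y = data[addr];
endmodule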
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
They reverse-engineer AMD chips. That's how they write compilers that make AMD worse.

Are you being serious? Because if you are, that isn't how Intel's dynamic code dispatcher ever worked.

They checked machine status registers to see what instruction sets the processor supported and branched to different pieces of code based on that. There were, for example, x87-only, SSE2, and SSE4 code paths, not Intel or AMD processor paths, so the code that ran on AMD processors is the same code that would run on older Intel processors. The optimization space that's particularly good for one uarch and particularly bad for another is pretty small, especially these days, meaning that they'd be hard-pressed to generate code that was explicitly worse on AMD processors without also being worse on the Intel processors that ran it. It may be that the code is generated with no attention whatsoever to avoiding glass jaws on AMD processors, but that's a far cry from generating code that's intentionally bad.

The problem was that they used Intel ID codes to perform this capability check instead of the vendor-neutral bits that all processors support, meaning that AMD (and VIA) would end up on dispatch paths that were not the most suitable for the processor, although this varied depending on the compiler and library.

One thing people often don't get is that the auto-dispatcher is optional, and a lot of benchmarks that are considered suspect for being compiled with ICC actually don't use the dispatcher at all; they are simply only compatible with processors that support the targeted fixed instruction sets.
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
Dresdenboy said:
A more interesting option is to do an overall optimization using GAs with microarchitectural simulations, including estimations for power, delays, etc. This is an old idea of mine, but I have also found some recent papers describing such possibilities.

I looked at GAs a little when writing a placer based on shape-curves + slicing-trees a few years back. Perhaps I did not delve deeply enough, but it seemed a little difficult to implement some of its ideas, e.g. what exactly does one swap between two placement solutions, and does it necessarily "evolve" to a global (rather than local) minimum?

Also, I found it a little difficult to express some problems (routing optimization, etc.) in terms of a GA. To be fair, though, it wasn't straightforward to express as an ILP either.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
I would think that it probably isn't as tough to design a functional CPU core as it is to design a fast CPU core. While a machine can be programmed to look for ways to speed up the chip, it can only do things that it is programmed to do. At the end of the day, engineers need to sit there and ponder what they can do to get more work done on the CPU faster, with lower power and less die area.

Some of these things, like figuring out how to lay out the transistors and route the wiring in a cache to save space, would seem to be pretty easy for a computer to do. Some things, like adding new instructions for encryption and video decoding, would seem like optimizations that are easier for a person to tackle.
 

Jeff7

Lifer
Jan 4, 2001
41,599
19
81
I'm astonished at the speed of design. It's amazing to me.
.vodka said:
From a simple enthusiast point of view, I see processors as some of the most complex and amazing machines mankind is able to make, requiring collaboration from a wide range of disciplines (chemistry, engineering in many of its variants, physics, computing obviously, etc.), and each year that passes, the complexity gets even more mind-boggling. I mean, from simple calculators with a few transistors, to a 486 with a million transistors not that long ago, we're now into the billions! Breathtaking stuff.

I can easily see how millions of man-hours go into these little wonders we buy and use; it's amazing to think of all the knowledge that's been put into practice to make these a reality. I love forums for this very reason: the possibility to see people in their respective fields give their take on the matter.
+1

Computer processors are incredibly complex devices. A new high-end chip fab can cost billions of dollars.
A consumer-level chip can easily have a few hundred million transistors.

Design a machine that has 200 million switches in it.
Done? Ok, great.
Are you sure it's 100% correct, and that some special combination of inputs won't cause some of it to trigger incorrectly? I sure hope so; the tooling is expensive.

Now each of those switches has to be a few dozen atoms wide. Consistently, too. Scrapping precision-doped pure silicon isn't cheap.

Write a program to do it? Sure. You could just string together a bunch of Pentium Pro cores and call it good; a computer program could surely handle that. Want more computing power? Just add more cores. Better add another 200A breaker box to your house to handle the power and air conditioning requirements.
Die shrinking is what pushes the limits. "Hey, the wavelength of UV light (a few hundred billionths of a meter) is becoming a problem. It's too big. We need smaller light." What would the computer program do to handle that?

They're incredibly complex little devices that incorporate some of our most advanced scientific knowledge in their production, and they're designed using computers that can take the place of tens of thousands of people performing calculations with a slide rule or hand calculator.

They push the boundaries of what we know about physics, like reaching the point where insulating barriers are so thin that they can't reliably block electrons from passing through. Computers aren't yet smart enough to be able to figure out alternate design paths when physics says "You can't do this anymore."




.vodka said:
I can't even begin to think what would happen when a technology that does to the transistor what the transistor did to the vacuum tube gets invented. Not to mention if it has the ability to scale and progress over the years just as much as the transistor did and does...
Room-temperature quantum computing.


"If a quantum computer could be built with just 50 quantum bits (qubits), no combination of today’s top 500 supercomputers could successfully outperform it."

Problem is, making those quantum bits in the first place is kind of tricky.


Build something like that in a compact form, and walking not too far behind it will be an android at or beyond the level of Data.




Incidentally, it is this level of complexity that is a big reason you have multiple revisions of the silicon, and errata sheets for chips: The thing is such an incredibly complex machine that the human mind is simply unable to keep track of all its workings. Things are forgotten, or not anticipated.
Then you add software into the mix, which instructs that complex machine how to function. Sometimes it seems amazing that any of it works at all.
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
WhoBeDaPlaya said:
ALL the damn stages are complex:

1) IOS
2) Design (RTL)
3) Functional Verification (UVM, etc.)
4) Frontend (synthesis, STA)
5) Backend (physical design, DFT)
6) Chip finishing

Hell, I work on just synthesis, STA, and physical design, and I already see plenty of issues to keep me busy (is your UPF/CPF right? why by Odin's nutsack did you add 500ps of uncertainty AND 10% of timing derates? why do you have dummy 1'b0/1'b1, etc.?)

While it's true that we do have CAD programs to help out (Virtuoso, PrimeTime/ETS, EDI/ICC/AtopTech, DC/RC, etc.), it's a FAAAAAR cry from just typing in "make silicon" and being done with it.
If it were, you can be damned sure management wouldn't pay a bunch of neckbeards $100K+.

One more thing - P ?= NP. Problems like placement and routing are NP-hard, so all CAD programs have to resort to heuristics. Something as "simple" as placement (especially when you have 100s of macros, like in a complex CPU) is already a PITA, let alone CTS, routing, timing closure, power/leakage optimization, etc.
Add the fact that design is generally an over-constrained optimization problem, and it just makes things much more complicated.

PHB: This block must fit in 1um^2, consume only 1nW, run at 10GHz, have 100% test coverage and OCC, and implement all customer functionality.
Me: No problem. You want fries with that? :biggrin:

Listen to this guy. Once you've used EDA tools, you'll understand. It's really not the design that takes so long; the problem is that about half your time is spent fighting the automation tools that we've built to do the complex stuff for us. It's not that the tools are inherently poor (although some are); it's simply that the complexity of the computations being performed is high, and getting them right without bugs isn't possible right now (although the small and secretive userbases don't help the process). To be frank, it's amazing the whole toolchain works at all. I'm speaking mostly about synthesis onwards; RTL/HDL design is obviously a different beast.

Constraints are always hilarious, but that's more of a management or architectural problem: you just can't do everything.
"Oh sure, 10GHz is no problem, but unfortunately we've had to restrict the cell library to exclusively min-length ULVT cells, so the chip is burning 1kW. We also need 18 metal layers."

kimmel said:
Have you ever written Verilog or VHDL? Have you ever had to take Verilog from someone who thinks #1ps is a synthesizable construct, or who enjoys writing Verilog that looks like if(foo==1'bx)? Have you ever used EDA tools from Synopsys and Cadence? Have you ever had someone look baffled when you tell them they just blew through the entire area budget for their design when all they did was add one line of Verilog (which created a giant array of cells)?

All of these are why the design takes a while. We're not even including actual manufacture, the subsequent debug time, and the fab turnaround times.

People who complain about C and C++ don't realize how good they have it: a relatively sane language that is somewhat standard.

These people should be fired immediately.

It baffles me how someone can imagine such a statement in circuit form. It's almost like these guys don't even try to think about the actual implementation of their HDL. What's strange is that this point was hammered home in the Verilog courses I took. In one ear and out the other, I guess...
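
For contrast, a minimal sketch (invented signal names) of the synthesizable idiom such courses teach: you don't test for 1'bx in RTL, you design the unknown out with an explicit reset, and you keep x-checking in the testbench where it belongs.

Code:
// Minimal sketch, assuming a standard async-reset register idiom.
module good_habits (
    input  wire clk,
    input  wire rst_n,
    input  wire foo,
    output reg  q
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            q <= 1'b0;   // reset removes the unknown; no 1'bx tests needed
        else
            q <= foo;
    end
endmodule

// Testbench-only x-check (fine in simulation, never in RTL):
// if (foo === 1'bx) $display("foo is undriven!");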
 