Is current computing technology dying?

ThatWasFat

Member
Dec 15, 2001
93
0
0
There has been very little progress in the last year or so. Almost all of Intel's new technology is aimed at keeping their old technology from melting, and AMD really hasn't done much either. There are no "big things" coming anywhere. A few dual cores are coming, but it's all just the same stuff over and over, and I think it's running out of lifetime. Does the next life of computers lie in nanotubes? Or perhaps quantum computing? How much farther do you think current transistor technology can go?
 

walla

Senior member
Jun 2, 2001
987
0
0
I think much of this is due to the fact that as transistor sizes and interconnects become smaller, analog characteristics such as resistance, capacitance and inductance become increasingly important factors that must be considered in architectural design. There are not many great engineering tools that incorporate both architectural and circuit-level characteristics in an accurate or practical manner - this is hindering innovation.

But obviously, shrinking process sizes increases the pervasiveness of circuit-level phenomena. Leakage current of transistors is increasing, and it is a major contributor to power consumption. Wire capacitance is increasing due to smaller width and longer length, as well as interwire capacitance that manifests itself as the distances between wires shrink. I find that a lot of research in the computer architecture field right now is focusing on low-power techniques...some of which present encoding algorithms to decrease switching activity, some of which can "sense" circuit-level phenomena such as current draw in order to trade off power vs. performance. I've read about some of these techniques, and it makes me realize that without this type of innovation, we simply can't sustain current trends in voltage and dimension scaling.
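
To make the "encoding" idea concrete, here is a minimal sketch of one classic switching-reduction scheme, bus-invert coding. This is just an illustrative toy with a made-up 8-bit bus and data stream, not something from any specific paper: if sending the next word would toggle more than half the bus lines, send its complement instead and flag it on an extra invert line.

BUS_WIDTH = 8   # example width; real buses vary

def hamming(a, b):
    # number of bit positions in which a and b differ
    return bin(a ^ b).count("1")

def bus_invert_encode(words, width=BUS_WIDTH):
    # Send the complement of a word (and raise the "invert" line) whenever the
    # plain word would toggle more than half the bus lines. This bounds the
    # transitions per transfer to width/2 + 1.
    mask = (1 << width) - 1
    prev_plain = prev_bus = prev_inv = 0
    plain = encoded = 0
    out = []
    for w in words:
        plain += hamming(prev_plain, w)                  # cost with no encoding
        inv = 1 if hamming(prev_bus, w) > width // 2 else 0
        on_bus = (~w) & mask if inv else w
        encoded += hamming(prev_bus, on_bus) + (inv ^ prev_inv)
        prev_plain, prev_bus, prev_inv = w, on_bus, inv
        out.append((on_bus, inv))
    print(f"bus transitions: {plain} unencoded vs {encoded} with bus-invert")
    return out

# made-up stream with bursts of near-complement values
bus_invert_encode([0x00, 0xFF, 0x0F, 0xF0, 0xAA, 0x55])

On that contrived stream the encoding cuts the bus transitions by more than half, which is exactly the kind of switching-activity saving this research is after.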

Of course, a simple solution to increasing performance without needing to scale is multiprocessing: multi-core chips. Theoretically, two processors stuck on one chip can yield twice the performance of just one, and that is a much simpler solution than improving single-core performance. However, this can only be seen as a temporary solution to sustain performance expectations in the computer market.

I think if silicon were to have any long-term viability, it would require some breakthroughs in material science that could appreciably reduce capacitance of materials or increase conductivity. Barring that, I believe we are coming upon a wall, and probably the end of the silicon era within the next 20 years. It is my guess that optical computing will replace it...but I don't have a lot of insight into that.
 

WoodenPupa

Member
Feb 22, 2005
35
0
0
I knew when Intel backed away from the 4 GHz barrier we were in trouble. Not that it's a bad thing in the long run, as it forces chip designers to start ushering in the new era. I can recall articles from a few years ago projecting that Moore's law would not even be able to *keep up* with the speed increases...Intel was showing off functional 10 GHz concept CPUs in their labs...look where we are now.
 

ribbon13

Diamond Member
Feb 1, 2005
9,343
0
0
One thing is for certain: we're bound to electricity. The light in optical computing has to come from somewhere.

Also, another thing is getting back to the good ol' basics: finding the best algorithm and writing tight assembly. Too many programs nowadays are bloated and sloppily coded. The #1 OS, for example.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: walla
Of course, a simple solution to increasing performance without needing to scale is multiprocessing: multi-core chips. Theoretically, two processors stuck on one chip can yield twice the performance of just one, and that is a much simpler solution than improving single-core performance. However, this can only be seen as a temporary solution to sustain performance expectations in the computer market.

On the contrary. I think this is the long-term, highly viable solution to sustain performance expectations. It's just going to get better and better.
It's all in the software model. Almost anything that is really performance-craving can be made to go faster on multicores.
 

walla

Senior member
Jun 2, 2001
987
0
0
Originally posted by: Vee
Originally posted by: walla
Of course, a simple solution to increasing performance without needing to scale is multiprocessing: multi-core chips. Theoretically, two processors stuck on one chip can yield twice the performance of just one, and that is a much simpler solution than improving single-core performance. However, this can only be seen as a temporary solution to sustain performance expectations in the computer market.

On the contrary. I think this is the long-term, highly viable solution to sustain performance expectations. It's just going to get better and better.
It's all in the software model. Almost anything that is really performance-craving can be made to go faster on multicores.


That may be. However, if the core sizes aren't shrinking, and we are sticking 2, 4, 8 cores on one chip, chip sizes will become prohibitively large and expensive. Don't you think? But perhaps this is sustainable for the long term. Time will tell.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
They are still shrinking the transistor size.

65nm chips should show up by the end of the year for Intel.

This allows you to put more cores on the same die.

Multi-core is probably where the money will start to be, along with compiler optimizations to take advantage of the multiple cores.
 

walla

Senior member
Jun 2, 2001
987
0
0
Originally posted by: Genx87
They are still shrinking the transistor size.

65nm chips should show up by the end of the year for Intel.

This allows you to put more cores on the same die.

Multi-core is probably where the money will start to be, along with compiler optimizations to take advantage of the multiple cores.

Right, right. And the number of cores for a given chip area will scale quadratically with the reduction in transistor size and circuit area.

So if we want 4 cores within a constant area, we need to reduce transistor size by half. Theoretically this would lead to about an 8-fold performance increase (a 2x speedup for each core, times 2^2 = 4x as many cores). If we were to expect a "Moore's Law" type of performance increase, I estimate that you'd have to halve transistor sizes every 6 years to maintain that rate (if we double performance every two years).

When I think about it that way, multi-core architectures do seem much more viable. Of course the question becomes how efficiently computer architects and programmers can expose the potential gain in performance due to the parallelism of multi-core architectures.
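
Spelling that arithmetic out, just as a back-of-the-envelope sketch (the 2x-per-core gain and the two-year doubling period are the assumptions stated above, not measurements):

import math

# Assumptions taken from the argument above, not measured data:
shrink = 2                          # linear feature size halved
cores_per_area = shrink ** 2        # each core takes 1/4 the area -> 4x cores
per_core_speedup = 2                # assumed gain per core from the shrink
total = cores_per_area * per_core_speedup   # ~8x aggregate

doubling_period_years = 2           # "Moore's Law"-style expectation
doublings = math.log2(total)        # 8x = 3 doublings
print(f"one halving of feature size -> ~{total}x performance")
print(f"so halve feature size every {doublings * doubling_period_years:.0f} years "
      f"to double performance every {doubling_period_years} years")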



 

ribbon13

Diamond Member
Feb 1, 2005
9,343
0
0
Man... I think it's gonna be a bit slower shrinking after 65nm... The voltage loss is incredible.
 

SuperTool

Lifer
Jan 25, 2000
14,000
2
0
International Technology Roadmap for Semiconductors

It shows where the industry is going in the near and far term.
Keep in mind it's a roadmap, so it assumes necessary inventions will be made to achieve these goals, even though they have no idea how to get there now for many of the items.
It's kind of a self-fulfilling prophecy. They set the roadmap, and then the semiconductor manufacturing companies do the research to try to keep up with it.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: walla
Originally posted by: Genx87
They are still shrinking the transistor size.

65nm chips should show up by the end of the year for Intel.

This allows you to put more cores on the same die.

Multi-core is probably where the money will start to be, along with compiler optimizations to take advantage of the multiple cores.

Right, right. And the number of cores for a given chip area will scale quadratically with the reduction in transistor size and circuit area.

So if we want 4 cores within a constant area, we need to reduce transistor size by half. Theoretically this would lead to about an 8-fold performance increase (a 2x speedup for each core, times 2^2 = 4x as many cores). If we were to expect a "Moore's Law" type of performance increase, I estimate that you'd have to halve transistor sizes every 6 years to maintain that rate (if we double performance every two years).

When I think about it that way, multi-core architectures do seem much more viable. Of course the question becomes how efficiently computer architects and programmers can expose the potential gain in performance due to the parallelism of multi-core architectures.

There's another opportunity too, that deserves mention.

There is a huge untapped performance potential for software, from multicores. I understand somewhat how to program for that, and I don't really see a problem for software to adapt. And once demanding software starts to scale their performance with number of cores, a new opportunity presents itself:

Reducing the cores. What if we can keep 80% of the core performance on 30% of the transistors? On the same die, with more and smaller cores, we would get 2½ times the performance. Today's CPUs are pretty 'bloated' with logic, just to squeeze out a few % of additional performance.

And here's yet another opportunity:
Yield on larger dies improves if you just disable the cores that don't work. So larger dies, incorporating more but smaller cores, become more economically viable. There's also a large gain in power consumption to be had from a slight downclocking.

Once software performance starts to scale close to geometrically with the number of cores, lots of things that used to work against us (the dramatic growth of power/heat, the number of transistors spent for diminishing performance returns, and the diminishing yield with die size) suddenly start to work for us instead. Small sacrifices result in a rich harvest of cores.
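
To put rough numbers on the smaller-cores and yield arguments, here's a quick sketch. The defect density, core area, and the 80%/30% split are made-up illustrative values, and the yield model is a simple Poisson/binomial approximation:

import math

# 1) Smaller cores: assume 80% of the performance on 30% of the transistors.
print(f"same area, smaller cores: ~{0.8 / 0.3:.1f}x aggregate throughput")

# 2) Yield with redundant cores, using a simple Poisson defect model.
defect_density = 0.5     # defects per cm^2 (assumed)
core_area = 0.25         # cm^2 per core (assumed)
p_good = math.exp(-defect_density * core_area)   # chance a single core is clean

def die_yield(n_cores, n_needed):
    # probability that at least n_needed of n_cores are defect-free
    return sum(math.comb(n_cores, k) * p_good**k * (1 - p_good)**(n_cores - k)
               for k in range(n_needed, n_cores + 1))

print(f"4-core die, all 4 must work:    {die_yield(4, 4):.0%}")
print(f"8-core die, ship it with any 6: {die_yield(8, 6):.0%}")

With those made-up numbers, the bigger die sold as a 6-of-8-core part yields far better than the smaller die that has to be perfect - which is the "just disable the broken cores" effect.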
 

RaynorWolfcastle

Diamond Member
Feb 8, 2001
8,968
16
81
Originally posted by: Vee
There's another opportunity too, that deserves mention.

There is a huge untapped performance potential for software, from multicores. I understand somewhat how to program for that, and I don't really see a problem for software to adapt. And once demanding software starts to scale their performance with number of cores, a new opportunity presents itself:

Reducing the cores. What if we can keep 80% of the core performance on 30% of the transistors? On the same die, with more and smaller cores, we would get 2½ times the performance. Today's CPUs are pretty 'bloated' with logic, just to squeeze out a few % of additional performance.

And here's yet another opportunity:
Yield on larger dies improves if you just disable the cores that don't work. So larger dies, incorporating more but smaller cores, become more economically viable. There's also a large gain in power consumption to be had from a slight downclocking.

Once software performance starts to scale close to geometrically with the number of cores, lots of things that used to work against us (the dramatic growth of power/heat, the number of transistors spent for diminishing performance returns, and the diminishing yield with die size) suddenly start to work for us instead. Small sacrifices result in a rich harvest of cores.

This only works to a certain extent, and even then only on certain datatypes. That is, if you're processing video/images, the work mostly involves applying the same operations to large datasets, and that lends itself well to parallel processing. On the other hand, something like AI needs fast serial processing since it's branchy by nature.

As for optical computing, I think the main problem is that there is no such thing as cheap "optical memory" that can be generated with the kind of ease you have with electronics.
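
The standard way to quantify "only to a certain extent" is Amdahl's law. A quick sketch, with made-up serial fractions:

def amdahl_speedup(p, n):
    # p = fraction of the work that parallelizes perfectly, n = number of cores
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.95, 0.75, 0.50):        # assumed parallel fractions
    line = ", ".join(f"{n} cores: {amdahl_speedup(p, n):.2f}x" for n in (2, 4, 8, 64))
    print(f"parallel fraction {p:.0%} -> {line}")

No matter how many cores you add, the speedup is capped at 1/(1-p), so a program that is half serial never gets past 2x.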
 

RelaxTheMind

Platinum Member
Oct 15, 2002
2,245
0
76
I just think a big part of the computing community is suffering from technology anxiety and getting the wrong idea from the cancellation of the 4 GHz+ CPUs. We are just reaching one of the many plateaus we have had in the past and will have in the future. It's not ending, it's just taking a different direction.

IMHO I'd rather they take their time with stability and iron out flaws before putting it on the market, instead of deploying half-featured beta versions that barely have any software to show off their benefits. I never wanted to be a guinea pig... or at least I would want to get paid to be one. ::cough-SLI-cough-64bit-cough::

I think we will have a better view as soon as dual cores lose the marketing code names and become mainstream, so we can figure out which direction they went on the good ol' CPU roadmap most of us have probably seen. I am just pondering what AMD has up their sleeve.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: walla
Originally posted by: Vee
Originally posted by: walla
Of course, a simple solution to increasing performance without needing to scale is multiprocessing: multi-core chips. Theoretically, two processors stuck on one chip can yield twice the performance of just one, and that is a much simpler solution than improving single-core performance. However, this can only be seen as a temporary solution to sustain performance expectations in the computer market.

On the contrary. I think this is the long-term, highly viable solution to sustain performance expectations. It's just going to get better and better.
It's all in the software model. Almost anything that is really performance-craving can be made to go faster on multicores.


That may be. However, if the core sizes aren't shrinking, and we are sticking 2, 4, 8 cores on one chip, chip sizes will become prohibitively large and expensive. Don't you think? But perhaps this is sustainable for the long term. Time will tell.

The silicon itself is not that expensive - what costs the most are the big changes required every couple of years to shrink the process. A chip's price is not solely dependent on its die area, as you might know. The original Pentium was introduced at a price close to 1000 USD, and eventually it reached something like 50 USD (yes, after a core shrink). I think 100 USD would have covered its cost with generosity. So a 4-core version would have been some 500 USD, or half its price at the time of introduction.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: RaynorWolfcastle
Originally posted by: Vee
There's another opportunity too, that deserves mention.

There is a huge untapped performance potential for software, from multicores. I understand somewhat how to program for that, and I don't really see a problem for software to adapt. And once demanding software starts to scale their performance with number of cores, a new opportunity presents itself:

Reducing the cores. What if we can keep 80% of the core performance on 30% of the transistors? On the same die, with more and smaller cores, we would get 2½ times the performance. Today's CPUs are pretty 'bloated' with logic, just to squeeze out a few % of additional performance.

And here's yet another opportunity:
Yield on larger dies improves if you just disable the cores that don't work. So larger dies, incorporating more but smaller cores, become more economically viable. There's also a large gain in power consumption to be had from a slight downclocking.

Once software performance starts to scale close to geometrically with the number of cores, lots of things that used to work against us (the dramatic growth of power/heat, the number of transistors spent for diminishing performance returns, and the diminishing yield with die size) suddenly start to work for us instead. Small sacrifices result in a rich harvest of cores.

This only works to a certain extent, and even then only on certain datatypes. That is, if you're processing video/images, the work mostly involves applying the same operations to large datasets, and that lends itself well to parallel processing. On the other hand, something like AI needs fast serial processing since it's branchy by nature.

As for optical computing, I think the main problem is that there is no such thing as cheap "optical memory" that can be generated with the kind of ease you have with electronics.

I disagree about the AI - intelligence works very well on slow, parallel processing. Neurons have a switching time (from one state to another) of (maybe) milliseconds. But there are billions of them.
 

Leper Messiah

Banned
Dec 13, 2004
7,973
8
0
Originally posted by: walla
Originally posted by: Genx87
They are still shrinking the transistor size.

65nm chips should show up by the end of the year for Intel.

This allows you to put more cores on the same die.

Multi-core is probably where the money will start to be, along with compiler optimizations to take advantage of the multiple cores.

Right, right. And the number of cores for a given chip area will scale quadratically with the reduction in transistor size and circuit area.

So if we want 4 cores within a constant area, we need to reduce transistor size by half. Theoretically this would lead to about an 8-fold performance increase (a 2x speedup for each core, times 2^2 = 4x as many cores). If we were to expect a "Moore's Law" type of performance increase, I estimate that you'd have to halve transistor sizes every 6 years to maintain that rate (if we double performance every two years).

When I think about it that way, multi-core architectures do seem much more viable. Of course the question becomes how efficiently computer architects and programmers can expose the potential gain in performance due to the parallelism of multi-core architectures.


We've kinda been doing the every-six-years thing, at least recently. Six years ago was, what, the PIII (Tualatin core)? Those were .25 or .18 micron, something like that. Now we've got .09 out and .065 coming rather soon.

However, we're going to hit a wall with silicon eventually. We can only make the transistors so narrow, and after that, we're going to have to find a new way to increase performance. I believe that hard disk subsystems are the next thing we really need to change. While everything else has been tooling along at Moore's Law rates of increase, HDDs have maybe doubled in performance in the last 5 years.
 

RichUK

Lifer
Feb 14, 2005
10,341
678
126
Have you guys not seen the new chip that Sony and IBM have produced? It has mega potential, and I think this might be the future for a little while at least, over the current x86 procs as it were.... Plus these chips can be bunched together to improve performance, much like dual procs, but they actually work as one as opposed to just working alongside each other.

EDIT: didn't mention that this is Cell
 

RaynorWolfcastle

Diamond Member
Feb 8, 2001
8,968
16
81
Originally posted by: Calin
I disagree about the AI - intelligence works very well on slow, parallel processing. Neurons have a switching time (from one state to another) of (maybe) milliseconds. But there are billions of them.

It depends on the AI model you use. I was referring to a fairly simple one; neural networks (which is what you are talking about) are a different approach that lends itself much better to parallelization.

 

complacent

Banned
Dec 22, 2004
191
0
0
Moore's law is often misquoted, and it has been misquoted here several times. Moore did not state that speed would double every 18 months, but rather that the complexity of integrated circuits doubles every 18 months, i.e. the # of transistors doubles every 18 months. Please stop using Moore's law as a statement about speed, because as we all know, complexity != speed.

On that note, the next big push in computing, especially in supercomputing, is parallel programming. This has been stated several times, but I would like to point out that 95% of the software most of us run is not written with threads. Can it be? Sure. In games, individual characters can be threads, the AI can be a thread, the physics can be a thread. In MS Word, your spell checker, text editor, picture editor, thesaurus, etc. can each be a thread. Parallel programming is the paradigm now just as much as OO was the paradigm years and years ago.
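
As a toy sketch of that kind of decomposition (the task names are hypothetical, and Python threads are used only to show the shape of the idea - in CPython, truly CPU-bound work would use processes instead because of the GIL):

import threading
import time

# Hypothetical, independent background tasks in a word-processor-like app.
def spell_check(doc):
    time.sleep(0.1)                 # stand-in for real work
    print("spell check done")

def reflow_layout(doc):
    time.sleep(0.1)
    print("layout reflow done")

def autosave(doc):
    time.sleep(0.1)
    print("autosave done")

doc = "the document being edited"
threads = [threading.Thread(target=task, args=(doc,))
           for task in (spell_check, reflow_layout, autosave)]
for t in threads:
    t.start()                       # the three tasks run concurrently
for t in threads:
    t.join()                        # wait for all of them before continuing
print("all background tasks finished")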
 

ThatWasFat

Member
Dec 15, 2001
93
0
0
Well, if the eventual goal is AI (artificial intelligence), I would think that a model we would try to emulate is how our own intelligence came into being. Our neurons switch at an incredibly slow rate, about once per 5-7 milliseconds. But we have SO many of them. We also don't quite understand the algorithms the brain uses to compute.

We have a concept that it works massively in parallel, but it's not definite.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: RaynorWolfcastle
Originally posted by: Vee

...Once software performance starts to scale close to geometrically with the number of cores, lots of things that used to work against us (the dramatic growth of power/heat, the number of transistors spent for diminishing performance returns, and the diminishing yield with die size) suddenly start to work for us instead. Small sacrifices result in a rich harvest of cores.

This only works to a certain extent, and even then only on certain datatypes. That is, if you're processing video/images, the work mostly involves applying the same operations to large datasets, and that lends itself well to parallel processing. On the other hand, something like AI needs fast serial processing since it's branchy by nature.

As for optical computing, I think the main problem is that there is no such thing as cheap "optical memory" that can be generated with the kind of ease you have with electronics.

"This only works to a certain extent and even then only on certain datatypes."

I really have to disagree with the gist of that statement. I have a hard time to think of anything performance critical that cannot be distributed on multiple threads.
"Branchy" or not has not much to do with this. You can afford to be much more optimistic about multicores.

As for optical computing, it's way overhyped by media folks who don't understand the details - "speed of light" and all that drivel. The switching speed of light logic can conceivably be utterly awesome. So for that reason, some custom logic for very specific purposes could achieve insane performance, like filters and other communications processing. But for a general CPU, I believe current transistor-logic consumer CPUs are already beyond what is even theoretically possible with optical logic.
Physical size and the speed of light will mean that signals propagate through light logic slower than through silicon, despite the slower switching speed of transistors.
And you're quite right about memory.

I suppose I could be wrong, of course, and there could come some conceptual logic-architecture breakthrough that lets the switching speed of light be harvested for general computing. I don't really believe that, though. And even if so, I would think it would be a dead end eventually. The future lies in being smaller than light can ever be.
We already are. We have already made molecular semiconductors and connected and tested them successfully. The challenge is building large circuitry with something other than current lithographic integration. It's still a smaller challenge than an optical CPU, I believe.

More immediately, the key to CPU performance lies in the memory interface.
There are some possible ways to achieve a breakthrough here: connecting the CPU directly to hundreds of MB of RAM, through hundreds of thousands of connections, by placing a RAM die directly on top of (or under) the CPU die. "Level 4" cache?
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
There are things that just cannot be done in parallel, or on multiple cores.
Some friends of mine are involved in the development of what are known as "multi-user detectors" (MUDs), which are used in telecom (base stations etc.). A MUD basically solves a system of equations over and over again very fast; the problem is that there is no efficient algorithm that works on parallel processors (there is no efficient way to divide the problem into "parts"). I should point out that the algorithm is always implemented in hardware using custom-made circuits (that are not programmable), so there is no room for software optimization. Hence you need a very fast serial "processor" that crunches numbers at something like 60-100 Gb/s in real time.

There is no way to build a fast enough MUD using silicon, so III-V materials have been used for several years; the problem is that the bandwidth is going up (3G and soon 4G mobile phones) and soon even III-V materials will be too slow.
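
As an abstract illustration of why that kind of workload resists splitting (this is a generic fixed-point iteration with made-up numbers, not the actual MUD algorithm): each step needs the full result of the previous one, so extra cores cannot shorten the chain.

import numpy as np

rng = np.random.default_rng(0)
n = 8
M = rng.normal(size=(n, n))
A = np.eye(n) + 0.05 * (M + M.T)    # made-up, well-conditioned system
b = rng.normal(size=n)

x = np.zeros(n)
for _ in range(100):
    # loop-carried dependency: x_{k+1} is computed from x_k, so the 100
    # iterations cannot be handed to 100 cores; at best each step's
    # matrix-vector product could be parallelized internally.
    x = x + 0.5 * (b - A @ x)       # simple Richardson iteration toward A x = b
print("residual:", np.linalg.norm(A @ x - b))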
 

RaynorWolfcastle

Diamond Member
Feb 8, 2001
8,968
16
81
Originally posted by: Vee
"This only works to a certain extent and even then only on certain datatypes."

I really have to disagree with the gist of that statement. I have a hard time to think of anything performance critical that cannot be distributed on multiple threads.
"Branchy" or not has not much to do with this. You can afford to be much more optimistic about multicores.
Oh, I'm not saying multicores are useless, I'm just saying not all problems are easily parallelizable; some aren't parallelizable at all. I think certain physical problems where you have to rely on repeated small successive displacements are like this. I think one example was several charged particles in close proximity.


As for optical computing, it's way overhyped by media folks who don't understand the details - "speed of light" and all that drivel. The switching speed of light logic can conceivably be utterly awesome. So for that reason, some custom logic for very specific purposes could achieve insane performance, like filters and other communications processing. But for a general CPU, I believe current transistor-logic consumer CPUs are already beyond what is even theoretically possible with optical logic.
Physical size and the speed of light will mean that signals propagate through light logic slower than through silicon, despite the slower switching speed of transistors.
And you're quite right about memory.

I suppose I could be wrong, of course, and there could come some conceptual logic-architecture breakthrough that lets the switching speed of light be harvested for general computing. I don't really believe that, though. And even if so, I would think it would be a dead end eventually. The future lies in being smaller than light can ever be.
We already are. We have already made molecular semiconductors and connected and tested them successfully. The challenge is building large circuitry with something other than current lithographic integration. It's still a smaller challenge than an optical CPU, I believe.

Sort of... I think at this point quantum-well and quantum-dot lasers are showing some promise, but there's nothing even close to the level of integration of modern CMOS processes out there. As a side note, there's a good chance that within a few years, board-level communications will become optical. With increasing bus speeds it becomes a pain to work with reflections that are all over the place. VCSELs (vertical-cavity surface-emitting lasers) have really come a long way, and waveguiding in PCBs is definitely feasible. Look for optical interconnects to gain momentum; they'll probably show up first as a linking fabric in supercomputers.

There's a lot of interesting research being done in photonics right now.
 