Will CPUs and GPUs eventually converge?

her34

Senior member
Dec 4, 2004
581
1
81
CPUs are becoming more parallel, and GPUs are becoming more programmable. They seem to be heading toward the same destination, but from different directions.

In 10 years, will we just have one chip? Instead of buying a GPU card for better graphics, would you buy a second or third CPU?
 

BrownTown

Diamond Member
Dec 1, 2005
5,314
1
0
No, it is always a huge advantage to have a specialized graphics processor, since you can optimize it specifically for graphics, whereas a CPU has to be decent at everything. If anything of the sort is to occur, it would be having a CPU and GPU together on one die, but they would still be completely different in architecture, just very close to each other so they can communicate better.
 

alpha88

Senior member
Dec 29, 2000
877
0
76
I think so.

I could see something like an Opteron paired with Cell-style processors on the same die. The general-purpose core can handle the "CPU"-type tasks while the massively parallel units handle video and physics.
 

Loki726

Senior member
Dec 27, 2003
228
0
0
Keep in mind that fewer than 10 years ago, all graphics processing was done on the CPU. Because CPUs were not optimized for the graphics work that was needed, that work split off onto dedicated GPUs. I'm not saying convergence is impossible, but it is interesting that if CPUs and GPUs do converge into one chip, we will be right back where we started.
 

joshd

Junior Member
Apr 30, 2006
11
0
0
Well yeah, it would be inefficient: by buying a second CPU/GPU for graphics, you are buying another processor that can handle everything, so surely much of the second chip's capability would go to waste. I think they will stay separate entities, even if they do end up very near each other, possibly on the same PCB or even the same die.
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Well, as the number of cores on a processor continues to multiply, it seems logical to me that graphics could yet again be done on the processor.
 

BucsMAN3K

Member
May 14, 2006
126
0
0
Well, considering that CPU and GPU designers are still trying to cram as many transistors onto their dies as they can, it seems to me the only way they can converge is either a leveling-out in CPU processing demand, freeing up transistors for graphics, or an advance in nanotechnology so that you can fit all those extra transistors.
 

sdifox

No Lifer
Sep 30, 2005
96,164
15,775
126
I prefer two separate subsystems simply because it is modular: replace the CPU, the GPU, or both at your own pace. But there is some software work being done to tap into the FLOPS of the GPU, so who knows.
 

avi85

Senior member
Apr 24, 2006
988
0
0
What I want to know is why they don't make video cards modular, i.e. you buy, say, a GeForce 7-series board, then decide which GPU you want on it (7300, 7600, etc.) and how much RAM you need. That way you could start off with a 7600 and upgrade to a 7800 GT later when you have more cash, without replacing the whole card. The same goes for RAM: just pop in another 512 MB when you can afford it.

Also, this way us DIYers would have more to play with.
 

BucsMAN3K

Member
May 14, 2006
126
0
0
Originally posted by: avi85
What I want to know is why they don't make video cards modular, i.e. you buy, say, a GeForce 7-series board, then decide which GPU you want on it (7300, 7600, etc.) and how much RAM you need. That way you could start off with a 7600 and upgrade to a 7800 GT later when you have more cash, without replacing the whole card. The same goes for RAM: just pop in another 512 MB when you can afford it.

Also, this way us DIYers would have more to play with.

You know, looking at how my graphics card is half the size of my motherboard... I wouldn't be surprised if eventually the graphics card is the mainboard and everything else is just built onto it. All hail the power of graphics.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
14
81
Originally posted by: avi85
What I want to know is why they don't make video cards modular, i.e. you buy, say, a GeForce 7-series board, then decide which GPU you want on it (7300, 7600, etc.) and how much RAM you need. That way you could start off with a 7600 and upgrade to a 7800 GT later when you have more cash, without replacing the whole card. The same goes for RAM: just pop in another 512 MB when you can afford it.

Because the designs are pushing the limits of the connections between the GPU and the RAM.

Having sockets or slots drastically limits the performance of the connection compared to soldering the chips as close together as possible.

That's why motherboards are stuck at 800 MHz, 64-bit RAM, while graphics cards run at 1600 MHz, 256-bit RAM.
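To put rough numbers on those buses (a back-of-envelope sketch, assuming the quoted figures are effective data rates and the full theoretical bandwidth is reached):

```latex
\[
\mathrm{BW} = \frac{f_{\mathrm{eff}} \times \text{bus width}}{8}, \qquad
\mathrm{BW_{mobo}} = \frac{800\,\mathrm{MHz} \times 64}{8} = 6.4\ \mathrm{GB/s}, \qquad
\mathrm{BW_{gfx}} = \frac{1600\,\mathrm{MHz} \times 256}{8} = 51.2\ \mathrm{GB/s}
\]
```

So the graphics card gets roughly eight times the memory bandwidth, which is exactly what the soldered-down, wide-bus layout buys you.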

 

Future Shock

Senior member
Aug 28, 2005
968
0
0
Originally posted by: Mark R
Originally posted by: avi85
What I want to know is why they don't make video cards modular, i.e. you buy, say, a GeForce 7-series board, then decide which GPU you want on it (7300, 7600, etc.) and how much RAM you need. That way you could start off with a 7600 and upgrade to a 7800 GT later when you have more cash, without replacing the whole card. The same goes for RAM: just pop in another 512 MB when you can afford it.

Because the designs are pushing the limits of the connections between the GPU and the RAM.

Having sockets or slots drastically limits the performance of the connection compared to soldering the chips as close together as possible.

That's why motherboards are stuck at 800 MHz, 64-bit RAM, while graphics cards run at 1600 MHz, 256-bit RAM.

Just to add: not only are the designs pushing the limits, each new generation of GPU requires a newer specification of graphics RAM, whether in clock rate, DDR rate, or bus width. New GPUs simply require more memory bandwidth to pump out pixels at their stated rates, and the design points of a GPU (pixels per second, polygon fills per second, etc.) are closely tied to where the designer knows memory will be 12 months after starting his design; he's usually using engineering samples of those memory chips on his prototype GPU cards.

So, even if you could re-use the memory, it would only serve to severely limit your shiny new GPU...

Future Shock
 

her34

Senior member
Dec 4, 2004
581
1
81
If you had a 16-core CPU, wouldn't that be able to match today's video cards?
 

Mark R

Diamond Member
Oct 9, 1999
8,513
14
81
Originally posted by: her34
If you had a 16-core CPU, wouldn't that be able to match today's video cards?

Possibly.

But a modern graphics card might have 48 processor cores, each one capable of working on 4 words of data simultaneously (kind of like an enhanced version of SSE).

If you could somehow get sixteen 4 GHz cores onto one die, it would certainly give very good competition to a 48-core 650 MHz GPU, assuming you have an FSB and RAM fast enough to keep up with all those cores.
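To make the "4 words at once" idea concrete, here is a minimal sketch using x86 SSE intrinsics; the function scale_add and its arguments are just illustrative, and a real shader pipeline is of course wired very differently:

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* dst[i] = a[i] * s + b[i], processing 4 floats per iteration --
 * the same kind of 4-wide operation a GPU shader unit performs
 * per clock. Assumes n is a multiple of 4 and the arrays are
 * 16-byte aligned. */
void scale_add(float *dst, const float *a, const float *b, float s, int n)
{
    __m128 vs = _mm_set1_ps(s);             /* broadcast s to 4 lanes */
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);     /* load 4 floats at once  */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(dst + i,               /* store 4 results        */
                     _mm_add_ps(_mm_mul_ps(va, vs), vb));
    }
}
```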
 

sdifox

No Lifer
Sep 30, 2005
96,164
15,775
126
Originally posted by: her34
if you had a 16 core cpu, wouldn't that be able to match today's video cards?


Assuming 25 W max per core, that would be 400 W... time to buy some stock in liquid nitrogen plants. I don't even want to get into the mess of communicating with everything outside the chip.
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
A real possibility, of course, is that we will have one PU for each job: a disk PU for disk stuff, a physics PU for physics, a graphics PU, an AI PU, a math PU, a net PU for everything networking, a security (encryption) PU, a sound PU, etc., etc. Maybe, like a Cell processor, just bunch it all up into one monster chip that is actually 50 little ones together. Would be fun. Hope they get some new Borg technology by then. Maybe not something as advanced as Borg, but at least Martian computer technology to make it all work.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Would make for a huge mess in the marketplace, all practical manufacturing problems aside.

Joe Schmoe wants a basic $300 computer for checking his mail and stuff, so what does he get?
A Celeron with the weakest possible GPU on-die?
Then his kids wanna play WoW all of a sudden, so now he has to buy a new CGPU (or whatever they'd be called) with a decent graphics core, so he gets a Core Medium.
Then your average AT'er wants his new uber-l337 ultra gaming rig, so he uses his daddy's VISA to buy the best CGPU out there, which happens to be a Core Extreme.
Me, on the other hand, I enjoy the occasional game, so I might want something in between the Core Medium and Core Extreme, say the Core Prettygood.

In the end, you'd have a boatload of different CPUs, not only of different speeds but with completely different cores, and manufacturing that many different cores would be prohibitively expensive.
Just looking at nVidia, we have what, 4-5 different price/performance points, ranging from stuff that's barely any better than integrated graphics to the $700 SLI-on-a-board cards, and those in turn can be SLI'd, so you have yet another price point.
Combine those with the already existing price points for CPUs (different speeds, cache configurations, single/dual/eventually quad core, etc.) and you'd have so many processor lines and price points that even the hard-core enthusiasts would have trouble keeping up; Joe Schmoe can pretty much just forget about it.
 

Born2bwire

Diamond Member
Oct 28, 2005
9,840
6
71
I do not think that they will converge. We may start seeing them on the same die (probably for laptops and other mobile systems), but given the highly specialized nature of the GPU and the type of calculations we do on it compared to what we use a more generalized CPU for, there really isn't a way for them to merge into one single processing unit.

It is true that people are starting to run general software on GPUs. I just read a paper from December in an IEEE journal about using GPUs to run computational electromagnetics code, and the reason for doing it is to take advantage of the GPU's optimized math: in the article they got a 40x speedup running 2D FDTD on a Radeon X800 compared to an AMD 3500+. But that is because FDTD can be run in massive parallel and uses only simple multiplications and additions, which is something a graphics card is wonderfully suited for. For algorithms that cannot be parallelized, you won't see such a phenomenal speedup, if any.
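For anyone who hasn't seen FDTD, the inner loop really is that simple. A minimal sketch of one field update follows; fdtd_step and the coefficients ca/cb are illustrative (they're assumed to fold in the material constants and time step), and a real solver also updates the magnetic fields and handles boundaries:

```c
/* One 2D FDTD (TMz) electric-field update, heavily simplified.
 * Every cell's update is an independent multiply-add over its
 * neighbours, which is why the algorithm maps so well onto a
 * massively parallel GPU. */
#define NX 512
#define NY 512

void fdtd_step(float ez[NX][NY], float hx[NX][NY], float hy[NX][NY],
               float ca, float cb)
{
    /* Each (i, j) below could be handed to its own GPU thread. */
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            ez[i][j] = ca * ez[i][j]
                     + cb * ((hy[i][j] - hy[i-1][j])
                           - (hx[i][j] - hx[i][j-1]));
    /* ...followed by similar independent updates for hx and hy. */
}
```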
 

her34

Senior member
Dec 4, 2004
581
1
81
Originally posted by: Sunner
Would make for a huge mess in the marketplace, all practical manufacturing problems aside.

Joe Schmoe wants a basic $300 computer for checking his mail and stuff, so what does he get?
A Celeron with the weakest possible GPU on-die?
Then his kids wanna play WoW all of a sudden, so now he has to buy a new CGPU (or whatever they'd be called) with a decent graphics core, so he gets a Core Medium.
Then your average AT'er wants his new uber-l337 ultra gaming rig, so he uses his daddy's VISA to buy the best CGPU out there, which happens to be a Core Extreme.
Me, on the other hand, I enjoy the occasional game, so I might want something in between the Core Medium and Core Extreme, say the Core Prettygood.

In the end, you'd have a boatload of different CPUs, not only of different speeds but with completely different cores, and manufacturing that many different cores would be prohibitively expensive.
Just looking at nVidia, we have what, 4-5 different price/performance points, ranging from stuff that's barely any better than integrated graphics to the $700 SLI-on-a-board cards, and those in turn can be SLI'd, so you have yet another price point.
Combine those with the already existing price points for CPUs (different speeds, cache configurations, single/dual/eventually quad core, etc.) and you'd have so many processor lines and price points that even the hard-core enthusiasts would have trouble keeping up; Joe Schmoe can pretty much just forget about it.

I actually think it would simplify things. There wouldn't be a problem of matching CPU to GPU performance. A video card would only be needed for the connectors and outputting the signal, or that could be integrated into the motherboard so you wouldn't need a card at all.

I see a single chip doing all the work, not two chips placed on the same die, so getting better gaming performance would mean just buying a CPU with more cores, or faster cores.

Dual-socket motherboards would exist for enthusiast gamers. Next year 4-core CPUs come out, so dual socket would make 8-core systems possible. Maybe in the future the gap between CPU and GPU will lessen.

When you buy a $500 video card, it only improves gaming. If instead that money were spent on a better CPU, or a second CPU, it would improve all aspects of computing. It's the same reasoning why some people are reluctant to buy a physics processor: a dual-core CPU or a second video card gives more overall use, even if its physics performance isn't as good as a dedicated physics processor's.
 

kpb

Senior member
Oct 18, 2001
252
0
0
Will CPUs and GPUs converge? I'd have to say yes and no.

On one hand, dedicated specialized units like GPUs will always be faster in the areas they are designed for than more general units like CPUs; there really isn't any way around this. Combine that with the innately parallel nature of graphics and you can, for all practical purposes, scale GPUs infinitely by just adding more units doing more of the same thing, which is simpler than making a single unit faster and also scales automatically with process shrinks. You're seeing this sort of thing in CPUs with the move toward dual- and quad-core, but GPUs are already way ahead: the 7800 GTX and 7900 GT/GTX have 6 groups that each process 4 things at once, for 24 things in flight, and the X1900 GT/XTX works on up to 48 pixel threads at once. That is much more parallel than Intel or AMD expect to be for a long time, and NVIDIA/ATI will just continue along the path they've been on, handling more and more things at once. Since a GPU will always be faster than a CPU at this, GPUs will stay around unless we totally hit a wall where it becomes relatively trivial to render high-res photorealistic 3D in real time, and I don't see us hitting that wall any time soon.
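The "just add more units" point is easiest to see in code. A hedged sketch: the function shade below is hypothetical, with OpenMP threads standing in for GPU pipelines, but because every pixel is independent, throughput scales by simply adding lanes:

```c
#include <omp.h>

/* Trivially parallel per-pixel pass: no pixel depends on any other,
 * so doubling the number of workers roughly doubles throughput.
 * (Illustrative stand-in for a GPU's many identical pixel units.) */
void shade(unsigned int *dst, const unsigned int *src, int n_pixels)
{
    #pragma omp parallel for
    for (int i = 0; i < n_pixels; i++)
        dst[i] = src[i] ^ 0x00FFFFFFu;   /* toy per-pixel op: invert RGB */
}
```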

On the other hand, minimal functionality for basic email, web surfing, etc. is getting easier and easier to provide, as integrated video has demonstrated. For that kind of basic functionality I think we can and will see a "GPU" integrated into the CPU. AMD has already moved the memory controller into the processor, and their design is modular enough that they could definitely add a basic video module to the chip over coherent HyperTransport, with some sort of PCI-E attached device just putting the already-rendered video on a monitor. They've talked about coprocessors already, primarily for the server space so far, but it could definitely be used for a low-cost PC too once they get their 65 nm process going.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: her34
Originally posted by: Sunner
Would make for a huge mess in the marketplace, all practical manufacturing problems aside.

Joe Schmoe wants a basic $300 computer for checking his mail and stuff, so what does he get?
A Celeron with the weakest possible GPU on-die?
Then his kids wanna play WoW all of a sudden, so now he has to buy a new CGPU (or whatever they'd be called) with a decent graphics core, so he gets a Core Medium.
Then your average AT'er wants his new uber-l337 ultra gaming rig, so he uses his daddy's VISA to buy the best CGPU out there, which happens to be a Core Extreme.
Me, on the other hand, I enjoy the occasional game, so I might want something in between the Core Medium and Core Extreme, say the Core Prettygood.

In the end, you'd have a boatload of different CPUs, not only of different speeds but with completely different cores, and manufacturing that many different cores would be prohibitively expensive.
Just looking at nVidia, we have what, 4-5 different price/performance points, ranging from stuff that's barely any better than integrated graphics to the $700 SLI-on-a-board cards, and those in turn can be SLI'd, so you have yet another price point.
Combine those with the already existing price points for CPUs (different speeds, cache configurations, single/dual/eventually quad core, etc.) and you'd have so many processor lines and price points that even the hard-core enthusiasts would have trouble keeping up; Joe Schmoe can pretty much just forget about it.

I actually think it would simplify things. There wouldn't be a problem of matching CPU to GPU performance. A video card would only be needed for the connectors and outputting the signal, or that could be integrated into the motherboard so you wouldn't need a card at all.

I see a single chip doing all the work, not two chips placed on the same die, so getting better gaming performance would mean just buying a CPU with more cores, or faster cores.

Dual-socket motherboards would exist for enthusiast gamers. Next year 4-core CPUs come out, so dual socket would make 8-core systems possible. Maybe in the future the gap between CPU and GPU will lessen.

When you buy a $500 video card, it only improves gaming. If instead that money were spent on a better CPU, or a second CPU, it would improve all aspects of computing. It's the same reasoning why some people are reluctant to buy a physics processor: a dual-core CPU or a second video card gives more overall use, even if its physics performance isn't as good as a dedicated physics processor's.

Problem is, you'll need specialized cores for the graphics part, be it on the same die or not.
Even your average $100 GPU is vastly superior to a general-purpose CPU in graphics performance, so just having a load of general-purpose cores wouldn't help.
 

orangat

Golden Member
Jun 7, 2004
1,579
0
0
It'll be hard for the CPU and GPU to converge, because it would probably double the die size.
But I foresee GPU/physics chips being integrated either on-die or on-board. The madness of shelling out $300 for a CPU, $300 for a video card, and $300 for memory (and probably another $300 for a physics card) is getting ridiculous when I can get a gaming console for $300.
 

Bobthelost

Diamond Member
Dec 1, 2005
4,360
0
0
The trend is not for convergence at all, but for specialisation and separation.

A CPU is a jack of all trades: it can do pretty much anything, but it's a bit crap at everything compared to dedicated hardware. You can improve performance in a task by making the CPU faster, or by redesigning the code so it can be broken into parts executed by different components: GPU, Core0, Core1, PPU, etc. The entire concept is based around splitting the problem into chunks and firing them off to different locations to be processed. In the case of dual core it's very crude, merely breaking off sections to be done at the same speed on a different core, while in GPU terms it's breaking off the task for a system that is designed to do that and that alone.

I'm more interested in the Sony CPU design: seven non-identical cores, each optimised for different tasks? Wonderful!

*goes to read up on the PS3 CPU to see if he's talking crap*
 

kpb

Senior member
Oct 18, 2001
252
0
0
Originally posted by: Bobthelost
I'm more interested in the Sony CPU design: seven non-identical cores, each optimised for different tasks? Wonderful!

*goes to read up on the PS3 CPU to see if he's talking crap*

Yeah, that would be a good idea.

The PS3's CPU has one relatively simple general-purpose core and 8 SPEs, which are roughly programmable DSPs. All 8 SPEs are identical, but they can each be running different code. The PS3 is planned to use only 7 of the SPEs, which improves yields by allowing one SPE to be defective. Ars Technica has some good articles if you want to read more about it.
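A rough sketch of that "identical units, different code" idea, with POSIX threads standing in for SPEs; the task functions are hypothetical placeholders, and real SPE dispatch works nothing like pthreads:

```c
#include <pthread.h>

/* Identical workers, each handed a different program to run --
 * loosely mimicking how the Cell's identical SPEs can each be
 * loaded with different code. */
typedef void (*task_fn)(void);

static void audio_task(void)   { /* ...mix sound here...    */ }
static void physics_task(void) { /* ...step physics here... */ }

static void *worker(void *arg)          /* same body for every worker */
{
    (*(task_fn *)arg)();                /* ...but a different program */
    return 0;
}

int main(void)
{
    task_fn jobs[] = { audio_task, physics_task };
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], 0, worker, &jobs[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], 0);
    return 0;
}
```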
 

Bobthelost

Diamond Member
Dec 1, 2005
4,360
0
0
Originally posted by: kpb
Originally posted by: Bobthelost
I'm more interested in the Sony CPU design: seven non-identical cores, each optimised for different tasks? Wonderful!

*goes to read up on the PS3 CPU to see if he's talking crap*

Yeah, that would be a good idea.

The PS3's CPU has one relatively simple general-purpose core and 8 SPEs, which are roughly programmable DSPs. All 8 SPEs are identical, but they can each be running different code. The PS3 is planned to use only 7 of the SPEs, which improves yields by allowing one SPE to be defective. Ars Technica has some good articles if you want to read more about it.

Ta. I'll check that out.
 