cpu & gpu convergence

hahher

Senior member
Jan 23, 2004
295
0
0

in an interview, Tim Sweeney mentioned this:

http://www.beyond3d.com/interviews/sweeney04/index.php?p=4

Finally, where do you think 3D hardware and CPU technology should be headed? Do you think we are likely to see 3D hardware taking over some of the functions of the CPU, going beyond rendering?

I think CPU's and GPU's are actually going to converge 10 years or so down the road. On the GPU side, you're seeing a slow march towards computational completeness. Once they achieve that, you'll see certain CPU algorithms that are amenable to highly parallel operations on largely constant datasets move to the GPU. On the other hand, the trend in CPU's is towards SMT/Hyperthreading and multi-core. The real difference then isn't in their capabilities, but their performance characteristics.

When a typical consumer CPU can run a large number of threads simultaneously, and a GPU can perform general computing work, will you really need both? A day will come when GPU's can compile and run C code, and CPU's can compile and run HLSL code -- though perhaps with significant performance disadvantages in each case. At that point, both the CPU guys and the GPU guys will need to do some soul searching!



some questions:

1) in general what do you guys think about this? agree/disagree, how will it happen, future outlook, etc

2) if you could run a multi-cpu or multi-core setup with today's best cpu (p4 or amd64), with each cpu acting as one pipeline of a video card, could this setup perform as well as a radeon 9800xt or nvidia 5950? if not, what's the best video card this setup would be on par with?
 

borealiss

Senior member
Jun 23, 2000
913
0
0
I'm not sure if we're going to see this type of convergence any time soon. I've talked with some of my coworkers about the next logical step in architecture and this seems to be very close to what would come next. The only problem is that cpu companies and gpu companies are not one and the same, so it's going to become a game of politics. That is, until a gpu or cpu manufacturer decides to become both, in which case the question of whose design to integrate becomes moot. Ultimately I think we're going to see some of the features of both creep into one another. The reconfigurability of gpu's has gotten the attention of cpu designers and vice versa. We're even at a point where gpu's have 64k icaches, something that used to be exclusive to cpu's.

I think one of the other limiting factors you're going to run into with merging something as complex as these two beasts is that the hardware complexity/die space would be huge on today's processes. Right now the industry is struggling to get 90nm out the door, although some companies have gotten it right. And even on that process there's already a concern about die space with cpu's alone, without integrating a gpu on top.

The memory architectures the two systems need are also very different, UMA designs aside. Graphics is always going to need a very expensive high-speed memory interface for pushing pixels and textures. Cpu's do not need that much memory bandwidth, because the nature of the datasets the two operate on is entirely different. Gpu datasets are very calculable, very predictable. Cpu's have to deal with branch mispredictions and working sets of code that are much bigger than the pixel shader programs present in gpu's today. Two very different beasts for two very different kinds of code. One reason gpu's can do so many more calculations than a 3ghz+ cpu is that they superscalar the crap out of it, adding as many pipelines as they want while only running at 500mhz or so. They don't have to deal with a misprediction, a bad prefetch, etc...

Unless the nature of cpu and gpu design changes quite drastically, I doubt you would see a union of the two to the extent that they would share core functional units such as schedulers and prefetchers.
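To put the dataset difference above in concrete terms, here's a toy C sketch (array names and sizes invented, nothing vendor-specific): gpu-style work is a short, branch-free loop over independent pixels, while typical cpu work is pointer chasing and data-dependent branching.

#include <stdio.h>

#define W 640
#define H 480

static float src[W * H];   /* "texture" input      */
static float dst[W * H];   /* "framebuffer" output */

/* GPU-style work: the same short operation applied to every pixel,
   perfectly predictable memory access, trivially split across pipelines. */
void shade_all(void)
{
    for (int i = 0; i < W * H; i++)
        dst[i] = src[i] * 0.5f + 0.25f;
}

/* CPU-style work: control flow depends on the data itself (pointer chasing,
   unpredictable branches), which is what schedulers and predictors exist for. */
int walk_list(const int *next, const int *value, int start)
{
    int sum = 0;
    for (int i = start; i != -1; i = next[i])
        sum += (value[i] > 0) ? value[i] : -value[i];
    return sum;
}

int main(void)
{
    shade_all();
    printf("%f\n", dst[0]);
    return 0;
}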
 

Pulsar

Diamond Member
Mar 3, 2003
5,224
306
126
It will be a long, long time (if ever).

The reasons are many, but here's the most important. Consumer processors aren't very stressed anymore. Why upgrade from a 1 GHz to a 3 GHz CPU? To run Windows XP?

Keeping the GPU modularized keeps the die size down and allows for individual component upgrades. For instance, acceptable sound performance has been available for years now, yet sound cards are still separate. Yes, you can find integrated sound on motherboards, but that's not the same as combining the processors.

It's the modularity of the PC that allows continual upgrading. If you want a fully integrated system, go buy a console like an Xbox. In fact, even consoles are moving AWAY from integrated functionality towards a more modular concept that can be upgraded more like a PC. It's become obvious that while the initial benefit of integration (superior speed and functionality) helps some, it's far more beneficial to modularize and rely on common command sets through systems like DirectX, OpenGL, etc.

I would expect something more along the lines of a separate MOTHERBOARD for video cards, with expandable RAM, various CPU supports, etc., far more than I would expect them to try to integrate everything onto the CPU.
 

MadRat

Lifer
Oct 14, 1999
11,944
264
126
When something like "Intel buys ATI" is in the news, then you'll see it happen quickly. I'm not so sure VIA isn't already working on it, with the intent of mating S3 graphics and VIA processor technology into a single unified package. They would probably be working on it not necessarily for the PC market, but for the embedded multimedia processor market.

I wonder how hard it would be for AMD and NVidia, since they are both "HT Consortium" members, to team up using HT links between an NVidia GPU and an AMD CPU? If it wasn't too complicated, then perhaps they could work together on a dual-core project based on shared CPU/GPU memory access. Doing it that way wouldn't necessarily be "convergence", but it would demonstrate how feasible such technology is.
 

mbhame

Junior Member
Jan 30, 2004
3
0
0
Reading the tentative specs on DirectX Next (the Longhorn release), there are many conceptually CPU-like changes that GPUs will go through - but they seem like baby steps towards being a bona fide "CPU" as we know it today. I also think Sweeney's quote is somewhat out of context (again referencing DirectX Next here), and it assumes the next 10 years of CPU/GPU advancement will look just like the past 10 years, which I don't see how anyone can count on.

That said, the argument can be made that almost ANY 'chip' in a PC *could* converge into the CPU. That sort of goes without saying. But you also drive up cost by putting more and more chips on one die - eventually making a mammoth of a chip - e.g. nVidia's choice to remove audio functionality from some of their new mobo chipsets...? But like LsDPulsar said - modularity is key.

*WHY* a GPU would converge into a CPU is an entirely different matter, and one that at this point in time I don't see fair reason to believe will happen (but what do *I* know) - the necessary memory subsystem alone would be a phenomenal achievement to produce at a consumer-accessible price.



 

rimshaker

Senior member
Dec 7, 2001
722
0
0
You know it'll happen... it just makes sense.

Same situation as when the math co-processor was finally integrated with the rest of the cpu over a decade ago (i.e. the 486).
 

MadRat

Lifer
Oct 14, 1999
11,944
264
126
Originally posted by: mbhame
*WHY* a GPU would converge into a CPU is an entirely different matter, and one that at this point in time I don't see fair reason to believe will happen (but what do *I* know) - the necessary memory subsystem alone would be a phenomenal achievement to produce at a consumer-accessible price.

Intel is selling P4EE's at $1000 and high-end P4C's at $400, and the leading-edge video cards usually bounce around $400-$500, so why not a $1000 high-end combination P4C/GPU card? They put $200 video cards out there one step below the bleeding edge, so why not a $300 mid-level combination card? Mating video cards to CPU's will probably be more of an "economy of scale" issue than a technical achievement. (The biggest hurdle would be the limited RAM found on a video card.) The GPU will probably need to run async to the CPU and be measured on pipelines, whereas the CPU will sell by raw MHz. An example of what I mean would be "P4C-3GHz-Nx16": P4C = CPU model, 3GHz = CPU rating, and Nx16 = GPU (with 16 pipelines?) model. Of course all this is theoretical.

 

Pudgygiant

Senior member
May 13, 2003
784
0
0
If the new GPU's could handle dnet, I'd be happy. Having a gpu 2/3 the speed of my processor seems a little redundant if I can't get full use out of it.
 

Brucmack

Junior Member
Oct 4, 2002
21
0
0
I think it's more likely that we'll just start seeing more and more non-video code running on GPUs, especially once PCI Express drastically improves the bandwidth from the GPU to the system. When this happens, I think it'll be beneficial to keep the cores separate, since it'll allow more parallelization.
 

Matthew Daws

Member
Oct 19, 1999
31
0
0
I like the comments mbhame made. I guess what Tim Sweeney is saying is that, on the hardware level, in the coming years CPUs and GPUs are going to start looking much the same. I don't think he is saying that they will start to be used for the same functions, or, in the extreme case, that we'll get one chip which is both a CPU and GPU. There are too many advantages to keeping them separate: direct access to memory, optimisation for the main tasks they perform, and just explicit parallelisation, as Brucmack points out.

In the much longer term, it's interesting to see what the RealStorm Engine people are up to. Their idea is that there are things, graphically, which cannot be done using polygons, and with CPUs getting so much quicker, it is slowly becoming possible to do real-time raytracing. If you have a very fast PC, it's well worth checking out the demo.
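To give a flavour of what that means for the CPU: every pixel of a raytraced frame needs at least one intersection test like the toy C routine below (my own sketch, nothing to do with RealStorm's actual code), and usually several per bounce, which is why it takes GHz-class CPUs to get anywhere near real time.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns distance along the ray to the nearest hit, or -1.0 for a miss.
   The ray direction is assumed to be normalized. */
double hit_sphere(vec3 origin, vec3 dir, vec3 center, double radius)
{
    vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0)
        return -1.0;
    return (-b - sqrt(disc)) / 2.0;
}

int main(void)
{
    vec3 eye = { 0, 0, 0 }, dir = { 0, 0, 1 }, center = { 0, 0, 5 };
    printf("hit at t = %f\n", hit_sphere(eye, dir, center, 1.0));   /* t = 4 */
    return 0;
}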

So I wonder if, in say five years, we'll see low-end systems shipping without 3D hardware, and just software (CPU) emulation. Probably not. Not because it can't be done, but just that GPUs are going to get cheaper and cheaper (think about what an nForce chipset can do, compared to the best 3D cards of five years ago).

I do think there will be special applications, like DNET, which might start to use the processing power of GPUs, and stuff like real-time raytracing that will shift some graphics work back to the CPU, but in general we'll see separate CPU and GPU for a while to come. But on consoles we might start to see a much blurrier setup: I think I am correct in saying that most next-generation consoles have multiple chips which are vaguely CPUs and vaguely GPUs.

Just a thought, --Matt
 

MadRat

Lifer
Oct 14, 1999
11,944
264
126
A hybrid CPU-GPU could be referred to as something like an MPU, multipurpose processor unit.

So if you went SMP with MPU's, would that benefit your graphics too? The former 3Dfx technology for running multiple graphics cores, now owned by NVidia, would come in handy for such a design, I'd have to think. If Intel made it they could call it the Pentium-V, the Pentium with Voodoo built in...
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
The GPU has a lot of specialized functionality that isn't present in the CPU, so no, a P4 for each GPU pipeline would not be near as fast.

The reverse is not nearly as severe. GPUs could evolve to be the next general purpose CPU. Mid-high end GPUs are more sophisticated than today's best CPUs.
 

hahher

Senior member
Jan 23, 2004
295
0
0
Originally posted by: glugglug
The GPU has a lot of specialized functionality that isn't present in the CPU, so no, a P4 for each GPU pipeline would not be near as fast.

The reverse is not nearly as severe. GPUs could evolve to be the next general purpose CPU. Mid-high end GPUs are more sophisticated than today's best CPUs.

so what's a good estimate of a "cpu per pipeline" graphics performance? today's 3ghz cpu = ? (geforce 3?)

 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
It maybe gets you the memory bandwidth, but a conventional CPU will totally suck at the math involved in all the texture mapping and lighting calcs.

This is one of the few applications where a Mac would beat modern PCs - AltiVec is a lot better suited than SSE for what you are suggesting.
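For what it's worth, here's roughly what the cpu side of that looks like with SSE - a hedged sketch with invented array names, computing four pixels' diffuse N.L lighting terms per batch of instructions. A gpu pipeline does this sort of thing alongside the texture fetch essentially for free; on the cpu it *is* your inner loop.

#include <xmmintrin.h>
#include <stdio.h>

/* Normals stored structure-of-arrays: all x components together, then y, then z. */
void diffuse4(const float *nx, const float *ny, const float *nz,
              float lx, float ly, float lz, float *out)
{
    __m128 NX = _mm_loadu_ps(nx);
    __m128 NY = _mm_loadu_ps(ny);
    __m128 NZ = _mm_loadu_ps(nz);

    /* dot(N, L) for four normals in parallel */
    __m128 d = _mm_add_ps(
                   _mm_add_ps(_mm_mul_ps(NX, _mm_set1_ps(lx)),
                              _mm_mul_ps(NY, _mm_set1_ps(ly))),
                   _mm_mul_ps(NZ, _mm_set1_ps(lz)));

    /* clamp negative values to zero (back-facing) */
    d = _mm_max_ps(d, _mm_setzero_ps());
    _mm_storeu_ps(out, d);
}

int main(void)
{
    float nx[4] = { 0, 0, 0, 0 }, ny[4] = { 0, 1, 0, 0 }, nz[4] = { 1, 0, -1, 1 };
    float out[4];
    diffuse4(nx, ny, nz, 0.0f, 0.0f, 1.0f, out);   /* light pointing down +z */
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}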
 

Brucmack

Junior Member
Oct 4, 2002
21
0
0
Originally posted by: glugglug
The GPU has a lot of specialized functionality that isn't present in the CPU, so no, a P4 for each GPU pipeline would not be near as fast.

The reverse is not nearly as severe. GPUs could evolve to be the next general purpose CPU. Mid-high end GPUs are more sophisticated than today's best CPUs.

I don't know if a GPU evolving into a CPU is any easier. Yes, the GPU has specialized functions that aren't in a CPU. But a GPU is completely designed to run those functions and those functions alone. If a GPU could run a general purpose program now (which I consider unlikely, there's probably something missing), it would run horribly slowly because it's built for speed on very specific things.

You might argue that they could just beef up the pipelines for the rest of the functions. Doing that would likely require the other bits to be scaled down to keep everything working right. Basically, the architecture of a GPU is completely wrong for running general-purpose programs, so they'd have to be designing things from the ground up anyway.

Now, I'm not saying the opposite is any easier either. But you made it sound like GPU to CPU could actually happen, which is highly doubtful.

The markets are just way too different right now for either of these things to happen. GPUs are evolving far quicker than CPUs, and are actually not optimized very much (at low levels) because of this. CPUs on the other hand are developed more slowly, so they fine-tune things to squeeze everything they can out of them. In order for a GPU maker to make a CPU, it'd require a complete change in design mentality along with the fundamental architectural changes.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: Brucmack


I don't know if a GPU evolving into a CPU is any easier. Yes, the GPU has specialized functions that aren't in a CPU. But a GPU is completely designed to run those functions and those functions alone. If a GPU could run a general purpose program now (which I consider unlikely, there's probably something missing), it would run horribly slowly because it's built for speed on very specific things.

Until recently it was true that a GPU could not run general purpose programs.

The GeForce FX line and the Radeon 9500/9600?/9700/9800 all added this capability. There are C compilers now to make stuff run on the video cards.
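Roughly, what those compilers (Cg, Brook and the like) let you write is a small function that computes one output element from its inputs; the card then runs it once per pixel, in parallel. A plain-C sketch of the idea, with invented names - this is just the loop the gpu would replace:

#include <stdio.h>

#define N 1024

/* the "kernel": one output element as a pure function of its inputs */
static float kernel(float a, float b)
{
    return a * b + 1.0f;    /* any per-element math the shader units support */
}

int main(void)
{
    static float a[N], b[N], out[N];

    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    /* on the GPU this loop disappears: each iteration is an independent
       fragment, so the card runs as many of them at once as it has pipes */
    for (int i = 0; i < N; i++)
        out[i] = kernel(a[i], b[i]);

    printf("%f\n", out[N - 1]);
    return 0;
}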
 

Brucmack

Junior Member
Oct 4, 2002
21
0
0
Yeah, but the chips themselves aren't optimized to run non-graphics code, that's my point. The low core speeds are fine for the graphics pipelining, but not necessarily for general-purpose stuff.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Cpu's have to deal with branch mispredictions

GPUs will be 'dealing' with this inside of ninety days. They will almost certainly be executing all possible branches in parallel as the cost for a mispredict on a GPU would be staggering compared to a CPU.
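Concretely, "executing all possible branches in parallel" amounts to something like the predicated version in this toy C sketch (my own illustration, not any particular vendor's scheme): compute both sides for every pixel and keep the one the condition selects, so there is nothing to mispredict.

#include <stdio.h>

#define N 8

void shade_branchy(const float *x, float *out)
{
    for (int i = 0; i < N; i++) {
        if (x[i] > 0.5f)            /* a real branch: cheap on a CPU only */
            out[i] = x[i] * 2.0f;   /* because the predictor usually guesses right */
        else
            out[i] = 0.0f;
    }
}

void shade_predicated(const float *x, float *out)
{
    for (int i = 0; i < N; i++) {
        float taken     = x[i] * 2.0f;                       /* both paths computed... */
        float not_taken = 0.0f;
        float mask      = (x[i] > 0.5f) ? 1.0f : 0.0f;
        out[i] = mask * taken + (1.0f - mask) * not_taken;   /* ...then one is selected */
    }
}

int main(void)
{
    float x[N] = { 0.1f, 0.9f, 0.4f, 0.6f, 0.0f, 1.0f, 0.3f, 0.7f };
    float a[N], b[N];
    shade_branchy(x, a);
    shade_predicated(x, b);
    for (int i = 0; i < N; i++)
        printf("%f %f\n", a[i], b[i]);   /* identical results, no branches in the second */
    return 0;
}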

assumes the next 10 years of CPU/GPU advancement will look just like the past 10 years, which I don't see how anyone can count on

You are correct; most of the last ten years has been spent on speed and basic rasterization improvements (actually, we should say eight years, as that is when serious development of dedicated 3D hardware started moving forward). Moving forward it is a race to full programmability, and after that point is reached, back to speed.

Their idea is that there are things, graphically, which cannot be done using polygons, and with CPUs getting so much quicker, it is slowly becoming possible to do real-time raytracing.

Polygons are a limit now; the majority of the advancements we have seen in GPU programmability are there to work around those limitations using alternative methods. Realistically speaking, radiosity is much more desirable than ray tracing for GPUs (though that will take some time).

so what's a good estimate of a "cpu per pipeline" graphics performance? today's 3ghz cpu = ? (geforce 3?)

I ran a series of tests a few years back, and an Athlon 800MHz at the time was roughly 1% as fast as a GeForce using basic bilinear filtering at lower resolutions - and that got a lot worse when adding trilinear or anisotropic (and that was only with 2x anisotropic, as that was the limit at the time). It isn't completely accurate, as the GeForce was still processor limited in the test, but the CPU wasn't even close to the speed of a Voodoo1.

Today's 3GHz CPUs might be able to compete with a Voodoo1; forget anything newer, particularly the TNT, which could handle trilinear filtering with next to no performance hit.

Yeah, but the chips themselves aren't optimized to run non-graphics code, that's my point. The low core speeds are fine for the graphics pipelining, but not necessarily for general-purpose stuff.

Essentially, though, you are talking about a multi-core processor when you look at a GPU. The R3x0, as an example, has eight different 'cores' if you look at it from the CPU angle. These cores are combined when writing out pixel data, since writing quads is more efficient in terms of memory access, but there is nothing stopping GPU makers from changing this to deal with 'CPU'-style code. Their raw computational power per 'core' is also quite comparable to current processors despite the massive frequency gap, and that is before considering that there are eight 'cores' per part (in per-'core' terms nV comes out pretty far ahead right now, but they only have four).
 

MadRat

Lifer
Oct 14, 1999
11,944
264
126
Maybe they could just make it possible to run encryption and decryption on them. That alone would add significant value to the GPU.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
I don't want to see this level of convergence because it'll basically break the players involved. A CPU should be fast and efficient at running standard generic code while a GPU should be fast for video related operations. It's the same thing as a NIC, RAID controller or sound card - all of them have been designed for dedicated tasks.

If you start making each piece of hardware act like the other(s) you'll end up creating a mess and all of them will lose efficiency in the key functions they were originally designed for.
 

MadRat

Lifer
Oct 14, 1999
11,944
264
126
Losing a little efficiency is fine if overall value is added. Multipurpose is the future.
 

hahher

Senior member
Jan 23, 2004
295
0
0
Originally posted by: BFG10K
I don't want to see this level of convergence because it'll basically break the players involved. A CPU should be fast and efficient at running standard generic code while a GPU should be fast for video related operations. It's the same thing as a NIC, RAID controller or sound card - all of them have been designed for dedicated tasks.

If you start making each piece of hardware act like the other(s) you'll end up creating a mess and all of them will lose efficiency in the key functions they were originally designed for.

3d started off based on cpu with software modes. so if one day, maybe 10 years from now, cpu's are fast enough to run 3d games, and graphics peak to where extra dedicated hardware doesn't provide that much benefit (much like 2d games today), then why not run everything off the cpu again.

that would also have the benefit of being easier to configure, since you wouldn't have to deal with compatibility issues across various hardware and drivers.

maybe there's a game programmer here who would comment on which they would rather program for: a gpu-based or a cpu-based game (if cpu performance were up to par)
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,996
126
3d started off based on cpu with software modes.
Yes, and because it was too slow and ugly, they made dedicated GPUs for the task.

so if one day, maybe 10 years from now, cpu's are fast enough to run 3d games,
That will never happen, precisely because of my response above.

and graphics peak to where extra dedicated hardware doesn't provide that much benefit (much like 2d games today),
2D acceleration still blows away any CPU. If you don't think so then set your acceleration slider to "none" or load a standard VGA driver and you'll see how slow your GUI operations will be.

then why not run everything off the cpu again.
Because it would be a step backwards; the CPU has plenty of other things to do without concerning itself with functions that dedicated hardware should be performing.
 