Linus Torvalds: Discrete GPUs are going away


DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
July 2010: Nvidia drops their midrange GF104 at $200 and $230 in a blatant challenge to the price-gouged 5770 at $180. GF100 is sitting at $500 and $350.
Over the next 6 months the prices of GTX 460 and 5770 plummet, hitting the point where you could grab a 768MB/1GB GTX 460 for $90/110 in January 2011.
Same month Sandy Bridge drops with its HD 3000 graphics.
GTX 460 price rebounds to $130/160. 5770 for ~$100, and they park there.


2012: Ivy drops with HD4000. The midrange GK104 drops at... $500.
2013: High end GK110 drops at $1000. One year after the GTX 680 we get a cut-down GK104 at $250.
2014: Haswell drops with HD5200 available. Titan Z drops at... $3000. GTX 760 is still sitting at $250, and two years after launch Nvidia is still getting $320+ out of the GK104 in the GTX 770.

Wow, those integrated graphics have certainly undercut the dGPU market. Makes me want to break out my change purse to give Nvidia a pity purchase of a Titan Z, they're struggling so.
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
July 2010: Nvidia drops their midrange GF104 at $200 and $230 in a blatant challenge to the price-gouged 5770 at $180. GF100 is sitting at $500 and $350.
Over the next 6 months the prices of GTX 460 and 5770 plummet, hitting the point where you could grab a 768MB/1GB GTX 460 for $90/110 in January 2011.
Same month Sandy Bridge drops with its HD 3000 graphics.
GTX 460 price rebounds to $130/160. 5770 for ~$100, and they park there.


2012: Ivy drops with HD4000. The midrange GK104 drops at... $500.
2013: High end GK110 drops at $1000. One year after the GTX 680 we get a cut-down GK104 at $250.
2014: Haswell drops with HD5200 available. Titan Z drops at... $3000. GTX 760 is still sitting at $250, and two years after launch Nvidia is still getting $320+ out of the GK104 in the GTX 770.

Wow, those integrated graphics have certainly undercut the dGPU market. Makes me want to break out my change purse to give Nvidia a pity purchase of a Titan Z, they're struggling so.

What do you think this price gouging is indicative of? An attempt to sustain profit margins in the face of falling volumes.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
What do you think this price gouging is indicative of? An attempt to sustain profit margins in the face of falling volumes.

Ah yes, like the Radeon HD 4250 in the 880G was the reason the 5000 series went up in price. It wasn't that Fermi was late or anything.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
lol wut. There isn't an iGPU that is even remotely close to competing with any of the $200-250 cards.

Not yet, no. And it may not even have to in order to halt further dGPU progress, if wafer and IC design costs alone didn't already do that.




Note that per-transistor cost actually goes up below 28nm.
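
A quick back-of-the-envelope of why that happens. The wafer prices, die sizes and yields below are made-up round numbers rather than actual foundry quotes; they only show the mechanism.

Code:
# Illustrative only: none of these figures are real foundry data.
def cost_per_transistor(wafer_price, wafer_area_mm2, die_area_mm2, yield_rate, transistors):
    dies_per_wafer = wafer_area_mm2 // die_area_mm2        # ignores edge loss
    good_dies = dies_per_wafer * yield_rate
    return wafer_price / (good_dies * transistors)

# 28nm: cheaper wafer, mature yield, a 3.5B-transistor chip at 300 mm^2
print(cost_per_transistor(5000, 70000, 300, 0.80, 3.5e9))   # ~7.7e-9 $/transistor
# 20nm: pricier wafer, lower early yield, same chip shrunk to ~160 mm^2
print(cost_per_transistor(8000, 70000, 160, 0.60, 3.5e9))   # ~8.7e-9 $/transistor

If the wafer price rise outpaces the density gain (and early yields are worse), cost per transistor goes up even though the die shrinks.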
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
And?
GM107 and GM108 are still on 28nm while offering twice the perf/watt over the "same" Kepler based GPUs.

And GM108 is cheaper and faster than anything Intel can offer right now...
 

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
Performance and cost don't matter as much as this thread makes it out to be.
It is (relatively) easy to scale up integrated GPUs and get the same performance. There is no inherent performance deficit to integrating a GPU into a CPU.

There is a slight economic issue, with big dies being more expensive per area than small ones, due to the necessary defect management.
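
A rough sketch of that defect effect, using the standard Poisson yield approximation (the wafer price and defect density here are placeholders, not real numbers):

Code:
import math

# Poisson yield model: yield falls exponentially with die area at a given defect density.
def cost_per_good_die(wafer_price, wafer_area_mm2, die_area_mm2, defects_per_mm2):
    dies_per_wafer = wafer_area_mm2 / die_area_mm2          # ignores edge loss
    die_yield = math.exp(-defects_per_mm2 * die_area_mm2)
    return wafer_price / (dies_per_wafer * die_yield)

for area in (100, 200, 400, 600):                           # die area in mm^2
    print(area, round(cost_per_good_die(5000, 70000, area, 0.002), 2))

With these numbers the cost per mm^2 of good silicon nearly triples going from the 100 mm^2 die to the 600 mm^2 one, which is the "big dies cost more per area" effect.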

The reason integrated GPUs do not match high-end dedicated GPUs is that the demand for the latter is relatively small, and there's no economic incentive to take this step -- why compete in a small, relatively low-margin market when you can compete in a much larger market with similar margins? Especially with IC products, scale matters.

But, the advantages of tighter interconnection between GPU, CPU and memory are real; if a decent API and architecture were to be put into place they could be exploited.

Another limitation is that for compute - which is where the key advantage lies - you want at least double-precision floating point arithmetic, whereas for graphics single precision is often enough, as results aren't reused much and are instead thrown onto the screen. Balancing DP/SP performance is going to be a major challenge for mainstream-compute oriented hybrid processing products.
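
To put rough numbers on that trade-off (peak theoretical FLOPS only; the shader count, clock and DP:SP ratios below are representative values, not any specific product):

Code:
# Peak GFLOPS, counting a fused multiply-add as 2 FLOPs per shader per clock.
def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * clock_ghz * flops_per_clock

sp = peak_gflops(2048, 1.0)                   # single precision
for ratio in (2, 8, 24):                      # DP rate from 1/2 (compute parts) to 1/24 (gaming parts)
    print(f"SP {sp:.0f} GFLOPS -> DP at 1/{ratio}: {sp / ratio:.0f} GFLOPS")

The same silicon that looks great for graphics can be an order of magnitude slower once DP is required, which is exactly the balancing problem above.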

Again, this thread shouldn't be about performance comparisons of current dGPUs and iGPUs. These numbers mean nothing. The real challenge lies in whether the advantages of integrating high-performance GPUs with CPUs outweigh the cost. That is what's going to decide the future of dGPUs.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
And what makes you think APUs aren't made from wafers or cost nothing to design?

In terms of design cost, volume matters. And IGPs win by a massive factor here. Intel can, for example, spend something like $1 of GPU R&D per chip and get the same result as Nvidia spending $10 per chip.
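
That factor is just amortization arithmetic - the budgets and volumes below are illustrative, not real figures:

Code:
# Made-up numbers, purely to show how volume dilutes per-chip design cost.
def rd_per_chip(total_rd_dollars, units_shipped):
    return total_rd_dollars / units_shipped

print(rd_per_chip(1_000_000_000, 300_000_000))   # GPU R&D spread over huge CPU/IGP volume -> ~$3/chip
print(rd_per_chip(1_000_000_000,  30_000_000))   # same R&D spread over dGPU volume        -> ~$33/chip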

Wafer price is mainly a fabless issue. The 2 dGPU makers left are both fabless.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
And?
GM107 and GM108 are still on 28nm while offering twice the perf/watt over the "same" Kepler based GPUs.

And GM108 is cheaper and faster than anything Intel can offer right now...

This is a big deal. If foundries won't ever be able to bring cost per transistor down anymore, performance per dollar won't go up anymore either. The only way performance could then go up is by improving the architecture, which, as Intel has proven with billions of dollars of R&D for processors, won't do much in comparison to other methods.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
The problems CPUs have with increasing performance are different from GPUs'. The CPUs got stuck because there just isn't much more you can do to get performance out of a single stream of instructions. You can throw all the transistors at it that you like, but getting a single thread to go faster is hugely difficult, with massive diminishing returns if the clock speed won't increase. It's the lack of clock speed improvements that has Intel in trouble.

In the GPU world, however, the architectures are massively parallel; they should scale pretty linearly all the way up to millions of processing cores. Once you get to one shader per pixel you start having your first issues with scaling. It's still possible to run multiple programs through the cores, but you need to have awareness of the dependencies between them, so the scaling stops being perfect. They will probably hit memory bandwidth issues before this becomes a problem (some 30 years of doubling away).

We might not see future transistors improving as rapidly; that is largely because the 20nm marker (and whatever comes after) for a process is more or less a lie. Some aspect of the process reaches those smaller numbers, but it's nothing like the full shrinks we saw up to about 45nm, where the entire square of the transistor shrank. The big problem these days is power leakage, which dominates how much computing you can put in one place. As they focus on it, power consumption is dropping and allowing more transistors to be used, but the price per wafer is going up, so the improvements are going to come at more cost, not less as we are used to.

The CPU design as it stands has two problems: memory bandwidth and power dissipation. It's massively down on bandwidth compared to the custom memory interfaces of the GPU, and it's got more latency. With CPU bandwidth hovering around 50GB/s even for the quad channel CPUs, it's not even close to the 300GB/s we see on GPUs. Even DDR4 isn't going to do anything more than put it near 100GB/s on a quad channel system. That isn't going to close the gap; it's going to widen quite a bit further with future GPUs coming. For power dissipation the CPU is also limited to about 125W. That is low end GPU territory today; most mainstream decent cards are more like 175W and the top end is over 300W. The CPU design as it stands really can't go higher than 140W without watercooling. These two aspects put serious barriers on a CPU being genuinely competitive with discrete GPUs. I also don't see either of them changing in the future plans from Intel. Just as the CPU/iGPU gets better, so does the discrete card.
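
The bandwidth gap is just interface math; the configurations below are examples rather than a claim about any particular platform:

Code:
# Peak bandwidth in GB/s: channels * bus width (bytes) * transfer rate (MT/s).
def bandwidth_gbs(bus_width_bits, mtps, channels=1):
    return channels * (bus_width_bits / 8) * mtps / 1000

print(bandwidth_gbs(64, 1600, channels=4))   # quad-channel DDR3-1600:  ~51 GB/s
print(bandwidth_gbs(64, 3200, channels=4))   # quad-channel DDR4-3200:  ~102 GB/s
print(bandwidth_gbs(384, 7000))              # 384-bit GDDR5 at 7 GT/s: ~336 GB/s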

I just can't see Intel managing to do so much more with a lot less. Having 1/3 the power consumption available and 1/3 to 1/4 of the memory bandwidth and trying to hit 3x the performance per watt, even with their process improvement, is going to be seriously impressive. For Linus' prediction to come true I think we are looking at a big difference for the CPU and its architecture. The iGPU might be cheap but it's never going to be fast compared to the discrete cards released next to it.

PS I don't know if anyone has tried to game on an iGPU recently - the drivers are still awful, lots of graphical issues in a variety of games. They don't exactly update them often either.

PPS Most developers are ignoring OpenCL and such because it's a nightmare programming environment. That can't possibly be the future of computation, it's too rubbish.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The problems CPUs have with increasing performance are different from GPUs'. The CPUs got stuck because there just isn't much more you can do to get performance out of a single stream of instructions. You can throw all the transistors at it that you like, but getting a single thread to go faster is hugely difficult, with massive diminishing returns if the clock speed won't increase. It's the lack of clock speed improvements that has Intel in trouble.

In the GPU world, however, the architectures are massively parallel; they should scale pretty linearly all the way up to millions of processing cores. Once you get to one shader per pixel you start having your first issues with scaling. It's still possible to run multiple programs through the cores, but you need to have awareness of the dependencies between them, so the scaling stops being perfect. They will probably hit memory bandwidth issues before this becomes a problem (some 30 years of doubling away).

We might not see future transistors improving as rapidly; that is largely because the 20nm marker (and whatever comes after) for a process is more or less a lie. Some aspect of the process reaches those smaller numbers, but it's nothing like the full shrinks we saw up to about 45nm, where the entire square of the transistor shrank. The big problem these days is power leakage, which dominates how much computing you can put in one place. As they focus on it, power consumption is dropping and allowing more transistors to be used, but the price per wafer is going up, so the improvements are going to come at more cost, not less as we are used to.

The CPU design as it stands has two problems: memory bandwidth and power dissipation. It's massively down on bandwidth compared to the custom memory interfaces of the GPU, and it's got more latency. With CPU bandwidth hovering around 50GB/s even for the quad channel CPUs, it's not even close to the 300GB/s we see on GPUs. Even DDR4 isn't going to do anything more than put it near 100GB/s on a quad channel system. That isn't going to close the gap; it's going to widen quite a bit further with future GPUs coming. For power dissipation the CPU is also limited to about 125W. That is low end GPU territory today; most mainstream decent cards are more like 175W and the top end is over 300W. The CPU design as it stands really can't go higher than 140W without watercooling. These two aspects put serious barriers on a CPU being genuinely competitive with discrete GPUs. I also don't see either of them changing in the future plans from Intel. Just as the CPU/iGPU gets better, so does the discrete card.

I just can't see Intel managing to do so much more with a lot less. Having 1/3 the power consumption available and 1/3 to 1/4 of the memory bandwidth and trying to hit 3x the performance per watt, even with their process improvement, is going to be seriously impressive. For Linus' prediction to come true I think we are looking at a big difference for the CPU and its architecture. The iGPU might be cheap but it's never going to be fast compared to the discrete cards released next to it.

PS I don't know if anyone has tried to game on an iGPU recently - the drivers are still awful, lots of graphical issues in a variety of games. They don't exactly update them often either.

PPS Most developers are ignoring OpenCL and such because it's a nightmare programming environment. That can't possibly be the future of computation, it's too rubbish.

The dGPU market is shrinking and the pace will pick up with the transition to HBM/HMC. The bulk of GPU sales volume happens at the low end, and if you look at the notebook market, which sells one chip less than the desktop market, it would be the worst hit.

http://forums.anandtech.com/showpost.php?p=36492759&postcount=175
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
PS I don't know if anyone has tried to game on an iGPU recently - the drivers are still awful, lots of graphical issues in a variety of games. They don't exactly update them often either.

This is an important thing here that's being ignored. You guys talking about prices going up are correct; however, that's to compensate for the death of the entry level and doesn't indicate that the mid-range and high-end will die soon. Sales will tell the story there. Will people give up PC gaming because the GPUs are too expensive? We can't say yet. We can say, however, that Intel is showing zero interest in making their IGPs suitable for gaming, and I've seen nothing to indicate that they're going to try to fix this. We have to see how the market will respond.

That said, I will admit that dGPUs will soon die on laptops. However, standard voltage CPUs will suffer the same fate.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
The dGPU market is shrinking and the pace will pick up with the transition to HBM/HMC. The bulk of GPU sales volume happens at the low end, and if you look at the notebook market, which sells one chip less than the desktop market, it would be the worst hit.

http://forums.anandtech.com/showpost.php?p=36492759&postcount=175

This is assuming that AMD makes a huge comeback in the CPU/APU space. These advancements mean nothing if people are still buying Intel CPUs instead.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Not sure Intel's apparent plans for some desktop models with much larger iGPUs in the next couple of generations would make sense if they weren't going to look after the drivers too. Of course, huge companies don't always join up well.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I must admit I have no interest in buying another laptop with a separate GPU in it. I don't game on a laptop though, because it's a horrible experience as it's so much slower (CPU and GPU). Laptops aren't really a good gaming environment, so all the iGPU is really replacing there is a dGPU that doesn't need to be there.
 

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
For power dissipation the CPU is also limited to about 125W.

It doesn't have to be. We're seeing plenty of CPUs hitting well into 250W territory with OCs, with no real issues besides needing slightly bigger cooling - and when you look at GPU cooling, it's not a problem to get the heat away. Since adding transistors means increasing die surface (relative to a separate CPU+GPU), the die-cooler interface shouldn't be limiting either, for a few gens at least.
The limit for graphics cards was set by PCIe standards, and AMD just went ahead and said "we don't care, here's performance". Socket 2011 CPUs already run at up to 150W stock, and even ATX allows for much more efficient cooling of something in the CPU socket than in an expansion card slot.

So thermal and power issues really are just an engineering choice with no permanent cost incurred.

You brought up the memory issue though, which is the real one.
GDDR5 is so much more expensive than DDR4 that you just cannot replace one with the other and magically "do away with memory copies". Since you want 32-64GB on a next-gen computer with that level of computing power, the cost would be extreme. And that's before trace distances and such become an issue. Hence my point that the GPU-CPU interconnect needs beefing up, with the GPU becoming a local SMP node with extra FPU capability. But, as I also pointed out in an earlier post, hell will freeze over before AMD and Intel get together to spec a common interconnect, which this would require.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
This is assuming that AMD makes a huge comeback in the CPU/APU space. These advancements mean nothing if people are still buying Intel CPUs instead.

Are you assuming that Intel will never improve their graphics architectures to a level where they can compete with AMD/Nvidia architectures? I would say it's difficult but not impossible. Also, Intel has the best process in terms of transistor performance and easily leads the foundries by 12-15 months at 14nm. This means Intel will have the advantage, as they can throw more transistors at the problem to make up for lower architectural efficiency. Intel is pushing HMC with Micron, while AMD has chosen HBM, which is a JEDEC standard. Both provide massive bandwidth to these so-called IGPs/APUs, thereby eliminating bandwidth bottlenecks. Intel would make the transition to HMC in the next 2 years instead of wasting die space on eDRAM.
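
For scale, the kind of bandwidth being talked about, using the published first-generation HBM figures (1024-bit interface, ~1 Gb/s per pin per stack) against a dual-channel DDR3 IGP today - treat the exact configurations as examples:

Code:
# Peak bandwidth: bus width (bytes) * per-pin data rate (Gb/s) -> GB/s.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gbs(128, 1.866))   # dual-channel DDR3-1866 feeding an IGP today: ~30 GB/s
print(bandwidth_gbs(1024, 1.0))    # one first-gen HBM stack:                     ~128 GB/s
print(bandwidth_gbs(4096, 1.0))    # four stacks:                                 ~512 GB/s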
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I must admit I have no interest in buying another laptop with a separate GPU in it. I don't game on a laptop though, because it's a horrible experience as it's so much slower (CPU and GPU). Laptops aren't really a good gaming environment, so all the iGPU is really replacing there is a dGPU that doesn't need to be there.

I think there's a much stronger argument that dGPUs will go away completely in laptops in the not especially distant future. Even the biggest laptops can't handle a 300W GPU, and I don't think there's really a market for people who don't care about laptop size and weight at all, so integration has a much bigger focus. And since the OEM tightly controls what combinations you can put in the laptop, and you generally can't update laptop CPUs or GPUs, there isn't nearly as much of an argument for user flexibility as there is with desktops.

Plus, there are harder limits on display resolution, you're not going to get people chaining 3 24" monitors together.

Personally I don't game on PCs very much and would be fine with current IGPs for my desktop too, but that's just me and I don't discount the $200+ GPU market.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Are you assuming that Intel will never improve their graphics architectures to a level where they can compete with AMD/Nvidia architectures? I would say it's difficult but not impossible. Also, Intel has the best process in terms of transistor performance and easily leads the foundries by 12-15 months at 14nm. This means Intel will have the advantage, as they can throw more transistors at the problem to make up for lower architectural efficiency. Intel is pushing HMC with Micron, while AMD has chosen HBM, which is a JEDEC standard. Both provide massive bandwidth to these so-called IGPs/APUs, thereby eliminating bandwidth bottlenecks. Intel would make the transition to HMC in the next 2 years instead of wasting die space on eDRAM.

I'm more concerned about Intel's lack of gaming drivers. They seem to just expect everyone to come to them with minimum effort. It's up to them to prove me wrong. Until then, I just don't see anything to convince me that they care about anything other than talking about how much faster the graphics are than the previous generation.
 
Last edited:

rootheday3

Member
Sep 5, 2013
44
0
66
@Techhog...

can you expand on what you mean about "lack of gaming drivers"?

In general, Intel drivers in the last year or so "just work" with nearly all games, new and old. When they don't, Intel releases beta drivers on or near game release, including updates for Thief and Titanfall this spring and, within the last week, for GRID: Autosport. The new driver also offers CMAA (better quality post-processing AA than MLAA or FXAA).

They are creating new graphics capabilities like PixelSync (which offers a great improvement in the performance and quality of effects like smoke, and can optimize hair effects like TressFX at a much lower perf hit) - and Microsoft has added that feature to DX12.

Intel has an extensive developer relations team of AEs who work with game developers to verify functional health and optimize perf on new games. Intel is working with Valve on Steam and has stepped up its OpenGL efforts as well - the latest Windows driver supports OpenGL 4.3 and includes extensions which reduce driver overhead.

Microsoft used the Haswell iGPU to demo DX12 multithreading and driver/API overhead reduction at GDC - Intel is working closely with Microsoft and will have beta-quality drivers for ISVs this fall and will be ready at Threshold launch.
 
Aug 11, 2008
10,451
642
126
The problem with Intel is the very confusing and limited availability of their high-end iGPUs. Seems like every generation is supposed to be a huge step forward, and Iris Pro, while still limited, was in fact a big improvement, but it has very limited availability and only in expensive platforms. Normal run-of-the-mill CPUs with HD 4600 or HD 4400 are just a marginal improvement.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,452
10,120
126
Performance and cost don't matter as much as this thread makes it out to be.
It is (relatively) easy to scale up integrated GPUs and get the same performance. There is no inherent performance deficit to integrating a GPU into a CPU.
Uhm, memory bandwidth? Or are you assuming HBM / stacked DRAM already?
 
Mar 10, 2006
11,715
2,012
126
Are you assuming that Intel will never improve their graphics architectures to a level where they can compete with AMD/Nvidia architectures? I would say it's difficult but not impossible. Also, Intel has the best process in terms of transistor performance and easily leads the foundries by 12-15 months at 14nm. This means Intel will have the advantage, as they can throw more transistors at the problem to make up for lower architectural efficiency. Intel is pushing HMC with Micron, while AMD has chosen HBM, which is a JEDEC standard. Both provide massive bandwidth to these so-called IGPs/APUs, thereby eliminating bandwidth bottlenecks. Intel would make the transition to HMC in the next 2 years instead of wasting die space on eDRAM.

Eh, I'd argue all day that architecture trumps process any day.

See: NVIDIA Tesla vs. Xeon Phi (Knights Corner)

If Intel puts in the proper investment into its GPU architectures (and architectures we see coming today are a result of investments that began several years ago), then I could see this becoming a risk to NVIDIA/AMD dGPU businesses. However, I would not so easily discount both NVIDIA's and AMD's significant IP base and years of experience. I await the Gen. 8 disclosure that Intel will be providing at IDF and then the subsequent Broadwell benchmarks.

A MacBook Air with a Skylake-ULT + eDRAM + Gen. 9 could be pretty tasty.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,868
3,419
136
A MacBook Air with a Skylake-ULT + eDRAM + Gen. 9 could be pretty tasty.

I actually think it might swing the other way on the GPU side; Intel has the superior memory pipeline in the CPU/integrated gfx space. Right now bandwidth is the biggest limiter, and AMD IGPs are seriously bandwidth constrained. AMD/NV GPU uarchs won't sit still either: internal caches will likely increase, execution and prioritization will get more flexible, etc. So when HMC/2.5D/3D memory finally makes it, it will allow AMD IGPs to really stretch their legs. On the CPU side Steamroller is much better at ~19 watts than it is at 65 watts relative to Haswell, so hopefully with Excavator they can continue to improve.

The question is how long it will take those memory solutions to arrive - within Skylake's time frame?

I really wish I had a Steamroller to test with; my Haswell X1 Carbon was pretty rubbish on the clocks until I used ThrottleStop to "tweak" the turbo.
 
Last edited: