Linus Torvalds: Discrete GPUs are going away

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
In terms of design cost, volume matters, and IGPs win by a massive factor here. Intel can, for example, spend something like $1 of GPU R&D per chip and get the same result as nVidia spending $10 per chip.

Wafer price is mainly a fabless issue. The two dGPU makers left are both fabless.

Ah, I see you believe in Potionomics.

http://www.giantitp.com/comics/oots0135.html


Why are you operating under the assumption that fabs cost nothing to build or operate if you own them?

Intel isn't going to give you a Titan-class GPU in a $45 Celeron for the same reason they don't sell the Xeon E7 for $45 and for the same reason Nvidia doesn't sell the GK110 for $45. Die space isn't free. Intel owning their own fabs doesn't mean they can give you a 64 core processor with 6GB cache and a Titan Z grafted on for $1, and make it up with "volume."
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Ah, I see you believe in Potionomics.

Why are you operating under the assumption that fabs cost nothing to build or operate if you own them?

Intel isn't going to give you a Titan-class GPU in a $45 Celeron for the same reason they don't sell the Xeon E7 for $45 and for the same reason Nvidia doesn't sell the GK110 for $45. Die space isn't free. Intel owning their own fabs doesn't mean they can give you a 64 core processor with 6GB cache and a Titan Z grafted on for $1, and make it up with "volume."

I don't see how you came to that conclusion.

Are you saying volume is completely irrelevant? R&D costs the same no matter how many chips you produce. This is also why there are only two dGPU makers left in the first place.
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
Ah, I see you believe in Potionomics.

Why are you operating under the assumption that fabs cost nothing to build or operate if you own them?

Intel isn't going to give you a Titan-class GPU in a $45 Celeron for the same reason they don't sell the Xeon E7 for $45 and for the same reason Nvidia doesn't sell the GK110 for $45. Die space isn't free. Intel owning their own fabs doesn't mean they can give you a 64 core processor with 6GB cache and a Titan Z grafted on for $1, and make it up with "volume."

Cute cartoon, but it doesn't fit this problem at all. "Potionomics" refers to per-unit costs; ShintaiDK is referring to capital and R&D expenditure, which is a fixed cost regardless of units sold: the classic "the first one cost $1bn to make, and the rest of them cost $5" situation. You need to spend a vast sum up front to design your new GPU, and then you need to sell a large enough volume to recoup that R&D.
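To put numbers on that, here is a rough Python sketch with invented figures (only the "$1bn up front, $5 per unit" framing comes from the post above; the $55 selling price is purely an assumption). Per-unit cost collapses with volume only because the fixed R&D gets spread over more chips, and there is a break-even volume below which the design never pays for itself:

```python
# Rough sketch, not real figures: fixed R&D vs per-unit manufacturing cost.
RND_COST = 1_000_000_000   # one-time design/R&D spend ("the first one cost $1bn")
UNIT_COST = 5              # marginal cost of each additional unit ("the rest cost $5")
PRICE = 55                 # hypothetical selling price per unit (assumption)

def cost_per_unit(volume):
    """Total cost per unit: amortized R&D plus marginal manufacturing cost."""
    return RND_COST / volume + UNIT_COST

def breakeven_volume():
    """Units that must be sold before revenue covers the up-front R&D."""
    return RND_COST / (PRICE - UNIT_COST)

for volume in (1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} units -> ${cost_per_unit(volume):,.2f} per unit")
print(f"break-even at ~{breakeven_volume():,.0f} units")
```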
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Intel is not immune to the rising cost per process node. Also, TSMC produces two or more times as many wafers per month as Intel. TSMC can sell at lower margins due to higher volumes and still make high profits.
One more thing: both AMD and NVIDIA use the same GPU IP in more than one product. They reuse the same IP across multiple products in different segments. That makes up for the lower dGPU volumes.

Edit: That also lowers development time and R&D cost.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Intel is not immune to the rising cost per process node. Also, TSMC produces two or more times as many wafers per month as Intel. TSMC can sell at lower margins due to higher volumes and still make high profits.
One more thing: both AMD and NVIDIA use the same GPU IP in more than one product. They reuse the same IP across multiple products in different segments. That makes up for the lower dGPU volumes.

Edit: That also lowers development time and R&D cost.

TSMC sells more wafers, yes. But they still have something like 40% of their entire capacity on 200mm wafers and below, across seven fabs and one joint venture, while they only have three 300mm fabs. And 28nm doesn't even account for 20% of their volume.

Yes, they reuse the same IP. But let's look at where: AMD's CPU division is collapsing and is a mere shadow of its former self, and Tegra sales for nVidia are close to non-existent today.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
TSMC sells more wafers, yes. But they still have something like 40% of their entire capacity on 200mm wafers and below, across seven fabs and one joint venture, while they only have three 300mm fabs. And 28nm doesn't even account for 20% of their volume.

I was referring to 28nm vs 22nm alone. TSMC produces more than twice as many 28nm wafers as Intel does 22nm wafers.

Yes, they reuse the same IP. But let's look at where: AMD's CPU division is collapsing and is a mere shadow of its former self, and Tegra sales for nVidia are close to non-existent today.

10+ million console sales in six months is huge reuse of IP, and don't forget low-power SoCs. Also, both AMD and especially NVIDIA make a lot of revenue and profit from the HPC segment.
 

NIGELG

Senior member
Nov 4, 2009
851
31
91
Discrete cards are going away?

Then we had better enjoy them while they are here... and stop wasting valuable time arguing on forums.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I was referring to 28nm vs 22nm alone. TSMC produces more than twice as many 28nm wafers as Intel does 22nm wafers.

Could you link to numbers on that?

10+ million console sales in six months is huge reuse of IP, and don't forget low-power SoCs. Also, both AMD and especially NVIDIA make a lot of revenue and profit from the HPC segment.

And how long until the next console? Console sales have already peaked and are dropping fast.

Low power SoCs with hardly any revenue?

The HPC segment is already under heavy fire.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
10+ million console sales in six months is huge reuse of IP, ...

More like 8.2M+ PS4 and 4.7M+ XB1 sales (a total of ~12.9M), according to VGChartz.


And how long until the next console? Console sales have already peaked and are dropping fast.

They're still selling something like 100k PS4s and 65k Xbox Ones each week.
Over a month that's roughly 660k+ units combined from the XB1 and PS4.
Again, that's if you go by VGChartz.



AMD's CPU division is collapsing and is a mere shadow of its former self, and Tegra sales for nVidia are close to non-existent today.

^ this is sadly true.

AMD needs to get their APUs going faster, for the iGPUs.
HMC technology cannot come soon enough for AMD's APUs.

There's huge growth potential there, though, for laptops, tablets, and low-to-mid-end desktops.

Nvidia... will probably be fine in the ARM market.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
AMD needs to get their APUs going faster, for the iGPUs.
HMC technology cannot come soon enough for AMD's APUs.

There's huge growth potential there, though, for laptops, tablets, and low-to-mid-end desktops.

Nvidia... will probably be fine in the ARM market.

VGChartz is nothing but random guesses, though.

The problem for AMD is that a faster iGPU means... fewer dGPU sales. Though the pain will be even greater for nVidia, so AMD can maybe pick up some market share there. But again, with their anemic CPU performance, it may just mean more losses.

nVidia is anything but fine in the ARM segment. Their revenue there today is not even half of what it was in 2012, for example.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
I don't see how you came to that conclusion.

Are you saying volume is completely irrelevant? R&D costs the same no matter how many chips you produce. This is also why there are only two dGPU makers left in the first place.

Die space isn't free. You cannot recoup the R&D of a large-die processor by lowering the price to below its cost of manufacture. Selling in volume at a loss multiplies the loss; it doesn't result in profit.

If you can recognize that Intel can't sell the Xeon E5-2699 v3 for Celeron prices and make a profit, you should be able to recognize that they can't sell an APU of the Xeon's size for Celeron prices and make a profit. The iGPU is not free.
Intel can't put Titan-class graphics in every prebuilt for the same reason Nvidia doesn't: volume does not equal profit.
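A quick sketch of that point in Python, with purely hypothetical figures (the $45 price echoes the Celeron example above; the $150 per-unit die cost and $1bn R&D are assumptions): once the price sits below the per-unit manufacturing cost, more volume only digs the hole deeper.

```python
# Hypothetical numbers: volume only helps when price exceeds the marginal cost.
RND_COST = 1_000_000_000   # sunk design cost (assumption)
UNIT_COST = 150            # per-unit manufacturing cost of a big die (assumption)

def profit(price, volume):
    # Per-unit contribution times volume, minus the fixed R&D.
    return (price - UNIT_COST) * volume - RND_COST

print(profit(price=300, volume=10_000_000))    #     500,000,000 -> profitable
print(profit(price=45, volume=10_000_000))     #  -2,050,000,000 -> loss
print(profit(price=45, volume=100_000_000))    # -11,500,000,000 -> volume multiplies the loss
```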
 

Mand

Senior member
Jan 13, 2014
664
0
0
I don't see how you came to that conclusion.

Are you saying volume is completely irrelevant? R&D costs the same no matter how many chips you produce. This is also why there are only two dGPU makers left in the first place.

Operating costs increase with volume, though, and for semiconductor fabs the operating costs (as well as the initial startup costs for new equipment and lines) are rather large.

R&D is chump change compared to a production fab.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Die space isn't free. You cannot recoup the R&D of a large-die processor by lowering the price to below its cost of manufacture. Selling in volume at a loss multiplies the loss; it doesn't result in profit.

If you can recognize that Intel can't sell the Xeon E5-2699 v3 for Celeron prices and make a profit, you should be able to recognize that they can't sell an APU of the Xeon's size for Celeron prices and make a profit. The iGPU is not free.
Intel can't put Titan-class graphics in every prebuilt for the same reason Nvidia doesn't: volume does not equal profit.

I think you still completely miss the point. The R&D expense is exactly the same whether you sell one chip or 100 million. And R&D is one of the biggest expenses these companies have, if not the biggest.

Also, you are comparing non-existent binned SKUs instead of die size. Two completely different things.

As already said, it's the first chip that costs billions. The next one only costs the direct manufacturing cost, and that's not much. A regular quad-core GT2 chip, for example, is most likely below $20.

Just look at AMD selling 315mm² chips for ~$100.
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Really? Intel, for example, spends $9B a year on R&D, or about the same as two leading-edge 300mm fabs.


I think Mand means that R&D on the iGPU is chump change compared to R&D for a new node. At least, that's the only way that comment makes sense to me.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I think Mand means that R&D on the iGPU is chump change compared to R&D for a new node. At least, that's the only way that comment makes sense to me.

Well, we know TSMC expects to reach $1.8B in R&D spending in 2014, and we know nVidia, for example, is expected to reach around $1.35B in 2014. Let's assume both spend an equal percentage on non-GPU/node-related R&D.

Not exactly chump change. But yes, the node costs more.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
There are some other interesting trends that bode well for iGPUs. On the PC side, Mantle has spurred MS to make some major inroads in reducing API overhead in DX12. On the mobile side, Apple will be introducing Metal; based on developer comments, this is going to give console-like performance on tablets and phones.

In terms of dGPUs, basically either AMD or Nvidia needs to grab all the market share to have the volumes necessary to support an enthusiast market long term: a market pushed by multi-screen gameplay, real-time cinematic feature sets, performance, and ego. NV is best placed for this at the moment because they still have a large professional/HPC market from which they can bin lower-quality chips for consumers.

The winner of the dGPU wars will finally be able to push a much more efficient API, similar to Metal, for PCs. Then they can drive multi-screen and real-time cinematic effects to a higher level and present a much better value proposition to discerning gamers (even on mid-range cards for single screens).

Of course, Intel, if they really put the manpower into it, could do the same thing software-wise. So far they have mainly paid lip service to the enthusiast market, but that could change. Given the broad market they serve, though, the ROI may just not be there for them to spend the money required to compete, on some level, with dGPUs. It may be cheaper, or even more palatable, just to leave some money on the table.

So in the end, the only question is: can a dGPU maker remain profitable making AIB cards for high-end consumers and the professional markets? The answer will be given, in large part, by enthusiasts like us and by the specialty markets. Are we willing to pay the price for the very best in performance and quality?

If you had asked me six months ago, I would have said the dGPU would be dead or on life support by 2020, but I hadn't considered the likelihood that one of the two remaining large-scale dGPU developers could be knocked out of the game. So now I think there are too many variables to pronounce the death of the dGPU. By 2020 we should have the information to judge the dGPU's odds of success or demise, after seeing the results of the shakeout between AMD and Nvidia.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Well, we know TSMC expects to reach $1.8B in R&D spending in 2014, and we know nVidia, for example, is expected to reach around $1.35B in 2014. Let's assume both spend an equal percentage on non-GPU/node-related R&D.

Not exactly chump change. But yes, the node costs more.

Good point!
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
There are some other interesting trends that bode well for iGPUs. On the PC side, Mantle has spurred MS to make some major inroads in reducing API overhead in DX12. On the mobile side, Apple will be introducing Metal; based on developer comments, this is going to give console-like performance on tablets and phones.

Microsoft was already working on DX12 many years before AMD announced Mantle. Metal won't give the A7 more than its 115 GFLOPS.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
Microsoft was already working on DX12 many years before AMD announced Mantle. Metal won't give the A7 more than its 115 GFLOPS.

Is there any solid proof of this? With DX12's current release timeline, it seems more likely that Microsoft was caught completely off guard by Mantle and is using a lot of AMD's work to get DX12 out the door.

Repi has hinted rather cryptically that DX12 seems very "similar" to Mantle.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Is there any solid proof of this? With DX12's current release timeline, it seems more likely that Microsoft was caught completely off guard by Mantle and is using a lot of AMD's work to get DX12 out the door.

Repi has hinted rather cryptically that DX12 seems very "similar" to Mantle.

nVidia, Intel, Microsoft, etc. all say so. AMD was even caught claiming there was no such thing as DX12, in hopes of boosting their own Mantle adoption.

We have been over this multiple times, so let's not take another round. This thread is about another subject.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
PR statements are only valid when it comes to dismissing AMD. Yeah, we already know that.

The DX12 announcement was just MS entering full damage-control mode, with Intel and Nvidia as its remoras.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Microsoft was already working on DX12 many years before AMD announced Mantle. Metal won't give the A7 more than its 115 GFLOPS.

My bad on the DX12/Mantle issue. As far as Metal goes, game-engine developers such as Epic and EA's Frostbite team showed considerable excitement over it (perhaps obligatory excitement). And while Metal obviously doesn't raise the limits of the hardware, it can dramatically reduce GPU call overhead; here is an example: http://www.alexstjohn.com/WP/2014/06/08/direct3d-opengl-metal-full-circle/
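As a toy illustration of what "reducing call overhead" buys (this is just a back-of-the-envelope Python model, not Metal or DX12 code, and the microsecond figures are assumptions rather than measurements):

```python
# Toy model: per-draw-call CPU overhead caps the achievable frame rate.
def max_fps(draw_calls, overhead_us_per_call, other_frame_work_ms=5.0):
    """CPU-side frame time = API submission overhead + everything else in the frame."""
    frame_ms = draw_calls * overhead_us_per_call / 1000 + other_frame_work_ms
    return 1000 / frame_ms

# A "thick" API at ~20 us of driver work per call vs a thin one at ~2 us:
for overhead_us in (20, 2):
    print(f"{overhead_us:>2} us/call: {max_fps(10_000, overhead_us):5.1f} fps at 10k draw calls")
# 20 us/call:   4.9 fps at 10k draw calls
#  2 us/call:  40.0 fps at 10k draw calls
```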
 

Mand

Senior member
Jan 13, 2014
664
0
0
Well, we know TSMC expects to reach $1.8B in R&D spending in 2014, and we know nVidia, for example, is expected to reach around $1.35B in 2014. Let's assume both spend an equal percentage on non-GPU/node-related R&D.

Not exactly chump change. But yes, the node costs more.

Sorry, I was unclear. It depends on what you mean by R&D. Doing the design work and the mask development for a new chip on an existing process is a lot less expensive than developing a new process.

I mostly meant it in the context where people say that if AMD and Nvidia owned their own fabs, we'd get so much better stuff. Maybe, maybe not. Fabs are brutally expensive, and doing all the work to develop them is expensive. Don't AMD and Nvidia have to pay companies like GloFo and TSMC anyway, you ask? Yes and no. Yes, in the sense that TSMC's bill for their fab runs includes TSMC's development costs; but no, in the sense that a lot of what TSMC learns about a new process node applies to all of their customers, so no individual customer has to pay for the whole thing. That is exactly what would happen if AMD and Nvidia started their own fabs and did all the work on them themselves.

The R&D cost for Nvidia to take TSMC's 28nm node and make a new GK*** chip is really small compared to what it would cost Nvidia to run its own fab.
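The cost-sharing point, as a back-of-the-envelope sketch (every number here is invented purely for illustration): a foundry customer only carries the slice of node R&D baked into its wafer bill, while a captive-fab owner carries all of it.

```python
# Invented numbers: sharing node-development cost vs carrying it alone.
NODE_DEV_COST = 10_000_000_000   # hypothetical cost to develop and ramp a new node
MY_WAFER_SHARE = 0.10            # fraction of the node's output one customer buys

shared_burden = NODE_DEV_COST * MY_WAFER_SHARE   # roughly what the foundry bill recovers from me
solo_burden = NODE_DEV_COST                      # what I carry if the fab is mine alone

print(f"foundry customer: ~${shared_burden / 1e9:.1f}B of node R&D in its wafer bills")
print(f"captive-fab owner: ~${solo_burden / 1e9:.1f}B carried alone")
```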
 