NVIDIA Pascal Thread


Kris194

Member
Mar 16, 2016
112
0
0
This means that even though the foundries can achieve improved cost/transistor, cost/mm^2 is actually on the rise. Intel estimates that going from 22nm (single patterning) to 14nm (double patterning) led to a 30% increase in wafer cost. That's NOT small.

If we assume something similar for TSMC in going from 28nm to 16nm, then this means that the cost of 300mm^2 of silicon in 16FF+ is about that of ~390mm^2 of TSMC 28nm silicon, assuming normalized yields. TSMC 16nm yields are said to be crazy good right now, so let's assume similar yields.

In addition, the performance/transistor has gone way up, so while some people will sit there and gnash their teeth about how they're not getting the "right amount of silicon die area" it's ultimately delivered performance/$ that matters.

If 300mm^2 of 16FF+ silicon gives you way better performance than 600mm^2 of 28nm silicon, and it is being sold to you for the same price as the 600mm^2 part, then why complain? Why whine that your "rights" as "we the gamers" are being trampled upon by evil, greedy NVIDIA?

I agree, but why don't they say how many of the same chips they can make per wafer on both nodes? Moreover, it's obvious that cost/mm^2 is rising because you can put more transistors in the same space. Please don't tell me that Nvidia had to double the price of their mid-range chips because manufacturing cost rose that much, because that's something I won't believe.
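For a rough sense of the per-wafer arithmetic, here's a minimal sketch. The 300mm wafer, the standard die-per-wafer approximation, the Poisson yield model, the defect density and the wafer prices are all illustrative assumptions; only the ~30% wafer cost increase and the 300mm^2 vs 600mm^2 die areas come from the posts above.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common die-per-wafer approximation: gross dies minus edge loss."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, defect_density_per_mm2=0.001):
    """Simple Poisson yield model: yield = exp(-D0 * area)."""
    die_yield = math.exp(-defect_density_per_mm2 * die_area_mm2)
    return wafer_cost / (dies_per_wafer(die_area_mm2) * die_yield)

# Illustrative wafer prices only; 16FF+ assumed ~30% more per wafer than 28nm.
cost_28nm_600mm2 = cost_per_good_die(wafer_cost=5000,       die_area_mm2=600)
cost_16ff_300mm2 = cost_per_good_die(wafer_cost=5000 * 1.3, die_area_mm2=300)

print(f"28nm,  600mm^2 die: ~${cost_28nm_600mm2:.0f} per good die")
print(f"16FF+, 300mm^2 die: ~${cost_16ff_300mm2:.0f} per good die")
```

Even with those made-up wafer prices, the small 16FF+ die still comes out well ahead per chip after the 30% wafer cost increase; the open question in this thread is how much of that shows up in the retail price.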
 
Last edited:

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
I find it frustrating that many of the people in here who are claiming that NVIDIA is "ripping off we the gamers" seem to be arguing from false premises.

First of all, using die size to compare whether one chip is a worthy successor to another is ridiculous. In this day and age, in order to move to new nodes, the capital intensity goes substantially up due to the need for double patterning.

This means that even though the foundries can achieve improved cost/transistor, cost/mm^2 is actually on the rise. Intel estimates that going from 22nm (single patterning) to 14nm (double patterning) led to a 30% increase in wafer cost. That's NOT small.

If we assume something similar for TSMC in going from 28nm to 16nm, then this means that the cost of 300mm^2 of silicon in 16FF+ is about that of ~390mm^2 of TSMC 28nm silicon, assuming normalized yields. TSMC 16nm yields are said to be crazy good right now, so let's assume similar yields.

In addition, the performance/transistor has gone way up, so while some people will sit there and gnash their teeth about how they're not getting the "right amount of silicon die area" it's ultimately delivered performance/$ that matters.

If 300mm^2 of 16FF+ silicon gives you way better performance than 600mm^2 of 28nm silicon, and it is being sold to you for the same price as the 600mm^2 part, then why complain? Why whine that your "rights" as "we the gamers" are being trampled upon by evil, greedy NVIDIA?

If you don't like it, buy an AMD alternative. And, I guess if AMD does the same thing (and they appear to be doing just that), the best I can suggest is to use the "free" Intel/AMD iGPU on your CPU of choice.
If GPUs and wafers are more expensive, why has NV made the most money in its history since they doubled prices with Kepler and maybe even tripled them with Pascal? And people like you are still defending them.. Wow
 

coercitiv

Diamond Member
Jan 24, 2014
6,402
12,862
136
I find it frustrating that many of the people in here who are claiming that NVIDIA is "ripping off we the gamers" seem to be arguing from false premises.

First of all, using die size to compare whether one chip is a worthy successor to another is ridiculous. In this day and age, in order to move to new nodes, the capital intensity goes substantially up due to the need for double patterning.
If that is true, we should see Nvidia struggling to keep up profits from the GeForce line. Are margins slowly sinking or still going up?

In addition, the performance/transistor has gone way up, so while some people will sit there and gnash their teeth about how they're not getting the "right amount of silicon die area" it's ultimately delivered performance/$ that matters.
Exactly, and it seems to me some people choose to spend their $$$ on even more performance, just like some people choose to skip a CPU gen.

If 300mm^2 of 16FF+ silicon gives you way better performance than 600mm^2 of 28nm silicon, and it is being sold to you for the same price as the 600mm^2 part, then why complain?
What's next for the following process jump, 150mm^2 high-end parts? At this rate the iGPU just needs to wait for the dGPU to catch down in size.
 

Timmah!

Golden Member
Jul 24, 2010
1,463
729
136
I find it frustrating that many of the people in here who are claiming that NVIDIA is "ripping off we the gamers" seem to be arguing from false premises.

First of all, using die size to compare whether one chip is a worthy successor to another is ridiculous. In this day and age, in order to move to new nodes, the capital intensity goes substantially up due to the need for double patterning.

This means that even though the foundries can achieve improved cost/transistor, cost/mm^2 is actually on the rise. Intel estimates that going from 22nm (single patterning) to 14nm (double patterning) led to a 30% increase in wafer cost. That's NOT small.

If we assume something similar for TSMC in going from 28nm to 16nm, then this means that the cost of 300mm^2 of silicon in 16FF+ is about that of ~390mm^2 of TSMC 28nm silicon, assuming normalized yields. TSMC 16nm yields are said to be crazy good right now, so let's assume similar yields.

In addition, the performance/transistor has gone way up, so while some people will sit there and gnash their teeth about how they're not getting the "right amount of silicon die area" it's ultimately delivered performance/$ that matters.

If 300mm^2 of 16FF+ silicon gives you way better performance than 600mm^2 of 28nm silicon, and it is being sold to you for the same price as the 600mm^2 part, then why complain? Why whine that your "rights" as "we the gamers" are being trampled upon by evil, greedy NVIDIA?

If you don't like it, buy an AMD alternative. And, I guess if AMD does the same thing (and they appear to be doing just that), the best I can suggest is to use the "free" Intel/AMD iGPU on your CPU of choice.

Easy, because 300mm2 of 16nm silicon won't give you way better performance. 600mm2 of 16nm silicon will.

Basically, people don't want to pay the same money for a smaller performance increase than last time. If I paid, say, 500 euros for my first GPU, then 500 euros for my second, which was 2x faster than the first one, I expect my next GPU to once again be 2x faster for the same price. Not 1.25x or 1.4x or 1.5x. 2x or bust. If they can do it with a 300mm2 chip nowadays, fine, I don't care, good for them. If they don't (and they don't), why is it so difficult to understand that I want the 600mm2 part, which is the one that provides the performance increase I'm after?

OFC I understand inflation is a thing and costs rise, so I can live with the fact that what was 500 euros 5 years ago is 650 now. But not 800 and certainly not 1000... which is what we are seeing. You want to tell me there was 100 percent inflation between 2010 and now, so I have to pay twice as much for a flagship product as I did in 2010, and that this price increase actually reflects the rise in real production costs?
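To put rough numbers on that, here's a minimal sketch of what compounding does to a 500 EUR price between 2010 and 2016; the annual rates are illustrative assumptions, not actual CPI figures.

```python
# Compound a 500 EUR price at a few assumed annual inflation rates, 2010 -> 2016.
base_price = 500.0
years = 2016 - 2010

for annual_rate in (0.02, 0.045, 0.122):
    adjusted = base_price * (1 + annual_rate) ** years
    print(f"{annual_rate:.1%}/yr over {years} years: {base_price:.0f} EUR -> {adjusted:.0f} EUR")

# Roughly: ~2%/yr (typical headline inflation) gives ~563 EUR,
# ~4.5%/yr is needed to reach the 650 EUR figure above,
# and ~12%/yr would be needed to justify a doubling to 1000 EUR.
```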
 
Last edited:

Kris194

Member
Mar 16, 2016
112
0
0
Arachnotronic, if the new node costs that much, then why has Nvidia been making more money every single year since 2009?

Gross margin for fiscal year 2016 was a record 56.1%, up 60 basis points from the previous year.
Our overall gross margin was 55.5%, 54.9% and 52.0% for fiscal years 2015, 2014 and 2013, respectively.
Our gross margin was 52.0%, 51.4% and 39.8% for fiscal years 2013, 2012 and 2011, respectively.

Our gross margin was 35.4%, 34.3% and 45.6% for fiscal years 2010, 2009 and 2008, respectively.


If the new node costs that much, why did gross margin go through the roof in FY2012?
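Laying out the figures quoted above year by year (the margin values are as quoted; the deltas are just arithmetic) makes the FY2012 jump obvious:

```python
# NVIDIA gross margin by fiscal year, as quoted from the filings above.
gross_margin = {
    2008: 45.6, 2009: 34.3, 2010: 35.4, 2011: 39.8, 2012: 51.4,
    2013: 52.0, 2014: 54.9, 2015: 55.5, 2016: 56.1,
}

years = sorted(gross_margin)
for prev, cur in zip(years, years[1:]):
    delta = gross_margin[cur] - gross_margin[prev]
    print(f"FY{cur}: {gross_margin[cur]:.1f}% ({delta:+.1f} pts vs FY{prev})")
# FY2012 shows the biggest single-year jump: +11.6 points over FY2011.
```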
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Arachnotronic, if the new node costs that much, then why has Nvidia been making more money every single year since 2009?





Of course higher prices are part of the reason. But that's not the only reason. Tegra automotive, GeForce GRID, and Tesla sales are both high and very high margin. All of that, coupled with a massive market share increase and a migration of core gamers to higher-end GPUs, is why. If AMD starts to claw back market share at a substantial rate, expect Nvidia to initiate a price war. But Tesla and Tegra automotive margins are shielded from gamer GPU sales and will thus keep overall margins high.
 
Mar 10, 2006
11,715
2,012
126
Of course higher prices are part of the reason. But that's not the only reason. Tegra automotive, GeForce GRID, and Tesla sales are both high and very high margin. All of that, coupled with a massive market share increase and a migration of core gamers to higher-end GPUs, is why. If AMD starts to claw back market share at a substantial rate, expect Nvidia to initiate a price war. But Tesla and Tegra automotive margins are shielded from gamer GPU sales and will thus keep overall margins high.

Tegra Automotive margins are in the toilet
 

Pinstripe

Member
Jun 17, 2014
197
12
81
Please do tell how prices go up by 9% while currency purchasing power stays the same.

The core inflation rate is 1-2%, but it is being puffed up by food and energy inflation, which is growing at a faster rate and is not factored into the consumer price index due to the government's flawed measure of inflation. Food, rent and energy are more expensive.

As for other items (like graphics cards): factor in other distortions, like central banks propping up parasitic Wall Street and governments enforcing stealth taxes of all kinds, and you can see why doing legit business is becoming increasingly difficult. So a company like Nvidia, which needs to maintain a growing and ever more expensive ecosystem, has to make cuts somewhere. If they can't pass the cost on to the consumer, because consumers are being squeezed left and right, they're going to have to cut on the hardware side instead. In other words, the times of a 2x performance increase for the same money are over, so deal with it.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Right now Nvidia must be making a fortune on those P100 cards - they can sell them for so much money, and as I understand it the HPC demand is huge. Hence Nvidia will keep their margins even with this expensive new node and HBM memory.

That's why Nvidia have good margins - it's because they found and developed new markets for their cards, providing everything that's required (including software). It's not because they played it safe and stuck to the lower-margin gaming market.

This is in contrast to AMD, who have been unable to do the same thing: sure, lots of nice ideas on PowerPoint, but they never come to anything because AMD never really invests in them. They just produce something, put "open source" on some slides, and then leave it to others to fail to take it anywhere.
 
Last edited:

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
First, consumer price index numbers tend to be low during a recession or recessionary economic times, due to the drop in price of manufactured goods, which carry a large weight in the way CPI numbers are calculated. Food, housing, education and healthcare items can easily run 5%+ a year while still producing a low 0-1% CPI number because of the huge deflationary effect of durable goods. The CPI is not a measure of the "cost of survival" and is a terrible metric for that, because you don't need to buy a new cell phone every year to live, and it doesn't keep you fed or put a roof over your head.

Secondly, the way the CPI gets calculated changes every once in a while, so it is not an honest apples-to-apples measurement. In the 1970s housing prices were spiking, so they removed that from the CPI. In the 1990s they added hedonic adjustments and substitutions, which means that if the price of restaurant food goes up 50% and the price of fast food goes up 10%, and everybody stops eating restaurant food and instead eats fast food, the CPI might give an inflation number of -30%. Every food item went up in price, but since total spending on food went down, the BLS will claim there was deflation.

I mean, just imagine if they calculated inflation in the 1990s during the housing boom like they used to in the 1970s. Do you really think you would get an annual inflation rate of 2%? Of course not.
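To make the substitution point concrete, here's a deliberately extreme toy sketch (all prices invented, and this is not how the BLS actually chains the index); it just shows how a spending-based number can fall even when every individual price rises:

```python
# Toy substitution example: every price rises, yet per-meal spending falls.
restaurant_before, restaurant_after = 10.00, 15.00   # +50%
fastfood_before,   fastfood_after   = 5.00,  5.50    # +10%

# Fixed-basket view: price the original (all-restaurant) basket at the new prices.
fixed_basket_change = restaurant_after / restaurant_before - 1

# Spending view: consumers switch entirely to fast food after the price jump.
spending_change = fastfood_after / restaurant_before - 1

print(f"Fixed basket:          {fixed_basket_change:+.0%}")  # +50%
print(f"Actual spending/meal:  {spending_change:+.0%}")      # -45%
```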
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
http://www.pcgamer.com/nvidia-pascal-rumors-point-to-three-gp104-gpus-in-june/

^ 3 GP104 SKUs.

Recall the 680, 670 and 660Ti.

The rumor suggests the non-cut GP104 will take the flagship moniker like the 980 Ti did, while the first cut gets the 980-replacement moniker and the 2nd cut gets the 970-replacement moniker.

This makes sense, as a new node may not yield as well, so full dies are a rarity and will go for a nice premium $$.

Just as I have said over and over. And, most likely, the only one that will be worth its weight in overclocking will be the one with GDDR5X. It'll depend on whether or not 8GHz GDDR5 can overclock a full 1GHz like 7GHz GDDR5 can, but I have my doubts. Expect prices to be $699 for Titan X + 25-30% performance, $549 for 980 Ti + 10%, and $379-399 for 980 Ti - 5-10%. 30% faster than Titan X at 4K is right in line with 70% faster than GM204, which is pretty much in line with what the GTX 680 was over the GTX 560 Ti and the GTX 980 was over the GTX 680. I'd be pleasantly surprised (in a sad sort of way) if pricing is $649, $499, and $349-379. Either way, perf/$ goes up vs. the 980 Ti, but so does the graphics card segment occupied by 300mm^2 GPUs.

The one good thing about Pascal is that it looks like GP106 will be ready for consumers soon, since the chip has A) been pictured on Drive PX and B) Drive PX will surely take many, many more months of validation on the software and debugging side. I also strongly believe that if GP106 is nearly ready, then so is the high-volume, laptop-friendly GP107. Nvidia will have a "full" lineup out relatively quickly. The mythical GP102 (I'm guessing exactly 50% more GPU than GP104, so 450mm2) will come early next year as the Titan, then Nvidia will refresh their lineup with a cut-down GP102 as the new Ti part.

This stuff is so easy to predict, even the coming GP104 being the new 1080 Ti.
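Taking those rumored prices and performance deltas at face value, the perf/$ comparison is straightforward arithmetic; the $649 980 Ti baseline and treating the Titan X as roughly equal to a 980 Ti are assumptions here, and the whole thing is speculation until the cards launch.

```python
# Perf/$ sketch from the rumored numbers above (all speculative).
# Performance normalized to a 980 Ti = 1.00; "Titan X + ~30%" approximated as ~1.30x.
ref_price, ref_perf = 649, 1.00   # assumed 980 Ti baseline

rumored = {
    "full GP104 (GDDR5X)": (699, 1.30),    # "Titan X + 25-30%"
    "cut GP104":           (549, 1.10),    # "980 Ti + 10%"
    "2nd cut GP104":       (399, 0.925),   # "980 Ti - 5-10%"
}

ref_ratio = ref_perf / ref_price
for name, (price, perf) in rumored.items():
    gain = (perf / price) / ref_ratio
    print(f"{name:21s} ${price}: {gain:.2f}x the perf/$ of a 980 Ti")
```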
 
Last edited:
Feb 19, 2009
10,457
10
76
@tviceman

I agree with you.

There are also rumors that only the uncut chip gets GDDR5X; it will be able to stretch its legs and get far above the Titan X, I'd say around 20-25%. It is a premium, low-volume part, hence they can go with low-volume GDDR5X before full mass production.

The others, with GDDR5, will be severely bandwidth limited and land around the 980 Ti or a little below.

Polaris 10 isn't a competitor to the full GP104, so it will be priced to compete against the cut GP104, allowing the top GDDR5X part to carry a big price premium as the new halo part.
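The bandwidth argument behind that rumor is simple to check: peak bandwidth is just bus width times per-pin data rate. The 256-bit bus on GP104 and 10 Gb/s for early GDDR5X are assumptions (rumored, not confirmed); the 980 Ti's 384-bit, 7 Gb/s configuration is known.

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gb/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(f"GDDR5  8 Gb/s  on 256-bit: {peak_bandwidth_gbs(256, 8):.0f} GB/s")   # 256 GB/s
print(f"GDDR5X 10 Gb/s on 256-bit: {peak_bandwidth_gbs(256, 10):.0f} GB/s")  # 320 GB/s
print(f"980 Ti 7 Gb/s  on 384-bit: {peak_bandwidth_gbs(384, 7):.0f} GB/s")   # 336 GB/s
```

On those assumptions the GDDR5 cards would actually have less raw bandwidth than a 980 Ti, which is why they'd be expected to land around it rather than above.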
 
Feb 19, 2009
10,457
10
76
Is it confirmed that we are not getting it now?

We basically won't know until the consumer SKU info is released, but there are improvements in the rendering engine that allow priority pre-emption to actually work in Pascal.

As in a priority compute task can halt a current graphics task in flight immediately to start processing.

http://www.theregister.co.uk/2016/04/06/nvidia_gtc_2016/

Software running on the P100 can be preempted on instruction boundaries, rather than at the end of a draw call. This means a thread can immediately give way to a higher priority thread, rather than waiting to the end of a potentially lengthy draw operation. This extra latency – the waiting for a call to end – can really mess up very time-sensitive applications, such as virtual reality headsets. A 5ms delay could lead to a missed Vsync and a visible glitch in the real-time rendering, which drives some people nuts.

By getting down to the instruction level, this latency penalty should evaporate, which is good news for VR gamers. Per-instruction preemption means programmers can also single step through GPU code to iron out bugs.

It will be much better for VR due to this change alone. Basically, if you like NV and you own a Maxwell GPU for VR, you'd better upgrade, because it will reduce that motion-to-photon latency and make your VR experience generally better.

As for actual parallel graphics + compute processing, that is currently unknown. You cannot rule it out entirely, but Pascal does finally have fine-grained pre-emption (something NV's PR claims Maxwell can do but it actually can't, much like async compute -_-).
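To put the Register's 5ms figure in context, here's a quick frame-budget sketch; 90 Hz is the usual Rift/Vive refresh rate, and the sub-millisecond figure for instruction-level pre-emption is an assumption for illustration, not a published number.

```python
# How much of a VR frame budget a pre-emption stall eats (simple arithmetic).
refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz   # ~11.1 ms per frame at 90 Hz

stalls_ms = {
    "draw-call pre-emption (worst case above)": 5.0,
    "instruction-level pre-emption (assumed)":  0.1,
}

for name, stall in stalls_ms.items():
    share = stall / frame_budget_ms
    print(f"{name}: {stall:.1f} ms = {share:.0%} of an {frame_budget_ms:.1f} ms frame")
```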
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
You guys are overthinking this whole async compute thing ...

All you need is a "dedicated compute unit" like GCN's ACEs to enable "async compute", much like how we have "DMA engines" to handle "async copy". You only need to create a circuit for these specialized units dedicated to a specific function like async compute ...
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
If NV has a GP102 in the works, would it make sense for them to release GP104 and its cut-down versions as the 1080 Ti, 1080, and 1070, and then, when GP102 is ready, release the full chip as Quadro cards first to maximize profits while yields are low, using the cut-down version for the first Titan release? After that, once yields improve, they could turn around and release a Titan with the full chip as the Titan Ti or whatever they come up with.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
This year's releases really are looking to be less than exciting :'(. The bright side is that a lot of us who bought 980 Tis last year probably won't be upgrading until 2017 with Vega/GP102 (if it exists). More money for other hobbies this year, at least.

Unless they stop optimizing for Maxwell?

So, are the latest rumors that the *80 ti is now going to be a 104 chip? Sorry, with the time zone difference I end up getting behind on current posts.
 

coercitiv

Diamond Member
Jan 24, 2014
6,402
12,862
136
The core inflation rate is 1-2%, but it is being puffed up by food and energy inflation, which is growing at a faster rate and is not factored into the consumer price index due to the government's flawed measure of inflation. Food, rent and energy are more expensive.
Thank you and alcoholbob for the extra info. Although I'm still tempted by the subject, I'll mute myself to avoid the OT.
 