NVIDIA Pascal Thread


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Even if there are ROPs, there won't be a GP100 Titan at 610mm², even severely cut. It'd be waaaay too expensive. Maybe a Quadro, but that would be several K.

The exact same thing was said of every 500mm²+ Tesla/Quadro chip at every new NV generation launch. It's too expensive right now, but sooner or later NV will want to recoup the cost of GP100's R&D and/or deal with all the non-fully-yielding dies. What are they going to do with all those leftovers? Throw them out?

Also, no one seems to have addressed that each distinct ASIC costs about $300-500M to develop. I am not saying it's not possible, but NV has not done this since G80. The underlying architecture and rough performance of the flagship Quadro/Tesla cards and GeForce, minus certain GPU clock/memory clock and TDP adjustments, have been very similar since G80, i.e. for almost 10 years of NV GPU history. History is against the theory that NV will design a 3840 CUDA core GP100 but then have a 5120-6144 CUDA core GP102. IMO, those are Volta specs. If these theories are true, I'll be mighty impressed with Pascal.
 

jpiniero

Lifer
Oct 1, 2010
14,841
5,456
136
The exact same thing was said of every 500mm²+ Tesla/Quadro chip at every new NV generation launch. It's too expensive right now, but sooner or later NV will want to recoup the cost of GP100's R&D and/or deal with all the non-fully-yielding dies. What are they going to do with all those leftovers? Throw them out?

Unknown, but maybe their really huge HPC customers will take them (at a slight discount, but still like $10k+).
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
The exact same thing was said of every 500mm²+ Tesla/Quadro chip at every new NV generation launch. It's too expensive right now, but sooner or later NV will want to recoup the cost of GP100's R&D and/or deal with all the non-fully-yielding dies. What are they going to do with all those leftovers? Throw them out?

Also, no one seems to have addressed that each distinct ASIC costs about $300-500M to develop. I am not saying it's not possible, but NV has not done this since G80. The underlying architecture and rough performance of the flagship Quadro/Tesla cards and GeForce, minus certain GPU clock/memory clock and TDP adjustments, have been very similar since G80, i.e. for almost 10 years of NV GPU history. History is against the theory that NV will design a 3840 CUDA core GP100 but then have a 5120-6144 CUDA core GP102. IMO, those are Volta specs. If these theories are true, I'll be mighty impressed with Pascal.

Word on the street is that there is already a six-month allocation of P100s, and one of the customers hoarding them right now is Google.

I'm sure we will see cut-down versions of the DGX-1 (thinking these will come in Q1'17).
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That number is not even close to reality.

AMD has quoted the same kind of figures, and these sources confirm it:

"While initial estimates for new chips at 16/14nm have ranged as high as $300 million....how internal costs are assigned by companies to amortize them across departments. The cost of moving to a new process node can be huge. Qualcomm, which reported revenue of $24.9 billion in fiscal 2013, said the price tag is $2 billion. "
http://semiengineering.com/the-real-numbers-redefining-nre/

"Huang said that thousands of engineers have been working on the Pascal GPU architecture," noted Tim Prickett Morgan, coeditor of our sister site The Next Platform. "The effort, which began three years ago when Nvidia went 'all in' on machine learning, has cost the company upwards of $3 billion in investments across those years. This is money that the company wants to get back and then some."
http://www.theregister.co.uk/2016/04/06/nvidia_gtc_2016/

If NV has 5 distinct designs, say GP100, GP102, GP104, GP106 and GP107, that's roughly $3B / 5 = $600 million per chip. I realize they didn't design them in isolation, but people expecting a 5120-6144 CUDA core GP102 sound more like they are very disappointed with GP100's projected performance (3840 CCs @ 1.48GHz) than grounded in any actual facts/evidence.

Thus far there is no indication whatsoever that GP104 will have 4096 CUDA cores or that GP102 will have 5120-6144 CUDA cores. Until further information leaks, we have 10+ years of NV GPU history with Tesla/Quadro and flagship GeForce, and that history tells us NV's largest and fastest Tesla/Quadro/GeForce chips were largely the same silicon. Until leaks contradict that history, it suggests the GeForce GP102 will be a variant of GP100 with different GPU/memory clocks to raise gaming performance.

Extrapolations of a 6144 CUDA core 1080Ti at 1.5GHz sound like a made-up fantasy based on those recent leaks that literally doubled everything from Maxwell and called it a day. Looking back at G80, GTX 280/285, GTX 480/580 and GTX 780Ti, the fastest Tesla/Quadro and the fully unlocked flagship GeForce were based on roughly the same silicon with minor adjustments. There was no mythical gaming chip with 50-60% more performance than the flagship Tesla/Quadro card.

Yet, on this forum people are predicting GP102 to have 60% more CUDA cores than GP100, while retaining similar GPU clock speeds? Ya, so:

6144 CCs × 1480MHz / (3072 CCs × 1075MHz Titan X) ≈ 2.75× the throughput, without even accounting for Pascal's IPC.
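Here's that same napkin math in Python, if anyone wants to play with the numbers. The one big assumption (the same one behind every projection in this thread) is that performance scales linearly with raw shader throughput, i.e. cores × clock, ignoring IPC, bandwidth and drivers:

```python
# Naive shader-throughput comparison: cores x clock and nothing else.
# Core counts and clocks are the rumored figures from this thread, not specs.

def throughput_ratio(cores, mhz, base_cores, base_mhz):
    """Raw FP32 throughput (cores * clock) relative to a baseline card."""
    return (cores * mhz) / (base_cores * base_mhz)

# Rumored 6144 CC GP102 @ 1480 MHz vs. Titan X (3072 CCs @ ~1075 MHz boost)
print(f"{throughput_ratio(6144, 1480, 3072, 1075):.2f}x")  # -> 2.75x
```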

I guess all that waiting for Pascal, and the expectations that built up, is getting to people's heads, huh?

Word on the street is that there is already a six-month allocation of P100s, and one of the customers hoarding them right now is Google.

I'm sure we will see cut-down versions of the DGX-1 (thinking these will come in Q1'17).

I am down with that theory, especially since this already happened with Kepler. Not only did NV release a cut-down Titan, they then released an even more cut-down GeForce based on the same flagship Kepler die.
http://www.anandtech.com/show/6973/nvidia-geforce-gtx-780-review

Even for 25% extra performance (launch data), NV still released a massively cut-down Kepler die.



If you compare 780 to 580 at launch, that's 64% faster. It's only later that NV released a fully unlocked and MUCH higher clocked 780Ti and Titan Black. What's stopping NV from repeating this exact strategy with Pascal since it worked so remarkably well?

How many people upgraded from 670/680 to 780 to 780Ti? The more upgrades for NV, the more $ they make.

Also, if the flagship GP102 has 5120-6144 CUDA cores @ 1.48GHz or so, then not one person in this thread, from what I have seen, has explained how in the world NV will increase that level of performance by another 50-100% with Volta on the same 16nm FinFET. Let me guess: this GP102 is only a 400-450mm² die, and flagship Volta with 8000-9000 CUDA cores will be >600mm²?
 
Mar 10, 2006
11,715
2,012
126
The exact same thing was said of every 500mm²+ Tesla/Quadro chip at every new NV generation launch. It's too expensive right now, but sooner or later NV will want to recoup the cost of GP100's R&D and/or deal with all the non-fully-yielding dies. What are they going to do with all those leftovers? Throw them out?

I think we saw a significant new precedent with the release of GK210. This was an HPC/professional-only part, optimized for those workloads. GM200 was also sold as a "professional" card, but really, it was a high-end gaming card through and through.

At this point, NVIDIA's professional and gaming markets are each so large in terms of revenue that the company really can justify building derivative GPUs optimized for the target workloads.

To provide you with some perspective, "Pro visualization" (i.e. Quadro) is now a $750M+/year business for NVIDIA, and Tesla is now a $330M+/year business. GeForce is a $2.8 billion/year business.

To protect/grow that $2.8 billion/year business, NVIDIA will absolutely build the best, most competitive gaming-focused GPUs that it can. Not doing so would put its most important business segment in serious jeopardy, which is not something even a barely competent management team would do.

At the same time, to serve the large combined $1B+/year Quadro/Tesla market, NVIDIA is clearly investing to make products that are tailored to meet the needs of those customers.

As far as the R&D costs go, the vast majority of the R&D goes into building the fundamental core architecture. Building tweaked/segment-optimized GPUs around that same basic architecture with some bits added/removed isn't all that huge of an expense, at least relative to the market opportunities here.

I think we will see targeted gaming products based on the basic Pascal architecture, just as we are seeing targeted HPC products. It's the sensible thing for NVIDIA to do in order to try to maintain/grow its market share and, by virtue of having competitive products, keep gross profit margins where they are now.
 

Timmah!

Golden Member
Jul 24, 2010
1,463
729
136
AMD has quoted the same kind of figures, and these sources confirm it:

"While initial estimates for new chips at 16/14nm have ranged as high as $300 million....how internal costs are assigned by companies to amortize them across departments. The cost of moving to a new process node can be huge. Qualcomm, which reported revenue of $24.9 billion in fiscal 2013, said the price tag is $2 billion. "
http://semiengineering.com/the-real-numbers-redefining-nre/

"Huang said that thousands of engineers have been working on the Pascal GPU architecture," noted Tim Prickett Morgan, coeditor of our sister site The Next Platform. "The effort, which began three years ago when Nvidia went 'all in' on machine learning, has cost the company upwards of $3 billion in investments across those years. This is money that the company wants to get back and then some."
http://www.theregister.co.uk/2016/04/06/nvidia_gtc_2016/

If NV has 5 distinct designs, say GP100, GP102, GP104, GP106 and GP107, that's roughly $3B / 5 = $600 million per chip. I realize they didn't design them in isolation, but people expecting a 5120-6144 CUDA core GP102 sound more like they are very disappointed with GP100's projected performance (3840 CCs @ 1.48GHz) than grounded in any actual facts/evidence.

Thus far there is no indication whatsoever that GP104 will have 4096 CUDA cores or that GP102 will have 5120-6144 CUDA cores. Until further information leaks, we have 10+ years of NV GPU history with Tesla/Quadro and flagship GeForce, and that history tells us NV's largest and fastest Tesla/Quadro/GeForce chips were largely the same silicon. Until leaks contradict that history, it suggests the GeForce GP102 will be a variant of GP100 with different GPU/memory clocks to raise gaming performance.

Extrapolations of a 6144 CUDA core 1080Ti at 1.5GHz sound like a made-up fantasy based on those recent leaks that literally doubled everything from Maxwell and called it a day.

I think, if there is any GP102 at all, it's going to be around 3840 shaders, 4096 tops. Obviously no FP64 stuff, or only a minimal amount like on GM200.
 
Feb 19, 2009
10,457
10
76
What kind of math is that?

Fully unlocked GP100 is 3840 CCs. Let's assume it comes clocked at 1480MHz with the full 1TB/sec of HBM2 bandwidth. Out of the box, most reference 980Ti cards boost to 1202MHz or so.

3840 × 1480 / (2816 × 1202) ≈ 1.68, i.e. 68% faster.

That's not accounting for:

1) NV's concentrated focus on Pascal driver support, which coincidentally means less focus on Maxwell (*Kepler gen hint hint*). With the architecture mimicking many parts of GCN, it also means GCN PS4/XB1 console ports will run much better on Pascal vs. Maxwell with minimal optimization.

2) increase in IPC of Pascal over Maxwell.

I have no clue where you got your 35% number from. I am pretty sure you have GP104 mixed up with GP100.

FYI, on paper, the performance increase from 780Ti to 980Ti is LESS than going from a 980Ti to GP100, and yet people tout the 980Ti as the greatest thing since sliced bread this gen. Tons and tons of 780Ti users upgraded to 980Ti for WAY less than a 68% boost in performance.

That's what I was talking about.

There's no way for GP100 to be less than 50% faster than GM200 in gaming. The specs alone give it a massive edge already. Add the IPC gains and a new GCN-like layout that takes advantage of next-gen games, and it will be a huge leap in new games.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think we saw a significant new precedent with the release of GK210. This was an HPC/professional-only part, optimized for those workloads. GM200 was also sold as a "professional" card, but really, it was a high-end gaming card through and through.

This is not a good example at all. GK210 is just a GK110 that's severely cut down and clock-speed optimized for perf/watt. There is no magic inside GK210.

"Fitting a pair of GPUs on a single card is not easy, and that is especially the case when those GPUs are GK210. Unsurprisingly then, NVIDIA is shipping K80 with only 13 of 15 SMXes enabled on each GPU, for a combined total of 4,992 CUDA cores enabled. This puts the clockspeed at a range of 562MHz to 870MHz. Meanwhile the memory clockspeeds have also been turned down slightly from Tesla K40; for Tesla K80 each GPU is paired with 12GB of GDDR5 clocked at 5GHz, for 240GB/sec of memory bandwidth per GPU."

So what you are telling me is that GK210, which came after the full-blown GK110, is an example that proves NV will now produce a 5120-6144 CUDA core GP102? If anything, it proves the complete opposite: that NV will release a fully unlocked and even higher clocked 3840 CC GP100, possibly as GP102.

The cut-down 3584 CC GP100 is an exact repeat of the OG Titan.

At this point, NVIDIA's professional and gaming markets are each so large in terms of revenue that the company really can justify building derivative GPUs optimized for the target workloads.

This is just a hypothesis with zero evidence to support it. For the last 10 years NV never made a flagship gaming GeForce that wasn't based on the big flagship die. Whether it's Fermi, Kepler or Maxwell, every single time the biggest chip NV had was also the underlying big GeForce chip. The only difference for the Tesla/Quadro markets was how cut down the SKU was and how much lower its clocks were relative to the GeForce version.

To protect/grow that $2.8 billion/year business, NVIDIA will absolutely build the best, most competitive gaming-focused GPUs that it can. Not doing so would put its most important business segment in serious jeopardy, which is not something even a barely competent management team would do.

So you are suggesting a 1.5GHz, 3840 CUDA core, 1TB/sec, 16GB HBM2 Pascal isn't good enough? You mean to tell us that 68% higher gaming performance at 4K is not good enough now? Add in an IPC increase of ~20% and we could be looking at well over 80% faster. Add in GCN-optimized console ports that will run faster on Pascal, and we could be looking at 90-100% faster.
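For what it's worth, here's how those gains stack multiplicatively, using the speculative figures from above (the 68% comes from the cores × clock math earlier in the thread; the 20% IPC is an assumption, not a measured number):

```python
# Multiplicative stacking of the speculative gains discussed above.
spec_gain = 1.68  # 3840 CCs @ 1480 MHz vs reference 980 Ti (2816 CCs @ 1202 MHz)
ipc_gain = 1.20   # assumed Pascal-over-Maxwell IPC improvement

total = spec_gain * ipc_gain
print(f"{(total - 1) * 100:.0f}% faster")  # -> 102% faster, i.e. "well over 80%"
```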

I think we will see targeted gaming products based on the basic Pascal architecture, just as we are seeing targeted HPC products. It's the sensible thing for NVIDIA to do in order to try to maintain/grow its market share and, by virtue of having competitive products, keep gross profit margins where they are now.

The proposed theories of a 5120-6144 CUDA core, 1.5GHz Pascal would make it 2.25-2.75× faster than the 980Ti. I'll let you think about how realistic that is, as well as how the hell they improve another 50-70% on top of that, on the same node, with Volta.

I guess some people here expected Pascal to beat Maxwell by 2.25-2.75×? That's news to me. The exact same people expressing disappointment with Pascal are the same people who LOVED the 680, 780Ti and 980Ti, but now 80% faster than a 980Ti is crap, so let's start making up a 5120-6144 CUDA core Pascal to raise those expectations back up?

Where is this idea of a 6000+ CUDA core Pascal coming from, when in the last 10 years NV has never done anything like this?

That's what I was talking about.

There's no way for GP100 to be less than 50% faster than GM200 in gaming. The specs alone give it a massive edge already. Add the IPC gains and a new GCN-like layout that takes advantage of next-gen games, and it will be a huge leap in new games.

Well, apparently upgrading from 580 to 680, 680 to 780Ti, and 780Ti to 980Ti was worthwhile, but Pascal possibly beating the 980Ti by 'only' 70% isn't meeting expectations. That's to say nothing of its overclocking headroom, increase in IPC, support for HDR, etc.

I bet the minute the GP104 1080 drops with 'only' 30-35% more performance than a 980Ti, there will be people selling their 980Tis and jumping on it as a stop-gap. Most people who have a 980Ti need to have the fastest, since that's how they roll. If tomorrow NV released a card 25% faster, they would still upgrade.
 

Kris194

Member
Mar 16, 2016
112
0
0
Even x70 will be faster than GTX 980 Ti/Titan X. This was ALWAYS the case, not to mention x80 like you said.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
This is not a good example at all. GK210 is just a GK110 that's severely cut down and clock-speed optimized for perf/watt. There is no magic inside GK210.

Did you read the rest of the way down the article you were quoting?

To that end, while NVIDIA hasn’t made any sweeping changes such as adjusting the number of CUDA cores or their organization (this is still a GK110 derivative, after all) NVIDIA has adjusted the memory subsystem in each SMX. Whereas a GK110(B) SMX has a 256KB register file and 64KB of shared memory, GK210 doubles that to a 512KB register file and 128KB of shared memory. Though small, this change improves the data throughput within an SMX, serving to improve efficiency and keep the CUDA cores working more often. NVIDIA has never made a change mid-stream like this to a GPU before, so this marks the first time we’ve seen a GPU altered in a later revision in this fashion.

These alterations to the register file and on-board RAM cache size are silicon-level changes. They require a new mask, with all the expense that entails. Nvidia clearly felt it was worth it.

For the last 10 years NV never made a flagship gaming GeForce that wasn't based on the big flagship die. Whether it's Fermi, Kepler or Maxwell, every single time the biggest chip NV had was also the underlying big GeForce chip. The only difference for the Tesla/Quadro markets was how cut down the SKU was and how much lower its clocks were relative to the GeForce version.

Nvidia never had to face serious competition from Intel (Xeon Phi) until recently.

So you are suggesting a 1.5GHz, 3840 CUDA core, 1TB/sec, 16GB HBM2 Pascal isn't good enough? You mean to tell us that 68% higher gaming performance at 4K is not good enough now?

Yes, it's good enough now. But it isn't being released now, and won't be released any time soon. All the chips for 2016 are booked up in the ultra-expensive supercomputers Nvidia is selling. Even Tesla P100 cards won't be sold individually until 2017.

And by Q1 2017, GP100 won't be good enough as a gaming/workstation card, since Vega 10 will be coming out, and AMD cards don't have to make the same compromises Nvidia cards do to support FP64. (Look at Hawaii: 123mm² smaller than GK110 on the same node, yet superior in virtually every respect. This is because GK110 had to waste so much die space on its 1/3-rate FP64 support, while AMD uses a more efficient method that lets the same hardware be reused for both functions.)
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
What kind of math is that?

Fully unlocked GP100 is 3840 CCs. Let's assume it comes clocked at 1480MHz with the full 1TB/sec of HBM2 bandwidth. Out of the box, most reference 980Ti cards boost to 1202MHz or so.

3840 × 1480 / (2816 × 1202) ≈ 1.68, i.e. 68% faster.

That's not accounting for:

1) NV's concentrated focus on Pascal driver support, which coincidentally means less focus on Maxwell (*Kepler gen hint hint*). With the architecture mimicking many parts of GCN, it also means GCN PS4/XB1 console ports will run much better on Pascal vs. Maxwell with minimal optimization.

2) increase in IPC of Pascal over Maxwell.

I have no clue where you got your 35% number from. I am pretty sure you have GP104 mixed up with GP100.

FYI, on paper, the performance increase from 780Ti to 980Ti is LESS than going from a 980Ti to GP100, and yet people tout the 980Ti as the greatest thing since sliced bread this gen. Tons and tons of 780Ti users upgraded to 980Ti for WAY less than a 68% boost in performance.

It's a kind of math known as arithmetic, you might have heard of it.

First of all, I'm assuming that the first GeForce version of GP100 would be a cut-down version and not a full one. Why? Because that's the only rational thing to expect at this point.

When Nvidia last released a compute-focused GPU (GK110), it took 3 months to get a cut-down GeForce version out (Titan) and an additional 9 months to get a fully unlocked version out (780 Ti). GK110 was arguably a significantly easier GPU to make than GP100 probably is: it was smaller, came on a more mature node, didn't implement any new, potentially problematic technology (interposer, HBM, NVLink), and used a tried-and-true architecture (Kepler).

So, in other words, it will probably take at least 12 months from the release of P100 until the release of a fully unlocked GP100 in GeForce form. So we're looking at Q1 2018, and at that point the competition is no longer called Vega and GP102*, but rather Navi and Volta, and if you honestly believe a fully unlocked GP100 will be able to compete with Navi and Volta, then you are incredibly naive imho.

Secondly, if you had actually bothered reading the post you replied to, you would have seen that I was not comparing to a reference 980 Ti but rather to aftermarket versions. Why? Because unless you're planning to manually overclock, you would be insane not to pay ~$50 more for ~20% more performance.

So the actual comparison I was making (which imho is the only rational comparison) was a 1480MHz, 3584-core GP100 vs. a ~1400MHz, 2816-core 980 Ti, which gives GP100 roughly a 35% advantage.
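To make the baseline point concrete, here's the same cores × clock arithmetic against both baselines (all figures are the rumored/assumed ones from this discussion):

```python
# Same naive cores-x-clock math, two different 980 Ti baselines.

def ratio(cores, mhz, base_cores, base_mhz):
    return (cores * mhz) / (base_cores * base_mhz)

CUT_GP100 = (3584, 1480)  # rumored cut-down GP100 GeForce config

print(f"vs reference 980 Ti:   {ratio(*CUT_GP100, 2816, 1202):.2f}x")  # ~1.57x
print(f"vs aftermarket 980 Ti: {ratio(*CUT_GP100, 2816, 1400):.2f}x")  # ~1.35x
```

Pick the reference card as your baseline and the headline number balloons; pick the card people actually buy and it shrinks to ~35%.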

*I don't know if the speculation about a GP102 chip is true, but I'm assuming that Nvidia will have a gaming-focused chip of some sort, faster than GP104, to compete with Vega.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
Even x70 will be faster than GTX 980 Ti/Titan X. This was ALWAYS the case, not to mention x80 like you said.
You are stating that based on what? This is the first time Nvidia has one line completely isolated from the other; the HPC cards are now very different from the consumer ones.
Also, we know nothing about 'Maxwell v2.0' yet to draw any conclusion about how much faster it's going to be...
 

jpiniero

Lifer
Oct 1, 2010
14,841
5,456
136
First of all, I'm assuming that the first GeForce version of GP100 would be a cut-down version and not a full one. Why? Because that's the only rational thing to expect at this point.

There isn't going to be a consumer part that uses GP100.
 

steve wilson

Senior member
Sep 18, 2004
839
0
76
Can some kind soul break this down for a mere mortal? As far as I can tell there is no news about GTX 1080 and it's still a guessing game as to when it will be released into the wild?
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
Can some kind soul break this down for a mere mortal? As far as I can tell there is no news about GTX 1080 and it's still a guessing game as to when it will be released into the wild?

Nothing solid from Nvidia yet, only speculation based on P100, which has nothing to do with the consumer cards.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Pascal graphics card to launch at Computex 2016 and enter mass shipments in July

Nvidia is ready to announce its Pascal graphics cards at Computex 2016 from May 31-June 4, with graphics card players including Asustek Computer, Gigabyte Technology and Micro-Star International (MSI) showcasing their reference board products, according to sources from graphics card players.

The graphics card players will begin mass shipping their Pascal graphics cards in July and they expect the new-generation graphics card to increase their shipments and profits in the third quarter, the sources noted.

Nvidia initially plans to reveal GPUs including GeForce GTX 1080 and 1070 at Computex 2016 and has already begun to clear inventory of its existing GPUs to prepare for the next-generation products.

The sources pointed out that the graphics card market continues to see weak demand in the first half and most players' shipments in the second quarter are expected to drop around 10% from the first.

Meanwhile, AMD has prepared Polaris-based GPUs to compete against Nvidia's Pascal; however, the GPUs will be released later than Nvidia's Pascal and therefore the graphics card players' third-quarter performance will mainly be driven by demand for their Nvidia products.

In 2015, worldwide graphics card shipments dropped below 30 million units because of shrinking demand and rapid exchange rate fluctuations, which caused demand from Russia and Latin America to drop sharply. However, with increasing demand from the gaming and virtual reality market, the sources expect high-end graphics cards to see strong performance, helping worldwide graphics card shipments to stay flat on year in 2016.

www.digitimes.com/news/a20160408PD205.html
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Another source:

Korean sources seem to suggest that NVIDIA will unveil their GeForce GTX 1000 series graphics cards soon, as the cards are already in full production. Both the NVIDIA GeForce GTX 1080 and GeForce GTX 1070 (naming not finalized) will be successors to the GeForce GTX 980 and GeForce GTX 970, which were released two years ago in 2014. These graphics cards will be based on the latest Pascal GPU architecture, which was showcased by NVIDIA and which we covered in full detail over here.

www.kbench.com/?q=node/161713
 
Feb 19, 2009
10,457
10
76
Your original OP with the Bitsandchips leak a few months ago is spot on, btw.

Tesla announced first, launching later. GP100 has a ~1200mm² interposer (huge die). All confirmed.

GP104 to be announced at Computex. Availability in Q3 2016.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
GTX X80 - 3072 CUDA cores, with Maxwell updated to the Pascal architecture and higher core clocks.
GTX X70 - 2560 CUDA cores.
 

xpea

Senior member
Feb 14, 2014
449
150
116
This is the first time someone has written that the P100 has no video output, with a source from the field:
http://vrworld.com/2016/04/08/nvidia-mezzanine-nvlink-connector-pictured/

Some manufacturers simply gave up on the idea of calling Pascal GPU architecture – a GPU or GPGPU, but rather called it “CPU”, which in a way, Tesla P100 certainly qualifies (no display outputs, no video drivers, pure compute output). For example, Zoom NetCom was showing its OpenPOWER design called RedPOWER P210, featuring two POWER8 processors and four Tesla P100’s. Their naming for the mezzanine connector? JCPU.


The article also mentions that the P100 mezzanine format is compatible with V100 (Volta) for an easy upgrade:
Given that IBM’s OpenPOWER conference is taking place at the same time as GTC, we searched for more details about the Mezzanine connector and the NVLink itself, and stumbled on quite an interesting amount of details. First and foremost, ever Pascal (Tesla P100) and Volta (Tesla V100) product that utilizes the NVLink will use the same connector, making sure that you have at least one generation of cross-compatibility.
 

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
Actually, if GP104 is 290-300mm², it could have fewer CUDA cores than GM200.
Why?
A 256-bit, 64-ROP card can have 3072 SPs, don't you think?
600mm² on 28nm ≈ 300mm² on 16nm, plus it's 96 ROPs vs 64 ROPs and a 384-bit vs 256-bit bus.
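For reference, that shrink estimate as napkin math, assuming TSMC's claimed ~2× logic-density gain from 28nm to 16FF (an assumption, not measured layout data):

```python
# Rough die-area shrink from 28nm to 16nm FinFET.
DENSITY_GAIN = 2.0  # assumed 28nm -> 16FF logic-density improvement

gm200_area_mm2 = 600  # GM200-class die on 28nm
print(gm200_area_mm2 / DENSITY_GAIN)  # -> 300.0, i.e. ~300 mm^2 on 16nm

# So a 290-300 mm^2 GP104 could in principle fit GM200-class shader counts,
# especially after dropping from 96 to 64 ROPs and from a 384-bit bus to 256-bit.
```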
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Why?
A 256-bit, 64-ROP card can have 3072 SPs, don't you think?
600mm² on 28nm ≈ 300mm² on 16nm, plus it's 96 ROPs vs 64 ROPs and a 384-bit vs 256-bit bus.

Because the front end will be much bigger than GM200's, along with bigger caches, etc. And each CUDA core could have more transistors than in GM200.

Overall you get higher performance than GM200, but not with more cores.
 