[VideoCardz]NVIDIA GP104 and first Polaris GPUs supposedly spotted on Zauba

Page 6 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Yawn, 2016 and the multi-GPU small-die flagship theorists are back in full force, AFTER AMD has commented that they want to move away from budget image and AFTER AMD delayed Fury X2. Considering how we have seen some major AAA games not have SLI/CF support, how no UE4 games support SLI/CF (as far as I am aware), and considering how both AMD/NV just unleashed 596-601mm2 28nm flagship die products for $650, the trends cannot be clearer. What's going to happen is as the 14nm/16nm node matures, AMD/NV will repeat the 28nm node by slowly moving from 275-350mm2 die towards a 600mm2 die over the next 3-5 years. Then, the strategy will restart with 7/10nm node. It may be true that along the way we could see a dual-chip card with mid-range die aka GTX690/7990 style but there is no logical reason to suggest that AMD/NV will stop pushing the die size limits along the node maturity curve.

Perhaps most important of all are consumers' buying trends -- to this date, no dual-chip flagship card from ATi/AMD/NV has ever sold well. Even the OG Titan sold 4X quicker than the GTX690 according to NV (http://www.pcgamer.com/nvidias-surp...e-year-old-gtx-690-in-just-3-months-were-not/). Even when the R9 295X2 cost almost the same as the much slower 980, almost no one cared for the cooler, quieter and way, way faster R9 295X2. Even if the technology and cost allow for a dual mid-range + interposer concoction, consumers have long voted that one large monolithic die, even with far less performance, is preferable to them. It would be a business mistake for both AMD/NV not to see these trends, along with buckets of data going back to the HD4870X2/GTX295 that just underline what I typed above.
I can only conclude that you jumped straight into this thread without reading previous posts by several individuals.

It seems too many can't see past SLI/XFire when they read multi die composite GPU. Discard that thinking.

There are several possible solutions for having a composite GPU function and appear as a monolithic one. One is a central core die with all IP blocks except [geometry, CU, rasterizer and ROPs]. The latter block is replicated as needed. From one of the research papers linked earlier: "the impedance across the interposer is identical to conventional on-chip interconnects".
When you look at a die shot of a GPU, don't you see blocks that have specific functions? How are these connected on a monolithic GPU? Don't we use routing through the die layers? All an interposer does is substitute an off-die interposer connection between the separated IP blocks for the already existing on-die connections in a monolithic design.

Also don't get stuck thinking that the multi die must be composed of identical sub-units. It can be asymmetrical and still have the ability to scale.

You acknowledge that AMD wants to increase ASP and change their image as the low price option. This strategy is the ideal way to achieve this goal.

What if they can offer a big die [600mm^2] equivalent GPU this year? What do you think their ability to price will be?

You see a gradual approach to the 600mm^2 monolithic GPU as a given.

The interposer can be larger than the Fiji sized one. We're acting as if the interposer has active elements. It's a piece of silicon with copper routing. The design problem is an interconnect layout that is able to be done with smaller reticle limited exposures. This was done with Fiji, so they have experience.

In this case, why do you think that 600mm^2 is the limit for a single GPU?

What if they can offer a bigger than 600mm^2 GPU next year? What do you think their ability to price will be?

One of the papers linked had a cost analysis for what looked to be a mature process, [91.7% yield @ 148.5mm^2]. It is cheaper to do multi die than monolithic as you already have an interposer. I think it's safe to assume a greater benefit with a lower yielding new process.
The ability to obtain a greater bin range is icing on the cake.
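
The quoted yield point is enough to run a back-of-envelope comparison. The sketch below (Python, purely illustrative) infers a defect density from the paper's 91.7% @ 148.5mm^2 figure using a simple Poisson yield model; the 594mm^2 "monolithic equivalent" and the model itself are assumptions for illustration, not numbers from the paper.

```python
import math

# Toy Poisson yield model: Y = exp(-D * A).
# Calibrated from the figure quoted above: 91.7% yield at 148.5 mm^2.
# All numbers are illustrative assumptions, not vendor data.

A_SMALL_CM2 = 1.485                      # 148.5 mm^2 expressed in cm^2
Y_SMALL = 0.917                          # quoted yield for that die size
D = -math.log(Y_SMALL) / A_SMALL_CM2     # implied defect density, defects/cm^2

def poisson_yield(area_mm2: float) -> float:
    """Fraction of dies with zero defects at the implied defect density."""
    return math.exp(-D * area_mm2 / 100.0)

mono = poisson_yield(4 * 148.5)          # one 594 mm^2 monolithic die
multi = Y_SMALL                          # each 148.5 mm^2 chiplet tested separately

print(f"implied defect density: {D:.3f} /cm^2")
print(f"monolithic 594 mm^2 yield: {mono:.1%}")   # ~70%: ~3 in 10 big dies scrapped
print(f"per-chiplet yield:        {multi:.1%}")   # bad chiplets discarded individually
```

Under this toy model the big die yields roughly 70%, which is mathematically the same as requiring four 148.5mm^2 chiplets to all be defect-free at once; the practical difference is that with chiplets you discard only the bad sub-die, not the whole expensive part.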
 

bsp2020

Member
Dec 29, 2015
105
116
116
Considering how we have seen some major AAA games not have SLI/CF support, how no UE4 games support SLI/CF (as far as I am aware)

...

Even when R9 295X2 cost almost the same as the much slower 980, almost no one cared for the cooler, quieter and way, way faster R9 295X2.

That's why it is critical that AMD use an interposer to integrate multiple dies. Using an interposer will allow them to build a multiple-die chip that looks like a single-die chip to the programmer.

Congratulations, you just exposed the flaw that I've been trying to point out: In order for this idea to work, the dies used must be designed for the specific purpose of being used as part of a monolithic die. In other words, this can't be used to save on the R&D costs of developing a bigger die because another die has to be created anyway.
Are you saying that no one can develop a distributed shared memory machine because they never had the resources to develop a chip which can have all those CPUs on a single die? As you said, you have to design a die for a specific purpose. You can design your IP in a modular, scalable way so multiple dies can work seamlessly. An interposer will provide a lot of interconnection wires between those dies so that they have enough bandwidth to work as a single die.

It's unlikely that you can just slap two existing dies on an interposer and call it a day, as most people here seem to think. This is especially true now, since even if you could do that, it would require the low-end dies to use HBM2 when acting independently as well, unless they somehow have two separate memory controllers. In other words, this is not happening this year, end of story.

I showed you that even the medium sized 14nm die (~300mm^2) can use the bandwidth of HBM since it would have about the same number of CUs as Fiji. BTW, I don't think anyone here who believed that multiple-die chip is feasible said it will be available this year. Since there is no convincing anyone who has already made up their mind, I'll just leave this discussion with my prediction.
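
The CU-count claim above can be sanity-checked with rough area scaling. Everything in this sketch is an assumption for illustration; in particular the 2.0x logic-density gain from 28nm to 14nm FinFET is a round number, not a foundry figure.

```python
# Rough area-scaling sanity check of the "Fiji-class CU count in ~300mm^2" claim.
FIJI_AREA_MM2 = 596      # Fiji die size at 28nm
FIJI_CUS = 64            # Fiji compute unit count
DENSITY_GAIN = 2.0       # assumed 28nm -> 14nm density improvement (illustrative)

fiji_equiv_area = FIJI_AREA_MM2 / DENSITY_GAIN
print(f"~{fiji_equiv_area:.0f} mm^2 at 14nm for {FIJI_CUS} Fiji-class CUs")
# ~298 mm^2, i.e. a ~300mm^2 14nm die could plausibly carry a Fiji-class
# CU count, and Fiji already saturates HBM1-class bandwidth.
```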

"AMD will introduce 14nm multi-die chip with ~16 billion transistors to the market before NVidia introduces their monolithic ~18 billion transistor pascal product"

I'm not saying that it will happen this year or next year. What I'm saying is that using interposer will make it easier to build more complex products, allowing the company that uses it to introduce more complex product early.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You have missed the point, if done correctly it will work the same and seen the same as a single large GPU.

Ya, we've been waiting for IF for 30 years now for CPUs and GPUs. To this date, no one has been able to create a multi-chip solution that acts seamlessly as a single large chip. Since graphics workloads are parallel in nature and consumers are willing to pay for large-die chips, consumer demand will continue for 520-600mm2 large monolithic chips. Furthermore, even when this IF scenario happens, they could just take 2x large-die chips and put them together with an AIO CLC.

This too. Everyone is ignoring this major point and they just expect this to pop up out of nowhere with no actual evidence that it's happening.

Bingo. AMD/NV already stated that HBM1/2 is going to be reserved for flagship products, with lower and mid-range SKUs getting GDDR5(X). Of course, even the mid-range GDDR5(X) parts will have the capability to easily outperform the 980Ti/Fury X. This means that the idea of multi-chip mid-range chips + interposer is itself incongruent with the statements made by GPU manufacturers. They will use traditional GDDR5(X) with smaller-die chips and interposer+HBM with large-die chips. There isn't even a single rumour about interposer + mid-range die dual-chip cards. Sounds like someone's wild imagination aka Fury X = dual Tonga XTs.

Another point that keeps getting ignored is that for some HPC markets, double precision is still important. The compromises of making a mid-range chip that's great for games and also good for FP64 compute are too great. That means AMD/NV will still have to create 420-600mm2 larger-die chips to address this market segment. Once they do, it stands to reason that these parts can be re-purposed for gaming applications.

Some people here are missing the part that when you take 2 chips, you are wasting a lot of transistor space on redundancy (hardware schedulers, command processor, L2 cache, memory controllers, etc. are all repeated on chip #1 and chip #2). This also impacts perf/watt and aggregate power usage.

GTX970 is a 398mm2 die and 980Ti is as fast as GTX970 SLI but is 'only' a 601mm2 die. Where is the financial data backing up that it would cost less and be more profitable to sell 2x(325-375mm2 mid-range die) vs. a single 600mm2 die clocked higher?

That's why it is critical that AMD use interposer to integrate multiple dies. Using interposer will allow them to build a multiple-die chip that looks like a single-die chip to programmer.

As long as node shrinks are alive, and it's cost effective to manufacture large die chips, there is no reason to do what you said. Once we get to 5nm and silicon is end of the line, they may switch to another material anyway which will allow new ways of scaling/manufacturing large die chips. You are talking about a possibility of this in 10+ years from now, and how does that relate to Pascal/Polaris?
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Ya, we've been waiting for IF for 30 years now for CPUs and GPUs. To this date, no one has been able to create a multi-chip solution that acts seamlessly as a single large chip. Since graphics workloads are parallel in nature and consumers are willing to pay for large-die chips, consumer demand will continue for 520-600mm2 large monolithic chips. Furthermore, even when this IF scenario happens, they could just take 2x large-die chips and put them together with an AIO CLC.



Bingo. AMD/NV already stated that HBM1/2 is going to be reserved for flagship products, with lower and mid-range SKUs getting GDDR5(X). Of course, even the mid-range GDDR5(X) parts will have the capability to easily outperform the 980Ti/Fury X. This means that the idea of multi-chip mid-range chips + interposer is itself incongruent with the statements made by GPU manufacturers. They will use traditional GDDR5(X) with smaller-die chips and interposer+HBM with large-die chips. There isn't even a single rumour about interposer + mid-range die dual-chip cards. Sounds like someone's wild imagination aka Fury X = dual Tonga XTs.

Another point that keeps getting ignored is that for some HPC markets, double precision is still important. The compromises of making a mid-range chip that's great for games and also good for FP64 compute are too great. That means AMD/NV will still have to create 420-600mm2 larger-die chips to address this market segment. Once they do, it stands to reason that these parts can be re-purposed for gaming applications.

Some people here are missing the part that when you take 2 chips, you are wasting a lot of transistor space on redundancy (hardware schedulers, command processor, L2 cache, memory controllers, etc. are all repeated on chip #1 and chip #2). This also impacts perf/watt and aggregate power usage.

GTX970 is a 398mm2 die and 980Ti is as fast as GTX970 SLI but is 'only' a 601mm2 die. Where is the financial data backing up that it would cost less and be more profitable to sell 2x(325-375mm2 mid-range die) vs. a single 600mm2 die clocked higher?



As long as node shrinks are alive, and it's cost effective to manufacture large die chips, there is no reason to do what you said. Once we get to 5nm and silicon is end of the line, they may switch to another material anyway which will allow new ways of scaling/manufacturing large die chips. You are talking about a possibility of this in 10+ years from now, and how does that relate to Pascal/Polaris?
Interposers are now. The connection density [500/mm^2] did not exist before.

It's amazing that you bring back up everything addressed 3 posts ago.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's amazing that you bring back up everything addressed 3 posts ago.

It's amazing how the dual-chip mid-range die theorists keep creating baseless hype for this new GPU strategy every 6 months, in light of the fact that AMD/NV are both shifting away from multi-die designs towards a strategy of low-end, mid-range and large monolithic die chips to address different markets.

- Interposer = expensive
- HBM2 = expensive

It makes no sense to take cutting edge, complex and expensive technology and combine it with mid-range tech (aka mid-range die).

600mm2 14nm FinFET Polaris/Pascal >>>>> 2x325-350mm2 graphics card. This has been the case for every generation up to now. Besides, no one has been able to create a real world product made of 2 separate CPUs/GPUs that are seen as 1 singular cohesive functional unit.

What's mind-blowing is just before Fury X launched, we saw these crazy theories pop up and instead of finally realizing what's actually happening in the GPU market, nope, early 2016 and we are back to the same baseless theories. There should be a separate sub-forum for "AMD is going bankrupt this year" and "mid-range multi-chip flagships this year -- the death of large die monolith chips" where all conspiracy theories can be addressed separately without derailing the main forum.

What's next, make a next gen flagship out of 10x GP107 chips?

Here is what AMD/NV are doing for the next generation -- HBM2 + Large Die:





2018-2020 Volta: Probably the same thing also.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
When Ryan Smith @Anandtech wrote the Polaris article he included this piece.
"To that end the Polaris architecture will encompass a few things: the fourth generation Graphics Core Next core architecture, RTG's updated display and video encode/blocks, and the next generation of RTG's memory and power controllers. Each of these blocks is considered a seperate IP by RTG, and as a result they can and do mix and match various versions of these blocks across different GPUs, such as the GCN 1.2 based Fiji containing an HEVC decoder but not the GCN 1.2 based Tonga. This, consequently, is part of the reason why AMD has always been slightly uneasy about our unofficial naming"



A multi-die on an interposer can be asymmetrical in nature. The sub-units do not have to be identical. Mix IP blocks as necessary. A command processor, cache, display, I/O, etc. in one unit. Rasterizer, geometry, CUs and ROPs in another sub-unit. Duplicate the CU sub-unit. Interposer signal routing substitutes for on-chip routing.
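
Purely as an illustration of the asymmetric composition described above, here is a toy Python data model; every class, block name and count is hypothetical:

```python
# Toy model of an asymmetric multi-die GPU: one "core" die with the shared
# IP blocks, plus N replicated shader dies. Names/counts are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Die:
    name: str
    blocks: list

@dataclass
class InterposerGPU:
    core: Die                          # command processor, cache, display, I/O
    shader_dies: list = field(default_factory=list)

    def add_shader_die(self, cus: int):
        # Scaling = replicating the identical shader sub-unit.
        self.shader_dies.append(
            Die(f"shader{len(self.shader_dies)}",
                ["rasterizer", "geometry", f"{cus}xCU", "ROPs"]))

    def total_cus(self) -> int:
        return sum(int(b.split("x")[0])
                   for d in self.shader_dies
                   for b in d.blocks if b.endswith("xCU"))

gpu = InterposerGPU(Die("core", ["command processor", "L2", "display", "I/O",
                                 "memory controller"]))
for _ in range(4):                     # four identical shader sub-units
    gpu.add_shader_die(cus=16)
print(gpu.total_cus())                 # 64 CUs in aggregate
```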

This paper linked by bsp2020 [http://www.eecg.toronto.edu/~enright/Kannan_MICRO48.pdf] shows cost savings if you are already using an interposer. This will apply for HPC as well. In spite of this, some keep bringing up how much more it will cost. Talk about blinders. It will cost less to assemble a big GPU using smaller components, irrespective of use, HPC or desktop. I guess saving money isn't important for AMD.

AMD can assemble a traditional max-size die GPU [600mm^2] now with this tech, versus waiting till next year for yields to improve or losing many now due to defects. What is the superior position?

Next year they can go past the 600mm^2 traditional monolithic Die limit. Will they be in a superior position?

No one has done this before is the cry. I thought this was a tech forum. We have research papers sponsored by AMD from several years ago arguing for this to happen.

Best go back home to the cave and light our fire for the night.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
That's why it is critical that AMD use interposer to integrate multiple dies. Using interposer will allow them to build a multiple-die chip that looks like a single-die chip to programmer.


Are you saying that no one can develop a distributed shared memory machine because they never had the resources to develop a chip which can have all those CPUs on a single die? As you said, you have to design a die for a specific purpose. You can design your IP in a modular, scalable way so multiple dies can work seamlessly. An interposer will provide a lot of interconnection wires between those dies so that they have enough bandwidth to work as a single die.



I showed you that even the medium sized 14nm die (~300mm^2) can use the bandwidth of HBM since it would have about the same number of CUs as Fiji. BTW, I don't think anyone here who believed that multiple-die chip is feasible said it will be available this year. Since there is no convincing anyone who has already made up their mind, I'll just leave this discussion with my prediction.

"AMD will introduce 14nm multi-die chip with ~16 billion transistors to the market before NVidia introduces their monolithic ~18 billion transistor pascal product"

I'm not saying that it will happen this year or next year. What I'm saying is that using interposer will make it easier to build more complex products, allowing the company that uses it to introduce more complex product early.

It's not the bandwidth that's the issue; it's the cost and yields. There's a reason that it's rumored that Nvidia will only use HBM2 on GP100, and it's not related to the benefits of HBM at all. Face it, it's not realistic to expect this to happen this year. HBM2 won't even truly be ready for a mass-produced product until after the first HBM2 chips release. Call me back in a year, and then you can maybe talk about this possibility.

By the way, how is the issue of it not being crossfire going to be handled? It might require a separate controller chip on the interposer, which again increases costs and makes it even less feasible for a low-end chip. You guys are expecting Rome to be built in a day. Advancement takes time, simple as that.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
A multi-die on an interposer can be asymmetrical in nature. The sub-units do not have to be identical. Mix IP blocks as necessary. A command processor, cache, display, I/O, etc. in one unit. Rasterizer, geometry, CUs and ROPs in another sub-unit. Duplicate the CU sub-unit. Interposer signal routing substitutes for on-chip routing.

...

No one has done this before is the cry. I thought this was a tech forum. We have research papers sponsored by AMD from several yrs ago arguing for this to happen.

Best go back home to the cave and light our fire for the night.

You are taking his quote completely out of context. It has nothing to do with multi-die interposer design like you are talking about.

If you want to discuss crazy theories of how graphics cards will ditch proven and well-understood designs for some insane asymmetrical multi-die mid-range interposer design, go start a new thread in the AMD/NV sub-forum. There is no need to take Pascal/Polaris GPU threads out of context with these wild theories. This is the same garbage we read on Fury X being a dual-chip design and now this non-sense is being regurgitated again for Pascal and Polaris.

Fact is, you provided no proof/logical reasoning at all why AMD/NV should suddenly abandon the large monolithic die strategy considering it has worked so well for them for the last 20+ years. Even AMD has now adopted this strategy full force with 290X and now Fiji.

Everything you described is the complete opposite of simplifying GPU design. Making multi-die graphics card where one die has certain parts and the other has other parts and making them work together as 1 cohesive unit, while overcoming latencies, using complex interposer, having Windows/driver see multiple die as 1 GPU vs. just fabbing a larger die single chip as 14nm/16nm node matures is the definition of complexity.

Again, there is no logic to your idea whatsoever. Even if your idea could work, the industry is moving towards larger SoCs, integrated designs, 3D memory, reduction of latencies and larger die. So really, it's actually YOU with the wild theories that has to convince the rest of the world that it makes sense and thus far it's not convincing at all.

It's even crazier to think that it would be better to take a 600mm2 die and split it into 2 smaller die, asymmetrical units in each of those die because now you are increasing the part's chance of failure. If die 1 has a 5% chance of failure and die 2 has a 5% chance of failure, the chance of failure for the single PCB graphics card would be 1-0.95*0.95 = 9.75%. Your idea not only increases complexity, but it increases RMA costs.
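
The failure arithmetic above generalizes to any die count; a quick sketch (the 5% per-die failure rate is the post's own assumption):

```python
# A card fails if any of its dies fails; p is the assumed per-die failure rate.
def card_failure_rate(p: float, n_dies: int) -> float:
    return 1 - (1 - p) ** n_dies

print(f"{card_failure_rate(0.05, 1):.2%}")  # 5.00% for a single die
print(f"{card_failure_rate(0.05, 2):.2%}")  # 9.75%, matching the figure above
```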

You also didn't address how the hell is it better to take 2 mid-range GDDR5(X) chips and combine them on an interposer vs. a single 500-600mm2 HBM2 design.

Face it, it's not realistic to expect this to happen this year. HBM2 won't even truly be ready for a mass-produced product until after the first HBM2 chips release. Call me back in a year, and then you can maybe talk about this possibility.

By the way, how is the issue of it not being crossfire going to be handled? It might require a separate controller chip on the interposer, which again increases costs and makes it even less feasible for a low-end chip. You guys are expecting Rome to be built in a day. Advancement takes time, simple as that.

A year? Not a chance. Even if AMD/NV could build 2 die on an interposer, they won't be able to solve the OS/driver seeing them as 2 separate chips for SLI/CF. Besides, there are far better ways to increase performance over time such as feeding massive amounts of memory bandwidth to the GPU by stacking the memory directly onto the GPU and pushing the limits of die area as 7nm, 10nm, 14nm/16nm nodes reach maturity.

No one has done this before is the cry.

Yes, because 1000s of engineers at AMD/NV think there is a better way. When over the next 5 years we see 2nd and 3rd generation 14nm/16nm FinFET maturity and die sizes start approaching 780Ti/980Ti/Fury X levels, what's going to be your response? There is sufficient technology in place to continue using proven techniques.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Interposers are now. The connection density [500/mm^2] did not exist before.

It's amazing that you bring back up everything addressed 3 posts ago.

You accuse RS of not reading. Have you not read what has been basically the same thread as this pop up for Hawaii and Fiji? That's his point. Others haven't been reading for the last two years, never mind just this thread.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You accuse RS of not reading. Have you not read what has been basically the same thread as this pop up for Hawaii and Fiji? That's his point. Others haven't been reading for the last two years. never mind just this thread.

The craziest thing is once 2 separate die have to share resources and perform calculations collaboratively with each other, if there is even a hint of latency, cache bottleneck, communication bottleneck, performance falls off a cliff.

This theory was already put to the test in 2007 with non-native quad core Intel CPUs and once the bottlenecks of 2 separate die got exposed, the end result was disastrous.





GPUs handle vastly more instructions in parallel, which means they are far more sensitive to cache and latency bottlenecks like this. This is exactly why NV/AMD keep increasing L2 cache sizes. But hey, let's throw all of that out the window and start reinventing the wheel because of some UofT research paper.
 
Last edited:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
A year? Not a chance. Even if AMD/NV could build 2 die on an interposer, they won't be able to solve the OS/driver seeing them as 2 separate chips for SLI/CF. Besides, there are far better ways to increase performance over time such as feeding massive amounts of memory bandwidth to the GPU by stacking the memory directly onto the GPU and pushing the limits of die area as 7nm, 10nm, 14nm/16nm nodes reach maturity.

Oh, I know. I've just come to accept that they've brainwashed themselves into believing that the technology is there and it's just a matter of implementing it, so I focused on the one fact they can't reasonably counter. They'll probably just move the goalposts again, though.
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
They weren't using this solution earlier because the tech for good results wasn't available, and there wasn't a problem with what they were currently doing. It's only very recently that we are seeing these changes and that the tech is there. Now that we have lots of major players working on 2.5D solutions, it's quite interesting.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
They weren't using this solution earlier because the tech for good results wasn't available, and there wasn't a problem with what they were currently doing. It's only very recently that we are seeing these changes and that the tech is there. Now that we have lots of major players working on 2.5D solutions, it's quite interesting.

Can't wait for you to explain, a year from now, why that didn't work out.
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
The craziest thing is once 2 separate die have to share resources and perform calculations collaboratively with each other, if there is even a hint of latency, cache bottleneck, communication bottleneck, performance falls off a cliff.

This theory was already put to the test in 2007 with non-native quad core Intel CPUs and once the bottlenecks of 2 separate die got exposed, the end result was disastrous.





GPUs handle vastly more instructions in parallel, which means they are far more sensitive to cache and latency bottlenecks like this. This is exactly why NV/AMD keep increasing L2 cache sizes. But hey, let's throw all of that out the window and start reinventing the wheel because of some UofT research paper.

GPUs are designed to hide latencies and maximise throughput, and interposer communication has a much lower latency cost than traditional off-die communication. It's a different situation from C2Q.
 
Feb 19, 2009
10,457
10
76
Very poor example @RS.

That is nothing like interposer and ubump tech.

The result in SC2 is also not related to quad core, since that game is not 4-threaded; it was all IPC and 1-2 threads. I played it back then on several rigs.

In fact, IIRC every single Blizzard game is 1-2 threads.
 
Feb 19, 2009
10,457
10
76
That's a fake slide, they just made that up...

All the info is in the main article:

http://venturebeat.com/2016/01/15/a...o-full-graphics-immersion-with-16k-screens/2/

This is the important point:

We believe we’re several months ahead of this transition, especially for the notebook and the mainstream market. The competition is talking about chips for cars and stuff, but not the mainstream market.

For them to go public and say that they are ahead of NV by at least several months is big; it means they are CERTAIN, and their lead could be even longer. This is why JHH demoed "Pascal" with 980M chips...

This suggests TSMC's 16nm FF was and is fubar for large GPU chips and can only handle small mobile SoCs.

There were rumors of poor yields even for mobile SoCs, and people were discussing problems with that node for GPUs. I didn't jump on board believing that then, but now, with this in the open domain from Raja, this is the real deal. A re-make of Fermi.
 
Last edited:

gamervivek

Senior member
Jan 17, 2011
490
53
91
It's not in their article though. Too much excitement.

Wccf are best at cutting out relevant bits. Polaris 10 and 11 and not 16k screens.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,422
1,759
136
This suggests TSMC's 16nm FF was and is fubar for large GPU chips and can only handle small mobile SoCs.

No, it doesn't. AMD has been traditionally faster at node transitions, they are just better at dealing with immature shitty processes. (In contrast, nVidia have traditionally been better at software, allowing them to get more performance out of mature processes near the tail end of the process cycle.)

Seeing AMD out first with GPUs implies nothing about the relative merits of the processes.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
No, it doesn't. AMD has been traditionally faster at node transitions, they are just better at dealing with immature shitty processes. (In contrast, nVidia have traditionally been better at software, allowing them to get more performance out of mature processes near the tail end of the process cycle.)

Seeing AMD out first with GPUs implies nothing about the relative merits of the processes.

I think this also should silence those who blabber on incessantly about AMD having no R&D budget so they can't compete. They were first out with a next gen graphics API, HBM, and it's looking now like 14nm FF.
 
Feb 19, 2009
10,457
10
76
I think this also should silence those who blabber on incessantly about AMD having no R&D budget so they can't compete. They were first out with a next gen graphics API, HBM, and it's looking now like 14nm FF.

When AMD is first with new tech, it's because they desperately need it, and not a result of their R&D investment (because we all know they don't have any $$ to invest...). Does that sum it up?

When was the last time NV's CEO showcased a FAKE product in recent history? Not too long ago, they were fubar on the node transition then.
 

nvgpu

Senior member
Sep 12, 2014
629
202
81
http://www.3dcenter.org/news/reihen...cht-gp100-gp102-gp104-gp106-gp107-gp10b-gv100

Nvidia already has a top-to-bottom lineup of Pascal referenced in drivers. The competition can barely manage 2 new GPUs.

https://www.zauba.com/import-graphics-processor-integrated-circuits-hs-code.html

GRAPHICS PROCESSOR INTEGRATED CIRCUITS BGLW5
GRAPHICS PROCESSOR INTEGRATED CIRCUITS NT0EH
GRAPHICS PROCESSOR INTEGRATED CIRCUITS 433X3
GRAPHICS PROCESSOR INTEGRATED CIRCUITS 2E73U
GRAPHICS PROCESSOR INTEGRATED CIRCUITS 3R08A
GRAPHICS PROCESSOR INTEGRATED CIRCUITS JM601

But people keep slandering and spreading FUD about Nvidia, the much more successful company, out of jealousy and a need to defend their failing company.
 