Titan X Announced


destrekor

Lifer
Nov 18, 2005
28,799
359
126
Alternatively, if you look at the diagram of the 970's design in the articles exposing the memory issue, perhaps they could leave all the blocks of L2 cache and memory controllers enabled and just disable an SM or two.

That's the absolute best approach.

I'm curious though if they have to in some cases.

One of the driving forces behind the release of cut-down parts of any kind is manufacturing defects: it's a way to capitalize on the fact that no fab gets 100% perfect dies from every wafer, every time. I don't think any fab process, even Intel's, is that perfect.

Just as some defective SMMs mean a die cannot be used for the flagship release, parts of the ROP/L2 partitions can presumably be defective too. Through some combination of both, they decide that certain dies are destined to ship as a lesser variant; they may disable some units intentionally while others are genuinely defective, and use common denominators to reach a design target so they can utilize as close to 100% of the silicon as possible, increasing revenue potential and minimizing losses.

It appears as if the 970M simply has that last ROP/L2/MC partition completely disabled, whereas on the 980M the entire ROP/L2/MC section is 100% enabled but 4 SMMs are disabled.

I'm curious if it goes like this:
980 = perfect GM204
980M = up to 4 defective SMMs, all else perfect
970 = up to 3 defective SMMs, up to one ROP/L2 segment disabled, shared link can be added
970M = up to 6 SMMs defective, up to two ROP/L2/MC chains defective (which may or may not reside in the same ROP/L2/MC partition), no shared links added

If one SMM is defective on a GM204 and all else is perfect, I bet they disable 3 more and ship it as a 980M. Depending on market needs, they COULD disable a ROP/L2 segment and ship it as a 970 variant. But I bet what is actually defective is the first factor that determines the future of any individual die. From there, they may further disable additional segments to match the pre-defined cut-down specs, based on market demand and the number and type of defects. Say an L2 block is screwed up: they could make it a 970 or a 970M, depending on need. If SMMs are defective, they probably just disable additional things based on which chip has stronger demand.
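Purely to illustrate the kind of sorting logic I mean (the SKU thresholds below just restate my guesses above; nothing here is published by Nvidia), a rough sketch:

Code:
# Hypothetical sketch of the die-sorting logic described above.
# The per-SKU defect thresholds are speculation, not published specs.

def bin_gm204(defective_smms: int, defective_rop_l2_partitions: int) -> str:
    """Pick the best SKU a GM204 die could still ship as, given its defects."""
    if defective_smms == 0 and defective_rop_l2_partitions == 0:
        return "GTX 980"       # fully functional die
    if defective_smms <= 4 and defective_rop_l2_partitions == 0:
        return "GTX 980M"      # up to 4 SMMs fused off, memory system intact
    if defective_smms <= 3 and defective_rop_l2_partitions <= 1:
        return "GTX 970"       # one ROP/L2 segment off, shared MC link added
    if defective_smms <= 6 and defective_rop_l2_partitions <= 2:
        return "GTX 970M"      # last ROP/L2/MC partition disabled entirely
    return "scrap / salvage"

# e.g. a die with one bad SMM and one bad L2 slice could ship as a 970
# (or a 970M, whichever SKU demand calls for at the time)
print(bin_gm204(1, 1))  # -> "GTX 970"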

How high the parts can clock is of course another variable that determines how a chip is binned, which should go without saying but I'll say it anyway.

I highly doubt any of it started on a whim to simply disable parts, but the decision to add the shared link between two MCs on the 970 seems to be a mistake they likely won't make again. Otherwise, I suspect all of the cut-down parts simply started from a desire to make use of as close to 100% of the silicon produced. That is surely the standard for all such products, be it Nvidia, AMD, or Intel.
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
Well, if Nvidia were to drop the 980 price a bit (like $50), a Titan X at $999 would not conflict with that price point, as it would be up to customers to decide between SLI performance and something more stable, possibly with slightly better DP performance than the Titan Black. Though I fear $1350 will be the price point, as that could point towards Nvidia raising prices in general. I know GM200 is a bigger chip than GK110, but 28nm has matured and wafers have come down in price IIRC, and at 600mm² (1.53 x 398mm² based on transistor count) it is not that much bigger. I think Nvidia will release a 985/990 GM200, probably in the summer, to counteract AMD's 300 series at $550 and $700 (the former being cut down and the latter the full GM200 with 6GB VRAM and severely cut-down DP performance), and by then the 980 will be $400 (with a very small 970 price drop) to cover the remaining price points.

But why would Nvidia drop the price of the 980? The Titan X will likely be released in a week or two, whereas AMD has essentially publicly announced they have no product that can compete with the 980 until June.

If the 970 is $329, the GTX 980 is $549, and two 980s are $1098, there's no reason for a Titan X to be $999, since this would destroy SLI 980 sales. That's why $1350 is a convincing price point for the Titan X. AMD is presenting no pricing pressure against the 980 for the next 3 months, so there's no reason for Nvidia to reduce its revenue on that SKU right now.
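As an aside, the die-size estimate quoted above works out like this (back-of-the-envelope scaling; the transistor counts are the commonly reported figures for GM204 and GM200, and linear area scaling by transistor count is an assumption, not a measurement):

Code:
# Rough GM200 die-area estimate behind the quoted "1.53 x 398mm2" figure.
gm204_area_mm2 = 398.0        # GTX 980 (GM204) die area
gm204_transistors = 5.2e9     # commonly reported GM204 transistor count
gm200_transistors = 8.0e9     # commonly reported GM200 transistor count

scale = gm200_transistors / gm204_transistors    # ~1.54
gm200_area_mm2 = gm204_area_mm2 * scale          # ~612 mm^2, i.e. roughly 600mm2
print(f"scale = {scale:.2f}, estimated GM200 area = {gm200_area_mm2:.0f} mm^2")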
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
But why would Nvidia drop the price of the 980? The Titan X will likely be released in a week or two, whereas AMD has essentially publicly announced they have no product that can compete with the 980 until June.

If the 970 is $329, the GTX 980 is $549, and two 980s are $1098, there's no reason for a Titan X to be $999, since this would destroy SLI 980 sales. That's why $1350 is a convincing price point for the Titan X. AMD is presenting no pricing pressure against the 980 for the next 3 months, so there's no reason for Nvidia to reduce its revenue on that SKU right now.

That seems to be a backward approach to it.

The 970 is cheaper than the 980, and the 980 is cheaper than 970 SLI, but plenty buy each type, the single GPU or the SLI.

When the SLI setup is more expensive than the single card (which would be the case with a $999 Titan X), then even though the SLI performs better, it costs more, and it is still SLI. Some people prefer single-GPU setups, and some just don't want to spend the extra money.
The Titan X will not outperform a 980 SLI setup. So if the Titan X is more expensive than 980 SLI, only the people who stand by single GPUs, or who want the best of the best and/or have a ton of spending money, will buy it. If the Titan X costs less than 980 SLI, it still sits in that general spending bracket, but more people will buy it.

In short - it being MORE expensive than a 980 SLI setup means that 980 SLI will definitely cut into sales more than it would if the Titan X were cheaper than the SLI setup.

This will be more obvious once a fully-enabled GM200 is released as a GeForce card, likely around $700-800, and a 980 drops to say, $450. Some people would rather have one card, and they can justify it even more when it is cheaper than the SLI setup. When such a card costs more than the SLI setup, and that SLI setup can easily outperform it, it becomes much harder to argue against the SLI option. At that point, only those who vehemently oppose SLI, or who have a lot of spending money and plan on going SLI with the more expensive card anyway, will ultimately buy those cards.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Maximum PC posted some photos of the Titan X. The matte black textured finish on the card is odd - it almost looks like the cooler shroud is made of plastic. But that would be way out of place, since the GTX 980/970 reference cards have a cooler with a metal shroud. I can't see them giving a less premium cooler to their flagship. Maybe it's just some weird kind of anodization.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Maximum PC posted some photos of the Titan X. The matte black textured finish on the card is odd - it almost looks like the cooler shroud is made of plastic. But that would be way out of place, since the GTX 980/970 reference cards have a cooler with a metal shroud. I can't see them giving a less premium cooler to their flagship. Maybe it's just some weird kind of anodization.

It looks to simply be a black powder coat finish.
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
That seems to be a backward approach to it.

The 970 is cheaper than the 980, and the 980 is cheaper than 970 SLI, but plenty buy each type, the single GPU or the SLI.

When the SLI setup is more expensive than the single card (which would be the case with a $999 Titan X), then even though the SLI performs better, it costs more, and it is still SLI. Some people prefer single-GPU setups, and some just don't want to spend the extra money.
The Titan X will not outperform a 980 SLI setup. So if the Titan X is more expensive than 980 SLI, only the people who stand by single GPUs, or who want the best of the best and/or have a ton of spending money, will buy it. If the Titan X costs less than 980 SLI, it still sits in that general spending bracket, but more people will buy it.

In short - it being MORE expensive than a 980 SLI setup means that 980 SLI will definitely cut into sales more than it would if the Titan X were cheaper than the SLI setup.

This will be more obvious once a fully-enabled GM200 is released as a GeForce card, likely around $700-800, and a 980 drops to say, $450. Some people would rather have one card, and they can justify it even more when it is cheaper than the SLI setup. When such a card costs more than the SLI setup, and that SLI setup can easily outperform it, it becomes much harder to argue against the SLI option. At that point, only those who vehemently oppose SLI, or who have a lot of spending money and plan on going SLI with the more expensive card anyway, will ultimately buy those cards.

Anyone who knows how to read a frame latency graph knows that a card 50% faster than a single GTX 980 blows away SLI 980, and anyone with the visual acuity to appreciate the reduced latency will feel the difference. Especially at higher resolutions, as the bandwidth overloads the SLI interface.
 

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
This will be more obvious once a fully-enabled GM200 is released as a GeForce card, likely around $700-800, and a 980 drops to say, $450.

Titan=full on GM200=GeForce. Titan was never not a GeForce product.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
http://www.anandtech.com/show/8935/...cting-the-specs-exploring-memory-allocation/2

Ryan already explained it there.

Nvidia tried something new with the GTX 970, but unfortunately people are emotional and don't think with their brains, so yeah, whatever. No doubt Nvidia won't do it again; too much grief from ignorant people returning perfectly fine, working cards.

At least your username is appropriate. You tell people to "think with their brains," yet yours is ruled by bias. If you think that lying to customers is fine, then I really don't care about anything you have to say. All that they had to do was explain that from the beginning and this issue would have only resulted in some minor groaning. None of you Nvidia white knights seem to understand why people are upset at all...
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
At least your username is appropriate. You tell people to "think with their brains," yet yours is ruled by bias. If you think that lying to customers is fine, then I really don't care about anything you have to say. All that they had to do was explain that from the beginning and this issue would have only resulted in some minor groaning. None of you Nvidia white knights seem to understand why people are upset at all...

Well, if you think of it from Nvidia's perspective, they beat a $549 R9 290X at release with a $329 GTX 970. They probably thought they were providing such great value that lying a little about specs to cut corners and make a little more profit would be excusable.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Well, if you think of it from Nvidia's perspective, they beat a $549 R9 290X at release with a $329 GTX 970. They probably thought they were providing such great value that lying a little about specs to cut corners and make a little more profit would be excusable.

Except that it wouldn't have changed the benchmarks. It would have barely affected sales, if at all, once you consider the people who decide to skip the card after hearing that they lied. Heck, it might have even driven up 980 sales a bit. I just can't fathom a situation where risking consumer trust was worth it to fudge a couple of numbers. It could even be turned around with a $400-430 version with 6GB all running at full speed. I just don't get it.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Titan=full on GM200=GeForce. Titan was never not a GeForce product.

Fair point. I forgot they even used the "GTX" brand for Titan.

Point still stands, however. There is a clear distinction in price and feature set and target demographic between the Titan brand and the numbered cards.

Also, nobody knows if Titan X is a full GM200, if that is what you were saying there. It could have an SMM or two disabled so that a better Titan can launch later, one that would also have higher clockrates. We'll see if they do that, and what they do regarding any cut-down parts for any 985 or 990 parts.
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
Except that it wouldn't have changed the benchmarks. It would have barely affected sales, if at all, once you consider the people who decide to skip the card after hearing that they lied. Heck, it might have even driven up 980 sales a bit. I just can't fathom a situation where risking consumer trust was worth it to fudge a couple of numbers. It could even be turned around with a $400-430 version with 6GB all running at full speed. I just don't get it.

Yet another thread getting derailed with the 970 memory "issue"... Can't you guys keep it to one thread?
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Fair point. I forgot they even used the "GTX" brand for Titan.

Point still stands, however. There is a clear distinction in price and feature set and target demographic between the Titan brand and the numbered cards.

Also, nobody knows if Titan X is a full GM200, if that is what you were saying there. It could have an SMM or two disabled so that a better Titan can launch later, one that would also have higher clockrates. We'll see if they do that, and what they do regarding any cut-down parts for any 985 or 990 parts.

Then we can have the Titan X Black. Except that the shroud is already black... Dang.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Then we can have the Titan X Black. Except that the shroud is already black... Dang.

They could roll with Titanium (possibly redundant?), Platinum, Gold, X², etc. They can get creative.

The black shroud is quite nice looking; I like the flat coat and the simplicity.

As someone who might consider such a card down the road, I'll definitely be waiting for any potential 985 Ti or similar, one that has the full GM200, potentially minus Titan-level double-precision capability (much like the 780 Ti compared to the Titan Black).
 

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
http://www.anandtech.com/show/8935/...cting-the-specs-exploring-memory-allocation/2

Ryan already explained it there.

Nvidia tried something new with the GTX 970, but unfortunately people are emotional and don't think with their brains, so yeah, whatever. No doubt Nvidia won't do it again; too much grief from ignorant people returning perfectly fine, working cards.

I assure you that false advertising is the absolute opposite of something new. Returning a product because Nvidia lied about what it was is not being ignorant. Being smug and pretending nothing wrong was done is how Nvidia is going to lose customers.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
Nvidia tried something new with the GTX 970,

It's not new; the 660 Ti had an asymmetrical memory configuration, and the 550 Ti had this as well.

The best case scenario is always going to be that the entire 192bit bus is in use by interleaving a memory operation across all 3 controllers, giving the card 144GB/sec of memory bandwidth (192bit * 6GHz / 8). But that can only be done at up to 1.5GB of memory; the final 512MB of memory is attached to a single memory controller. This invokes the worst case scenario, where only 1 64-bit memory controller is in use and thereby reducing memory bandwidth to a much more modest 48GB/sec.
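For reference, the bandwidth arithmetic in that quote is easy to reproduce (effective per-pin data rate in Gbps times bus width in bits, divided by 8 bits per byte):

Code:
# Memory bandwidth from the figures in the quoted passage:
# bus width (bits) * effective data rate (Gbps per pin) / 8 bits per byte.

def bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8.0

print(bandwidth_gb_per_s(192, 6.0))  # 144.0 GB/s -- all three controllers interleaved
print(bandwidth_gb_per_s(64, 6.0))   # 48.0 GB/s -- the last 512MB on a single controller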
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
That's the absolute best approach.

I'm curious though if they have to in some cases.

One of the driving forces behind the release of cut-down parts of any kind is manufacturing defects: it's a way to capitalize on the fact that no fab gets 100% perfect dies from every wafer, every time. I don't think any fab process, even Intel's, is that perfect.

Just as some defective SMMs mean a die cannot be used for the flagship release, parts of the ROP/L2 partitions can presumably be defective too. Through some combination of both, they decide that certain dies are destined to ship as a lesser variant; they may disable some units intentionally while others are genuinely defective, and use common denominators to reach a design target so they can utilize as close to 100% of the silicon as possible, increasing revenue potential and minimizing losses.

It appears as if the 970M simply has that last ROP/L2/MC partition completely disabled, whereas on the 980M the entire ROP/L2/MC section is 100% enabled but 4 SMMs are disabled.

I'm curious if it goes like this:
980 = perfect GM204
980M = up to 4 defective SMMs, all else perfect
970 = up to 3 defective SMMs, up to one ROP/L2 segment disabled, shared link can be added
970M = up to 6 SMMs defective, up to two ROP/L2/MC chains defective (which may or may not reside in the same ROP/L2/MC partition), no shared links added

If one SMM is defective on a GM204 and all else is perfect, I bet they disable 3 more and ship it as a 980M. Depending on market needs, they COULD disable a ROP/L2 segment and ship it as a 970 variant. But I bet what is actually defective is the first factor that determines the future of any individual die. From there, they may further disable additional segments to match the pre-defined cut-down specs, based on market demand and the number and type of defects. Say an L2 block is screwed up: they could make it a 970 or a 970M, depending on need. If SMMs are defective, they probably just disable additional things based on which chip has stronger demand.

How high the parts can clock is of course another variable that determines how a chip is binned, which should go without saying but I'll say it anyway.

I highly doubt any of it started on a whim to simply disable parts, but the decision to add the shared link between two MCs on the 970 seems to be a mistake they likely won't make again. Otherwise, I suspect all of the cut-down parts simply started from a desire to make use of as close to 100% of the silicon produced. That is surely the standard for all such products, be it Nvidia, AMD, or Intel.

So that must be how it used to work. There seems to be this new theory that the reason the original Titan was cut down was just to milk the market. It had nothing to do with an extremely large chip on a brand-new node. Nothing to do with limited 28nm wafers. It's just because Nvidia wanted to milk the market...

Kind of like how they did it with Fermi and the GF100...

Yeah, makes a lot of sense. /sarcasm

I don't care much for the conspiracy theories. Your post is actually spot on. I think the only way we see a cut-down Titan X is if there is a yield/power issue and Nvidia is forced to in order to have enough sellable chips. Hopefully the 28nm node is mature enough that there won't be a need to cut the Titan X down, but this is still uncharted territory. It will be pushing the boundaries, as no other chip has ever been so large. I also expect that GM200 will have a greater transistor density per mm^2. This adds another variable to the mix.

Personally, I feel like this will be the full GM200 this time, and that the cut-down GM200s will serve as the GTX version.

As for price, it all depends on how many Nvidia plans to sell. I don't think people realize that their entire business is run by these projections. They have a target in mind and they price accordingly. If a GPU is meeting or exceeding expected sales, the price either stays the same or goes up. If they aren't meeting projections, the price goes down, and if they can't make up the volume, their quarterly guidance gets adjusted.

Nvidia will price the Titan X depending on how many they intend to sell. If it's priced higher than the last Titan, then they expect it to be a lower-volume product. This would be due to capacity and reasonable expectations. Say only 50% of chips can make it as full GM200s; considering there aren't that many candidates per wafer to begin with, it could very well be intended as a low-volume product. Nvidia can then save up the defective chips and, when they have enough stockpiled, sell them as a cut-down GTX model.

That is the more reasonable path Nvidia took with the original GK110. The new theory that Nvidia just cut down the chip to milk the market is beyond ridiculous to me. The original Titan was cut down because of many issues, such as yields and wafer availability. As time passed and stockpiles of chips not capable of making a Titan built up, Nvidia released the first GTX GK110, the GTX 780. All the while, they were working to better produce their large chip on the 28nm node. By the time AMD launched the 290X, Nvidia was able to release a full GK110 and produce it in volume.

Just look at the power consumption of the 780 Ti and it becomes obvious that a lot of work was put into improving GK110. It was a full chip that ran at those speeds, yet drew almost the same power as the cut-down GK110s before it. That in itself is telling.

Regardless, there are other generations that played out just the same way, and in those it would be crazy to imagine that Nvidia released cut-down chips just to milk the market. Why is there this new insistence that Kepler was somehow different and was planned all along to be slow-rolled out? Big chips are very problematic, and we saw that to be the actual truth with Fermi. But somehow, all of a sudden with Kepler, this whole new theory is born. I think it is nonsense.

I still believe that large chips are prone to more issues than smaller chips. Even so, I am hopeful that Nvidia has had enough time and experience with the 28nm node that even this record-breaking large-die GM200 will be suitable for launching full, uncut Titan X chips. That is where I am leaning, and I expect the stockpiles of defective dies will lead us to a GTX GM200 a few months down the road.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I'm expecting what we saw with GK110. Titan X will have one disabled SM, then we see another card with 3 disabled SMs and then we see the full chip in another card. Released in that order.

I think this is unlikely. The reason why the original Titan had a disabled SMX module is that early yields of the GK110 were low. Remember, this card was released back in February 2013 - more than two years ago. The TSMC 28nm process was still not fully mature, at least for dies that big. Nvidia needed all their fully-functional GK110 chips for much more lucrative Tesla cards.

In contrast, the GM200 appears not to have been designed as a compute-first chip; as other posters mentioned, it doesn't look like there is a Maxwell Tesla on Nvidia's roadmap. Furthermore, the TSMC 28nm process is extremely robust by now - it ought to be, since it's over three years old. Yields will likely be fairly high.

My guess is that the handful of highest-binned GM200 chips (best performance, lowest power usage) get put into the Quadro M6000 when that's released; the remaining non-defective chips will go into the Titan X; and those chips which need a module disabled will be stockpiled until AMD releases the R9 390X. At that point, Nvidia will retaliate by putting out a mainstream GM200-based card (maybe called the GTX 990?) with one disabled module at $599 or so, and cutting the GTX 980's price to about $399. Somewhere down the road, there will be a Titan X2 with dual GM200 chips for $2999.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
I think this is unlikely. The reason why the original Titan had a disabled SMX module is that early yields of the GK110 were low. Remember, this card was released back in February 2013 - more than two years ago. The TSMC 28nm process was still not fully mature, at least for dies that big. Nvidia needed all their fully-functional GK110 chips for much more lucrative Tesla cards.

In contrast, the GM200 appears not to have been designed as a compute-first chip; as other posters mentioned, it doesn't look like there is a Maxwell Tesla on Nvidia's roadmap. Furthermore, the TSMC 28nm process is extremely robust by now - it ought to be, since it's over three years old. Yields will likely be fairly high.

My guess is that the handful of highest-binned GM200 chips (best performance, lowest power usage) get put into the Quadro M6000 when that's released; the remaining non-defective chips will go into the Titan X; and those chips which need a module disabled will be stockpiled until AMD releases the R9 390X. At that point, Nvidia will retaliate by putting out a mainstream GM200-based card (maybe called the GTX 990?) with one disabled module at $599 or so, and cutting the GTX 980's price to about $399. Somewhere down the road, there will be a Titan X2 with dual GM200 chips for $2999.

Possible, but they were pushing out GK110s for a long while before Titan released. I'd have to go back and check, but I believe they were already selling fully unlocked chips in Teslas/Quadros when Titan arrived. Just because they can sell a full chip with ease doesn't mean they will, as not selling a full chip allows them to grind out all the profit they can. We saw them successfully slow-roll GK110 like this, milking it for all they could.

The market for the Titan amongst gamers had to be very, very, very small at $1000 each. At that price point you are chasing adult gamers with good jobs, or ones that don't game, and their well-indulged children. Titan X is rumoured at $1350 apiece; that is not going to be anywhere near a high-volume seller to gamers. A lot of Titan buyers seem to be oriented more toward straight-up extreme benchmarking, going by the owners I see in forums. The guy I sold my Titans to didn't even game; he just ran benchmarks to get scores on 3DMark and HWBOT leaderboards. I don't see Nvidia locking the lion's share of the gaming market out of a full GM200 behind a $1350 price point.

Breaking GM200 down into a slow drip like they did GK110 just makes sense, as it lets them milk it. It is no different than Titan's launch: it comes out at an abnormally high price point, and then they launch the 780 two months later for $400 less. They follow that up with a full chip down the road for a bit more, like the 780 Ti.

I also don't think there will ever be a GM200 with just one disabled SMM at the $599 price you mentioned. Maybe one with 3 disabled, but I expect to see what we saw with GK110: Titan X with one disabled, a card with 3 disabled, and then lastly a full chip - with the cheapest GM200 card at a minimum of $600 until/if there is pricing pressure from AMD. I also expect we'll see another GM200 card besides Titan X before AMD even gets the 390X out, if the rumour of a June 390X is true - the market for a $1300 gaming card is too small to ignore the many more sales that could be reaped at, for example, a $700 price point. I guess we will find out for certain in just a few weeks once the card comes out.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Possible, but they were pushing out GK110s for a long while before Titan released. I'd have to go back and check, but I believe they were already selling fully unlocked chips in Teslas/Quadros when Titan arrived. Just because they can sell a full chip with ease doesn't mean they will, as not selling a full chip allows them to grind out all the profit they can. We saw them successfully slow-roll GK110 like this, milking it for all they could.

The market for the Titan amongst gamers had to be very, very, very small at $1000 each. At that price point you are chasing adult gamers with good jobs, or ones that don't game, and their well-indulged children. Titan X is rumoured at $1350 apiece; that is not going to be anywhere near a high-volume seller to gamers. A lot of Titan buyers seem to be oriented more toward straight-up extreme benchmarking, going by the owners I see in forums. The guy I sold my Titans to didn't even game; he just ran benchmarks to get scores on 3DMark and HWBOT leaderboards. I don't see Nvidia locking the lion's share of the gaming market out of a full GM200 behind a $1350 price point.

Breaking GM200 down into a slow drip like they did GK110 just makes sense, as it lets them milk it. It is no different than Titan's launch: it comes out at an abnormally high price point, and then they launch the 780 two months later for $400 less. They follow that up with a full chip down the road for a bit more, like the 780 Ti.

I also don't think there will ever be a GM200 with just one disabled SMM at the $599 price you mentioned. Maybe one with 3 disabled, but I expect to see what we saw with GK110: Titan X with one disabled, a card with 3 disabled, and then lastly a full chip - with the cheapest GM200 card at a minimum of $600 until/if there is pricing pressure from AMD. I also expect we'll see another GM200 card besides Titan X before AMD even gets the 390X out, if the rumour of a June 390X is true - the market for a $1300 gaming card is too small to ignore the many more sales that could be reaped at, for example, a $700 price point. I guess we will find out for certain in just a few weeks once the card comes out.

You are totally making stuff up. This is twice in this thread.

You haven't looked back?

"I'd have to go back and check, but I believe they were already selling fully unlocked chips in Teslas/Quadros when Titan arrived"

I already posted about this and the way it was rolled out. You don't even know, but you keep insisting on this completely fabricated view of a purposeful slow rollout, and there is absolutely nothing as evidence to support it. It is made up; the facts don't line up with it at all.
I have made my case, twice. And yet you continue, while admitting that you haven't even taken the time to look at how GK110 was actually rolled out.

The first GK110 chip was the Tesla K20(X). That entire Oak Ridge supercomputer was filled with gimped chips: 2496/2688 CUDA cores enabled out of a max of 2880. Not only that, the speed was gimped to the lowest GK110 clocks of all, 706/732MHz.

This K20(X) chip was shipping to the supercomputer in Q4 2012; Nvidia had announced both the K20 and the K20X in November 2012, and availability was an issue.

So far in our timeline, no full GK110 chips had launched. The next chip to launch was the first Titan, in February 2013. It, too, was a cut-down GK110.
It had the same number of CUDA cores as the K20X, which is 2688. The original Titan ran at 837-876MHz with a TDP of 250 watts.

Then there was the first GTX 780 (May), 3 months after Titan - it had fewer cores than all the other chips: K20, K20X, and Titan. 2304 CUDA cores running at 863-900MHz. It had a TDP of 250 watts, just like the Titan, and real-world consumption was about on par as well. These signs point to this being a GPU made from stockpiles of GK110 chips that couldn't make the cut as a K20(X) or Titan. This is the only explanation for the power consumption, which should have been lower if Nvidia had just gimped it for the heck of it.

The next GK110 to launch was the K6000. Three months after the 780, six months after the Titan, and nine months after the first cut-down GK110s came out (the K20s), we finally get there: the K6000, the full-fledged GK110, shipping by August 2013. We had heard rumors about this chip for a while, but it was the end of July 2013 before Nvidia finally made it official. This chip had an amazing trait not seen before: running all 2880 cores at over 900MHz, the TDP was down to 225 watts. Hmm... isn't that interesting. You would think something must have been improving all those months.

Then, about a month later, came the announcement of the GTX 780 Ti, the first full-fledged GK110 GeForce card. It was announced in October, about a month after the K6000, and launched in the first week of November. Not only was it the first full-fledged GeForce GK110, it was also running at a much higher speed than any GK110 before it: 876-928MHz. And even more amazing, for all those cores running at higher speeds, the power consumption of the 780 Ti was about the same as the much lower-specced Titan that came out 8 months before it.

Finally, we get to the full-fledged Tesla GK110, which came a few weeks after the 780 Ti: a full 2880 cores running at a higher speed, 810-875MHz, with the TDP staying at exactly the same 235 watts as the slower, cut-down K20X of a year before.

There was no slow roll on purpose. To me, it is a theory just made up for whatever purpose people think it might serve. There are no facts to support it, and when you start looking at the actual situation as it played out, the evidence supports quite the opposite.

Nvidia came into Kepler straight from Fermi. Fermi was a sore spot, and Nvidia was working hard to shed the power-hog image they were ridiculed for. There was absolutely no way Nvidia could have launched a full-chip 780 Ti monster at the time they launched the GTX 680. It was completely impractical to produce in volume, impractical from a yield perspective, and even if they could have, the power consumption would have been worse than the original 480's.

Count how many months it took to come out with a much-improved GTX 780 Ti and it aligns completely with the time frame of a re-spin - the same amount of time it took Nvidia to fix the GF100, the same amount of time it took for the original GTX 480 to launch when it was delayed 9 months.

This time, though, GK110 wasn't in nearly as bad shape as the GF100 originally was. But a full GK110 was far from a real possibility at the beginning of 2013. All signs point the other way. I think this time the limited wafers hurt just as much or more. Going with a big die on a new node is a huge, huge risk.

Then there is the fact that originally, Nvidia was prepping GK104 to serve as the 670 Ti. There were even box covers printed; this thing was real. Nvidia had no faith in GK104 beating Tahiti, and I believe their plan was to try to hold off AMD as long as they could, going without the performance crown for months and months. There is absolutely no way Nvidia could have launched GK110 at all a year earlier, when the 680 launched in March 2012. Even in November 2012, all Nvidia could manage was a few K20s for the Oak Ridge supercomputer. These were cut-down GK110s that were very, very limited in supply.

The original Titan, a cut-down GK110, came out a year after the GTX 680. It sold out too; there was not enough capacity even then.

Look, no matter how you slice it, the slow-roll theory is bunk. It just didn't happen like that. You cannot rewrite history. Why would Nvidia let AMD take the performance crown with the 7970 GHz if they could have launched GK110 all along? AMD held the lead for months and months, and Nvidia just struggled for parity in some games. None of this makes sense unless you accept that there were real issues getting GK110 out of the gate.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
I'll admit that was only worth skimming, as there was a real lack of brevity there, but GK110 was certainly a slow rollout. In fact, Kepler as a whole was a slow play.

Titan, 780, 780 Ti, Titan Black. It was the best way to maximize profit. It really was very well done, releasing Titan when they did for $1000 after a long drought of no real high end, just the mid-range 680. Buyers with the cash grabbed them up, and once that initial rush was over, the 780, which was waiting in the wings, was dropped at $600 to clean up the rest who balked at the $1000 price. It made excellent sense to do it this way, because there are those who bought a Titan at $1000 that would have gotten a 780 instead had it been available. It was brilliant not to release the two together.

After that it was reactionary releases and price cuts against AMD: the 780 price cut, the 780 Ti release, and then the Titan Black release. We'll see something similar happen with GM200, starting with Titan X. While it may not be a mirror-image release cadence, we can certainly expect much the same kind of slow rollout, starting with an ultra-expensive GM200 with no other options; then, once that initial rush is milked, we'll see a cut-down card come out. I personally expect Titan X to be cut down so they can launch a full-fat version down the road. We'll have to see, but it would be the best way to maximize profit, and it worked so well with Kepler.

What is great this time is that we have Kepler's release history to look at, so we have an idea of what to expect. With the first Titan they bamboozled everyone by dropping that card without a hint of another one soon to come, and then following up just a few months later with the 780.
 