Titan X Announced


ocre

Golden Member
Dec 26, 2008
1,594
7
81
While it's possible Titan X could be cut down, I don't think it will be. Everyone and their mom knows Fiji is in the pipeline and probably has working samples. Also, there won't be nearly as much time to slow-roll GM200 as there was with GK110. On top of that, we're looking at an EXTREMELY mature 28nm process. I think Titan X will be fully functional GM200 chips.

I think it's crazy to see how history gets rewritten.

There is no proof whatsoever that Nvidia purposefully "slowed down" the GK110 rollout. Has it really been that long? There is a huge issue with this theory when we look back: it's the facts that get in the way.

First off, the 28nm node itself. It went into full production and there was huge demand for a young node. TSMC ramped up production as fast as they could, but it was nowhere near enough; demand for wafers was much higher than could be supplied. Then there was that little spat about customers paying for entire wafers, not just for the working chips. Supply was limited, and that went on for some time.

Second, GK110 was a massive chip for such an early node. That's not ideal, and at the very least you would think there is a drawback to making large chips on a fresh node. History teaches us that large chips are not as easy to produce as smaller ones. When silicon is plentiful and you are paying for each working chip instead of each wafer, the situation isn't nearly as bad. But in the GK110 situation, with wafers extremely limited for a very big chip on a very young, immature node... how this escapes people's memory is beyond me. The road for GK110 was very rough in the beginning. So rough that, had Tahiti been much faster, Nvidia would have been in serious trouble for a good long while. There was simply no chance of producing a fully enabled GK110 chip in volume when GK104 launched. It was a complete impossibility. There wasn't some plan to roll Kepler out slowly just because; there was a real history and a real reason why things turned out the way they did.

Lastly, the huge GK110 supercomputer Nvidia had been contracted for. That is where all the GK110 chips were going in the beginning. And it was filled entirely with cut-down GK110 chips. This wasn't part of some master plan to maximize profits with a slow rollout; it was actually more of an embarrassment. Those chips were not as powerful as they should/could have been, and Nvidia would have been praised far more had they delivered full GK110 chips. There is no acceptable excuse on this one, nothing but the truth: Nvidia was forced to fill the contract with cut-down GK110 because they had no other choice. They could not produce full chips in quantity.

Sorry, but this theory that Nvidia slow-rolled Kepler out just because doesn't sit with reality very well. I won't say Nvidia, Intel, or AMD have never held back an ace from time to time. They may not always play their most aggressive hand and might wait on their competitor's move first. Like the 780 Ti: I am sure Nvidia held it back until they saw how Hawaii turned out. But that happened when the 28nm node was much more mature and in much higher volume. I am sure Nvidia was waiting with the 780 Ti counterattack then, but there is absolutely no way a 780 Ti was possible when GK104 had just launched. There was no way a 780 Ti was possible when the original Titan launched. It would be further down the road before the situation improved enough to allow such a chip.

The situation with Kepler was very interesting; Nvidia just winged it. GK104 wasn't some master plan for a slow rollout, nothing like that. It was originally going to be a GTX 670. Nvidia wasn't sure of it at all; they thought they were in trouble and would have to play a completely different hand. But things worked out very well for them, and after a long while the full GK110 became a reasonable chip that could be produced and sold in quantity. It just wasn't like that all along.

When we look at past chips and generations, look at how things really unfolded. Look at the situation in the time it happened in.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
You didn't prove jack.

My original statement was regarding similar perf/W for GM200 compared to GM204, if we go by the history of NV's bigger chips.

You didn't even bother to discuss the point but went off on a crusade to prove me wrong based on 1440p for 770 vs 780ti. The point is AT LAUNCH (in case in your worthless crusade you forgot the entire point, go here: http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/27.html) the 780ti was ~35% faster than the 770 OVERALL OVERALL OVERALL. When you go up to 1440p, the gap increases. I even said as much.

How about you GO BACK to the point at hand: Re: GM200 vs GM204, launch time, launch performance comparisons.

Do you think efficiency will be the same? Yes? Then assuming 35% perf gains overall (I am not gonna cherry pick a particular resolution), it should consume ~35% more power.

If it consumes 50% more power, it should be 50% faster.

Unless you somehow feel NV will pull a miracle and improve perf/w with GM200, I stand by this logical extrapolation.
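(As a back-of-the-envelope sketch of that extrapolation, assuming roughly 165 W for GM204; that figure is purely illustrative, not a leaked spec:)

# If GM200 keeps GM204's perf/W, power scales 1:1 with performance.
gm204_power_w = 165        # assumed GM204 (GTX 980) board power, illustrative only
for perf_gain in (0.35, 0.50):
    est_power = gm204_power_w * (1 + perf_gain)
    print(f"{perf_gain:.0%} faster at equal perf/W -> ~{est_power:.0f} W")
# -> ~223 W for 35% faster, ~248 W for 50% faster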

Hey, if you want to throw 1600x900 into the mix now too, I guess you can do that, but it just further proves my point. I'm sure 1080p will be the main resolution used to compare GM200 to GM204. 1080p is the standard for comparing flagship/halo cards; that's what you keep saying over and over. :thumbsup: I'll remember this next time you spout 4K.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Couple things.

- Lower clocks
- Much of the chip scaled less. Bandwidth was 50% more, for example, and the number of GPCs only increased from 4 to 5 (25%).

I specifically mentioned the lower clocks, not sure how you missed that. And besides, all else being equal chances are that GM200 will also clock lower than GM204, due to being a bigger die.

And yes, several things increased by less than 75%, for instance memory controllers, ROPs, and raster engines (1 per GPC), but it stands to reason that these parts weren't scaled to the same degree simply because they didn't represent significant bottlenecks at their lower levels of scaling.

Either way, trying to predict the performance scaling of a Maxwell GPU based on a couple of Kepler GPUs will always be inherently inaccurate, so chances are it's not going to match up anyway.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
I specifically mentioned the lower clocks, not sure how you missed that.

I almost missed it at first. I'm still not sure how people missed my last post about KitGuru saying that it will be a consumer-oriented card with no double precision.
 

escrow4

Diamond Member
Feb 4, 2013
3,339
122
106
I'll buy it Day 1 for $1500, I had an original Titan for $1300 AUD or so here, returned it and got a 780 Ti GHz instead. Finally some real GPUs for once.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Kitguru have some more info apparently:
http://www.kitguru.net/components/g...itan-x-12gb-what-you-need-to-know-and-expect/

Not sure why they are putting 12GB on a consumer card without Double precision.

Because people are stupid and will buy it instead of waiting until the properly-priced GTX version of the GM200 is released.

And it's really more of a prosumer, extreme-enthusiast part, not really a standard consumer part. A consumer GM200 will be a GTX-brand card, which will almost surely carry 6GB, perhaps at 8GHz memory instead of 7GHz. While the getting is good, Nvidia will focus on maximizing revenue from early GM200 production while they continue binning lower-yield chips to be made into some lesser card. I do fear that a cut-down GM200 may very well end up having the same memory configuration issue that cut-down GM204 chips have (i.e., the 970).

I mean, they absolutely have to use the GM200 parts that aren't fully enabled due to defects. To throw those away is unthinkable at these prices, I suspect.

I think they can avoid segregated memory clusters by not even disabling any of the ROPs. Or if they are going to disable any part of a ROP/MC partition, they should go all or nothing.

If the crossbar and overall architectural layout is like the GM204's, save for additional GPCs, then the crossbar will have 6 ROP/MC partitions attached.
For reference, the GM204 has 4 ROP/MC partitions. The 970 had the issue it did because only half of one of those partitions was disabled, dropping one L2 cache block and 8 ROPs. This forced two memory controllers to share links with one L2 cache block instead of having direct links.

I don't know if those partitions are just as likely to have failures as the SMMs. If not, then the simple answer is not to disable any. If so, then the answer here is to disable an entire block, which leaves a 320-bit memory interface and 5GB of RAM. If they can get away with not disabling any ROP/MC blocks, that is preferred, leaving the entire 6GB and 384-bit interface.

But after the last fiasco, they cannot try to market such a cut-down GM200 as a "6GB" card. If they did indeed disable an entire ROP/MC block, calling it a 6GB card would be an outright lie, unlike the 970, where all the memory WAS technically accessible, just not in an optimal way. If an entire partition were disabled, two entire 512MB blocks of DRAM would be inaccessible to the crossbar. So you would not see any processing slowdowns from spilling into those "extra" memory blocks; you'd simply have only 5GB on a 6GB card. So if they go down this route, I could see them saving face by labeling such a card a 5GB card from the beginning. It would still be a great card.

A 320-bit, 20-SMM, 5GB GM200 would still be a great card if priced right.
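(For the sake of the arithmetic, here is a tiny sketch assuming a GM204-style layout of six 64-bit ROP/MC partitions with 1GB of DRAM each for a hypothetical 6GB GeForce variant; the function and figures are illustrative, not confirmed specs:)

# Bus width and capacity under "all or nothing" partition disabling.
def memory_config(disabled_partitions=0, total_partitions=6,
                  bits_per_partition=64, gb_per_partition=1):
    active = total_partitions - disabled_partitions
    return active * bits_per_partition, active * gb_per_partition

print(memory_config(0))   # (384, 6): full 384-bit bus, 6 GB
print(memory_config(1))   # (320, 5): one partition disabled -> 320-bit, 5 GB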

This is actually perfect. Considering that's a "small" step up from a 980, market that cut-down GM200 as a 985.
Once Nvidia feels ready, they can then release a fully enabled GM200 as a 985 Ti. This preserves the "sanctity" of the x90 mark as a dual-GPU card, although they haven't used it since the 690. If Nvidia feels pressured, they could release a 990 with dual GM200s, perhaps even dual cut-down GM200s. Hopefully they wouldn't restrict the dual-GPU configuration to the Titan brand like they did with Kepler; that would be a shame.

All such cut-down GM200 could theoretically be used only in the Quadro market, OR they could save all the cut-down GM200s to be used in a dual-GPU card of any brand.

Having the 980 as a smaller non-cut down GPU leaves them in a funny spot, and I think what they do from here is entirely decided by how competitive AMD is with the 300 series.
 
Feb 19, 2009
10,457
10
76
Hey, if you want to throw 1600x900 into the mix now too, I guess you can do that, but it just further proves my point. I'm sure 1080p will be the main resolution used to compare GM200 to GM204. 1080p is the standard for comparing flagship/halo cards; that's what you keep saying over and over. :thumbsup: I'll remember this next time you spout 4K.

So you pick 1440p... Why not go with 4K? There's DSR available for consumers to increase the resolution, so it's no longer limited by their monitor.

I picked 1080p because GK104 was undoubtedly a product designed for 1080p, with 2GB of VRAM and a 256-bit bus, and without the advantage of Maxwell's compression. NV touted it in official presentations as the best card (at launch) for 1080p.

It scales worse at 1440p compared to Tahiti and the 780/Ti; this is well known.

Launch review: 780ti is ~33% faster at 1080p.


At 1600p, the 780ti is ~45% faster.


If that hurts you so, fine, let's use 1440p and make it so that GK110 is 45% faster than GK104.

Does it change my original point on perf/W, which is that the big NV die has similar perf/W characteristics to the mid-range NV die, so if it's 35% faster it will use around 35% more power? Not really. You just decided to get pissy and personal over semantics. Sheesh, the launch isn't even here and you're already raising the battle flag.
 
Last edited:

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Does it change my original point on perf/W, which is that the big NV die has similar perf/W characteristics to the mid-range NV die, so if it's 35% faster it will use around 35% more power? Not really.

On a slight tangent, when using the TPU numbers to estimate efficiency it is important to note that the way TPU does it is quite flawed.

Basically they measure power usage in 1 game, at 1 resolution, whilst they measure performance in multiple games at multiple resolutions. As such their performance per watt graphs should basically be taken with a huge grain of salt, especially the non-1080p ones (power is measured at 1080p).

Preferably their graphs should be ignored completely and efficiency should only be estimated from Metro: Last Light (the game they use for power measurements).

Coincidentally, the picture can change a fair deal when doing so. For instance, the Titan is listed as being about 23% more efficient than a GTX 680 at 1440p in the TPU efficiency graph, but when you look at the actual numbers from only M:LL at 1080p (since, again, that's where power consumption is actually measured), it's only about 16% more efficient.
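(As a rough illustration of the matched comparison; the FPS and wattage figures below are made up for the example, not TPU's actual measurements:)

# perf/W from the SAME game and resolution (power and FPS measured together),
# instead of dividing an averaged performance index by single-game power.
cards = {
    "GTX 680":   {"fps": 60.0, "watts": 185.0},   # hypothetical Metro: LL 1080p numbers
    "GTX Titan": {"fps": 80.0, "watts": 215.0},
}
eff = {name: d["fps"] / d["watts"] for name, d in cards.items()}
advantage = eff["GTX Titan"] / eff["GTX 680"] - 1
print(f"Titan perf/W advantage over the 680: {advantage:.0%}")   # ~15% with these made-up numbers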
 

BHZ-GTR

Member
Aug 16, 2013
89
2
81
Oh MY GOD

I would not buy this product (GTX TITAN X)

Price = 1500 Dollars

I am Waiting For Fiji XT Or Fiji PRO
 
Last edited:

iiiankiii

Senior member
Apr 4, 2008
759
47
91
Nice. I can't wait to see what it can do. I understand the price is outrageous, but I also understand I'm not the target market. At least this will give us a general idea of what a GTX 980 "Ti" can do. It also gives AMD something to aim for with their 390/X cards. At least this time the 390/X should be out to challenge it quicker than the 290/X challenged the 780 Ti.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
What mad-scientist setup would one need to actually use 12GB of VRAM? :awe:

I remember being so amazed when I jumped from 512MB on my 8800 GTS to the 1GB GTX 280; I was blown away. But 12GB of VRAM, wow. What the hell can one do with this much VRAM? Launch nuclear warheads? I'm guessing the best use of this much memory is in configurations with multiple cards?

I do admire the beast though.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Kitguru have some more info apparently:
http://www.kitguru.net/components/g...itan-x-12gb-what-you-need-to-know-and-expect/

Not sure why they are putting 12GB on a consumer card without Double precision.

Wow just wow, unbelievable if this is true! :thumbsdown:

So this card is just a normal consumer GPU without DP? Just renaming the 580/780 Ti successor to a Titan, which was supposedly a "prosumer" (price justification) card.

However, keep in mind that Nvidia’s “Maxwell” was not designed to handle professional computing tasks, therefore, it does not support native double precision FP64 compute capabilities. Even the GM200 will not be able to beat its predecessor, the GK110 in high-performance computing tasks (e.g., simulations) that require FP64.

Have you guys all missed that? If it's missing full DP, it's simply a 980 Ti renamed as a Titan to price-gouge consumers. NV's greed knows no limits, especially at the rumored prices. It might not even be the full die, to boot.

I guess they are giving AMD an easy chance to get some high end sales with the 390x if this is the "competition".
 

Grooveriding

Diamond Member
Dec 25, 2008
9,110
1,260
126
The situation with Kepler was very interesting; Nvidia just winged it. GK104 wasn't some master plan for a slow rollout, nothing like that.

Believing Nvidia 'just wings it' is an extremely myopic viewpoint and a naive take on Nvidia's powerful marketing department. GK110 was a well-executed ballet of milking that chip for all it was worth with a slow rollout. Arguably this began with the GTX 680 and continued on to the Titan, 770, 780, 780 Ti, and Titan Black.

It was certainly rehearsed and intentional; they were not sitting around throwing darts at a board to decide when to drop products. We're seeing exactly the same strategy happening with Maxwell; it's no coincidence. They let out the mid-range 680 (980) as high end, then dropped Titan (Titan X) at $1000 to feed many of us who were waiting for the real flagship, milked the market that would pay that price after waiting forever, and then came out with the 780 (1080?) @ $600 to clean up the rest. Then came the 770 (980), a rebranded 680 (980), the 780 Ti (1080 Ti?) - the full chip lacking in the Titan - and then Titan Black (Titan Black X?), again at $1000. It was clearly masterfully choreographed. Nvidia's marketing and product strategy is a big part of their success; they do it so well, and so many don't even pick up on how well they're getting worked over...

My bet is that the Titan X is cut down and we see the same situation play out again. They will need to milk GM200 for a long time, because 20nm is not until 2016 and they won't be able to do anything better than deliver more GM200-based cards with fewer disabled units.

On performance: this card will wind up somewhat similar to Titan vs. 680 in my estimation, but with less of a performance disparity. Titan was about 35% faster than the 680 at stock, but you had the headroom to crank it up and get it to 50% faster than a 680. Also, accounting for SLI setups, it will shine with overclocking vs. 980 SLI the same way you see a single overclocked 780 Ti being as fast as 680 SLI. I think the performance improvement numbers will be smaller though; there is less die space to add, and overclocks will be lower than you get on a 980 because of how Nvidia cripples overclocking on their cards with power limits and a lack of voltage control. This is in full effect on the Maxwell 2.0 cards, whose BIOS can't be manipulated the way Kepler-based cards could be.

We'll see the real potential of the chip once a '980 Ti/1080 Ti', whatever it's called, drops and there are factory-overclocked cards with custom PCBs and likely the full-fat GM200 inside. Titan cards are all bone-stock reference cards with minimal PCBs, and we won't even be able to work the BIOS on these cards like we could on the Kepler Titans.

Whenever we see the EVGA Kingpin card, with whatever moniker Nvidia gives the GM200 once they launch the successor to this card, we will see what I expect to be about 90% of the performance of 980 SLI out of a single GPU.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
Have you guys all missed that? If it's missing full DP, it's simply a 980 Ti renamed as a Titan to price-gouge consumers. NV's greed knows no limits, especially at the rumored prices. It might not even be the full die, to boot.

Most people around me who have/had a Titan did not buy it because of DP, but because it was such a hyped card.

You are absolutely right, but even if it had DP, most people would not use it.
 
Feb 19, 2009
10,457
10
76
This is super awesome news.
It means that this is Maxwell's GTX 780 Ti, since the card isn't targeted toward compute/FP64.

So we will get a near-full or full GM200 that is also priced like a GeForce gaming card, i.e. max $999 most likely.

It wouldn't be called Titan X if that's the plan (to sell it as a Geforce consumer card).

Leaks suggest it's ~$1,300... I wouldn't be shocked if true. 12GB is a lot; it commands a hefty premium, right?
 

gradoman

Senior member
Mar 19, 2007
887
567
136
No backplate? really Nvidia? hopefully this is an unfinished one...

When was the last time a stock NVIDIA GTX card came with a backplate? Like never? Why expect one here?

Sure, it's probably $1000+, but if they've never bothered to include a backplate, why would that change now?
 

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
When was the last time a stock NVIDIA GTX card came with a backplate? Like never? Why expect one here?

Sure, it's probably $1000+, but if they've never bothered to include a backplate, why would that change now?

You must not follow GPUs very closely. The 980 and 970 reference cards have backplates.
 

gradoman

Senior member
Mar 19, 2007
887
567
136
You must not follow GPUs very closely. The 980 and 970 reference cards have backplates.

Ah, you're right. First time in... forever. I'm still sleepy and was thinking about ALL the other GPUs that came out with no backplate, including the previous Titan.
 