Titan X Announced


HurleyBird

Platinum Member
Apr 22, 2003
2,759
1,455
136
EDIT: Was responding to a thread crap, but decided to remove it to keep in line with Stahlhart's intentions for the thread.
 
Last edited:

alcoholbob

Diamond Member
May 24, 2005
6,308
355
126
Who here is going to buy this card if it's ~$1350 as rumored, and why?

I'm thinking about it for driving games at 4k. I briefly tried 980 SLI but the micro stutter was atrocious. Wasn't a surprise as my last 3 dual card setups were as well.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
I expect this card to have monster performance
With that black-out look, and turbo boost tech, I call this the GNX of Video Cards!

 

Kippa

Senior member
Dec 12, 2011
392
1
81
I've got a Titan and use it for both gaming and 3D rendering/Photoshop. One thing that has got me wondering is the support for OpenGL in my version of Photoshop CS5. I know the executable for running the software is 64-bit, but does Photoshop CS5 address video ram as 32-bit or 64-bit with regards to using OpenGL?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Without knowing anything about the performance of the card, how can anyone answer that?

Because at $1350, to most people the answer is no regardless of performance. And others will buy it because they know it will be the fastest card out at the time of its release. I assure you there are more people who can answer this question than you think.
 

Kippa

Senior member
Dec 12, 2011
392
1
81
I don't think that 12gb is overkill specifically for the Titan considering that they are probably targeting CUDA people that don't want to buy a Quadro or Tesla. Reasonably good performance card plus 12gb of video ram for $1350 isn't that bad to be honest. If they were trying to sell a "purely gaming card only" without CUDA for $1350 then that would be totally different.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Without knowing anything about the performance of the card, how can anyone answer that?

Fair enough. How fast would it need to be for you to buy it?

I don't think that 12gb is overkill specifically for the Titan considering that they are probably targeting CUDA people that don't want to buy a Quadro or Tesla. Reasonably good performance card plus 12gb of video ram for $1350 isn't that bad to be honest. If they were trying to sell a "purely gaming card only" without CUDA for $1350 then that would be totally different.

That's what I'm wondering. Is anyone here going to buy it for its DP and RAM to do CUDA/GPGPU? Is anyone going to buy it as a gaming card? Not what it can be bought for, but anyone who is actually considering buying it and what they are going to do with it.
 
Last edited:

Kippa

Senior member
Dec 12, 2011
392
1
81
I use my current Titan for gaming, 3D rendering (Octane using CUDA) and video editing (Premiere using CUDA). I would definitely make use of the 12gb with regards to the 3D rendering and video editing, and it would also be nice for 4K gaming. I wouldn't jump on it straight away; I'd like to see how it converts to a UK price, and also want to see the 3D rendering benchmarks to make sure it is actually worth the upgrade. Realistically, though, the video ram is the huge selling point for me, not the actual performance increase over the current Titan.

I have actually tried to load a few assets in Lightwave using Octane, and once or twice it has actually used ALL 6gb of video ram (tried loading a highly detailed spaceship from Event Horizon and didn't have enough video ram even with 6gb).

Video rendering using Octane on a gpu is insanely fast compared to a standard desktop cpu. I think one of the major things holding the tech back for professional use is video ram space. When Octane first came out most gfx cards had roughly 2gb of video ram. With video cards coming out with 12gb of ram I can see this tech starting to be taken up more seriously by both hobbyists and professionals. Just like OpenCL and mining were great for AMD, I can really see 12gb on a gfx card being a really big thing for Nvidia and the uptake of its cards with regards to CUDA. I think people have seriously underestimated the potential demand for the upcoming Titan. I don't see the extra 6gb as a minor thing; it could really make huge changes.
 
Last edited:

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
Maybe it will be cheaper than the previous Titans since it doesn't appear to be anything more than a bigger GM204, and doesn't have the extra DP performance. Not to mention AMD is going to have competitive products with much lower prices in 3-4 months.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I use my current Titan for gaming, 3D rendering (Octane using CUDA) and video editing (Premiere using CUDA). I would definitely make use of the 12gb with regards to the 3D rendering and video editing, and it would also be nice for 4K gaming. I wouldn't jump on it straight away; I'd like to see how it converts to a UK price, and also want to see the 3D rendering benchmarks to make sure it is actually worth the upgrade. Realistically, though, the video ram is the huge selling point for me, not the actual performance increase over the current Titan.

I have actually tried to load a few assets in Lightwave using Octane, and once or twice it has actually used ALL 6gb of video ram (tried loading a highly detailed spaceship from Event Horizon and didn't have enough video ram even with 6gb).

Video rendering using Octane on a gpu is insanely fast compared to a standard desktop cpu. I think one of the major things holding the tech back for professional use is video ram space. When Octane first came out most gfx cards had roughly 2gb of video ram. With video cards coming out with 12gb of ram I can see this tech starting to be taken up more seriously by both hobbyists and professionals. Just like OpenCL and mining were great for AMD, I can really see 12gb on a gfx card being a really big thing for Nvidia and the uptake of its cards with regards to CUDA. I think people have seriously underestimated the potential demand for the upcoming Titan. I don't see the extra 6gb as a minor thing; it could really make huge changes.

Believe it or not brute force with X86 cores is still the preferred way for the professionals. I don't know the technical reasons why, but GPU rendering is much lower quality. That's why Pixar, for example, opts for 25,000 X86 cores in their render farm.

I believe (pretty sure?) that when rendering with multiple GPUs the cards don't run in SLI/Crossfire and the VRAM is not mirrored. Also, I don't think DP is used either. You might be better off with multiple cheaper cards for what you do.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,109
136
Believe it or not brute force with X86 cores is still the preferred way for the professionals. I don't know the technical reasons why, but GPU rendering is much lower quality. That's why Pixar, for example, opts for 25,000 X86 cores in their render farm.

Well, Pixar has invested a lot of time and money over the years developing their software and hardware configurations based on x86 systems. The costs, in time and money, of transitioning their system to a compute-card model probably are not worth it at the moment. As x86 is a general-purpose MPU, there is also the added value of greater flexibility (or at least easier algorithmic development compared to compute programming).
 

Unoid

Senior member
Dec 20, 2012
461
0
76
I expect this card to have monster performance
With that black-out look, and turbo boost tech, I call this the GNX of Video Cards!



It's much more like a Corvette Z06, very expensive with a LARGE displacement (large die) engine.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Maybe it will be cheaper than the previous Titans since it doesn't appear to be anything more than a bigger GM204, and doesn't have the extra DP performance. Not to mention AMD is going to have competitive products with much lower prices in 3-4 months.

Nobody knows this but Nvidia at this time. I don't know why everyone keeps repeating it.

Comparatively, GM204 has slightly lower DP performance than Kepler relative to transistor count and die size, but it still outperforms GK104.
GM200 will be larger than GK110, and based on the math I had worked up, it could at least be a good DP performer. Whether it outperforms the Titan Black, who knows, but it should be a good DP card.

People are treating the facts that GM204 has a 1/32 DP/SP ratio and that Maxwell will not be used for Tesla products as too much of a certainty against its potential. GK104 had 1/24 DP/SP, which isn't significantly better. That ratio has nothing to do with what the Titan variant can get; that's up to what Nvidia permits. GK110, in the 780 Ti, also had about 1/24 DP/SP performance, so it's not a function of what a full-size die versus a smaller die can produce; it's about what is enabled in terms of FP64 compute potential.
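Not in the original post, but the ratio arithmetic works out roughly like this (a sketch with approximate, illustrative core counts and clocks, not official specs):

```python
# Rough sketch: theoretical peak FP64 throughput from shader count, clock,
# and the enabled DP:SP ratio. Core/clock numbers are approximate.

def peak_sp_gflops(cores, clock_mhz):
    # 2 FMA flops per core per clock
    return cores * clock_mhz * 2 / 1000.0

def peak_dp_gflops(cores, clock_mhz, dp_sp_ratio):
    return peak_sp_gflops(cores, clock_mhz) * dp_sp_ratio

# GM204 as in a GTX 980 (~2048 cores, ~1216 MHz boost, 1/32 DP:SP)
print(round(peak_dp_gflops(2048, 1216, 1 / 32)))  # ~156 GFLOPS FP64
# GK110 as shipped in the 780 Ti (~2880 cores, ~928 MHz, ~1/24 enabled)
print(round(peak_dp_gflops(2880, 928, 1 / 24)))   # ~223 GFLOPS FP64
```

Either way, the enabled ratio dominates the result, which is the point above: the die doesn't dictate the DP rate, the product segmentation does.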

Titan X could very well be a bust for DP performance. Point being, we won't know until the specs are detailed.

Just because Nvidia isn't using it in Tesla products doesn't speak against it as much as many are arguing.

I think it's clear that it won't be a significant leap in compute performance over Kepler, so that argument is possibly true. A full GM200 might not perform any better in DP than a full GK110. That's not to say it's worse or terrible, just not a leap like the SP performance this generation.

There aren't any outstanding supercomputer orders for Nvidia at this time, and Nvidia certainly wouldn't be interested in selling Tesla users a minimal increment over last generation and risking a loss of sales in the long run. Their next generation should pack a wallop for DP, and on the Tesla side of the business they'll be implementing NVLink, which will help scale performance significantly. They're basically working on introducing a new expansion standard on server motherboards, especially for POWER-based systems, which would use a whole new socket/slot standard for Nvidia GPGPUs built around an even better version of NVLink. With that on the roadmap, I expect most Tesla users, at least the supercomputer orders, will be focusing on what the next generations bring.

With NVLink on the horizon, I don't think Nvidia ever intended Maxwell to enter the Tesla market, or at least not the current version of Maxwell. I think they had intended Maxwell to be on a smaller node in the original roadmaps, but Maxwell has now evolved into a stopgap measure to hold off buyers until they can make the node jump.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Believe it or not brute force with X86 cores is still the preferred way for the professionals. I don't know the technical reasons why, but GPU rendering is much lower quality. That's why Pixar, for example, opts for 25,000 X86 cores in their render farm.

There is so much information out there about Pixar and nVidia, from ray tracing to latency.

Before, Pixar's lighting artists relied on thousands of small cheats that meant a scene could only be viewed from a limited number of angles. But by shifting to ray tracing, which models the way light actually bounces around an environment, Pixar's lighting team could free themselves to explore scenes from a wider variety of angles. And they could instantly change the way a scene was lit, shifting from light with golden tones to starker colors with a few keystrokes to change the mood of a scene.

Pricing aside, interested in the technology and engineering of the bigger die!
 

artvscommerce

Golden Member
Jul 27, 2010
1,143
17
81
Believe it or not brute force with X86 cores is still the preferred way for the professionals. I don't know the technical reasons why, but GPU rendering is much lower quality. That's why Pixar, for example, opts for 25,000 X86 cores in their render farm.

I believe (pretty sure?) that when rendering with multiple GPU's the cards don't run in SLI/Crossfire and the VRAM is not mirrored. Also, I don't think DP is used, either. You might be better off with multiple cheaper cards for what you do.

When you render on multiple GPUs the memory is mirrored so it is not cumulative. The fact that until recently, the scenes professionals typically wanted to render could not fit into GPU memory is the primary reason we've seen production rendering take place on CPUs rather than GPUs.
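A minimal sketch of what mirrored (non-pooled) VRAM means for scene size, using made-up numbers:

```python
# Because each GPU holds a full copy of the scene, the limit is the
# smallest card's VRAM, not the sum across cards.

def scene_fits(scene_gb, per_gpu_vram_gb):
    # The scene must fit on every GPU individually.
    return all(scene_gb <= vram for vram in per_gpu_vram_gb)

print(scene_fits(8, [6, 6]))   # False: two 6GB Titans still can't hold an 8GB scene
print(scene_fits(8, [12]))     # True: one 12GB card can
```

More GPUs still speed up the render; they just don't buy any extra scene capacity.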
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
It's funny how people always think their reasoning for buying a product is the correct one.
People who buy a $1350 product obviously have the money for it. They don't need to explain why.

Personally I draw the line at $999. There is a limit on how much I'm willing to pay for a GPU. At $1350 I'd rather get two 980s. Or perhaps two 390Xs, depending on TDP, noise and power consumption.
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Nobody knows this but Nvidia at this time. I don't know why everyone keeps repeating it.

Comparatively, GM204 has slightly lower DP performance than Kepler relative to transistor count and die size, but it still outperforms GK104.
GM200 will be larger than GK110, and based on the math I had worked up, it could at least be a good DP performer. Whether it outperforms the Titan Black, who knows, but it should be a good DP card.

People are treating the facts that GM204 has a 1/32 DP/SP ratio and that Maxwell will not be used for Tesla products as too much of a certainty against its potential. GK104 had 1/24 DP/SP, which isn't significantly better. That ratio has nothing to do with what the Titan variant can get; that's up to what Nvidia permits. GK110, in the 780 Ti, also had about 1/24 DP/SP performance, so it's not a function of what a full-size die versus a smaller die can produce; it's about what is enabled in terms of FP64 compute potential.

Titan X could very well be a bust for DP performance. Point being, we won't know until the specs are detailed.

Just because Nvidia isn't using it in Tesla products doesn't speak against it as much as many are arguing.


I think it's clear that it won't be a significant leap in compute performance over Kepler, so that argument is possibly true. A full GM200 might not perform any better in DP than a full GK110. That's not to say it's worse or terrible, just not a leap like the SP performance this generation.

There aren't any outstanding supercomputer orders for Nvidia at this time, and Nvidia certainly wouldn't be interested in selling Tesla users a minimal increment over last generation and risking a loss of sales in the long run. Their next generation should pack a wallop for DP, and on the Tesla side of the business they'll be implementing NVLink, which will help scale performance significantly. They're basically working on introducing a new expansion standard on server motherboards, especially for POWER-based systems, which would use a whole new socket/slot standard for Nvidia GPGPUs built around an even better version of NVLink. With that on the roadmap, I expect most Tesla users, at least the supercomputer orders, will be focusing on what the next generations bring.

With NVLink on the horizon, I don't think Nvidia ever intended Maxwell to enter the Tesla market, or at least not the current version of Maxwell. I think they had intended Maxwell to be on a smaller node in the original roadmaps, but Maxwell has now evolved into a stopgap measure to hold off buyers until they can make the node jump.

I doubt Titan will be a 'bust' but there has to be a reason NV is not updating their Tesla line for Maxwell. Maxwell is much more efficient vs. Kepler, and Tesla has historically been updated with each new GPU tech. If Titan doesn't have the dedicated DP die-space, it probably will be 'serviceable' but not amazing for DP workloads. Tesla is marketed to do both SP and DP well, so the GM200 (as we know it) likely wasn't a great fit for the Tesla line.

I agree, we will not know 100% for sure until we get the reviews and benchmarks posted.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's funny how people always think their reasoning for buying a product is the correct one.
People who buy a $1350 product obviously have the money for it. They don't need to explain why.

Personally I draw the line at $999. There is a limit on how much I'm willing to pay for a GPU

No one disputes that. There are people who will buy a $397,500 new Ford GT over a Ferrari or Lambo or McLaren. That's not the point. Every single generation since Kepler, NV keeps raising prices. First mid-range went up to $500 with the 680, and now it's $550 for the successor, the 980. If Titan X goes up from $999 to $1350, why wouldn't the 780 Ti's successor also increase from $699 to $799-849? If people keep paying these prices, NV will continue raising them.

There are going to be gamers who see this as a major negative for the GPU industry overall, because it means $250-300 for mid-range and $550 for high end are all but dead unless AMD brings some alternative. I don't blame NV though, as they are simply exploiting the free market. I won't support mid-range cards at $500-550 and flagships at $1K, which is why I'll keep waiting until something comes out that I view as a better balance of price/performance, even if a small premium is attached.

I mean, for NV the Titan was worth $1K and the Titan Z $3K, and I am sure there are people who thought those prices were fair. A lot of gamers really can afford $2000-2500 of graphics, but we don't spend that money because of how quickly GPUs depreciate, and as a matter of principle, to send a signal that such prices are not OK. Those who think the prices are acceptable will also vote with their wallets and buy the card. What will remain true no matter what is that in less than 1.5 years we will be able to get Titan X performance at half the price. It took just 11 months for a $330 970 to match a 780 Ti, just 9 months for a 290 to match the original Titan, and less than 12 months for the R9 295X2 to drop from $1499 to $599.
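Those depreciation figures from the post, worked out (prices as quoted above):

```python
# Percentage price drop between a card's launch price and the later
# price cited in the post.
def drop_pct(launch, later):
    return 100.0 * (launch - later) / launch

print(round(drop_pct(1499, 599)))  # R9 295X2: ~60% in under 12 months
print(round(drop_pct(699, 330)))   # 780 Ti vs the $330 970 that matched it: ~53%
```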

Not everyone views gaming as their primary hobby. I had no problem dropping $500 on Sennheiser IE800 IEMs while most people are happy with stock $30 smartphone IEMs. At least I know in 5 years I could sell them for $400 easily, but a $1350 Titan will be worth $200. If GPUs didn't depreciate so quickly, it would be far easier to justify $2500+ for a pair. But the reality is that by Q4 2016, a pair of $1000 cards will match or even beat dual Titan Xs. I am primarily interested in seeing the performance and overclocking of the Titan X, but of course whether it's worth $1K will be subjective, based on each gamer's income level. I think the Titan X will not be a cut-down part, but I expect the consumer Ti version to have slightly faster clocks and 6GB of VRAM. I look forward to that one, and Titan X will give us a good idea of how well GM200 scales with increased specs and clock speeds.

One possible explanation why so many more gamers are willing to pay these 'new' GPU prices is that in the past we needed to upgrade our CPU platform fairly often, but nowadays one can use a modern overclocked Intel CPU for 4-5 years with little to no performance reason to upgrade for gaming. As a result, more upgrade funds are left over for the GPU.
 
Last edited:

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
One possible explanation why so many more gamers are willing to pay these 'new' GPU prices is that in the past we needed to upgrade our CPU platform fairly often, but nowadays one can use a modern overclocked Intel CPU for 4-5 years with little to no performance reason to upgrade for gaming. As a result, more upgrade funds are left over for the GPU.

Makes sense. I agree
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Tbh if someone wants to spend $1350 that's their choice; to some people that sort of money is fairly small change for a hobby (there are a lot more expensive hobbies out there). From our point of view, if they fund Nvidia, then that's more R&D money for GPUs we might actually buy.
 