HurleyBird
EDIT: Was responding to a thread crap, but decided to remove it to keep in line with Stahlhart's intentions for the thread.
Who here is going to buy this card if it's ~$1350 as rumored, and why?
Without knowing anything about the performance of the card, how can anyone answer that?
I don't think that 12GB is overkill specifically for the Titan, considering that they are probably targeting CUDA people who don't want to buy a Quadro or Tesla. A reasonably good performance card plus 12GB of video RAM for $1350 isn't that bad, to be honest. If they were trying to sell a "purely gaming" card without CUDA for $1350, then that would be totally different.
I use my current Titan for gaming, 3D rendering (Octane using CUDA) and video editing (Premiere using CUDA). I would definitely make use of the 12GB for the 3D rendering and video editing, and it would also be nice for 4K gaming. I wouldn't jump on it straight away: I'd like to see how it converts to a UK price, and I also want to see the 3D rendering benchmarks to make sure it is actually worth the upgrade. Realistically, though, the video RAM is the huge selling point for me, not the actual performance increase over the current Titan.
I have actually tried to load a few assets in Lightwave using Octane, and once or twice it has used ALL 6GB of video RAM (tried loading a highly detailed Event Horizon spaceship and didn't have enough video RAM even with 6GB).
Video rendering using Octane on a GPU is insanely fast compared to a standard desktop CPU. I think one of the major things holding the tech back for professional use is video RAM capacity. When Octane first came out, most graphics cards had roughly 2GB of video RAM. With video cards coming out with 12GB of RAM, I can see this tech starting to be taken up more seriously by both hobbyists and professionals. Just like OpenCL and mining were great for AMD, I can really see 12GB on a graphics card being a big thing for Nvidia and the uptake of its cards for CUDA. I think people have seriously underestimated the potential and possible demand for the upcoming Titan. I don't see having an extra 6GB as a minor thing, and it could really make a huge difference.
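For anyone wondering how a GPU renderer hits that wall, here's a minimal CUDA sketch of the kind of check involved. It is not Octane's actual loader, and the ~7 GiB scene size is a made-up placeholder; it just shows why a scene that exceeds free VRAM can't take the fast all-on-GPU path.

    // Minimal sketch (not Octane's real code): query free VRAM before committing
    // to uploading scene geometry and textures to the GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        size_t freeBytes = 0, totalBytes = 0;
        cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
            return 1;
        }

        // Hypothetical scene size: a heavy ship model plus textures (placeholder value).
        const size_t sceneBytes = (size_t)7 << 30;  // ~7 GiB

        printf("VRAM: %.2f GB free of %.2f GB total\n", freeBytes / 1e9, totalBytes / 1e9);
        if (sceneBytes > freeBytes) {
            printf("Scene (~%.2f GB) won't fit; needs a bigger card or an out-of-core fallback.\n",
                   sceneBytes / 1e9);
        } else {
            printf("Scene fits entirely in VRAM, so GPU path tracing can run at full speed.\n");
        }
        return 0;
    }

When that check fails, the options are basically a bigger card, cutting the scene down, or some out-of-core/CPU fallback, which is why going from 6GB to 12GB is such a big deal for this kind of workload.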
Believe it or not, brute force with x86 cores is still the preferred way for the professionals. I don't know the technical reasons why, but GPU rendering is much lower quality. That's why Pixar, for example, opts for 25,000 x86 cores in their render farm.
Along the way, it adopted SGI’s systems, and then moved to PCs equipped with NVIDIA’s graphics cards.
I expect this card to have monster performance.
With that blacked-out look and turbo boost tech, I call this the GNX of video cards!
Maybe it will be cheaper than the previous Titans since it doesn't appear to be anything more than a bigger GM204, and doesn't have the extra DP performance. Not to mention AMD is going to have competitive products with much lower prices in 3-4 months.
Before, Pixar’s lighting artists relied on thousands of small cheats that meant a scene could only be viewed from a limited number of angles. But by shifting to ray-tracing, which models the way light actually bounces around an environment, Pixar’s lighting team could free themselves to explore scenes from a wider variety of angles. And they could instantly change the way a scene was lit, shifting from light with golden tones to starker colors with a few keystrokes to change the mood of a scene.
I believe (pretty sure?) that when rendering with multiple GPUs the cards don't run in SLI/Crossfire and the VRAM is not mirrored. Also, I don't think DP is used, either. You might be better off with multiple cheaper cards for what you do.
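That matches how the CUDA runtime presents things: each card is a separate device with its own memory, and nothing is pooled or mirrored for you. A small generic sketch (not any particular renderer's code) that just walks the devices makes the point:

    // Each CUDA device reports only its own VRAM; adding a second card does not
    // give the first one more memory to load assets into.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA devices found.\n");
            return 1;
        }
        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            size_t freeB = 0, totalB = 0;
            cudaMemGetInfo(&freeB, &totalB);
            printf("GPU %d (%s): %.2f GB total, %.2f GB free\n",
                   dev, prop.name, totalB / 1e9, freeB / 1e9);
        }
        return 0;
    }

So, if I understand the Octane-style approach right, stacking cheaper cards scales render speed, but every card still has to hold its own copy of the scene; the extra GPUs don't add usable scene memory.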
Fair enough. How fast would it need to be for you to buy it?
Nobody knows this but Nvidia at this time. I don't know why everyone keeps repeating it.
GM204 has slightly less DP performance than Kepler relative to transistor count and die size, but it still outperforms GK104.
GM200 will be larger than GK110, and based on the math I had worked up, it could at least be a good DP performer. Whether it outperforms the Titan Black, who knows, but it should be a good DP card.
People are treating the facts that GM204 has a 1/32 DP/SP ratio and that Maxwell will not be used for Tesla products as too much of a certainty against its potential. GK104 had 1/24 DP/SP, which isn't a significantly better ratio. That ratio has nothing to do with what the Titan variant can get; that's up to what Nvidia permits. GK110, in the 780 Ti, also had about 1/24 DP/SP performance, so it's not a function of what a full-size die versus a smaller die can produce; it's about what is enabled in terms of FP64 compute potential.
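Some back-of-the-envelope numbers make that concrete. These are my own ballpark figures, using the publicly listed core counts and base clocks and the usual peak = cores x 2 (FMA) x clock estimate, so treat them as rough:

    // Rough peak-throughput math: DP is just SP divided by whatever FP64 ratio
    // the SKU has enabled; die size doesn't enter into it.
    #include <cstdio>

    struct Card {
        const char* name;
        int cores;        // CUDA cores
        double clockGHz;  // base clock
        int fp64Divisor;  // enabled DP/SP ratio, e.g. 32 means 1/32
    };

    int main() {
        const Card cards[] = {
            {"GTX 980 (GM204)",     2048, 1.126, 32},  // ships at 1/32 DP
            {"GTX 780 Ti (GK110)",  2880, 0.875, 24},  // full die, DP capped at 1/24
            {"Titan Black (GK110)", 2880, 0.889,  3},  // same die, 1/3 DP enabled
        };
        const int n = sizeof(cards) / sizeof(cards[0]);
        for (int i = 0; i < n; ++i) {
            double sp = cards[i].cores * 2.0 * cards[i].clockGHz;  // peak SP GFLOPS
            double dp = sp / cards[i].fp64Divisor;                 // peak DP GFLOPS
            printf("%-21s SP ~%5.0f GFLOPS, DP ~%4.0f GFLOPS (1/%d)\n",
                   cards[i].name, sp, dp, cards[i].fp64Divisor);
        }
        return 0;
    }

Same GK110 silicon in the 780 Ti and the Titan Black, yet roughly an 8x gap in DP (about 210 vs 1700 GFLOPS) purely from what's enabled, so GM200's DP fate comes down to what Nvidia decides to switch on for the Titan variant.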
Titan X could very well be a bust for DP performance. Point being, we won't know until the specs are detailed.
The fact that Nvidia isn't using it in Tesla products doesn't speak against it as much as many are arguing.
I think it's clear that it won't be a significant leap in compute performance over Kepler, so that argument is possibly true. A full GM200 might not perform any better in DP than a full GK110. That's not to say it's worse or terrible, just not a leap like the SP performance this generation.
There aren't any outstanding supercomputer orders for Nvidia at this time, and Nvidia certainly wouldn't be interested in selling Tesla users what would be a minimal increment over last generation and risking a loss of sales in the long run. Their next generation should pack a wallop for DP, and on the Tesla side of the business they'll be implementing NVLink, which will help scale performance significantly. They're working on basically introducing a new expansion standard on server motherboards, especially for POWER-based systems, which would use a whole new socket/slot standard for Nvidia GPGPUs built around an even better version of NVLink. With that on the roadmap, I expect most Tesla users, at least the supercomputer orders, will be focusing on what the next generations bring.
With NVLink on the horizon, I don't think Nvidia ever intended Maxwell to enter the Tesla market, or at least not the current version of Maxwell. I think they had intended Maxwell to be on a smaller node in the original roadmaps, but Maxwell has since evolved into a stopgap measure to hold off buyers until they can make the node jump.
It's funny how people always think their reasoning for buying a product is the correct one.
People who buy a $1350 product obviously have the money for it. They don't need to explain why.
Personally, I draw the line at $999. There is a limit on how much I'm willing to pay for a GPU.
One possible explanation for why so many more gamers are willing to pay these 'new' GPU prices is that in the past we needed to upgrade our CPU platform fairly often, whereas nowadays one can use a modern overclocked Intel CPU for 4-5 years with little to no performance reason to upgrade for gaming. As a result, more upgrade funds are left over for the GPU.