Nvidia wasted transistors on the ability to process massive, but visually imperceptible, levels of tessellation. All for benchmarks. As long as the benchmarks show it's better and the tech press has little interest in analyzing whether any of it is necessary, Nvidia will sell cards. Reality is malleable. It's why marketing is a thing.
I've said before this chip is not primarily for gaming. This is a chip meant to sell Quadro cards to devs for offline ray tracing and access to tensor cores. Everything being presented for the gaming use case has been more of a force fit. Does it work? Yes. Is it optimal? No. Realtime ray tracing is not even remotely as performant as current raster techniques. It's not as clean a case as the professional workload, which delivers what people need/want and is an outright improvement over the last generation. That's how we know what they built the chip for.
I sort of agree with this. But I don't really see anything bad about this approach - granted, I am biased, because I have a use for it and have wanted this for years. Anyway, it's IMO similar to Intel selling rebadged Xeon CPUs as Skylake-X, and nobody really blasts them for doing so, even though consumers don't really need 18 cores or AVX-512 for gaming and whatnot.
At the same time, even though I agree it's being force fit to games, I am pretty sure it's going to work just fine; all these rumors of the 2080 Ti not being able to deliver even 60 FPS at 1080p are premature, based on unoptimized demos from Gamescom. They will surely find a way to make it work properly. And if we talk about ray tracing in games being not optimal - I don't think that's going to come down to performance issues. It will be more that this hybrid approach doesn't offer enough visible difference to most people compared to a 100 percent rasterized image. For that you would need a 100 percent ray-traced picture, which is obviously a no-go in real time yet and will be for quite some time.
I do know a fair number of Quadro users and tbh they don't need fancy ray tracing. I do agree there is a big market for movies and commercials. I suspect for that Nvidia just needs to put the hardware out there with APIs to control it and lots of free support, and then the software will be updated to use it - same as CUDA slowly took over. I agree that Nvidia doing it now almost certainly means they will corner the market with proprietary libraries before anyone else has a chance to do anything. However, that still won't need to be real time, just quicker.
This release is all about real-time ray tracing. Those tensor cores aren't needed for movie rendering; they are there to de-noise, which is only required if you don't fire out enough rays to get a near-perfect result - which is what you'd do for a movie. In addition, a render farm wouldn't need all that tessellation power, so for them all that die space is wasted. No, the ray tracing + tensor core combo is definitely for real-time ray tracing - where fps is key and compromised quality is acceptable, exactly what you want for games.
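The ray-count-versus-noise tradeoff behind that argument is just Monte Carlo variance: per-pixel error falls as 1/sqrt(N) in the number of rays, so offline renderers can brute-force a clean frame while a real-time budget of a few rays per pixel has to be denoised. A toy sketch (a hypothetical "scene" where the true pixel brightness is 0.5 and each ray returns a uniformly noisy sample - not any real renderer):

```python
import random
import statistics

def pixel_noise(n_rays, trials=2000, seed=42):
    """RMS error of a Monte Carlo pixel estimate that averages n_rays
    noisy samples of a true brightness of 0.5 (toy scene)."""
    rng = random.Random(seed)
    sq_errors = []
    for _ in range(trials):
        estimate = sum(0.5 + rng.uniform(-0.5, 0.5) for _ in range(n_rays)) / n_rays
        sq_errors.append((estimate - 0.5) ** 2)
    return statistics.mean(sq_errors) ** 0.5

# Error falls as 1/sqrt(n_rays): roughly 16x the rays for 4x less noise.
# A few rays per pixel (real-time budget) leaves visible noise to clean up;
# an offline render can simply throw more rays at the frame instead.
print(pixel_noise(4), pixel_noise(64))
```

Quadrupling the ray count roughly halves the noise, which is why "just shoot more rays" is viable at movie-render timescales but not at 60 fps.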
They most likely use those Quadros just for engineering purposes - working with AutoCAD, Revit, and SolidWorks viewports, which really are faster on Quadros thanks to their optimized drivers - but not for actual final rendering. You don't even need a Quadro for that; a regular GeForce is as fast as a Quadro at that particular task. The only advantage Quadro has is VRAM capacity, but you need that only for massive projects. BTW, you can certainly use tensor cores even for offline rendering; denoising isn't only for real-time rendering.