Aren't the Tensor cores obligatory for practical use of the RT cores? Right now they can't do full ray tracing with the available hardware and must fudge (de-noise) the output. It works quite well, but it's still not fully accurate. It almost feels like they should have left the Tensor cores out and just had ray tracing as a feature. That way, the space taken by the Tensor cores could have been used to increase performance with more SMs, ROPs, caches, etc. That would also have boosted ray-tracing performance.
Maybe there's a reason why they did not do so. Moore's Law is getting jittery nowadays, but we're still getting density scaling, i.e. more transistors in a given area. What has really slowed is performance gains and, worse, power reductions. For high-end CPUs and GPUs the design becomes entirely power- and thermal-limited.
So what do you do? You add accelerators to take advantage of the abundant area. Power limits are "solved" because accelerators only run some of the time, not always, which lets that part of the chip power down.
I don't think it's a matter of taste. FXAA lowers overall IQ; edges are one thing people focus on, but the IQ of the rest of the scene also matters. FXAA indiscriminately lowers the quality of textures in the middle of polygons, as well as blurring the edges.
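For anyone wondering why FXAA can't leave texture detail alone: it's a pure post-process that only sees final pixel luminance, so any high-contrast neighbourhood looks like an "edge" to it. Below is a minimal Python sketch of that idea, not NVIDIA's actual FXAA shader, and the threshold value is made up for illustration.

```python
def luma(rgb):
    # Perceptual luminance from an (r, g, b) tuple of floats in [0, 1].
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def fxaa_like(image, threshold=0.1):
    """image: 2D list of (r, g, b) tuples. Blurs any pixel in a high-contrast neighbourhood."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = image[y][x]
            neighbours = [image[y - 1][x], image[y + 1][x], image[y][x - 1], image[y][x + 1]]
            lumas = [luma(p) for p in neighbours + [centre]]
            contrast = max(lumas) - min(lumas)
            if contrast > threshold:  # "edge" detected -- could just as well be texture detail
                out[y][x] = tuple(sum(c) / 5.0 for c in zip(centre, *neighbours))
    return out
```

Nothing in that loop knows about geometry, which is why sharp texture detail in the middle of a polygon gets softened along with the jaggies.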
No shill card was brought out. When you bring out the shill card, you probably have no arguments.
Because I don't know whether there's any (meaningful) performance boost to unlock. What I do know, though, is that one does not pay a premium over Pascal only to receive a fully unlocked product months before the 7nm gen launches. Why not just tell me why drivers won't improve Turing performance over time?
The problem is getting developer support... and that's going to be tough without the consoles supporting HW RT. And if the consoles support HW RT, that obviously means AMD will too.
This introduces an important point: dev support will typically be minimal until the consoles adopt it, and we know from history that, to keep console prices reasonable, each console generation borrows tech from the prior generation's PC mid-range. Performance-wise that puts them well behind the biggest and best on the PC, typically something like 1/3 or 1/4 the speed. RT is only just possible at 1080p on the very best cards now, so consoles won't have this for at least one more generation (a console generation being 5-6 years).
How do you define fully accurate, and why do full ray tracing when you can be smarter? There's no analytical solution to the full light transport equation except for some very special corner cases. In commercial 3D render engines you let the render converge to a good-enough state containing so little noise that the human eye can't spot it. Aren't the Tensor cores obligatory for practical use of the RT cores? Right now they can't do full ray tracing with the available hardware and must fudge (de-noise) the output. It works quite well, but it's still not fully accurate.
This is a hot card. Easily hits like 57 C under water. My 1080 Ti used to never get past 50 C.
Has anyone here got a 2080 or 2080ti?
He has a 2080 Ti, I'm guessing.
Yes, I have a 2080 Ti with an EK full-cover block. I have it overclocked to 2085 MHz on the core and 7699 on the memory with EVGA's 130% BIOS. Today it got as hot as 61 C playing Destiny 2. Has anyone here got a 2080 or 2080ti?
That makes DLSS seem quite unimpressive, and I'm blown away that it is being touted as much as it is if this is really the intent for it. I'd think Nvidia would want to focus on how they can simply outdo AMD in brute-force rendering (but then, if they try to tout the 4K rendering capability paired with the BFG displays, it really makes them seem like a joke: sure, if you can drop $5k-10k you can really show those consoles who's boss!).
Yep, very true. A "feature" that intentionally drops the resolution by design. Remember when AMD were slammed for putting adjustable tessellation in their driver? But here dropping the whole resolution automatically is OK because Jensen "OMG 10 gigarays" Huang said so. In the past something like this would have been called a cheat. Times have changed, I guess, and now it's called innovation....
If you ran this on a conventional gfx card you would use 1800p TAA vs. "4k" DLSS and get similar image quality for equivalent performance, as Hardware Unboxed showed. No, not cheating. The graphics card is still outputting a native resolution image in the end. If you ran this on a conventional gfx card it'd be crushingly slow.
But dig a little deeper, and at least using the Infiltrator demo, DLSS is pretty similar in terms of both visual quality and performance to running the demo at 1800p and then upscaling the image to 4K. Again, it's just one demo, but poring over the footage and performance data really tempered my expectations for what to expect when DLSS comes to real-world games.
Yeah, let's ignore screenshots and demo recordings and reserve judgement about the article. In fact, let's reserve judgement about all articles that examine DLSS. That last statement is, I'd think, good reason to reserve judgement about that article.
How do you define fully accurate, and why do full ray tracing when you can be smarter? There's no analytical solution to the full light transport equation except for some very special corner cases. In commercial 3D render engines you let the render converge to a good-enough state containing such a low level of noise that the human eye can't spot it.
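For reference, the "full light transport equation" here is presumably the rendering equation, which has no closed-form solution in general because the outgoing radiance appears inside its own integral:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Path tracers estimate that integral with random samples per pixel, which is exactly where the noise being discussed comes from.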
The trick with denoising is that you can get away with fewer samples and the AI can figure out the rest of it. Of course, 3D render engines and games will have different denoisers as the time budget to get a frame ready is not in the same order of magnitude - games can only afford 1-2 samples per pixel so the denoiser will have to work with a much noisier input.
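A toy sketch (in Python, not a real renderer) of why 1-2 samples per pixel is such a noisy starting point: Monte Carlo error falls off roughly as 1/sqrt(N), so an offline render at hundreds of samples per pixel is an order of magnitude cleaner before any denoiser even runs. The 30% light-hit probability below is an arbitrary stand-in for a real scene.

```python
import random, statistics

def shade_pixel(spp):
    # Pretend each random ray has a 30% chance of reaching the light (brightness 1.0).
    hits = [1.0 if random.random() < 0.3 else 0.0 for _ in range(spp)]
    return sum(hits) / spp  # the pixel's Monte Carlo estimate of its brightness

def noise(spp, trials=2000):
    # Spread of that estimate across many independent pixels = the visible noise level.
    return statistics.pstdev([shade_pixel(spp) for _ in range(trials)])

for spp in (1, 2, 16, 256):
    print(f"{spp:>3} spp -> noise ~ {noise(spp):.3f}")
# 2 spp comes out roughly 11x noisier than 256 spp (sqrt(128) is about 11.3), hence the denoiser.
```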
DLSS is a stopgap to utilize the Tensor cores for something useful until hybrid raster/ray-tracing pipelines are used in games. And the first few effects that use the RT cores, found in BF5 and SotTR, are just eye candy: more accurate shadows and reflections. Metro will up the ante, as the game will use it to mimic global illumination.
If you ran this on a conventional gfx card you would use 1800p TAA vs. "4k" DLSS and get similar image quality for equivalent performance as Hardware Unboxed showed.
PS: on a side note, it's funny to see reviewers inadvertently telling gamers they don't need to play their games at native 4K, as they can get subjectively equivalent image quality at faster framerates by lowering the render resolution.
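The arithmetic behind that side note, assuming shading cost scales roughly with the number of pixels rendered (only the 1800p-vs-native-4K comparison from the posts above is shown; DLSS's internal render resolution isn't stated here):

```python
# 3200x1800 upscaled to 4K shades only ~69% of the pixels of a native 3840x2160 frame,
# which is where most of the "free" performance in the 1800p TAA comparison comes from.
native_4k = 3840 * 2160   # 8,294,400 pixels
r1800p = 3200 * 1800      # 5,760,000 pixels
print(f"1800p renders {r1800p / native_4k:.0%} of a native 4K frame's pixels")  # -> 69%
```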
I wonder when they will share some details about how they plan to challenge the RTX line-up and tech. Although one can bad-mouth NVIDIA for the underwhelming RTX-tech adoption in current games, 3D render engines will have updates to utilize the RT cores pretty soon, and even CPU-only engines will incorporate the tech. My concern still strongly remains: are AMD going to do a 100% traditional rasterization