That's another way to use it. See, anti-aliasing is analogous to rendering to a virtual screen larger than your actual screen, then resizing the picture down to smooth it. DLSS is a super-resolution algorithm - it intelligently guesses new pixels between existing ones. That can be used to create a higher-resolution image, or to do anti-aliasing. Or both.
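As a rough illustration of the "render bigger, then shrink" idea (the filenames and the 2x factor are just placeholders I made up), downscaling an oversized render with a decent resampling filter is basically classic supersampling AA:

```python
from PIL import Image

# Pretend this is a frame rendered at 2x the target resolution
# (the filename and the 2x factor are just placeholders).
oversized = Image.open("frame_2x.png")

target = (oversized.width // 2, oversized.height // 2)

# Averaging the extra samples away while shrinking is what smooths the jaggies.
smoothed = oversized.resize(target, Image.LANCZOS)
smoothed.save("frame_aa.png")
```

DLSS is doing something much smarter than this, but that's the baseline it's being compared against.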
Not in this case. This is not a generic resize algorithm applied to AA. This is a specific Deep Learning AA algorithm. The whole point of Deep Learning is that you train it for a VERY specific task - in this case, eliminating jaggies/aliasing. You train it by giving it the input and the desired output and letting it figure out its own algorithm for how to get there. We aren't trying to create detail or blow the image up; we're trying to smooth edges.
In theory this could be a great AA method, because during training you could penalize it for blurring the scene, like so many post-process AA methods do - and this is almost certainly another post-process method like TAA/FXAA.
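Roughly what I'm picturing - and to be clear, the network shape, the loss weights, and the idea of using supersampled frames as the ground truth are all my own assumptions, not anything NVidia has published - is: feed the net aliased frames, compare against supersampled targets, and add a term that punishes smearing out edges.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a learned AA filter: aliased RGB frame in, cleaned frame out.
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
l1 = nn.L1Loss()

def edge_loss(pred, target):
    # Compare horizontal/vertical gradients so the net can't just blur
    # everything to hide the jaggies.
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return l1(dx(pred), dx(target)) + l1(dy(pred), dy(target))

def train_step(aliased, supersampled):
    # aliased: frame rendered normally; supersampled: the same frame rendered
    # at high resolution and downsampled - the "desired output" we want to match.
    pred = net(aliased)
    loss = l1(pred, supersampled) + 0.5 * edge_loss(pred, supersampled)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Whatever NVidia actually does will be far more involved, but that's the basic input/desired-output setup.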
Maybe. It seems like they'd at least need different AIs for different genres. There are 2D cartoon games, 3D cartoon games, and 3D hyper-realistic games. There's also the occasional oddball, like Telltale Games' rotoscoping style.
I would be interested if they come up with a video player that uses DLSS to make DVDs near 4K Blu-ray quality.
Again, this is not a generic resize algorithm.
Resizing is a separate DL algorithm. Those exist for processing images, though they tend to come with exaggerated claims. It really won't turn DVD into Blu-ray, let alone 4K UHD Blu-ray.
https://letsenhance.io lets you upload images for Deep Learning scaling. Try uploading a DVD screen cap and see what you get out. I bet you won't be impressed.
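If you'd rather test locally, OpenCV's dnn_superres module will run a pretrained super-resolution net on a frame. The model file and the 4x factor here are just example choices (you have to download the .pb file yourself), but it gives a feel for what DL upscaling actually does to DVD-resolution material:

```python
import cv2

# Requires opencv-contrib-python and a pretrained model downloaded separately,
# e.g. EDSR_x4.pb from the OpenCV model zoo (filename here is just an example).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")
sr.setModel("edsr", 4)  # model name and scale must match the file

frame = cv2.imread("dvd_frame.png")    # e.g. a 720x480 DVD screen cap
upscaled = sr.upsample(frame)          # DL upscale to roughly 2880x1920
cv2.imwrite("dvd_frame_upscaled.png", upscaled)
```

It sharpens edges nicely, but it can't invent detail that was never captured, which is why the DVD-to-4K claims are overblown.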
Having Tensor cores is amazing - they have almost unlimited uses. One of the first things I thought of is RAW processing of camera images.
Digital cameras don't capture every color at each pixel. They typically have a Bayer pattern filter, and you have to calculate the missing colors; those demosaicing algorithms tend to soften the image and create artifacts. But it really seems like DL could extract the maximum detail with minimum artifacts from raw files, and now that Tensor Cores could end up in more users' computers, it may be practical for someone to implement this.
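For context on what the conventional approach does - this naive bilinear demosaic is just for illustration, real in-camera pipelines are fancier, and a DL demosaicer would learn to replace the interpolation step entirely - each missing color is filled in by averaging its neighbors, which is exactly where the softening and the color fringing at edges come from:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1   # red sites
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1   # blue sites
    g_mask = 1 - r_mask - b_mask                         # green sites

    # Averaging kernels: missing values are interpolated from nearby same-color
    # pixels - this averaging across edges is what softens detail and fringes colors.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve2d(raw * r_mask, k_rb, mode="same")
    g = convolve2d(raw * g_mask, k_g,  mode="same")
    b = convolve2d(raw * b_mask, k_rb, mode="same")
    return np.stack([r, g, b], axis=-1)
```

A trained network could in principle pick interpolation behavior per region (edges vs. flat areas) instead of blindly averaging, which is where the "maximum detail, minimum artifacts" hope comes from.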
But I really wonder if NVidia is going to block users from running Deep Learning applications on consumer cards.