Ironic that NVidia, the main preacher of latency benefits, is now touting a feature that makes it worse.

Priorities matter.

Like making $$$.
You don't have to buy it from us, buy from Nvidia themselves:
https://www.nvidia.com/en-us/geforce/news/reflex-low-latency-platform/
Let's see what 8ms extra latency does, according to Nvidia research:
View attachment 68284
Doesn't tell us anything about whether users will detect the difference in input latency over the visual difference in frame rate, but most people will benefit from visual smoothness most of the time.

It's all in the Nvidia research article; they lay it out clearly that it's not about the player "detecting" the difference. Nvidia objectively measured hit accuracy and kill time:

"Aiming involves a series of sub-movements - subconscious corrections based on the current position of the crosshair relative to the target’s location. At higher latencies, this feedback loop time is increased, resulting in less precision. Additionally, at higher average latencies, the latency varies more, meaning that it’s harder for your body to predict and adapt to. The end result is pretty clear - high latency means less precision."

Only a few posts ago you were asking for blind tests, measurements. Reflect on that.
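For intuition, here is a toy sketch of that feedback-loop argument (the model, names, and numbers are mine, not Nvidia's methodology): each correction is computed from a stale view of the crosshair, and the staler the view, the sloppier the loop tracks the target.

```python
# Toy closed-loop aiming model (illustrative only, not Nvidia's methodology):
# the player nudges the crosshair toward a target, but each correction is
# based on where the crosshair appeared `delay_frames` ago.
def average_miss(delay_frames, steps=60, gain=0.3):
    target, pos = 1.0, 0.0
    seen = [pos] * (delay_frames + 1)   # delayed view of the crosshair position
    total_miss = 0.0
    for _ in range(steps):
        observed = seen[0]              # stale position the player reacts to
        pos += gain * (target - observed)
        seen = seen[1:] + [pos]
        total_miss += abs(target - pos)
    return total_miss / steps           # average distance from the target

for extra in (0, 2, 4):                 # extra frames of input-to-display delay
    print(f"{extra} frame(s) of delay -> average miss {average_miss(extra):.3f}")
```

Same correction "aggressiveness", only the delay changes; the average miss distance grows as the feedback gets staler, which is the mechanism the article is describing.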
Exactly, Nvidia is measuring something different than user perception of improved frame rate versus latency.
Blind test of user experience as in 1) can the user detect when DLSS3 is on and 2) if so, does the user prefer DLSS 3 and the latency hit versus no DLSS 3? My assumption is that smoothness wins out versus latency and users prefer DLSS 3 on even with the latency hit.
Anyone know enough about chip design to tell if it's possible or realistic to add PCIe 5 support for the fully enabled *102 chip (likely 4090 Ti) in 2023?
That's the point: it will not be smooth when you're making sudden movements, in either your motion or your aim point. It projects a false frame further away from your intended move, and when it corrects itself with a new true frame, the jump is bigger, as it has to adjust from the false position to the new one - a bigger delta.
Think about what happens if you're running and suddenly shift to the right. The false frame will have you run a bit farther along before you seem to move right. Any sudden, unpredictable change will show this, as no one can predict your intentions in the next true frame; the false frame anticipates no vector change and simply extrapolates.
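A rough back-of-the-envelope sketch of that "snap" (my own numbers and simplification, not how DLSS3 actually generates frames): if an inserted frame just continues the old heading, a sharp turn leaves a position error that the next real frame has to jump across.

```python
import math

# Back-of-the-envelope sketch (my own simplification, not DLSS3's algorithm):
# an inserted frame that keeps the old heading vs. the true position after a
# sudden turn. The gap is the "jump" the next real frame has to correct.
def snap_distance(speed_mps, frame_time_s, turn_deg):
    step = speed_mps * frame_time_s
    predicted = (step, 0.0)                                # old heading continued
    a = math.radians(turn_deg)
    actual = (step * math.cos(a), step * math.sin(a))      # new heading
    return math.dist(predicted, actual)

# e.g. a 6 m/s sprint, one generated frame shown for ~8 ms (120 fps), 90 degree cut:
print(f"~{snap_distance(6.0, 1 / 120, 90):.3f} m of correction on the next real frame")
```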
Could it work similar to how reprojection in VR works? Or would DLSS3 not be able to poll input at a higher rate than the game itself?
Won't you then have to render that new frame? If you do, you defeat the purpose of a cheap computational frame.

It works by distorting the frame.
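Roughly like this, if it helps to see it: a minimal sketch of the reprojection idea (assuming a plain horizontal pixel shift stands in for the warp, and the per-degree scale is made up; real VR runtimes do proper pose- and depth-aware warps). The point is that no new frame is rendered, the old one is just displaced toward the latest input sample.

```python
# Minimal reprojection-style warp (illustrative assumptions: a flat horizontal
# shift approximates a small yaw change; px_per_degree is invented).
def reproject_row(row, old_yaw_deg, new_yaw_deg, px_per_degree=20, fill=0):
    """Shift one row of an already-rendered frame to match the newest yaw."""
    shift = round((new_yaw_deg - old_yaw_deg) * px_per_degree)
    if shift >= 0:
        return [fill] * shift + row[:len(row) - shift]
    return row[-shift:] + [fill] * (-shift)

rendered_row = list(range(10))              # stand-in for one row of pixels
print(reproject_row(rendered_row, old_yaw_deg=0.0, new_yaw_deg=0.1))
```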
I haven't had a chance to watch this in its entirety, but I figured I'd drop it in here now for everyone to digest. I'll add my 2c in a future edit below.
Edits:
- DF explains that the DLSS3 frame is inserted between two rendered frames.
- DF surmises that the push for higher and higher fps, even if some frames are AI-generated, is meant to align with the push for high refresh rate monitors.
- There is a latency penalty for using DLSS3 FG (see below).
- Nvidia says there will be a "win some, lose some" scenario with DLSS3 FG, i.e. "there is no free lunch"
- DF acknowledges that while there are errors on the AI generated frames, it is very difficult if not impossible to notice at a high enough frame rate.
View attachment 68365
View attachment 68366
View attachment 68367
View attachment 68368
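To put a rough number on that latency penalty (my own arithmetic under one simplifying assumption, not DF's or Nvidia's figures): if the generated frame is interpolated, the newest rendered frame has to be held back while the in-between frame is shown first, which costs roughly half a rendered-frame interval on top of whatever latency the pipeline already had.

```python
# Rough sketch (my own assumption: interpolation holds the newest rendered
# frame back by about half a rendered-frame interval; real pipelines add
# queueing and frame-pacing overhead on top of this).
def added_interpolation_delay_ms(rendered_fps):
    return (1000.0 / rendered_fps) / 2      # the generated frame fills half the interval

for fps in (60, 90, 120):
    print(f"{fps} fps rendered -> {fps * 2} fps output, "
          f"~{added_interpolation_delay_ms(fps):.1f} ms of extra delay")
```

Under that assumption, a 60 fps render presented as 120 fps picks up on the order of the 8 ms extra delay discussed earlier in the thread.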
If you had two systems, both with 120 fps output, one real 120 fps and one DLSS3, the difference to the user would be very obvious in any game where timing is important.
nVidia was measuring the ACTUAL impact of input latency.
I am not sure why you are trying to talk around the importance of input latency. In what world is a soap opera effect more important than having the game do what you want it to?
When NVidia revealed the half frame lag, I suspected it was waiting for the next frame before doing the in-between, despite all the other pre-release info implying it was a forward projection. This makes it even MORE like TV motion smoothing.

Can always count on Nvidia reinventing things that have existed for a long time and then marketing them as something new, a la Apple, and then have the uninformed masses soak up the marketing.
I do think it is fair to say that the artifacting is going to be hard to notice when the frame is displayed for 8 ms (120 fps), but c'mon... it's hard to say that the image quality doesn't take a hit, because it does.
View attachment 68374

Yeah, I think my initial assessment of DLSS3 was correct. I see absolutely no reason to use it over DLSS2.x or native. Also, with the launch of Ada/DLSS3, Digital Foundry has confirmed beyond all doubt that they're, uhm, rather partial to Nvidia. Can't take them seriously at all.

That's obviously the worst-case scenario; one could easily find a frame that would be pretty much perfect. It will depend on the scene. It's also debatable whether one can actually notice it in motion - errors are more severe in extremely fast motion, and in those cases you can't really see them. The improved smoothness probably also makes the game look more enjoyable. In any case, this is something that will probably vary greatly between different people.
Looking at the latency comparisons... no thanks! The Cyberpunk and Spider-Man latency numbers are actually quite terrible.

Still much less than native without Reflex, so that's only valid if you compare against the same NVIDIA card with DLSS enabled. Even in the worst case it is basically identical to native with Reflex on. How that will actually feel is another thing entirely, though.