Yes, but despite all of that reasoning against it, DLSS works with only minor inconsistencies, which are most prominent with text alignment or movement.
In the same way Simpson's rule can estimate the area under a curve with greater and greater precision as delta x gets smaller, frame prediction gets more accurate as delta t gets smaller. Again, the fundamental theorem of calculus is in fact correct: as these time slices approach zero, so does the error. If the time slice is small enough and the AI prediction is good enough, the error becomes exceedingly small. nVidia of course figured this out and decided it was worth pouring tens of millions, or perhaps hundreds of millions, of dollars into developing it. I'm running some rough numbers to get a feel for it. And yes, I do like to play with the numbers!
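To make the shrinking-error idea concrete, here's a rough, purely illustrative toy model (my own numbers and motion function, not anything nVidia has published): treat on-screen motion as a smooth function of time and compare a simple two-sample linear prediction against the true value as the time slice shrinks.

```python
# Rough, purely illustrative sketch (not NVIDIA's algorithm): if on-screen
# motion is a smooth function of time, the error of a simple linear
# prediction over a time slice dt shrinks rapidly as dt gets smaller.
import math

def position(t):
    """Hypothetical smooth object motion along one axis (made up for the demo)."""
    return math.sin(2.0 * math.pi * t)

t = 0.25  # arbitrary moment in time, in seconds
for dt in (0.016, 0.008, 0.003, 0.001):  # prediction intervals in seconds
    # Predict the next sample from the two most recent "rendered" ones.
    predicted = position(t) + (position(t) - position(t - dt))
    actual = position(t + dt)
    print(f"dt = {dt * 1000:5.1f} ms  error = {abs(predicted - actual):.6f}")
```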
I'm not sure you're fully comprehending how short a time interval 3ms is in terms of human perception. For example, in the Olympics a false start is called if an athlete is shown to have a reaction time of less than 100ms. 10ms would be an order of magnitude faster than human reaction time, and 3ms is roughly a third of that. As the time interval gets smaller, the error between rendered and predicted frames becomes smaller.
The theory behind what nVidia is doing is solid. We may not like it, but at the end of the day it does produce a meaningful fps improvement with very few visual artifacts, as Linus pointed out in this short demo of the 5090.
Interesting theory crafting, but that isn't what NVidia is doing. They are still holding the current frame and the past frame, generating the in-between frames, and only after displaying the fake ones do they show you the now quite late "current" frame. The goal of this technology is smoothing, so they can't use predictions. Predictions WILL drift further off track and need to snap back when a correct frame arrives, and that would actually introduce micro-stutter. Instead they use interpolation between two known frames, so there is ALWAYS a smooth transition and no abrupt corrections.
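A toy sketch of that difference (illustrative only, made-up positions, not NVidia's actual algorithm): an interpolated in-between value always lands between the two real frames, while an extrapolated guess can overshoot and then has to visibly snap back when the next real frame arrives.

```python
# Illustrative toy model, not NVidia's algorithm: interpolation between two
# known frames stays bounded by them, while extrapolation can overshoot and
# then needs a visible correction when the next real frame arrives.
def lerp(a, b, w):
    """Blend between two known values; the result always lies between them."""
    return a + (b - a) * w

# Object position captured in three real frames; the motion reverses direction.
frame0, frame1, frame2 = 0.0, 10.0, 5.0

# Interpolated in-between frame: bounded by the real frames on either side.
mid = lerp(frame0, frame1, 0.5)         # 5.0, always a smooth step

# Extrapolated (predicted) frame 2, built only from frames 0 and 1.
predicted = frame1 + (frame1 - frame0)  # 20.0, it kept going straight
correction = abs(predicted - frame2)    # 15.0 unit snap-back on screen

print(f"interpolated mid frame: {mid}")
print(f"extrapolated guess: {predicted}, real frame: {frame2}, snap-back: {correction}")
```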
Smoothing is paramount...
You need to stop taking the word of corporate mouthpieces at face value.
This is NOT an actual FPS booster; that part is fake. It's a screen smoothing feature - the same basic thing that many TVs do. As misleading as some of the TV sellers are (cough Samsung cough), they still didn't go as far as claiming it would double (now quadruple) your FPS; they called it what it is - smoothing.
I'd have ZERO issues if NVidia added this and called it motion smoothing. The problem is, they use it to mislead people about actual frame rate. Mislead isn't a strong enough word.
They use it to lie to people.
When they released the 40 series, they bombarded YT with tons of extremely misleading (lying) ads showing DLSS 3 doubling or tripling FPS with no explanation at all. Just pure deception.
Say you have a game running at 60 FPS and you turn on FG or MFG. To someone not playing, the action on screen just got smoother; to someone playing, the lag and response time just increased to be nearly as bad as running at 30 FPS.
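Rough back-of-the-envelope numbers for that (assumed figures; this ignores render queue, input polling and display latency): holding the newest real frame back for interpolation makes what you see about one extra frame time stale.

```python
# Back-of-the-envelope latency sketch (assumed numbers; ignores render queue,
# input polling and display latency): holding the newest real frame back for
# interpolation means what you see is roughly one extra frame time old.
def frame_time_ms(fps):
    return 1000.0 / fps

native_60 = frame_time_ms(60)    # ~16.7 ms between real frames
native_30 = frame_time_ms(30)    # ~33.3 ms between real frames

# With FG/MFG on top of a 60 FPS render rate, the newest real frame reaches
# the screen roughly one native frame time late.
with_fg = native_60 + native_60  # ~33.3 ms

print(f"60 FPS native : newest real frame ~{native_60:.1f} ms old")
print(f"60 FPS + FG   : newest real frame ~{with_fg:.1f} ms old")
print(f"30 FPS native : newest real frame ~{native_30:.1f} ms old")
```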
This could have just been another (marginally) useful tool in the toolbox, but instead NVidia is mainly using it for deceptive marketing.