Again, FSR 3 is most likely not going to be an upscaling technology, just as DLSS 3 isn't. You keep conflating temporal upscaling with frame interpolation, even though they are completely different techniques, as the sketch below illustrates.
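To make the distinction concrete, here's a minimal sketch of the two ideas. This is illustrative Python with placeholder math, assuming nothing about either vendor's actual implementation:

```python
# Conceptual sketch only; floats stand in for frames, and the "algorithms"
# are trivial placeholders, not anyone's real implementation.

def temporal_upscale(low_res_frame, history):
    # Temporal upscaling (DLSS 2 / FSR 2 style): reconstructs a high-res
    # version of the CURRENT frame by reusing samples from past frames.
    # Every output frame still reflects freshly rendered game state.
    return 0.7 * low_res_frame + 0.3 * history  # placeholder blend

def interpolate_frame(frame_n, frame_n_plus_1):
    # Frame interpolation (DLSS 3 frame generation style): synthesizes an
    # in-between frame from two frames that were already rendered. It can't
    # be shown until frame N+1 exists, and it contains no new player input.
    return 0.5 * (frame_n + frame_n_plus_1)  # placeholder midpoint

print(temporal_upscale(1.0, 0.9))   # current frame refined with history
print(interpolate_frame(1.0, 2.0))  # synthetic frame between two real ones
```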
Both upscaling and interpolation have their pros and cons, so I'm sure that AMD will use their new AI accelerators to improve the upscaling portion of FSR.
You argued that DLSS 3 will improve image quality compared to DLSS 2, even though it has already been released and we know that isn't the case. DLSS 3 produces way more artifacts than both DLSS 2 and FSR 2, yet you don't comment on that at all, even though you critique the far smaller differences between DLSS 2 and FSR 2. Also, input lag with frame generation will be worse than simply running at half the FPS, because at half the FPS every displayed frame is actually based on new input (rough numbers in the sketch below).
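To put rough numbers on that latency point, here's a back-of-the-envelope sketch. All figures are illustrative assumptions under a very simplified model (no Reflex-style mitigation, latency approximated as frame time), not measurements:

```python
# Very simplified latency model; all numbers are illustrative, not measured.

render_time_ms = 33.3  # assume the GPU renders real frames at 30 FPS

# Native 30 FPS: a displayed frame reflects input sampled roughly one
# frame time before it appears on screen.
native_30fps_latency = render_time_ms

# Frame generation to "60 FPS": frame N can only be interpolated against
# frame N+1, so N is buffered until N+1 finishes rendering. The display
# shows twice as many frames, but input-to-photon latency gets longer.
generated_60fps_latency = render_time_ms * 2  # N held back one full frame

print(f"native 30 FPS:         ~{native_30fps_latency:.0f} ms input lag")
print(f"interpolated '60 FPS': ~{generated_60fps_latency:.0f} ms input lag")
```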
I dare you to quote me where I said DLSS 3 will improve image quality compared to DLSS 2. I don't recall saying that anywhere. In fact, I did a word search of the entire second page for every mention of DLSS, and nowhere do I even bring up DLSS 3. I've personally used it only once and didn't notice any image quality or input lag issues, but then Nvidia Reflex was also turned on at the time.
But speaking of DLSS 3 and input lag: a user over at Guru3D tested the latest driver and claimed it reduced the driver overhead for DLSS 3, which of course helps with both input lag and overall performance. People should know by now never to underestimate Nvidia in matters like these; they are pouring resources into these technologies, and the input lag is going to come down over time.
Guru3D DLSS 3 overhead
You continuously exaggerate the benefits of Nvidia features and downplay AMD's features...
And I could say the same about you, that you continuously exaggerate how much progress AMD has made, while downplaying Nvidia's lead.
But AMD can still add extra ray accelerators to the CUs.
And they are doing just that, but it's not the same thing as dedicated RT hardware that Nvidia uses. AMD's approach is more hybridized.
I don't agree on the clarity, but the other two are a bit worse on FSR 2. However, I dispute that this is a big difference.
Yep, a lot of this stuff is subjective to be sure.
Again, you don't call out the far bigger artifacting with DLSS 3, which suggests to me that you are rather biased.
Again, there you go piping up about DLSS 3 when I hadn't even mentioned it in this thread until now.
ML is not necessarily better than manual coding. There is nothing ML can do that a coder couldn't do in theory, although it may be harder to implement, or even infeasible given how much effort it takes. There are also advantages to hand coding, like tunability.
ML dramatically accelerates how quickly developers can implement algorithms to improve AI-based upscaling and RT acceleration. It's a big factor behind how RT and upscaling performance and quality have improved by leaps and bounds in a relatively short period of time. The toy example below shows the idea.
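As a toy illustration of both points (a completely made-up 1D example, nothing like a real upscaler): a hand-tuned parameter and a learned one can reach the same answer, but the learned path finds it automatically.

```python
# Toy example: insert one sample between each pair in a 1D signal, as a
# weighted blend of neighbors. Everything here is a made-up illustration.

low  = [0.0, 2.0, 4.0, 6.0]                     # "low-res" input
high = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]      # target "high-res" output

def upscale(signal, w):
    out = []
    for a, b in zip(signal, signal[1:]):
        out += [a, (1 - w) * a + w * b]          # blend weight w is the knob
    out.append(signal[-1])
    return out

def loss(w):
    return sum((x - y) ** 2 for x, y in zip(upscale(low, w), high))

# Hand-tuned path: a coder picks w by inspection and experimentation.
hand_w = 0.4

# "ML" path: the same parameter found automatically by gradient descent.
w, lr, eps = 0.0, 0.01, 1e-4
for _ in range(200):
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # numeric gradient
    w -= lr * grad

print(f"hand-tuned w={hand_w}: loss={loss(hand_w):.3f}")
print(f"learned    w={w:.2f}: loss={loss(w):.3f}")
```

Both approaches could in principle land on the same answer; the difference is how much human effort it takes to get there, which scales badly once the "knob" is millions of network weights instead of one blend factor.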
XeSS actually runs way worse on non-Intel GPUs; on those cards it's way worse than FSR 2.
I know it runs and looks worse on non-Intel GPUs, but when it's run on an Intel Arc GPU, it's the closest DLSS alternative we have. And Intel is just getting started; hopefully their next GPU is more competitive at the high end.