> How about you if the lag is still there?
Even by their own definition, the lag will still be there. It should be less than before, not greater.

> How about you if the lag is still there? Will you still believe the salesman despite the evidence, and struggle to come up with a new theory of why it lags just like it's doing interpolation, while believing it's not interpolating?
Hey, maybe the fake frames don't identify as interpolated frames. Maybe they think they are real frames, just like their buddies.
> And I'm sure that's true, just like the 5070 matching the 4090.
Still hung up on raster perf? It has been dead for a while. Can't play any game today on RTX 40 series without DLSS 3.5. In general, turning off RT/DLSS/FSR/XeSS in recent games leads to poor visuals, poor performance and poor gameplay.
> Nvidia didn't say anything about FG latency being reduced, at any point, at least I don't remember it; I think they said it improved fps for x2.
> If latency improved, they would say it out loud.
The 5090 has better raster than the 4090. That itself should make a difference in FG latency.
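For what it's worth, here's the back-of-the-envelope version of that argument as a quick sketch (my own simplification: it assumes the generated frames are interpolated between two real frames and ignores the generation cost itself, Reflex, and display overhead):

```python
# Interpolation-style frame generation has to hold real frame N back until frame
# N+1 has been rendered before it can insert frames between them, so it adds
# roughly one real frame time of latency. Illustrative numbers only.
def added_lag_ms(base_fps: float) -> float:
    return 1000.0 / base_fps  # ~one real frame of extra hold-back, in ms

for fps in (60, 80):  # e.g. a card with better raster pushing the base fps up
    print(f"{fps} fps base -> ~{added_lag_ms(fps):.1f} ms extra latency with FG")
# 60 fps base -> ~16.7 ms; 80 fps base -> ~12.5 ms.
# Better raster means less added lag, but it never reaches zero.
```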
> I don't get why they are going after more fake frames....
Getting off topic, but it would actually be cool to have neural-network-imagined parts of the game so everyone can have a unique experience or something like that. That would be cool.
They should be spending time building NVidia Game Assistant.
NGA plays the games for you to max out your levels and gear without all the tedious grinding, so when you do have time to play you can go straight to having fun!
> so everyone can have a unique experience or something like that. That would be cool.
No, that's slop. I play games to enjoy a given game designer/writer/artist's vision.

> Gaming reimagined - personalized games to suit what you like.
We have more than enough slop already.
> Still hung up on raster perf? It has been dead for a while. Can't play any game today on RTX 40 series without DLSS 3.5. In general, turning off RT/DLSS/FSR/XeSS in recent games leads to poor visuals, poor performance and poor gameplay.
What?
> Maybe someday either AMD or Intel will say "Enough!" and get a card that focuses purely on raster over x8 or x16 fake frames.
Unless this AI craze ends, that's not happening. Looking at AMD's pathetic CES keynote further proves my point.
> looks like you need to work with their "Slang" shader language, so you're going to need to maintain two totally different implementations of all your shaders.
Slang is now openly governed by the Khronos Group.
Slang empowers real-time graphics developers with innovative features that complement existing shading languages, including modular code development, portable deployment to multiple target APIs, and neural computation in graphics shaders. Hosting under multi-company governance at Khronos will enable and foster industry-wide collaboration to drive Slang’s continued evolution.
> Maybe someday either AMD or Intel will say "Enough!" and get a card that focuses purely on raster over x8 or x16 fake frames.
Bit too late for that, buddy. This is what AMD has to say on the matter, and looking at the financials, can't really rebut that...
"Improving performance in the areas that gamers care about most - ray tracing, ML Ops for FSR4 and ML Super Resolution"
> This guy seems to be trying to analyze the DF C2077 video. There are more posts.
That's no 'guy'. He's the dude behind FXAA, FSR1 and other neat and thin things.
> He's the dude behind FXAA, FSR1 and other neat and thin things.
Didn't know that. Interesting post.
> This guy seems to be trying to analyze the DF C2077 video. There are more posts.
I mean, all I'm getting from that thread is that DLSS4 isn't artifact-free, but it's still superior to DLSS 3 in pretty visible ways. I don't get why he's crapping on it so hard as a general product, though. Isn't DLSS4, and even DLSS3, good enough for most people? Personally I can't spot 99% of the artifacts in the DF video even with the video speed at 50%. I don't even usually notice much difference between FSR3 Quality and native at 1440p, and according to most reviewers DLSS3 is superior to FSR3, and DLSS4 is so far shaping up to be another improvement. I don't exactly have the sharpest eyesight/perception, but neither do many consumers, so I don't get the outrage here over upscaling artifacts. Upscaling isn't "free frames", but it is significantly more frames at a somewhat lower image quality. Not sold on MFG/FG, but upscaling is definitely useful for consumers.
Huh? Seems to me he's very AMD-biased. If NV did RT wrong, then so did Intel and Apple.
> That's no 'guy'. He's the dude behind FXAA, FSR1 and other neat and thin things.
Yeah sure, definitely a good person to get technical and non-biased info from. 🙄
The thing about AI that concerns me is the fact that while we can code the neural net and train it, at the end of the day we don't know how it is doing what it is doing. I'll give you an example. It has always been impossible to remove a stem from a mixed song. A stem is just a track, like the vocal or bass track. Over the years we've used EQ to accentuate part of the audio spectrum, or used phase inversion to cancel out signals panned straight up the middle, but there is no way to undo the mixing.
Then a year or so ago I came across Ultimate Vocal Remover, and it does the impossible. It removes the bass, vocal or drum stem from an already mixed track. We train it by basically showing it a mixed track and the bass track and saying, "Now you do it to this song file." It's amazing and disturbing at the same time, because the darn thing has figured out how to do something we can't. Not something that would take us a long time to do; something we can't do. "We've done it, Jim. We've finally created a computer smarter than us! That must make you quite happy, Spock?" Not real Star Trek dialog, but there was some dialog like that in the M5 computer episode.
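To make that concrete, here's a rough sketch of how that kind of training generally works (my own toy illustration in PyTorch, not UVR's actual model or code): the network sees the mixed track's spectrogram and learns a mask that, applied to the mixture, reproduces the isolated stem it was shown.

```python
# Toy supervised stem separation: learn a per-frequency-bin mask on the mixture
# spectrogram so that the masked mixture matches the target stem. Random audio
# stands in for a real dataset of (mixture, stem) pairs.
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    def __init__(self, freq_bins: int = 513):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(freq_bins, 256), nn.ReLU(),
            nn.Linear(256, freq_bins), nn.Sigmoid(),  # mask value in [0, 1] per bin
        )

    def forward(self, mix_mag):             # mix_mag: (frames, freq_bins)
        return self.net(mix_mag) * mix_mag  # estimated stem magnitude

def magnitude(wave, n_fft=1024, hop=256):
    window = torch.hann_window(n_fft)
    spec = torch.stft(wave, n_fft, hop, window=window, return_complex=True)
    return spec.abs().T                     # (frames, freq_bins)

model = MaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Show it a mixed track and the bass track": placeholder 4-second clips.
mix_wave, stem_wave = torch.randn(4 * 44100), torch.randn(4 * 44100)
mix_mag, stem_mag = magnitude(mix_wave), magnitude(stem_wave)

for step in range(200):                     # minimize |masked mixture - stem|
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(mix_mag), stem_mag)
    loss.backward()
    opt.step()

# At inference time, apply the learned mask to a new song's spectrogram and reuse
# the mixture's phase to turn the masked magnitude back into audio.
```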
How long before we get so reliant on this technology that there is really no need to dig too deeply into problems? AI is already coding; how long before it is doing day-to-day engineering and other such tasks? And should the power go out, are we instantly back to the Stone Age because nobody can actually figure anything out anymore? I mean, in like a hundred years of this?
Reminds me of the Star Trek episode "For the World Is Hollow and I Have Touched the Sky." A civilization is living inside a giant asteroid completely run by a computer. The computer starts to fail and they are helpless because no one even knows what a computer is. Lucky for them, Spock does.
Anyway, I'm sure we'll figure out how this will work, but I think the next 5 or so years are going to be a bumpy ride.
DLSS5 will have AI-predicted player input to reduce latency.