Not for the 5070 relative to the 4070. It has a higher advertised boost clock and a 50W higher TDP to brute-force any problems.
That's why I said Game clock (heh). NV GPUs can automatically go above the advertised boost clock in games easily.
Better architecture? Better perf in 4K due to much bigger memory bandwidth? The process node is the same or almost the same. I don't understand why people expected a huge performance increase.
I don't think so. It should be the same at minimum.
Due to the extra competition at the low end I at least think the RTX 5060 Ti will be a beefier card, maybe even the RTX 5060. The MSRP of the 5070 seems to hint that way. But let's wait and see ...
> That's why I said Game clock (heh). NV GPUs can automatically go above the advertised boost clock in games easily.
Yep, but the 25% higher TDP will help clock rates for the 5070 relative to the 4070 even if your theory about a dense-prioritized implementation is correct.
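(For reference, going by the spec-sheet TGPs: the 4070 is rated at 200W and the 5070 at 250W, so that +50W is where the ~25% comes from.)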
Beware NVidia marketing. That video appears to be a mockup: there is no more detail until they zoom in, and you should be able to see the difference before that.
Also, somewhere next to all the new Transformer models I read "Beta", so IMO this is just a work in progress with a faked video.
Basically more misleading NVidia marketing until something real shows up.
> Jensen said something interesting there about the changes in the SM. "Two dual shaders, one is for floating point and one is for integer." Which he also clarified saying something like "concurrent execution, equal performance for both". Which sounds like an evolution of Turing and Ampere, with Ampere/Lovelace being half a step.
> Maybe 128 FP32 (64 FP32 + 64 FP32) + 128 INT32 (64 INT32 + 64 INT32) per SM. That's a lot of integer performance, but at least floating point would not be conditionally half rate. This would of course make TFLOPs scale differently again in games vs the previous 2 gens. Hopefully we get a whitepaper, but it's not yet certain.
Yep, something doesn't add up. I could use the formula to calculate the Shader FP32 number of the RTX 4090, i.e. 82.6 TF. But using the same formula on the RTX 5090's specs I get 104.9 TF: far lower than the 125 TF shown in the presentation.
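(The formula here is presumably just 2 ops per clock for FMA × shader count × boost clock: 2 × 16,384 × 2.52 GHz ≈ 82.6 TF for the 4090, and 2 × 21,760 × 2.41 GHz ≈ 104.9 TF for the 5090 if you take the spec-sheet boost clock. For what it's worth, 21,760 shaders over 170 SMs is exactly 128 FP32 lanes per SM.)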
> Well Digital Foundry (yeah-yeah Nvidia shills, whatever) also seems to have had access to it:
> some artifacts are still very noticeable, but all-in-all it looks quite good
Haha, it's time for Nvidia to tell the press to tell the consumers how prone to image artifacts the previous technology was. All kinds of issues will pop up from the DLSS version that was supposed to be "better than native" just 24 hours ago.
> far lower than the 125 TF shown in the presentation.
I think that's for the full fat GB202 which is - of course - not shipping.
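(That would at least fit the arithmetic: full GB202 is 192 SMs, i.e. 24,576 shaders, and at a similar ~2.55 GHz boost 2 × 24,576 × 2.55 GHz comes out to roughly 125 TF, assuming that's really what the slide was quoting.)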
This pattern does NOT bode well for the 5060. The 5060 Ti might have a chance if it's built on the same die as the 5070 and has a 192-bit memory bus...
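(A 192-bit bus means six 32-bit memory channels, so with 2 GB GDDR7 chips that would be 12 GB without clamshell, versus 8 GB on a 128-bit bus.)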
> Haha, it's time for Nvidia to tell the press to tell the consumers how prone to image artifacts the previous technology was. All kinds of issues will pop up from the DLSS version that was supposed to be "better than native" just 24 hours ago.
That is one of the major advantages of Nvidia Neural Rendering Real Time Ray Tracing technology. It gets better every generation, unlike boring old raster which doesn't remove artifacts every generation.
That is SO true. And Digital Foundry is *very* guilty in this. I still remember they only ever showed the DLSS 2.0 trailing artifacts in Death Stranding once DLSS 2.2 or something was out. *Never* did they mention any serious artifacts in DLSS 2.0 prior to that. Rinse, repeat.
> It might be a good time to start talking about IPC decreases. If the 5090 was doing well in raw perf (raster/RT) then they would not have focused so much on new fake frames. The 4090 got away with it because it was actually good: double normal perf, and x4 if you really want fake frames, so that was OK.
Why add 1 frame in Raster when you can add 2 using AI!!!
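(Rough math, ignoring overhead: 2x frame gen displays about twice the rendered frame rate and 4x about four times, e.g. ~30 rendered fps shows as ~60 or ~120, but input response still tracks the ~30 fps base, which is why the raw raster/RT numbers still matter.)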
You are probably right about the 5060. The reasons why I think the 5060 Ti will be different this time (probably indeed using the same die and a 192-bit bus) are:
1. Last time there really was no competition (Navi 32 was too expensive to produce and Navi 33 was a stupid chip, barely faster than last gen). This time there probably is, and from both vendors.
2. They'll want to have at least some sub-$600 SKUs to entice more budget-oriented RTX 2060 SUPER, RTX 3060 12GB, and possibly RTX 3070 holdouts to upgrade who don't want to dish out around 600 bucks for a 5070.
> I don't consider DF Shills
Me neither, I just see them as temporary Nvidia employees during the first month of each Nvidia release.
I don't consider DF Shills, but I do note that when they get these exclusive first looks, they tend to be very "uncritical" (never bite the hand that feeds you). It's like when Ferrari flies people to wine country while it's miserable winter elsewhere: everyone is very uncritical of the cars they are testing, well, except that one guy who was and was never invited back.
I have a strong memory of the first DF exclusive on 40-series Frame Generation, where they basically only praised it and said nothing critical. Later, when they had their own copy like everyone else, they did a proper critical deep dive.
Will watch the video later.
> So is 5070 basically a 4070 Super minus $50 with moar framegen?
A pre-OC'd 4070 with more memory bandwidth and more fake frames. Also no optical flow hardware (greatest thing since sliced bread about 26 months ago).
What competition? AMD's 9060 series uses Navi 44, a very tiny chip with a 128-bit bus, and AMD is apparently sticking with slower GDDR6. Which means even if NVidia sticks with a 128-bit bus, it will have much more memory BW than AMD...
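(Back-of-the-envelope, assuming something like 28 Gbps GDDR7 vs 20 Gbps GDDR6: on a 128-bit bus that's 128 × 28 / 8 = 448 GB/s vs 128 × 20 / 8 = 320 GB/s, i.e. roughly 40% more bandwidth at the same bus width.)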
> Well, they seem to be very critical of DLSS3 in this video. I wonder why they didn't show all these flaws with DLSS3 when RTX4k was launched? Funny how that works.
Not only that, there are plenty of flaws visible in the B-roll they are intentionally not talking about. Look at the lower right part of the screen (the tree especially) in this shot for instance (timestamped to 4:55):
> AMD bite NV and they make own RDNA3?
Speaking of this, this meme is quite well made: