The 3080 has 70% more bandwidth than the 3070, but it won't perform 70% better. Scaling architectures up for gaming will get harder and harder without also increasing the compute workload. It doesn't make sense to go overboard with bandwidth when the benefit isn't there. Microsoft and Sony have it...
The RTX 3070 has 20 TFLOPs at the advertised boost clock and will be around 21 TFLOPs at the typical "gaming" clock. 16 Gbit/s GDDR6 is enough for 20 TFLOPs to deliver ~20% more performance than a RTX 2080 Ti.
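Quick back-of-envelope for those numbers (the 5888 FP32 lanes and the 256-bit bus are the rumored GA104 configuration, not confirmed specs):

```python
# Rough FP32 throughput and bandwidth estimate for the rumored 3070 config.
fp32_lanes  = 5888            # rumored GA104 shader count (assumption)
boost_clock = 1.725e9         # advertised boost clock in Hz

tflops = fp32_lanes * 2 * boost_clock / 1e12          # 2 ops per FMA
print(f"FP32 throughput: {tflops:.1f} TFLOPs")        # ~20.3 TFLOPs

mem_speed_gbps = 16           # per-pin data rate discussed above
bus_width_bits = 256          # assumed GA104 memory bus
bandwidth_gbs = mem_speed_gbps * bus_width_bits / 8
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 512 GB/s

bytes_per_flop = bandwidth_gbs / (tflops * 1000)
print(f"Bytes per FLOP: {bytes_per_flop:.3f}")        # ~0.025 B/FLOP
```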
Last post, but you can skip through the video and look at the slides:
Both slides contain 100% fake "leaks", and none of it has become reality. There is a difference between leaked roadmaps and photos and having real insider information about architectures.
A photo is nothing. We got leaks of the 3080 cooler in June. nVidia has sent Quadros to partners. It will be released in 1 1/2 weeks.
He was wrong on every "non-leaked" point about Ampere. Go back and watch his videos again.
Don't trust clickbait YouTubers like that.
This guy is making money with clickbait and fakes. I don't know why anybody believes him after the Ampere fiasco. He was totally wrong on everything that wasn't already leaked on Twitter.
It would be a 90% improvement in perf/W over the 5700 XT. He is just making his numbers up. At the same time, Sony can't even ship a 10 TFLOPs console with less than 100 W for the GPU and memory...
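For reference, this is how such a perf/W claim can be sanity-checked; the performance multiple and board powers below are placeholders for illustration, not his actual figures:

```python
def perf_per_watt_gain(perf_ratio, old_power_w, new_power_w):
    """Relative perf/W improvement of a new card vs. an old one."""
    return perf_ratio * old_power_w / new_power_w

# Hypothetical example: a card at 2x the 5700 XT (225 W) drawing 235 W
# would come out ~1.9x, i.e. ~90% better, in perf/W.
print(perf_per_watt_gain(2.0, 225, 235))  # ~1.91
```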
Reflections and direct and indirect lighting have a huge impact on the whole picture. Claiming that these only "enhance [a] part of the visuals" is just wrong.
Screen-space effects have a limited effect because they don't have any information from outside the screen space.
Microsoft went from 68 GB/s to 320 GB/s off-chip. I think that is a statement that their approach with the One/S has failed. On-chip RAM must still be refreshed to hold its contents, and increasing the L2 cache would lead to yield problems.
No, nVidia has warps with 32 threads. They would have to double everything within an SM. So with Ampere they doubled the FP32 throughput with relatively few transistors.
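Rough per-SM comparison (FP32 lane counts as described in the public Turing/Ampere whitepapers; the clock is just an illustrative value):

```python
# Per-SM FP32 peak, Turing vs. Ampere GA10x, at an assumed 1.7 GHz.
clock_hz = 1.7e9

turing_fp32_lanes = 64    # 64 FP32 + 64 INT32 (separate pipes)
ampere_fp32_lanes = 128   # 64 FP32 + 64 shared FP32/INT32

for name, lanes in [("Turing SM", turing_fp32_lanes),
                    ("Ampere SM", ampere_fp32_lanes)]:
    gflops = lanes * 2 * clock_hz / 1e9   # 2 ops per FMA
    print(f"{name}: {gflops:.0f} GFLOPs FP32 peak")
```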
Navi 22 is 3070 level. You need to stop with these baseless claims. Navi with 60 CUs barely reaches the 3070. The 3070 will be ~60% faster than the 5700 XT with only 46/48 SMs. GA104 is enough for everything AMD will throw at them.
No, the XBSX is not equal to a 2080S. It performs around a 2080. And a console SoC is always more efficient than a standalone GPU. To be faster than a 3070, AMD has to deliver 70% more compute performance within 220 W.
What? A 60 CU RDNA2 will only be about as fast as a 3070. nVidia doesn't need any other chips. The 3070 is faster at compute, faster at raytracing, and faster at DL.
Maybe AMD needs something bigger. But nVidia has every corner covered.
Yes, when the compute workload dominates, Ampere will be so much faster. Here, from the Iray blog: https://blog.irayrender.com/
Up to 2.6× faster than the Quadro RTX 6000 (full Turing, 285 W).
Yes? Look up what the "Osborne effect" is. The 3080 is more than twice as fast as the 5700 XT, which was sold last year for >$400 with 8 GB. The demand will be there.
Only TSMC gets a "very nice deal". Without competition, every IHV will pay big money to get wafers.
So Ampere is a big deal for the whole business. The last big non-TSMC GPU from nVidia was NV40 from IBM.
DLSS is maybe 10% of it. Raytracing should be ~2× faster than Turing. So either the 350 W is not the actual power consumption, or the 3090 is around 2.5× faster.
There will be so many versions of AIB cards. My Gigabyte 2080 Ti Gaming OC has a TDP of 300 W with a 360 W power limit. That is 50 W/40 W more than the reference TDP.
Yes, DP 2.0 would be good. But unlike with DP 1.4 and HDMI 2.0b, most use cases are covered with DP 1.4a (DSC) and HDMI 2.1. And I don't really think that 4K@120Hz with HDR is a problem DP still needs to solve...
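Rough bandwidth check for that case, 4K@120Hz with 10-bit RGB HDR (the blanking overhead is an approximation, exact timings vary):

```python
# Link bandwidth needed for 4K @ 120 Hz, 10 bpc RGB, vs. what the links carry.
h, v, hz, bpc = 3840, 2160, 120, 10
blanking = 1.07                       # approx. CVT-R2 reduced-blanking overhead
bits_per_pixel = bpc * 3              # RGB, no chroma subsampling

required_gbps = h * v * hz * blanking * bits_per_pixel / 1e9
print(f"Required: ~{required_gbps:.1f} Gbit/s")        # ~32 Gbit/s

dp14_payload   = 25.92   # DP 1.4 HBR3, 4 lanes, after 8b/10b coding
hdmi21_payload = 42.67   # HDMI 2.1 FRL, 48 Gbit/s, after 16b/18b coding
print("Fits DP 1.4 uncompressed:", required_gbps <= dp14_payload)     # False -> needs DSC
print("Fits HDMI 2.1 uncompressed:", required_gbps <= hdmi21_payload) # True
```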
SoC GPUs are clocked very low on TSMC's process. Look at the Kirin 980 to Kirin 990, or Qualcomm's transition from Samsung's 10nm LPP to TSMC's 7nm process. They all went wide and reduced the clock rate.
And the XBSX has around 12 billion transistors. That is 1/3 of GA102.
There don't exist any performance numbers outside of Timespy Extreme. So just wait for the reveal. Without performance and real-world power numbers, it is useless to discuss. And btw: my 2080 Ti at 360 W hits only 7400 points. So a 3090 would be around 40% more efficient...
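For the math behind that ~40%: the 3090 score and the 350 W board power below are the assumed leaked figures, not confirmed numbers:

```python
# Perf/W comparison from Timespy Extreme scores and board power.
score_2080ti, watts_2080ti = 7400, 360    # my card, as stated above
score_3090,   watts_3090   = 10000, 350   # assumed leaked score / TDP

eff_gain = (score_3090 / watts_3090) / (score_2080ti / watts_2080ti)
print(f"perf/W gain: {eff_gain:.2f}x")    # ~1.39x, i.e. ~40% more efficient
```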