Considering the PS5 and Xbox have cut-down GPUs, the performance difference should be noticeable. The PS5's RDNA2-based GPU is a 36 CU part; the lowest Big Navi for PC is a 60 CU part, correct? There was some discussion of the graphics not being very good in the thread where I got that video link. We don't know how old that gameplay footage is, and the game isn't due for another month. It's also built atop a game that came out a year ago and wasn't designed with RT in mind. It's a good effort.
TechPowerUp had this to say about the Series X GPU. I'm afraid I don't know enough about AMD GPUs to understand how powerful a CU is, or whether a dual CU is weaker than two separate CUs.
Microsoft in its Hot Chips 32 presentation detailed the SoC at the heart of the upcoming Xbox Series X entertainment system. The chip mostly uses AMD IP blocks, and is built on TSMC N7e (enhanced 7 nm) process. It is a 360.4 mm² die with a transistor count of 15.3 billion. Microsoft spoke about...
I bring up the consoles because I was thoroughly confused by your post. That video is PS5 footage, not PC footage. It doesn't seem like there are plans to bring it to PC for now. Your 30% figure should be higher given the cut-down GPU in the consoles. This new generation of consoles brings a lot of value to the console market, but I don't think they'll be on the same footing as PCs. Close, but not the same.
I agree, close but not the same, but let's compare how close. Take a 3090: per TechPowerUp it's about 2.5X an RX 5700 at 4K. How much faster will the PS5 be relative to a 5700? Let's assume a 5% IPC gain plus the clock difference, with a hypothetical 2115 MHz average actual clock (anyone can redo the calculation with their own IPC and clock projections). That gives nearly +29% for the PS5 over a 5700, but I'll go with +25%, assuming no compression-efficiency advantage in the RBEs leaves a small deficit since the bandwidth is the same 448 GB/s (I won't go into details). With these assumptions, the 3090 is just 2X a PS5.

Why do I say just 2X? Because in many games that's the frame-rate ratio between 4K and QHD. If you check TechPowerUp's numbers for a Sapphire 5700 XT Nitro+ Special Edition (around -5% from this assumed PS5 level), the QHD average fps is around 1.8X the 4K average. Of course that's an average and it depends on the game and engine, but coding for a fixed hardware environment with the necessary optimizations, it's not illogical to assume development teams will extract closer to 2X than 1.8X on average.

(Please don't compare 3090 scaling, it will only mean you don't know how things work, lol. By the way, the 3090 is not just +10% over a 3080 because of some scaling wall Nvidia hit with the Ampere design; the 3080 is already system limited, not just CPU limited. If the rumors about Cypress Cove are true, within 4(?) months the 3080 should gain at least another +2% over the 2080 Ti going from Comet Lake-S to Rocket Lake-S: PCI-Express 16X Gen4, single-thread performance, usage of upcoming Gen4 SSDs, etc.)
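The arithmetic above can be checked with a quick back-of-the-envelope sketch. Everything here is the poster's assumption, not a measurement; in particular, the RX 5700 average clock (~1720 MHz) is inferred from the +29% figure, since the post doesn't state it:

```python
# Sanity check of the forum estimate above. All inputs are assumptions
# from the post, not measured numbers.

RX5700_CLOCK_MHZ = 1720   # assumed RX 5700 average game clock (inferred)
PS5_CLOCK_MHZ = 2115      # poster's hypothetical PS5 average clock
IPC_GAIN = 1.05           # assumed RDNA2-over-RDNA1 IPC gain

# Raw uplift from clock and IPC (both GPUs are 36 CU parts):
ps5_over_5700 = IPC_GAIN * PS5_CLOCK_MHZ / RX5700_CLOCK_MHZ
print(f"PS5 vs RX 5700 (raw): +{(ps5_over_5700 - 1) * 100:.0f}%")  # ~ +29%

# The post trims this to +25% to account for the shared 448 GB/s bandwidth:
ps5_over_5700_trimmed = 1.25

# TechPowerUp 4K relative performance: RTX 3090 ~= 2.5x an RX 5700.
rtx3090_over_5700 = 2.5
rtx3090_over_ps5 = rtx3090_over_5700 / ps5_over_5700_trimmed
print(f"RTX 3090 vs PS5: {rtx3090_over_ps5:.1f}x")  # 2.0x
```

Plugging in a different IPC or clock projection just changes `ps5_over_5700`; the 2X conclusion holds as long as the trimmed uplift lands near +25%.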
You don't have to analyze it at a theoretical, system-limited level: just check the 3090 in games that are light on CPU resources (60 fps on consoles is a good indication) and whose engines are designed to scale to high refresh rates (very hard). There you'll see the true difference between the 3090 and the 3080.

Do you think the only reason Nvidia pursued the ray-tracing path is that they see an advantage over the competition? They must have run the simulations: just throwing more raster GPU performance into the mix no longer yields near-perfect scaling like before, because the other system advancements have to happen concurrently. Of course I'm talking about the long run, not the immediate future. (Although at the Turing launch Jensen was referring to something else: how rasterization visual quality per pixel will be much harder to scale in the future, and how all these cheating raster techniques that try to simulate realistic-looking scenes break down as resolution and precision increase, making the fake-looking result much easier to perceive.)

Anyway, Nvidia is further ahead than some people think. Although I think Nvidia's core team is brilliant, giving up the consoles to AMD was a major mistake for Jensen (of course he'll say things like human resources and time are finite and talent is hard to hire, blah blah blah; it was a mistake). Where would AMD be without all these collaborations from the 360 era till now (on the technology level; don't underestimate Sony's and MS's contribution to the design choices and optimizations), and with the whole f...ing industry optimizing their engines for them, they're still at 20%. (Please don't defend Jensen based on the 80% market share, or because he may have had concerns about monopolistic regulation, etc. He made a big mistake, a billions-of-dollars mistake. Of course I wouldn't change him for anything; lol, I'm mainly saying this because in the last presentation he wasn't himself, low energy and excitement. We want the old Jensen back 🤭.)

Anyway, back to the PS5: it's not irrational to accept that a PS5 will do at QHD what a 3090 can do at 4K (on engines optimized for consoles, eventually all of them...).
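A quick pixel-count check makes the "PS5 at QHD ≈ 3090 at 4K" claim easier to evaluate (the 1.8X figure is the TechPowerUp 5700 XT average cited earlier; the rest is arithmetic):

```python
# Pixel-count check for the QHD-vs-4K equivalence claim.
pixels_4k = 3840 * 2160
pixels_qhd = 2560 * 1440
ratio = pixels_4k / pixels_qhd
print(f"4K has {ratio:.2f}x the pixels of QHD")  # 2.25x

# Observed QHD->4K fps scaling averages ~1.8x on PC (TechPowerUp 5700 XT
# data), i.e. below the 2.25x pixel ratio, because not all GPU work scales
# with pixel count. If console-targeted optimization pushes that toward
# ~2.0x, a GPU half as fast as a 3090 could plausibly match the 3090's
# 4K frame rate while rendering at QHD.
```

So the claim hinges on console engines extracting closer to 2X than the 1.8X seen on PC, which is the assumption made above.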