I'd be surprised if it were faster than this, but it really can't be much slower, if at all.
In any case, calling it a 7800 XT is meh considering how close to the 6800 XT it's going to be. I assume it's going to have a perf/W advantage, but it's much smaller than it should be.
Makes me wonder if it would have been better to just make RDNA2 die shrinks with a few optimisations instead... That would have meant no top end, though.
I think that's the point, though? The 6000 series cards should be basically gone, and they don't yet have a real replacement for that market segment.
That's always the question. But it begs another: what if they'd split compute off years back into separately designed chips, and kept making purer pixel-processing designs like TeraScale? What would that be capable of at modern transistor densities? Especially paired with smart modern use of bandwidth (e.g. NAND dedicated to texture streaming, and the memory bandwidth to feed it: a 512-bit bus with modern GDDR pushing something like 1 TB/s), would we even need to bother with DLSS/FSR and the like? Further, would that actually make transitioning to path tracing easier, or would it be easier to integrate path-tracing-focused hardware into designs like that? Would splitting compute from the pixel-processing designs have made the transition to chiplet designs easier? Keep SLI/Crossfire, but have one card be all pixel-processing focused and another be compute, etc. (which maybe would then be used in games for more than graphics, so we'd have been getting physics, better AI, and so on).
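As a quick sanity check on that 1 TB/s figure, here's a rough back-of-the-envelope sketch; the per-pin rates are illustrative GDDR6 speed grades I'm assuming, not numbers from any specific card:

```python
# Back-of-the-envelope memory bandwidth for a hypothetical 512-bit card.
# Pin rates below are common GDDR6 speed grades (assumed for illustration).

def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) * per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

for pin_rate in (16, 18, 20):
    print(f"512-bit @ {pin_rate} Gbps/pin -> {bandwidth_gb_s(512, pin_rate):.0f} GB/s")

# Prints 1024, 1152, 1280 GB/s -- so ~1 TB/s or more is plausible on a 512-bit bus.
```

So the "roughly 1 TB/s" figure holds up even at the lower end of current GDDR6 speeds.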
I'm actually astounded we didn't see AI grow out of video games, or at least see games serve as a major test bed for its development. But I think that's because of how the industry went.
I honestly think Nvidia's push to compute and CUDA might have been the biggest limiting factor in gaming history, since it basically forced the industry in that direction. Around that time they bought PhysX and locked it away, which stifled physics processing (they attempted to pivot it to the compute units in order to justify pushing those into graphics cards, which is why I think having compute-focused designs separate from graphics likely would have made more sense). They pushed compute with a bunch of half-baked gaming aspects, many of which had to be overcome with ever more clever tricks, and it eventually led to trying to undo much of that lighting work with ray tracing, and then DLSS and the like to compensate for the limited, but still performance-crippling, way they did that. We'd almost certainly have had a much better transition to higher resolutions. I honestly don't think games would have been worse for it (possibly even better, as I think we might have seen a push to realistic textures paired with advanced geometry). I also think there's a high likelihood it might have accelerated path tracing for lighting (bypassing all the work to use compute for that, which is now effectively being undone, while we're still stuck with all the compute-focused parts of the chips that can't be used for path tracing). I have a hunch it would make for much more responsive games today, and it would have made porting/updating older games easier as well.
I don't think Nvidia is singularly to blame, but they were the driving force. Microsoft, for instance, has culpability too: I think it would've prevented the whole mess that happened in the DX11 era (arguably DX10 started that era, but I think the compute and abstraction mess was more DX11), with DX12 attempting to bypass all that abstraction but then getting saddled with baggage from other moves (again, partnering with Nvidia on ray tracing, which has screwed things up since).

I think it also would have made things like VR easier, and maybe we'd be seeing real 8K rendering, and for VR an individual GPU per eye (so for full 8K rendering in VR you'd be doing 4K per eye to boost framerates). I think it would've kept multi-monitor setups (the Eyefinity era) feasible, since pixel-pushing power would've kept pace, so we'd likely see better sim rigs, be it racing or flight sims, with physics for sims also being more robust because compute processing would be dedicated rather than limited by integration into the graphics pipeline (and graphics limited the other way around). And I think it would've helped graphics cards go multi-chip, in that they would have been working on that the whole time, so we'd be getting more sensibly sized chips, with heat and power spread across multiple GPUs for those who want that capability. For $1000 designs I think we'd be looking at single GPUs that would do 4K/120 easily, and likely 8K/60 at somewhat lower settings.
Another thought: I think it would've enabled more competition. We were actually primed for several mobile-focused companies to potentially make larger GPUs. That went nowhere once things got so compute-focused, and because of all of that mess (the DX11 era, the patents, and all the rest), it set mobile gaming back as well, and many of the SoC makers shifted focus elsewhere. Heck, I think the whole situation stifled things so much that it effectively made it easy for Apple, who notoriously cares little about games and gaming performance, to stun people by offering designs that would be good for games (and I think we would have been seeing similar designs in the PC space much earlier, because it's a natural fit for the type of processing I'm talking about; Intel kinda dabbled but only half-baked it with that one Radeon chip and their eDRAM endeavor or whatever, because they could get away with that).
It also highlights other issues, like AMD buying ATI and the subsequent problems that came from a weakened AMD, which took pressure off Intel and Nvidia and let them become staid and lethargic, spending time on other things while the PC industry languished (for Intel it was all their weird nonsense in the early 2010s, like the fashion stuff and the contra-revenue to try to force their mediocre mobile designs, plus pushing Hyper-Threading, which we've since seen turn into a security fiasco; for Nvidia it was the ability to start dictating things, which in graphics led to the DX11-era mess, where their focus on the software side of rendering gave them an even bigger advantage and stifled competition). All of that also played a role in how open source went during that time. I recall Linux making quite a bit of inroads in the 2000s, but then the 2010s hit and things took a step backward in many ways, at least on the consumer side, basically requiring Google-sized companies to do anything there, whereas I feel like open source could've provided a legit alternative to Windows had that momentum continued (I think browsers are a good example: Firefox made huge inroads and then stalled out as things changed and Google started dominating via Chrome and Android). And similarly in graphics, that era was when open source was at its weakest (OpenGL had so little support).
Sorry, 3AM random thoughts.