Market distortion! That's what low supply and low price sensitivity introduce. Crypto already looks wobbly, and the NFT market is already on its way to crashing. If Ethereum's long-delayed switch to proof of stake ever happens (at this point another chain might beat ETH to it), it could mean the end of the high GPU prices we've seen over the past two years.
So that's pretty simple really.
What's more interesting is that I found one study showing how much data is re-used between frames in graphics rendering. It's a lot: you could theoretically cut memory bandwidth substantially with an even larger cache, one that holds not just render buffers but scene resources from previous frames as well. The catch is that the amount of data involved is huge. Unigine Valley, a benchmark that's 9 years old at this point, re-uses up to 260MB between frames.
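To put that 260MB figure in perspective, here's a back-of-envelope sketch (my own arithmetic, not from the study) of how much DRAM traffic a cache big enough to hold those re-used resources could absorb, as a function of frame rate:

```python
def reuse_bandwidth_gb_s(reused_mb: float, fps: float) -> float:
    """DRAM bandwidth (GB/s) avoided if `reused_mb` of per-frame
    re-reads are served from cache instead of memory.
    Assumes each re-used byte is read once per frame; real savings
    would be larger if resources are touched multiple times a frame."""
    return reused_mb * fps / 1000.0

print(reuse_bandwidth_gb_s(260, 60))   # 15.6 GB/s saved at 60 fps
print(reuse_bandwidth_gb_s(260, 144))  # 37.44 GB/s at 144 fps
```

Even in this conservative one-read-per-frame model the savings are real but modest next to the hundreds of GB/s a high-end card moves, which is why the cache mostly pays off on the hottest, most frequently touched data rather than on raw resource footprint.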
All this means even the highest proposed RDNA3 cache size, 512MB, couldn't keep everything resident at max settings (256MB is useful for 4K+ render targets on its own). And the lowest proposed size, 128MB, wouldn't be able to keep much at all. Still, AMD could cut some bandwidth by increasing cache. If they're really planning on just a 256-bit bus for the highest-end part, it would require 512MB; there's no way you'd get away with 256MB while delivering double (or more) the performance of a 3090.
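A toy model (my assumption, not anything AMD has published) makes the bus-width argument concrete: with cache hit rate h, only a (1 - h) fraction of requests reach DRAM, so effective bandwidth is amplified by roughly 1 / (1 - h), assuming the cache itself isn't the bottleneck. The 18 Gbps GDDR6 speed below is a hypothetical; the 3090's 936 GB/s is its stock spec.

```python
def required_hit_rate(raw_gb_s: float, target_gb_s: float) -> float:
    """Cache hit rate needed for a bus with `raw_gb_s` of raw bandwidth
    to behave like `target_gb_s` of effective bandwidth, under the
    simple 1/(1-h) amplification model."""
    return 1.0 - raw_gb_s / target_gb_s

raw = 256 / 8 * 18     # 256-bit bus at a hypothetical 18 Gbps GDDR6 = 576 GB/s
target = 2 * 936       # roughly double a 3090's 936 GB/s

print(required_hit_rate(raw, target))  # ~0.69
```

A sustained ~69% hit rate across all workloads is a tall order for 256MB when single titles can already blow past that footprint, which is the intuition behind needing the 512MB configuration.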
Questions about really fast camera movement, raytracing, and so on also come up. You could see very visible stutters on a camera cut, though maybe a single dropped frame on a cut would be seen as worth it. There's also the consideration of just how much data is being streamed in and compressed today anyway. And this, once again, totally discounts the Navi 33 rumors: the 6900 XT already uses 128MB or more on some tasks, so cutting the bus width in half sounds totally impossible.
There do seem to be some more bandwidth savings available from cache structures. But the caches would need to be absolutely huge, and probably quite expensive; at some point more expensive than just putting a wider bus on the chip.