I think your example is flawed.
The RX 6800 was designed to compete with the RTX 3070, and on your charts it does that very well, beating it in every test.
A better comparison would be the RX 6800 XT vs the RTX 3080, which is more interesting:
Still among the best GPUs you can buy, let's take an updated look at the battle between the GeForce RTX 3080 and the Radeon RX 6800 XT... (www.techspot.com)
Same cache as the RX 6800, same memory bandwidth as the RX 6800, yet it keeps up with the RTX 3080 much more closely. There is more going on than just cache and memory bandwidth.
-------------------------------------------------------------------------------------------------
I think you need to rethink your perspective on the whole subject. It is all trade-offs.
Memory bandwidth: every signal line of the memory bus eats up a lot of space on the chip for its contact pad, power delivery circuitry, and noise control. Memory bandwidth is expensive, power hungry, and manufacturing defects there are more harmful.
Cache: also uses a lot of space on the chip, but it is power efficient, and defects can be planned for with redundancy, so they do not ruin the entire cache. However, if a chip does not have enough memory bandwidth to begin with, all the cache in the world is not going to do much.
Past a certain threshold, different for each rendering resolution, adding more cache is more effective than adding more memory bandwidth. Yes, that threshold is lower at 1080p than it is at 4k, but it still holds at 4k. It is a balancing act. Add in compute resources, clock speed, etc., and it is never going to be a simple question of memory bandwidth vs cache.
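As a minimal sketch of why that threshold exists, assuming a simple hit/miss split of memory traffic: if a fraction h of requests hit the on-die cache, DRAM only has to serve the (1 - h) misses, so effective bandwidth scales roughly as raw bandwidth divided by the miss rate. The hit rates in the script below are illustrative assumptions, not measured numbers; they just encode the fact that a fixed-size cache hits less often as the working set grows with resolution.

```python
# Back-of-envelope model: if a fraction `hit_rate` of memory traffic
# hits the on-die cache, DRAM only serves the misses, so the bandwidth
# the shader cores effectively see is roughly dram_bw / (1 - hit_rate),
# capped in reality by the cache's own bandwidth.

def effective_bandwidth(dram_bw_gbps: float, hit_rate: float) -> float:
    """Effective bandwidth under a simple hit/miss traffic split."""
    return dram_bw_gbps / (1.0 - hit_rate)

RX6800XT_BW = 512.0   # GB/s, 256-bit GDDR6 plus 128 MB Infinity Cache
RTX3080_BW  = 760.3   # GB/s, 320-bit GDDR6X, no large on-die cache

# Illustrative hit rates only: bigger working sets at higher
# resolutions mean fewer hits in a fixed-size cache.
for res, hit in [("1080p", 0.75), ("1440p", 0.65), ("4k", 0.55)]:
    eff = effective_bandwidth(RX6800XT_BW, hit)
    print(f"{res}: assumed hit rate {hit:.0%} -> ~{eff:.0f} GB/s "
          f"effective vs {RTX3080_BW} GB/s raw on the RTX 3080")
```

Even with the pessimistic 4k hit rate, the modeled effective bandwidth of the 512 GB/s bus stays above the RTX 3080's raw 760.3 GB/s. The model ignores latency and the cache's own bandwidth ceiling, so treat it as an illustration of the threshold, not a benchmark prediction.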
This generation AMD went for more cache, and it worked for them. Nvidia went for more bandwidth, and it worked for them.
To claim that cache does not work at 4k is incorrect, however: AMD's cache-heavy designs with 256-bit memory buses work very well at 4k. The RX 6800 XT is within 5% of the RTX 3080 at 4k, while having only 67% of the memory bandwidth* and about 94% of the board power**.
*512 / 760.3 GB/s
**300 watts / 320 watts (reference board power; 250 watts is the plain RX 6800)
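For concreteness, here are the same ratios as a quick script; the inputs are the vendors' public spec-sheet numbers (memory bandwidth and reference board power), not my own measurements:

```python
# Spec-sheet inputs for the two cards compared above.
rx6800xt = {"bw_gbps": 512.0, "power_w": 300.0}  # 256-bit GDDR6, 300 W TBP
rtx3080  = {"bw_gbps": 760.3, "power_w": 320.0}  # 320-bit GDDR6X, 320 W TGP

bw_ratio = rx6800xt["bw_gbps"] / rtx3080["bw_gbps"]
power_ratio = rx6800xt["power_w"] / rtx3080["power_w"]

print(f"bandwidth:   {bw_ratio:.0%} of the RTX 3080")     # -> 67%
print(f"board power: {power_ratio:.0%} of the RTX 3080")  # -> 94%
```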
Looking forward, with the RTX 4000 series already rumored to break power supplies, I suspect you will see Nvidia start moving toward cache-heavy designs as well. Power consumption is going to start to matter.