Don't know where to post this, but Gamers Nexus did a great video looking at VRAM. The 3DMark results were the most interesting part for me. I hope he hears back on this; I'd never expect the card with more VRAM to lose performance as MSAA is increased.
Vega seems to have some issue with MSAA (people speculate it's related to the ROPs, though I think Vega 64 showed issues even accounting for that: Fiji/Fury didn't drop off to the same degree despite a similar ROP count and similar memory bandwidth with half the memory). I was curious whether the memory situation would change things, and it doesn't seem like it did (although it might have improved over Vega 64, I'm not sure).
Vega seems like it's compute/shader heavy (as seen when they loaded up on shaders in that video). I'm still curious whether that might actually be good for this new ray-tracing, but I doubt AMD puts a lot of resources into software support for it on older hardware, and I kind of hope they're moving toward a big overhaul focused on future products (speaking purely of software, things like the NGG features that allegedly got punted to Navi and won't be enabled on Vega). Heck, they seem to have put quite a bit of effort into improving their Linux support starting with Vega 20 (which makes sense, as it was really targeting markets where that matters).
I think the only thing that would really leverage Radeon VII's VRAM/bandwidth is loading up on lots of crazy-high-quality textures. Even then, Nvidia's compression algorithms would probably make the difference hard to notice, unless a game was designed around that idea (like id's Rage with its megatexturing). In modern games, when some graphical thing stands out to me for poor quality, it's often related to texturing, so I could see big gains in visual quality if there were enough bandwidth/VRAM to sustain it. I even wonder if you could pre-render high levels of ray-traced effects onto the textures (varying with factors like lighting and environment, so time of day, weather like rain, etc.) and then use that as a cheat: you'd get the visual benefit of ray-tracing for most of the scene without the real-time hit. Couple it with some onboard NAND, where you'd load the entire game, or at least the things the GPU needs like textures, so you'd have the quickest streaming and the capacity for several versions of similar textures for different situations.
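To make the "pre-baked ray-tracing" idea concrete, here's a minimal sketch of what the runtime side might look like: several variants of a texture are baked offline under different conditions, and the engine just picks the closest match instead of tracing rays. Everything here (the file names, the `SceneConditions` fields, the scoring) is made up for illustration, not from any real engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SceneConditions:
    time_of_day: float  # hours on a 24-hour clock
    raining: bool

# Pretend each entry is a texture baked offline with ray-traced
# lighting for those conditions (hypothetical file names).
BAKED_VARIANTS = {
    (6.0, False): "wall_dawn_dry.ktx",
    (12.0, False): "wall_noon_dry.ktx",
    (12.0, True): "wall_noon_rain.ktx",
    (20.0, False): "wall_dusk_dry.ktx",
}

def pick_variant(cond: SceneConditions) -> str:
    """Choose the baked variant whose conditions best match the scene."""
    def mismatch(key):
        baked_time, baked_rain = key
        # Circular distance on a 24-hour clock, plus a large penalty
        # for a wet/dry mismatch.
        dt = abs(cond.time_of_day - baked_time)
        dt = min(dt, 24.0 - dt)
        return dt + (0.0 if baked_rain == cond.raining else 12.0)
    return BAKED_VARIANTS[min(BAKED_VARIANTS, key=mismatch)]
```

So at 13:30 in the rain you'd stream in `wall_noon_rain.ktx` rather than doing any real-time work; the obvious cost is that VRAM and storage scale with the number of baked variants, which is exactly where the extra 16 GB and bandwidth would come in.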
Which reminds me, we don't hear a lot about texture filtering these days. The last time I recall it getting much scrutiny was about 10 years ago, when people were comparing filtering differences between AMD/ATi and Nvidia in Trackmania.