Clearly Maxwell benefited from its much larger L2 cache, since you have 128-bit memory bus GPUs like the GTX 950 and 960 matching AMD's 256-bit memory bus GPUs.
GTX950/960 are slower than R9 380/380X/280X, and that difference grows even larger at higher resolutions.
https://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html
Something seems off with The Witcher 3 on VLIW cards with the updated game/drivers, judging by the 6970 here.

Around launch time:
http://pclab.pl/art63116-7.html

Newer:
http://pclab.pl/art66374-5.html

Comparing the two, the 6970's performance is a lot lower while the rest of the cards are more or less the same.
That does look bad. VLIW is going to suffer more and more now that AMD is no longer optimizing drivers for it. Even though Fermi is doing better than the HD6970, The Witcher 3 is aging that architecture too. It seems older architectures are getting bottlenecked by TW3.
HD7850/7870 vs. 6970/580 at launch, from the benches you linked:
HD7870 = 49.8 / 43.4 / 42.6 / 50.7 = avg 46.63
GTX580 = 37.7 / 29 / 36.5 / 37.5 = avg 35.18
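If you want to double-check that math, here's a quick Python sketch (the four FPS values per card are the ones quoted above from the linked launch benches):

```python
# Quick check of the launch-era averages quoted above.
hd7870 = [49.8, 43.4, 42.6, 50.7]   # FPS results from the linked benches
gtx580 = [37.7, 29.0, 36.5, 37.5]

avg_7870 = sum(hd7870) / len(hd7870)   # ~46.6 FPS
avg_580 = sum(gtx580) / len(gtx580)    # ~35.2 FPS
lead = avg_7870 / avg_580 - 1          # ~0.33, i.e. ~33% faster
print(f"{avg_7870:.2f} vs {avg_580:.2f} -> HD7870 ~{lead:.0%} faster")
```

So even at launch, the 7870 averaged roughly a third faster than the 580 in those runs.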
Today an HD7870 costs $90. This just goes to show that future-proofing with $500+ flagship cards for next-gen games that arrive 4-5 years later doesn't work well. It would actually be cheaper to buy a 6970, throw it in the garbage, and buy an R9 270 to play TW3 today than to keep using the $500 580, and it looks even worse considering the HD6950 was $299 and unlocked to a 6970.
but I think the 7970 is aging A LOT better than the 5800 series did.
The 7970 OC has very few weaknesses for its time compared to the HD5850. It has 3GB of VRAM vs. 1GB on the 5850, it has big overclocking headroom vs. much smaller OC headroom on the 5850/6950, it has lots of memory bandwidth and lots of shading power, and it benefits greatly from AMD focusing its driver optimizations on GCN. Also, back then graphics improved at a much more rapid pace than they do today. Since the HD7970 is more powerful than the PS4's GPU and many modern AAA games target the PS4 (Just Cause 3, Far Cry Primal, The Division, Watch Dogs, Far Cry 4, Dragon Age Inquisition, etc.), it's no surprise that the 7970 OC is still viable for 1080P gaming.
The performance difference between the 2012 HD7970GHz (aka 7970 OC) and the 2009 HD5850 is just massive compared to the 2012 HD7970GHz vs. the 2013 R9 290X.
@ 1080p:
HD7970GHz is 2.38X faster than HD5870 (~5850 OC)
vs.
R9 290X is just 31% faster than HD7970GHz. With a 1.2GHz OC, the R9 290X might squeeze out 45%, but that's about it.
http://www.techspot.com/article/942-five-generations-amd-radeon-graphics-compared/page9.html
It is crazy to think that from Sept 2009, when the HD5870 launched, to January 2012, when the HD7970 launched, AMD increased performance 2.38X, yet the 290X wasn't a big improvement at all in the grand scheme of things. Even with Fury X, AMD is nowhere close to 2.38X faster than the R9 280X/7970GHz. It will take a card >30% faster than the 980Ti to get a 2.38X lead over the R9 280X at 1080P, i.e. to match in relative terms what HD5850 OC vs. HD7970GHz looked like.
R9 280X = 99%
GTX980Ti = 179%
=> 99% x 2.38X ~ 236%
https://tpucdn.com/reviews/ASUS/R9_380X_Strix/images/perfrel_1920_1080.png
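Here's the same projection worked out in Python (a quick sketch using the chart ratings just above; 2.38X is the 1080p gap from the TechSpot review):

```python
# How high a card would need to rate on TPU's 1080p chart to replicate
# the HD5870 -> HD7970GHz jump (2.38X), using the ratings quoted above.
r9_280x = 0.99     # R9 280X rating on the TPU chart
gtx980ti = 1.79    # GTX 980 Ti rating
target = r9_280x * 2.38          # ~2.36 -> a ~236% rating
needed = target / gtx980ti - 1   # ~0.32 -> ~32% faster than the 980 Ti
print(f"target rating: {target:.0%}, uplift over 980Ti: {needed:.0%}")
```

That's where the ">30% faster than the 980Ti" figure comes from.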
At 1600p, the HD7970GHz was 2.95X faster than the HD5870 per TechSpot's January 2015 review (333% vs. 113%). To put that into perspective, it would take a card with a rating of 292% on TPU's relative-performance chart to get to where the HD7970GHz sits vs. the HD5870 in 2015 @ 1440/1600p.
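Same math at the higher resolution (333% and 113% are TechSpot's 1600p ratings; 99% is the R9 280X's rating on the TPU chart):

```python
# 1600p gap from TechSpot's ratings, projected onto the TPU chart.
gap = 333 / 113        # ~2.95X (HD7970GHz vs. HD5870)
target = 0.99 * gap    # ~2.92 -> a ~292% rating would be needed
print(f"gap: {gap:.2f}X, required chart rating: {target:.0%}")
```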
WOW.