After reading some reviews (no good deep dives yet, unfortunately) it seems RDNA2 is roughly 80-100% faster on average than the outgoing Vega 8, but in some cases it's much more than that.
Horizon Zero Dawn, for example: 17 vs 41 fps average at 1080p low. What does Vega choke on in those types of games?
Is it 8 ROPs vs 16, one shader engine vs two, DDR4 vs DDR5, the difference in CU count, or just a combination of all of those?
On paper RDNA2 has roughly 60-90% higher TFLOPS and ~50% more memory bandwidth, combined with a 175% higher fillrate and who knows how much higher geometry rate.
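For anyone who wants to sanity-check those on-paper ratios, here's a quick back-of-envelope sketch. The clocks, ROP counts, and memory configs are my assumptions (Vega 8 in a 4800H with dual-channel DDR4-3200 vs a 12-CU RDNA2 part with dual-channel DDR5-4800, both at a guessed sustained clock), so the exact percentages shift with whatever clocks the parts actually hold:

```python
# Back-of-envelope iGPU spec ratios. All figures below are assumptions
# (sustained clocks especially), not measured values.
vega = dict(cus=8, clock_ghz=1.60, rops=8, bw_gbs=51.2)    # DDR4-3200 dual channel
rdna2 = dict(cus=12, clock_ghz=2.00, rops=16, bw_gbs=76.8) # DDR5-4800 dual channel

def tflops(g):
    # FP32 throughput: 64 shaders per CU, 2 ops per clock (FMA)
    return g["cus"] * 64 * 2 * g["clock_ghz"] / 1000

def fillrate(g):
    # Pixel fillrate in Gpixels/s: ROPs * clock
    return g["rops"] * g["clock_ghz"]

for name, fn in [("TFLOPS", tflops), ("fillrate (Gpix/s)", fillrate)]:
    old, new = fn(vega), fn(rdna2)
    print(f"{name}: {old:.2f} -> {new:.2f} ({new / old:.2f}x)")
print(f"bandwidth (GB/s): {vega['bw_gbs']} -> {rdna2['bw_gbs']} "
      f"({rdna2['bw_gbs'] / vega['bw_gbs']:.2f}x)")
```

With those assumed clocks the compute gap lands near the quoted 60-90% and bandwidth at ~50%; the fillrate multiple is the one that moves the most with clock speed, since ROP count doubled on top of it.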
I have a 4800H laptop with the iGPU only and have noticed resolution scaling does not give as much benefit as it should in many games, especially newer ones, so there must be other bottlenecks (geometry, which doesn't really scale with resolution?).
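That resolution-scaling observation is consistent with a simple frame-time model: part of the frame cost is resolution-independent (geometry, draw-call overhead) and part scales with pixel count (shading, fill, bandwidth). A toy sketch with made-up millisecond costs shows why dropping resolution barely helps once the fixed part dominates:

```python
# Toy frame-time model (illustrative numbers, not measurements):
# frame time = fixed per-frame cost + cost proportional to pixel count.
def fps(width, height, fixed_ms, per_mpix_ms):
    mpix = width * height / 1e6
    return 1000 / (fixed_ms + per_mpix_ms * mpix)

# Pixel-bound case: 1080p -> 720p roughly doubles fps
print(f"{fps(1920, 1080, 2, 20):.1f} -> {fps(1280, 720, 2, 20):.1f} fps")

# Geometry-bound case: the same resolution drop barely helps
print(f"{fps(1920, 1080, 25, 5):.1f} -> {fps(1280, 720, 25, 5):.1f} fps")
```

If a game lands in the second regime on Vega 8, doubling the geometry throughput (two shader engines instead of one) would matter far more than any fillrate or bandwidth figure suggests.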