Nah, LPDDR4X bandwidth is not the issue here. You can notice when the card drops from 1500 to 1300 MHz that it has enough bandwidth; what it needs is a higher clock and more TDP room. 68 GB/s is a lot for a GPU this small, a lot more than what the GT 1030 GDDR5 or a Vega iGPU has.
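For reference, that 68 GB/s figure lines up with a 128-bit LPDDR4X-4266 interface. A quick back-of-the-envelope check (the bus widths and transfer rates below are my assumptions for comparison, not read off the screenshot):

# rough peak memory bandwidth: (bus width in bytes) x (transfers per second)
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    return bus_width_bits / 8 * transfer_rate_mtps / 1000  # GB/s

print(peak_bandwidth_gbs(128, 4266))  # DG1, LPDDR4X-4266 -> ~68.3 GB/s
print(peak_bandwidth_gbs(64, 6000))   # GT 1030 GDDR5 -> ~48 GB/s
print(peak_bandwidth_gbs(128, 3200))  # dual-channel DDR4-3200 feeding a Vega iGPU -> ~51.2 GB/s, shared with the CPU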
Are you saying the frametime spikes and stuttering are due to clock fluctuations?
1.5 to 1.3 GHz is a small difference in the grand scheme of things, only about a 13% drop. If the stuttering really is down to the clock, it seems like a bigger deal, like a power management maturity problem.
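Quick sanity check on that percentage, nothing fancy:

# relative clock drop from 1500 MHz to 1300 MHz
base_mhz, throttled_mhz = 1500, 1300
drop = (base_mhz - throttled_mhz) / base_mhz
print(f"{drop:.1%}")  # ~13.3%, not enough on its own to explain heavy stutter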
That GPU-Z screenshot says it has 8 ROPs and 16 TMUs. TPU's database says 24/48 for the 96 EU part and 20/40 for the 80 EU one.
The base Iris Xe G7 architecture is 24/48, so for the 80 EU DG1 either the 24/48 or the 20/40 figure is likely correct.
GPU-Z doesn't actually detect those hardware features. The numbers are manually entered by the coders. I know because I had to get the ROP/TMU count corrected for earlier generations and they changed it, but they later reverted to the erroneous figure.
There was some review showing how gameplay got smoother, with a modest FPS boost, on old 3D cards simply by swapping the HDD for an SSD. And that was just SATA 3.
And that's the truth. I used to have a G965 (GMA X3000, the basis of modern Intel GPUs with execution units), but paired with a Celeron D because I wanted to save money before I got a Core 2 Duo.
Since that integrated graphics was highly dependent on the CPU, I only got about 20 fps in World of Warcraft.
Yet I played it, running 5-man dungeons and even raid instances, because the X25-M SSD kept the framerate stable, which the WD Raptor drive couldn't.
I can tell you that beyond a decent SATA SSD, it doesn't benefit performance. The rated NVMe throughput is rarely ever achieved in practice and is pretty much a marketing number.