Plus, it's marketing. It's been around since... well, forever. AGP at least.
"Well, our new card is AGP4X, it's up to twice as fast as AGP2X!" - When in reality, even back then AGP4X was a minor difference from 2X.
The story repeated with AGP8X vs 4X.
And again with first-gen PCIE vs AGP.
And so on and so forth.
The issue isn't that faster interface speeds aren't useful; they are. The issue is that a major reason faster interfaces were invented in the first place was to give the graphics card DMA (direct memory access) to main system memory. At the time, video cards had relatively tiny amounts of onboard memory, and the idea of simply 'sharing' the system's main memory sounded super amazing.
The problem was, even back then, onboard memory on graphics cards was way faster and lower latency than system memory; any card that actually resorted to using system RAM performed *horribly*. You can still see this today: play a game on a card with too little VRAM and the settings cranked too high, and you get *horrible* hitching and hiccups in the framerate. Congratulations, you've found the spots where your GPU borrows system RAM to make up for its lack of VRAM!
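To put some rough numbers on that (illustrative figures I'm assuming here, not anything measured in this post): a mid-range GDDR5 card reads its own VRAM at roughly 192 GB/s, dual-channel DDR4-2400 system RAM tops out around 38 GB/s, and the PCIe 3.0 x16 link the GPU has to cross to borrow that RAM carries roughly 16 GB/s. A quick sketch of what that means for a chunk of textures that spilled out of VRAM:

```python
# Rough, illustrative bandwidth figures in GB/s; exact numbers vary by card and platform.
BANDWIDTH_GBPS = {
    "GDDR5 VRAM on a mid-range card": 192.0,
    "Dual-channel DDR4-2400 system RAM": 38.4,
    "PCIe 3.0 x16 link (the 'borrowing' path)": 15.75,
}

WORKING_SET_GB = 0.25  # a 256 MB pile of textures that didn't fit in VRAM

for path, gbps in BANDWIDTH_GBPS.items():
    ms = WORKING_SET_GB / gbps * 1000  # time to move the working set, in milliseconds
    print(f"{path:42s} {gbps:7.2f} GB/s -> {ms:6.2f} ms to move 256 MB")
```

At 60 fps the frame budget is about 16.7 ms, so dragging even a quarter gigabyte across the PCIe link burns roughly a whole frame's worth of time. That's the hitch.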
And though system RAM has gotten much faster, and the interface has gotten faster, it hasn't mattered, for two reasons:
1. GPU memory has gotten much faster as well
2. GPUs come with lots *more* memory now, so why would they need to borrow system RAM?
So today, of course they say "Well, our card is PCIE 3.0 X16!", because if they didn't, someone else would say "Our card is X16, theirs is only X8, so obviously ours is faster!"
Not everyone does this, though: the AMD RX460 cards are only X8.
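For what it's worth, the X16-vs-X8 gap looks scarier on a spec sheet than it is in practice. Here's a quick sketch of the raw link bandwidth per PCIe generation (the per-lane rates are the standard published figures; the conclusion about the RX460 is my read, not something benchmarked in this post):

```python
# Published per-lane throughput after encoding overhead, in GB/s.
PER_LANE_GBPS = {
    "PCIe 1.0": 0.25,   # 2.5 GT/s with 8b/10b encoding
    "PCIe 2.0": 0.50,   # 5.0 GT/s with 8b/10b encoding
    "PCIe 3.0": 0.985,  # 8.0 GT/s with 128b/130b encoding
}

for gen, per_lane in PER_LANE_GBPS.items():
    print(f"{gen}: x8 = {per_lane * 8:5.2f} GB/s, x16 = {per_lane * 16:5.2f} GB/s")
```

A card with enough of its own VRAM only needs the link for command submission and the occasional asset upload, so even ~8 GB/s of PCIE 3.0 X8 leaves headroom to spare, which is presumably why AMD could get away with it on the RX460.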