Those numbers don't make any sense.
The 5090 has 33% more SMs. The 5080 has 5% more, the 5070 Ti 6%, and the 5070 4% (over the 4070 non-Super). You'd expect the 4090-to-5090 gap to be 25-30 points larger than the others.
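To sanity-check those percentages, here's a quick sketch. The SM counts are my assumption from the preliminary spec listings (170/128 for 5090/4090, 84/80 for 5080/4080 Super, 70/66 for 5070 Ti/4070 Ti Super, 48/46 for 5070/4070 non-Super), not something stated in this thread:

```python
# SM-count uplifts per tier; the counts below are assumed from
# preliminary spec listings, not confirmed figures.
sm_counts = {
    "5090 vs 4090":             (170, 128),
    "5080 vs 4080 Super":       (84, 80),
    "5070 Ti vs 4070 Ti Super": (70, 66),
    "5070 vs 4070 non-Super":   (48, 46),
}

for pair, (new, old) in sm_counts.items():
    uplift = (new - old) / old * 100  # percent increase in SM count
    print(f"{pair}: +{uplift:.0f}% SMs")
```

Which works out to roughly +33%, +5%, +6%, and +4%, matching the gaps above: the 5090's uplift is an outlier by about 27-29 points.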
If the TPU preliminary spec differences are accurate, it's a combination of factors.
Also, the pixel throughput increase is almost nothing on the 5090, while the 5080 gets quite a boost over its predecessor and the 5070 Ti gets an enormous one. The outlier seems to be the 5070 over the 4070.
But your assumption is that the 4090 has the same bottleneck as the 4080 and 4070. When you cut a big die down, you don't cut everything proportionally, so the smaller parts can end up with a balance that's closer to optimal for their tier.
Things are also going to change depending on the workload. Is it 1080p vs 1440p vs 4K? Higher memory bandwidth benefits 4K more than lower resolutions, for example. Does it use ray tracing? What about DLAA and DLSS? The gains for the 5090 are greater than for the 5080 when DLSS MFG is enabled.
Prescott on the high end was mediocre over Northwood. However, Prescott's architectural improvements, such as the coalesced write buffers, improved performance greatly for the cache-starved 128KB L2 Celeron parts. The Prescott-based Celeron was actually reviewed decently, partly because its Northwood-based predecessor was so sucky: it was 25-30% faster per clock than the Northwood Celeron. Oftentimes the drive to push the high end benefits low-end chips disproportionately - one could say Moore's Law is about benefitting the common man.
And the perf bump seems to be smaller than the bandwidth bump, which is underwhelming.
Which is logical if Moore's Law scaling stops while memory continues to improve. If you have two identical chips and the option to use a next-gen memory standard, you'll use it.