The arguments about percentages are always interesting.
Let's say we start with Prescott = 1.00. Then +40% Merom = 1.40, +12% Nehalem = 1.57, +10% Sandy Bridge = 1.72, +4% Ivy Bridge = 1.79, +10% Haswell = 1.97, +4% Broadwell = 2.05, +10% Skylake = 2.26, +17% Sunny Cove = 2.64, +19% Golden Cove = 3.14.
What you can see is that since the baseline keeps growing, the same percentage increase represents more absolute IPC each time. That means the 19% increase of Golden Cove alone is around 0.50, half of a whole Prescott, and actually more than the 0.40 you got from Merom. Even the measly 10% increase of Skylake is about half a Merom's worth of gain over Prescott, and slightly more absolute IPC than Sandy Bridge added.
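The compounding above can be sketched in a few lines of Python (the generation names and percentages are just the figures quoted above, normalized to Prescott = 1.00):

```python
# Each generation's percentage applies to an ever-larger baseline,
# so the absolute IPC step grows even when the percentage doesn't.
gains = [
    ("Prescott",     0.00),
    ("Merom",        0.40),
    ("Nehalem",      0.12),
    ("Sandy Bridge", 0.10),
    ("Ivy Bridge",   0.04),
    ("Haswell",      0.10),
    ("Broadwell",    0.04),
    ("Skylake",      0.10),
    ("Sunny Cove",   0.17),
    ("Golden Cove",  0.19),
]

ipc = 1.00
for name, pct in gains:
    prev = ipc
    ipc *= 1 + pct
    delta = ipc - prev
    print(f"{name:13s} +{pct:4.0%} -> {ipc:.2f}  (absolute step: {delta:.2f})")
```

Running it shows Golden Cove's +19% step is about 0.50 in Prescott units, larger than Merom's famous +0.40 jump.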
If percentages without the baseline mattered, you would be far better off with Atom, which consistently gets 20-40% per Tock.
As absolute performance numbers grow larger, of course, the difference in those numbers has to increase to keep the percentages even. That's just the definition of a percentage.
If A does 100 widgets/hour and B does 150 widgets/hour, then B is faster by 50 widgets/hour or 50% faster.
If C does 225 widgets/hour then C is 50% faster than B and has to be faster by 75 widgets/hour to achieve the same percent performance increase as B.
It's not really "harder" for C to achieve the same percentage increase as B; that's just the nature of percentages.
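The widget arithmetic in runnable form, simply restating the numbers above:

```python
# Equal percentage gains require growing absolute gains as the base grows.
rates = {"A": 100, "B": 150, "C": 225}  # widgets/hour

print(f"B vs A: +{rates['B'] - rates['A']} widgets/h "
      f"({rates['B'] / rates['A'] - 1:.0%} faster)")
print(f"C vs B: +{rates['C'] - rates['B']} widgets/h "
      f"({rates['C'] / rates['B'] - 1:.0%} faster)")
```

Both comparisons print "50% faster," yet C's absolute gap (+75) is half again B's (+50).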
Now, if you're saying it's harder to keep increasing efficiency because of the low-hanging-fruit argument, that's almost always true. Increasing performance through iterative design gets harder as the iterations progress.
Software/hardware usually starts topping out at certain versions before massive infrastructure changes need to occur.
Take MS Word and Excel: back in the late '80s and early '90s, new versions were notable events with in-depth reviews. But for years now they just "work."
CorelDraw had massive improvements and new features for about the first 8 revisions, then just small incremental things.
Audio editing software, say what was originally Sound Forge, then Sony, now Magix, got pretty solid and complete by version 10 or so.
How about smartphones? Once they were fast enough to smooth-scroll complex web pages and open the camera basically instantly, I was done with performance upgrades. Hence I'm still using a Pixel 2 with no issues or complaints. My daughter "needs" the latest iPhone for the Memojis. I'm like: $1,000 for that one feature and another useless camera? Yeah, not so much. Manufacturers are really reaching for features we "need" just to sell new phones. That's why Apple got caught slowing down old phones with updates: they realize the yearly phone-upgrade cadence is pretty pointless now.
Even with computer processors we've reached a "good enough" point for most people. Luckily, the competition between ARM/AMD/Intel is creating super-fast designs that honestly probably only 1% of the population (and of course industry) really needs for productivity purposes.
Okay, so I'm really curious how Golden Cove is going to pull another 19% IPC "rabbit out of the hat" over Sunny Cove. Looking at the big front-end/back-end structures, right now we're at 5 decoders (1 complex, 4 simple) and 10 execution ports.
This is where some of the Big Brains here are going to have to come in and slap me straight, but I'm thinking that for Golden Cove, Intel is going to have to do something with the decoders. Two more simple decoders? Another complex one? Since the back end was just opened up, I'm thinking they'll look to the front end.
And of course the algorithms that keep instructions moving through the core efficiently will need to get smarter, and the structures behind them probably a bit larger.
DDR5 should help with feeding the beast, so right off the bat Golden Cove probably has a small IPC increase coming just from the memory-subsystem improvements. Beyond that, Intel knows what the next bottleneck is and how to open it up.
Okay, cue the Big Brains here to step in and give us some thoughts on which structures in Golden Cove will probably need to be enhanced to provide another 19% IPC over Sunny Cove.