In conclusion, without knowing how much of the frame time is GPU-bound, you cannot deduce IPC from the FPS numbers.
Sure, that is where you see the biggest difference. But even in a generally GPU-bound game there are passages which are completely CPU-bound.
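To illustrate why the averages hide this, here is a minimal sketch with purely made-up toy numbers (not measurements from the game or any review): each frame takes roughly max(CPU time, GPU time), so a faster CPU barely moves the average of a mostly GPU-bound run, while the CPU-bound fight scenes speed up by nearly the full CPU factor.

```python
# Toy model: per-frame time ~ max(CPU time, GPU time).
# All millisecond values below are illustrative assumptions, not benchmarks.

def fps(frame_times_ms):
    """Average FPS over a list of per-frame times in milliseconds."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def simulate(cpu_speedup):
    # 900 "roaming" frames: GPU-limited at ~16 ms, CPU well under that.
    roaming = [max(10.0 / cpu_speedup, 16.0) for _ in range(900)]
    # 100 "fight" frames: heavy simulation makes the CPU the bottleneck (~40 ms).
    fights = [max(40.0 / cpu_speedup, 16.0) for _ in range(100)]
    return roaming, fights

for speedup in (1.0, 2.5):  # old CPU vs a hypothetical 2.5x faster CPU
    roaming, fights = simulate(speedup)
    print(f"CPU speedup {speedup}x: "
          f"average {fps(roaming + fights):.1f} fps, "
          f"fights only {fps(fights):.1f} fps")

# Output:
#   CPU speedup 1.0x: average 54.3 fps, fights only 25.0 fps
#   CPU speedup 2.5x: average 62.5 fps, fights only 62.5 fps
```

In this toy run the average only climbs about 15%, while the fight scenes go up 2.5x, which is exactly the kind of gap an "average FPS" chart will never show you.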
A practical example:
In 2015 I had an old PC (I had a good one at work) with a C2D E6300 from 2006 running at 3.16 GHz and 8 GB DDR2-800 slightly OCed to 850 MHz. My GTX 260 GPU failed, so I bought a GTX 760 in 2014.
In 2015, a game I wanted to play came out: The Witcher 3. At 1680x1050 (NEC WGX2Pro IPS panel) it was quite a mess on Death March difficulty. Running around and the city were OK, but boss fights and general combat were like 20 fps, unplayable even on low details...
Benchmarks around the web showed about 70-80 fps on a Core 2 Quad at medium details, 1080p, so I upgraded to a C2Q Q9650 and OCed it to 3.9 GHz (got it for a good price, 80 EUR, while I waited for Skylake to come out before buying a new platform).
FPS in fights went up to like 25-30, nothing special. The average went up massively, though, and it was much smoother in general areas.
Then I bought a new platform (ASRock Z170 Extreme4, 32 GB DDR4-2666, which I needed anyway as a replacement for my CFD pre-calc work machine, and an i5-6600K, which I OCed to 4.4 GHz). Average FPS went up a little (about 10 fps, GPU-bound), but fights with the same GTX 760 went from 25 to 70-80 fps. Finally Death March difficulty was playable and the game was enjoyable.
The web reviews didn't show that massive difference at all. They showed something like a 30-40% uplift, which is a lot, but not the roughly 3x improvement I experienced in the critical CPU-intensive scenes, the boss fights where I needed the FPS as high as possible.
If you asked around the web for an upgrade for my system, pretty much 90% of people recommended upgrading the GTX 760, saying it was far too low-end for such a demanding game as The Witcher 3. But that wasn't the case.
I am not saying that is the case with Ryzen vs. Core now, but I am saying that average benchmarks can be very misleading.