Really, you are only showing, again and again, that you don't get it. You keep assuming that more cores always win as long as the load is parallel, which is absurd.
The more parallel a workload is, the less clock speed matters. You see this in graphics rendering, where GPUs have thousands of cores running at much lower clock speeds than CPUs. Some of the most parallel applications are also optimized for SIMD, which benefits more from additional execution units than from raw clock speed.
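Amdahl's law makes this concrete: as the parallel fraction of a workload grows, extra cores pay off more and the remaining serial (clock-bound) part matters less. A minimal sketch, with purely illustrative numbers:

```python
# Amdahl's law: speedup on n cores for a workload whose parallel
# fraction is p. The values below are illustrative, not measurements.

def amdahl_speedup(p, n):
    """Overall speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A half-serial workload barely benefits from 10 cores...
print(round(amdahl_speedup(0.50, 10), 2))  # -> 1.82
# ...while a highly parallel one scales much further.
print(round(amdahl_speedup(0.95, 10), 2))  # -> 6.9
```

The takeaway is the same as above: for highly parallel code, core count dominates, and clock speed is a second-order effect.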
You are comparing 10 cores to 4 cores in your one-sided example: with more than double the cores, you would have to run the 4-core chip at more than double the clock speed just to equalize it.
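The arithmetic behind "more than double" is simple, assuming a perfectly parallel workload and identical per-core IPC (a deliberately generous model for the 4-core part; the 3.0 GHz baseline is just an example number):

```python
# Back-of-envelope throughput parity: aggregate throughput is modeled
# as cores * clock, assuming perfect parallel scaling and equal IPC.
cores_big, cores_small = 10, 4
base_clock = 3.0  # GHz, illustrative baseline for the 10-core chip

# To match 10 cores, the 4-core chip needs (10 / 4) = 2.5x the clock.
required_clock = base_clock * cores_big / cores_small
print(required_clock)  # -> 7.5 (GHz)
```

Even with a modest 3.0 GHz baseline, the 4-core part would need 7.5 GHz, which no air- or water-cooled silicon reaches.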
And now we're back to square one, because the clock speeds that would be necessary to equalize it just aren't possible yet. As I've mentioned before, performance does not scale linearly with clock speed; it eventually plateaus due to microarchitectural limitations and memory bandwidth.
Even if it were possible to run a 7700K at 8 GHz under LN2 cooling, it would not beat the 6950X in Watch Dogs 2, since performance would stop scaling with frequency at some point, probably around 5 GHz.
Run that 6950X at 1 GHz and the 7700K would beat it in any benchmark, even fully parallel ones.
This would be an interesting experiment. The 6950X is 54% faster than the 7700K in that benchmark, which isn't a small deficit. Assuming clock speed scaled linearly (and it doesn't), the 7700K would need to be clocked at around 6.4 GHz to match it.
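A quick sanity check on that figure, assuming the 7700K's stock 4.2 GHz base clock as the baseline and (unrealistically) linear performance-with-frequency scaling:

```python
# Required clock for the 7700K to close a 54% benchmark deficit,
# under the linear-scaling assumption stated above.
deficit_factor = 1.54   # 6950X is 54% faster in this benchmark
stock_clock = 4.2       # GHz, i7-7700K stock base clock
required = stock_clock * deficit_factor
print(round(required, 2))  # -> 6.47 (GHz), matching the ~6.4 GHz ballpark
```

And since frequency scaling flattens well before that, the real required clock would be even higher, i.e. flatly unreachable.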