Freescale has quoted 13.5 SPECint 2006 for a 2 GHz Cortex-A72. That is more than a Core 2 T7400 running at 2167 MHz, so they have already reached that IPC ;-)
https://www.spec.org/cpu2006/results/res2007q3/cpu2006-20070723-01562.html
http://www.realworldtech.com/forum/?threadid=152735&curpostid=152741
Note the Intel result is old enough that Intel hadn't yet had time to implement SPEC-specific tuning in their compiler (e.g. for libquantum...).
And that is a good step (just the fact that they are good enough to even be willing to run SPEC benchmarks on them!), but I'll believe it's just as fast, or faster, when they can come up with less cache-friendly benchmark sets, and show comparisons that are on even planes (Debian or Ubuntu kernels and packages, for instance, and identical storage).
Of those SPEC tests, 403.gcc is probably the closest to daily-use stuff, with 473.astar coming up behind (not sure about 465.hmmer). Even so, those are anything but dismissable scores. Another CPU generation or two and they'll be there, with a fraction of the cost and power consumption to boot, and even greater vector throughput, too. One problem with many benchmarks is that you really
want the test to blow out the instruction and data caches, and not be too predictable, to stress branch prediction and data prefetching. Most benchmarks are built to run in stable conditions, repeatable across systems, and tend to lose some of that, especially with regard to instructions. GCC, however, is freaking huge, relative to any normal near-CPU-core caches, and makes use of big tree and list data structures, often unsorted and unbalanced. x86 has had a historical advantage in such cases, since that kind of workload has been important to its users since around the late 90s (as opposed to, say, FP performance). I wonder if the full set of scores is available...?