I think that if you're willing to give up die space for IPC and pay for a high transistor budget on a premium fab node, you can get very high performance at low power from a single core. Apple products are much more expensive than PCs, and this is one major contributor to that cost, IMO.

N3E is not that good, honestly. We know Apple has shifted to a high-performance library instead of a high-density library, and they only went from 4 to 4.5 GHz, which is not that impressive considering the change. Intel is clocking at 5.7 GHz with N3B, so no, it won't be any speed demon of a process node.
Agree. For decades, Intel has driven new instruction sets and their adoption... except when they didn't (e.g. 3DNow! and x64) and were forced to follow the industry. In fact, by being in a near-monopoly position, Intel could release new instruction sets, get the optimizations in place, and enjoy an entire design cycle of being ahead of AMD on that one advantage alone.

I meant software adoption. And I mentioned that AMD will get it for free, since Intel is actively driving the SIMD library efforts, which can hardly be said of AMD. They cannot even ensure their CPUs have reasonable support in mainstream compilers three months after launch...
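To make the "software adoption" point concrete, here is a minimal sketch (my own illustration, not anyone's actual library code) of how ISA-level dispatch typically works, assuming GCC or Clang on x86-64 with support for the target_clones attribute. A library only gets its fast path once the compiler already knows how to generate code for that ISA level, which is why compiler enablement matters so much for adoption:

```c
/* Minimal sketch of per-ISA dispatch, assuming GCC/Clang on x86-64 with
 * target_clones support. The compiler emits one clone per listed target
 * and selects the best one at runtime; the source stays generic. */
#include <stddef.h>

__attribute__((target_clones("avx512f", "avx2", "default")))
void scale(float *dst, const float *src, float k, size_t n)
{
    /* Plain loop: each clone gets auto-vectorized for its target ISA. */
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```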
It seems like Intel has run out of new tricks for new instructions as of late, though. You don't hear about miracle new instructions so much anymore. Sure, AVX-512 rocked with its crazy-wide execution path; how could it not improve performance, albeit at a real cost in die size, power, etc.? It still improved IPC by a BIG margin in the loads that can use it. Then Intel got rid of it on desktop because they didn't want to pay the price in power and die space for the feature... while AMD still has it. You don't suppose Intel could use its monopoly power to get compilers to drop the support so it doesn't look so bad, do you?
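For anyone wondering where that per-clock gain comes from, here is an illustrative sketch (not any vendor's code) of the "wide execution path" point: one 512-bit register holds 16 floats, so each vector add retires 16 elements' worth of work per instruction. Assumes an AVX-512-capable CPU and a compiler flag along the lines of -mavx512f:

```c
/* Illustrative only: summing floats with AVX-512F intrinsics.
 * Each loop iteration processes 16 floats in a single 512-bit add,
 * which is where the large per-clock gains in vector-friendly loads
 * come from. */
#include <immintrin.h>
#include <stddef.h>

float sum_avx512(const float *x, size_t n)
{
    __m512 acc = _mm512_setzero_ps();
    size_t i = 0;
    for (; i + 16 <= n; i += 16)              /* 16 floats per iteration */
        acc = _mm512_add_ps(acc, _mm512_loadu_ps(x + i));
    float total = _mm512_reduce_add_ps(acc);  /* horizontal sum of the vector */
    for (; i < n; i++)                        /* scalar tail */
        total += x[i];
    return total;
}
```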