What I find neither logical nor acceptable is the misguided market segmentation and the grave errors Intel committed in designing the Skylake server/HEDT CPUs.
1) The whole 512-bit vector effort is deeply misguided. Intel should have gone the AVX-128 route: expose the WHOLE AVX-512 feature set on server/HEDT - masks, 32 YMM registers, CD, byte ops - but execute it ON 256 bits, keeping the execution units and cache paths the same to save area and power and improve overall efficiency. Instead, this effort is pure folly, and probably the result of a complete loss of guidance inside the company. Intel already has a CPU that does AVX-512 really well - KL. No one in their sane mind will buy high-core-count Intel CPUs for AVX-512 throughput work: it is not power efficient at scale, and it doesn't have the teraflops of KL/GPUs either. The belief that there are "workstation" or server workloads that benefit from 512-bit vectors is hilariously out of place in the real world, where such a workload sits between the rock of actually finding 512-bit chunks of vector work and the hard place of GPUs/KL accelerators that offer far more TFLOPS per $ and per watt. This stupid race to ever-larger vectors on x86 CPUs has to stop; we need native 512-bit vectors in a typical server/desktop about as much as we need native 128-bit integer registers. Sure, some workloads could benefit from those (big-integer math, crypto, etc.), but those are better served by accelerators or dedicated hardware like encryption engines.
2) By betting the farm on 512-bit execution and the wide hardware and datapaths needed to support it (load bandwidth that sits idle when the vector units are not in use), and by adopting a new fabric and cache architecture to scale core counts and feed AVX-512, Intel has ruined what were amazing server CPUs. One can only imagine how disastrous memory latency is going to be across 6 memory channels with ECC memory. And Intel made sure it will be hit often by making the L3 smaller and victim-only, filled purely by L2 evictions. Needless to say, L3 cache hits were good for power and performance in the workloads I care about (think JVMs doing gigabytes of garbage collection per second per server).
3) This whole local L2 size vs. L3 trade-off is again forced by the need to feed the AVX-512 monster and to scale to higher core counts. The problem is that not every workload is an HPC floating-point calculation working on a matrix of numbers. Real-world apps tend to have plenty of inter-thread communication (producer/consumer patterns, lock contention, false sharing; interrupts and DPCs happen), and Intel just made sure those will get penalized properly.
4) On the topic of the fabric: if latencies are this bad on LCC, what will happen on MCC with 18 cores - even more pain from the extra column of cores adding latency and inter-core traffic jams? At least with Broadwell one could use NUMA on the ring - for example, run 4 JVM instances on a dual-socket system, each bound to a NUMA node, and be happy about predictable performance. The same goes for VMs or containerized stuff like memcached and friends.
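The NUMA-pinning recipe above can be sketched with numactl; the node ids, heap size, and jar names here are purely illustrative:

```shell
# One JVM per NUMA node on a dual-socket box: pin both the threads and the
# allocations so each heap stays in local memory. Jar names are examples.
numactl --cpunodebind=0 --membind=0 java -Xms8g -Xmx8g -jar app1.jar &
numactl --cpunodebind=1 --membind=1 java -Xms8g -Xmx8g -jar app2.jar &
```

With `--membind`, an instance that outgrows its node fails to allocate rather than silently spilling into remote memory, which is usually what you want when the goal is predictable latency.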
5) I don't care about HEDT PCIe lanes: SLI is dead, so one GPU plus some spare lanes for two M.2 drives is just fine. Gaming does not care about x8, and compute cares even less. But this whole segmentation leaves a bad taste. And the AVX-512 segmentation? Even more hilariously out of place, squandering advantages (however tiny) when none can be spared. Sure, I can still develop and run code, but it is a spit in the face to segment so obviously: the 10-core has it, the 6- and 8-cores don't.
Intel still has an IPC advantage, but as the recent SPECint-targeted "leaks" have shown us, AMD has a great core with decent power efficiency - and 9X% of the servers in the world, the ProLiants and PowerEdges, run integer workloads.
Here's a word of warning for Intel:
11:54pm up 4285 days, 3:39, 1 user, load average: 1.45, 0.77, 0.64
I ran Opterons in the past, and as you can see above, one is still running. And I won't hesitate to buy more. The cents saved on solder, the fused-off memory channels, and half-rate AVX-512, combined with the madness above, are not OK.
P.S. Disclaimer: I run a 7700K at 5 GHz on my desktop, and AMD won't make me give up performance.