In what world are you living in where CPUs take only 10%-20% of the hardware cost?
Cost of a standard 2P Intel E5-based server (retail pricing):
2x E5-2699v4: $9000
24x 128 GB DDR4 LRDIMM: $28000
Motherboard and enclosure: $1000
On this bare-bones, compute-only server, the CPUs represent roughly a quarter (about 24%) of the overall cost. That is without including things like SSDs, interconnect controllers, GPUs (!) and other accelerators, etc., which would add even more cost to the system. Once you add in power, land, and labor, the fraction of the cost attributable to the CPU becomes incredibly small. You might object to retail pricing, but with actual customer pricing the CPU share would be even lower, while memory costs are about the same.
Think about it this way. Just like gamers spend lots of money to maximize the value of their GPU (the most expensive component), datacenter operators want to maximize the utilization of their DRAM.
Can we get out of the world where people analyze everything as if money were no object? Even big corporations have big purchasing departments that analyze the costs of different hardware implementations. $10k over 1,000 servers in a global update is still a lot of money.
Money is everything, which is why you buy the highest-capacity CPU you can fit in a 2P configuration. What keeps 4P configurations from taking off (and a 2P Naples box, with four dies per socket, is already 8 NUMA nodes!) is poor rack density and memory performance, not the cost of the CPUs.
I don't think anyone is suggesting SIMD is never used. It's just that explicitly opting to use it is more of a corner case than a generally applied optimization. You'd be surprised how many developers aren't even aware of SIMD, let alone use it.
While they do have their time and place, rarely would AVX or any other SIMD extension be the decisive factor when choosing a server CPU.
Why argue against strawmen? All I said was that 8ch memory controllers are inefficient and unnecessary without strong compute performance (i.e. SIMD). Someone brought up random access, but in fact memory latency (which is already worse on Zen than Skylake) will hinder Naples even further, given the penalty with extreme NUMA levels and way too many memory channels.
Nonsense. You also win by being cheaper. That's how Intel kicked IBM out.
There is a reason Intel Xeon isn't twice as expensive. Yes, price obviously matters, big time. There is sizeable price elasticity here, and these are not small amounts.
The idea that CPU cost is so small it doesn't matter because, e.g., it isn't shown in a TCO analysis from Gartner is pathetic. Every cost matters, everywhere, all the time, every penny. You cost-optimize your company across the board, or somebody takes your chair tomorrow.
The cost difference between IBM and Intel is orders of magnitude. Nobody sane expects Naples to be an order of magnitude cheaper than Skylake-SP. Besides, what really killed POWER was expensive DRAM.