Temperature and humidity variance is what kills hardware, not the absolute temperature. But again, you keep going back to the "datacentre"; the datacentre is but one cost.
A heavy, significant, relevant cost that doesn't scale linearly, in terms of either CAPEX or OPEX, as you stated. In fact, I doubt you have ever seen the business plan or construction budget of a single datacentre, because you wouldn't say that if you had. Neither cost scales up linearly, and that's why power-efficient processors carry a price premium over most of the power hogs.
No, I didn't; reread what I said. Find me a processor from AMD that does that against a Xeon.
The Opteron 6376 will beat, for example, the Xeon E5-4603 v2 on raw performance, and yet AMD cannot break into that market bracket because of the 6376's atrocious power consumption.
And I never said there wasn't a market for them. But that market is driven by workload.
Performance in the target workload is one thing; the other thing you need to factor in is the TCO. You can get good performance from AMD processors in a reasonable number of scenarios, but you can't get that performance without hurting your TCO through the added power consumption. That is what killed them in the server market, not performance alone.
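To make the TCO point concrete, here is a rough back-of-the-envelope sketch of what a higher-draw processor costs in electricity alone over a server's service life. Every number in it (PUE, $/kWh, utilisation, lifespan, the 20 W per-socket delta) is an assumed illustrative figure, not a measured value for any specific part:

```python
# Illustrative TCO-delta sketch: extra electricity cost from a processor
# that draws more power than a competitor. All parameter defaults below
# are assumptions chosen for illustration only.

def extra_power_cost(tdp_delta_w: float,
                     pue: float = 1.6,             # assumed datacentre PUE
                     price_per_kwh: float = 0.10,  # assumed electricity price, $/kWh
                     years: float = 4.0,           # assumed service life
                     utilisation: float = 0.7) -> float:
    """Extra electricity cost in $ attributable to tdp_delta_w of added draw.

    PUE multiplies the IT load to account for cooling and power-distribution
    overhead, which is why hot chips cost more than their own wattage suggests.
    """
    hours = years * 365 * 24
    extra_kwh = tdp_delta_w / 1000 * utilisation * hours * pue
    return extra_kwh * price_per_kwh

# Hypothetical example: a 20 W per-socket delta in a 4-socket server.
per_socket = extra_power_cost(20.0)
print(f"per socket over 4 years: ${per_socket:,.2f}")   # ≈ $78.49
print(f"4-socket server:         ${4 * per_socket:,.2f}")  # ≈ $313.96
```

Multiply that across racks and the delta compounds: the cooling overhead captured by PUE means every extra watt at the socket is paid for more than once.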
If power consumption weren't an issue, we would see AMD pushing 220 W or even 300 W server processors onto the market, but in that segment specifically they never dared go beyond 140 W.