More cores? Why do we need more cores?
To increase performance by a means that is affordable and known to be possible. It's not as if anyone can magically make faster single-threaded processors just by making them bigger and hotter. Those properties are consequences of making them faster: with slow memory, faster basically means being able to perform more complicated state changes in parallel over a given period of time.
While they could surely make them a bit faster per clock than today if power use weren't such a concern... well, power use is a major concern, so you can forget about that. High thermal density increases failure rates (in theory you can design around it, but in reality computers blow capacitors and VRMs), people like quiet, people like long battery life, and most of the world pays a pretty penny for their kilowatts. On top of that, the excess heat to exhaust from servers finally reached the point where, on the medium/big-business side of things, processor and storage density started taking a back seat to making air conditioning easier and cheaper.
One thing that concerns me about Intel and AMD is that they promote and/or use additional non-CPU technologies to do tasks that were originally performed on the CPU. In the case of AMD, it is the GPU that will replace CPU cores for transcoding; with Intel, it is Quick Sync.
Yet they want to sell us on the idea of more CPU cores?
Quick Sync is an ASIP (application-specific instruction-set processor). It's only good for transcoding quickly. The GPGPU is good for performing vector operations over a large serial data stream. The CPU needs to be good for anything that fits on the tape. They are not interchangeable. Fixed-function hardware is faster at what it is made for, being simpler because it doesn't have to do much else; the GPGPU sits somewhere between the two.
This makes me wonder what is going to happen with future Intel enthusiast sockets (i.e., LGA 2011 successors). Are we going to see those turn into "many-core" arrangements without an IGP? Just wondering how many people are going to want "many-core" ATX towers if other (non-CPU) technologies are trying to make the concept obsolete.
Surely Intel has a much higher-IPC "non-mobile" CPU core on the horizon (specifically for the enthusiast socket)? With these small nodes they certainly have the silicon die space to pull this off if the processor design does not include an IGP.
Sure, but what about paying all the engineers to do the R&D? R&D costs a lot of money not just because implementing the final design is tedious, but because they also need to verify which good ideas won't work for them, and which ideas that look good on paper suck in reality. The cost and time needed keep going up and up and up, because it is not easy to do. It can be done, and Intel is making their CPUs ever faster per clock, but they can't just make a bigger one and watch it go really fast. Rather, they can spend a ton of money, make one a bit faster, and watch it be bigger and maybe hotter. And hotter means slower today, which only adds to the difficulty.
Before we say the market won't pay more, I'd like to see some analysis of the potential for adding an additional tier of AMD x86 server products.
AMD can't sell the tiers they have now (not much, anyway). They probably have one too many, and if they stick with the x86 server market (please do, please do, please do), they should probably drop down to one server socket and be done with it.
People did pay more in the past for AMD server CPUs, and Intel's, for that matter. AMD was able to command high prices for their faster Opterons until the Core 2 parts came out, and even then kept something of a lead until Nehalem, because the Core 2s were limited by the FSB. Today, people hardly want free Opterons.
One thing I noticed in my brief reading of "per-core" server licensing is that adding additional threads with SMT does not count as additional cores. For example, a large, wide, high-IPC AMD CPU core with four-way SMT would still count as only one core.
SMT offers very little, and thus won't count as anywhere near a whole set of execution units. Even now that Intel has gotten Hyper-Threading working very well, typical gains from using it are under 20%, and OLTP and OLAP are just the sorts of workloads where it may be eschewed in favor of per-thread performance.
Also, again, per-core licensing doesn't matter to the overwhelming majority of the market.
What I feel is needed is a higher-IPC design with greater width (with the appropriate compensations engineered into the CPU to make the design work).
http://en.wikipedia.org/wiki/Alpha_21464
Before compilers got good enough not to need many registers, that was to be the future
(oh, the irony: the idea was that compilers would make better use of many GPRs; the predicted compiler advancements did occur, but with the opposite result: wasted GPRs, and faster x86!).
Before electricity costs drove CPU development, that was to be the future.
Before Prescott got the welcome it did, and the market at large told Tejas it wasn't welcome at all, that was to be the future.
The fact is, it's too hard to do well; it would run too hot and not perform well enough to be worth it. Separate, narrower execution units work better, all things considered.
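Part of the reason extra width pays off so poorly is that ordinary code exposes limited instruction-level parallelism. A hedged sketch (toy loops invented for the example): the first sum is one long dependency chain, so even a very wide core retires roughly one useful add per cycle; the second does the same work with four independent accumulators, the kind of restructuring needed before extra execution width helps at all.

```c
/* Serial dependency chain: each add needs the previous result,
   so issue width beyond ~1 add/cycle is wasted on this loop. */
double chained_sum(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];                 /* s depends on the previous s */
    return s;
}

/* Four independent chains: the same work now exposes ILP that
   extra execution units (or SIMD lanes) can actually exploit. */
double split_sum(const double *a, int n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i = 0;
    for (; i + 3 < n; i += 4) {    /* four accumulators in flight */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)             /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

Most code looks like the first loop, not the second, which is why two narrower cores beat one doubly wide one on real workloads.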
With Poulson increasing Itanium to a whopping 12-wide issue, maybe Intel will permit this change to happen?
I could have sworn it was 6, which is closer to 2, but still, no. AMD brought the Hammer down, and COTS won. x86 got the RAS features it needed to hang with the big boys, Linux became a serious server OS
(Linux/FOSS greatly reduces hardware/software lock-in, benefiting x86 and ARM), and now it is too late*.
* On top of that, x86 was already good enough for software built to tolerate faulty hardware (just about anything written in Erlang, for instance), putting IA-64 in a position of lower actual reliability, simply due to the amount of work put into quality support for x86.