Originally posted by: kouch
after the news of the 100th processor that is going to take over the world, excuse me if I am not excited.
- Definitely agree!
There is no way Cell could be that powerful. If it were, the PS3 wouldn't need 4 of them, since performance would be far more GPU-limited.
Since the Cell's cores are just very lean Power cores, it's possible to estimate performance, and you're right: the article has to be way over the top.
And btw, to the people flaming x86: neither Intel nor AMD chips are technically the CISC x86 machines people say they are. Internally they are basically RISC-like machines that break x86 instructions down into micro-ops, and they support (rather than are based on) x86 for legacy reasons.
It's just that people don't understand what CISC and RISC really are. x86 is CISC, but that's not bad. That's just RISC propaganda from the late '80s and early '90s that has stuck in people's heads.
Breaking down instructions doesn't make anything RISC, and CPUs have always done this in some way.
Microcode, code fission, micro-ops, whatever: none of it is RISC.
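To make the "cracking" point concrete, here's a toy sketch (in Python, purely illustrative; `crack`, the tuple format, and the micro-op names are my own invention, not any real decoder) of how a memory-operand x86-style instruction gets split into simple load/op/store micro-ops. The point is that this is an implementation detail behind the decoder, and the ISA the programmer sees is still CISC:

```python
def crack(instr):
    """Split a memory-destination ALU instruction into load/op/store micro-ops.

    instr is a hypothetical (opcode, dest, src) tuple; a '[reg]' dest
    means a memory operand, i.e. a CISC read-modify-write instruction.
    """
    op, dst, src = instr
    if dst.startswith("["):               # memory operand -> read-modify-write
        addr = dst.strip("[]")
        return [("load", "tmp", addr),    # fetch the memory operand
                (op, "tmp", src),         # do the ALU work on a temporary
                ("store", addr, "tmp")]   # write the result back
    return [instr]                        # register-only ops pass through

# One CISC 'add [rax], rbx' becomes three simple micro-ops:
print(crack(("add", "[rax]", "rbx")))
# A register-register add is already simple and passes through:
print(crack(("add", "rax", "rbx")))
```

The backend executing those three micro-ops looks RISC-ish, but the instruction set itself never changed, which is exactly why cracking doesn't make the machine RISC.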
What is true in this context is that there is no longer any great difference between the technologies used in later CISC cores and what is (so far) understood as RISC. CISC designs have, since the advent of the Motorola MC68040 and Intel's lesser but contemporary 486, used the same hardware technologies as RISC.
But RISC is NOT these technologies, nor vice versa. These technologies mostly come from the various generations of supercomputers.
RISC was/is an approach to the design of the ISA (the instruction set architecture): a 'flavor' of ISA aiming at certain perceived opportunities.
One of these was that it would be possible to build a more advanced, more complex CPU, featuring some of the high-performance technologies hinted at above, if one reduced the number of supported instructions and selected them carefully, for instance by cutting down complex addressing modes. Hence "RISC": Reduced Instruction Set Computing. Another opportunity was compiler optimization across a large number of visible registers.
(There are today reasons to believe that too intimate a reliance between CPU and compiler is a dead end for the evolution of core performance.)
Edit:
I think maybe a good way to put this is that in RISC, you start with the hardware architecture you think you want, then design the ISA from that. CISC is the other way around.
Obviously, the benefits of RISC are vulnerable to changes in the environment, like the evolution of technology.
The benefit of RISC was thus that more registers and a more advanced, complex hardware architecture could be afforded on a LIMITED AMOUNT OF TRANSISTORS.
But the world moves on. And just as a hedgehog's survival strategies are no longer viable when the sun rises, RISC doesn't compare so well anymore now that very large numbers of transistors are available. The cost of being CISC is just a small part of all those transistors.
This is maybe a good place to note that the central push of RISC was to simplify decoding and increase instruction execution speed. That is not the problem today. What holds back today's CPUs is branch handling, false dependencies (many visible registers maybe wasn't such a grand idea after all), and moving all the data into and out of the CPU. (Again, RISC's larger code and data footprint and less flexible addressing are no help here.)
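For those wondering what "false dependencies" means: two instructions that write the same architectural register look dependent even when they aren't, and the standard hardware fix is register renaming. Here's a toy sketch (Python, purely illustrative; `rename` and the register names are hypothetical, not any real pipeline) of the idea, assuming every write gets a fresh physical register:

```python
def rename(instrs):
    """Map architectural registers to fresh physical registers.

    instrs is a list of hypothetical (opcode, dest, src...) tuples.
    Every write to an architectural register allocates a new physical
    register, so write-after-write ordering constraints disappear.
    """
    mapping = {}                                  # architectural -> physical
    out = []
    for i, (op, dst, *srcs) in enumerate(instrs):
        srcs = [mapping.get(s, s) for s in srcs]  # read the latest mapping
        mapping[dst] = f"p{i}"                    # fresh physical reg per write
        out.append((op, mapping[dst], *srcs))
    return out

# Both instructions write r1, but after renaming they target distinct
# physical registers, so the add need not wait behind the slow mul:
prog = [("mul", "r1", "r2", "r3"),
        ("add", "r1", "r4", "r5")]
print(rename(prog))
```

Note this machinery costs transistors whether the ISA is RISC or CISC, which is part of why the decode-simplicity argument matters less than it once did.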
The reemergence of RISC in IBM's Cell processor and Sun's Niagara is primarily motivated by the need to AGAIN fit into a very limited amount of transistors. Other advantages are power consumption and heat.
By the time AMD's and Intel's dual/multi-core x86 CPUs have matured and software can truly take advantage of a large number of cores, it may very well be that Intel and AMD don't need to go 'lean' enough to give RISC an advantage. Multicore is the future, but I think there is a very good chance that future is x86-64 CISC multicore.