PowerPC

Ruptga

Lifer
Aug 3, 2006
10,247
207
106
I really don't know much about PPC, other than that it's RISC instead of CISC like x86, and that it has generally been lower performance than x86, especially in recent times. It's also been mentioned that PPC has better interconnects than x86; they have some kind of FSB alternative? I dunno, that's why I'm posting here.

So, my main question is: why is PPC used in the GameCube, PS2, PS3, Xbox 360, and Wii? (It might be in more, but those are the ones I know of.) My basic thought is that PPC must not be that great if Apple redid everything to switch over to x86, so what am I missing here?
 

praeses

Member
Jun 10, 2006
40
0
0
RISC chips are far cheaper (usually much lower transistor count), and PowerPC is basically the leader in providing solutions that integrators can drop into consumer products with those sorts of performance requirements.

Consoles are almost expected to have the "toaster/appliance" effect: turn it on, it works, no maintenance other than dusting it off. That is a little more feasible with a more lightweight, i.e. lower-power, processor (the first/current generation of Xbox 360s on 90nm are a little hungry though). I believe they are also far more FPU-intensive than ALU-intensive, which is easier to achieve with a higher-clocked, simpler processor at the same density/power requirements. They're mostly about playing games, after all.

CISC is easier to program for, especially if that's been your area for a while; inertia wins there.

There are debates regarding the parallel-processing advantages of both as well.
 

praeses

Member
Jun 10, 2006
40
0
0
Oh, the big one is obvious from the name: RISC is more geared towards specific tasks, accomplishing them with less overhead (reduced), while CISC is more generic and able to do more (complex). Most people consider their computer "general purpose", aside from the email-only folk.

They're both really good at what they target; they just target different audiences.

I don't believe RISC has a place in our desktop world quite yet. Most likely, by the time we're getting close, we'll have a new, better hybrid instruction set.
 

Goi

Diamond Member
Oct 10, 1999
6,764
6
91
Actually, all x86 CPUs since around the K5 have been hybrids. The frontend is x86 CISC, but the backend executes RISC-like micro-ops/macro-ops. The statement about RISC being geared towards specific tasks isn't really true, since a RISC CPU is able to do exactly the same things a CISC CPU can, and vice versa. They're both general-purpose CPUs.
 

kpb

Senior member
Oct 18, 2001
252
0
0
Honestly, I think the big reason is licensing. IBM is willing to license the processor designs to Microsoft etc. so that they can do die shrinks and control production and cost on their own. Microsoft got bit in the rear by Intel last time, since Intel wasn't willing to license the P3 core used in the first Xbox and wasn't interested in a die shrink and cost reductions.

When you take Intel out of the options because they won't license their designs like that, PPC is the most obvious option. Others could include ARM and MIPS, but PPC is definitely the best of the other options from what I know of them.
 

cker

Member
Dec 19, 2005
175
0
0
I believe the PPC chips are also revised much less often than Intel or other CPU lines. For example, the more mainstream x86 chips have lots of revisions based on minor differences, while PowerPC CPUs are known quantities with long production windows. Power3 hit in about 1998 and wasn't followed by Power4 until 2001. I think Power5 was 2004, and I don't think there's a new version of the CPU coming out until next year sometime.

This is nice in a console because it means you can source the same part for several years at a time, without having to worry so much about the manufacturer discontinuing an older part in favor of a new (and not completely tested in your game console) CPU.
 

Leros

Lifer
Jul 11, 2004
21,867
7
81
Originally posted by: cker
I believe the PPC chips are also revised much less often than Intel or other CPU lines. For example, the more mainstream x86 chips have lots of revisions based on minor differences, while PowerPC CPUs are known quantities with long production windows. Power3 hit in about 1998 and wasn't followed by Power4 until 2001. I think Power5 was 2004, and I don't think there's a new version of the CPU coming out until next year sometime.

This is nice in a console because it means you can source the same part for several years at a time, without having to worry so much about the manufacturer discontinuing an older part in favor of a new (and not completely tested in your game console) CPU.

Surely a manufacturer would keep producing older chips if they knew they could sell a million of them. Companies love making money off of old technology.
 

BladeVenom

Lifer
Jun 2, 2005
13,540
16
0
Originally posted by: ADDAvenger
My basic thought is that PPC must not be that great if Apple redid everything to switch over to x86, so what am I missing here?

You're not missing anything. They're cheap and that's basically it.

Here's an amusing benchmark of Sony's Cell processor getting spanked by a three-and-a-half-year-old budget G5 processor in Linux: PS3 Performance. Makes it pretty clear why Apple ran so quickly to Intel after seeing the Cell processor in "action."

 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
But isn't the Cell made with multimedia in mind, as opposed to being a general-purpose CPU?
 

BladeVenom

Lifer
Jun 2, 2005
13,540
16
0
Yes, but in PCs and consoles most of that is handled by the video and sound cards. What does that leave for the Cell to process? Physics, maybe? So far it's looking pretty useless for most things. Maybe someone will come up with some science or math programs for the Cell processor, but how useful would that be for most PC and console users?
 

icarus4586

Senior member
Jun 10, 2004
219
0
0
First, the PS2 doesn't have a PPC processor; it's MIPS.

PowerPC has a history of being customized for specialized applications, whereas x86 has only been used as a general-purpose PC architecture. None of the chips used in consoles are the same as those used in Apple computers, just as the chips used in Apple computers have been different from those used in IBM workstations.

Wii's Broadway CPU has its roots in the PPC 750 series, which is the same lineage as Apple's G3. The Xbox 360's Xenon CPU is even more customized. It's got some technology from the PPC 970 line (think G5), but it's multi-threaded and has no out-of-order execution abilities. The PS3's Cell processor is a different beast entirely. It's got 1 main "processing element," very similar to each of the cores in the Xenon CPU, and 7 "SPEs." The SPEs are very specialized units, designed for SIMD workloads.

There's obviously lots of debate on why Apple switched to x86, but there's little doubt that it was a good idea. Why did the move make sense for them, but not for game consoles? The majority of Apple's revenue comes from laptops and consumer desktops, like the iMac and Mac Mini. These designs need CPUs with low power consumption. I'm sure the G5 could have continued to be competitive with x86 CPUs, but development always would have lagged, and at least in the short term, IBM had no low-power designs in sight. Apple needed low-power CPUs; game consoles don't (as much). They need customized CPUs, and that's where PowerPC excels.
 

jagec

Lifer
Apr 30, 2004
24,442
6
81
Originally posted by: Leros

Surely a manufacturer would keep producing older chips if they knew they could sell a million of them. Companies love making money off of old technology.
Absolutely, people are still cranking out 486s.

Plus, now that Apple is no longer demanding them, I bet that PPC chips are a pretty good value right now.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
PPC has better interconnects? It uses the same copper and silicon as PC CPUs.
An FSB alternative? That's AMD's HyperTransport.

Why's it used in the consoles? Because it's cheap, end of story. Not even cheap hardware costs: cheap licensing. AMD and Intel won't sell their designs out; IBM will. The Xbox 360's triple-core CPU is as large as current dual cores, yet wouldn't come close in performance. But it only cost Microsoft $100,000,000 to have a CPU design they own and can produce anywhere they want. Last gen, when they were using an Intel CPU, they had to buy every CPU from Intel, and it was quite expensive. (On a side note, the Xbox 360 CPU was supposed to be much faster than it finally came out to be, at least matching a Pentium 4 in per-MHz performance, but IBM wasn't able to do it, according to a book that goes into the making of the 360.)

Cell, on the other hand, is a $1 billion project that has more uses than just the PlayStation. Sony (and IBM and Toshiba) want to use it for quite a few electronics. Additionally, once again it has the advantage that Sony pretty much has full rights to the design and can produce it on its own. The Cell CPU itself is as large as a quad-core x86 CPU, so it wasn't chosen because it's cheap to manufacture.

It has nothing to do with RISC versus CISC; it's all about IBM versus... well, anyone else. Intel and AMD don't sell their designs out and are thus too expensive for a console, and no one else besides IBM has the ability to produce a CPU anywhere near the performance necessary for a console. The 360's CPU is only multicore because that was the cheapest way for IBM to develop a CPU with the performance MS wanted (and it fell short anyway).
BTW, the RISC vs. CISC argument is dead, since all current processors are some combination of both. You think the G5 can be considered RISC?

It's about economic costs due to corporate contracts, and nothing more.
 

smack Down

Diamond Member
Sep 10, 2005
4,507
0
0
Another big plus of going with IBM is that other companies, including Sony, can and have built fabs using the same process as IBM's fabs. MS could take the design and have it manufactured at a different site with no rework to the design.
 

Xdreamer

Member
Aug 22, 2004
131
0
0
Originally posted by: smack Down
Another big plus of going with IBM is that other companies, including Sony, can and have built fabs using the same process as IBM's fabs. MS could take the design and have it manufactured at a different site with no rework to the design.

share and share alike huh?
 

Loki726

Senior member
Dec 27, 2003
228
0
0
Originally posted by: BladeVenom
Yes, but in PCs and consoles most of that is handled by the video and sound cards. What does that leave for the Cell to process? Physics, maybe? So far it's looking pretty useless for most things. Maybe someone will come up with some science or math programs for the Cell processor, but how useful would that be for most PC and console users?

Cell is interesting not because it will be extremely useful or easily integrated into modern desktop or embedded platforms, but because it is designed around a programming model that is revolutionary from the perspective of commodity microprocessors (Intel, AMD, Sun, IBM, etc.). Specifically, Cell uses an explicitly parallel programming model that is exposed to the programmer at a very low level. It starts with a stripped-down version of a Power5, a design that is architecturally similar to modern x86 processors in that it uses techniques like multilevel caches, multilevel branch prediction, register renaming, reorder buffers, multiple instruction issue/retire, parallel functional units, etc. This core is supplemented with 8 other RISC cores that are completely different from modern architectures in that they throw away things like hardware caching, all but basic branch prediction, out-of-order execution (a big one), register renaming, etc. This is almost unheard of, since practically all commercial advances in computer architecture from the start of the '90s up to even Intel's new Core architecture have come from defining and refining those techniques.

The engineers who built Cell justify these decisions in the following way: they argue that modern processors are not mainly made of components that do useful work. What is useful work? It's addition, multiplication, bitwise logical operations, comparison, memory operations, etc. Most processors devote something like 10-15% or even less of their chip area to logic that actually does useful work. The rest is devoted to taking instructions that were generated by compilers and programmers without much underlying knowledge of the architecture they would run on, and shuffling them around so that they can execute immediately without having to wait for a value to be loaded from slow off-chip memory or for the result of another instruction. In x86 they additionally need to be translated from variable-width CISC to fixed-width RISC-like operations. Peter Hofstee (one of the chief architects who designed Cell) argues that, at least for the applications targeted by Cell, it is a better use of chip area and power to devote most of the chip to functional units that do useful work and give the programmer an interface to resolve dependencies between instructions and hide memory latency. Whether it actually is better is not immediately clear, as it depends on how well people (and compilers) are able to do these kinds of optimizations when they write programs. It is the case that if code were optimized ideally for both a modern Intel/AMD-type architecture and a Cell-type architecture, Cell would be dramatically faster.

Aside from ditching caches and most of the prediction/reordering/control logic, Cell is also interesting in that it uses an explicitly parallel programming model. This is one proposed solution for writing programs in multi-core environments, and it is different from the kind of model you would find in a multiprocessor or even a traditional multi-core machine, where applications are divided into threads that are essentially treated like separate programs, possibly with some shared memory or support for synchronization primitives between threads. Instead, the interface for loading parts of a program onto different cores (called SPEs) and transferring data between SPEs is exposed directly to the programmer. Once again, it puts more control in the hands of the programmer and, in this case, takes it away from the operating system. It is my opinion that multi-core systems will have to move to this kind of programming model eventually as the number of cores increases, simply because it is extremely difficult to design a compiler or operating-system support that does this partitioning efficiently when the optimization space grows exponentially with the number of cores in the system.
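
To make that concrete, here's a rough sketch in plain C (my own illustration, not the actual Cell SDK or its DMA API) of the basic pattern that model forces on you: the programmer explicitly stages data from main memory into a small local buffer, works on it there, and copies the results back, instead of letting a hardware cache make those decisions.

/* Toy illustration of software-managed local storage (generic C, not Cell code). */
#include <stdio.h>
#include <string.h>

#define LOCAL_STORE 256                     /* pretend on-chip buffer size, in floats */

/* Work done entirely out of the "local store" (stand-in for real computation). */
static void process_chunk(float *local, int n)
{
    for (int i = 0; i < n; i++)
        local[i] = local[i] * 2.0f + 1.0f;
}

int main(void)
{
    enum { N = 4096 };
    static float main_memory[N];            /* stand-in for slow off-chip RAM      */
    float local_store[LOCAL_STORE];         /* stand-in for an SPE's local memory  */

    for (int i = 0; i < N; i++)
        main_memory[i] = (float)i;

    /* On Cell these copies would be explicit, asynchronous DMA transfers that the
     * programmer issues and waits on; memcpy is just the simplest stand-in here. */
    for (int base = 0; base < N; base += LOCAL_STORE) {
        memcpy(local_store, &main_memory[base], sizeof(local_store)); /* stage in  */
        process_chunk(local_store, LOCAL_STORE);
        memcpy(&main_memory[base], local_store, sizeof(local_store)); /* stage out */
    }

    printf("main_memory[100] = %f\n", main_memory[100]);
    return 0;
}

The point is that the data movement is part of the program, written by someone who knows the access pattern, rather than being guessed at by cache hardware.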

Anyway, I seem to have gotten off topic here, but I guess the main point is that experimental architectures like the one used in Cell have the potential to be much more efficient and better-performing than traditional architectures, as long as programmers actually take the time to learn the requirements of the architecture and make full use of the interfaces provided by the programming model. Conversely, if you just hack something together out of code that was intended to run on an Intel x86 and expect the operating system and underlying architecture to optimize your code for you, then you are out of luck and will get much worse performance than on a traditional architecture.
 

Loki726

Senior member
Dec 27, 2003
228
0
0
Originally posted by: BladeVenom
Originally posted by: ADDAvenger
My basic thought is that PPC must not be that great if Apple redid everything to switch over to x86, so what am I missing here?

You're not missing anything. They're cheap and that's basically it.

Here's an amusing benchmark of Sony's Cell processor getting spanked by a three-and-a-half-year-old budget G5 processor in Linux: PS3 Performance. Makes it pretty clear why Apple ran so quickly to Intel after seeing the Cell processor in "action."

The example above reinforces the point I made in my previous post: if you just throw together a program that was designed and optimized for a different architecture, there is no support built into Cell to reorganize your code so that it runs efficiently. Specifically, the above example runs the Linux benchmarks on the Power5 core in Cell only. This core is different from other Power5 cores in that it can only issue instructions in order, can only issue two instructions at a time, and has about half the number of functional units, a smaller cache, simple branch prediction tables, and a longer pipeline. All of these changes make it smaller and more power-efficient than a regular Power5, but much slower clock-for-clock.

The reason the benchmark was able to run on Cell without a complete redesign is that the stripped-down Power5 in Cell is compatible with instructions written for other Power5 architectures. However, the benchmark doesn't use the 8 SPEs at all, or the explicit support for parallel operations below the thread level.

If you look at programs that are actually optimized to run on Cell:

8x speedup over x86, clock-for-clock, for ray tracing on Cell (look at page 9)

50x speedup for ray casting vs a 2.0GHz Apple G5

You can see that Cell can be dramatically faster if programmers take the time to optimize their code for its architecture. I will concede that it is unlikely that an end user will ever see this kind of performance in the near future, because the commodity CPU market is based around putting a small amount of work into a project to receive an acceptable level of performance rather than putting a large amount of work into a project to receive an exceptional level of performance.
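
To give a flavor of what "optimized for its architecture" means in practice, here's a rough, generic-C sketch (my own example, no Cell intrinsics) of the kind of data layout that sort of code tends to be built around: pack whole batches of rays as structure-of-arrays so one branch-free loop body can be applied across the batch, which is exactly the shape of work SIMD units like the SPEs are good at.

/* Toy structure-of-arrays kernel (generic C, my own example, not from the papers above). */
#include <stdio.h>

#define BATCH 1024

/* All z origins together, all z directions together: consecutive iterations
 * touch consecutive memory and can be vectorized. */
struct ray_batch {
    float oz[BATCH];        /* ray origin, z component                     */
    float dz[BATCH];        /* ray direction, z component (nonzero here)   */
    float t[BATCH];         /* output: hit distance, -1 means no hit       */
};

/* Intersect every ray in the batch with the plane z = plane_z.
 * No data-dependent branches, no pointer chasing. */
static void intersect_plane(struct ray_batch *r, float plane_z)
{
    for (int i = 0; i < BATCH; i++) {
        float t = (plane_z - r->oz[i]) / r->dz[i];
        r->t[i] = (t > 0.0f) ? t : -1.0f;
    }
}

int main(void)
{
    static struct ray_batch rays;
    for (int i = 0; i < BATCH; i++) {
        rays.oz[i] = 0.0f;
        rays.dz[i] = 1.0f;
    }
    intersect_plane(&rays, 10.0f);
    printf("ray 0 hits at t = %f\n", rays.t[0]);
    return 0;
}

Code written the usual object-at-a-time way, with branches and scattered memory accesses, simply doesn't map onto that hardware, which is why the unoptimized benchmarks look so bad.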

More info on cell architecture:

Microarchitecture

Cell On-Chip Network
 

Loki726

Senior member
Dec 27, 2003
228
0
0
Originally posted by: tidehigh
very informative and well written posts Loki726. I thank you.

Glad to help if I can.

Just try to keep in mind when you are reading posts like this that everyone who participates in these debates (me included) has some agenda they are trying to push. Even with an in-depth understanding and complete performance characterization of the design decisions that went into Cell, people still tend to decide for themselves which side of the argument they think is stronger. If you can, try to gather as much information from as many different perspectives as you can before you pick your own.
 

abcslayer

Junior Member
Dec 1, 2006
1
0
0
Congrats, Loki726. What you said is exactly what many people out here are thinking. Those stupid benchmarks between the Cell BE and the Mac G5 appeared everywhere on the net just because M$ fired up the marketing war with Sony (of course, there are also some articles claiming that Xenon is similar to the G5 and even better optimized!?!).
^_^
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
You can see that Cell can be dramatically faster if programmers take the time to optimize their code for its architecture. I will concede that it is unlikely that an end user will ever see this kind of performance in the near future, because the commodity CPU market is based around putting a small amount of work into a project to receive an acceptable level of performance rather than putting a large amount of work into a project to receive an exceptional level of performance.

Another factor is that many programs cannot readily be multithreaded (and/or make use of SIMD extensions) and do not rely heavily on FPU performance -- or at least not nearly to the extent that a raytracer can. Of course, the Cell processor is not exactly designed to run web browsers and word processors.

"Cell"-like streaming architectures (massively parallel, high FPU performance, high memory bandwidth but usually also high memory latency) work great for some tasks but not others. I mean, nobody's going to write desktop applications that are 'optimized' for this architecture. You have to have an algorithm that is both parallelizable and can plan its memory accesses pretty well in advance to really take advantage of it.
 

shortylickens

No Lifer
Jul 15, 2003
82,854
17,365
136
I was gonna chime in but got beat to it.
It's all about the business, NOT the performance or architecture or programming ability or anything else.
Whatever they decide to use, they will hype and advertise and hype some more until people finally buy into the BS.
That's why I didn't listen to the guy at EB when I was buying a GeForce 3.
"Hey man, you know that GF3 only puts out like 50 megaflops but the Playstation 2 puts out like 2 GIGAFLOPS MAN!"
"ORLY? So how come the GF3 can display 32-bit color at 1280x1024 at around 60 frames per second, and the PS2 runs 16-bit color at around 640x480 and only gets 30-40 frames a second, which isn't even viewable, by the way?"
You do the math on that and realize the GF3 is a powerful bastard.
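
Rough numbers, if you want to actually do that math: 1280 x 1024 x 60 fps is about 79 million 32-bit pixels a second (roughly 315 MB/s of framebuffer writes), while 640 x 480 x 35 fps is about 11 million 16-bit pixels a second (roughly 21 MB/s). Call it a 7x gap in pixels and closer to 15x in bytes pushed, whatever the "gigaflops" sticker says.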
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
shortylickens makes a good point here that I think is often overlooked in these types of discussions. You gotta remember that the best technical solution isn't always what the shareholders of these companies view as the most profitable, short or long term. The PPC CPU is a good ol' standby that has carved out a solid niche for itself as a versatile appliance CPU. If you want your stock price to go up, play it safe and stick with what works - and I can tell you now, that's not going to be the fastest, flashiest processor available.

What Nintendo did with their Wii is a great example of this. By basically repackaging the GameCube hardware in a "cuter" box and selling it as a new product called the Wii, they end up with a safe investment. Historically, Nintendo has done as little as possible with their long-running Game Boy to keep it a contender in the handheld gaming market. Ever notice how relatively small the steps are between successive Game Boy products? Yes, you guessed it: the cost of making small steps is a lot less than taking big leaps, and if you look at Nintendo's public financial statements, you'll also see they're the only one of the big 3 not losing money on hardware.

Sony took the biggest "leap" with their hardware, and it is costing them. They're leveraging their dominant market share and counting on a more substantial return in the long run, when (if?) their console matures. It is a risky investment, but if it works they will cash in within a few years.
 