Discussion [TweakTown] Guerrilla dev: PS3's Cell CPU is by far stronger than new Intel CPUs

Page 2 - AnandTech Forums discussion

Hitman928

Diamond Member
Apr 15, 2012
5,593
8,767
136

Even desktop chips nowadays, the fastest Intel stuff you can buy is not by far as powerful as the Cell CPU, but it's very difficult to get power out of the Cell. I think it was ahead of its age, because it was a little bit more like how GPUs work nowadays, but it was maybe not balanced nicely and it was too hard to use. It overshot a little bit in power and undershot in usability, but it was definitely visionary

Interesting perspective. I completely disagree, but interesting. I think in a way it's true, with the Cell having different types of compute units, especially the SPEs, which made it very powerful (on paper) for SIMD workloads compared to CPUs of the time. But I don't see how it could be considered any more powerful than a modern APU.
 

DrMrLordX

Lifer
Apr 27, 2000
21,794
11,143
136
Ah man, I literally just watched the LTT video on the SPX, and man, even clocked at a ludicrous 3 GHz, with 16 GB of DDR4 and a state-of-the-art 256 GB PCIe SSD, it's... kind of terrible?

I see that as more of a software problem than a hardware problem. If anyone can get that thing to boot Linux, it might be worth looking at. A little off-topic, though.

I would expect Cell running a custom Windows 10S version to be even worse.
 

soresu

Platinum Member
Dec 19, 2014
2,941
2,164
136
Arkaign

When I get bored, I fire up my Kabini box (A4-5000) and play around. Yes, it's got 4 cores, but man... you literally have to adjust to the experience before you can actually use it, it's that slow. It definitely feels slower than any of the Pentium 4s I used back in the day (~3 GHz SKUs). Only good for watching YouTube videos and thinking: that's the kind of hardware consoles have got, mmm... sweet! Not.

Honestly, even with 8 slightly higher-clocked cores, I have no idea how the PS4 is able to produce that kind of FPS/quality in those games. On the PC side you would quickly become CPU-limited in any of those games with similar GPU power.


Or maybe the PS4 IS slow but it's just extremely well optimized... I know that if I can't get at least 60 fps in an FPS game, I start dropping graphics settings. You see, what counts as good here is very subjective. In the late 90s there was a famous Diamond Monster 3D II ad: "if you can't play at 60 fps, you can't play". That's definitely me.
PS4 and XB1 devs supposedly put a lot of effort into running tasks on the GPU where possible to take the strain off the CPU.

The fact that there were 8 CPU cores did mean that well-optimised parallel code should have run pretty decently too (yes, I know the UI and other things claimed some cores first). I suspect that AVX was added to Jaguar in order to give some code reuse for devs porting to or from PC.

Either way the PS5 and XB2 should be a great step up even if they only go with Zen2 and not Zen3.
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
Either way the PS5 and XB2 should be a great step up even if they only go with Zen2 and not Zen3.
Devkits for the PS5 are already in the wild, and they use Zen 2. It's probably too late in the design process to switch to Zen 3, unless it ends up being only a very minor revision over Zen 2.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
Honestly, even with 8 slightly higher-clocked cores, I have no idea how the PS4 is able to produce that kind of FPS/quality in those games. On the PC side you would quickly become CPU-limited in any of those games with similar GPU power.


It's certainly well optimized, with its own to-the-metal API. But more importantly, most console titles run at 30 fps, meaning you already need half the CPU power compared to 60 fps. And I remember from a PS4 Pro article that the chip has a custom "command processor" that actually issues all the draw calls, heavily reducing CPU single-threaded limitations. Hm, now that I think about it, that may actually be what the Xbox One X (aka Scorpio) has.
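The "half the CPU power" point is simple frame-budget arithmetic, worth making explicit (a quick sketch using only the 30/60 fps targets discussed in the thread):

```python
# Per-frame time budget at a given framerate target.
def frame_budget_ms(fps: int) -> float:
    return 1000.0 / fps

# A 30 fps console target leaves exactly twice the per-frame CPU time
# of a 60 fps target.
print(f"{frame_budget_ms(30):.1f} ms at 30 fps")  # 33.3 ms at 30 fps
print(f"{frame_budget_ms(60):.1f} ms at 60 fps")  # 16.7 ms at 60 fps
```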
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
While Zen 2 on the consoles (the newest leak pointing to 8c/16t at 3.5 GHz) is great for their performance, I can't be the only one worried about the possible impact on 60 fps PC gaming, much less high-refresh PC gaming.

PC CPUs (and associated subsystems) were so much more powerful than those in the PS4/Xbox One that, despite the optimization disadvantages and 30 fps game targets, it was possible to brute force 60 fps and higher.

If the new consoles and games still only target 30 fps (for "cinematic" reasons) while taking up all that the CPUs have to offer, what will that mean for PC ports and the associated CPU requirements? We are not going to achieve anytime soon (and doubtfully over the lifetime of the PS5/Xbox) a similar single-threaded advantage over Zen 2 (with Zen 3, 4, 5, etc.) like the one current CPUs enjoy over existing consoles, so we won't have the same hardware differential to brute force the problem.

Even with a 60 fps target, I wonder what would actually count as matching hardware on the PC side given the likely "overhead" differences. Imagine if Zen 3 (or even beyond) is effectively slower than the next console CPUs.
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
We are not going to achieve anytime soon (and doubtfully over the lifetime of the PS5/Xbox) a similar single-threaded advantage over Zen 2 (with Zen 3, 4, 5, etc.) like the one current CPUs enjoy over existing consoles, so we won't have the same hardware differential to brute force the problem.
I personally consider this a good thing. This will increase scrutiny of the whole software stack and may make using different OSes for gaming more feasible. PCs will always have the advantage of being able to move to better hardware over time, unlike closed systems. PCs also offer the freedom of using differently optimized software. With the gap in the former decreasing, the latter will become more important.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
I personally consider this a good thing. This will increase scrutiny of the whole software stack and may make using different OSes for gaming more feasible. PCs will always have the advantage of being able to move to better hardware over time, unlike closed systems. PCs also offer the freedom of using differently optimized software. With the gap in the former decreasing, the latter will become more important.

But we're talking about the PC side of gaming. Based on past history, the likelihood is that it is not going to be an optimization priority. The optimization targets have always been the consoles. The developer perspective will be that you'll just experience that "cinematic" 30 fps like you would on the console, as that is what the game is optimized for.

Except that currently (and in what will soon be the past) the PC side could brute force through that lack of optimization and still offer a "better"-than-console experience (whether that be higher framerates or more graphics settings, such as draw distance) due to the excess hardware capability available. That isn't going to exist anymore.
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
But we're talking about the PC side of gaming. Based on past history, the likelihood is that it is not going to be an optimization priority. The optimization targets have always been the consoles. The developer perspective will be that you'll just experience that "cinematic" 30 fps like you would on the console, as that is what the game is optimized for.

Except that currently (and in what will soon be the past) the PC side could brute force through that lack of optimization and still offer a "better"-than-console experience (whether that be higher framerates or more graphics settings, such as draw distance) due to the excess hardware capability available. That isn't going to exist anymore.
I feel the problem there is that you are (unknowingly?) conflating PC with Windows 10. I'm primarily a Linux user, and I see a lot of optimization potential for PC gaming at the OS level.
 

soresu

Platinum Member
Dec 19, 2014
2,941
2,164
136
I feel the problem there is that you are (unknowingly?) conflating PC with Windows 10. I'm primarily a Linux user, and I see a lot of optimization potential for PC gaming at the OS level.
The performance differences between Clear Linux and Ubuntu suggest there are significant inefficiencies in the stack somewhere.

I wonder if Valve might rebase Steam OS on Clear Linux.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
I see that as more of a software problem than a hardware problem. If anyone can get that thing to boot Linux, it might be worth looking at. A little off-topic, though.

I would expect Cell running a custom Windows 10S version to be even worse.

Agreed, that would be fascinating, and W10 ARM is likely to be an afterthought of an afterthought, so a good Linux build could be interesting.

Which takes us back to the PS3 and Cell. I never did the PS3 Linux install, or kept up with it; Sony stopped supporting it after a certain firmware update and got class-action sued over it. But surely some people have modded PS3s to run Linux. How did that run, I wonder? Anyone here have experience with it?

It's also kind of impressive how many variants of the Cell and RSX they made over the PS3's run. They kept porting them to new process tech, making the console slimmer and quieter. Cool to see that kind of progress back in the day. Now new nodes are a real event, when they happen at all.
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
The performance differences between Clear Linux and Ubuntu suggest there are significant inefficiencies in the stack somewhere.

I wonder if Valve might rebase Steam OS on Clear Linux.
As I said, inefficiencies are everywhere on PC. Which is why I consider it a good development that there's less room for the hardware to brute force through them, so software improvements have a bigger impact/reward.

As for Steam OS, it's essentially a plain Linux distro with a specific set of libraries, combined with Steam Big Picture mode as the primary shell. In general you can set up Clear Linux like that as well. Clear Linux's biggest issue is that it doesn't have the wealth of support other Linux distributions have, and since they consider themselves an "online OS", a lot of "offline" software is just not considered.
 

Mopetar

Diamond Member
Jan 31, 2011
8,004
6,446
136
While Zen 2 on the consoles (the newest leak pointing to 8c/16t at 3.5 GHz) is great for their performance, I can't be the only one worried about the possible impact on 60 fps PC gaming, much less high-refresh PC gaming.

PC CPUs (and associated subsystems) were so much more powerful than those in the PS4/Xbox One that, despite the optimization disadvantages and 30 fps game targets, it was possible to brute force 60 fps and higher.

If the new consoles and games still only target 30 fps (for "cinematic" reasons) while taking up all that the CPUs have to offer, what will that mean for PC ports and the associated CPU requirements?

I wouldn't worry about it. The Jaguar CPUs in the previous consoles are significantly weaker than what the next generation consoles will get.

The original PS4 had its 8 CPU cores running at 1.6 GHz, so even ignoring everything else they're getting roughly twice as much performance just from clock speed increases. The amount of cache on the CPU is increasing by a staggering amount as well (going from 2 x 2 MB L2 cache to 32 MB L3 cache).

I understand the whole argument that developers will just get lazy and won't optimize as hard, but there's so much of an increase in resources on the CPU side that it's hard to imagine that happening right away.
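A back-of-envelope version of that argument, using only the clock figures quoted in the thread (the IPC multiplier is a placeholder guess, not a measured number):

```python
# Rough generational CPU uplift estimate for the next-gen consoles.
jaguar_clock_ghz = 1.6      # original PS4 Jaguar cores
zen2_clock_ghz = 3.5        # leaked next-gen clock mentioned in the thread
assumed_ipc_factor = 2.0    # hypothetical Zen 2 vs Jaguar per-clock gain

clock_uplift = zen2_clock_ghz / jaguar_clock_ghz
total_uplift = clock_uplift * assumed_ipc_factor
print(f"clocks alone: {clock_uplift:.2f}x")   # clocks alone: 2.19x
print(f"with assumed IPC gain: roughly {total_uplift:.1f}x")
```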
 

soresu

Platinum Member
Dec 19, 2014
2,941
2,164
136
The original PS4 had its 8 CPU cores running at 1.6 GHz, so even ignoring everything else they're getting roughly twice as much performance just from clock speed increases. The amount of cache on the CPU is increasing by a staggering amount as well (going from 2 x 2 MB L2 cache to 32 MB L3 cache).
Yes, and much more so again for AVX-heavy code at the same clocks.

It should handle at least AV1 decoding without an ASIC, and probably the generation after it too (presuming dav1d-level efficiency in the software decoder).
 

Mopetar

Diamond Member
Jan 31, 2011
8,004
6,446
136
Yes, and much more so again for AVX-heavy code at the same clocks.

It should handle at least AV1 decoding without an ASIC, and probably the generation after it too (presuming dav1d-level efficiency in the software decoder).

I guess I don't know a lot about games, but do they even really include all that much that would necessitate or benefit from AVX instructions? Most of that kind of work in games would already be done on the GPU, because it's good at those types of things. I'm sure there are some tasks that could benefit, but it seems like those would be a small part of the overall code, so you're only seeing the massive performance gains something like 1% of the time for the majority of game titles.

I realize that these consoles are more than just gaming systems now, but outside of video processing/encoding I don't see a lot of use. I guess it's a big deal if you're trying to stream from your console, but I'm not even sure how popular that is.
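That "1% of the time" intuition is essentially Amdahl's law; a quick sketch with made-up fractions shows why a fast SIMD path on a small slice of the frame barely moves the total:

```python
# Amdahl's law: overall speedup when only part of the work is accelerated.
def amdahl_speedup(fraction: float, factor: float) -> float:
    """fraction: share of runtime that is accelerated; factor: its speedup."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# If AVX quadruples the speed of 5% of a frame's CPU work,
# the whole frame only gets about 4% faster.
print(round(amdahl_speedup(0.05, 4.0), 3))  # 1.039
```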
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I guess I don't know a lot about games, but do they even really include all that much that would necessitate or benefit from AVX instructions? Most of that kind of work in games would already be done on the GPU, because it's good at those types of things. I'm sure there are some tasks that could benefit, but it seems like those would be a small part of the overall code, so you're only seeing the massive performance gains something like 1% of the time for the majority of game titles.

I think games use AVX/AVX2 mostly for physics and particle effects. PhysX uses AVX for cloth simulation, for instance.

Also, BF5 uses AVX/AVX2. I know this because if I enable the AVX offset in my UEFI BIOS, the CPU runs at a lower clock speed when playing the game. It probably uses them for physics and particle effects.
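For a sense of why physics and particle updates vectorize so well: they apply the same arithmetic across long arrays of independent elements, which is exactly what AVX's 8-wide single-precision lanes want. A toy sketch (plain Python standing in for what a compiler would emit as an AVX loop):

```python
# Structure-of-arrays particle integration: one multiply-add per element,
# with no cross-element dependencies, so an AVX2 build can process
# 8 single-precision values per instruction.
def integrate(positions, velocities, dt):
    return [p + v * dt for p, v in zip(positions, velocities)]

xs = [0.0, 1.0, 2.0]   # particle x positions
vx = [4.0, 4.0, 4.0]   # particle x velocities
print(integrate(xs, vx, 0.25))  # [1.0, 2.0, 3.0]
```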
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Both of those things can be done on the GPU.

A recent update to the UE4 dev tools, called Niagara, is geared more towards GPU particles than the old Cascade tool.

Yeah, I know they can run on the GPU, but depending on the game and setup, it may not be practical. For instance, most games are going to be GPU-limited on PCs, which may leave the GPU with little wiggle room for extra duties like physics (especially destruction) and particle effects.

But the biggest reason, I think, is PCs with discrete graphics cards. If the GPU has to calculate physics, it can introduce more latency, as the processing has to be synchronized with the game thread, which runs on the CPU. All that back and forth across the PCIe bus between the CPU and GPU can end up lowering performance and introducing stuttering and lag. This was a huge problem when I played Borderlands 2 with PhysX on a dedicated PhysX card.

I used to be a huge proponent of GPU PhysX back in the day. For several years I even used a dedicated PhysX card in my systems, along with SLI. But since NVidia and Havok started optimizing their CPU physics algorithms for multicore and SIMD, I've become more of an opponent of GPU game physics. I remember when Ageia's PPU first made dynamic cloth simulation possible in games, as no CPU at the time was powerful enough to do the calculations without a serious hit to performance. And when NVidia bought the tech and ported it over to their GPUs, they improved on it significantly.

But today, cloth simulation runs extremely well on modern CPUs due to multicore and SIMD optimizations. In fact, I've read that it's more efficient and performant to run cloth simulation on the CPU rather than on the GPU. Also, Epic is in the process of switching their game engine over from NVidia's PhysX to their brand-new Chaos physics engine, which Intel has helped develop with its Intel® Implicit SPMD Program Compiler technology. The former can utilize the GPU, as we all know, but Chaos runs only on the CPU and is highly optimized for today's multicore/multithreaded CPUs with wide vector SIMD. Judging by the demo, it looks very impressive:


Intel news announcement

So with CPUs getting more and more cores and threads and wider vectors, it makes perfect sense to focus on software physics rather than burdening the GPU even more if you ask me.
 

soresu

Platinum Member
Dec 19, 2014
2,941
2,164
136
But today, cloth simulation runs extremely well on modern CPUs due to multicore and SIMD optimizations. In fact, I've read that it's more efficient and performant to run cloth simulation on the CPU rather than on the GPU. Also, Epic is in the process of switching their game engine over from NVidia's PhysX to their brand-new Chaos physics engine, which Intel has helped develop with its Intel® Implicit SPMD Program Compiler technology. The former can utilize the GPU, as we all know, but Chaos runs only on the CPU and is highly optimized for today's multicore/multithreaded CPUs with wide vector SIMD. Judging by the demo, it looks very impressive:
For now, yes, it's CPU-bound. This doesn't surprise me, as a complex new tool/system is best implemented on the CPU first for stability's sake.

It would not surprise me to see it gain a GPU implementation in the future though. Epic have become extremely proactive with development recently, including acquisitions like Shave and a Haircut and Quixel, and even Intel themselves are pivoting towards non-CPU compute with oneAPI.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
For now, yes, it's CPU-bound. This doesn't surprise me, as a complex new tool/system is best implemented on the CPU first for stability's sake.

It would not surprise me to see it gain a GPU implementation in the future though. Epic have become extremely proactive with development recently, including acquisitions like Shave and a Haircut and Quixel, and even Intel themselves are pivoting towards non-CPU compute with oneAPI.

I'm all in favor of more processing power for physics no matter what form it comes in. Physics unfortunately has always taken a backseat to graphics when it comes to gaming, but there is plenty of potential there to exploit and make games more immersive and realistic.

But having delved deep into GPU PhysX over the years, I know its weaknesses, so I'm wary of the industry going down that road again. With 4K, HDR and high-refresh-rate monitors becoming more ubiquitous, the GPU is more burdened than ever, and I just don't see how piling on even more work would improve the situation, plus there are the latency issues I mentioned earlier. I think GPU physics makes more sense on consoles, where the CPU and GPU share the same memory pool.

It's ironic that NVidia itself has done more than anyone to get rid of GPU physics (at least on PCs) over the years. First by making GPU PhysX CUDA-only, and second by completely rebuilding software PhysX from the ground up for multithreading and SIMD, starting with PhysX 3.0. The former ensured that AMD GPUs would always be excluded from GPU PhysX, making it hard for developers to justify additional resources for implementing it in their games. And the latter ported what USED to be exclusive GPU PhysX effects over to software, so that now everyone can run them regardless of the GPU brand in their system.

As I said in my previous post, there was a time when cloth simulation was considered too intensive for any CPU to run, and the same goes for fancy particle and destruction effects. Now all three run exceedingly well on modern CPUs.
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
As you can see, Jaguar is barely better than the Pentium 4 clock for clock. Yet it replaced the PS3's processor in the 8th gen, at 1.6 GHz vs the 3.2 GHz Cell. And it has been shown to be capable of running the same games, such as The Last of Us, the Uncharted series, and God of War 3, which were the most lauded 7th-gen titles, at equal or higher framerates.

Bear in mind that a lot of the tasks that you would offload to a Cell SPE are things that would probably be run as a compute shader on a current-gen console.
 

soresu

Platinum Member
Dec 19, 2014
2,941
2,164
136
It's ironic that NVidia itself has done more than anyone to get rid of GPU physics (at least on PCs) over the years. First by making GPU PhysX CUDA-only, and second by completely rebuilding software PhysX from the ground up for multithreading and SIMD, starting with PhysX 3.0. The former ensured that AMD GPUs would always be excluded from GPU PhysX, making it hard for developers to justify additional resources for implementing it in their games. And the latter ported what USED to be exclusive GPU PhysX effects over to software, so that now everyone can run them regardless of the GPU brand in their system.

The sad case of GPU Havok (and post-acquisition Havok in general) did contribute somewhat to the problem. It's a shame that Valve or Epic did not have the drive and resources to influence the market as they do today back when Havok was acquired by Intel; the game physics market could have been much richer by now if they had.

I guess we can only hope that Epic surges forward and makes Chaos and Niagara a de facto standard for features and performance. Whether or not others like Valve or Unity match those tools in their own engines, UE4 is becoming a sort of market standard all by itself.
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
The original PS4 had its 8 CPU cores running at 1.6 GHz, so even ignoring everything else they're getting roughly twice as much performance just from clock speed increases. The amount of cache on the CPU is increasing by a staggering amount as well (going from 2 x 2 MB L2 cache to 32 MB L3 cache).

I wouldn't bet on that. Zen 1 had reduced cache in some variants, with the APUs getting 4 MB per CCX while the server CPUs got 8 MB per CCX. The amount of cache on Zen 2 server CPUs is, as you pointed out, enormous, and it takes up a huge amount of expensive 7nm die area:


If I were creating a console APU, the obvious first place to reduce cost would be to slash the L3 cache. Cut it down to 16 MB (or even 8 MB!) and use that die space for extra GPU shaders instead.
 

Nereus77

Member
Dec 30, 2016
142
251
136
The optimization targets have always been the consoles. The developer perspective will be that you'll just experience that "cinematic" 30 fps like you would on the console, as that is what the game is optimized for.

Except that currently (and in what will soon be the past) the PC side could brute force through that lack of optimization and still offer a "better"-than-console experience (whether that be higher framerates or more graphics settings, such as draw distance) due to the excess hardware capability available. That isn't going to exist anymore.

Actually, that will make porting games from console to PC way easier. Zen 2 on console to Zen 2 on PC. Done and dusted.

Note that PC gaming performance will always be "a console generation" ahead of consoles due to the PC's rapid iteration and modular nature. PCs and consoles will be similar for all of one year, and then *BOOM*, Zen 3 and RTX 3080s will hit the shelves. Also, PCs should have better GPUs and cooling, and therefore faster-running components and better gaming performance.
 

Adonisds

Member
Oct 27, 2019
98
33
51
While Zen 2 on the consoles (the newest leak pointing to 8c/16t at 3.5 GHz) is great for their performance, I can't be the only one worried about the possible impact on 60 fps PC gaming, much less high-refresh PC gaming.

PC CPUs (and associated subsystems) were so much more powerful than those in the PS4/Xbox One that, despite the optimization disadvantages and 30 fps game targets, it was possible to brute force 60 fps and higher.

If the new consoles and games still only target 30 fps (for "cinematic" reasons) while taking up all that the CPUs have to offer, what will that mean for PC ports and the associated CPU requirements? We are not going to achieve anytime soon (and doubtfully over the lifetime of the PS5/Xbox) a similar single-threaded advantage over Zen 2 (with Zen 3, 4, 5, etc.) like the one current CPUs enjoy over existing consoles, so we won't have the same hardware differential to brute force the problem.

Even with a 60 fps target, I wonder what would actually count as matching hardware on the PC side given the likely "overhead" differences. Imagine if Zen 3 (or even beyond) is effectively slower than the next console CPUs.
I think most 30 fps next-gen games will be 30 fps because of the GPU, so the PC version will see significantly better framerates.

But you're right, it's going to be much harder to get very high frame rates, and that could be a good thing. PC gamers have been complaining since the PS3 era that PC games are held back by consoles. If a developer fully uses the CPU to create a 30 fps game, that should make for a very interesting game in terms of its logic. And if the PC version can only run at 45 fps, that's not too bad with FreeSync.
 