Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,752
1,284
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 Gigapixels/s

16-core neural engine
Secure Enclave
USB 4
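
Those GPU numbers hang together arithmetically. A minimal sanity check, assuming the commonly reported (not Apple-published) figures of 16 execution units per GPU core, 8 FP32 ALUs per EU, an FMA counted as 2 ops, and a ~1.278 GHz clock:

Code:
# Rough consistency check of the M1 GPU numbers above.
# Assumptions (not published by Apple): 16 EUs per GPU core,
# 8 FP32 ALUs per EU, FMA = 2 ops/cycle, ~1.278 GHz clock.
cores, eus_per_core, alus_per_eu, ops_per_fma = 8, 16, 8, 2
clock_ghz = 1.278  # estimated, not official

eus = cores * eus_per_core                           # 128 execution units
tflops = eus * alus_per_eu * ops_per_fma * clock_ghz / 1000
print(eus, round(tflops, 2))                         # 128  2.62 (~"2.6 Teraflops")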

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options: 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with iPhones and iPads: just one SKU (excluding the X variants), the same across all iDevices (aside from maybe slight clock-speed differences occasionally).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, and ProRes
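
The 100 GB/s figure lines up with a 128-bit LPDDR5-6400 interface; a quick check, assuming that commonly reported (not Apple-published) configuration:

Code:
# Where M2's "up to 100 GB/s" plausibly comes from, assuming a
# 128-bit LPDDR5-6400 bus (commonly reported, not Apple-official).
bus_bits, mt_per_s = 128, 6400e6
gb_per_s = (bus_bits / 8) * mt_per_s / 1e9
print(gb_per_s)  # 102.4 -> marketed as "100 GB/s"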

M3 Family discussion here:


M4 Family discussion here:

 

eek2121

Diamond Member
Aug 2, 2005
3,051
4,273
136
I know about the great Dr. Ian Cutress and his channel.

But unfortunately he doesn't do testing on the scale of Geekerwan, or even of Andrei formerly, which is understandable since he now has little spare time to do so.
I follow him on Twitter (and enjoyed his content here at AT), and very occasionally watch his videos, but outside of constantly attempting to eat wafers and traveling, I can’t tell what he does, if anything, for a living.
 
Reactions: Tlh97 and Lodix

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Well, he doesn't get a ton of credit on this, considering his hesitance about something so obvious two months ago. Obviously N3E uses fewer EUV layers and has better parametric yields - that's the whole reason it's been speculated Apple wanted to move over ASAP. But yes.
Well, defect density should drop, as there will be fewer exposures, each of which increases the odds of contamination. What wasn’t obvious two months ago was how quickly Apple wanted to move over; the rumors were a bit thin at the time (IMHO).
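
To make that concrete, a toy Poisson yield model (all numbers are illustrative, not TSMC data) shows how cutting exposure passes lifts yield:

Code:
# Toy Poisson yield model: Y = exp(-D0 * A). Every exposure pass adds
# some defect density, so fewer EUV layers -> lower D0 -> higher yield.
# All numbers are made up for illustration; none are TSMC figures.
from math import exp

die_area_cm2 = 1.0
defects_per_pass = 0.01          # hypothetical per-exposure contribution

def yield_pct(passes):
    return 100 * exp(-passes * defects_per_pass * die_area_cm2)

print(yield_pct(25), yield_pct(20))  # ~77.9% vs ~81.9%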
 

Lodix

Senior member
Jun 24, 2016
340
116
116
Heating isn't going to be a problem with GB6 since it runs so quickly.
I see people saying this multiple times. Phones do throttle running Geekbench, even in single-core. They can't sustain more than 4-5 W, and that's including everything releasing heat, like the display, modem, RAM, etc. If you run the benchmark multiple times, you will see the scores getting lower.
 

Mopetar

Diamond Member
Jan 31, 2011
8,005
6,449
136
I follow him on Twitter (and enjoyed his content here at AT), and very occasionally watch his videos, but outside of constantly attempting to eat wafers and traveling, I can’t tell what he does, if anything, for a living.

He does consulting work as I recall. Essentially he's a freelancer. I don't think he makes much off of any of his social media presence, but he may do it as a hobby.
 

Doug S

Platinum Member
Feb 8, 2020
2,486
4,048
136
I see people saying this multiple times. Phones do throttle running Geekbench, even in single-core. They can't sustain more than 4-5 W, and that's including everything releasing heat, like the display, modem, RAM, etc. If you run the benchmark multiple times, you will see the scores getting lower.

My 14 Pro Max has consistent GB6 scores through three immediately consecutive runs and stays barely above ambient. There's just not that much load. I'm not saying that's comparable to the 15 Pro/Max with the N3B A17, but clearly not all phones throttle. (Also, I'm not sure why the cellular modem would be active during a benchmark run unless you are running something else in the background while running GB6.)
 

SpudLobby

Senior member
May 18, 2022
961
655
106
My 14 Pro Max has consistent GB6 scores through three immediately consecutive runs and stays barely above ambient. There's just not that much load. I'm not saying that's comparable to the 15 Pro/Max with the N3B A17, but clearly not all phones throttle. (Also, I'm not sure why the cellular modem would be active during a benchmark run unless you are running something else in the background while running GB6.)
Likewise I've never really had noticeable throttling on ST tests unless I'd already heated the phone up notably doing something.
 

Antey

Member
Jul 4, 2019
105
153
116
The new iPhone is really nice... But it seems like Apple's P-cores are stagnating in terms of IPC; they have barely improved in the last 5 years.

SoC | SC (GB6.2) | Freq     | IPC | ΔIPC    | Year
A11 | 1049       | 2.39 GHz | 426 |   --    | 2017
A12 | 1272       | 2.49 GHz | 510 | +19.71% | 2018
A13 | 1702       | 2.66 GHz | 639 | +25.29% | 2019
A14 | 2055       | 2.99 GHz | 687 | +7.50%  | 2020
A15 | 2231       | 3.23 GHz | 690 | +0%     | 2021
A16 | 2531       | 3.46 GHz | 731 | +5.94%  | 2022
A17 | 2892       | 3.78 GHz | 765 | +4.64%  | 2023

Once again it's September 2023, and once again we see a single-digit IPC improvement. The A17 has a mere 19.71% IPC improvement over the A13, less than what the A13 alone gained over its predecessor (+25%).
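
(For reference, the IPC column here is just the GB6.2 single-core score divided by clock - a crude proxy, not measured instructions per cycle - so it can be recomputed directly; rounding differs slightly from the table:)

Code:
# The "IPC" column above is just GB6.2 SC score / clock, a crude proxy.
# Note: the A11 row (426) implies a ~2.46 GHz clock; computed from the
# listed 2.39 GHz it would be ~439, which shrinks the A12 delta.
data = [("A11", 1049, 2.39), ("A12", 1272, 2.49), ("A13", 1702, 2.66),
        ("A14", 2055, 2.99), ("A15", 2231, 3.23), ("A16", 2531, 3.46),
        ("A17", 2892, 3.78)]
prev = None
for soc, score, ghz in data:
    ipc = score / ghz
    delta = f"{(ipc / prev - 1) * 100:+.2f}%" if prev else "--"
    print(f"{soc}  {ipc:6.1f}  {delta}")
    prev = ipc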

It seems the ARM world is not doing as spectacularly as we thought 3-4 years ago, when we were used to a 20-25% YoY IPC improvement cadence. ARM's P-cores (the X series) aren't doing that well either.

I think going wider is not the panacea it was said to be: the A17 is now a 9-wide core with close to no IPC improvement, and the ARM X4 is a 10-wide core with no surprising results. We will see Qualcomm's results with its Nuvia core (Hamoa) when it finally arrives.

AMD is also going wider with Zen 5, but it comes from a fairly narrow core (4-wide). Intel went with a wider design (Golden Cove - was it 6-wide?) with similar results in terms of IPC.

Did Apple/ARM go too wide with their cores? Was it the correct path? Were they set back by Moore's law "stopping" and bringing fewer and fewer improvements each year? Can AMD/Intel catch up to Apple/ARM in IPC thanks to the lithographic stagnation, and even stay ahead thanks to their frequency advantage and the power headroom available?

In 2019-2020, x86-64 looked like it would be dead soon. I'm just a layman who likes to build PCs as a hobby and read about them. At that time we had people coming here arguing ARM was the future and x86 was doomed: Apple had ditched Intel and released Rosetta, ARM and Microsoft were working on Windows on ARM with Qualcomm's 8cx processors, and it was very hard to argue otherwise given Apple's year-over-year performance improvements.

It's really interesting how things turned out.
 

Doug S

Platinum Member
Feb 8, 2020
2,486
4,048
136
It seems the ARM world is not doing as spectacularly as we thought 3-4 years ago, when we were used to a 20-25% YoY IPC improvement cadence. ARM's P-cores (the X series) aren't doing that well either.

There are always diminishing returns. Apple could get huge gains all the time because they started from such a low point with their first custom CPU design, the A6. They now have a big lead over everyone in IPC, and getting 20-25% in a single iteration at that level is simply never going to happen again. Most of the low-hanging fruit has been plucked at this point.
 

Orfosaurio

Junior Member
Sep 23, 2023
23
6
41
There are always diminishing returns. Apple could get huge gains all the time because they started from such a low point with their first custom CPU design, the A6. They now have a big lead over everyone in IPC, and getting 20-25% in a single iteration at that level is simply never going to happen again. Most of the low-hanging fruit has been plucked at this point.
Kinda.
 
Sep 18, 2023
26
13
41
The new iPhone is really nice... But it seems like Apple's P-cores are stagnating in terms of IPC; they have barely improved in the last 5 years.

SoC | SC (GB6.2) | Freq     | IPC | ΔIPC    | Year
A11 | 1049       | 2.39 GHz | 426 |   --    | 2017
A12 | 1272       | 2.49 GHz | 510 | +19.71% | 2018
A13 | 1702       | 2.66 GHz | 639 | +25.29% | 2019
A14 | 2055       | 2.99 GHz | 687 | +7.50%  | 2020
A15 | 2231       | 3.23 GHz | 690 | +0%     | 2021
A16 | 2531       | 3.46 GHz | 731 | +5.94%  | 2022
A17 | 2892       | 3.78 GHz | 765 | +4.64%  | 2023

Once again it's September 2023, and once again we see a single-digit IPC improvement. The A17 has a mere 19.71% IPC improvement over the A13, less than what the A13 alone gained over its predecessor (+25%).

It seems the ARM world is not doing as spectacularly as we thought 3-4 years ago, when we were used to a 20-25% YoY IPC improvement cadence. ARM's P-cores (the X series) aren't doing that well either.

I think going wider is not the panacea it was said to be: the A17 is now a 9-wide core with close to no IPC improvement, and the ARM X4 is a 10-wide core with no surprising results. We will see Qualcomm's results with its Nuvia core (Hamoa) when it finally arrives.

AMD is also going wider with Zen 5, but it comes from a fairly narrow core (4-wide). Intel went with a wider design (Golden Cove - was it 6-wide?) with similar results in terms of IPC.

Did Apple/ARM go too wide with their cores? Was it the correct path? Were they set back by Moore's law "stopping" and bringing fewer and fewer improvements each year? Can AMD/Intel catch up to Apple/ARM in IPC thanks to the lithographic stagnation, and even stay ahead thanks to their frequency advantage and the power headroom available?

In 2019-2020, x86-64 looked like it would be dead soon. I'm just a layman who likes to build PCs as a hobby and read about them. At that time we had people coming here arguing ARM was the future and x86 was doomed: Apple had ditched Intel and released Rosetta, ARM and Microsoft were working on Windows on ARM with Qualcomm's 8cx processors, and it was very hard to argue otherwise given Apple's year-over-year performance improvements.

It's really interesting how things turned out.

I wonder whether the wider designs have an impact on performance at peak efficiency. Neither AMD nor Intel can match ARM or Apple Silicon when targeting peak efficiency; however, the extra width might not have the expected effect at peak performance.

What other characteristics of the core designs can one highlight? Intel seems to have a gargantuan big-core design; if theirs is not the widest yet, why is their core so big?
 
Reactions: Orfosaurio

Doug S

Platinum Member
Feb 8, 2020
2,486
4,048
136
Goes to show that benchmarks mean nothing; real-world tests do matter. Also shows that Qualcomm GPUs are only good for gaming.

While I agree, showing results with a single app isn't any better than trying to capture performance with a single benchmark. Maybe that's representative of real world behavior, or maybe that software is optimized for iOS and poorly ported to Android.

Apple sure sees a lot of that sort of thing in GPU comparisons between macOS and Windows, because so many Mac apps are ports that aren't optimized for Metal and Apple's tile-based deferred-rendering GPU. So I'd want to know more about the provenance of that app and see whether similar behavior can be replicated elsewhere in a neutral way (i.e. not cherry-picking more apps that confirm it, or cherry-picking something that claims to refute it).
 

Eug

Lifer
Mar 11, 2000
23,752
1,284
126
While I agree, showing results with a single app isn't any better than trying to capture performance with a single benchmark. Maybe that's representative of real world behavior, or maybe that software is optimized for iOS and poorly ported to Android.
According to some Onshape forum posts, the app is far better optimized on iOS than Android.
 

Doug S

Platinum Member
Feb 8, 2020
2,486
4,048
136
According to some Onshape forum posts, the app is far better optimized on iOS than Android.

Yeah, I think in some ways the mobile world is the opposite of the PC world. In the PC world, Apple is the also-ran that gets bad ports; in the mobile world, it is Android that's the also-ran getting bad ports.
 

soresu

Platinum Member
Dec 19, 2014
2,959
2,180
136
Yeah, I think in some ways the mobile world is the opposite of the PC world. In the PC world, Apple is the also-ran that gets bad ports; in the mobile world, it is Android that's the also-ran getting bad ports.
Depends on the game studio doing the porting, plus Android was never the best OS for games in general.

The sooner Fuchsia takes over the better, not that it will change much for game devs' perception of Google, mind you.
 

Orfosaurio

Junior Member
Sep 23, 2023
23
6
41
Goes to show that benchmarks mean nothing; real-world tests do matter. Also shows that Qualcomm GPUs are only good for gaming.
The point of benchmarks is to give an approximation of real-world performance. As for Qualcomm GPUs, they only shine in benchmarks. It's weird; maybe it's because of a CPU bottleneck. Games on Android are still less graphically demanding than on iOS and iPadOS, but the difference is not that visible given the relatively small size of phone screens.
According to some Onshape forum posts, the app is far better optimized on iOS than Android.
Do they provide any proof that a lack of optimization is the root of the app's lower GENERAL performance?

The sooner Fuchsia takes over the better, not that it will change much for game devs' perception of Google, mind you.
But is it really a problem with software? Big games commonly receive money from Android OEMs like Samsung.
 

hemedans

Senior member
Jan 31, 2015
207
102
116
The point of benchmarks is to give an approximation of real-world performance. As for Qualcomm GPUs, they only shine in benchmarks. It's weird; maybe it's because of a CPU bottleneck. Games on Android are still less graphically demanding than on iOS and iPadOS, but the difference is not that visible given the relatively small size of phone screens.

Do they provide any proof that a lack of optimization is the root of the app's lower GENERAL performance?

But is it really a problem with software? Big games commonly receive money from Android OEMs like Samsung.
Sometimes rendering accelerators can be used to increase rendering speed even if the GPU is weak, like how Intel Quick Sync does well compared to more powerful AMD/Nvidia GPUs.
 

eek2121

Diamond Member
Aug 2, 2005
3,051
4,273
136
x86 is not anywhere gaming and businesses rely on x86
What does this even mean?
There are always diminishing returns. Apple could get huge gains all the time because they started from such a low point with their first custom CPU design the A6. They now have a big lead over everyone in IPC, and getting 20-25% in a single iteration is simply never going to happen again at that level. Most of the low hanging fruit has been plucked at this point.
I told people as much pretty early on, even here on this forum. I got told I was wrong. 🤷‍♂️

At any rate, Apple’s mobile chips aren’t really hurting for more CPU power. I think their focus on the GPU is the right move for now. It isn’t like you can buy a smartphone that has something faster in it. None of the released smartphones come anywhere close to the iPhone in terms of performance.

GPU performance is important not just for gaming, but for GPGPU workloads like AI, and also productivity workloads like encoding, 3D rendering, etc.

Having a phone that can play games is kinda nice as well. I am a PC guy, but I definitely have a few games installed on my iPhone.

As for the laptop/desktop arena, the same mostly applies, except things are much more competitive there. AMD in particular is not sitting around idly. (Well, possibly the GPU team is.)
 

soresu

Platinum Member
Dec 19, 2014
2,959
2,180
136
But is it really a problem with software? Big games commonly receive money from Android OEMs like Samsung.
The way I've heard it, the problem is partly Android itself, but mostly the insecure nature of the Play Store and the prevalence of sideloading piracy versus Apple's iOS.
 
Mar 11, 2004
23,175
5,641
146
While I agree, showing results with a single app isn't any better than trying to capture performance with a single benchmark. Maybe that's representative of real world behavior, or maybe that software is optimized for iOS and poorly ported to Android.

Apple sure sees a lot of that sort of thing in GPU comparisons between macOS and Windows, because so many Mac apps are ports that aren't optimized for Metal and Apple's tile-based deferred-rendering GPU. So I'd want to know more about the provenance of that app and see whether similar behavior can be replicated elsewhere in a neutral way (i.e. not cherry-picking more apps that confirm it, or cherry-picking something that claims to refute it).

Agreed, unless it's the only app you use. As I've said, the best benchmark these days is probably just doomscrolling on social media, since that's what most people are chewing through their batteries doing.

According to some Onshape forum posts, the app is far better optimized on iOS than Android.

This was my assumption. Lots of companies built Apple native versions of apps that they didn't bother with elsewhere.

Depends on the game studio doing the porting, plus Android was never the best OS for games in general.

The sooner Fuchsia takes over the better, not that it will change much for game devs' perception of Google, mind you.

I hope you don't mean Fuchsia, the Google OS that even they don't care much about. It's in what, 2 devices of Google's? It barely had ARM support, and only fairly recently even got Chrome. And we know how Google operates. Frankly, I think the main reason it hasn't been killed off yet is that it's still too obscure within the company for the execs to know about it and kill it.

Fuchsia sounds interesting and could have potential, but Google is only ever about a quarter interested in anything they're doing, and I don't think there's any hope it ever develops into what people want it to be (basically a ground-up rework of Android that gets rid of many of its issues and builds a true single platform across modern devices; at this point they might need to start porting it to RISC-V, since by the time Fuchsia is ready we might be looking at RISC-V finally pushing ARM).

Sometimes rendering accelerators can be used to increase rendering speed even if the GPU is weak, like how Intel Quick Sync does well compared to more powerful AMD/Nvidia GPUs.

Those are two completely different things. I don't know of any 3D-modeling graphics rendering that Quick Sync accelerates relative to a graphics chip (although I have made the case for using it for the image-processing/frame-generation stuff they're now attempting), and AMD and Nvidia are not using their GPU shaders to process video rendering (although you can sometimes use them for that); they use dedicated hardware for it, just like Intel does. AMD has that in their APUs (I don't recall whether they added any to their traditional CPUs now that those integrate a bit of GPU as well), same as Intel does. And Intel and Nvidia also offer that capability in their dGPU chips.
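
To illustrate the dedicated-hardware point: all four vendors expose their video encode blocks through ffmpeg, so a rough sketch like the one below (assuming ffmpeg is on PATH and was built with these encoders; the file names are hypothetical) can pick a hardware H.264 encoder when one is present and fall back to software x264 otherwise.

Code:
# Sketch: pick a dedicated H.264 encode block if one is available,
# falling back to software x264. Assumes ffmpeg is on PATH and was
# built with these encoders; the ids are real ffmpeg encoder names.
import subprocess

HW_ENCODERS = ["h264_videotoolbox",  # Apple media engine
               "h264_qsv",           # Intel Quick Sync
               "h264_nvenc",         # NVIDIA NVENC
               "h264_amf"]           # AMD VCN

def available_encoders():
    # List the encoders this ffmpeg build supports.
    out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                         capture_output=True, text=True).stdout
    return [e for e in HW_ENCODERS if e in out] or ["libx264"]

def encode(src, dst):
    enc = available_encoders()[0]
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", enc, dst], check=True)

encode("input.mov", "output.mp4")  # hypothetical file names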
 
Reactions: moinmoin