Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,953
1,567
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhone and iPad: just one SKU (excluding the X variants), shared across all iDevices (aside from occasional slight clock speed differences).
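The 2.6-teraflop figure above follows directly from the ALU count and clock. A quick back-of-envelope check in Python (the ~1.28 GHz GPU clock and 8 FP32 ALUs per execution unit are commonly reported estimates, not official Apple specs):

```python
# Back-of-envelope check of the M1 GPU spec-sheet numbers.
# Assumptions (not official Apple figures): 8 ALUs per EU, ~1.278 GHz clock.
GPU_CORES = 8
EUS_PER_CORE = 16            # 128 execution units total / 8 cores
ALUS_PER_EU = 8              # assumed: 1024 FP32 ALUs total
CLOCK_HZ = 1.278e9           # assumed GPU clock
FLOPS_PER_ALU_PER_CYCLE = 2  # a fused multiply-add counts as 2 FLOPs

eus = GPU_CORES * EUS_PER_CORE   # 128 execution units, matching the spec
alus = eus * ALUS_PER_EU         # 1024 FP32 ALUs
peak_tflops = alus * FLOPS_PER_ALU_PER_CYCLE * CLOCK_HZ / 1e12

print(f"{eus} EUs, {alus} ALUs, ~{peak_tflops:.1f} TFLOPs")  # ~2.6 TFLOPs
```

The same arithmetic (ALUs x 2 FLOPs x clock) is the standard way GPU vendors quote peak FP32 throughput, which is why the number says little about real-world performance.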

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265 (HEVC), and ProRes
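Running the M1 arithmetic backwards, the M2's 3.6-teraflop figure implies its GPU clock, assuming the same per-core ALU width as the M1 (128 FP32 ALUs per core is an assumption here, not an Apple-published number):

```python
# Inverting the M2 spec: given 3.6 TFLOPs and a 10-core GPU,
# what clock does that imply?
GPU_CORES = 10
ALUS_PER_CORE = 128          # assumed: same per-core width as M1
PEAK_TFLOPS = 3.6
FLOPS_PER_ALU_PER_CYCLE = 2  # FMA = 2 FLOPs

alus = GPU_CORES * ALUS_PER_CORE   # 1280 FP32 ALUs
clock_ghz = PEAK_TFLOPS * 1e12 / (alus * FLOPS_PER_ALU_PER_CYCLE) / 1e9

print(f"implied GPU clock = {clock_ghz:.2f} GHz")  # about 1.41 GHz
```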

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

johnsonwax

Member
Jun 27, 2024
118
195
76
The biggest problem with that approach was that the hyperscalers wanted out of this model. Jon Masters said a decade ago that they were going to vertically integrate.

Now over 50% of Amazon CPUs are Graviton and they look to do the same for AI tasks:
Yeah, it was pretty obvious to people like me who focus more on how business models affect markets that the ARM licensing model, combined with foundries (you need both), would justify building bespoke silicon once customers had hit a certain scale, rather than picking from Intel's menagerie of SKUs. This is a pretty well understood cycle of bundling/unbundling that happens in many markets. Apple starts out in an immature market, making a lot of their own silicon (Apple I/II). Market efficiencies develop when common needs are established and a component market develops that reaches across the customer market (x86/Windows). The market grows large enough that the diminishing returns of scale and growing segmentation of the customer space force a segmentation of the component market (SKU bloat). Customers begin to outscale the individual SKUs (market consolidation often drives this), and the benefits of tailoring hardware to your specific needs, plus the ability to capture the component makers' margins, make returning to making your own silicon attractive again (A/M series). And around and around. Happens in a LOT of places.

The challenge with this analysis for investors is what Horace Dediu makes clear: "Those who predict the future we call futurists. Those who know when the future will happen we call billionaires." The problem isn't actually that Intel couldn't figure out the 'when'. I mean, I figured it out roughly; sure as shit they could. The problem is that most corporations are so dependent on their business model and internal culture that when such a boulder appears on the tracks they're unable to do anything sufficiently substantive about it. This is a byproduct of institutional inertia, which plagues every institution. So Intel saw the problem, almost certainly talked themselves out of it being a problem (self-deception is a common solution to cognitive dissonance: an existential threat is coming, but we are who we are and that got us this far, therefore the threat must not be existential), plowed ahead with some minor adaptation to their business model, and mostly stayed the course until they could no longer keep lying to themselves (profits fall through the floor) about the threat being existential. At that point it's too late to respond; all they can do is grind through CEOs until the board recognizes the scale of the business model change that is needed.

It's frustratingly apparent, predictable, and seemingly impossible to avoid. The benefit of being an investor is that you can take your money to the new space; the company often can only push against the inevitable.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,754
106

LightningZ71

Platinum Member
Mar 10, 2017
2,018
2,455
136
Yeah, it was pretty obvious to people like me who focus more on how business models affect markets that the ARM licensing model, combined with foundries (you need both), would justify building bespoke silicon once customers had hit a certain scale, rather than picking from Intel's menagerie of SKUs. This is a pretty well understood cycle of bundling/unbundling that happens in many markets. Apple starts out in an immature market, making a lot of their own silicon (Apple I/II). Market efficiencies develop when common needs are established and a component market develops that reaches across the customer market (x86/Windows). The market grows large enough that the diminishing returns of scale and growing segmentation of the customer space force a segmentation of the component market (SKU bloat). Customers begin to outscale the individual SKUs (market consolidation often drives this), and the benefits of tailoring hardware to your specific needs, plus the ability to capture the component makers' margins, make returning to making your own silicon attractive again (A/M series). And around and around. Happens in a LOT of places.

The challenge with this analysis for investors is what Horace Dediu makes clear: "Those who predict the future we call futurists. Those who know when the future will happen we call billionaires." The problem isn't actually that Intel couldn't figure out the 'when'. I mean, I figured it out roughly; sure as shit they could. The problem is that most corporations are so dependent on their business model and internal culture that when such a boulder appears on the tracks they're unable to do anything sufficiently substantive about it. This is a byproduct of institutional inertia, which plagues every institution. So Intel saw the problem, almost certainly talked themselves out of it being a problem (self-deception is a common solution to cognitive dissonance: an existential threat is coming, but we are who we are and that got us this far, therefore the threat must not be existential), plowed ahead with some minor adaptation to their business model, and mostly stayed the course until they could no longer keep lying to themselves (profits fall through the floor) about the threat being existential. At that point it's too late to respond; all they can do is grind through CEOs until the board recognizes the scale of the business model change that is needed.

It's frustratingly apparent, predictable, and seemingly impossible to avoid. The benefit of being an investor is that you can take your money to the new space; the company often can only push against the inevitable.
This is most certainly central to the big picture. Intel could have, at any time, decided to take just a tiny bit less profit and worked closely with their biggest customers to give them each exactly the chips they needed, instead of very slightly tailoring an existing product and calling it good. They occasionally did a limited run of something mildly tweaked, but what they offered everyone was a square peg that could have bits chipped off of it. Heck, they might have made even more money from those customers, who certainly would have paid a premium for such a service, though not enough to justify Intel taking its head out of the sand.

As we have gone round and round about here, there is very little about ARM that is intrinsically substantially better than x86. The only reason it's making the market penetration that it is, is that Intel abused its position in the market in the past, and AMD didn't have the resources to offer a different way. Now we're in an era in which precious few things ABSOLUTELY HAVE to be on any particular ISA. Almost everything is heavily abstracted from the hardware, and most of that is built on the framework of a web browser and what is essentially a lightweight virtual machine inside it. This is NOT a captive market.
 

name99

Senior member
Sep 11, 2010
565
463
136
I won't dispute most of what you said, but I will point out one very significant difference between Apple and Intel/AMD: the server market. Apple has intentionally chosen not to play in the data center field (excluding a recent effort at AI clustering of their own computers). Intel and AMD, to keep their own costs reined in, reuse large portions of cores, or entire cores, from the top of their server stack all the way down to the sub-$300 laptop market. That's a LOT of ground to cover, a portion of which is wholly unaddressed by Apple. Big x86 made a ton of money on servers while Apple was blowing in the wind from PowerPC to x86 to their own ARM products on the Mac line, earning little from those products for much of that time.

Now, the tables have turned, largely due to Intel shooting themselves in the foot repeatedly and AMD not being large enough to dictate the market themselves. Apple is fully vertically integrated and Intel is on the verge of imploding. You can blame Intel for going for Ghz, but that was a symptom of bad management, not the cause of their problems.
Except that Apple's cores would be superb for data center usage. Look at the numbers they get for Verilator compared to anyone else (except maybe a z/ series!).
All Apple needs to do to make a "datacenter" chip is throw away all the other stuff (ISP, media, GPU, etc.) and dump 40 (or whatever) clusters on a single chip... The core doesn't change at all. The NoC might need some changes, though even then probably not much.

How does Intel trying to create an optimal data center core hurt Intel trying to create an optimal laptop core? Both segments want the exact same thing: performance at lowish (say 2-4 W/core) power!

If anything the problem is that Intel (for reasons I cannot understand) is in thrall to gamers, and thinks the most important thing in the world is to provide a crazy overclockable KS chip that might hit 6.5 GHz, regardless of the effect that has on every other segment of their product line...
 

LightningZ71

Platinum Member
Mar 10, 2017
2,018
2,455
136
The frequency numbers favor ST for games, as well as legacy programs that have no idea what multiple cores and threads are. For the DC and HPC markets, they include things like AVX-512 and other barely used features in their base core designs. While they can trim some of that out for desktop and mobile (or just leave it dark, but pay the transistor count and floorplan penalty anyway to avoid another layout), it doesn't magically bring back everything else they sacrificed to put it in there in the first place. Some of these things are implemented in accelerators outside of the core for Apple. For x86, there's rarely any sort of OS or compiler support for anything outside of the ISA implemented in the core.

Remember, Apple controls all of that from the very top down to the individual transistor. They make using those things as seamless as possible to the end user or app programmer. If they put it in there, it's fully supported from day 1.
 
Reactions: CouncilorIrissa

SpudLobby

Golden Member
May 18, 2022
1,027
695
106
The first Apple modem will also only support four-carrier aggregation, a technology that combines bands from several wireless providers simultaneously to increase network capacity and speeds. Modems from Qualcomm can support six or more carriers at the same time.
In lab tests, the first Apple modem caps out at download speeds of about 4 gigabits per second, less than the top speeds offered by non-mmWave Qualcomm modems, the people said.
This is fine, mmWave is a ripoff anyway. But I wonder what real world use looks like. CA limit is meh though
Real world speeds for both types of modems are typically far less, meaning that customers might not notice a difference in day-to-day use.
In any case, the first Apple modem will have several other advantages that the company believes will give it an edge with consumers. For one, it will be tightly integrated with Apple-designed main processors to use less power, scan for cellular service more efficiently and better support on-device features for connecting to satellite networks.
The scanning is, IMO, the single most interesting possible advantage Apple might bring, but this is more of a software tweak that can be implemented in a new model easily and I expect Qualcomm to follow up with a performance/reliability sacrificing energy saving scan shut down mode as well. In weak coverage you can kill a battery pretty easily.

That said, Qualcomm isn’t like Intel/AMD with power and the lack of aggregation will also probably hurt reliability early on. I don't expect huge power gains at all.
The Apple modem will also be able to offer better performance relative to SAR limits because it will be managed intelligently via the main processor, the people said. SAR, or specific absorption rate, is a measure of the radio frequency absorbed by the body, and government agencies such as the US Federal Communications Commission regulate acceptable levels.
Doubt this is anything significant lol
 
Reactions: Raqia

gdansk

Diamond Member
Feb 8, 2011
3,768
6,015
136
Yikes. Well that’s not a good look. At this point even the most ardent Intel supporters have to be like “WTF WERE THEY DOING?”.
Didn't Intel acquire that group mainly from Infineon?
I think that's why they were so willing to sell it.

The SNAFU is that they paid more for it than Apple paid.
 
Reactions: Mopetar

Doug S

Diamond Member
Feb 8, 2020
3,005
5,167
136
This is fine, mmWave is a ripoff anyway. But I wonder what real world use looks like. CA limit is meh though

Why do you care about carrier aggregation? Most towers don't support more than 2-way when there's any contention, and won't go above 4-way at all. I suppose it matters in big-city skyscraper valleys where you can leverage multipath effects and CA together, but that's mostly a meh too. Agree about mmWave; the only time I've seen it come in handy is in a big stadium that had been outfitted with it. I remember a decade ago going to football games and being unable to send or receive texts or iMessages. You'd walk out of the stadium, and once you were about 4-5 blocks away and linked to another tower, boom, your phone would blow up with three dozen messages hitting at once.

The scanning is, IMO, the single most interesting possible advantage Apple might bring, but this is more of a software tweak that can be implemented in a new model easily and I expect Qualcomm to follow up with a performance/reliability sacrificing energy saving scan shut down mode as well. In weak coverage you can kill a battery pretty easily.

That said, Qualcomm isn’t like Intel/AMD with power and the lack of aggregation will also probably hurt reliability early on. I don't expect huge power gains at all.

Doubt this is anything significant lol

Personally I think the most significant thing is that the baseband software will be entirely under Apple's control. Reportedly Qualcomm's baseband is sort of a monster that has grown over many years, with poor security practices and the likelihood of many security issues. That's not to say Apple's will be perfect, but since Apple reportedly had to scrap Intel's work and start from scratch, it should be cleaner (and I wouldn't be surprised if the great majority of it was written in a memory-safe language, i.e. Swift instead of C). The baseband is where some nation-state-level attacks occur, which don't need a bug in the phone's OS at all. If nothing else, at least nation-state attackers will need to find new attacks against Apple's baseband, since the list of attacks they have against Qualcomm's will not work.
 

jpiniero

Lifer
Oct 1, 2010
15,631
6,109
136
I think that's why they were so willing to sell it.

The 10 nm disaster killed any chance of having a semi-competitive product. I think they sold it before the Board/CEO would have been willing to fab such a highly visible product at TSMC, and it wasn't something that could have been done at 14 nm, since they tried that and it drew too much power.
 

Doug S

Diamond Member
Feb 8, 2020
3,005
5,167
136
The 10 nm disaster killed any chance of having a semi competitive product. I think they sold it before the Board/CEO would have been willing to fab such a highly visible product at TSMC and it wasn't something that could have been done at 14 nm since they tried that and it drew too much.

The reason they were willing to sell it was that Apple was the only customer who had even been using it in its last few years. Intel bought it because they thought they'd need it for their Atom SoCs, which they naively assumed would sweep aside ARM SoCs in non-Apple phones, because everyone wants x86 in their phone, right?

Once Apple dropped them because they didn't trust Intel's 5G roadmap it had no customers. Made more sense to get something out of it by selling it to Apple, than spend money developing something that had no customers or shut it down and get nothing out of it.

Apple would have been better off if they'd just bought the patents and not acquired the design or team, since they ended up having to throw it all out and start over.
 

johnsonwax

Member
Jun 27, 2024
118
195
76
As we have gone round and round about here, there is very little about ARM that is intrinsically substantially better than x86.
Sure there is. You can license and modify it. You can build your own cores. That is intrinsically substantially better. Look at Apple showing ARM how to make their own stuff.
 
Reactions: Eug

johnsonwax

Member
Jun 27, 2024
118
195
76
Except that Apple's cores would be superb for data center usage. Look at the numbers they get for verilator compared to anyone else (except maybe a z/ series!)
All Apple needs to do to make a"datacenter" chip is throw away all the other stuff (ISP, media, GPU, etc) and dump 40 (or whatever) clusters on a single chip... Core doesn't change at all. NoC might need some changes, though even then probably not much.
Intel should count its blessings that Apple isn't interested in being a component supplier.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,754
106
Rumour: Adreno 840 in Snapdragon 8 Elite Gen 2 will have 18 CUs.


That's 50% more CUs than the Adreno 830 in 8 Elite. A whopping 2304 FP32 ALUs.

Does not necessarily mean that peak performance will be 50% higher, as that depends on the clock speed, but the efficiency uplift from widening the GPU by 50% will be terrific.
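The CU-to-ALU arithmetic in the rumour checks out, and a quick sketch shows why peak performance still hinges on clock speed (the 128-ALUs-per-CU figure is inferred from the quoted totals, and the clocks below are placeholders, not leaked numbers):

```python
# Sanity check on the Adreno 840 rumour: 18 CUs at 128 FP32 ALUs per CU
# (inferred from the quoted totals) gives the claimed 2304 ALUs.
CUS_840 = 18
CUS_830 = 12                # Adreno 830: 1536 ALUs / 128 per CU
ALUS_PER_CU = 128

alus = CUS_840 * ALUS_PER_CU       # 2304, matching the rumour
old_alus = CUS_830 * ALUS_PER_CU   # 1536, matching the Adreno 830
widening = alus / old_alus - 1
print(f"{widening:.0%} more ALUs")  # 50% more ALUs

# Peak FP32 throughput scales with clock, so +50% ALUs != +50% performance:
for clock_ghz in (0.9, 1.1):        # hypothetical clocks for illustration
    tflops = alus * 2 * clock_ghz * 1e9 / 1e12  # FMA = 2 FLOPs/cycle
    print(f"@{clock_ghz} GHz -> {tflops:.2f} peak TFLOPs")
```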
How will Apple A19 Pro fare against this?

Year | Apple | Qualcomm
2021 | A15 Bionic: 5 GPU cores, Family 8 | 8 Gen 1: 1024 ALUs, Adreno 7
2022 | A16 Bionic: 5 GPU cores, Family 8 | 8 Gen 2: 1536 ALUs, Adreno 7
2023 | A17 Pro: 6 GPU cores, Family 9 | 8 Gen 3: 1536 ALUs, Adreno 8
2024 | A18 Pro: 6 GPU cores, Family 9 | 8 Elite: 1536 ALUs, Adreno 8
2025 | A19 Pro | 8 Elite Gen 2: 2304 ALUs, Adreno 8

It seems likely that A19 will introduce Apple's next generation Family 10 GPU architecture.

(Speculation)

A19 Pro
2P + 6E
8-core GPU

A19
2P + 4E
6-core GPU
 

Nothingness

Diamond Member
Jul 3, 2013
3,183
2,233
136
Sure there is. You can license and modify it. You can build your own cores. That is intrinsically substantially better. Look at Apple showing ARM how to make their own stuff.
Licensees can't modify licensed cores except in specific limited areas (e.g. cache sizes). As far as the architecture license goes, the existing instruction set and architectural features in general can't be altered, though extra instructions can be added in reserved encoding space (Apple did this, and in the past customers added things such as integer divide).

That being said, you're right: the flexibility of the Arm license model goes well beyond what x86 has to offer.
 

name99

Senior member
Sep 11, 2010
565
463
136
As we have gone round and round about here, there is very little about ARM that is intrinsically substantially better than x86.
Of course there is.
You have one set of people (e.g. myself or Eric Quinnel) giving complex technical reasoning, with detailed numbers, for why this is so.

eg

You have a different set of people saying "no no no, it's not a problem, trust us; the NEXT chip will show just how great x86 still is".

If you want to choose to believe the second set of people, that's on you.
 

gdansk

Diamond Member
Feb 8, 2011
3,768
6,015
136
You have a different set of people saying "no no no, it's not a problem, trust us; the NEXT chip will show just how great x86 still is".

If you want to choose to believe the second set of people, that's on you.
It's pretty convincing when no one but Apple can replicate it.
 

name99

Senior member
Sep 11, 2010
565
463
136
How will Apple A19 Pro fare against this?

Year | Apple | Qualcomm
2021 | A15 Bionic: 5 GPU cores, Family 8 | 8 Gen 1: 1024 ALUs, Adreno 7
2022 | A16 Bionic: 5 GPU cores, Family 8 | 8 Gen 2: 1536 ALUs, Adreno 7
2023 | A17 Pro: 6 GPU cores, Family 9 | 8 Gen 3: 1536 ALUs, Adreno 8
2024 | A18 Pro: 6 GPU cores, Family 9 | 8 Elite: 1536 ALUs, Adreno 8
2025 | A19 Pro | 8 Elite Gen 2: 2304 ALUs, Adreno 8

It seems likely that A19 will introduce Apple's next generation Family 10 GPU architecture.

(Speculation)

A19 Pro
2P + 6E
8-core GPU

A19
2P + 4E
6-core GPU
Dude, don't just count up P and E cores (or whatever QC is calling them, P and M cores?). It makes you look like a moron, and when you look like a moron no one intelligent is going to reply to you, even if you ask a question sincerely.
I don't care if blindly adding up cores is what the idiots at Tom's Hardware and similar sites do. If you want to associate with people who can give you intelligent answers, step one is acting intelligent yourself.
 

gdansk

Diamond Member
Feb 8, 2011
3,768
6,015
136
Tell me you didn't look at the link without telling me you didn't look at the link...
And yet it doesn't matter. No one but Apple has made an ARM core competitive even with the flaming piles of x86 garbage. Weird for something so inherently better.

In some specific measurements it may have advantages, but in the end SPECint still shows x86 is good enough despite all these weird limitations. And in the server market no one is even at the performance-per-watt level of Zen 5. Strange for something inherently better. The only one who will make ARM look great here is Apple.

Commodity ARM designers aren't doing any better than commodity x86 designers.
 