Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,809
1,388
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24,576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
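The GPU throughput numbers above follow from a simple peak-rate calculation. As a back-of-envelope check (the 128 EUs are in the specs above; 8 FP32 ALUs per EU and a ~1278 MHz GPU clock are commonly reported figures, not official Apple numbers):

```python
# Back-of-envelope check of the M1 GPU's "2.6 Teraflops" figure.
eus = 128                  # execution units, from the spec list
alus_per_eu = 8            # assumption: commonly reported, not official
clock_hz = 1.278e9         # assumption: ~1278 MHz GPU clock

# FMA counts as 2 floating-point ops per ALU per cycle
tflops = eus * alus_per_eu * 2 * clock_hz / 1e12
print(round(tflops, 2))    # ~2.62, matching the quoted 2.6 TFLOPS
```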

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from the occasional slight clock speed difference).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265 (HEVC), ProRes
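The "100 GB/s" bandwidth figure above is the rounded product of transfer rate and bus width. A quick sanity check, assuming the commonly reported configuration of LPDDR5-6400 on a 128-bit bus (the bus width is an assumption, not stated in the list above):

```python
# Sanity check of the M2's "100 GB/s" memory bandwidth figure.
transfers_per_sec = 6400e6   # LPDDR5-6400: 6400 MT/s
bus_bytes = 128 // 8         # assumed 128-bit bus -> 16 bytes per transfer

gb_per_s = transfers_per_sec * bus_bytes / 1e9
print(gb_per_s)              # 102.4, i.e. the rounded "100 GB/s"
```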

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

name99

Senior member
Sep 11, 2010
498
385
136
Apple have done these architectural transitions multiple times, so it's hardly surprising that it went so smoothly for them, especially considering that they've been running iOS (which already shares a lot of code with MacOS) on ARM for well over a decade prior.

If Apple hadn't pulled it off seemingly flawlessly, I'd have been a lot more alarmed and/or concerned.
So have MS. It's just that they fumble EVERY ONE OF THEM. And learn nothing from the experience.
DOS to Windows, and then 16b to 32b Windows were done well. Those were the last ones.

The transition to an NT-based Windows took far too long.
64b Windows was endless drama. .NET was endless drama.
ARM has been launched so many times that I've lost track (each time missing something essential, like 64b x86 support).

Mostly this seems to be internal sabotage, some part of the company that doesn't believe in the direction and will passive-aggressively undermine it.
The framing problem seems to be that early MS (up to 32b Windows) was forward looking. Yes, it was important to continue to be able to run your DOS apps on Windows 3, or your Win 3 apps on Win 95. But there was a company-wide belief that Windows 3, then Win 95, were the future, and we were the people making it!

Since then MS engineering/implementation has been dominated by people looking backwards, people who think that Windows as it is now (whenever that may be) and x86 are as good as it gets, so why mess with perfection.
And Satya is unwilling (or unable) to get rid of them.

Meanwhile Apple is populated by people who still believe in the future (and is the company that attracts such people). When Apple wants to switch to OSX, or writes a 64b PPC OS, or rewrites everything for the ARM iPhone, the bulk of the people involved are thrilled. They are defining the future! And they can get it right this time, can fix multiple niggling issues that they got wrong last time!

So of course you get different outcomes! Not because of experience so much as because one population wants the change to work, while the other population wants the change to fail.
 

Doug S

Platinum Member
Feb 8, 2020
2,759
4,697
136
So have MS. It's just that they fumble EVERY ONE OF THEM. And learn nothing from the experience.
DOS to Windows, and then 16b to 32b Windows were done well. Those were the last ones.

The transition to an NT-based Windows took far too long.
64b Windows was endless drama. .NET was endless drama.
ARM has been launched so many times that I've lost track (each time missing something essential, like 64b x86 support).

Because none of those were migrations in the Apple sense. DOS to Windows, 16b to 32b, DOS-based Windows to NT, 32b to 64b: those were ALL platform additions. They didn't drop support for what they were coming from; it was just added to the legacy pile. AFAIK even today plenty of DOS programs will still run in a command window on a modern PC. They didn't drop support for 16 bit Windows programs until 64 bit Windows (and AMD kinda forced that on them via the 64 bit x86 ISA). Moving to NT didn't mean dropping support for existing Windows programs, and the same goes for moving 32 bit Windows programs to 64 bit Windows. Microsoft had to provide a translator for that, but unlike Apple, who will someday drop Rosetta 2 support so that x86 programs no longer run, I'll bet it'll still be possible to run 32 bit Windows programs on whatever a Microsoft Windows PC is in 2050.

Windows on ARM (and all the other defunct ISAs they used to support) was never intended to displace x86; it is another platform addition. ISAs are the only thing that Microsoft has ever completely eliminated, in the way Apple completely eliminated support for 68K and PPC. They have done that to ARM before, and I feel confident that if Windows/ARM doesn't live up to their internal projections it will meet the same fate again.

You can't compare what Microsoft has done to what Apple has done, because they have different goals. Now sure, theoretically Apple could have (if they had the resources at the time) continued to support 68K Mac programs under PPC forever instead of for a transition period. They could still support some now (even if via some byzantine scheme where Rosetta 2 would run Rosetta 1 to run the PPC->68K translator, and there would still be some libraries hidden away that provided the System 7 APIs). I mean, that's effectively what Microsoft has done: even today it is possible to run DOS programs from the 80s going through the WoW64 layer to the remainder of the DOS API in the command window. I imagine Apple would have more compatibility headaches reported had they done the same, because they'd have to make compromises to make that work, and because allowing people to do something means some yahoo somewhere will do it.
 

soresu

Diamond Member
Dec 19, 2014
3,214
2,490
136
Yeah, the work on the open gfx driver has been pretty intense.

I think one of the main drivers is also working on the open driver for ARM Mali and might be dividing their time a bit more toward Apple GPU than Mali 😅

If I thought there was some possibility I might be able to get a 3rd party M.2 SSD working on an Asahi-fied Mac I might buy one for my dad's entertainment system, but alas I think they are gimped against 3rd party upgrades at the electrical or BIOS level.

Edit: Scratch that.

Mac Mini doesn't even have better than 1 Gbit ethernet in its base SKU.....



There are literally ARM SBCs with better than this 🤦‍♂️
 
Last edited:
Reactions: igor_kavinski

Eug

Lifer
Mar 11, 2000
23,809
1,388
126
Mac minis are not worth it anymore. The Studios, if you grab them on sale, are where the value is: 32GB RAM, Max SKU, 10GIG standard, and more ports.

@Glo. got a killer deal recently on a M1 Max studio.
Mac mini is fine for those who need only 16-24 GB RAM, the non-Pro SoC, and not too many ports. That's a lot more people than need the 32 GB M2 Max Mac Studio.

A 32 GB / 512 GB M2 Max Mac Studio at retail costs over twice as much as a 16 GB / 512 GB M2 Mac mini.

And I betcha 80% of people who buy Mac desktops don't even use Ethernet at all.

I wish I could get a Mac Studio for cheap, but just for the extra ports. Unfortunately, I have been unable to find a cheap Mac Studio either at retail or in the used market here in Canada. The few I've seen on eBay in Canada have gone for something like 85% of retail so that's not a good deal.

However, I'm intrigued by the knowledge that M4 has four Thunderbolt controllers, so maybe the port situation with the M4 Mac mini in 2025 will improve.
 

Doug S

Platinum Member
Feb 8, 2020
2,759
4,697
136
If I thought there was some possibility I might be able to get a 3rd party M.2 SSD working on an Asahi-fied Mac I might buy one for my dads entertainment system, but alas I think they are gimped against 3rd party upgrades at the electrical or BIOS level.

Apple doesn't have an m.2 interface, they use raw NAND. It isn't "gimped", it just isn't designed for m.2 SSDs any more than it is designed for SCSI drives.

You can get TB4 enclosures that support multiple m.2 slots if you really want to pursue that path, I suppose, but installing a non-native OS on rather high-priced Apple hardware seems like a lot for an entertainment system, even if Macs had a native m.2 slot. It isn't like you can't buy passively cooled SFF x86 PCs (all-metal case; basically the whole thing is a giant heatsink). I think they charge too much for what they are, but they would certainly be up to the task.
 
Jul 27, 2020
19,877
13,621
146
Now sure, theoretically Apple could have (if they had the resources at the time) continued to support 68K Mac programs under PPC forever instead of for a transition period.
One reason they don't need to bother is this guy: http://www.emulators.com/about.htm

He probably took part in the Microsoft effort to develop the Prism emulator.

Supports MacOS from 1.1g to 9.0 and complete list of supported Macs is in the table at the bottom of this page: http://www.emulators.com/softmac.htm
 
Last edited:

name99

Senior member
Sep 11, 2010
498
385
136
You can't compare what Microsoft has done to what Apple has done, because they have different goals. Now sure, theoretically Apple could have (if they had the resources at the time) continued to support 68K Mac programs under PPC forever instead of for a transition period. They could still support some now (even if via some byzantine scheme where Rosetta 2 would run Rosetta 1 to run the PPC->68K translator, and there would still be some libraries hidden away that provided the System 7 APIs). I mean, that's effectively what Microsoft has done: even today it is possible to run DOS programs from the 80s going through the WoW64 layer to the remainder of the DOS API in the command window. I imagine Apple would have more compatibility headaches reported had they done the same, because they'd have to make compromises to make that work, and because allowing people to do something means some yahoo somewhere will do it.
Exactly what I was saying...

Apple does this "well" (ie with a certain set of outcomes, at a certain speed) because they have one set of goals.
MS does this "poorly" (ie with a different set of outcomes, at a much slower speed) because they have a different set of goals.

I'm not interested in fighting over which set of goals is "better".
But you have to be blind not to see that these two philosophies result in different outcomes. I mean, you just admitted that IF Apple did things the MS way it would be a lot harder.

Each transition Apple has learned that it can make the transition even faster by making it *less smooth* in some sense.
The 68K->PPC transition was this amazing SW architecture that allowed for PPC code to call into 68K code and vice versa. You could do crazy things like run a PPC app that would execute 68K plugins.
With the 32->64b transition, and then with the PPC->x86 transition, they gave that up. You just run your app the way it wants to run, and pay the cost of having an entirely duplicated set of libraries. ie each transition the message is stronger (to users, but even more so to developers) that THIS IS IT: You get your act together in two years or our customers will probably drop you. Remember Intuit? Well, no-one on the Mac does...
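The "entirely duplicated set of libraries" works because macOS ships universal (fat) binaries: one file containing a code slice per architecture, indexed by a small big-endian header. A minimal sketch of that header layout, parsing synthetic data rather than a real binary (the constants mirror Apple's `<mach/machine.h>`):

```python
import struct

FAT_MAGIC = 0xCAFEBABE      # big-endian magic for a Mach-O universal (fat) binary

# CPU type constants (from Apple's <mach/machine.h>)
CPU_TYPE_X86_64 = 0x01000007
CPU_TYPE_ARM64 = 0x0100000C

def parse_fat_header(data: bytes):
    """Return a list of (cputype, offset, size) slices from a fat header."""
    magic, nfat = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    slices = []
    for i in range(nfat):
        # each fat_arch entry: cputype, cpusubtype, offset, size, align
        # (all uint32, big-endian)
        cputype, _sub, off, size, _align = struct.unpack_from(">IIIII", data, 8 + 20 * i)
        slices.append((cputype, off, size))
    return slices

# Build a synthetic two-slice header (x86_64 + arm64) to demonstrate the layout.
hdr = struct.pack(">II", FAT_MAGIC, 2)
hdr += struct.pack(">IIIII", CPU_TYPE_X86_64, 3, 0x4000, 0x1000, 14)
hdr += struct.pack(">IIIII", CPU_TYPE_ARM64, 0, 0x8000, 0x1000, 14)

for cputype, off, size in parse_fat_header(hdr):
    name = {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}.get(cputype, hex(cputype))
    print(f"{name}: offset={off:#x} size={size:#x}")
```

On a real Mac, `lipo -archs` or `file` on an application binary reports the same per-architecture slices this header describes.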

MS does this poorly not because they have no experience, but because it's not important to them to do it well; because they (ie most of the company) don't even believe in whatever the transition is. By avoiding a year of pain today, they store up endless years of pain for the future.
You don't even see these as transitions by MS, not because they weren't, but because MS chose not to use them as "real" transitions. .NET, for example, was SUPPOSED to be the equivalent of dropping Carbon for NeXTStep/AppKit. In the hands of Steve Jobs running MS that's how it would have happened. But it's not what did happen.
 
Jul 27, 2020
19,877
13,621
146
Microsoft is stupid anyway. They can't even force game developers to adhere to DirectX so staunchly that a game made 10 years ago works bug-free now. There's always some little issue that the community has to patch the old game for, to make it run seamlessly in the latest Windoze.
 

Doug S

Platinum Member
Feb 8, 2020
2,759
4,697
136
Microsoft is stupid anyway. They can't even force game developers to adhere to DirectX so staunchly that a game made 10 years ago works bug-free now. There's always some little issue that the community has to patch the old game for, to make it run seamlessly in the latest Windoze.

What method could possibly force game developers to "adhere" to DirectX? They can do that on Xbox, since they hold the power over what can run on it, but anyone can write anything they want on Windows, and if it works at the time it was sold, what incentive do they have to care whether some future implementation of DirectX will expose bugs in it 10 years later? They've already got your money.
 

Jan Olšan

Senior member
Jan 12, 2017
404
710
136
Apple doesn't have an m.2 interface, they use raw NAND. It isn't "gimped", it just isn't designed for m.2 SSDs any more than it is designed for SCSI drives.
That was a choice they made, though, and there is no way they didn't internally weigh that NIH, completely non-standard and complicated system against using plain NVMe and M.2 slots.
The upsides seem pretty dubious and the downsides are obvious; it doesn't take much paranoia to see they basically did this for no other reason than to prevent user upgrades and ensure they get all the extremely fat NAND resell margin money out of their users. So while people usually jump to conclusions about malice and gimping far too readily, in this case I don't blame them; it looks extremely incriminating to me too. I don't believe their SSD scheme was done for any legit purpose. If anything, they decided to completely ignore a big pro-user advantage of standard SSDs, so it's anti-user (so, gimped?) design basically by definition.
 

Doug S

Platinum Member
Feb 8, 2020
2,759
4,697
136
That was a choice they made, though, and there is no way they didn't internally weigh that NIH, completely non-standard and complicated system against using plain NVMe and M.2 slots.
The upsides seem pretty dubious and the downsides are obvious; it doesn't take much paranoia to see they basically did this for no other reason than to prevent user upgrades and ensure they get all the extremely fat NAND resell margin money out of their users. So while people usually jump to conclusions about malice and gimping far too readily, in this case I don't blame them; it looks extremely incriminating to me too. I don't believe their SSD scheme was done for any legit purpose. If anything, they decided to completely ignore a big pro-user advantage of standard SSDs, so it's anti-user (so, gimped?) design basically by definition.

The upside is super obvious. They ALREADY HAD a great NAND controller they'd purchased from Anobit and integrated into iPhone SoCs years ago. It isn't "super complicated" to continue doing what they had been doing for years. It is also cheaper, since they aren't handing money to a third party for a controller when they already have one. Instead of buying finished SSDs that are marked up by someone else, they are buying the same NAND chips they are already buying in bulk for iPhones and iPads.
 

FlameTail

Diamond Member
Dec 15, 2021
3,894
2,324
106
The upside is super obvious. They ALREADY HAD a great NAND controller they'd purchased from Anobit and integrated into iPhone SoCs years ago. It isn't "super complicated" to continue doing what they had been doing for years. It is also cheaper, since they aren't handing money to a third party for a controller when they already have one. Instead of buying finished SSDs that are marked up by someone else, they are buying the same NAND chips they are already buying in bulk for iPhones and iPads.
I wonder how much die area in the SoC these NAND controllers take up?
 

Eug

Lifer
Mar 11, 2000
23,809
1,388
126
The upside is super obvious. They ALREADY HAD a great NAND controller they'd purchased from Anobit and integrated into iPhone SoCs years ago. It isn't "super complicated" to continue doing what they had been doing for years. It is also cheaper, since they aren't handing money to a third party for a controller when they already have one. Instead of buying finished SSDs that are marked up by someone else, they are buying the same NAND chips they are already buying in bulk for iPhones and iPads.
Nitpick, but from my understanding it is more accurate to say they didn't just buy the controller from Anobit; they bought Anobit itself, thereby acquiring both great NAND controllers and associated IP, along with the great engineers that designed them.
 

soresu

Diamond Member
Dec 19, 2014
3,214
2,490
136
I think they charge too much for what they are
Likewise for Apple products full stop, and unfortunately most of their direct competitors in the phone market since Samsung Galaxy got traction in the early 2010s.

As you say though, I will probably end up getting something like a Phoenix 2 or Kraken Point system with M.2 slots and 2.5G ethernet, though by Kraken Point there might even be the possibility of 5GbE, given Realtek is now pushing forward with that a la this.
 
Jul 27, 2020
19,877
13,621
146
What’s even worse is they solder the SSDs on the MacBooks!!

It’s pure e-waste after the SSD dies or an expensive repair.
Their problem is they think only about the average user, and have this stupid misconception that a power user must have loads of cash on hand to go with a 2TB or higher SSD at THEIR prices. I am SO glad that I didn't grow up in a Mac household. Must be miserable coming up with all sorts of scummy moneymaking schemes to afford Mac hardware.
 

The Hardcard

Senior member
Oct 19, 2021
214
304
106
What’s even worse is they solder the SSDs on the MacBooks!!

It’s pure e-waste after the SSD dies or an expensive repair.
What is the data on that failure rate though? I have never had an SSD failure, or even a spinning disc failure for that matter.

My Tandy Color Computer from the 1980s still works. Outside of physical or liquid damage no semiconductor component has ever stopped working for me.
 

jpiniero

Lifer
Oct 1, 2010
15,176
5,717
136
What is the data on that failure rate though? I have never had an SSD failure, or even a spinning disc failure for that matter.

SSDs do wear out eventually. The failure rate during the warranty period is likely very low... but I suppose it could be an issue for people buying used Macs.

IIRC you can boot off an external drive.
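The wear-out math backs this up: rated endurance (TBW) divided by write volume gives lifetimes well beyond any warranty period. A rough sketch with illustrative numbers, not specs for any actual Apple SSD:

```python
# Rough SSD lifetime estimate from rated endurance (TBW) and daily writes.
# Both figures below are illustrative assumptions.
def years_until_worn(tbw_terabytes: float, gb_written_per_day: float) -> float:
    """Years until the rated write endurance is exhausted at a constant rate."""
    total_gb = tbw_terabytes * 1000
    return total_gb / gb_written_per_day / 365

# e.g. a 512 GB-class drive rated ~300 TBW, with a heavy 50 GB/day workload:
print(round(years_until_worn(300, 50), 1))  # 16.4 years, well past warranty
```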
 
Reactions: Jan Olšan