Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,871
1,438
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

Eug

Lifer
Mar 11, 2000
23,871
1,438
126
Interesting. Somewhat off topic but the rumour for the Nintendo Switch 2 is also that it will get 12 GB RAM utilizing two 6 GB chips, running at 7500 MT/s. I wonder if this 7500 speed is a new de facto (non-)standard, considering that's the speed in the iPad Pro M4, and it's also believed to be 12 GB RAM (but with only 8 GB accessible).


If the new Mac mini and MacBook Pro come with 12 GB RAM base, I wonder if it will be this speed too.

BTW, we've been using Apple Arcade recently, as a stop-gap until we can buy a Switch 2. My kid started asking about getting a Switch a while back but I said I wouldn't buy one until it got updated, thinking it could be 2023 or 2024. Well we know now it will be 2025. If that delay means it comes with 12 GB instead of 8 GB, that might be the silver lining.
 
Last edited:
Reactions: Orfosaurio

name99

Senior member
Sep 11, 2010
511
395
136
Yep, that's it.

Apple's high end has always been a dog's breakfast. Sometimes it works pretty well, sometimes it's completely missing, sometimes it's 'what the fuck'. I think you can put a bit of that back on Apple struggling to provide Intel-based offerings that weren't just repackagings of Dell.

Apple's biggest problem here though is that the job-to-be-done of the Mac is not as robust as the PC market. What's their high-end market? OK, video editing. Audio, sure. Developers. Uh... LLM hobbyists? This is really the problem they need to work on, and as that comes into greater focus, what that high end needs to look like will similarly come into focus. I mean, I think the single most important thing they've done to secure their high-end sales is the ProRes codecs, because that is the heart of that market. I did data science on Macs, but that's the sort of thing you don't really scale your hardware to fit; when things get computationally expensive you go rent AWS compute and drive it from your MBP. The Mac is fantastic for data science, but it mostly ignores the high end for scalable compute. They're making a better-than-halfhearted effort on gaming, but still missing the big pieces. You need both the hardware and the software to be there. Apple needs to find the software the platform lacks - buy it, make it, but get it.

But yeah, the floor will remain the floor. They'll look for opportunities to go upmarket.
I've long suggested that Apple will eventually offer the equivalent of "transparent AWS" as a combination user/developer offering.
ie I run Mathematica (or whatever) locally, but with the ability to, more or less transparently, shift the calculation to some Apple server box somewhere.
XCode in the Cloud is step one of this.
Private Cloud Compute (sold as for AI, but notice the name has nothing to do with AI..) is step two.

I think we'll see interesting elements of this play out over the next few years.
(And of course this doesn't mean the end of "big-ish" local Macs. Some people just want to own a big machine, some will do enough 24/7 compute that it makes financial sense. And if your compute involves massive amounts of local data - eg video manipulation - remote compute probably makes little sense.)
 

johnsonwax

Member
Jun 27, 2024
83
155
66
Didn't they just do it with the iPad Pros? The 11" started at $749 or something, and with the new OLED it's starting at $999. The 13" went up $300. Even if you factor in the increase in base storage to 256 GB, it's still more expensive than the last gen. So if they are upping the specs, I expect a price increase. The 15 Pro Max increased base storage to 256 GB and the price went up by 100 bucks. I don't see Apple giving anything away for free.
They'll reposition the product in the lineup. That's a little different, though. There was a time when the MacBook diverged into two premium alternatives - the thin-and-light Air and the Pro, each of which carried a $500+ premium over the MacBook. But they ended up dropping the base MacBook and repositioning the Air into that slot, which is why the Air base went from $1799 to $1099. It's called the Air, but really it's the MacBook at the old MacBook floor.

iPads have been similarly reorganized, with the Pro increasingly being positioned as a Mac replacement with comparable specs as they're able to get the product there. I mean, the current iPad Pro is faster than half the Mac lineup right now. So it's not an arbitrary lifting of the price (the base iPad is still in the same place), but they move the upsell products around a bit, usually with significant increases/decreases in features/performance to fit. Once they find its place, it'll become stable again. But it's not the case that they'll just raise it $200 when they add 8GB on the base model, as was originally asserted. If it's performing the same role in the lineup, it'll have the same price. Again, the price is the thing the device is designed to.
 
Reactions: Orfosaurio

johnsonwax

Member
Jun 27, 2024
83
155
66
I've long suggested that Apple will eventually offer the equivalent of "transparent AWS" as a combination user/developer offering.
ie I run Mathematica (or whatever) locally, but with the ability to, more or less transparently, shift the calculation to some Apple server box somewhere.
XCode in the Cloud is step one of this.
Private Cloud Compute (sold as for AI, but notice the name has nothing to do with AI..) is step two.

I think we'll see interesting elements of this play out over the next few years.
(And of course this doesn't mean the end of "big-ish" local Macs. Some people just want to own a big machine, some will do enough 24/7 compute that it makes financial sense. And if your compute involves massive amounts of local data - eg video manipulation - remote compute probably makes little sense.)
I think they'll only do that if they think they can secure a new market where that would be needed to either fill a gap, or where hanging off of something like AWS would take the market away. Private Cloud Compute is, I'm pretty sure, intended to be temporary, but given that Apple doesn't know where this AI thing is going, it might be infrastructure they need to rely on for a long time. Put another way, if they hold their AI compute needs flat and continue to double their NPU perf for a few generations, the need for the service goes away. I think that's the intent, but keeping compute needs flat isn't something Apple feels they can control right now.

Offering that up as a feature you can invest into is a different proposition entirely, one with costs that are open-ended. Apple really doesn't like to throttle after the fact, so they'd need to be pretty confident that such a service would avoid that. I wouldn't say no to it happening, but I also wouldn't say the other two give much insight into whether Apple might do it or not. Xcode in the cloud is a twice-removed customer acquisition cost. Getting more apps made means more 30% cut and more devices sold. And it's hard for a $99/yr developer subscription to spiral out of control in terms of demand. It's already throttled behind that subscription fee. I think there's another piece needed for it to work which you're assuming in your idea or which I'm missing. But it's certainly not a 'never gonna happen' idea like running Mac apps on iPad is, or dual boot, or python system scripting on iPad.
 

name99

Senior member
Sep 11, 2010
511
395
136
I think they'll only do that if they think they can secure a new market where that would be needed to either fill a gap, or where hanging off of something like AWS would take the market away. Private Cloud Compute is, I'm pretty sure, intended to be temporary, but given that Apple doesn't know where this AI thing is going, it might be infrastructure they need to rely on for a long time. Put another way, if they hold their AI compute needs flat and continue to double their NPU perf for a few generations, the need for the service goes away. I think that's the intent, but keeping compute needs flat isn't something Apple feels they can control right now.

Offering that up as a feature you can invest into is a different proposition entirely, one with costs that are open-ended. Apple really doesn't like to throttle after the fact, so they'd need to be pretty confident that such a service would avoid that. I wouldn't say no to it happening, but I also wouldn't say the other two give much insight into whether Apple might do it or not. Xcode in the cloud is a twice-removed customer acquisition cost. Getting more apps made means more 30% cut and more devices sold. And it's hard for a $99/yr developer subscription to spiral out of control in terms of demand. It's already throttled behind that subscription fee. I think there's another piece needed for it to work which you're assuming in your idea or which I'm missing. But it's certainly not a 'never gonna happen' idea like running Mac apps on iPad is, or dual boot, or python system scripting on iPad.
Oh I'm assuming that remote compute will be something I pay for.
Likely as part of my iCloud subscription; along with 50GB of storage I get 50 "units" of remote compute per month, something like that.
I assume that charging for these (but for most people as part of a bundle, and on the assumption that it's something you use once a month) reduces your concerns?
I don't see this primarily as a way for either Apple or customers to avoid selling/buying the "appropriate" level of machine; rather it's just one more perk of the Apple system, that when you do need some serious compute a solution exists.

If you would go out to AWS (either as a developer, or as eg a small department) Apple wants to keep you in the family as well, but via a different sort of plan, one that's priced in the same way as AWS [probably a few percent more], but has the convenience of being a more transparent extension of the way you already do things.

Remember our starting point was "how does Apple sell more computing"? Directly selling a DGX or a $25,000 server is tough for multiple reasons. But the sort of path I'm suggesting seems to me easier, a way for people who have Macs to obtain that sort of functionality by the hour (and without having to find $25K or $100K in the budget).
 

Doug S

Platinum Member
Feb 8, 2020
2,836
4,820
136
I think they'll only do that if they think they can secure a new market where that would be needed to either fill a gap, or where hanging off of something like AWS would take the market away. Private Cloud Compute is, I'm pretty sure, intended to be temporary, but given that Apple doesn't know where this AI thing is going, it might be infrastructure they need to rely on for a long time. Put another way, if they hold their AI compute needs flat and continue to double their NPU perf for a few generations, the need for the service goes away. I think that's the intent, but keeping compute needs flat isn't something Apple feels they can control right now.

Offering that up as a feature you can invest into is a different proposition entirely, one with costs that are open-ended. Apple really doesn't like to throttle after the fact, so they'd need to be pretty confident that such a service would avoid that. I wouldn't say no to it happening, but I also wouldn't say the other two give much insight into whether Apple might do it or not. Xcode in the cloud is a twice-removed customer acquisition cost. Getting more apps made means more 30% cut and more devices sold. And it's hard for a $99/yr developer subscription to spiral out of control in terms of demand. It's already throttled behind that subscription fee. I think there's another piece needed for it to work which you're assuming in your idea or which I'm missing. But it's certainly not a 'never gonna happen' idea like running Mac apps on iPad is, or dual boot, or python system scripting on iPad.

Since when have compute needs (in any sphere) ever remained flat? If there's anything you can count on, it's that tomorrow people will want to run bigger spreadsheets, linear algebra with more variables and equations, and AI models with more tokens than they do today.

I speculated about this many years ago, when the ARM transition was still only a "rumor" that everyone knew was coming but didn't know when. I had assumed they weren't going to do it since nothing had ever happened, but whether AI has something to do with it, or they've just been putting the pieces in place to launch what they want, I don't know - but it sounds like they're finally pulling the trigger.

Look, it's one thing to offer cloud computation. Amazon and Microsoft have been making billions doing so, but there's a LOT of room for improvement in how that power can be delivered along the path from datacenter to developer to end user. One thing Apple excels at is introducing new functionality in a way that makes it easy for developers to integrate into their apps - and making it transparent to end users. Other than a new permission ("allow this app to offload computation to Apple Private Cloud"), end users might not even know the app now offloads computation to the cloud, other than it being much faster than it used to be for big jobs.

A lot will depend on how billing is done, but I wouldn't be shocked if in some cases developers build that cost into their subscriptions so that end users don't have to worry about watching their "cloud credits" budget. We'll see how that shakes out. One thing people hate is limited services - like when you had a certain number of voice minutes per month or whatever. But obviously they can't offer unlimited cloud computing resources the way voice minutes are now unlimited, so figuring out how that works will be an important piece of the puzzle.

Apple having control of the OS, control of the APIs, control of Xcode, and control of the cloud means this can all be really simple for everyone involved. The developer adds some calls to new API functions to initialize and send computations, and when that code is compiled by Xcode it also compiles pieces that get installed in the cloud to handle that end of things. So the developer can run his app to test and it will start doing stuff in the cloud, with no fussing around with cloud configuration for him - and more importantly no cloud configuration for the end user! You update your apps and suddenly the ability to run things in the cloud appears, and you just have to flip a switch in the app permissions to make it work.
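Purely to illustrate the shape of that idea, here is a hypothetical sketch (in Python, with entirely made-up names - nothing here is a real Apple API): the app submits work through one session object and doesn't care whether the result is computed locally or remotely. The stub below just runs everything locally so the example actually executes.

```python
# Hypothetical sketch only - "PrivateComputeSession" and its methods are invented
# for illustration; no real Apple API is implied. The stub runs jobs locally so
# the example is self-contained and runnable.
from concurrent.futures import Future, ThreadPoolExecutor


class PrivateComputeSession:
    """Mock of a transparent-offload session: same call site whether the work
    stays on-device or (in a real implementation) gets shipped to a server."""

    def __init__(self, allow_cloud: bool):
        self.allow_cloud = allow_cloud      # stand-in for the user-facing permission toggle
        self._local = ThreadPoolExecutor(max_workers=1)

    def submit(self, fn, *args) -> Future:
        # A real service would serialize fn/args and dispatch them remotely when
        # allow_cloud is True; this mock always executes locally.
        return self._local.submit(fn, *args)


def heavy_job(n: int) -> int:
    return sum(i * i for i in range(n))     # stand-in for an expensive computation


session = PrivateComputeSession(allow_cloud=True)
future = session.submit(heavy_job, 10_000_000)  # app code doesn't know or care where this runs
print(future.result())
```

The point of the pattern is exactly what's described above: the call site stays the same, and the "where does it run" decision lives behind the session and a permission flag.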

Maybe I'm overselling how transparent it will be, but I think what I'm outlining is the goal, whether or not version 1.0 manages to be quite as slick as what I'm describing. It would be interesting to see what developers would come up with, and what happens when you theoretically have the power of an entire rack of Apple Silicon CPUs at the beck and call of your MacBook or maybe even your iPhone.
 
Reactions: name99

johnsonwax

Member
Jun 27, 2024
83
155
66
Since when have compute needs (in any sphere) ever remained flat? If there's anything you can count on, it's that tomorrow people will want to run bigger spreadsheets, linear algebra with more variables and equations, and AI models with more tokens than they do today.
There's a lot of tech that hits substantial diminishing returns and where advancement largely stops. Getting enough compute and good enough sensors to do TouchID in half a second was a heavy lift, but once it works, it works. About the only gain you can get is to knock that down to ¼ second, and after that nobody gives a shit, so why continue to invest down that path? Once you have retina displays with smaller pixels than you can see, what's the benefit of making them smaller? Brighter, lower power, lighter, flexible - sure, but pixel density improvements in phones have become such a pointless exercise that what used to be one of the biggest selling points of a device now isn't even mentioned, because every phone now has, effectively, perfect resolution.

If you take the AI as product view, then yes, there will be a constant and continuous demand for more compute. But if you take Apple's AI as feature view, once the feature is delivered, it's delivered - because it's a discrete thing - it can't set your timer more good, it either sets it or doesn't. With compute you're not delivering more feature, you're delivering new features - maybe faster. And Apple has plenty of development work to do just building all of these models for other languages to bring parity to other markets. None of that requires additional compute.

I do think Apple will continue to add features, but how much compute each of those features needs, we don't know. It's not like it needs enough compute to do all of this simultaneously, so if they figure out a new feature that can operate within the existing NPU budget, then it's effectively free. Apple doesn't need to add compute, just software.

And the history of these kinds of things for Apple is that their first offering is right up against the limit of what's possible (see AVP) and they will advance part of their lineup (the P in AVP) to keep their foot on the gas there, and then lock down the feature set so that the rest of the lineup can get those features without having to raise prices. So the iPhone Pro gets the best camera, and that camera then trickles down to the rest of the lineup as the cost of the component comes down over subsequent years - so below the Pro, it is flat because of that trickle down effect.

What we don't know is where AI is going to diverge with respect to iPhone/iPad/Mac as different power budgets allow. Thus far the iPhone has carried by far the biggest NPU because it was used for narrow-case computational photography more than it was used for traditional ML. And because Apple has mostly rejected the generative/open-ended aspect of AI - at least in the general cases - it gives them the opportunity to expand their AI features more slowly than their customers replace weaker NPU hardware with stronger, reducing the cloud demand. They did this with Siri, shifting its compute from cloud to device. I expect them to try and do the same here. So far, they have a big lead for on-device AI.
 
Reactions: Orfosaurio

FlameTail

Diamond Member
Dec 15, 2021
4,095
2,465
106
Could Apple stack two M4 Max dies on top of each other using WoW (Wafer-on-Wafer) packaging to create the M4 Ultra, instead of the current method, where two M2 Max dies are placed side by side on a giant interposer and connected using the UltraFusion interconnect?
 
Jul 27, 2020
20,420
14,090
146
Could Apple stack two M4 Max dies on top of each other using WoW (Wafer-on-Wafer) packaging to create the M4 Ultra, instead of the current method, where two M2 Max dies are placed side by side on a giant interposer and connected using the UltraFusion interconnect?
With their dies hitting 108C, they would have to be clocked really low for that to happen.

 

FlameTail

Diamond Member
Dec 15, 2021
4,095
2,465
106
AMD is the first customer to adopt SoIC technology, with its latest MI300 chip using SoIC combined with CoWoS solution. Apple, TSMC’s primary customer, is reportedly interested in SoIC and plans to incorporate it with Hybrid molding technology for Mac products. Small-scale trials are currently underway, with mass production anticipated between 2025 and 2026
Apple M5?
 
Reactions: Glo.

The Hardcard

Senior member
Oct 19, 2021
252
332
106
Apple M5?
Mark Gurman claims the M4 Mac Studios aren’t arriving until H2 2025. All rumors and speculation, but given it’s the same timeframe as TSMC making a big step up in SoIC production, it could mean the M4 Ultra is targeted for SoIC. Cheaper, more bandwidth, and lower latency than the current UltraFusion.

It would make sense that Apple would want to wait, though it is a very long wait - more than a year after the M4 iPad, plus a solid two-year wait for a new Mac Studio/Pro, most of that time with the top Macs doing without major GPU features. It would be 20 or so months after the release of hardware ray tracing, etc., on the lower Macs.

SoIC must look really good - or something does if it’s a different reason.
 
Last edited:
Reactions: Glo.

LightningZ71

Golden Member
Mar 10, 2017
1,827
2,203
136
I just don't see Apple stacking multiple M4-generation dies on top of each other in any circumstance. Heat dissipation will be a major hurdle with two active core dies stacked like that. I can see them moving the SLC, memory controllers, and all I/O to a separate die located underneath the main M4 Max/Ultra die. That would allow for a massive SLC and put shoreline components on a die where space is less precious/expensive per area because it's on a trailing process tech.
 
Reactions: igor_kavinski

The Hardcard

Senior member
Oct 19, 2021
252
332
106
People are waiting? Why? They can just do distributed computing with multiple M4 iPads!
There are people doing that. Someone posted his 1 iPhone, 2 iPad, 1 MacBook Air AI cluster. Apple machine learning researchers have an open-source framework, MLX, that among other things makes distributed training and inference straightforward across Apple Silicon devices.
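For anyone curious what that looks like in practice, here is a minimal sketch of MLX's distributed API in Python - API details recalled from memory, so treat it as illustrative and check the MLX docs. The general pattern is: join a group (the processes are started by mlx.launch or mpirun), do local work, then all-reduce the result across the devices.

```python
# Minimal MLX distributed sketch (Python). API names recalled from memory -
# verify against the current MLX documentation before relying on this.
import mlx.core as mx

group = mx.distributed.init()                  # join the group set up by mlx.launch / mpirun
print(f"node {group.rank()} of {group.size()}")

# Each device computes a partial result; all_sum reduces it across the cluster.
local_partial = mx.ones((4, 4)) * group.rank()
total = mx.distributed.all_sum(local_partial)
mx.eval(total)                                 # MLX is lazy; force the computation
print(total)
```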

There's going to be a sizable number of people grabbing multiple high-RAM Mac Studios when the new ones drop. 1 TB of RAM for the win!
 
Reactions: Orfosaurio

Glo.

Diamond Member
Apr 25, 2015
5,811
4,786
136
It all points to the possibility that the M4 Ultra will come with HBM memory integrated on the package, and from that moment on, the Ultra will get the Max treatment.

MX Max chips will not come with Ultrafusion anymore, only Ultra, and two Ultra chips will create MX Extreme chips.
 
Reactions: FlameTail

name99

Senior member
Sep 11, 2010
511
395
136
I just don't see Apple stacking multiple M4-generation dies on top of each other in any circumstance. Heat dissipation will be a major hurdle with two active core dies stacked like that. I can see them moving the SLC, memory controllers, and all I/O to a separate die located underneath the main M4 Max/Ultra die. That would allow for a massive SLC and put shoreline components on a die where space is less precious/expensive per area because it's on a trailing process tech.
Nah, people obsess too much about heat dissipation, especially in the context of Apple.
Highest power you can get out of an M3 Max is something like 80W, so two of them is 160W. That's way below what Intel or AMD generate in a smaller area.

I'd say the more important issues are:
- is the tech ready?
- is Apple ready?
- is it cheaper (or otherwise more appropriate) than the alternative?

All three seem false right now.
Most importantly, Apple doesn't jump on new tech for the sake of "FIRTS!!!". They didn't feel a need either to provide V-Cache first, or even second.
Right now vertical stacking of logic doesn't (as far as I can see) solve an important Apple problem.
Vertical stacking of memory? That's not quite so obvious, but I still don't see anything requiring a solution more drastic (this generation) than something like two ranks, one on each side of a board.
 

name99

Senior member
Sep 11, 2010
511
395
136
It all points to the possibility that the M4 Ultra will come with HBM memory integrated on the package, and from that moment on, the Ultra will get the Max treatment.
Again, why HBM? What does it offer APPLE (not nVidia, not AMD, APPLE) that they can't get, and better, from LP-DDR5 or LP-DDR5X?

MX Max chips will not come with Ultrafusion anymore, only Ultra, and two Ultra chips will create MX Extreme chips.
This is plausible, in different versions.
Many of the packaging patents suggest a basic manufacturing unit of an "Ultra" (i.e. two "Max"es manufactured side by side, with an RDL [think a fancy version of the Cerebras wiring between chips] between the two). This basic "two chip" unit is then diced as a single unit and assembled into larger packages.

Such a scheme is certainly compatible with other ideas we have thrown out, like moving IO and displays to a separate chip.
 
Last edited:

The Hardcard

Senior member
Oct 19, 2021
252
332
106
That would make no sense. They'd be releasing it around the time when M5 appears, or even after.
Where's any indication that the M5 is coming in 2025? I would agree that it is unlikely that the higher-end M4 and the base M5 would show up in the same timeframe. Gurman, though, has been reasonably reliable about product releases. I think his claims about the Mac Studio releases in H2 2025 are more likely than any contradicting sources, and I think the reports of Apple's testing of SoIC starting last year, with production targeted for between 2025 and 2026, give Gurman's claims even more weight.

To me that means either the M5 is not coming next year, or the next Ultra will be the M5 Ultra.

Nah, people obsess too much about heat dissipation, especially in the context of Apple.
Highest power you can get out of an M3 Max is something like 80W, so two of them is 160W. That's way below what Intel or AMD generate in a smaller area.

I'd say the more important issues are:
- is the tech ready?
- is Apple ready?
- is it cheaper (or otherwise more appropriate) than the alternative?

All three seem false right now.
Most importantly, Apple doesn't jump on new tech for the sake of "FIRTS!!!". They didn't feel a need either to provide V-Cache first, or even second.
Right now vertical stacking of logic doesn't (as far as I can see) solve an important Apple problem.
Vertical stacking of memory? That's not quite so obvious, but I still don't see anything requiring a solution more drastic (this generation) than something like two ranks, one on each side of a board.

To the contrary, all of your issues seem true right now. SoIC is shipping now in AMD products. What is needed on TSMC's part is boosting production capacity, which they are doing.

Reports say Apple began testing SoIC at some point in 2023 with the target of going into production in 2025. It appears they will be ready. A key benefit of the technology is bringing down the cost and complexity of the PCB, making it the cheaper way to go (after the higher upfront costs).

There are numerous other benefits as well. The space needed for the interconnect on the Max will be significantly smaller. And the capacity to present to the outside world as a single chip is enhanced by lower latency and a significant increase in bandwidth capacity. The icing on the cake is the power savings from reducing the distance the signals need to travel.

The more I read about it, the more it seems it is absolutely worth it for Apple to wait on this, despite it forcing Apple to try to sell Mac Studios and Mac Pros for yet another year with old cores. This could be mitigated by the new need for large numbers of M2 Ultras internally.

And again, I’m hoping that timeframe also gives them space to tweak some of the weakest links in an otherwise compelling product for higher end AI workflows.
 

FlameTail

Diamond Member
Dec 15, 2021
4,095
2,465
106
Again, why HBM? What does it offer APPLE (not nVidia, not AMD, APPLE) that they can't get, and better, from LP-DDR5 or LP-DDR5X?
Indeed, while HBM's memory bandwidth is nice, the issue is that its capacity is limited.

A hypothetical M4 Extreme chip, with LPDDR5-7500 and a 2048-bit memory bus, would have ~2 TB/s of memory bandwidth. But even more impressively, it could be had with 1 TB of RAM!
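For reference, here's the back-of-the-envelope math behind that ~2 TB/s figure, using only the bus width and transfer rate stated above:

```python
# Back-of-the-envelope check of the ~2 TB/s figure (2048-bit bus, LPDDR5 at 7500 MT/s).
bus_width_bits = 2048
transfers_per_second = 7500e6                     # 7500 MT/s
bytes_per_transfer = bus_width_bits / 8           # 256 bytes moved per transfer
bandwidth_bytes_s = transfers_per_second * bytes_per_transfer
print(f"{bandwidth_bytes_s / 1e12:.2f} TB/s")     # ~1.92 TB/s, i.e. roughly 2 TB/s
```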

That is simply not possible with HBM. And besides, HBM supply is very tight and the cost is high.
 
Reactions: name99

Doug S

Platinum Member
Feb 8, 2020
2,836
4,820
136
Where's any indication that the M5 is coming in 2025? I would agree that it is unlikely that the higher-end M4 and the base M5 would show up in the same timeframe. Gurman, though, has been reasonably reliable about product releases. I think his claims about the Mac Studio releases in H2 2025 are more likely than any contradicting sources, and I think the reports of Apple's testing of SoIC starting last year, with production targeted for between 2025 and 2026, give Gurman's claims even more weight.

To me that means either the M5 is not coming next year, or the next Ultra will be the M5 Ultra.

I was one of the few who have been saying that Apple never intended an 18-month cadence; it just kinda sorta worked out that way with the first couple of generations due to outside factors like COVID, turnover on the team, extra effort for the initial launch of the Pro/Max/Ultra versions, etc. I said M4 would be released a year after M3, and, well, I was right about the 18-month cadence being out the window, even though I was as surprised as anyone when M4 came out only seven months after M3.

So yeah, while there is no indication M5 is coming in 2025 I'd bet heavily that it will. The only question in my mind is whether it comes in spring like M4 or it comes out around the time A19 does.
 

The Hardcard

Senior member
Oct 19, 2021
252
332
106
I was one of the few who have been saying that Apple never intended an 18-month cadence; it just kinda sorta worked out that way with the first couple of generations due to outside factors like COVID, turnover on the team, extra effort for the initial launch of the Pro/Max/Ultra versions, etc. I said M4 would be released a year after M3, and, well, I was right about the 18-month cadence being out the window, even though I was as surprised as anyone when M4 came out only seven months after M3.

So yeah, while there is no indication M5 is coming in 2025 I'd bet heavily that it will. The only question in my mind is whether it comes in spring like M4 or it comes out around the time A19 does.
I agree with you on what would be a normal release schedule and on why conditions have been abnormal. I just question whether it's time to go all Gelsinger and declare that the abnormal is now in our rear-view mirror.

I still think there are signs that the big dogs aren't coming until next summer. I would love for it all to be the M5 generation next year, because the changes I look forward to most involve boosting the compute on the GPU side of the full-SoC memory bus, whether that's some combination of more and/or stronger cores, dedicated matrix math, or even outer-product cores.

I haven't seen any analysis of the M4 GPU yet, and I think significant differences would have been caught on some level by now. So yeah, bring on the M5 Ultra. I hope I can get into position to buy one or two by then.
 

FlameTail

Diamond Member
Dec 15, 2021
4,095
2,465
106
M4 has new P-cores, but the E-cores, GPU and NPU are the same as M3/A17. They are just clocked higher.

So will M4 Pro/M4 Max/M4 Ultra/A18 also be that way?

We do know the A18 is getting a significantly beefed up NPU.

M4 feels like an M3.5
 