Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,774
1,346
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 teraflops
82 gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as they do with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally). A quick way to confirm the core counts and cache sizes on an actual machine is sketched below.
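If you want to sanity-check those figures yourself, macOS exposes them through sysctl. Here is a minimal C sketch; the hw.perflevel0 / hw.perflevel1 names are what recent macOS releases use for the P- and E-core clusters, so treat them as an assumption if you're on an older release.

Code:
// Query per-cluster core counts and cache sizes on an Apple Silicon Mac.
// Relies on the hw.perflevel* sysctl names exposed by recent macOS versions.
#include <stdio.h>
#include <stdint.h>
#include <sys/sysctl.h>

static long long q(const char *name) {
    int64_t value = 0;                 // zero-init: some sysctls return 32-bit ints
    size_t len = sizeof(value);
    if (sysctlbyname(name, &value, &len, NULL, 0) != 0)
        return 0;                      // name not available -> just prints 0
    return (long long)value;
}

int main(void) {
    const char *levels[] = { "hw.perflevel0", "hw.perflevel1" };  // P cluster, E cluster
    for (int i = 0; i < 2; i++) {
        char name[64];
        printf("%s:\n", levels[i]);
        snprintf(name, sizeof(name), "%s.logicalcpu", levels[i]);
        printf("  cores: %lld\n", q(name));
        snprintf(name, sizeof(name), "%s.l1icachesize", levels[i]);
        printf("  L1i:   %lld KB\n", q(name) / 1024);
        snprintf(name, sizeof(name), "%s.l1dcachesize", levels[i]);
        printf("  L1d:   %lld KB\n", q(name) / 1024);
        snprintf(name, sizeof(name), "%s.l2cachesize", levels[i]);
        printf("  L2:    %lld MB\n", q(name) / (1024 * 1024));
    }
    return 0;
}

On an M1 this should report 4+4 cores, 192/128 KB L1 for the P cluster and 128/64 KB for the E cluster, matching the list above.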

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, H.265 (HEVC), and ProRes
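If you want to check which of those codecs a given Mac will actually hardware-decode, VideoToolbox has a simple query. A small C sketch (decode side only; encode support has to be probed separately):

Code:
// Probe hardware *decode* support for the codecs listed above.
// Build on macOS with: clang probe.c -framework VideoToolbox -framework CoreMedia
#include <stdio.h>
#include <VideoToolbox/VideoToolbox.h>

static void probe(const char *label, CMVideoCodecType codec) {
    printf("%-14s hardware decode: %s\n", label,
           VTIsHardwareDecodeSupported(codec) ? "yes" : "no");
}

int main(void) {
    probe("H.264", kCMVideoCodecType_H264);
    probe("HEVC (H.265)", kCMVideoCodecType_HEVC);
    probe("ProRes 422", kCMVideoCodecType_AppleProRes422);
    return 0;
}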

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

name99

Senior member
Sep 11, 2010
478
359
136
Building a 5G modem, beyond the patents minefield, is incredibly complex, it's not only a fab/price issue.

BTW Apple bought the Intel modem division in 2019. But at that point Intel had already internalized everything after closing all R&D centers in Europe. So it comes as no surprise that Apple still doesn't have its own competitive chip. They probably bought only buggy unusable IP and some patents, and too few engineers with expertise.
This strikes me as basically identical to what people were saying about Apple and AI -- right up until WWDC.
People seem to have this bizarre idea that if Apple hasn't made any public announcement, that's OBVIOUSLY proof that things have gone terribly wrong internally and nothing is happening.
Nearly 30 years after Steve Jobs returned, and people STILL think that Apple operates like other companies, making big public announcements of whatever half-assed, barely-working functionality they have in the lab (*cough* Intel *cough*).

People said it would be impossible for Apple to match Intel. Not just difficult, not just take time, but IMPOSSIBLE. For reasons...
Apparently something something patents, something something experience, something something all the best engineers blah blah.
Then, learning nothing from that experience, they said it would be impossible for Apple to do that with the GPU. Apparently patents, experience, best engineers, blah blah.
Now it's RF.

I have not made even the slightest attempt to track Apple's cellular patents. But I can tell you (as can anyone else who simply takes a look) that a huge fraction, quite possibly more than 60%, of their patent output over the past few years has been cellular related.
So clearly they are working hard on the problem.

So if they have been working so hard for so long, why not ship?
My guess is that they want to ship something that clearly matches/exceeds QC on every dimension, because if there's one dimension where they fall short, people who were unaware a day ago of that aspect of cellular modems will suddenly proclaim that it is the single most important cellular functionality in their lives. So you need to get the obvious performance issues down, but you also need backward compatibility, you need working at high speed (doppler shift, eg on fast trains), you need good handoff especially when crossing borders (something even current QC chipsets do terribly); and, most important for Apple, you need much better power control.

They were able to grow their way from A6 upwards, and in a perfect world they'd probably have done the same thing with the modem - start with a minimal base and improve every year. But modems are a different business, not least because you have to interop with every carrier everywhere in the world, while also following all manner of government regulation. You don't really have much choice except to get *everything* perfect and then ship; you can't ship something good enough in 2022 and leave it till 2025 to have the model that's really what you had in mind, that fixes all the compromises required to ship in 2022...
 

name99

Senior member
Sep 11, 2010
478
359
136
The SSVE performance of the SME block of the M4 is poor. It's lower than the aggregated NEON performance of the P cores. For instance I'm not sure one could write a faster codec with it.
The SSVE performance of the SME block is poor.
The 512b vector performance of the SME block is NOT poor.

The details of why things have worked out this way are unclear (I've discussed them elsewhere), but they are probably temporary and may well change in the M5 (or even the M4 Pro/Max?).
But you can definitely get the expected performance for 512b vectors out of the M4 AMX unit if you use the correct instructions (either the SME vector instructions or, I'm guessing they still work, at least for one or two more generations, the AMX instructions).

You also would not write a codec with SME/AMX/SSVE; that's not the target use case, and not just in theory - it makes no sense in practice: the latency/throughput balance doesn't work, table lookups are problematic, and you don't have the full range of desirable integer instructions.
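For context on the terminology: "SSVE" here means ordinary SVE data-processing code executed in the SME unit's streaming mode - roughly the kind of loop sketched below. This is only a sketch (it assumes a recent clang with SME ACLE support, e.g. -march=armv9-a+sme, and the __arm_locally_streaming attribute from the ACLE SME spec), and it shows the streaming-SVE path being discussed; per the post above, the expected 512-bit throughput on M4 instead comes from the SME vector instructions or the AMX instructions.

Code:
// Streaming-SVE saxpy sketch (y += a*x). Inside an __arm_locally_streaming
// function the SVE vector length is the SME streaming vector length (512 bits
// on M4), so svcntw() reports 16 floats per vector instead of NEON's 4.
#include <arm_sve.h>
#include <stddef.h>

__arm_locally_streaming
void saxpy_ssve(float a, const float *x, float *y, size_t n) {
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32_u64(i, n);   // predicate handles the tail
        svfloat32_t vx = svld1_f32(pg, &x[i]);
        svfloat32_t vy = svld1_f32(pg, &y[i]);
        vy = svmla_n_f32_x(pg, vy, vx, a);       // vy += vx * a
        svst1_f32(pg, &y[i], vy);
    }
}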
 

SarahKerrigan

Senior member
Oct 12, 2014
735
2,033
136
Did we meet somewhere? Coz you just described me, minus knowing PCs inside and out (I learn something new everyday). But yeah, cars, printers, women. None of those things make sense to me. Oh, women are not products? Sorry, I got confused coz they seem to come in so many varieties and some of them actually want you to sample them and spend inordinate amounts of time trying to make themselves look appealing to potential... (yeah, not gonna say it!)

Please don't do this.

I'm a tinkerer. I want to tinker with something Apple that has 32GB RAM, user upgradable SSD, enough GPU cores to match a 4060 Ti. Then I wanna run benchmarks on it and see how much I can get out of it. And I have $1500. Does Apple want my money (and that of a lot of other geeks), yes or no? Make up your frickin' mind, Apple. Don't hide behind your brand. If you think your hardware is up to it, let REAL hardcore users use it, instead of daddy's little girl trying to design her fashion dresses on it or daddy's little boy running his DJ software on it.

I'm not a Mac user. I have fundamental issues with the way Apple does things. I run a Framework as my main machine.

This is still a gross mischaracterization. There are many, many, many power users of all stripes running Macs, because the hardware is good and so is the application ecosystem.

But I'm sure I'm just another "not REAL hardcore user," so my opinion is meaningless. I just do formal-verification and correctness-validation flows. I don't play games or obsess over Cinebench.
 

Doug S

Platinum Member
Feb 8, 2020
2,643
4,444
136
I'm a tinkerer. I want to tinker with something Apple that has 32GB RAM, user upgradable SSD, enough GPU cores to match a 4060 Ti. Then I wanna run benchmarks on it and see how much I can get out of it. And I have $1500. Does Apple want my money (and that of a lot of other geeks), yes or no? Make up your frickin' mind, Apple. Don't hide behind your brand. If you think your hardware is up to it, let REAL hardcore users use it, instead of daddy's little girl trying to design her fashion dresses on it or daddy's little boy running his DJ software on it.

Do you not understand that what you're talking about is a niche market? We're talking like 2-3% of the overall PC market. That's pretty much the share you see for hardware tinkerers (the overclockers) and software tinkerers (people like me who run a Linux desktop). You think you have economic power in a market the size of the PC market, let alone the smartphone market? Sorry to break the news to you, but you're a rounding error.

If Apple is willing to toss out the bottom 80% of the market, do you really think they are concerned with adding a few percent on the margins when it would massively impact how they design both their hardware and their software - compromising that design for the masses who like it how it is, judging by Apple's success in that upper 20% of the market - just to keep self-entitled little pr***s like you happy? Not that you would be happy; you'd still whine that they aren't doing "hardcore" right, just like all the toxic Star Wars fans who will loudly and constantly tell everyone who doesn't care why they personally can't stand any of the new stuff, for one of a hundred different reasons.

So no, Apple does not want your money. Get over it. You wanna run benchmarks on a fast ARM system? Well, Qualcomm has you covered now, so go complain to them about why they aren't properly serving your desire to tinker.
 

Doug S

Platinum Member
Feb 8, 2020
2,643
4,444
136
So if they have been working so hard for so long, why not ship?
My guess is that they want to ship something that clearly matches/exceeds QC on every dimension, because if there's one dimension where they fall short, people who were unaware a day ago of that aspect of cellular modems will suddenly proclaim that it is the single most important cellular functionality in their lives.

They don't need to match Qualcomm on the measurables. Cellular has long passed the "more than fast enough" bar. The only times it isn't fast enough is when a cell is congested or you're on the edge of a cell, neither of which your modem can do anything about. OK sure, a better modem will get a slightly better signal/speed when you're on the ragged edge of losing coverage, but as the blank spaces slowly get filled in, that problem is becoming less and less of a factor. Between wifi calling, which solved issues for people in weak coverage areas when at home or at work, and satellite support, which will at least cover you for sending basic text messages, anyone trying to argue for one modem over another based on metrics is going to have a hard time making anyone care.

For all the hand wringing about how Intel modems were inferior in speed/bars to Qualcomm modems (and this was the better part of a decade ago when coverage maps were a lot thinner and LTE was less common than 5G is today) it didn't impact Apple's sales at all, and customers couldn't tell in their day to day usage - or at least I couldn't when I had an Intel modem in one of my iPhones.

What Apple needs, though, is for the basic stuff to work, like roaming. It's those things that won't show up in some cellular test that says "wow, last year's iPhone with a Qualcomm modem got 1355 Mbps in our testing but the new one with the Apple modem only gets 975 Mbps, time to bring the pitchforks to 1 Infinite Loop!" We take roaming for granted, but that's hideously complex for the baseband to deal with across all the carriers around the world. If all Apple had to do was make it work in the US, that would be easy - there really aren't that many carriers (most are MVNOs of the big three) - but Americans have this annoying habit of wanting to leave the country occasionally, and a phone that can't roam properly in Europe isn't going to be acceptable.

Qualcomm had an easier time there because they've been around since the start of mobile, so back when roaming was something new and didn't always work, people didn't expect too much. They also became effectively the "standard", so if they implemented something wrongly everyone else had to adjust to them (and in a few cases the standard itself was changed to conform!). So Apple has to figure all this stuff out, and basically has to test all over the world, in depth and at length, to figure out whether they've got it right and what to fix (and then retest) when they haven't. It isn't like implementing an ARM CPU, where there's a lengthy standard out there you can design to, and a conformance test that can determine whether you've done it correctly.

That's why I keep saying that what Apple ought to do is talk to Mediatek and buy their IP. Essentially "fork" their modem and baseband, and each can continue on their own path from that point. Place all the patents (including the ones Apple has) in a JV that handles licensing for both, with an agreement that any cellular-related patents either company comes up with are granted to it. Is Mediatek's modem as good as Qualcomm's? No, but it is more than good enough, and unless Apple is finally approaching the finish line on their own effort, it would get them off Qualcomm modems sooner.
 

johnsonwax

Member
Jun 27, 2024
70
136
61
Fine, I'll just put a thought in Elon's head that Apple silicon is the only thing that will work on Mars and then it's him and his merry band of billionaires plotting a hostile takeover of Apple.
Jesus.

A hostile takeover of Apple would require about $1.5T. Their largest shareholders are Vanguard, BlackRock, and Berkshire Hathaway. Nobody is executing a hostile takeover of Apple. Again, grow up.
 

johnsonwax

Member
Jun 27, 2024
70
136
61
While I strongly agree with your overall point, do you know for a fact this particular detail ("they've always run on Azure and AWS") and how it has evolved?
Certainly there have been times in the past when Apple has used each of the big three cloud services, we've seen fragmentary info about that.
On the other hand, even five years ago we know that Apple had large data centers, and all the major known ones from them are still around and appear to still be used by Apple.

It seems to me (purely as an outsider with very little knowledge of this space) that the trajectory has probably been somewhat different from what you describe. More like initial versions of Mobile Me and then iCloud launched on AWS/Google/Azure because why not, how else do you get started?
And then slowly one step at a time (maybe Maps this year, maybe Siri cloud compute next year) elements moved from a commercial provider to Apple internal. Along with various parallel migrations inside those data centers – this year we try to run this functionality on OSX instead of Linux? Next year we try to run it on Apple Silicon?

On the one hand, we have all those data centers which, presumably, do *something*.
On the other hand, having this stuff in-house maybe makes it easier to tweak things to Apple's requirements (eg security issues)? Or whatever they are boasting about this year in terms of zero carbon and suchlike.
On the third hand maybe it's even operationally cheaper once you get to a certain scale? Especially so (unclear, but seems possible) if you're running on repurposed Apple silicon (ie Apple chips, but fitted into fairly bare-bones boxes)?

I could also believe a somewhat split model where all interesting "compute" now happens in Apple data centers, but bulk storage of photos and backups (something to which Apple can probably add little value) happens on the commercial cloud?
My understanding is that Apple's data centers up to this point have been pretty generic cloud infrastructure, because Apple has the same scaling/support issues as everyone else in the world and there weren't any particular benefits to running their own hardware/OS. It's not enough that OS X be as beneficial as Linux; it needs to be materially better, otherwise just use Linux. So far Apple's back end has been all pretty generic stuff. And just because Apple owns a data center doesn't mean it's not an Azure local deployment.

That's what's notable about Private Cloud Compute - it very directly leans on Apple Silicon and OS X features. At least in this situation, they've decided it is materially better, which is kind of interesting because while AS is reasonably competitive with other server hardware for generic compute, it's not that competitive for AI - and that's the use case. If you wanted to offload AI from an iPhone or MacBook Air, you'd think a server with Nvidia hardware would be the best option, and yet Apple is either explicitly rejecting that, or they're building AS servers that can take Nvidia cards and are writing the drivers. It's not like they need general-use GPU drivers, which are a lot of work, just the AI compute stuff.

It's a new approach, and the security stack seems to be the rationale. I'm still of the view that Private Cloud Compute is a stopgap, that Apple doesn't intend it to be a permanent feature, so they may be viewing this as a temporary bit of infrastructure/support. Or it may be a shift in attitude.
 

FlameTail

Diamond Member
Dec 15, 2021
3,592
2,076
106
Every P-core, from Firestorm in M1 to M4-P, has 192 KB L1i and 128 KB L1d.

And they all have 4 × 128b NEON as well.

Might this change in M5?
 
Mar 11, 2004
23,217
5,676
146
I think the most likely reason is that Apple is simply being appropriately cautious about "AI" (or rather the machine learning that current AI really just is). They also have leadership that predates the AI hype, so they don't have to justify their position by lying about AI, leading to the current feedback loop (lie about AI to get investment, then spend the money on AI hardware to justify the lies, then have to keep pushing lies to justify the spending, while dangling the nebulous AGI goal so they can try to keep the hype train going indefinitely), like Microsoft/OpenAI and Google are. Tim Cook is smart enough to not get sucked into the shenanigans of all these galaxy-brained clowns, but he unfortunately does have to play to the Wall Street circus, so Apple has to play the game some.

Apple also doesn't tend to favor things that don't make money. It's the main reason they haven't done much more with Siri: it costs money to query and there's not really a good way to monetize it that won't make people hate it (much like how Google sabotaging Search in a misguided attempt to push growth is blowing up in their face as people become frustrated). Apple at least gets people to pay by charging a premium for devices.

Also, people seem to forget, Apple f'n hates Nvidia. They were literally paying AMD to stay in existence prior to Zen (buying AMD GPUs, and also licensing GPU IP) just so they wouldn't be stuck having to use Nvidia. So there's no way they'll buy Nvidia's overpriced AI stuff for what Apple is actually doing with AI. They can also tout it as being for privacy/security, but that's more about protecting themselves than their customers.
 
Last edited:

The Hardcard

Member
Oct 19, 2021
181
273
106
My understanding is that Apple's data centers up to this point have been pretty generic cloud infrastructure, because Apple has the same scaling/support issues as everyone else in the world and there weren't any particular benefits to running their own hardware/OS. It's not enough that OS X be as beneficial as Linux; it needs to be materially better, otherwise just use Linux. So far Apple's back end has been all pretty generic stuff. And just because Apple owns a data center doesn't mean it's not an Azure local deployment.

That's what's notable about Private Cloud Compute - it very directly leans on Apple Silicon and OS X features. At least in this situation, they've decided it is materially better, which is kind of interesting because while AS is reasonably competitive with other server hardware for generic compute, it's not that competitive for AI - and that's the use case. If you wanted to offload AI from an iPhone or MacBook Air, you'd think a server with Nvidia hardware would be the best option, and yet Apple is either explicitly rejecting that, or they're building AS servers that can take Nvidia cards and are writing the drivers. It's not like they need general-use GPU drivers, which are a lot of work, just the AI compute stuff.

It's a new approach, and the security stack seems to be the rationale. I'm still of the view that Private Cloud Compute is a stopgap, that Apple doesn't intend it to be a permanent feature, so they may be viewing this as a temporary bit of infrastructure/support. Or it may be a shift in attitude.
I don't think Private Cloud Compute is a stopgap. They are investing a lot of resources into adjusting the next Ultra and a possible Extreme to accommodate their use in the datacenter. Sumit Gupta designed Google's AI cloud infrastructure around large numbers of smaller machines rather than the huge petaflop, large-batch boxes that Nvidia and AMD are providing.

Now that he is at Apple, he is largely responsible for building a similar infrastructure using Apple Silicon. Apple hardware is competitive in the AI space, and I think the move to optimize its strengths and confront some of its weaknesses is a key reason why there are so many reports indicating that the next Ultra is not coming until the second half of 2025.

The unified memory is proving to be a valuable asset in the AI race. It is allowing the Max and Ultra tiers to serve large-memory-footprint deep learning models that would require much more expensive hardware if you went the Nvidia route. Not only is the 800 GB/s fast enough to handle the requests of one or two people at a time, current LPDDR5 memory can relatively easily be boosted: if Apple is attaching Samsung's fastest RAM to its datacenter M2 Ultras, the memory bandwidth goes up to 1.3 TB/s.
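For a rough sanity check on those bandwidth figures, the arithmetic is just bus width times transfer rate; in the sketch below, the 1024-bit bus width and the ~10700 MT/s LPDDR5X rate are my assumptions for an Ultra-class package.

Code:
// Back-of-envelope peak bandwidth: bus width (bits) x transfer rate (MT/s) / 8.
#include <stdio.h>

static double peak_gb_per_s(double bus_bits, double megatransfers_per_s) {
    return bus_bits * megatransfers_per_s / 8.0 / 1000.0;   // Mbit/s -> MB/s -> GB/s
}

int main(void) {
    printf("LPDDR5-6400   x 1024-bit: %.0f GB/s\n", peak_gb_per_s(1024, 6400));           // ~819 GB/s
    printf("LPDDR5X-10700 x 1024-bit: %.2f TB/s\n", peak_gb_per_s(1024, 10700) / 1000.0); // ~1.37 TB/s
    return 0;
}

That lines up with the 800 GB/s spec today and roughly 1.3-1.4 TB/s with the fastest LPDDR5X, if those assumptions hold.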

Even more significant is that it appears the new Ultras won't be made of 2 Max dies. Mark Gurman has indicated that the Hidra will be the desktop Apple Silicon die that will form at least the Ultra, if not also the Extreme.

There is speculation that the Brava die will be the next Pro and that it will be designed for two of them to be joined to create a future Max. All of these upcoming chips will use SoIC to bring both the RAM and the joined compute dies much closer together than current Apple Silicon.

So, not a stopgap, but the main plan. Most exciting is the rumor that there will be one design for both the datacenter and the consumer. It is my goal to be one of those consumers as close to the release date as I can get.
 
Reactions: name99

The Hardcard

Member
Oct 19, 2021
181
273
106
I'm a tinkerer. I want to tinker with something Apple that has 32GB RAM, user upgradable SSD, enough GPU cores to match a 4060 Ti. Then I wanna run benchmarks on it and see how much I can get out of it. And I have $1500. Does Apple want my money (and that of a lot of other geeks), yes or no? Make up your frickin' mind, Apple. Don't hide behind your brand. If you think your hardware is up to it, let REAL hardcore users use it, instead of daddy's little girl trying to design her fashion dresses on it or daddy's little boy running his DJ software on it.
Apple's small share of the personal computing device market still amounts to millions of people. They are moving close to 20 million Macs per year, and you're infuriated with Apple and derisive toward those millions of customers because of this particular device that you want Apple to make?

You've just been given a wonderful gift: Snapdragon X Elite, with the Oryon core that is extremely similar to Apple Silicon's cores. They are coming next year with much more powerful GPUs. You can tinker with the chip and run benchmarks to your heart's delight, with the actual Windows UI no less. Upgradeable SSDs - it's all there for you.

Enjoy it and hopefully it can allow you to not worry about what Apple’s business model is and what its customers get out of the products they make.
 
Last edited:

smalM

Member
Sep 9, 2019
67
69
91
Now what would be interesting to see is the profit share Apple has in laptops.

In smartphones, iPhones make up only about 20% of shipments, but Apple takes home >50% of the profit in the industry.
In smartphones last year Apple took home 50% of the revenues.
In PC revenues it was 'only' 13%.

But you can definitely get the expected performance for 512b vectors out of the M4 AMX unit if you use the correct instructions (either the SME vector instructions or, I'm guessing they still work, at least for one or two more generations, the AMX instructions).
I seem to remember that Dougall Johnson mentioned the AMX instructions still work.
 
Last edited:
Reactions: name99

johnsonwax

Member
Jun 27, 2024
70
136
61
In smartphones last year Apple took home 50% of the revenues.
In PC revenues it was 'only' 13%.
The metric isn't revenue, it's profits. You can't convert revenue into R&D if there are no profits. Apple takes 85% of mobile phone profits. They can outspend the entire industry on R&D by a wide margin.

For PCs, it's probably around 30%. Used to be higher, but PC OEMs have figured out how to sell nice laptops.
 

name99

Senior member
Sep 11, 2010
478
359
136
Every P-core, from Firestorm in M1 to M4-P, has 192 KB L1i and 128 KB L1d.

And they all have 4 × 128b NEON as well.

Might this change in M5?
Numbers in the design of a CPU are much less interesting than algorithms.
But it is VASTLY easier to run a few micro-benchmarks on a new CPU and proclaim new numbers than it is to understand new algorithms. Even supposedly technical sites like Geekerwan or Chips and Cheese are happy to collect a few numbers and move on, without bothering to investigate any anomalies in the numbers.

So the point is: you may or may not get an answer from people who may or may not know. And in a year or so we'll know those numbers for the M5.
But you will have the correct answers to the wrong questions.
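For what it's worth, the sort of micro-benchmark being talked about is easy to sketch. The following is my own rough example (compile with -O2 on an arm64 Mac; the 16-chain count and the resulting GFLOP/s are ballpark only): with enough independent accumulators to hide FMA latency, a P core with four 128-bit FMA pipes should sustain roughly 32 FLOPs per cycle.

Code:
// Rough NEON FMA throughput micro-benchmark.
#include <arm_neon.h>
#include <stdio.h>
#include <time.h>

#define CHAINS 16                                   // independent dependency chains

int main(void) {
    const long iters = 100 * 1000 * 1000;
    float32x4_t acc[CHAINS];
    for (int j = 0; j < CHAINS; j++) acc[j] = vdupq_n_f32(0.0f);
    const float32x4_t a = vdupq_n_f32(1.0000001f);
    const float32x4_t b = vdupq_n_f32(0.9999999f);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        for (int j = 0; j < CHAINS; j++)
            acc[j] = vfmaq_f32(acc[j], a, b);       // acc += a*b: 4 lanes x 2 FLOPs each
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double flops = (double)iters * CHAINS * 4 * 2;  // iters x chains x lanes x (mul+add)

    float sink = 0.0f;                              // keep the compiler from discarding the loop
    for (int j = 0; j < CHAINS; j++) sink += vaddvq_f32(acc[j]);
    printf("%.1f GFLOP/s (sink=%f)\n", flops / secs / 1e9, sink);
    return 0;
}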
 

name99

Senior member
Sep 11, 2010
478
359
136
I don't think Private Cloud Compute is a stopgap. They are investing a lot of resources into adjusting the next Ultra and a possible Extreme to accommodate their use in the datacenter. Sumit Gupta designed Google's AI cloud infrastructure around large numbers of smaller machines rather than the huge petaflop, large-batch boxes that Nvidia and AMD are providing.

Now that he is at Apple, he is largely responsible for building a similar infrastructure using Apple Silicon. Apple hardware is competitive in the AI space, and I think the move to optimize its strengths and confront some of its weaknesses is a key reason why there are so many reports indicating that the next Ultra is not coming until the second half of 2025.

The unified memory is proving to be a valuable asset in the AI race. It is allowing the Max and Ultra tiers to serve large-memory-footprint deep learning models that would require much more expensive hardware if you went the Nvidia route. Not only is the 800 GB/s fast enough to handle the requests of one or two people at a time, current LPDDR5 memory can relatively easily be boosted: if Apple is attaching Samsung's fastest RAM to its datacenter M2 Ultras, the memory bandwidth goes up to 1.3 TB/s.

Even more significant is that it appears the new Ultras won't be made of 2 Max dies. Mark Gurman has indicated that the Hidra will be the desktop Apple Silicon die that will form at least the Ultra, if not also the Extreme.

There is speculation that the Brava die will be the next Pro and that it will be designed for two of them to be joined to create a future Max. All of these upcoming chips will use SoIC to bring both the RAM and the joined compute dies much closer together than current Apple Silicon.

So, not a stopgap, but the main plan. Most exciting is the rumor that there will be one design for both the datacenter and the consumer. It is my goal to be one of those consumers as close to the release date as I can get.
I agree. I don't see any obvious reason to assume that there will soon be an end to the need for "remote compute".

Every decade we have people insisting that "computers are now fast enough" ALWAYS because of some crazy assumption that we will stop finding new things to do with computers.
In the early 90s computers were "fast enough to do word processing and who needs more?" Once they became fast enough to handle video, who could possibly need more? OK, once they could handle 3D graphics, now REALLY who could possibly need more?

Imagining that current LLMs, used as they currently are (and in some way shrunk to fit onto a phone, so no longer needing a data warehouse), are the end point of computing seems to me utterly crazy! Like saying "sure, sure, I know the last five predictions about this were wrong but, trust me, this time we REALLY REALLY understand the future and where computing is headed; no more surprises in store"...
 
Reactions: dr1337
Jul 27, 2020
19,027
12,965
146
Imagining that current LLMs, used as they currently are (and in some way shrunk to fit onto a phone, so no longer needing a data warehouse), are the end point of computing seems to me utterly crazy! Like saying "sure, sure, I know the last five predictions about this were wrong but, trust me, this time we REALLY REALLY understand the future and where computing is headed; no more surprises in store"...
It's insane how inventive people can be and how "experts" basically can't predict anything about future trends in computing. I guess this is why the big players are putting NPUs on their SoCs even though they have little idea of their usefulness beyond a few not-very-compelling use cases. Some 10-to-25-year-old tomorrow is going to use an NPU to invent a whole new and unprecedented class of applications, and the market will simply go crazy, causing a sudden uptick in AI-capable hardware sales.
 

johnsonwax

Member
Jun 27, 2024
70
136
61
It's insane how inventive people can be and how "experts" basically can't predict anything about future trends in computing. I guess this is why the big players are putting NPUs on their SoCs even though they have little idea of their usefulness beyond a few not-very-compelling use cases. Some 10-to-25-year-old tomorrow is going to use an NPU to invent a whole new and unprecedented class of applications, and the market will simply go crazy, causing a sudden uptick in AI-capable hardware sales.
You understand that the reason the iPhone has the largest NPU in the Apple Silicon lineup is that it's used for computational photography and Face ID. So I would say Apple:
1) has a very clear idea of the usefulness of the NPU that they added
2) presents the use case as very compelling (despite your failure to understand it)

NPUs aren't some exotic thing - they're just a narrow, specialized compute unit that anything which is well suited to that type of compute can benefit from. You're expressing amazement that linear algebra has a broad set of applications. Like, yeah, no shit - ask any mathematician. I think your problem is that you misidentify who the experts are, since experts usually don't make a lot of definitive predictions on account of knowing what is difficult to predict.

But the market goes crazy because they aren't experts. They go crazy because they usually don't know what questions to ask, and so they follow the market hype, which is usually led by someone trying to make a buck who has no idea what they're talking about. Right now the problem around generative AI has nothing to do with its technical capabilities or limitations and has everything to do with the most common questions that investors fail to ask: "How do you collect money against this?" and "Who is the market?" Both of those are unanswered by OpenAI and every other AI company. Apple offered an answer to the first one at WWDC, and it's "You don't; this is a feature, not a product," which sort of negates the second question. And if they're right, then the hype will self-destruct and the market value will mostly vanish, quietly reappearing in the companies that can add value using it. I would argue that process is already well underway. Not enough people read this, or understood it. Just because a market goes crazy doesn't mean that it's warranted, or sustainable.
 

Nothingness

Platinum Member
Jul 3, 2013
2,968
1,862
136
But the market goes crazy because they aren't experts. They go crazy because they usually don't know what questions to ask, and so they follow the market hype, which is usually led by someone trying to make a buck who has no idea what they're talking about. Right now the problem around generative AI has nothing to do with its technical capabilities or limitations and has everything to do with the most common questions that investors fail to ask: "How do you collect money against this?" and "Who is the market?" Both of those are unanswered by OpenAI and every other AI company. Apple offered an answer to the first one at WWDC, and it's "You don't; this is a feature, not a product," which sort of negates the second question. And if they're right, then the hype will self-destruct and the market value will mostly vanish, quietly reappearing in the companies that can add value using it. I would argue that process is already well underway. Not enough people read this, or understood it. Just because a market goes crazy doesn't mean that it's warranted, or sustainable.
This makes me wonder if the money in that market isn't in only two places: tight integration in existing software stacks (which would make the existing SW product more valuable) and efficient NPU designs (needed to run and update the models on the go). These two extremities would be the only ones to survive the current craze.
 
Jul 27, 2020
19,027
12,965
146
So I would say Apple:
1) has a very clear idea of the usefulness of the NPU that they added
Gotta disagree. Apple's been caught off guard by Microsoft's AI push and higher AI TOPS requirement. They "might" rectify that with M4 Macbooks but if they use the exact same SoC as the iPad Pro, they will be limited to 38 TOPS while players in the x86 space will have anywhere from 50 to 55 TOPS (HP Omnibook Ultra). Developers may be more excited to experiment on x86 laptops than Apple ones. Apple is faced with a conundrum right now. Do they follow suit or do they pave their own way? The deals with OpenAI seem to suggest the latter as Apple clearly didn't anticipate Microsoft's strong AI push so they will try to give their users more AI TOPS through the OpenAI cloud without getting into the bind of being stuck with lower rated NPUs in their SoCs. It also suggests that they never gave much thought to genAI features before. I'm not saying that x86's higher local AI performance will revolutionize computing but it could certainly bring forth new possibilities. If I were a developer, I wouldn't touch Apple's overpriced hardware to experiment with local AI possibilities, especially since Apple does not show any interest in aggressively increasing their marketshare. Apple could still move developers in their direction through clever marketing and promises of reaching more users through their store and iPhones. It will be interesting watching this space on how things unfold.
 

poke01

Golden Member
Mar 8, 2022
1,879
2,374
106
Gotta disagree. Apple's been caught off guard by Microsoft's AI push and higher AI TOPS requirement. They "might" rectify that with M4 Macbooks but if they use the exact same SoC as the iPad Pro, they will be limited to 38 TOPS while players in the x86 space will have anywhere from 50 to 55 TOPS (HP Omnibook Ultra). Developers may be more excited to experiment on x86 laptops than Apple ones. Apple is faced with a conundrum right now. Do they follow suit or do they pave their own way? The deals with OpenAI seem to suggest the latter as Apple clearly didn't anticipate Microsoft's strong AI push so they will try to give their users more AI TOPS through the OpenAI cloud without getting into the bind of being stuck with lower rated NPUs in their SoCs. It also suggests that they never gave much thought to genAI features before. I'm not saying that x86's higher local AI performance will revolutionize computing but it could certainly bring forth new possibilities. If I were a developer, I wouldn't touch Apple's overpriced hardware to experiment with local AI possibilities, especially since Apple does not show any interest in aggressively increasing their marketshare. Apple could still move developers in their direction through clever marketing and promises of reaching more users through their store and iPhones. It will be interesting watching this space on how things unfold.
I wouldn't underestimate Apple; keep an eye on the A18 Pro this September. I do agree that Apple was blindsided by the AI hype, but they can get their roadmaps in order if they plan well.
 

Glo.

Diamond Member
Apr 25, 2015
5,787
4,752
136
Gotta disagree. Apple's been caught off guard by Microsoft's AI push and higher AI TOPS requirement. They "might" rectify that with M4 Macbooks but if they use the exact same SoC as the iPad Pro, they will be limited to 38 TOPS while players in the x86 space will have anywhere from 50 to 55 TOPS (HP Omnibook Ultra). Developers may be more excited to experiment on x86 laptops than Apple ones. Apple is faced with a conundrum right now. Do they follow suit or do they pave their own way? The deals with OpenAI seem to suggest the latter as Apple clearly didn't anticipate Microsoft's strong AI push so they will try to give their users more AI TOPS through the OpenAI cloud without getting into the bind of being stuck with lower rated NPUs in their SoCs. It also suggests that they never gave much thought to genAI features before. I'm not saying that x86's higher local AI performance will revolutionize computing but it could certainly bring forth new possibilities. If I were a developer, I wouldn't touch Apple's overpriced hardware to experiment with local AI possibilities, especially since Apple does not show any interest in aggressively increasing their marketshare. Apple could still move developers in their direction through clever marketing and promises of reaching more users through their store and iPhones. It will be interesting watching this space on how things unfold.
It's the exact opposite, mate.

It's Microsoft that rushed the AI PC initiative, because they knew what Apple is doing with their own AI venture.
 

FangBLade

Senior member
Apr 13, 2022
201
395
106
Will the current Mx series be strong enough for all those AI features? I know that Apple said they will support Apple Intelligence, but the question is whether they will be limited - for example, maybe they won't support all features, or only partially. What do those 38 TOPS in the M4 mean compared to 18 in the M3?
 