Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
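As a rough sanity check on the 2.6 teraflops figure, the numbers line up if you assume 8 FP32 ALUs per execution unit and the ~1.278 GHz GPU clock reported by third-party testing (neither is an Apple-published spec):

```python
# Back-of-envelope check of the M1 GPU's 2.6 TFLOPS figure.
# Assumptions (not Apple-published): 8 FP32 ALUs per execution unit,
# ~1.278 GHz GPU clock as reported by third-party testing.
eus = 128                  # execution units (from Apple's spec)
alus_per_eu = 8            # assumed FP32 lanes per EU
clock_ghz = 1.278          # estimated clock
flops_per_cycle = 2        # one fused multiply-add = 2 FLOPs

alus = eus * alus_per_eu                           # 1024 ALUs
tflops = alus * flops_per_cycle * clock_ghz / 1000
print(f"{alus} ALUs -> {tflops:.2f} TFLOPS")       # ~2.62 TFLOPS
```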

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265/HEVC, ProRes

M3 Family discussion here:


M4 Family discussion here:

 

Doug S

Platinum Member
Feb 8, 2020
Well, Samsung said they could build stacks up to 32 in the article about their LPDDR5X. And if Samsung can't deliver what Apple wants, Micron would. Apple buys a LOT of LPDDR when you consider all the iPhones, and a ton of NAND as well. Either will be falling all over themselves to get Apple's order, and if getting that huge order means delivering a relatively small quantity of high-capacity packages for the Mac Pro, they'd figure it out.

Samsung's LPDDR5/5X dies are ALL x16, so I'm not sure what you're talking about with needing "a single 16 bit interface in volume".
 

Doug S

Platinum Member
Feb 8, 2020
Regarding the TechInsights blog entry, the thing I found most interesting was the confirmation that they get those 10,000 I/Os in half a chip edge.

That means an M2 Max only requires one full and one half chip edge to deliver the 30,000 I/Os required to connect to three other M2 Maxes to build a Mac Pro. They'd be connected in a square, with the two outside edges of each die used for LPDDR I/Os. The remaining I/Os would sit on the half edge facing another M2 Max (the half that isn't used for interconnect) and be routed underneath via the interposer. Since those other I/Os, for stuff like DP, TB/USB, and maybe PCIe, already connect off the interposer, they don't need an exposed chip edge the way LPDDR does, as they are less latency critical.
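The edge budget in that layout is just arithmetic; a quick sketch (assuming one 10,000-I/O die-to-die link per neighbour, as the post implies):

```python
# I/O budget for one M2 Max die in a hypothetical four-die Mac Pro,
# using the ~10,000 I/Os per half chip edge from the TechInsights piece.
ios_per_half_edge = 10_000
ios_per_link = 10_000     # assumed: one die-to-die link per neighbour
neighbours = 3            # four dies in a square; each connects to three others

required = neighbours * ios_per_link   # 30,000 I/Os
available = 3 * ios_per_half_edge      # one full edge plus one half edge
print(required, available, available >= required)
```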
 
Reactions: ashFTW

Mopetar

Diamond Member
Jan 31, 2011
I wonder if this is a technical limitation / OS compatibility issue.

Center Stage was introduced with A13, so they’d most likely need A13 minimum. However, no A13 device has ever shipped with less than 64 GB.

It's just Apple using left over parts. Consider how many iDevices they make and the volume that they order components in and contrast that with how many of these displays they'll likely make in a year.

The parts being allocated to these displays are a rounding error in terms of the orders they place for the other products that use them.

Obviously not a power limitation with 370W at its disposal, or a heat limitation given how cool everything is running and the fans at idle. So is there some type of internal limit on how much power can be delivered to the SoC, or on available bandwidth to the SLC, or something of that nature?

It's doubtful that Apple intends for the device to ever use that much power. The extra capacity may just be there so that it can handle spikes that occur at sub-millisecond durations.

Someone needs to compile a power virus like Furmark to run on this in order to see how much it can be stressed.
 
Reactions: Eug

Doug S

Platinum Member
Feb 8, 2020
Saw an article about Nvidia's 144-core ARMv9 server chip, which offers up to 1 TB of LPDDR5X at a bandwidth of up to 1 TB/sec. Apple is getting 800 GB/sec out of the M1 Ultra using LPDDR5; if that were LPDDR5X it would be 1 TB/sec, so Nvidia must be using the same number of controllers as the M1 Ultra does. However, Nvidia supports 1 TB instead of 128 GB, thus they are using 8x as much LPDDR per controller (i.e. 8x as dense per package) as Apple is.

If Apple used the same LPDDR5X parts as Nvidia, they could deliver a Mac Pro with 2 TB. Nvidia is also using ECC LPDDR5X, so that's also possible (something which IMHO they should do for the Mac Pro, but I guess we'll see).
 
Reactions: Tlh97 and Saylick

JasonLD

Senior member
Aug 22, 2017
Saw an article about Nvidia's 144-core ARMv9 server chip, which offers up to 1 TB of LPDDR5X at a bandwidth of up to 1 TB/sec. Apple is getting 800 GB/sec out of the M1 Ultra using LPDDR5; if that were LPDDR5X it would be 1 TB/sec, so Nvidia must be using the same number of controllers as the M1 Ultra does. However, Nvidia supports 1 TB instead of 128 GB, thus they are using 8x as much LPDDR per controller (i.e. 8x as dense per package) as Apple is.

If Apple used the same LPDDR5X parts as Nvidia, they could deliver a Mac Pro with 2 TB. Nvidia is also using ECC LPDDR5X, so that's also possible (something which IMHO they should do for the Mac Pro, but I guess we'll see).

The theoretical maximum for LPDDR5X per package is 64 GB, so it is still going to be 1 TB maximum for the Mac Pro if it is like an M2 Ultra x 2.
Apple's hypothetical Mac Pro could potentially have a bandwidth of 2 TB/sec with 1 TB maximum capacity. Apple's M1 Max/Ultra has double the memory channels per package vs Nvidia's server chip.
 

eek2121

Diamond Member
Aug 2, 2005
Excuse my ignorance, but the 5950X does not have a GPU at the RTX 3080's performance level, mind you. So using SoC power numbers is dubious - you might want to check the RTX 3080's idle power for reference. More importantly, the 5950X does not even have the CPU performance of the Apple M1 Ultra; in fact my 5950X is significantly slower, while consuming 130-140 W.
So the conclusion that the M1 Ultra is in a completely different league with respect to power efficiency is not an overstatement.
It doesn’t matter since the GPU was at 0% utilization for the benchmark.


Only Apple would do something wasteful like this.
Wasteful? They likely want (or wanted) to add new features to the display as time goes on. What if they added AppleTV to the display? Or some type of low power app sharing type of deal.
 

Doug S

Platinum Member
Feb 8, 2020
The theoretical maximum for LPDDR5X per package is 64 GB, so it is still going to be 1 TB maximum for the Mac Pro if it is like an M2 Ultra x 2.
Apple's hypothetical Mac Pro could potentially have a bandwidth of 2 TB/sec with 1 TB maximum capacity. Apple's M1 Max/Ultra has double the memory channels per package vs Nvidia's server chip.

How would Nvidia be getting 1 TB/sec memory bandwidth with half the controllers? How would they be reaching 1 TB of capacity with half the controllers? That would imply they are 16x more dense per controller than M1.
 

JasonLD

Senior member
Aug 22, 2017
How would Nvidia be getting 1 TB/sec memory bandwidth with half the controllers? How would they be reaching 1 TB of capacity with half the controllers? That would imply they are 16x more dense per controller than M1.




Apple gets 800 GB/s with 8 packages and Nvidia's Grace gets 1 TB/s with 16 packages, which implies they have the same number of channels (32), since the only speed difference comes from LPDDR5 (6400 Mbps) vs LPDDR5X (8533 Mbps).
LPDDR5X's maximum capacity per package is 64 GB, so 64 x 16 = 1024 GB. If Apple simply doubles up on an M1/M2 Ultra x 2, you will see a maximum memory capacity of 1 TB.
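The bandwidth side of this checks out with a quick calculation (package widths per the x128/x64 figures discussed elsewhere in the thread):

```python
def peak_bw_gbs(bus_width_bits: int, data_rate_mts: int) -> float:
    """Peak DRAM bandwidth in GB/s: bus width (bits) x transfers/sec / 8 bits per byte."""
    return bus_width_bits * data_rate_mts / 8 / 1000

# M1 Ultra: 8 packages x 128 bits of LPDDR5-6400 -> 1024-bit bus
m1_ultra = peak_bw_gbs(8 * 128, 6400)   # 819.2 GB/s, marketed as 800 GB/s
# Grace: 16 packages x 64 bits of LPDDR5X-8533 -> also a 1024-bit bus
grace = peak_bw_gbs(16 * 64, 8533)      # ~1092 GB/s, marketed as 1 TB/s
print(f"M1 Ultra {m1_ultra:.1f} GB/s, Grace {grace:.1f} GB/s")
```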
 
Jul 27, 2020
Wasteful? They likely want (or wanted) to add new features to the display as time goes on. What if they added AppleTV to the display? Or some type of low power app sharing type of deal.
I somehow don't see Apple being that magnanimous. But hey, if the displays don't sell well, they can always strip out the part with 64GB flash from the excess inventory and sell it as some iDevice in a third world country.
 

uzzi38

Platinum Member
Oct 16, 2019
You are confusing the capabilities of their GPU µArchs with the SKUs that put them to use.

The problem being that they are competing with each other and not Apple - therefore they will push the max of the bell curve before power draw skyrockets in order to get max bang per mm2 vs the opposition to maximise profits.

RDNA2 has already reached insane clock frequencies for a GPU, and the µArch perf/watt would likely benefit drastically from lowering them just a bit, as we are likely to see as chiplets in RDNA3+ allow them to start scaling perf with more silicon instead of more voltage (assuming IO/sync overhead doesn't kill the efficiency instead).

Also, the Mac Studio is running on at least a 2nd-gen 5nm chip that they get by grabbing TSMC's cutting-edge capacity, probably before it is even up for auction to other players in the industry.

Hardly a fair comparison here vs either AMD or nVidia taking these factors into consideration.

You can run a 6500XT at under half the power consumption at ~2.3 GHz, and that's probably the most extreme comparison for RDNA2. The rest of the lineup is not too bad in terms of perf/W.

Btw, I actually expect clock frequencies to rise again with RDNA3, but I'm getting a bit off topic here.
 
Reactions: Tlh97 and soresu

Eug

Lifer
Mar 11, 2000
It's just Apple using left over parts. Consider how many iDevices they make and the volume that they order components in and contrast that with how many of these displays they'll likely make in a year.

The parts being allocated to these displays are a rounding error in terms of the orders they place for the other products that use them.
Yes, that makes sense. I had forgotten they're still selling the iPhone 11 new with A13. They also sell the iPad with A13.

It's doubtful that Apple intends for device to ever use that much power. The extra capacity may just be so that it can handle spikes that occur at sub-millisecond durations.

Someone needs to compile a power virus like Furmark to run on this in order to see how much it can be stressed.
That and the ~130 Watts needed to power USB devices.
 

repoman27

Senior member
Dec 17, 2018
Apple gets 800 GB/s with 8 packages and Nvidia's Grace gets 1 TB/s with 16 packages, which implies they have the same number of channels (32), since the only speed difference comes from LPDDR5 (6400 Mbps) vs LPDDR5X (8533 Mbps).
LPDDR5X's maximum capacity per package is 64 GB, so 64 x 16 = 1024 GB. If Apple simply doubles up on an M1/M2 Ultra x 2, you will see a maximum memory capacity of 1 TB.
Apple is using 8 x128 packages at 6400 MT/s. NVIDIA is using 16 x64 packages at 8533 MT/s.

The maximum capacity per package depends on the maximum die density and number of dies you can stack in a package. Micron, Samsung, and SK hynix all make mostly 8 Gbit LPDDR dies. Micron had 16 Gbit LPDDR4X dies. Samsung has 16 Gbit LPDDR5/X dies, and SK hynix has 12 Gbit LPDDR5 dies. Samsung claimed their 16 Gbit LPDDR5X dies would enable 64 GB packages in the future, but there is nothing intrinsic to LPDDR5X technology or 16 Gbit die sizes that would allow them to suddenly be able to make 64 GB packages.
 

Doug S

Platinum Member
Feb 8, 2020

Apple gets 800 GB/s with 8 packages and Nvidia's Grace gets 1 TB/s with 16 packages, which implies they have the same number of channels (32), since the only speed difference comes from LPDDR5 (6400 Mbps) vs LPDDR5X (8533 Mbps).
LPDDR5X's maximum capacity per package is 64 GB, so 64 x 16 = 1024 GB. If Apple simply doubles up on an M1/M2 Ultra x 2, you will see a maximum memory capacity of 1 TB.

You're not getting it. This is simple math.

You agree that both M1 Ultra and Grace have the same number of controllers/channels. Thus the M1 Ultra and Grace memory buses are both 1024 bits wide in total. If M1 Ultra has 8 packages, each package must be 128 bits wide. If Grace has 16 packages, each package must be 64 bits wide. They are not the same, so simply counting the number of packages you see and drawing conclusions from that is pointless.

Just because Apple is using 128-bit packages with the M1 family does not mean they are forced to do so with the M2. If they chose to use the packages Nvidia is using, which are 64 GB in a 64-bit wide package, they can hit 2 TB on a Mac Pro, because they will have twice the width of memory bus, which would therefore support 32 x 64 GB packages.
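Under those assumptions (a Mac Pro bus twice the M1 Ultra's 1024 bits, x64 packages, and Samsung's claimed future 64 GB LPDDR5X package), the capacity works out as:

```python
# Hypothetical Mac Pro capacity under the assumptions above.
BUS_BITS = 2 * 1024       # twice an M1 Ultra's 1024-bit memory bus
PKG_WIDTH_BITS = 64       # x64 packages, as in Nvidia Grace
PKG_CAPACITY_GB = 64      # Samsung's claimed future 64 GB LPDDR5X package

packages = BUS_BITS // PKG_WIDTH_BITS           # 32 packages
total_tb = packages * PKG_CAPACITY_GB / 1024    # 2.0 TB
print(packages, total_tb)
```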
 

Heartbreaker

Diamond Member
Apr 3, 2006
If they chose to use the packages Nvidia is using, which are 64 GB in a 64-bit wide package, they can hit 2 TB on a Mac Pro because they will have twice the width of memory bus, which would therefore support 32 x 64 GB packages.

We have no idea what capacity the NVidia parts are, because Grace memory capacity has not been revealed.

Those could be 8GB packages for all we know, limiting them to the same 128GB as M1-Ultra. Though I think 16GB is a safe bet.

Further we have ZERO idea what Mac Pro memory topology looks like right now, as it could be M2 based. For all we know M2-Ultra might switch to HBM3.
 

Eug

Lifer
Mar 11, 2000
My only storage concern is with soldered-in desktop storage, like on the Mini, being a significant detriment to long-term repairs. The Studio doesn't have that problem.

Hopefully an M2 Mini gets a storage socket as well.

Luke Miani bought 3 Mac Studios, and did some SSD swaps to see if he could make it work. It did not.

1. Even with Apple's Configurator 2 service software, which does firmware restores and whatnot, he cannot get his Mac Studios to work with an SSD swapped in from another machine. He was trying to do an upgrade from 512 GB to 1 TB, but no dice. Configurator 2 restores will fail after a storage swap, until you swap the original back in.
2. He bricked one of his Mac Studios. Hopefully he can revive it.
3. He says that iFixit was able to swap one from one machine into another and got ONE of them to work, but the other doesn't work. However the drives were the same size.
4. Gathering information from various sources including from Apple techs, he repeats the info (already posted here) that these aren't actually full-fledged SSDs, but are just flash storage modules. The controller is on the SoC.
5. The storage modules are serialized, so they need to be matched to the system (although iFixit's result suggests sometimes it may work even if not matched).
6. Because the storage modules are serialized, in order to replace one (properly), you must use Apple's service software to imprint the new serial with the SoC/logic board. This service software is not available to the public.
7. Apple's service providers are not allowed to do storage upgrades. The service providers can do storage repairs/replacements with the same sized modules, but not upgrades. As mentioned, such repairs require the service software.
 
Reactions: igor_kavinski

Heartbreaker

Diamond Member
Apr 3, 2006

Luke Miani bought 3 Mac Studios, and did some SSD swaps to see if he could make it work. It did not.

1. Even with Apple's Configurator 2 service software that does firmware restores and what not, he cannot get his Mac Studios to work with a swapped in SSD from another machine. He was trying to do an upgrade from 512 GB to 1 TB, but no dice. Configurator 2 restores will fail after a storage swap.
2. He bricked one of his Mac Studios. Hopefully he can revive it.
3. He says that iFixit was able to swap one from one machine into another and got ONE of them to work, but the other doesn't work. However the drives were the same size.
4. Gathering information from various sources including from Apple techs, he repeats the info (already posted here) that these aren't actually SSDs, but are just flash storage modules. The controller is on the SoC.
5. The storage modules are serialized, so they need to be matched to the system (although iFixit's result suggests sometimes it may work even if not matched).
6. Because the storage modules are serialized, in order to replace one (properly), you must use Apple's service software to imprint the new serial with the SoC/logic board. This service software is not available to the public.
7. Apple's service providers are not allowed to do storage upgrades. The service providers can do storage repairs/replacements, but not upgrades. As mentioned, such repairs require the service software.

Gotta love the world of ignorant ranting youtube clickbait.

Arstechnica explains it a bit more rationally and minus the clickbait:

 
Reactions: Mopetar

Eug

Lifer
Mar 11, 2000
Gotta love the world of ignorant ranting youtube clickbait.

Arstechnica explains it a bit more rationally and minus the clickbait:

I think Ars is being overly dismissive. Furthermore, Ars' use of the procedure for the Mac Pro to explain things in their article doesn't necessarily make sense.

After going through the entire video, it's clear to me that Luke Miani is missing a fair bit of information, but one key bit of info provided in the video that Ars made no mention of was that at least according to the Apple techs Luke Miani has spoken with, the service providers are not allowed to do storage upgrades for the Mac Studio. They can only replace like-for-like as part of an actual repair. This is actually quite a different scenario from the Mac Pro, despite Ars' dismissive claims. For the Mac Pro, an end user can simply buy SSD upgrades straight from the Apple Store, and do the upgrade at home (or pay an Apple service centre to do the upgrade).

This is for me the most important point here. I mean, this is not unexpected of course, but this is nonetheless a very important distinction between the Mac Studio and the Intel Mac Pro. It also may mean that the SSD replacement procedure between the two machines is different.
 
Reactions: moinmoin

Heartbreaker

Diamond Member
Apr 3, 2006
I think Ars is being overly dismissive. Furthermore, Ars' use of the procedure for the Mac Pro to explain things in their article doesn't really make sense.

After going through the entire video, it's clear to me that Luke Miani is missing a fair bit of information, but one key bit of information provided in the video that Ars made no mention of was that at least according to Apple techs Luke Miani has spoken with, the service providers are not allowed to do storage upgrades for the Mac Studio. They can only replace like-for-like as part of an actual repair. This is actually quite a different scenario from the Mac Pro, despite Ars' dismissive claims. For the Mac Pro, an end user can simply buy SSD upgrades straight from the Apple Store, and do the upgrade at home (or pay an Apple service centre to do the upgrade).

This is for me the most important point here. I mean, this is not unexpected of course, but this is nonetheless a very important distinction between the Mac Studio and the Intel Mac Pro. It also likely means that the SSD replacement procedure between the two machines is different.

I think it's a bit early to worry about hearsay about a machine that's been on the market for a handful of days.

Eventually OWC will probably be selling compatible SSD devices, and there will be a widely available procedure for activating them.

But for now, buy an appropriate amount of storage to last you a couple of years.

It's a much better situation than the rest of the M1 Macs, where it's soldered in.
 

repoman27

Senior member
Dec 17, 2018
Samsung's LPDDR5/5X dies are ALL x16, so I'm not sure what you're talking about with needing "a single 16 bit interface in volume".
LPDDR4/X dies were generally divided into two halves, each being half of the total die capacity, and each having its own 16-bit interface. I didn't realize LPDDR5/X dies are single channel only—my bad. The interfaces can also be run at half width (8-bit or "byte mode") to increase package density. Typical configurations would be:

1 dual die stack to create a single-rank x32 package
1 quad die stack to create a dual-rank x32 package
2 dual die stacks side by side to create a dual-rank x32 package
2 quad die stacks side by side with interfaces in byte mode to create a dual-rank x32 package
1 quad die stack to create a single-rank x64 package
2 dual die stacks side by side to create a single-rank x64 package
2 quad die stacks side by side to create a dual-rank x64 package
2 quad die stacks side by side to create a single-rank x128 package

It looks like Micron, Samsung, and SK hynix all have 8 and 12 Gbit LPDDR5 dies, but only Samsung has announced 16 Gbit density for their LPDDR5/X. Samsung claimed in a press release that their 16 Gbit die would enable 64 GB packages. And if you do the math, with a die density of 16 Gbit, a maximum of 4 dies per 16-bit channel, and an 8-channel package interface (x128), you do arrive at 64 GB. However, the highest density LPDDR5 package I'm aware of is SK hynix's 18 GB module, which probably uses 12x 12 Gbit dies, and I've never seen an LPDDR package with more than 12 dies. So while 64 GB may be possible in theory, that doesn't mean it's an actual product that is currently or will ever be available. 16 and even 24 Gbit LPDDR4X dies have been around for years, yet the highest capacity package made with them stands at 12 GB. Will somebody do 12x 16 Gbit dies for 24 GB? Sure, but the jump to 32 dies seems improbable to me.
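Samsung's 64 GB claim does follow from the stated limits, even if no such package has shipped; the arithmetic (figures as given above) is:

```python
# Maximum theoretical LPDDR5X package capacity from the figures above:
# 16 Gbit dies, up to 4 dies per 16-bit channel, 8 channels in a x128 package.
die_gbit = 16
dies_per_channel = 4
channels = 8                        # a x128 package interface

dies = dies_per_channel * channels  # 32 dies - far beyond the 12 seen in shipping packages
capacity_gb = dies * die_gbit // 8  # Gbit -> GB
print(dies, capacity_gb)            # 32 dies, 64 GB
```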
 

eek2121

Diamond Member
Aug 2, 2005
I somehow don't see Apple being that magnanimous. But hey, if the displays don't sell well, they can always strip out the part with 64GB flash from the excess inventory and sell it as some iDevice in a third world country.
I guess we'll find out. When they mentioned that the monitor had an A13 in it, I figured they probably had some type of platform update in mind for the future. Maybe not based on any existing products, but rather, possibly an entirely new platform. The fact they have storage further indicates that this is or may have been the case. Who knows?
I think Ars is being overly dismissive. Furthermore, Ars' use of the procedure for the Mac Pro to explain things in their article doesn't necessarily make sense.

After going through the entire video, it's clear to me that Luke Miani is missing a fair bit of information, but one key bit of info provided in the video that Ars made no mention of was that at least according to the Apple techs Luke Miani has spoken with, the service providers are not allowed to do storage upgrades for the Mac Studio. They can only replace like-for-like as part of an actual repair. This is actually quite a different scenario from the Mac Pro, despite Ars' dismissive claims. For the Mac Pro, an end user can simply buy SSD upgrades straight from the Apple Store, and do the upgrade at home (or pay an Apple service centre to do the upgrade).

This is for me the most important point here. I mean, this is not unexpected of course, but this is nonetheless a very important distinction between the Mac Studio and the Intel Mac Pro. It also may mean that the SSD replacement procedure between the two machines is different.
I read similar explanations from other experts; in particular, that the NAND did not have a full controller on it.

I suspect the restoration process on the Mac Studio is broken. Someone on Twitter tried to restore a Mac Studio without messing with the hardware and it did not work properly either.
 

Eug

Lifer
Mar 11, 2000
I think it's a bit early to worry about hearsay about a machine that's been on the market for a handful of days.

Eventually OWC will probably be selling compatible SSD devices, and there will be a widely available procedure for activating them.
Unfortunately, OWC stuff is 3rd tier. It's basically a generic store brand. Maybe they'll get better (as someone mentioned they acquired a small Taiwanese engineering company), but I'm not optimistic. The good thing about OWC is they have a good reliable warranty.

Anyhow, after 3 years they still don't sell Mac Pro SSD module replacements, so I don't see them selling Mac Studio SSD module replacements either.

But for now, buy an appropriate amount of storage to last you a couple of years.
Truthfully, I'm not really concerned for my own usage. The bulk of my SSD storage is on external SSDs anyway. (My iMac is 1 TB internal SSD and 2 TB external USB-C SSD, the latter running at about 1 GB/s.) I don't need 7 GB/s transfer speeds, since I'm not an 8K video editor or whatever.

It's a much better situation than the rest of the M1 Macs, where it's soldered in.
I agree. I guess my point (not directed at you specifically, but just in general) is that it's great it's repairable, but people shouldn't conflate "repairable" with "upgradable". Although he didn't actually spell it out that way in that YouTube video, that was the main takeaway I got from it.
 

repoman27

Senior member
Dec 17, 2018
Yeah, the data on the NAND is fully encrypted, and the encryption keys are intrinsic to the SoC. The serialization bit is a red herring. The Mac doesn't care which NAND modules you put in it (although they do probably need to match if you install more than one). However, it can't possibly read any data off of the SSD in order to boot if the NAND is encrypted with a key the SSD controller / Secure Enclave in the SoC isn't privy to. DFU restore will probably work just fine once Apple provides a DFU restore image for the Mac Studio. Why people would think swapping these modules would function without some way to allow the SoC to initialize the drive and get the machine to a bootable state is beyond me.
 

Eug

Lifer
Mar 11, 2000
Yeah, the data on the NAND is fully encrypted, and the encryption keys are intrinsic to the SoC. The serialization bit is a red herring. The Mac doesn't care which NAND modules you put in it (although they do probably need to match if you install more than one). However, it can't possibly read any data off of the SSD in order to boot if the NAND is encrypted with a key the SSD controller / Secure Enclave in the SoC isn't privy to. DFU restore will probably work just fine once Apple provides a DFU restore image for the Mac Studio. Why people would think swapping these modules would function without some way to allow the SoC to initialize the drive and get the machine to a bootable state is beyond me.
How long does it usually take for DFU images to go live? Weeks? Just wondering. It would be great to see this being retested at that time. The Mac Studio DFU instructions were already up when the testing was done though. The instructions that included Mac Studio went up the same day the video was made, but of course that doesn't mean the actual image was there too.

This doesn't change the point made though that at least for now, the Apple service providers aren't allowed to do storage upgrades. (However, Apple is very clear up front about this, that the storage is not upgradable after the fact, so this doesn't come as a surprise.) If this continues to be true in the future, I wonder what this means for right to repair. You'd be able to buy the SSDs, but only the ones that matched your existing machine? This would lead to a grey market of OEM parts for possible upgrades, which means $$$. No third party aftermarket replacement SSD parts exist anywhere for the 2019 Mac Pro AFAIK, so I don't see such third party aftermarket replacement SSD parts for the Mac Studio appearing anytime soon either, even if the DFU restore process is proven to work. OEM SSDs for the Mac Pro exist on the used market, but they are very expensive, and you almost never know how much wear is on them (because the sellers generally don't list the amount of wear).

The OEM SSDs for the 2014 Mac mini, 2015 MacBook Pro, and 2017 MacBook Air we have were also very expensive for a very long time, but then once Apple started supporting third party NVMe SSDs in these machines (unofficially), the pricing on the used OEM drives dropped like a rock. So, I ended up buying a used OEM SSD for a reasonable price to put in the MacBook Pro. BTW, OWC doesn't sell proper replacements for these SSDs either. What OWC sells are generic third party NVMe drives that have the right connector for the Macs, but they don't function at all like OEM drives.
 

Doug S

Platinum Member
Feb 8, 2020
Why people would think swapping these modules would function without some way to allow the SoC to initialize the drive and get the machine to a bootable state is beyond me.

The people writing these articles don't even understand that they aren't SSDs. They aren't interested in how Macs work; they only want to author drivel claiming Apple is being evil to get clicks/views. It's like politics: extremist content gets more engagement (and drives more donations) than balanced points of view.
 
Reactions: scannall

repoman27

Senior member
Dec 17, 2018
The M1 Macs do treat internal storage quite differently than Intel Macs, and the boot process has changed quite a bit, so that may also be a factor here.

 