LPDDR6 @ Q3-2024: Mother of All CPU Upgrades

Doug S

Platinum Member

Quoting an earlier post:
"According to this, 32 GB of LPDDR6 LPCAMM3 will cost $500 apiece.

RIP."

RIP DDR5 as well, did you see the prices they were charging for that when it first appeared!

Wait, what... DDR5 is everywhere? How is that possible when it was declared RIP based on launch pricing??

jpiniero

Lifer
Doug S said:
"RIP DDR5 as well, did you see the prices they were charging for that when it first appeared!

Wait, what... DDR5 is everywhere? How is that possible when it was declared RIP based on launch pricing??"

Given how much of AMD's CPU sales were DDR4... until AI hype... yeah, pricing is a problem.

FlameTail

Diamond Member
LPDDR5X -> LPDDR6
16 bit channel -> 24 bit channel

I speculate that the channel width will increase again, either in the LPDDR6X or LPDDR7 generation (likely the latter).

LPDDR6 -> LPDDR7
24 bit channel -> 32 bit channel

Doug S

Platinum Member
If you think that, you have zero understanding of why the channel width was increased.

Since @FlameTail laugh-reacted to this, I'll spell it out for him.

Channel width is irrelevant in isolation. If you want x bandwidth you'll provide the necessary width via multiple controllers, so there is no benefit to the controller being wider. Indeed, there is benefit to it being narrower, which is why you saw DDR4 with a 64 bit wide channel -> DDR5 with 2x 32 bit wide channels -> DDR6 rumored to have 4 channels (which I'm betting will be 24 bits wide, because 16 bit wide channels create certain implementation difficulties for ECC). That allows more independent memory accesses at a given overall width, which you want when you have a lot of cores working on different things.

The reason LPDDR6 went to 24 bit channels was simple math. They wanted the flexibility of additional bits to implement ECC, or whatever else the host might want, such as memory tagging. Theoretically they could have stayed with 16 bit channels and increased the burst length to BL17/BL34, but that creates its own set of problems. Those problems were avoided with 24 bit wide channels and BL24: the math works out much better and still provides the same 16 bits per 256 for host usage.
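
As a quick back-of-the-envelope on that burst math (my own illustrative arithmetic; how the spec actually divides each burst between data and host bits is an assumption here, not quoted from JEDEC):

```python
# Bits moved per burst = channel width x burst length.
def burst_bits(width_bits: int, burst_len: int) -> int:
    return width_bits * burst_len

# 16 bit channel stretched to odd burst lengths to squeeze in host bits:
print(burst_bits(16, 17))  # 272 = 256 data + 16 host bits, but BL17 is awkward
print(burst_bits(16, 34))  # 544 = 512 data + 32 host bits, but BL34 is awkward
# 24 bit channel with the round BL24 that LPDDR6 chose:
print(burst_bits(24, 24))  # 576 bits: a 512 bit (64 byte) payload plus spare bits
```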

Had they not wanted those host bits they would have stayed with 16 bits, because there is no difference between building e.g. an Apple M4 with 128 bit wide memory out of eight 16 bit controllers or out of four 32 bit controllers. There would be almost no difference in die space or power. But there would be a cost in how many independent loads/stores could be in progress at once, with only half as many possible on the system with 32 bit controllers.
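
To put toy numbers on the in-flight accesses point (a minimal sketch with made-up overhead figures, not a real memory controller model):

```python
# Same 128 bit total width split as 8 x 16-bit vs 4 x 32-bit channels.
# Peak bandwidth is identical; the narrow layout wins once each access
# carries a fixed per-request overhead that the extra channels can overlap.
def drain_cycles(n_requests: int, n_channels: int, width_bits: int,
                 overhead_cycles: int = 30) -> int:
    """Cycles to finish n_requests independent 64 byte reads."""
    beats = 512 // width_bits          # data transfers per 64 byte line
    busy = [0] * n_channels            # cycle at which each channel frees up
    for _ in range(n_requests):
        ch = busy.index(min(busy))     # ideal scheduler: pick the idlest channel
        busy[ch] += overhead_cycles + beats
    return max(busy)

print("8 x 16-bit:", drain_cycles(1000, 8, 16))  # (30+32)*1000/8 = 7750 cycles
print("4 x 32-bit:", drain_cycles(1000, 4, 32))  # (30+16)*1000/4 = 11500 cycles
```

With the overhead set to zero both layouts drain in the same time (identical peak bandwidth); the gap only appears because the eight narrow channels can hide more per-access latency behind one another.
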
SpudLobby

Senior member
Doug S said:
"Channel width is irrelevant in isolation. If you want x bandwidth you'll provide the necessary width via multiple controllers [...] with only half as many possible on the system with 32 bit controllers."
Yep. Narrower channels -> better bandwidth utilization

MadRat

Lifer
Cheap optical transceivers have unacceptable latency for main memory use. The universal standard connector for the next memory generation will be CAMM. Maybe there will be cheap low-latency transceivers by the DDR7 generation, but I doubt it.

Core interconnects will likely move to a wireless standard within the decade, and they are quite a bit more sensitive to timing. The same technology should be easily transferable to RAM. Wireless interconnects have been demonstrated connecting points up to 25mm apart. By using the timing of individual points and modulating the signal array, it's possible to pack these transmitters and receivers into tight, dense arrays. We are talking pretty insane switching speeds compared to memory rates, so this will not be an exotic solution.

It's about creating a future-proof standard that abstracts the physical memory type. I simply believe the solution being pushed for interconnects should be redirected at the memory bus. The industry would be turned on its head, imho, once memory abstraction occurs. But to me it's a natural focus of effort along the path of least resistance.
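
For scale, a rough sanity check on the 25mm figure (my own back-of-the-envelope numbers; the point is that time of flight at package distances is negligible, so any latency penalty lives in the transceivers themselves):

```python
# Time of flight over short link distances vs. typical DRAM latency.
C = 299_792_458  # speed of light in vacuum, m/s

def flight_ns(distance_m: float, velocity_factor: float = 1.0) -> float:
    """Propagation delay in nanoseconds at a fraction of c."""
    return distance_m / (C * velocity_factor) * 1e9

print(f"25 mm at c:    {flight_ns(0.025):.3f} ns")       # ~0.083 ns
print(f"25 mm at 0.6c: {flight_ns(0.025, 0.6):.3f} ns")  # ~0.139 ns
# For comparison, DDR5 CAS latency is on the order of ~14 ns.
```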