I thought the implication of LPDDR6 increasing channel width was that memory bus widths are going to increase across the board, not DECREASE.
Low End Mobile SoCs: 32-bit -> 48-bit
High End Mobile SoCs: 64-bit -> 96-bit
Low End Laptop SoCs: 64-bit -> 96-bit
Mainstream Laptop SoCs: 128-bit -> 192-bit
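If it helps, the arithmetic behind that list is just channel width: LPDDR5X channels are 16 bits wide, LPDDR6 channels are 24 bits, so the same channel count gives 1.5× the bus. A quick Python sketch (the channel counts per tier are my inference from the widths above, not confirmed configs):

```python
# The 1.5x comes from channel width alone: 16-bit LPDDR5X channels
# vs 24-bit LPDDR6 channels, same channel count per tier.
# Channel counts below are inferred from the bus widths above.
LPDDR5X_CHANNEL_BITS = 16
LPDDR6_CHANNEL_BITS = 24

tiers = {
    "Low End Mobile": 2,
    "High End Mobile": 4,
    "Low End Laptop": 4,
    "Mainstream Laptop": 8,
}

for tier, channels in tiers.items():
    old = channels * LPDDR5X_CHANNEL_BITS
    new = channels * LPDDR6_CHANNEL_BITS
    print(f"{tier}: {old}-bit -> {new}-bit")
```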
If we focus specifically on smartphone SoCs, let's say the Snapdragon 8 Gen 5 supports LPDDR6 and downgrades the memory bus from 64-bit to 48-bit.
Snapdragon 8 Gen 4
= 64-bit × LPDDR5X-9600
= 76.8 GB/s

Snapdragon 8 Gen 5
= 48-bit × LPDDR6-10667
= 64 GB/s × (100-11)% [subtracting the ~11% of each transfer that LPDDR6 reserves for metadata]
= ~57 GB/s
So bandwidth would actually go down gen-on-gen, and by a huge amount. That is clearly unacceptable and will never happen.
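For anyone who wants to plug in other configurations, here's the same math as a small Python helper. The 32-of-288-bits metadata ratio is my reading of where the ~11% comes from; the 8 Gen 5 config is hypothetical, as above:

```python
# Peak bandwidth in GB/s: (bus width in bytes) x (transfer rate),
# optionally minus the fraction of transfers spent on metadata.
def effective_bandwidth_gbps(bus_bits, mtps, metadata_frac=0.0):
    raw = (bus_bits / 8) * mtps / 1000  # MT/s -> GB/s
    return raw * (1 - metadata_frac)

# Snapdragon 8 Gen 4: 64-bit LPDDR5X-9600
print(effective_bandwidth_gbps(64, 9600))             # 76.8
# Hypothetical 8 Gen 5: 48-bit LPDDR6-10667,
# assuming 32 of every 288 bits carry metadata (~11%)
print(effective_bandwidth_gbps(48, 10667, 32 / 288))  # ~56.9
```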
Actually, these smartphone SoCs will need significantly more bandwidth because of the push for on-device AI. The Snapdragon 8 Gen 3 already has a 45 TOPS NPU (yes, the same one as the X Elite; source: Revegnus). The 8 Gen 5 is probably going to have double that.
You guys know how critical memory bandwidth is for AI workloads. Unlike a CPU or GPU, you can't sate an NPU by throwing more cache at it: NPUs stream gigabytes of model weights from RAM, and that can't fit in megabyte-scale on-die caches.
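To put a number on it: during LLM decoding, every generated token has to stream roughly all of the model's weights from RAM, so token rate is capped at about bandwidth ÷ model size. A rough sketch with illustrative (not measured) numbers:

```python
# Decode-speed ceiling: each generated token reads ~all weights from
# RAM once, so tokens/s <= bandwidth / model size.
# Illustrative numbers, not measurements.
params_billion = 7       # hypothetical 7B on-device model
bytes_per_param = 0.5    # 4-bit quantization
model_gb = params_billion * bytes_per_param  # 3.5 GB of weights

for name, bw in [("76.8 GB/s (8 Gen 4)", 76.8),
                 ("57 GB/s (hypothetical 48-bit 8 Gen 5)", 57.0)]:
    print(f"{name}: at most ~{bw / model_gb:.0f} tokens/s")
```

Cache can't rescue you here: 3.5 GB of weights is three orders of magnitude bigger than any on-die cache, so every token pays the full trip to RAM.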
It's not just smartphones. There is a push for AI, particularly on-device, throughout the industry. Microsoft recently unveiled their Copilot+ PC standard (terrible name, btw) with a minimum requirement of 40 TOPS. They are rumoured to raise that to ~100 TOPS for the next generation of AI PCs (2025/2026). There will need to be a significant bandwidth improvement to feed those huge NPUs.
That's why I believe the industry came together at JEDEC and jointly decided to increase the channel width by 50%.