Since @FlameTail laugh-reacted to this, I'll spell it out for him.
Channel width is irrelevant in isolation. If you want x bandwidth you'll provide the necessary width via multiple controllers, so there is no benefit to the controller being wider. Indeed, there is benefit to it being narrower, which is why you saw DDR4 with a 64-bit wide channel -> DDR5 with 2x 32-bit wide channels -> DDR6 rumored to have 4 channels (which I'm betting will be 24 bits wide, because 16-bit wide channels create certain implementation difficulties for ECC). That allows more independent memory accesses at a given overall width, which you want when you have a lot of cores that are working on different things.
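To put rough numbers on that trend, here's a minimal sketch. The DDR4/DDR5 figures are the standard module layouts; the DDR6 line is just the rumor/bet above, not anything JEDEC has published.

```python
# Independent channels per standard module as channels get narrower.
# DDR4/DDR5 entries are the standard layouts; the DDR6 entry is the
# rumor/bet discussed above, not a finalized JEDEC spec.
modules = [
    ("DDR4", 1, 64),            # one 64-bit data channel per DIMM
    ("DDR5", 2, 32),            # two independent 32-bit data channels per DIMM
    ("DDR6 (rumored)", 4, 24),  # four channels; 24-bit width is the bet above
]

for gen, channels, width_bits in modules:
    print(f"{gen}: {channels} x {width_bits}-bit channels "
          f"-> {channels} channel-level independent accesses per module")
```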
The reason LPDDR6 went to 24-bit channels was simple math. They decided they wanted the flexibility of having additional bits to implement ECC or whatever else the host might want, such as memory tagging. Theoretically they could have stayed with 16-bit channels and increased the burst length to BL17/BL34, but that creates its own set of problems. Those problems were avoided with 24-bit wide channels and BL24. That math works out much better and provides the same 16 bits per 256 for host usage.
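Putting numbers on that burst math, here's a quick sketch. It only computes the raw width x burst-length products; how the non-payload bits actually get carved up between ECC, metadata/tagging, and host use is a spec/host decision I'm not asserting here.

```python
# Raw bits-per-burst for the alternatives mentioned above. The "spare"
# column is just total minus the largest power-of-two payload; how those
# spare bits are actually allocated (ECC, tagging, etc.) is up to the
# spec and the host, not this sketch.
def burst_bits(channel_width: int, burst_length: int) -> int:
    """Total bits moved by one burst on one channel."""
    return channel_width * burst_length

options = [
    ("16-bit channel, BL17", 16, 17),
    ("16-bit channel, BL34", 16, 34),
    ("24-bit channel, BL24", 24, 24),
]

for name, width, bl in options:
    total = burst_bits(width, bl)
    payload = 1 << (total.bit_length() - 1)  # largest power-of-two data payload
    print(f"{name}: {total} bits/burst = {payload} payload + {total - payload} spare")
```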
Had they not wanted those host bits they would have stayed with 16 bits, because there is essentially no difference in building e.g. an Apple M4 with 128-bit wide memory out of eight 16-bit controllers versus four 32-bit controllers: there would be almost no difference in die space or power. But there would be a cost in how many independent loads/stores could be in progress at once, with only half as many possible on the system with 32-bit controllers.
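As a concrete sketch of that last point, using the 128-bit total width from the M4 example above (the controller split is just the comparison being made here, not a statement about Apple's actual implementation):

```python
# Same 128-bit total width, different controller granularity. Aggregate
# bandwidth is the same; what changes is how many independent accesses
# can be outstanding at the channel level at any instant.
TOTAL_WIDTH = 128  # bits, per the M4 example above

for controller_width in (16, 32):
    controllers = TOTAL_WIDTH // controller_width
    print(f"{controllers} x {controller_width}-bit controllers: "
          f"{controllers} independent accesses in flight")
```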