Not according to Abwx. Of course, his source (SOI consortium, IIRC) is completely biased. Anyway, the point is that there are those who would argue otherwise.
The main issue is that it's absolutely useless for mobile. Not only are the bandwidth requirements lower, but there are too many power, area, and cost sacrifices that have to be made. You'd need four DIMMs, or the equivalent number of DDR3 ICs, to take advantage of it.
An L3 cache would be better. GDDR5M would be better (which AMD apparently pursued). More memory channels would technically solve the problem, while creating bigger ones.
This. The cost of adding more channels is HUGE: board area for placing additional memory chips, routing huge 64b buses across the board while accounting for the noise and power they create, extra pins/pads on the package, an extra memory controller per channel (these are huge, lumbering, and complex), ECC on those extra channels, increased complexity of the system agent and memory arbitration, and increased pre-silicon, post-silicon, functional, and performance validation time for the platform (this adds weeks to the release date, which can potentially cost millions of dollars). Have you ever wondered why it takes Intel over a year longer to release extreme editions (LGA2011)? It's because it takes that long to validate the hundreds of additional features supported by server-grade platforms, which include additional memory channels.
Huge impact on development schedule and costs, BOM, die area, and power, and for what, maybe double the memory bandwidth? How much more product would AMD or Intel stand to ship with a huge, power-hungry mobile SoC with twice the memory bandwidth? Not much, I would guess. Better off decreasing memory voltage and increasing frequency, improving cache size and source/dest segregation, and improving system agent arbitration while exposing arbitration priority to system and software developers, to help improve drivers and other software.
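To see why the voltage/frequency route beats extra channels on power, plug numbers into the standard dynamic switching relation P ∝ C·V²·f. A minimal sketch (the capacitance and voltage/frequency figures here are illustrative assumptions, not real DDR3 datasheet values):

```python
# Rough illustration of the dynamic-power argument: P ~ C * V^2 * f.
# Numbers are illustrative, not taken from any real DDR3 part.

def dynamic_power(c_eff, voltage, freq_mt):
    """Dynamic switching power, proportional to C * V^2 * f."""
    return c_eff * voltage**2 * freq_mt

# Baseline: dual-channel DDR3-1333 at 1.5 V
base = 2 * dynamic_power(1.0, 1.5, 1333)

# Option A: double bandwidth by doubling channels (same V, same f)
quad = 4 * dynamic_power(1.0, 1.5, 1333)

# Option B: double bandwidth by doubling transfer rate at a lower,
# DDR3L-style 1.35 V, keeping two channels
fast = 2 * dynamic_power(1.0, 1.35, 2666)

print(f"quad-channel power: {quad / base:.2f}x baseline")      # 2.00x
print(f"fast low-voltage power: {fast / base:.2f}x baseline")  # 1.62x
```

Same doubled bandwidth either way, but the higher-frequency/lower-voltage option costs roughly 1.6x the baseline power instead of 2x, and adds no pins, board area, or extra memory controllers.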
There are many different ways to increase GPU memory bandwidth, and adding more channels is one of the least efficient in terms of the metrics that matter for mobile SoCs. Moving from dual to tri or even quad channel is actually pretty straightforward technologically, so it's a quick and easy way to increase memory bandwidth on server or workstation platforms, where things like BOM, area, and power are trumped by the importance of performance.
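The channel-scaling math itself is trivial, which is the point: peak bandwidth is just channels × bus width × transfer rate. A quick sketch using standard DDR3-1600 figures (1600 MT/s over a 64-bit, i.e. 8-byte, bus per channel):

```python
# Back-of-envelope DDR3 peak bandwidth: channels * bus_bytes * transfer_rate.
# DDR3-1600 moves 1600 MT/s over a 64-bit (8-byte) bus per channel.

def peak_bandwidth_gbs(channels, mt_per_s=1600, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s (decimal gigabytes)."""
    return channels * mt_per_s * bus_bytes / 1000

for ch in (2, 3, 4):
    print(f"{ch}-channel DDR3-1600: {peak_bandwidth_gbs(ch):.1f} GB/s")
# 2-channel: 25.6 GB/s, 3-channel: 38.4 GB/s, 4-channel: 51.2 GB/s
```

Each added channel buys a clean 12.8 GB/s of theoretical peak; what it doesn't show is everything listed above: the extra controller, pins, routing, power, and validation time that each of those channels drags along.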