Uh, okay. None of that is true. We don't yet know what the memory controller of Vega can actually do, but AMD used a "simple" HBM controller in the Fury cards - which worked perfectly well - and Nvidia has something comparable in P100. A "simple" memory controller can perfectly well utilize all the bandwidth of the memory connected to it. That comes down to a) the bandwidth and speed of the controller (which, again, is a determinant of what memory ends up being connected to it, not the other way around), and b) the tasks given to it. Or are you saying Nvidia's GDDR5/5X controllers can't utilize the full bandwidth of those either?
I'm a big supporter of AMD's decision to go HBM2 with Vega - it has a ton of advantages - but it's very clear that Nvidia's choice of GDDR5X for high-end consumer cards stems from a combination of cost, availability, GDDR5X being "close enough" in performance, and Nvidia's GPU architecture efficiency advantage. Why? Nvidia sells far more GPUs than AMD, so they'd need a much healthier supply of HBM. This doesn't exist, and even if it did, costs at higher volumes would be huge. They can fit the 20+W of an 8GB+ GDDR5X setup within reasonable board TDPs due to their (for now) superior GPU efficiency. And, as the 1070, 1080 and 1080 Ti show, GDDR5X is no slouch. Would HBM have been better? Probably, but it would significantly cut into Nvidia's margins (or force prices higher) while giving them a small advantage (over themselves?) at best.
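To put "no slouch" in numbers, here's a back-of-the-envelope sketch of theoretical peak bandwidth (bus width in bytes times effective data rate). The bus widths and data rates below are the commonly quoted nominal figures for these cards, not measurements, and say nothing about how well a given controller sustains that peak:

```python
def peak_bw_gbps(bus_width_bits, data_rate_gbps):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# R9 Fury X: HBM1, 4096-bit bus at 1 Gbps effective per pin
print(peak_bw_gbps(4096, 1.0))   # 512.0 GB/s
# GTX 1080: GDDR5X, 256-bit bus at 10 Gbps effective per pin
print(peak_bw_gbps(256, 10.0))   # 320.0 GB/s
# GTX 1080 Ti: GDDR5X, 352-bit bus at 11 Gbps effective per pin
print(peak_bw_gbps(352, 11.0))   # 484.0 GB/s
```

So a wide-bus GDDR5X setup lands in the same ballpark as first-generation HBM on paper - the differences are in power, board area, and cost, not raw peak numbers.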
While the Vega memory controller sounds innovative and might be ground-breaking, it certainly isn't the sole reason why AMD went HBM2. After all, they made HBM GPUs well before a controller like that existed.