Does the Vega desktop implementation have unified memory with the CPU? Nope. That pretty much means memory is not shared, coherency is not maintained, and traffic has to go through PCIe. Period.
And I was referring to claims that "insert buzzword here" would magically expand the frame buffer. Comparison with the GTX 970's memory management scheme and its successes/failures is very much on topic here.
Non-volatile RAM, network storage, system DRAM. That is what HBCC connects to.
It goes through PCIe and is shared (this is what the 49-bit addressing enables), and coherent, within the limits of the available bandwidth. There is a reason the total address space available to Vega is 512 TB of data.
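The 512 TB figure falls straight out of the 49-bit address width; a quick sanity check:

```python
# A 49-bit address space covers 2^49 bytes.
address_bits = 49
bytes_addressable = 2 ** address_bits
tib = bytes_addressable // 2 ** 40  # convert bytes to TiB
print(f"2^{address_bits} bytes = {tib} TiB")  # → 512 TiB
```

So the "512 TB" marketing number is simply the whole virtual address space HBCC can map, not memory physically attached to the card.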
Vega GPUs exist not only as dGPUs but also as APUs. But that is not the question here.
Let's say we are talking about a 3072-core GCN chip with 4 GB of HBM2 at 512 GB/s. This GPU will be perfectly capable of 4K, because its framebuffer is no longer wasted on unused data that had to stay resident under the previous driver and framebuffer models. On top of that, the culling techniques used in Vega save enormous amounts of video RAM, and the memory compression techniques add to this as well.
Both ways of looking at this are correct: a 4 GB GPU that has much more "usable" framebuffer than another 4 GB GPU with only 1-2 GB usable at any given moment, and a 4 GB GPU with an effective 12 GB framebuffer, because the part not held in GPU memory chips can be treated as volatile. AMD did this to make GPU memory more usable and to avoid refreshing pages every time the data changed, which wasted power and cycles. Now everything is done "when it is needed"; the GPU architecture is more reactive.
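The "VRAM as a cache over a bigger pool" idea behaves like ordinary demand paging. Here is a toy sketch of that behavior (the class name and page granularity are illustrative, not AMD's actual interface): pages fault in over PCIe on first touch, and the least recently used resident page is evicted when local memory fills up.

```python
from collections import OrderedDict

class HbccSketch:
    """Toy model: local HBM2 as an LRU-managed cache over a much
    larger address space (system DRAM / NVRAM / network storage)."""

    def __init__(self, vram_pages):
        self.vram_pages = vram_pages    # pages that fit in local memory
        self.resident = OrderedDict()   # page -> data, kept in LRU order

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)    # hit: mark most recently used
            return "hit"
        if len(self.resident) >= self.vram_pages:
            self.resident.popitem(last=False)  # evict least recently used
        self.resident[page] = object()         # fault: fetch on demand
        return "fault"
```

The point of the model: only the working set occupies local memory, so a 4 GB card can address a dataset far larger than 4 GB and pay the PCIe cost only on actual faults.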
The second part, strictly from a development perspective, is that the new architecture will be much simpler to program and optimize. Ask developers how painful it was to optimize software for AMD GPUs and have them compare it to Nvidia architectures: it's night and day between GCN and NCU. The biggest struggle with GCN came from memory management, because the Pixel Engine was a client of the memory controller, not of the L2 cache as on Nvidia architectures. Vega is similar on this front to Kepler/Maxwell/Pascal. The old arrangement also cost power (page refreshing) and cycles; Vega will handle it automatically. The only thing devs will have to custom-tune on Vega is the Primitive Shader path: they will have to tell the GPU what is visible and what is not, and the GPU will cull the rest, saving resources, memory, and cycles.
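The kind of visibility test a primitive shader can run early can be sketched as a simple back-face cull; this is a minimal geometric illustration of discarding triangles before they consume bandwidth, not AMD's actual primitive-shader interface:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def is_front_facing(v0, v1, v2, view_dir=(0, 0, -1)):
    """Back-face test: a triangle whose normal points away from the
    viewer can be culled before rasterization, saving shading work,
    memory traffic, and cycles."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    return dot(normal, view_dir) < 0

# Counter-clockwise winding (seen from the camera) survives;
# the reversed winding is back-facing and gets culled.
print(is_front_facing((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # → True
print(is_front_facing((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # → False
```

In hardware the equivalent decision happens per primitive, early in the pipeline, which is exactly where the memory and cycle savings described above come from.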
And that's just the tip of the iceberg of how the memory architecture works in Vega.