I would like to ask, does anyone of you know if Micron, Hynix, Samsung or anyone developed 8-Hi stacks of HBM1?
As you can see: Tech Shrink.
That picture is a fake.
I would like to ask, does anyone of you know if Micron, Hynix, Samsung or anyone developed 8-Hi stacks of HBM1?
Only Hynix makes HBM1 and they gave up on 8Hi stacks.
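For anyone curious what 8-Hi would have bought: stack capacity is just die density times stack height. A minimal sketch, assuming 2 Gbit dies (the density Hynix shipped for HBM1 is an assumption here, stated for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Capacity of an HBM stack from die density (in Gbit) and stack height.
// With 2 Gbit dies, a 4-Hi stack is 1 GB; an 8-Hi stack would be 2 GB.
constexpr uint64_t stack_capacity_bytes(uint64_t die_gbit, unsigned stack_height) {
    return die_gbit * stack_height * (1ULL << 30) / 8;  // Gbit -> bytes
}
```

So a canceled 8-Hi HBM1 stack would simply have doubled capacity per stack, which is exactly what the 4GB-limited Fiji cards could have used.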
Of course specific uarch paths are needed with DX12. That's the same reason the fabled AOTS won't run on an Intel IGP: there is no Intel path.
So I assume you finally admitted that DX12 requires specific optimization for every single uarch, even down to individual SKUs. And it's a perfect place to please your sponsor.
Just wait till you see new uarchs with no patches; that will be a fun thing for older games. GCN 1.1 owners can rejoice over any console ports, but GCN 1.0, 1.2, 1.3, Kepler, Maxwell, Pascal, Gen8, Gen9, and Gen10 owners can only pray to the dev gods for every game and hope they get lucky.
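To make the "per-uarch path" point concrete, this is roughly the kind of branching an engine's D3D12 backend has to do. The PCI vendor IDs are real (0x1002 AMD, 0x10DE NVIDIA, 0x8086 Intel), but the RenderPath buckets and choose_path() are hypothetical illustrations, not any shipping engine's API; a real engine would read these IDs from DXGI_ADAPTER_DESC after enumerating adapters and would likely also inspect the device ID per SKU:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical per-uarch render paths; an unknown uarch gets an untuned
// fallback, which is exactly the "pray to the dev gods" case above.
enum class RenderPath { GcnAsyncCompute, MaxwellSerial, GenericFallback };

RenderPath choose_path(uint32_t vendor_id) {
    switch (vendor_id) {
        case 0x1002: return RenderPath::GcnAsyncCompute;  // AMD: lean on async compute
        case 0x10DE: return RenderPath::MaxwellSerial;    // NVIDIA: serialize compute work
        default:     return RenderPath::GenericFallback;  // untested uarch: safe path
    }
}
```

The problem the post describes is the default case: a future uarch launches, the game is no longer patched, and every new GPU falls into the untuned fallback forever.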
Tonga and Fiji had visual artifacts and worse performance than Hawaii and Tahiti in Gears on day 1. That can only mean there is a difference in optimizations.
Do you have a source for this?
Also what would the point of it be?
Also worth noting, here: http://www.stonearch.net/forums/showpost.php?p=255569&postcount=438

Anandtech said: "If, for some reason, an ASIC developer believes that Pseudo Channel mode is not optimal for their product, then HBM2 chips can also work in Legacy mode."
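A quick arithmetic sketch of what the two quoted modes mean, under my understanding of the JESD235 spec (the widths and burst lengths below are my assumption, so verify against the standard): Legacy mode runs a channel 128 bits wide at burst length 2, while Pseudo Channel mode splits it into two 64-bit pseudo channels at burst length 4. Either way each access moves the same number of bytes; what changes is access granularity and parallelism, not transfer size:

```cpp
#include <cassert>

// Bytes moved per access for a given bus width (bits) and burst length.
// Legacy mode:         128-bit channel, BL2  -> 32 bytes per access
// Pseudo Channel mode:  64-bit channel, BL4  -> 32 bytes per access
constexpr int bytes_per_access(int bus_bits, int burst_length) {
    return bus_bits / 8 * burst_length;
}
```

So an ASIC vendor choosing Legacy mode isn't giving up per-access bandwidth; they're trading the extra channel-level parallelism of pseudo channels for simpler, HBM1-like behavior.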
Yes, and they fixed it in a week or so; the 380 spanks the 960 after the optimizations, and likewise for the other SKUs.
This really shows uarch-specific optimization can be done quite quickly in DX12. Good news for gamers.
Instead they now have to use a hotfix solution with 2 engineers.
Source on the two engineers? You always say two, so I assume that's from somewhere. Thank you in advance.
Ryan Smith said: "Which is why for Fiji, AMD tells us they have dedicated two engineers to the task of VRAM optimizations. To be clear here, there's little AMD can do to reduce VRAM consumption, but what they can do is better manage what resources are placed in VRAM and what resources are paged out to system RAM. Even this optimization can't completely resolve the 4GB issue, but it can help up to a point. So long as a game isn't actively trying to use all 4GB of resources at once, then intelligent paging can help ensure that only the resources that are actively in use reside in VRAM and therefore are immediately available to the GPU when requested."
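The kind of paging Ryan Smith describes can be sketched as a simple LRU residency manager. Nothing here is AMD's actual driver code; the class and its names are purely illustrative. Resources touched recently stay in VRAM, and when a new resource would exceed the budget, the least recently used ones are evicted (paged out to system RAM) first:

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>

// Illustrative LRU residency manager: keeps actively used resources
// within a fixed VRAM budget, evicting the least recently used first.
class ResidencyManager {
public:
    explicit ResidencyManager(uint64_t budget) : budget_(budget) {}

    // Mark a resource as used this frame; evict LRU entries if over budget.
    // Re-touching an already-resident resource assumes its size is unchanged.
    void touch(int id, uint64_t size) {
        if (auto it = index_.find(id); it != index_.end()) {
            lru_.erase(it->second);  // already resident: move to front below
        } else {
            used_ += size;
        }
        lru_.push_front({id, size});
        index_[id] = lru_.begin();
        while (used_ > budget_ && lru_.size() > 1) {
            auto [old_id, old_size] = lru_.back();  // least recently used
            lru_.pop_back();
            index_.erase(old_id);
            used_ -= old_size;  // conceptually paged out to system RAM
        }
    }

    bool resident(int id) const { return index_.count(id) != 0; }

private:
    uint64_t budget_, used_ = 0;
    std::list<std::pair<int, uint64_t>> lru_;
    std::unordered_map<int, std::list<std::pair<int, uint64_t>>::iterator> index_;
};
```

This also shows why the quote says the trick only works "so long as a game isn't actively trying to use all 4GB at once": if the working set itself exceeds the budget, the manager is forced to evict resources that are still hot.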
Well, I'd be interested in a Polaris 11/GTX 1070 at ~110 watts, with 2x the performance of my overclocked GTX 960 (overclocked GTX 980 performance) at ~$300.
I think Polaris 10/GTX 1080 might be out of the price range I'm comfortable paying for a card.
Think it's possible?
Otherwise I might as well grab a G1 Gaming GTX 970, overclock it (182 watts) to 980 speeds for ~$240, and eat the extra 70 watts.
A 182-watt card is at toaster-oven levels, not quite a space heater. :thumbsup: j/k
We have a rather optimal case: GPUs that are currently selling and a developer with a vested interest in showing off DX12 on a game that just launched. Yet the issue still occurred, even though the game was tested and supported prior to launch?
What happens in the 2018 time frame, when another set of GPU architectures gets added into the mix, combined with games possibly 1yr+ post-release? What will the real support situation be?
I'm surprised this potential issue with low-level APIs isn't getting more attention.