Some well-known game developers disagree with Intel. It's interesting that Intel even tries to get in on the graphics discussion, considering they have never made a good GPU in their lives and their current drivers/GUI are abysmal.
Eidos Montreal - Rise of the Tomb Raider
"Rise of the Tomb Raider uses async compute to render breathtaking volumetric lighting on Xbox One. Of all the rendering techniques used in the game, the most fascinating is its use of asynchronous compute for the generation of advanced volumetric lights. For this purpose, the developer has employed a resolution-agnostic voxel method, which allows volumetric lights to be rendered using asynchronous compute after the rendering of shadows, with correctly handled transparency composition."
Uncharted 4 by Naughty Dog:
Is that why arguably the best-looking game on PS4 is rumored to have the highest use of Asynchronous Compute of any PS4 game? (sources 1 and 2)
Intel's theories do not match reality. Intel might think there is a better way to make a DX12 game, but the reality is that using the AC engines on PS4 provides a huge boost in graphical capabilities.
Oxide clearly stated that they tried to use AC on NV hardware since this feature was exposed in the drivers, but when they did, performance was abysmal. OTOH, it helped improve performance on GCN products. What that tells us is that if Kepler/Maxwell had strong Asynchronous Compute engine(s), those GPUs would also have received an additional boost in performance for free!
Oxide has stressed that on the console side, the boost from AC on DX12 can be up to 30% on the GPU side vs. running a conventional DX11 path.
"Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware."
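The vendor-specific shutdown Oxide describes amounts to a capability gate: the driver advertises async compute as functional, but the engine checks the vendor ID and overrides it. A hypothetical sketch of that pattern (not Oxide's actual code - the PCI vendor IDs are real, but the function and constants are invented for illustration):

```cpp
#include <cstdint>

// Real PCI vendor IDs (as reported by the graphics adapter);
// everything else in this sketch is hypothetical.
constexpr uint32_t kVendorNvidia = 0x10DE;
constexpr uint32_t kVendorAmd    = 0x1002;

// Decide whether to take the async-compute rendering path.
// Per Oxide's account, the driver can report the feature as functional
// even when using it wrecks performance and conformance, so the vendor
// ID check overrides the driver's capability bit.
bool useAsyncCompute(uint32_t vendorId, bool driverReportsAsync) {
    if (vendorId == kVendorNvidia) {
        return false; // shut down the async path on NV hardware
    }
    return driverReportsAsync; // otherwise trust the driver's report
}
```

This is exactly the kind of "vendor specific code" Oxide means: no special fast path for anyone, just a switch that disables a feature where it misbehaves.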
More analysis is required on future brand-agnostic (non-GameWorks) DX12 games to see which path developers take with Asynchronous Compute engines and DX12. Right now it looks like NV got caught with its pants down advertising a supposedly more advanced DX12 feature set, when in reality the current NV hardware appears far less able to actually take advantage of DX12. No wonder NV is in full PR damage control with Oxide. That's why it's absurd that certain people have blatantly ignored actual GPU horsepower and VRAM capacity and instead spent the last 12 months trolling the forums about HDMI 2.0, 4K HEVC and Maxwell's DX12.1 code-path, while ignoring that in the sub-$270 bracket, GCN 1.0-1.2 has been wiping the floor with NV's budget cards at every price level in price/performance and VRAM capacity.
Anyone who went out of their way to recommend the GTX 960 over the 280X/290 during the last 12 months to save $50 will have their reputation on the line if more brand-agnostic DX12 games expose weaknesses in Maxwell. Certainly, the safest bet right now is GPU performance + VRAM capacity. In those metrics Maxwell loses to GCN in the sub-$270 space, so the choice of which GPU to pick should be easy for most gamers.
If more developers confirm this to be true, it is the worst GPU news of 2015: NV's Fermi/Kepler/Maxwell hardware, coupled with NV's 75%+ market share and PaidWorks, will mean the benefits Asynchronous Compute brings in terms of extra performance won't be realized for a while.
"Oxide effectively summarized my thoughts on the matter. NVIDIA claims “full support” for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching."
I want to see more DX12 games and interviews/feedback from the actual developers on this issue, though.
I can't wait for Pascal and AMD's 16nm HBM2 GPUs, because this generation has been nothing but a disaster of over-promising and under-delivering from both camps.
That would mean NV expects its owners to upgrade GPUs every new generation (every 2 years)? I have no problem with that if they are transparent about it, but instead they and their focus-group members kept hyping up DX12 and a supposedly more extensive feature set as somehow superior to GCN for DX12 games and more future-proof. Sounds like misleading marketing again if their hardware lacks one of the key features DX12 takes advantage of - Asynchronous Compute.