Not showing the same results as AOTS. How... unexpected.
AMD's cards do seem to cope a little better comparatively speaking. That said, the 980 Ti doesn't seem to suffer at all, and even Nvidia's lower-end cards don't experience the GeForce FX-style implosion that many of AMD's more... enthusiastic fans have spent the last few months assuring us would happen.
Railven, do you remember when Oxide said UE4 may not even use async compute?
I find this situation fascinating.
https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html
I wonder if Lionhead Studios, who claim async compute is "free performance on GCN", only meant it for the Xbone. Given it's an MS-sponsored title, they would care a great deal about Xbone performance.
DX12 has a lot of great features; which GPU has the advantage will depend entirely on which features are used. There's no point generalizing about all DX12 games as if they were the same.
An RTS with a heavy focus on draw calls & AI doesn't behave like a scenery run-through?
Who would have thought? Different games will be... different.
Interestingly, the official UE4 dev documentation says Async Compute (the functionality was added to UE4 by Lionhead Studios) is only functional on Xbone, not on PC. Wonder what's going on there?
ps. This is a damn amazing result for AMD (even on unoptimized drivers, re: AT's article), other UE4 games have the 970/980 destroying 390X and Fury X.
Test Results
We observed a higher-than-usual amount of variation between benchmark runs on both AMD and Nvidia hardware and adjusted our testing methodology to compensate. We ran the benchmark 4x on each card, at each quality preset, but threw out the first run in each case. We also threw out runs that appeared unusually far from the average — AMD and Nvidia both had at least one 720p run that returned a result of ~115 FPS, for example.
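The filtering described above can be sketched as a few lines of Python. The function name, the example FPS numbers, and the 10% deviation cutoff are all my own assumptions for illustration; the article does not state the exact threshold the reviewers used to decide a run was "unusually far from the average."

```python
def filter_runs(runs, tolerance=0.10):
    """Apply the methodology described above: discard the first (warm-up)
    run, then discard any remaining run whose result deviates from the
    mean of the kept runs by more than `tolerance` (an assumed 10% cutoff;
    the article does not give the actual threshold)."""
    kept = runs[1:]  # throw out the first run in each case
    mean = sum(kept) / len(kept)
    return [r for r in kept if abs(r - mean) / mean <= tolerance]

# Hypothetical set of four runs where one outlier came back near 115 FPS
runs = [92.0, 98.5, 115.0, 99.1]
print(filter_runs(runs))  # the 115 FPS outlier is discarded
```

The reported score would then be the average of whatever survives the filter.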
This raises the question of why this Fable benchmark advertises itself as using async compute when UE4 isn't capable of it on PC and the feature isn't yet supported in NV's drivers.
Async is working in this benchmark (Dynamic Global Illumination / Compute Shader Simulation & Culling) for all GCN cards and also for Maxwell 2. The Nano can run those algorithms more than twice as fast with async shaders.
My console comment wasn't entirely tongue-in-cheek. There is clearly a huge benefit to GCN at lower resolutions, akin to console resolutions. I'd be interested to see them explain why AMD sees such huge gains at lower resolutions.
!
Nvidia is CPU-limited at that resolution...
So AMD doesn't have a massive DX12 advantage? The results look pretty normal; I've only looked at AnandTech so far.
I'm more interested in seeing how this translates into graphics and physics quality in games with DX12 than in which vendor does better, though.
Next time, please provide a better excuse.
Where does it say that? Anand's rendering sub-system breakdown shows GCN up to 3 times slower than Maxwell in GI.
Bleh so much for AMD crushing Nvidia in DX12. I would think after being proved wrong so many times, people would stop buying into the AMD marketing slides.