parvadomus
Senior member
- Dec 11, 2012
If anything this proves the Hawaii GPU's command processor is faster than GM200/GM204's, and that AMD's DX11 drivers suck.
Both synthetics so far (this and Star Swarm) are specific to the cases these benchmarks aim to study; they're NOT indicative of games, because we don't know which features game engines will push or focus on.
There's no way someone is going to make games with many millions of draw calls per second. Think about that a bit.
What you CAN extrapolate is that CPU usage will be lower across the board in DX12, which means lower total system power for equivalent workloads.
PS: It is foolish to use synthetics to beat your hated vendor with, just as it was when Star Swarm was showcased.
Would a game like Eve use enough draw calls to bring out the difference between AMD and nVidia in draw call performance? There have been some huge battles in that game.
NOTE: This is not debating the like/dislike of Eve. Just using it as an example to ask the question.
Nope, far from it. Eve is limited by the server rather than by draw calls. It uses very aggressive LOD to cull small details on objects and ships beyond close range; you won't even see individual turrets or glows/particles on distant ships, just a blip, basically.
Star Swarm is pretty close to being a real game. It was a tech showcase for the game they're actually making.
http://www.ashesofthesingularity.com/
It's far from being a game. It pretty much shows an ugly pile of blurred crap to reach that number of draw calls.
If I had to guess, I don't think draw call usage will be more than 2-3x that of DX11 over the next couple of years, simply because there isn't enough GPU power to justify it without turning into quantity over quality.
One simple example of a massive increase in draw calls: individual particles with full physics and lighting simulation via DX12 compute. Lots and lots of them. Plenty of draw calls, so developers can achieve their vision without the CPU being the bottleneck.
Have a look at the recent Star Citizen demo; they are doing crazy stuff with their damage simulation using physics and compute. Mantle/DX12 open up an entire new world of creativity to game devs, imo.
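To put rough numbers on the idea (purely hypothetical counts, sketched in Python): whatever is drawn individually per frame gets multiplied by the frame rate, so even modest per-frame counts turn into per-second throughput the API has to sustain.

```python
# Toy arithmetic (all numbers hypothetical): per-frame draw call counts
# translate into per-second API throughput.

def calls_per_second(draws_per_frame: int, fps: int) -> int:
    """Draw calls the CPU must submit each second."""
    return draws_per_frame * fps

# e.g. 50,000 individually submitted particles/objects per frame at 60 fps
print(calls_per_second(50_000, 60))  # → 3000000 calls/s
```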
The GPU also needs to be able to keep up. Else you just exchange one bottleneck for another.
That's why there's Asynchronous Compute, the key part in DX12 that's going to make such creativity possible without crippling rendering performance.
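A minimal sketch of the idea behind async compute, with made-up millisecond costs: run serially, compute work adds to frame time; submitted on a separate compute queue it can, in the ideal case, hide behind the graphics work.

```python
# Toy frame-time model (made-up millisecond costs, idealized scheduling).

def serial_frame_time(gfx_ms: float, compute_ms: float) -> float:
    """Compute runs after graphics on the same queue: costs add up."""
    return gfx_ms + compute_ms

def async_frame_time(gfx_ms: float, compute_ms: float) -> float:
    """Idealized async compute: the compute queue fully overlaps graphics."""
    return max(gfx_ms, compute_ms)

print(serial_frame_time(12.0, 5.0))  # → 17.0 ms
print(async_frame_time(12.0, 5.0))  # → 12.0 ms
```

Real overlap depends on the GPU actually having idle execution units to spare, so the max() case is a best-case bound, not a guarantee.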
Also, a lot of games now don't even fully utilize all the GPU's resources. When was the last time you saw a game max out GPU utilization?
Finally with 2 cores many of our configurations are CPU limited. The baseline changes a bit – DX11MT ceases to be effective since 1 core must be reserved for the display driver – and the fastest cards have lost quite a bit of performance here. None the less, the AMD cards can still hit 10M+ draw calls per second with just 2 cores, and the GTX 980/680 are close behind at 9.4M draw calls per second. Which is again a minimum 6.7x increase in draw call throughput versus DirectX 11, showing that even on relatively low performance CPUs the draw call gains from DirectX 12 are substantial.
Can you please explain how that can be? I thought the main advantage of the new APIs was spreading the workload across all CPU cores (instead of one in DX11). If so, shouldn't the performance double in 2-core mode? Why is there a 6.7x increase in draw calls instead of 2x?
I know the new APIs also offer more direct addressing of the GPU, with less CPU involvement, but this test is about draw calls requested from the CPU to the GPU. How can we boost the number of draw calls apart from using additional CPU cores?
Lower overhead per call.
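One way to picture "lower overhead per call" (per-call CPU costs below are hypothetical, chosen only so the ratio lands near the 6.7x figure quoted above): throughput on a single core is just the inverse of the CPU time each call costs, so multithreading isn't needed to explain gains well beyond 2x.

```python
# Toy single-core throughput model (per-call costs are hypothetical).

def calls_per_core(per_call_us: float) -> float:
    """Draw calls one CPU core can submit per second."""
    return 1_000_000 / per_call_us

dx11 = calls_per_core(2.0)   # assume ~2.0 us of CPU work per DX11 call
dx12 = calls_per_core(0.3)   # assume ~0.3 us per DX12 call
print(f"{dx12 / dx11:.2f}x")  # → 6.67x
```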
The benefit isn't just better multithreading but also lower overhead.
If each draw call in DX12 takes less time to execute than it does in DX11, then you will get more draw calls even on a single-core CPU.
Thanks, it's getting clearer. Do you know what causes the lower overhead (less time to execute)? Using low-level commands?
Noctifer616, thanks a lot, that really does make sense. I wonder what prevented MS and Khronos from developing such APIs for PC before, as they did for game consoles.
Because it's incredibly risky/difficult. Only a limited number of people can write low-level code like that correctly and not shoot themselves in the foot in some manner. You basically need to be a guru-level programmer to properly handle D3D12.