Ashes of the Singularity User Benchmarks Thread

Page 3
Feb 19, 2009
10,457
10
76
What proof or evidence do you have that Maxwell incurs a performance hit for using asynchronous compute?

If you want proof you have to talk to game developers who are experienced with DX12. I am merely summing up the discussions on this topic from their statements on tech forums.

The proof will come when DX12 games are released, or when NV releases fancy DX12 demos showcasing their hardware and has game devs come out and praise how great Maxwell is for DX12. You have to question why this hasn't occurred already; DX11 with its tessellation feature was showcased with much fanfare!

PS: Star Swarm is draw-call heavy, which is one aspect of DX12, similar to the 3DMark DX12 synthetic. The other aspect is async compute & shading.
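To make those two aspects concrete, here is a minimal D3D12 sketch (purely illustrative, nothing to do with Oxide's actual code): draw calls are submitted to the direct queue, while a second, compute-type queue is what "async compute" refers to, since work submitted there can overlap with graphics when the GPU supports it.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter; feature level 11_0 is the minimum DX12 accepts.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Graphics (direct) queue: the draw calls go here.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute queue: dispatches submitted here may execute concurrently
    // with the graphics queue; this is the "async compute" aspect.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // In a real engine an ID3D12Fence would synchronise the two queues
    // wherever the frame actually depends on the compute results.
    return 0;
}

Whether those two queues actually run concurrently on a given GPU is exactly the hardware question being argued about in this thread.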
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Maxwell 2 is clearly good DX11 hardware

AMD isn't exactly slaying NVidia in AotS if you haven't noticed. The performance is still relatively close, despite the game being clearly in favor of AMD and in alpha state.
 
Feb 19, 2009
10,457
10
76
AMD isn't exactly slaying NVidia in AotS if you haven't noticed. The performance is still relatively close, despite the game being clearly in favor of AMD and in alpha state.

Indeed, it's still in an alpha state.

Did you know Ashes runs with v-sync forced on in DX12 mode for AMD GPUs?

http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested

One thing that became clear after our testing as we studied video captures is that the AMD card would only run at DX12 with v-sync enabled (all the other benches are run with v-sync off - standard benchmarking practise). So this limits top-end performance on the R9 390 to 60fps, though in practise it rarely reaches this limit. Results will also take a very slight hit as in many places the GPU will be waiting for the next display refresh before it draws the next frame. We've reached out to Oxide about this, but in the meantime, to make a meaningful comparison, we re-benched all of our AMD tests with v-sync enabled to match the presentation we were getting from DX12.

Funny to see the R9 290X matching the 980Ti. The normal performance gap is what, 40-50%?

[benchmark charts]

Frame times:

[frame time charts]
As for NV's PR statements, they released an optimized driver for this game while it's still in alpha, so it definitely shows that Oxide has been collaborating with them. Why did they feel the need to spite Oxide for making a DX12 game that's "not representative of DX12 games"? Really, Oxide has been one of the foundational groups pushing DX12, featured at GDC and even SIGGRAPH events about these new APIs: http://nextgenapis.realtimerendering.com/
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
I'm more pissed at the AMD DX11 perf; GameWorks seems to be doing evil things again.
 
Feb 19, 2009
10,457
10
76
I'm more pissed at the AMD DX11 perf; GameWorks seems to be doing evil things again.

You are confused. There's no GameWorks in this title.

As for why you would care about AMD's DX11 performance in this game: their entire GCN stack is DX12-capable, and all of it shows major gains.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
You are confused. There's no GameWorks in this title.

As for why you would care about AMD's DX11 performance in this game: their entire GCN stack is DX12-capable, and all of it shows major gains.

Yes, that was the point. For years people have been blaming GameWorks for the awful AMD card perf in DX11 games; now we can realise they are just crap at DX11... and we have a total of 0 DX12 games at this point, so people should be pissed about that.

As for DX12, we need to wait. One thing would be to have bad scaling, like what the 3DMark DX12 test has been showing; a different thing entirely is to have negative scaling, and that's just not possible.
And in the same benchmark an FX-8370 gets its ass kicked by an i3; that should not be possible either.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Yes, that was the point. For years people have been blaming GameWorks for the awful AMD card perf in DX11 games; now we can realise they are just crap at DX11... and we have a total of 0 DX12 games at this point, so people should be pissed about that.

As for DX12, we need to wait. One thing would be to have bad scaling, like what the 3DMark DX12 test has been showing; a different thing entirely is to have negative scaling, and that's just not possible.
And in the same benchmark an FX-8370 gets its ass kicked by an i3; that should not be possible either.

Years? Why be so hyperbolic?
 
Feb 19, 2009
10,457
10
76
Yes, that was the point. For years people have been blaming GameWorks for the awful AMD card perf in DX11 games; now we can realise they are just crap at DX11...

In this game.

Look at recent benchmarks: AMD's lineup offers very competitive performance in DX11 games, including GimpWorks titles like Watch Dogs, FC4, ACU, Witcher 3, etc.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
AMD never released a driver for this, so I suspect it's just not optimized. They should, for the sake of those who don't upgrade; probably once the game is released in full form.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Current gen at the time? The 5870 vs the 480. The 480 killed it in tessellation, in benchmarks like Unigine & TessMark (which were out before games). NV also hyped up tessellation to the max, even making multiple demos to showcase its usage (compare that to the relative quiet re: DX12!). IIRC, in games like Stalker & Metro, among the first DX11 games to use tessellation, the 480 had an advantage.

AMD instead focused on DirectCompute; we can see them investing in features for games that use it, like deferred lighting & global illumination. So which games ran better on what hardware came down to whether they were tessellation heavy (Crysis 2!) or DirectCompute heavy (Dirt Showdown), etc.

In this context, DX12 brings a low-level API with more thread support as well as lower overhead. This should in theory benefit everyone, but AMD more, since their DX11 path is crippled by single-threaded submission and the inability to utilize the ACEs. The other features touted are async compute & shading. So it will depend on the game and the features used.

Ashes has both high draw call counts and async compute/shading usage. The devs mention the use of async compute for lighting, their "thousands of dynamic lights", etc.
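A minimal sketch of the "more thread support" part (mine, not Oxide's; it assumes a device already exists and leaves out pipeline state, root signature and fencing): each worker thread records draw calls into its own command list, and the whole frame is handed to the direct queue in one batch, which is the kind of draw-call throughput DX11's single immediate context can't match.

#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Each worker thread gets its own allocator + command list, so recording
// needs no locks. Real code would reuse allocators across frames.
void RecordDrawCalls(ID3D12Device* device,
                     ComPtr<ID3D12CommandAllocator>& alloc,
                     ComPtr<ID3D12GraphicsCommandList>& list)
{
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&alloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              alloc.Get(), nullptr, IID_PPV_ARGS(&list));
    // ... list->SetPipelineState(...), list->DrawInstanced(...), etc. ...
    list->Close();
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* directQueue)
{
    const int kThreads = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread> workers;

    // Record in parallel: this is where a draw-call-heavy game gains
    // on the CPU side relative to DX11's single immediate context.
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back(RecordDrawCalls, device,
                             std::ref(allocs[i]), std::ref(lists[i]));
    for (auto& w : workers)
        w.join();

    // Submit everything to the graphics queue in one call.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    directQueue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}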

So correct me if I'm wrong, because I'm trying to understand where you're coming from.

Your idea is that because DX12 is a low-level API, it should be a more level playing field between the AMD and Nvidia chips, and the AMD chips will shine since they were crippled by their DX11 drivers, right?
--------
Second part, if I'm right about your thoughts, I pose you this:
even if the current NV architecture isn't well suited to DX12, surely Pascal will be designed with DX12 in mind?
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
So correct me if I'm wrong, because I'm trying to understand where you're coming from.

Your idea is that because DX12 is a low-level API, it should be a more level playing field between the AMD and Nvidia chips, and the AMD chips will shine since they were crippled by their DX11 drivers, right?
--------
Second part, if I'm right about your thoughts, I pose you this:
even if the current NV architecture isn't well suited to DX12, surely Pascal will be designed with DX12 in mind?

Crippled by DX11, not necessarily their drivers. It depends on how they themselves looked at it. If they thought the API was inadequate, making their drivers sort out its limitations would be like polishing a turd.

This comes around to what Pascal might be. It depends on where Nvidia's vision was and what they thought DX12 would be (and when they realized it would be something else). Pascal could go either way, but should be better than Maxwell 2 at minimum. The question is how well it will do against even stronger GCN without the same ROP limitations.
 

Glo.

Diamond Member
Apr 25, 2015
5,769
4,696
136
A guy named Bandersnatch comments on some of Mahigan's arguments from a programmer's perspective.

I'm in agreement with him. It looks like Oxide optimized the code mostly for AMD and not for NVidia. How else do you explain the lack of performance for the DX12 path vs that of the DX11 path?

The DX12 path should be much faster, period, unless they screwed up royally.
http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/500#post_24325746
In the same thread, on another page, you have an exact explanation of why Nvidia GPUs are getting lackluster performance in DirectX 12.

It is not due to the developer, but due to Nvidia hardware that is incapable of running graphics and compute in parallel.

AMD never released a driver for this, so I suspect it's just not optimized. They should, for the sake of those who don't upgrade; probably once the game is released in full form.
There will never be game-specific drivers and optimizations in DX12. The game talks directly to the GPU, without driver scheduling.
 
Last edited:

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
So correct me if I'm wrong, because I'm trying to understand where you're coming from.

Your idea is that because DX12 is a low-level API, it should be a more level playing field between the AMD and Nvidia chips, and the AMD chips will shine since they were crippled by their DX11 drivers, right?
--------
Second part, if I'm right about your thoughts, I pose you this:
even if the current NV architecture isn't well suited to DX12, surely Pascal will be designed with DX12 in mind?

Sounds like an AMD defense: "just wait until..." That is a rabbit hole.
 
Last edited:
Feb 19, 2009
10,457
10
76
So correct me if I'm wrong, because I'm trying to understand where you're coming from.

Your idea is that because DX12 is a low-level API, it should be a more level playing field between the AMD and Nvidia chips, and the AMD chips will shine since they were crippled by their DX11 drivers, right?
--------
Second part, if I'm right about your thoughts, I pose you this:
even if the current NV architecture isn't well suited to DX12, surely Pascal will be designed with DX12 in mind?

Correct on both counts. GCN is running with one leg crippled on DX11, as it was designed for these new APIs from the ground up. Refer to TechReport's & Computerbase.de's articles regarding the Fury X and the trade-offs of its uarch.

DX12 allows GCN to shine. Whether it allows Maxwell to shine further (because it's already optimized for DX11) remains to be seen. If not, then surely Pascal will be great for DX12; of that I have no doubt.

Basically DX12's heritage will give AMD an advantage on current hardware, but once NV has the full DX12 specs, they will have made changes to take advantage of the new API. We will assuredly see NV hyping up the DX12 performance & uarch of Pascal once the time comes. As I've said a few times, the real DX12 battle is Arctic Islands vs Pascal. Oxide agrees: in their SIGGRAPH 2015 presentation they specifically mention that the only reason we don't see DX12 shine even more is current hardware limitations. They mention next-gen hardware will be 200% faster (not twice as fast, three times as fast!) in DX12 mode.
 
Last edited:

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
And your post is obviously meant to deflect as much blame onto NVidia as possible.

If you knew anything about DX12, you would obviously find fault with the benchmarks as well. The whole point of DX12 is to put the burden of performance and optimization mostly on the developers, since it's their code and they should know it better than the IHVs.

So Dan Baker's claim that IHVs had access to the source code is really irrelevant, since it was primarily his studio's responsibility to optimize the code and make sure it runs properly across all hardware.

Secondly, the game sometimes runs SLOWER in DX12 mode compared to DX11. That in and of itself should raise flags, if a piece of code is running slower using a high-performance, low-level API vs a highly abstracted one like DX11.

Thirdly, I remember several videos where AMD and Oxide talked about the superior parallel rendering of DX12 and Mantle compared to DX11, and how multicore CPUs would finally begin to stretch their legs.

Well, where is it? Looking at the CPU benchmarks, it looks like all the talk of multicore CPUs gaining larger increases has gone out the window. Here, a dual-core i3-4330 is faster than an eight-core AMD FX-8370.

It's possible that the poor CPU scaling may be impacting NVidia's performance.


Nvidia had access to the source, rewrote a shader (or shaders), knew it was going to be a benchmark, and released a beta driver for it. However, because it lost street cred, Nvidia and its blind followers try to discredit a dev shop that actually did DX11 command lists right, which gave Nvidia hardware a decent boost. This is the scumbag behaviour I expect from AMD, not the "market leader" (who always leads from behind, btw). I guess Nvidia felt threatened enough to warrant such a display.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
There's no way GCN was ever developed with anything but DX11 in mind; they must have started working on it 6+ years ago, at least in the case of GCN 1.0.

There is one thing (again, this is all based on speculation): if it's a hardware limitation, what magic is DX11 doing to perform better than DX12 on Nvidia? Seriously, there is just no reason for it.

DX12 is a low-level API; remember what happens with low-level APIs and different hardware? It's the reason we had an API like DX11. To me, we are talking about a game that is optimised for AMD GCN... and that's why it works like crap on Maxwell; that's why we need to wait and see.

Nvidia had access to the source, rewrote a shader (or shaders), knew it was going to be a benchmark, and released a beta driver for it. However, because it lost street cred, Nvidia and its blind followers try to discredit a dev shop that actually did DX11 command lists right, which gave Nvidia hardware a decent boost. This is the scumbag behaviour I expect from AMD, not the "market leader" (who always leads from behind, btw). I guess Nvidia felt threatened enough to warrant such a display.

Nvidia is not going to code the game for them; we don't even have an idea of what they did or for which API. Coding a shader is not really a big deal.
 
Last edited:

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
There's no way GCN was ever developed with anything but DX11 in mind; they must have started working on it 6+ years ago, at least in the case of GCN 1.0.

There is one thing (again, this is all based on speculation): if it's a hardware limitation, what magic is DX11 doing to perform better than DX12 on Nvidia? Seriously, there is just no reason for it.

DX12 is a low-level API; remember what happens with low-level APIs and different hardware? It's the reason we had an API like DX11. To me, we are talking about a game that is optimised for AMD GCN... and that's why it works like crap on Maxwell; that's why we need to wait and see.

Maybe the DX11 drivers are just better at memory management and submitting draw calls currently, compared to what Oxide is capable of?
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Correct on both counts. GCN is running with one leg crippled on DX11, as it was designed for these new APIs from the ground up. Refer to TechReport's & Computerbase.de's articles regarding the Fury X and the trade-offs of its uarch.

DX12 allows GCN to shine. Whether it allows Maxwell to shine further (because it's already optimized for DX11) remains to be seen. If not, then surely Pascal will be great for DX12; of that I have no doubt.

Basically DX12's heritage will give AMD an advantage on current hardware, but once NV has the full DX12 specs, they will have made changes to take advantage of the new API. We will assuredly see NV hyping up the DX12 performance & uarch of Pascal once the time comes. As I've said a few times, the real DX12 battle is Arctic Islands vs Pascal. Oxide agrees: in their SIGGRAPH 2015 presentation they specifically mention that the only reason we don't see DX12 shine even more is current hardware limitations. They mention next-gen hardware will be 200% faster (not twice as fast, three times as fast!) in DX12 mode.

Can you link me to those articles so I can be on the same page as you?

Alright, that makes a lot more sense now though.
And I've been saying that due to the speed of next-gen hardware, none of us are going to care about the 980 Ti/Fury X. It's why I'm struggling to buy them when I know next-gen hardware is going to be fast.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
Maybe the DX11 drivers are just better at memory management and submitting draw calls currently, compared to what Oxide is capable of?

If that's the case then there is something VERY wrong with it; DX11 should not be better in any way.

I think people are missing the point of a low-level API: it needs hardware-specific optimizations on the game side, which is a lot of work, and Oxide has been working on GCN for a while now.
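To give a feel for what that work looks like, here is a minimal sketch (illustrative only, not from any shipping engine) of the memory management D3D12 pushes onto the game: the application picks the heap type, describes the buffer itself, and is responsible for state transitions via barriers, all of which the DX11 driver used to handle behind the scenes.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a GPU-local buffer. Under DX11 the driver chose the memory pool
// and tracked usage; under DX12 the application does.
ComPtr<ID3D12Resource> CreateVertexBuffer(ID3D12Device* device, UINT64 bytes)
{
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;           // video memory

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = bytes;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;  // required for buffers

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}

// After the upload copy, the app (not the driver) must transition the
// resource into the state the next pass expects.
void TransitionToVertexBuffer(ID3D12GraphicsCommandList* list,
                              ID3D12Resource* buffer)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = buffer;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER;
    list->ResourceBarrier(1, &barrier);
}

Getting heap choices and barriers wrong (or just less optimal than an IHV's hand-tuned DX11 driver) is one plausible way a DX12 path could end up slower than DX11 on some hardware.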
 
Last edited:

Ma_Deuce

Member
Jun 19, 2015
175
0
0
I wonder if this will help the 390/390X fully utilize its 8GB of RAM. I realize that it's way too early to tell anything, but it would be interesting to see the 390X outgunning a 980 Ti, especially with them both in XFire/SLI.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I wonder if this will help the 390/390X fully utilize its 8GB of RAM. I realize that it's way too early to tell anything, but it would be interesting to see the 390X outgunning a 980 Ti, especially with them both in XFire/SLI.

The reason the 390 is appealing to me is crossfire with the 8GB VRAM. That's a flexible combo. Good for new games, but for old games you can crank that AA.

I don't see why the 390 in Crossfire even NOW is a bad choice vs the 980Ti. Well, for me anyway as I would never play games that don't have good crossfire/sli support and I'm more than willing to wait up to 5+ years for a game to have a working fix for crossfire/sli, or just have single card performance be enough to play the game in the level of graphical fidelity I desire.

But I'm a patient beast =D.
 

Ma_Deuce

Member
Jun 19, 2015
175
0
0
I'm more than willing to wait up to 5+ years for a game to have a working fix for crossfire/sli, or just have single card performance be enough to play the game in the level of graphical fidelity I desire.

But I'm a patient beast =D.

I'm with you there. Wait a few years, let all of the bugs get patched out and scoop up a GOTY edition with all of the DLC on the cheap. By that time I can play the game without compromising on the graphics.
 