PCPER DX12 GPU and CPU Performance Tested: Ashes of the Singularity Benchmark


Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Nvidia's advantage in DX11 may come down to both AMD not bothering to optimize for DX11 and Nvidia making full use of the deferred contexts feature in DX11, which AMD has never supported. Ashes of the Singularity uses the same engine as the Star Swarm demo, right? That supported deferred contexts. Couldn't say why there's sometimes a performance regression for Nvidia in DX12.
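
For anyone unfamiliar, the rough idea of deferred contexts: a worker thread records its draw calls into a command list on a deferred context, and the main thread later replays that list on the immediate context, so command recording can be spread across cores. A minimal C++/D3D11 sketch of that pattern, assuming a device and immediate context already exist (the helper names and the omitted draw state are just for illustration, no error checking):

Code:
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Runs on a worker thread: record commands into a command list
// via a deferred context instead of the immediate context.
ComPtr<ID3D11CommandList> RecordOnWorkerThread(ID3D11Device* device)
{
    ComPtr<ID3D11DeviceContext> deferredCtx;
    device->CreateDeferredContext(0, &deferredCtx);

    // ... bind pipeline state and issue draws on deferredCtx here, e.g.
    // deferredCtx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    // deferredCtx->Draw(vertexCount, 0);

    ComPtr<ID3D11CommandList> cmdList;
    deferredCtx->FinishCommandList(FALSE, &cmdList);   // close the recording
    return cmdList;
}

// Runs on the main thread: replay the pre-recorded commands.
// How much of the recording work the driver actually moves off the
// main thread is up to the driver, which is where vendors differ.
void SubmitOnMainThread(ID3D11DeviceContext* immediateCtx,
                        ID3D11CommandList* cmdList)
{
    immediateCtx->ExecuteCommandList(cmdList, FALSE);
}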
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Well, since we can't test this ourselves, we'll never know what's going on. With so many different results that don't make much sense, this isn't a benchmark I can draw conclusions from yet. I thought I could at first, but not so much now.
 

positivedoppler

Golden Member
Apr 30, 2012
1,112
174
106
Here's another one. 780Ti / i7-3770 gets obliterated by 770 / i5-3570K.

The only explanation I can think of for this kind of stuff is that Nvidia has settings and optimizations in their DX12 drivers that are on for some cards, but not others.

There's always the possibility that Nvidia optimized Maxwell's hardware years ago, on the drawing board, for what DirectX 11 had to offer. Maybe Maxwell's hardware is simply better suited to DirectX 11 games than to DirectX 12.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
There's always the possibility that Nvidia optimized Maxwell's hardware years ago, on the drawing board, for what DirectX 11 had to offer. Maybe Maxwell's hardware is simply better suited to DirectX 11 games than to DirectX 12.

While that could be, why do that when you advertise full DX12 support? Now I know what people will say: supporting it doesn't mean performing well in it, and I get that, but it still doesn't add up to me. Besides, the 780 and 770 aren't even Maxwell cards. I could accept that we're seeing the potential of AMD cards come out now and that Nvidia cards just can't keep pace once you remove the overhead. However, it's too early to call it like that.

I'm leaning heavily toward the need for better DX12 drivers from Nvidia.
 

brandonmatic

Member
Jul 13, 2013
199
21
81
Here's ExtremeTech's conclusion. They didn't seem to find an MSAA bug for Nvidia.

As things stand right now, AMD showcases the kind of performance that DirectX 12 can deliver over DirectX 11, and Nvidia offers more consistent performance between the two APIs. Nvidia’s strong performance in DX11, however, is overshadowed by negative scaling in DirectX 12 and the complete non-existence of any MSAA bug. Given this, it’s hard not to think that Nvidia’s strenuous objections to Ashes had more to do with its decision to focus on DX11 performance over DX12 or its hardware’s lackluster performance when running in that API.

http://www.extremetech.com/gaming/2...-singularity-amd-and-nvidia-go-head-to-head/3

Edit: AMD's DX11 performance is really awful compared to Nvidia's; Nvidia seems to have done some impressive DX11 driver work.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
With no game out yet, maybe Nvidia didn't put much work into it? It is pre-alpha, right? They might have just been lazy.

Oxide doesn't know what Nvidia did with their driver either; Oxide offered them the game's code, but that doesn't mean Nvidia has done any driver work for it yet. Nvidia usually releases a Game Ready driver for new games right at release, and I don't know how long they hold those optimizations back.

Nvidia did drop a driver for this game engine: http://www.geforce.com/whats-new/articles/geforce-355-60-whql-driver-released
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
This reminds me of Tessellation. Recall how NV pretty much said Tessellation wasn't important. Until Fermi. Then it's all about Tessellation.

I'm expecting the same with Pascal. It's all about DX12 & VR post-Pascal.
I was thinking along the same lines.
Overreacting doesn't begin to describe the conclusions being drawn from this benchmark.
Mostly from the Nvidia camp; they seem to be all up in arms and quick to discredit an alpha-level benchmark.
 

positivedoppler

Golden Member
Apr 30, 2012
1,112
174
106
While that could be, why do that when you advertise full DX12 support? Now I know what people will say: supporting it doesn't mean performing well in it, and I get that, but it still doesn't add up to me. Besides, the 780 and 770 aren't even Maxwell cards. I could accept that we're seeing the potential of AMD cards come out now and that Nvidia cards just can't keep pace once you remove the overhead. However, it's too early to call it like that.

I'm leaning heavily toward the need for better DX12 drivers from Nvidia.

Anything is possible, so one would hope it's merely a driver issue. I don't know how much DirectX 12 inherited from Mantle, but AMD has been designing their hardware around Mantle for some time now, and the current Maxwell was released almost a year ago. Nvidia probably finalized Maxwell's design at least another half year before that, which would have left them little time to rearchitect Maxwell if DirectX 12 took them by surprise.


Microsoft announced DirectX 12 around March of 2014. The 970 and 980 came out six months later.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
While that could be, why do that when you advertise full DX12 support? Now I know what people will say: supporting it doesn't mean performing well in it, and I get that, but it still doesn't add up to me. Besides, the 780 and 770 aren't even Maxwell cards. I could accept that we're seeing the potential of AMD cards come out now and that Nvidia cards just can't keep pace once you remove the overhead. However, it's too early to call it like that.

I'm leaning heavily toward the need for better DX12 drivers from Nvidia.

I agree with this; it's still too early to draw conclusions, especially with a bench that Nvidia doesn't believe will show true DX12 potential.
 
Feb 19, 2009
10,457
10
76
I agree with this; it's still too early to draw conclusions, especially with a bench that Nvidia doesn't believe will show true DX12 potential.

Maybe NV should shift GameWorks into an actual games production branch; that way they can show the true DX12 potential in their own games if they're not happy with the results from a studio that was at THE forefront of Mantle & DX12. Oxide & DICE are basically the devs synonymous with these new APIs, after all.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Well, the NVIDIA driver should be optimised.

Date: March 20, 2014

"NVIDIA will support the DX12 API on all the DX11-class GPUs it has shipped; these belong to the Fermi, Kepler and Maxwell architectural families. - See more at: http://blogs.nvidia.com/blog/2014/03/20/directx-12/#sthash.Lg1CBec0.hr50ELd0.dpuf"

So I guess that means it's fine to say that AMD's GCN cards all support DX12, even though they're only feature level 11.1? Fermi and Kepler are even further back at 11.0.
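
To be clear about the distinction being argued over: "supports DX12" here means the DX12 runtime/API runs on the card, while the feature level is a separate capability the driver reports. A minimal C++ sketch, just for illustration, of how an app could ask a D3D12 device which feature level the hardware actually exposes (default adapter, no error handling, and the function name is made up):

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

void PrintMaxFeatureLevel()
{
    ComPtr<ID3D12Device> device;
    // Succeeds on any DX12-capable GPU, even one that only exposes
    // feature level 11_0 (e.g. Fermi/Kepler) or 11_1 (first-gen GCN).
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS query = {};
    query.NumFeatureLevels        = static_cast<UINT>(sizeof(levels) / sizeof(levels[0]));
    query.pFeatureLevelsRequested = levels;

    // Ask the driver for the highest feature level it supports.
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &query, sizeof(query));
    std::printf("Max feature level: 0x%x\n",
                static_cast<unsigned>(query.MaxSupportedFeatureLevel));
}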
 

kagui

Member
Jun 1, 2013
78
0
0
"NVIDIA will support the DX12 API on all the DX11-class GPUs it has shipped; these belong to the Fermi, Kepler and Maxwell architectural families. - See more at: http://blogs.nvidia.com/blog/2014/03/20/directx-12/#sthash.Lg1CBec0.hr50ELd0.dpuf"

So I guess that means it's fine to say that AMD's GCN cards all support DX12, even though they're only feature level 11.1? Fermi and Kepler are even further back at 11.0.

Well, I assume yes, but it's better to wait until it's tested.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
These results validate that the efforts AMD put into Mantle/Vulkan are beginning to pay off. Without Mantle/Vulkan/DX12, AMD could never have caught up to Nvidia, given that Nvidia has a higher R&D budget and focuses primarily on GPUs, while AMD, with a smaller R&D budget, has to cover both high-performance x86 CPU cores (Zen/Zen+) and GPUs (GCN). DX12 levels the playing field for AMD against Nvidia.

The next 12 months could see the AMD Fury X finally fulfill its potential as more DX12 games like Star Wars Battlefront, Deus Ex: Mankind Divided, Hitman, Ashes of the Singularity and Fable Legends launch. I think almost all the major engines - Frostbite, CryEngine, Unreal, Unity - are going to have DX12 support without much of a time lag. I believe DX12 will see faster adoption because, more than anything, it will improve power efficiency and reduce power consumption through reduced CPU API overhead. AMD's efforts with Mantle should be lauded for taking the PC gaming industry in the right direction: reduced API overhead, more programming power for developers and better efficiency. :thumbsup:
 
Feb 19, 2009
10,457
10
76
I agree @raghu78!

AMD was criticized for designing GCN to be so future-focused, with poor efficiency in the present, but they really had no choice. They don't have the budget to constantly come up with a new uarch. GCN was made to last, and it's made to excel with a new API beyond DX11.

They gambled on GCN & Mantle's success, later pawning it off for free to various players (Apple, Khronos), which would force MS to also adopt the approach or fall behind. A good move given their financial situation.

As said, the battle at DX12 will certainly be GCN vs Pascal.

What's really interesting is why Faildozer performs so badly. Again, we were all told to expect great multi-core scaling, and the tests show it's HALF the speed of an i7.
 

selni

Senior member
Oct 24, 2013
249
0
41
Here's another one. 780Ti / i7-3770 gets obliterated by 770 / i5-3570K.

<image>

The only explanation I can think of for this kind of stuff is that Nvidia has settings and optimizations in their DX12 drivers that are on for some cards, but not others.

The 770 results in the wccftech article appear to be from a CPU test (second line) vs. a "Full System Test", whatever differences that implies. The average CPU framerate number appears to be the same (ish) between them.

Not sure how the benchmark setup works but I think wccftech just screwed up here and ran the wrong thing.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Borrowing from .vodka's quote:

"All IHVs have had access to our source code for over year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months."
Then, farther on:



"Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.


To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.
We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present)."
Thank goodness for no middleware! This is how games should be developed: helmed by the developers themselves from beginning to end.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
I agree @raghu78!

AMD was criticized for designing GCN to be so future-focused, with poor efficiency in the present, but they really had no choice. They don't have the budget to constantly come up with a new uarch. GCN was made to last, and it's made to excel with a new API beyond DX11.

They gambled on GCN & Mantle's success, later pawning it off for free to various players (Apple, Khronos), which would force MS to also adopt the approach or fall behind. A good move given their financial situation.

As said, the battle at DX12 will certainly be GCN vs Pascal.

AMD GCN GPUs are going to age very well given their console heritage and the low-level APIs (DX12/Vulkan/Mantle). Now what we need is for AMD to improve the efficiency of their GPUs to compete with Nvidia (perf/watt, perf/sq mm and perf/transistor). That's the key battle for the next-gen FinFET GPUs launching in late 2016. AMD has disappointed in terms of efficiency and has lost market share badly. Maxwell is a fantastic showcase for GPU efficiency, and AMD needs to get back in the game.

What's really interesting is why Faildozer performs so badly. Again, we were all told to expect great multi-core scaling, and the tests show it's HALF the speed of an i7.

Bulldozer is a wretched architecture with too many mistakes. One of its main flaws was the pathetic performance of the cache subsystem and its horrible cache latency; for good gaming performance you need a very good, low-latency cache subsystem. Hopefully Keller and his team can correct the blunders of the past with Zen, as it's a clean-sheet design. AMD's future and survival depend on Zen and the next-gen FinFET GPUs. If those are not competitive, AMD is dead.
 
Feb 19, 2009
10,457
10
76
The 770 results in the wccftech article appear to be from a CPU test (second line) vs. a "Full System Test", whatever differences that implies. The average CPU framerate number appears to be the same (ish) between them.

Not sure how the benchmark setup works but I think wccftech just screwed up here and ran the wrong thing.

Reading various reviews, the benchmark is very flexible: one can set it up to test CPU bottlenecks, GPU bottlenecks, or a mixture. On a CPU bottleneck test, graphics settings are minimized.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Reading various reviews, the benchmark is very flexible: one can set it up to test CPU bottlenecks, GPU bottlenecks, or a mixture. On a CPU bottleneck test, graphics settings are minimized.


I think I read that the GPU is a non-factor in the CPU test. The GPU is simulated, bottleneck-free.
 