Ashes of the Singularity User Benchmarks Thread

Page 29

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
So you are proving my point: they don't have the knowledge of the hardware and are forced to ask. Well, not forced, but they are only doing what they want to do and telling PR that whoever wants something better can come and code our game themselves, because we sure as hell will not.

Really, no.

NVIDIA has had access to the game for more than a YEAR. If they haven't provided Oxide with more Maxwell-optimized code, it simply means Oxide's software is very efficient as it is, and it's only the NVIDIA Maxwell hardware that can't be optimized for DX12 any further.
People will at some point have to realize that NVIDIA Maxwell is not as optimized for DX12 as GCN is, simple as that.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
The OP that I responded to said games today, not games tomorrow. I don't know of a single game today that is actually compute bound.

And compute is still compute. A game can still be compute bound whether the compute shaders are executed serially or in parallel.

That said, I think it would be bad game design if a game were compute bound rather than graphics bound.

Why?

Oh, and some games are probably somewhat compute bound. DiRT Showdown, I remember, had an option that did tiled forward lighting (aka Forward+, I believe?) in a compute shader and killed performance on Nvidia cards at the time. Ryse and Hitman: Absolution may have also been compute bound, not sure.
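For context: Forward+ (tiled forward) lighting uses a compute pass that culls the scene's light list per screen tile before the forward shading pass runs. A minimal sketch of how such a pass is dispatched in D3D11; the shader object, the resource views, and the 16x16 tile size are illustrative assumptions, not DiRT Showdown's actual code:

```cpp
#include <d3d11.h>

// Dispatch a tiled light-culling compute pass: one thread group per
// 16x16 screen tile. Each group tests the light list against its tile's
// depth bounds and writes surviving light indices for the shading pass.
void DispatchTiledLightCulling(ID3D11DeviceContext* ctx,
                               ID3D11ComputeShader* lightCullCS,
                               ID3D11ShaderResourceView* depthSRV,      // scene depth
                               ID3D11ShaderResourceView* lightListSRV,  // all lights
                               ID3D11UnorderedAccessView* perTileLightsUAV,
                               UINT width, UINT height)
{
    ctx->CSSetShader(lightCullCS, nullptr, 0);
    ID3D11ShaderResourceView* srvs[] = { depthSRV, lightListSRV };
    ctx->CSSetShaderResources(0, 2, srvs);
    ctx->CSSetUnorderedAccessViews(0, 1, &perTileLightsUAV, nullptr);

    const UINT tile = 16; // illustrative tile size
    ctx->Dispatch((width + tile - 1) / tile, (height + tile - 1) / tile, 1);

    // Unbind the UAV so the forward pass can read the buffer as an SRV.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}
```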
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Really, no.

NVIDIA has had access to the game for more than a YEAR. If they haven't provided Oxide with more Maxwell-optimized code, it simply means Oxide's software is very efficient as it is, and it's only the NVIDIA Maxwell hardware that can't be optimized for DX12 any further.
People will at some point have to realize that NVIDIA Maxwell is not as optimized for DX12 as GCN is, simple as that.

If other DX12 games don't show the same, then your entire post is completely trashed. And it likely will be, considering Oxide is so closely connected to AMD, both contractually and financially.

AMD also had access to Project CARS for a year, btw....

And do you recall Oxide and Star Swarm? That sounds really familiar in this situation as well. Back then they crippled DX11 to try and sell Mantle.

Oxide is close to being nothing but a PR arm of AMD.
 
Last edited:

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
That said, I think it would be bad game design if a game were compute bound rather than graphics bound.

Just how educated are you to make that claim?

You do want to be compute bound; otherwise, hitting a bottleneck at the rasterizer or sampler means the end of the line as far as performance goes. It's just plain easier to get more compute power than to hope the game never hits a bottleneck in the fixed-function units...
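Whether a pass is limited by shader throughput or by fixed-function units is ultimately something you measure. A common first step is bracketing suspect passes with GPU timestamp queries; a minimal D3D12 sketch, with resource setup and error handling elided:

```cpp
#include <d3d12.h>

// Bracket one GPU pass with timestamp queries to see where frame time
// actually goes. 'tsHeap' and 'readback' are created once at startup.
void TimePass(ID3D12CommandQueue* queue,
              ID3D12GraphicsCommandList* cmdList,
              ID3D12QueryHeap* tsHeap,   // D3D12_QUERY_HEAP_TYPE_TIMESTAMP, Count >= 2
              ID3D12Resource* readback)  // 2 x UINT64 in a READBACK heap
{
    cmdList->EndQuery(tsHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0);
    // ... record the pass you suspect is the bottleneck here ...
    cmdList->EndQuery(tsHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1);
    cmdList->ResolveQueryData(tsHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0, 2,
                              readback, 0);

    // After ExecuteCommandLists() and a fence wait: Map() 'readback',
    // take the difference of the two ticks, and divide by the tick rate
    // below to convert to seconds.
    UINT64 ticksPerSecond = 0;
    queue->GetTimestampFrequency(&ticksPerSecond);
}
```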
 

littleg

Senior member
Jul 9, 2015
355
38
91
1) AMD takes the initiative to talk up asynchronous compute capability and persuades sites to put out articles (like the AnandTech article).
2) They partner with the same small developer that was used to showcase Mantle.
3) They rush out a pre-beta benchmark, not to the public but to the press.


You would think that a developer working on a game they were proud of would really want to show it off to fans. But the whole point of this demo, in my opinion, was to promote AMD.
They sent this alpha benchmark to the press, rushed out for what purpose? To talk up AMD, obviously.

When has this happened before?

AMD is their partner. They wanted to showcase AMD, not their game. It's not even impressive at all. Has anybody seen it? I have no idea how people aren't seeing this.

The public have got it. The thread you're posting in is the user benchmarks thread.
 
Feb 19, 2009
10,457
10
76
Yes, I like the part about responsibility. And yet he is blaming nVidia instead of himself. :hmm:

Yes, it's funny. Why would NV fake support for a DX12 feature? Did they think they could get away with it when Oxide is one of the few studios that are leaders in the DX12 field?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Not even nVidia. Why would Microsoft create a fake presentation talking about Asynchronous Compute on an nVidia card when this "Asynchronous Compute" isn't even supported by nVidia?

Damn, so many questions.

And how can the driver report something which isn't even exposed by the API itself? Damn, a huge mystery.
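For what it's worth, the API question has a concrete answer: D3D12 exposes compute *queues*, not an "async compute" capability bit. A minimal sketch; creating such a queue succeeds on any conformant driver, and whether its work actually overlaps with graphics is left to the hardware and driver:

```cpp
#include <d3d12.h>

// Create a compute-only command queue. D3D12 has no CheckFeatureSupport()
// flag for concurrent execution; the API only promises the work gets done,
// not that it runs in parallel with the graphics queue.
ID3D12CommandQueue* CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ID3D12CommandQueue* queue = nullptr;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```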
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
4,026
753
126
Really, no.

NVIDIA has had access to the game for more than a YEAR. If they haven't provided Oxide with more Maxwell-optimized code, it simply means Oxide's software is very efficient as it is, and it's only the NVIDIA Maxwell hardware that can't be optimized for DX12 any further.
People will at some point have to realize that NVIDIA Maxwell is not as optimized for DX12 as GCN is, simple as that.
Simple as what?
Did Nvidia have access to Windows 10 or to DX12?
No! They had access to DX11, and there they killed it: Ashes runs twice as fast on Nvidia as it does on AMD. And it is an AMD title, figure that.

Oxide couldn't even code a fast shader for Nvidia hardware, talking about experienced coders here, and you want us to believe that they managed to code async compute for them?
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Simple as what?
Did Nvidia have access to Windows 10 or to DX12?
No! They had access to DX11, and there they killed it: Ashes runs twice as fast on Nvidia as it does on AMD. And it is an AMD title, figure that.

Oxide couldn't even code a fast shader for Nvidia hardware, talking about experienced coders here, and you want us to believe that they managed to code async compute for them?

What's this got to do with Oxide? They have been transparent in their dealings with Nvidia, and ultimately this is more an issue on the Nvidia side. If it were due to Oxide "poorly coding" stuff, Nvidia's PR would have been out in full force, but there has been absolutely nothing since they tried saying it was an MSAA issue.

This is what I don't get with people like you: it's like when Apple releases an iProduct with a problem, and instead of taking Apple to task, they deflect and complain about Android or Google.

It's the same thing here: instead of making excuses, take them to task. Like AMD, Nvidia is a company, not a charity; it's the only way to do it. Also, the world plus dog knows of the poor overhead in DX11 with AMD drivers, and they have been taken to task over it anyway, but this is about DX12, right??

I haven't had an AMD card for years, but Nvidia does deserve criticism for this, since I want them to take some action to improve the situation. I couldn't care less about brand e-peen either; if they know they can get away with it, they will, since it's cheaper. Don't think being loyal to AMD or Nvidia means diddly squat.

That's my 2p in this thread - back to the bickering.
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,573
13,829
136
Oxide couldn't even code a fast shader for Nvidia hardware, talking about experienced coders here, and you want us to believe that they managed to code async compute for them?
Seeing as Nvidia asked Oxide to disable async compute on their cards, one might (subjectively) think not even Nvidia can code it. Sarcasm aside, I fail to see how developers accepting optimized code from vendors is a sign of incompetence. Had they declined the offer would you deem them more worthy?

One should make a clear distinction between asking for code, which is alarming, and accepting suggestions/code, which is a sign of a rather productive collaboration.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Simple as what?
Did Nvidia have access to Windows 10 or to DX12?
No! They had access to DX11, and there they killed it: Ashes runs twice as fast on Nvidia as it does on AMD. And it is an AMD title, figure that.

Oxide couldn't even code a fast shader for Nvidia hardware, talking about experienced coders here, and you want us to believe that they managed to code async compute for them?

Kinda makes the theory that it runs badly because it's an AMD-sponsored title seem pretty stupid.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
I'm curious what people actually consider developer bias, then. Robert Hallock recently posted the following -
https://www.reddit.com/r/pcmasterra...aming_nvidia_gpus_do_not_support_dx12/cum3xow
I think gamers are learning an important lesson: there's no such thing as "full support" for DX12 on the market today.
There have been many attempts to distract people from this truth through campaigns that deliberately conflate feature levels, individual untiered features and the definition of "support." This has been confusing, and caused so much unnecessary heartache and rumor-mongering.
Here is the unvarnished truth: Every graphics architecture has unique features, and no one architecture has them all. Some of those unique features are more powerful than others.
Yes, we're extremely pleased that people are finally beginning to see the game of chess we've been playing with the interrelationship of GCN, Mantle, DX12, Vulkan and LiquidVR.
In terms of what AMD's current architecture lacks -
https://www.reddit.com/r/pcmasterra...aming_nvidia_gpus_do_not_support_dx12/cum6vy6
Raster Ordered Views and Conservative Raster. Thankfully, the techniques that these enable (like global illumination) can already be done in other ways at high framerates (see: DiRT Showdown).
So let us accept for the moment that both sides have these unique advantages and that Nvidia cannot support async shaders (although, interestingly, in practice Intel might support all of them).

Then we can have 4 scenarios for a DX12 game if we look at the above in isolation -

1) Developer does not leverage async shaders, ROVs/CR.
2) Developer leverages Async Shaders but not ROVs/CR.
3) Developer leverages ROVs/CR but not Async Shaders.
4) Developer leverages Async Shaders, ROVs/CR.

Which scenarios would you consider biased? This isn't even going into the further complications of how you actually allocate man hour resources in each area.

It also seems evident now that both sides want to influence the software side to better support their feature set.
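For the record, scenarios 2-4 reduce to runtime checks in a real engine: ROVs and conservative rasterization are queryable cap bits/tiers in D3D12, while async shaders are not. A minimal sketch (the struct and function names are illustrative):

```cpp
#include <d3d12.h>

struct FeaturePath {
    bool useROVs;
    bool useConservativeRaster;
};

// Query the per-architecture features discussed above from the runtime.
FeaturePath QueryFeaturePath(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    FeaturePath path = {};
    path.useROVs = opts.ROVsSupported;
    path.useConservativeRaster =
        opts.ConservativeRasterizationTier !=
        D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    // Note: there is no equivalent cap bit for async shaders; that choice
    // has to come from vendor detection or benchmarking.
    return path;
}
```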
 

Goatsecks

Senior member
May 7, 2012
210
7
76
I think people are reading waaaaaaay too much into these results. The game is still in development; it is not even in beta. The expectation that any hardware company should have optimised drivers for a game that is in alpha and that showcases a brand-new API is simply absurd.

The community needs to take a deep breath and wipe the rabid foam from its chin.
 

maddie

Diamond Member
Jul 18, 2010
4,840
4,869
136
After all is said and argued here, we have A BIG QUESTION.

I need a new card, have around $300-ish to spend, and will not be able to upgrade for at least 2-3 years. Whose card should I buy at present?

Forget about suggesting Arctic Islands or Pascal, as I will not be in the market again so soon and I need a card now.
 

Spanners

Senior member
Mar 16, 2014
325
1
0
AMD also had access to Project CARS for a year, btw....
AMD had game codes for the pre-release version of the game, not access to the source code as Oxide has provided here. So saying they also "had access" is disingenuous.

I'm not sure Project CARS analogies are really the best idea for you anyway, as I recall you calling for AMD to take responsibility for their performance shortcomings in that game and not make any "excuses" about the developer. Startlingly similar situation. Where is that self-reliant rhetoric now?
And do you recall Oxide and Star Swarm? That sounds really familiar in this situation as well. Back then they crippled DX11 to try and sell Mantle.
Anything at all to back that statement up? Referring back to Project CARS again, I recall you being adamant that people back up this kind of statement with facts. Here's a quote from Oxide.
Q. This is just a marketing tool for AMD; you’ve obviously crippled the DirectX version!
A. We really haven’t; to be perfectly honest we’ve spent more time optimizing for DirectX than we have on Mantle. The fact is that DirectX is conceived and implemented as a single-threaded API, and so a lot of the more significant gains we see thanks to the Nitrous engine’s aggressive multithreading are badly limited by API overhead when we’re using it.

We obviously can’t prove this to the satisfaction of everyone on the Internet, but understand that our primary goal with Nitrous is to make the best engine we can so that we can open the door to the new kinds of games that we want to make (and play ourselves!). An awful lot of the gamers we hope to entertain don’t and won’t have access to Mantle-enabled hardware any time soon, so we’d be making a huge mistake as entertainers and as businesspeople by not supporting or poorly supporting non-Mantle hardware.
Are these just lies or do you know more than the engine developers?
Oxide is close to being nothing but a PR arm of AMD.
Unless you're calling Oxide liars again, it seems they are working closely with Nvidia to optimise the game and inserting Nvidia-specific code on request. Curious behavior for AMD's PR arm.
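The API-overhead point in the quoted Oxide answer can be made concrete: D3D11 funnels submission through one immediate context, while D3D12 lets each worker thread record its own command list. A minimal sketch, assuming per-thread allocators and lists created elsewhere; RecordChunk stands in for real scene traversal:

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>

// Record N command lists on N threads, then submit them in one call.
void RecordInParallel(ID3D12CommandQueue* queue,
                      std::vector<ID3D12CommandAllocator*>& allocs,
                      std::vector<ID3D12GraphicsCommandList*>& lists,
                      void (*RecordChunk)(ID3D12GraphicsCommandList*, size_t))
{
    const size_t n = lists.size();
    std::vector<std::thread> workers;
    for (size_t i = 0; i < n; ++i)
        workers.emplace_back([&, i] {
            lists[i]->Reset(allocs[i], nullptr); // one allocator per thread
            RecordChunk(lists[i], i);            // record this thread's draws
            lists[i]->Close();
        });
    for (auto& w : workers) w.join();

    // Submission itself is still a single call from one thread.
    queue->ExecuteCommandLists(
        (UINT)n, reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}
```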
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Look at Star Swarm at release and now.

If we are to believe Oxide at launch, that would be impossible. Not to mention the deferred context issue.
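For readers who missed that episode: the "deferred context issue" refers to D3D11's multithreading path, where worker threads record command lists on deferred contexts for later playback on the immediate context. Driver support for it varied by vendor at the time, which is commonly cited as why Star Swarm's DX11 numbers diverged so sharply between vendors. A minimal sketch of the mechanism:

```cpp
#include <d3d11.h>

// Record work via a deferred context, then play it back on the immediate
// context. Whether this actually scales depends on the driver
// (D3D11_FEATURE_DATA_THREADING reports command-list support).
void RecordAndPlayback(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... issue state changes and draws on 'deferred' from a worker thread ...

    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList);   // bake the recording
    immediate->ExecuteCommandList(cmdList, FALSE);  // replay for the GPU

    cmdList->Release();
    deferred->Release();
}
```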
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Raster Ordered Views and Conservative Raster. Thankfully, the techniques that these enable (like global illumination) can already be done in other ways at high framerates (see: DiRT Showdown).

Said it.
 

TheELF

Diamond Member
Dec 22, 2012
4,026
753
126
This is what I don't get with people like you: it's like when Apple releases an iProduct with a problem, and instead of taking Apple to task, they deflect and complain about Android or Google.
I was responding to very concrete things.

1. Oxide gave the source code to the companies, so if there were a way for Nvidia to improve DX12 performance, they would have.
- Sure, OK, they did give out the source code, but how were the companies supposed to run this source code in a DX12 environment?
So, bottom line, Nvidia only got to work on DX11 for more than a year.

2. Oxide are very capable programmers, so there is nothing wrong with their DX12 code.
- That doesn't convince me if they can't even get a shader to run fast.

This game still needs a lot of work. If afterwards it still runs badly on Nvidia under DX12, then sure, it's Nvidia's fault and the product is bad.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
I was responding to very concrete things.

1. Oxide gave the source code to the companies, so if there were a way for Nvidia to improve DX12 performance, they would have.
- Sure, OK, they did give out the source code, but how were the companies supposed to run this source code in a DX12 environment?
So, bottom line, Nvidia only got to work on DX11 for more than a year.

2. Oxide are very capable programmers, so there is nothing wrong with their DX12 code.
- That doesn't convince me if they can't even get a shader to run fast.

This game still needs a lot of work. If afterwards it still runs badly on Nvidia under DX12, then sure, it's Nvidia's fault and the product is bad.

I am confident it will run fine on Maxwell once released. It would be foolish of Oxide to alienate the vast majority of PC gamers, who own Nvidia GPUs. That's just a bad business move. I expect Oxide will work closely with Nvidia to get it running well, regardless of whether Maxwell is or isn't weak on this particular DX12 implementation.
 
Feb 19, 2009
10,457
10
76
Look at Star Swarm at release and now.

If we are to believe Oxide at launch, that would be impossible. Not to mention the deferred context issue.

Yeah, and an "anti-NV studio" that never bothered to optimize a synthetic benchmark that was a one-off showcase.

Some of you forum warriors are insulting some of the greatest programmers around, with years of experience in CREATING the DirectX standard for the entire industry and game engines that push the tech boundaries: Civ 5, the first for multi-threaded rendering in the DX11 era and one of the first for DirectCompute shading, etc.

If you don't want to believe it, fine, wait and see. Don't drag good people into the mud with you. If you must hate on crooked devs, go hate on Warner Bros (Batman: AK, anyone?) or Ubifail.

Seriously, SIGGRAPH 2015 was recent; imagine Dan Baker and Tim Foley in that same conference room educating people about next-gen APIs... and here a bunch of forum warriors are attacking their credibility on the very topic they were invited to represent as the best in their fields.

I recall a similar attack against DICE's Johan Andersson for being AMD PR or a shill back during the Mantle announcement. Guess what? All the Frostbite games ran excellently on NV hardware, even better than on AMD. Proof right there of a high standard of ethics.

Attacking respectable messengers when they deliver a message you dislike hearing is shameless.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Yeah, and an "anti-NV studio" that never bothered to optimize a synthetic benchmark that was a one-off showcase.

Some of you forum warriors are insulting some of the greatest programmers around, with years of experience in CREATING the DirectX standard for the entire industry and game engines that push the tech boundaries: Civ 5, the first for multi-threaded rendering in the DX11 era and one of the first for DirectCompute shading, etc.

If you don't want to believe it, fine, wait and see. Don't drag good people into the mud with you. If you must hate on crooked devs, go hate on Warner Bros (Batman: AK, anyone?) or Ubifail.

Seriously, SIGGRAPH 2015 was recent; imagine Dan Baker and Tim Foley in that same conference room educating people about next-gen APIs... and here a bunch of forum warriors are attacking their credibility on the very topic they were invited to represent as the best in their fields.

I recall a similar attack against DICE's Johan Andersson for being AMD PR or a shill back during the Mantle announcement. Guess what? All the Frostbite games ran excellently on NV hardware, even better than on AMD. Proof right there of a high standard of ethics.

Attacking respectable messengers when they deliver a message you dislike hearing is shameless.

Very well said :thumbsup:
 

selni

Senior member
Oct 24, 2013
249
0
41
NVIDIA DID give them MAXWELL-OPTIMIZED code, which they integrated into the engine; AotS is not GCN-optimized only. Now, if Async Compute works better with GCN, it is not the software's fault that NVIDIA Maxwell doesn't work as well as AMD GCN; it is the hardware's, simple as that.

Directly from OXIDE and Dan Baker.

http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

And yet DX12 is still slower than DX11 on NV hardware. That seems like it couldn't be anything but a software problem? All the talk about async compute has nothing to do with this, because it's disabled on NV hardware anyway. It's not bias (the number of times this gets thrown around... do people think devs make games explicitly just to make their favoured IHV look good?) or incompetence, but if DX12 requires coding at a level at which you can't get good performance out of differing architectures at the same time, well...

The AMD vs Nvidia comparison is interesting but says very little about how well the game is or isn't optimised for each architecture (we have no idea how each card "should" perform in this sort of engine yet - Oxide might, but we don't).
 
Last edited:

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Ahem...

It seems they both support a variation of Async Shaders; AMD's implementation can handle more queues before performance drops, while Maxwell 2 can handle up to 31 before performance drops and levels with AMD.

"Maxwell is capable of Async compute (and Async Shaders), and is actually faster when it can stay within its work order limit (1+31 queues). Though, it evens out with GCN parts toward 96-128 simultaneous command lists (3-4 work order loads). Additionally, it exposes how differently Async Shaders can perform on either architecture due to how they're compiled.

These preliminary benchmarks are NOT the end-all-be-all of GPU performance in DX12, and are interesting data points in an emerging DX12 landscape."


https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/

It's going to be down to developers to code efficiently for all platforms.
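At the API level, "async shaders" just means work submitted on separate graphics and compute queues with fence synchronization; how many of those queues the hardware services concurrently (the 1+31 versus 96-128 distinction in the analysis above) is invisible to the API. A minimal sketch:

```cpp
#include <d3d12.h>

// Submit compute work on its own queue, let independent graphics work
// overlap it, and gate only the pass that consumes the compute results.
void SubmitOverlapped(ID3D12CommandQueue* gfxQueue,
                      ID3D12CommandQueue* computeQueue,
                      ID3D12CommandList* computeWork,
                      ID3D12CommandList* independentGfx,
                      ID3D12CommandList* dependentGfx,
                      ID3D12Fence* fence, UINT64 value)
{
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, value);                // mark compute done

    gfxQueue->ExecuteCommandLists(1, &independentGfx); // may overlap compute
    gfxQueue->Wait(fence, value);                      // GPU-side wait, no CPU stall
    gfxQueue->ExecuteCommandLists(1, &dependentGfx);   // consumes compute output
}
```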
 

TheELF

Diamond Member
Dec 22, 2012
4,026
753
126
Some of you forum warriors are insulting some of the greatest programmers around, with years of experience in CREATING the DirectX standard for the entire industry and game engines that push the tech boundaries: Civ 5, the first for multi-threaded rendering in the DX11 era and one of the first for DirectCompute shading, etc.
They are insulting themselves by releasing an alpha DX12 benchmark where DX12 is actually slower than DX11.

They should at least have released it to the major (i)GPU manufacturers (AMD, Nvidia, Intel) for a month or so of internal testing, instead of knowingly releasing it in a state that makes one company look very bad and one company look very good.
 