Ashes of the Singularity User Benchmarks Thread


VR Enthusiast

Member
Jul 5, 2015
133
1
0
They'd sell off their entire inventory at $2, Nvidia would buy it all, and resell it and take AMD out and keep themselves....

Right...maybe you want to think a bit harder about that one.

First off, dumping your inventory like that is illegal.

Bribing devs not to use your competitor's advanced technology is also illegal.

Secondly, it's been proven to be a completely horrendous strategy, so I really don't see that.

Where was this proven?

If they were a sitting duck they'd sell off their company... lol....
Companies are in the business of making a profit, not "taking another company with them".

You didn't really follow any of the discussion, did you? The point I'm making is that AMD is not so helpless as to be some sitting duck for Nvidia to abuse at their leisure.

If they were then Nvidia's awful DX12 performance wouldn't be an issue for so many of you here. :thumbsup:
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
While I agree with you about open source, etc., I think your politics suck. You might want to leave them out of VC&G. Not only is this not the place, but it could bias people's opinion of you unnecessarily.

Agreed on that. Mahigan could use a bit less of the unneeded stuff and a bit more sticking to the technology.

Like everybody else here (me too).

(English is not my first language so some things don't come across well).
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
Nvidia does not really need to pay them anything. If it's a feature that only works on AMD, then no game is going to use it; we already saw that happen with Deferred Contexts on DX11, and that was on Nvidia's side, which has most of the market. Same reason why Mantle was DOA, no matter whether it was good or not. Same reason why Nvidia has to pay for them to use GameWorks, etc.
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
Nvidia does not really need to pay them anything. If it's a feature that only works on AMD, then no game is going to use it; we already saw that happen with Deferred Contexts on DX11, and that was on Nvidia's side, which has most of the market. Same reason why Mantle was DOA, no matter whether it was good or not. Same reason why Nvidia has to pay for them to use GameWorks, etc.

It's a feature that is on the consoles > comes to AMD by default (easy) > has to be deliberately removed for AMD to lose the benefit!

This is a big difference, guys; please try to understand it's not like GW. AMD's advantage is built in; it needs to be deliberately taken out! And that is illegal if done for financial incentives.
 
Feb 19, 2009
10,457
10
76
Nvidia does not really need to pay them anything. If it's a feature that only works on AMD, then no game is going to use it; we already saw that happen with Deferred Contexts on DX11, and that was on Nvidia's side, which has most of the market. Same reason why Mantle was DOA, no matter whether it was good or not. Same reason why Nvidia has to pay for them to use GameWorks, etc.

As a developer, if you had to make a cross-platform game, you would design your game engine around the abilities of the most important market: consoles. You would therefore try to find ways to maximize their performance due to their weaker APUs. How would you achieve that? Well, the ACEs are there to be used, so you offload all your lighting, shadow processing and physics compute to run in async mode.

Now when you port it over to PC, you find that AMD runs it well and NV runs it less well. What option would you take that is the fastest and easiest?

Assuming:
1. Neutral title
2. AMD sponsored title
3. NV sponsored title

Btw, some food for thought: there are games where AMD runs better, such as SOM, Civ, Hitman, Sleeping Dogs, etc. It did not hurt their sales. Do you think the average gamer blames devs or NV/drivers when a game runs poorly on NV GPUs?
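
For anyone curious what "offloading compute to the ACEs" looks like at the API level: it basically means recording those passes on a separate compute queue instead of the graphics queue. Below is a minimal, purely illustrative D3D12 sketch of that submission pattern; the function and parameter names are hypothetical, and it assumes the queues, command lists and fence have already been created.

[CODE]
#include <d3d12.h>

// Hypothetical helper (names purely illustrative): submit one frame so that the
// compute passes (lighting, shadows, physics) run on a dedicated compute queue
// and can overlap with graphics work that doesn't depend on their output.
void SubmitFrameWithAsyncCompute(ID3D12CommandQueue* directQueue,
                                 ID3D12CommandQueue* computeQueue,   // created with D3D12_COMMAND_LIST_TYPE_COMPUTE
                                 ID3D12CommandList*  computeWork,    // pre-recorded compute passes
                                 ID3D12CommandList*  independentGfx, // graphics that doesn't read the compute output
                                 ID3D12CommandList*  dependentGfx,   // graphics that consumes the compute output
                                 ID3D12Fence*        fence,
                                 UINT64              fenceValue)
{
    // Kick off the compute passes on their own queue and signal when they finish.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, fenceValue);

    // Graphics work with no dependency on the compute output starts immediately
    // and can overlap with the compute queue on hardware that supports it.
    directQueue->ExecuteCommandLists(1, &independentGfx);

    // GPU-side wait: anything submitted to the direct queue after this point
    // only starts once the compute passes have finished. The CPU never stalls.
    directQueue->Wait(fence, fenceValue);
    directQueue->ExecuteCommandLists(1, &dependentGfx);
}
[/CODE]

On hardware with independent compute engines the two queues can genuinely overlap; on hardware that has to context-switch between graphics and compute, the same submission effectively runs serially, which is what the whole argument here is about.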
 
Feb 19, 2009
10,457
10
76
Some more words from insiders (AMD's Robert Hallock):

https://www.reddit.com/r/AdvancedMi...ide_games_made_a_post_discussing_dx12/cul9auq

NVIDIA claims "full support" for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching.

And it comes back to context switching... that term should be familiar by now, I hope. That's the single engine that NV's uarch has; they cannot do graphics AND compute simultaneously without a context switch.

Also, thank you @Zlatan for enlightening the masses many months ago. It is exactly as you say: Maxwell does have a problem with context switching and in fact does not support async compute, since it doesn't occur in parallel (and therefore no performance gains can be had from using it; at best it will have a small perf impact, at worst a big one).

GCN has supported async shading since its inception, and it did so because we hoped and expected that gaming would lean into these workloads heavily. Mantle, Vulkan and DX12 all do. The consoles do (with gusto). PC games are chock full of compute-driven effects.

AMD going public with that is a first salvo. Let's see the response from NV.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Nvidia does not really need to pay them anything. If it's a feature that only works on AMD, then no game is going to use it; we already saw that happen with Deferred Contexts on DX11, and that was on Nvidia's side, which has most of the market. Same reason why Mantle was DOA, no matter whether it was good or not. Same reason why Nvidia has to pay for them to use GameWorks, etc.

The PC market share for AMD GPUs is slipping, or has slipped, below 20%. Certain people really seem to be more full of wishful thinking than grounded in reality. There was a PS4 developer posting here at AnandTech stating that it is unknown whether asynchronous compute will make much of a difference at all. In theory, yes, but in practice its feasibility is very dependent on the application. But now we have so many posters turning this one capability into the most important feature of DX12.

We have one alpha benchmark that starts up with an AMD logo and people are running wild with it.

Surely it should be understood that Maxwell is a totally different architecture. What this means is that not everything will work the same as it does on GCN. Does this mean Nvidia is doomed, or that they simply can't compete in DX12? What a huge jump in logic from such limited information coming from an alpha-stage benchmark.

Nvidia asked the developer not to have a feature enabled at this time, so the jump in logic is that Nvidia will not and can never use that feature. Well, they also asked that MSAA not be used at this point; I guess Nvidia cards can't do MSAA. Everyone spread the word: Nvidia cards can't do MSAA.

How many games are gonna be like this alpha-stage Ashes benchmark? How many games will be on other engines? I somehow feel like this will be a tiny speck compared to the big blockbuster AAA games built for DX12.

Let's say that Nvidia cannot do any asynchronous compute at all. Considering the Xbox One has only 2 ACE units, I can't see why or how Nvidia will be in such a terrible position once we actually have DX12 games come out.

See, I won't kid myself or anyone else. The ACE is strictly GCN; I mean, it is an AMD thing. That doesn't mean that Maxwell cannot perform any asynchronous functions at all. Perhaps they achieve this in a drastically different way; perhaps the specific implementation is causing stalls. Does this mean that Nvidia can't fix this, that they are totally helpless?
We have so, so little information, yet people are writing books. You cannot ignore the fact that the PC gaming market is 80% Nvidia. Developers want to sell their games; they have to.

We saw Nvidia fix Star Swarm so dramatically that their DX11 results were jaw-dropping. It's easy to see there is a problem with Nvidia's DX12 performance in the Ashes alpha demo, but it is a far stretch to make this into anything else.

Ultimately, we have very little data and a lot of wishful thinking. The real DX12 games will be all that matters, not some alpha benchmark that starts up with an AMD logo on it.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
Ultimately, we have very little data and a lot of wishful thinking. The real DX12 games will be all that matters, not some alpha benchmark that starts up with an AMD logo on it.
How about games (and game engines) with Nvidia logos everywhere? Are those also going to be discounted as a fair assessment of hardware speed/capability?
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
If anyone's curious (and assuming this wasn't posted already), a guy from the Beyond3D forums wrote a small program that basically runs a compute job alone, a graphics job alone, and then compute and graphics together (link).

Basically, if your GPU supports async compute, then the time taken for compute and graphics together should be less than the sum of them run alone (but not less than the longer of the two, since they can be run in parallel with async compute). Without async compute, the time taken to run compute and graphics together should be exactly equal to the sum of them run separately (since without async compute, they are run serially).

Early tests (from the linked Beyond3D thread) indicate that Maxwell 2 (a GTX 960 and a GTX 970 have been tested) does in fact not support async compute (as far as this test is concerned, anyway), since the time taken for compute + graphics is basically the same as the sum of the two run separately.

Of course this is not an actual game, and there might be any number of bugs in it and/or unintended behavior, so take it with the usual grain of salt.
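
To make that comparison concrete, here is a small sketch (not the Beyond3D program itself; the timings and the tolerance below are made up for illustration) of how you would interpret the three measurements:

[CODE]
#include <algorithm>
#include <cstdio>

// Given wall-clock timings (in ms) for the three runs, decide whether the
// combined run shows any overlap. The tolerance just absorbs measurement noise.
const char* Classify(double graphicsAlone, double computeAlone, double combined,
                     double tolerance = 0.05)
{
    const double serialSum  = graphicsAlone + computeAlone;
    const double lowerBound = std::max(graphicsAlone, computeAlone);

    if (combined < serialSum * (1.0 - tolerance) &&
        combined >= lowerBound * (1.0 - tolerance))
        return "overlap observed -> async compute appears to work";
    return "no overlap -> graphics and compute appear to run serially";
}

int main()
{
    // Made-up example numbers: graphics alone 20 ms, compute alone 10 ms.
    std::printf("GCN-like result:     %s\n", Classify(20.0, 10.0, 20.5)); // ~max of the two
    std::printf("Maxwell-like result: %s\n", Classify(20.0, 10.0, 30.2)); // ~sum of the two
    return 0;
}
[/CODE]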
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
As a developer, if you had to make a cross-platform game, you would design your game engine around the abilities of the most important market: consoles. You would therefore try to find ways to maximize their performance due to their weaker APUs. How would you achieve that? Well, the ACEs are there to be used, so you offload all your lighting, shadow processing and physics compute to run in async mode.

Now when you port it over to PC, you find that AMD runs it well and NV runs it less well. What option would you take that is the fastest and easiest?

Assuming:
1. Neutral title
2. AMD sponsored title
3. NV sponsored title

Btw, some food for thought: there are games where AMD runs better, such as SOM, Civ, Hitman, Sleeping Dogs, etc. It did not hurt their sales. Do you think the average gamer blames devs or NV/drivers when a game runs poorly on NV GPUs?

Civ 5, as far as I recall, was running best on NV because of deferred contexts; I'm not sure if I missed anything here. The rest are barely games, especially if they come from Ubisoft.

The Xbox One runs DX12, but the PS4 does not. No matter how similar it is, it requires a developer to put in money, and no company ever does that, especially nowadays.

It was the same argument about why everyone would use Mantle (GCN is on consoles!!!!!!) and it did NOT happen! And it will not happen now.

Just stop for a moment and think: if a company has to put in money for it and it only works for a portion of the target market, they will not do it. I mean, wake up, games are getting released before their development is complete.

If GameWorks were Nvidia-only, no one would use it, apart from the ones getting paid.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Is that program from Sebbi? Since he said he was thinking of doing something like that but was too busy before.

No, he said he was too busy.

If anyone's curious (and assuming this wasn't posted already), a guy from the Beyond3D forums wrote a small program that basically runs a compute job alone, a graphics job alone, and then compute and graphics together (link).

Basically, if your GPU supports async compute, then the time taken for compute and graphics together should be less than the sum of them run alone (but not less than the longer of the two, since they can be run in parallel with async compute). Without async compute, the time taken to run compute and graphics together should be exactly equal to the sum of them run separately (since without async compute, they are run serially).

Early tests (from the linked Beyond3D thread) indicate that Maxwell 2 (a GTX 960 and a GTX 970 have been tested) does in fact not support async compute (as far as this test is concerned, anyway), since the time taken for compute + graphics is basically the same as the sum of the two run separately.

Of course this is not an actual game, and there might be any number of bugs in it and/or unintended behavior, so take it with the usual grain of salt.

Do we know yet that the test is actually designed in a way that's conducive for async compute to improve performance?

It doesn't look like it, considering none of the cards seem to benefit (unless I'm reading the results wrong).
 
Feb 19, 2009
10,457
10
76
No, he said he was too busy.



Do we know yet that the test is actually designed in a way that's conducive for async compute to improve performance?

It doesn't look like it, considering none of the cards seem to benefit (unless I'm reading the results wrong).

GCN completes the async (compute + graphics) task in the same amount of time as if it was running either graphics or compute only.

NV completes the async task in the time it takes for the combined graphics + compute, meaning it doesn't do it asynchronously; it does it serially, one then the other. The times are added up.

That's about all we can say without knowing more about the software and what the actual task is.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
GCN completes the async (compute + graphics) task in the same amount of time as if it was running either graphics or compute only.

NV completes the async task in the time it takes for the combined graphics + compute, meaning it doesn't do it asynchronously; it does it serially, one then the other. The times are added up.

That's about all we can say without knowing more about the software and what the actual task is.

Oh right. Didn't notice. Still wondering why the times increase for each iteration.
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
Civ 5, as far as I recall, was running best on NV because of deferred contexts; I'm not sure if I missed anything here. The rest are barely games, especially if they come from Ubisoft.

The Xbox One runs DX12, but the PS4 does not. No matter how similar it is, it requires a developer to put in money, and no company ever does that, especially nowadays.

It was the same argument about why everyone would use Mantle (GCN is on consoles!!!!!!) and it did NOT happen! And it will not happen now.

Just stop for a moment and think: if a company has to put in money for it and it only works for a portion of the target market, they will not do it. I mean, wake up, games are getting released before their development is complete.

If GameWorks were Nvidia-only, no one would use it, apart from the ones getting paid.

Exactly... I posted this above, but you rephrased it in a better way.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
So to summarize, the problem is that Nvidia cards can't do compute and graphics at the same time? Is there anything else beyond that?
 
Feb 19, 2009
10,457
10
76
Civ 5, as far as I recall, was running best on NV because of deferred contexts; I'm not sure if I missed anything here. The rest are barely games, especially if they come from Ubisoft.

Civ: BE, I meant. The other games aren't Ubisoft. Get your facts straight, please.

My point was that console devs will take the easiest route when they port to PC. It's far easier to enable/disable a certain feature than to write an alternate path that runs well on NV. For that to happen, NV would have to pay them, i.e. GameWorks sponsorship.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Oh right. Didn't notice. Still wondering why the times increase for each iteration.

The time doesn't actually increase for each iteration (thread), but rather it increases every 32 threads:



This is on a GTX 980 Ti. I don't know exactly what's going on, but I would imagine that it might have something to do with Nvidia's HyperQ implementation "only" being able to handle 32 threads at a time, whereas the 8 ACEs in GCN 1.1/1.2 can handle up to 64 threads at a time (I don't have a graph from an AMD GPU to compare to though).

Of course in actual games I doubt you would ever see anywhere near this many threads.
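
For what it's worth, a toy model of that guess reproduces the stepped shape: if only 32 kernels can be in flight at once, N independent kernels finish in ceil(N/32) waves, so the total time jumps once every 32 kernels. The 32-deep limit and the per-wave time below are assumptions, purely for illustration:

[CODE]
#include <cstdio>

// Toy model: if the hardware can only keep `limit` compute kernels in flight
// at once, `kernels` independent kernels of roughly equal length finish in
// ceil(kernels / limit) "waves", so the total time steps up every `limit` kernels.
double ExpectedTimeMs(int kernels, int limit, double perWaveMs)
{
    const int waves = (kernels + limit - 1) / limit;  // ceil(kernels / limit)
    return waves * perWaveMs;
}

int main()
{
    const double perWaveMs = 5.0;  // made-up per-wave time, illustrative only
    for (int n = 1; n <= 128; ++n)
        std::printf("%3d kernels -> %5.1f ms\n", n, ExpectedTimeMs(n, 32, perWaveMs));
    return 0;
}
[/CODE]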
 

dogen1

Senior member
Oct 14, 2014
739
40
91
The time doesn't actually increase for each iteration (thread), but rather it increases every 32 threads:



This is on a GTX 980 Ti. I don't know exactly what's going on, but I would imagine that it might have something to do with Nvidia's HyperQ implementation "only" being able to handle 32 threads at a time, whereas the 8 ACEs in GCN 1.1/1.2 can handle up to 64 threads at a time (I don't have a graph from an AMD GPU to compare to though).

Of course in actual games I doubt you would ever see anywhere near this many threads.

I meant increased in general, not necessarily for each step, but yeah, I didn't know what the number meant.


Actually he says it only uses 2 command queues... I'm not sure which is which now.

edit: no graph, but the 290X stays constant for all 128 whatevers
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
Do you realise that is not a big deal, right? How much OpenCL/DirectCompute is used in gaming? Those have been around for a long time.

But it kinda makes me wonder if we can finally use those big IGPs for something useful.

Civ: BE, I meant. The other games aren't Ubisoft. Get your facts straight, please.

My point was that console devs will take the easiest route when they port to PC. It's far easier to enable/disable a certain feature than to write an alternate path that runs well on NV. For that to happen, NV would have to pay them, i.e. GameWorks sponsorship.

I thought you said Assassin's Creed was in that bunch; I do remember you mentioning it.

The easier route is something that works well on every platform; they will always go with the least-work option.
If they need to use async compute for the Xbox One and only AMD on PC, they will just not use it.

It remains to be seen, but remember how much consoles helped Mantle... I remember people arguing about how much easier it would be to port from console to Mantle because it's GCN... and? What happened?
 
Feb 19, 2009
10,457
10
76
So @Shivansps says compute is no big deal in games... devs say it is, that's why they wanted next-gen APIs.

Sony definitely thinks it's important; that's why they helped AMD design GCN with 8 ACEs, and PS4 games have already started to use async compute to give some performance gains.

As said, when working with weaker hardware on consoles, devs will have to find methods to extract peak performance out of it, so of course the statement from Oxide that console devs are pushing the consoles' ACEs in next-gen games makes perfect sense.

The question is what will be done with those heavy async compute console games when they get ported to PC. This is where you are free to speculate. But to say consoles don't use async compute is wrong.
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
So @Shivansps says compute is no big deal in games... devs say it is, that's why they wanted next-gen APIs.

Sony definitely thinks it's important; that's why they helped AMD design GCN with 8 ACEs, and PS4 games have already started to use async compute to give some performance gains.

As said, when working with weaker hardware on consoles, devs will have to find methods to extract peak performance out of it, so of course the statement from Oxide that console devs are pushing the consoles' ACEs in next-gen games makes perfect sense.

The question is what will be done with those heavy async compute console games when they get ported to PC. This is where you are free to speculate. But to say consoles don't use async compute is wrong.

Reemplace "Compute" for "Mantle" and its the exact same argument from Mantle launch hype... again how Mantle is doing howdays?

So i hope you understand that argument alone does not work.
 

Hitman928

Diamond Member
Apr 15, 2012
5,622
8,847
136
Here is Tahiti (GCN 1.0) for comparison. It's not mine; I pulled the numbers from the thread to graph them.

 