Ashes of the Singularity User Benchmarks Thread

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Read on: https://forum.beyond3d.com/posts/1869076/

It's the way the time is reported. Compute takes 7.6ms on GCN.

Nope. That is the theoretical execution time without overhead.

This is why I advised not to draw conclusions from a synthetic that was designed ONLY to test whether async compute is present (parallel execution) or absent (serial, additive time result).

The only conclusion you can draw from that is that NV GPUs cannot actually process graphics & compute asynchronously while GCN can, ASSUMING the program is functional as indicated and it's not an error in programming.

Async compute is working on GCN in that test. Nvidia's hardware may be completely incapable, or it may simply not be enabled by the driver.

The test, as written, shows that GCN has a problem with kernel initialization but can process a lot of kernels simultaneously. The Fury X also seems to have much lower pixel performance.
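
Just to make the serial-vs-parallel point concrete, here's a rough C++ timing model (illustrative numbers only; the 7.6ms is the compute figure quoted above, the graphics time is made up; this is not the benchmark's code):

Code:
#include <algorithm>
#include <cstdio>

int main() {
    // Illustrative per-pass GPU times, not measured values.
    const double graphics_ms = 10.0; // hypothetical graphics workload
    const double compute_ms  = 7.6;  // compute workload (figure quoted above)

    // Serial execution: the queues are drained one after the other,
    // so the total time is additive.
    const double serial_ms = graphics_ms + compute_ms;          // ~17.6 ms

    // True async execution: the workloads overlap, so the total
    // approaches the longer of the two, plus scheduling overhead.
    const double async_ms = std::max(graphics_ms, compute_ms);  // ~10.0 ms

    std::printf("serial: %.1f ms, async: %.1f ms\n", serial_ms, async_ms);
    return 0;
}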

Ultimately, though, AMD has gone for the large-shader-array approach. Fiji has a lot of shader power and may do quite well in future games.

As far as DX12 games go, all current GPU architectures are suboptimal in some way. I do not expect a well-suited GPU architecture for DX12 until AMD's or Nvidia's next generation. AMD is quite likely to do better, but buying any GPU today specifically for DX12 is foolish.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
As far as DX12 games go, all current GPU architectures are suboptimal in some way. I do not expect a well-suited GPU architecture for DX12 until AMD's or Nvidia's next generation. AMD is quite likely to do better, but buying any GPU today specifically for DX12 is foolish.

Not really. If you are buying, you might as well add it to your list of considerations. It might even be smart to dump a Kepler or sub-980 GPU now rather than risk lower resale value later.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
But only if you expect every game from here on out to be a clone of Ashes.

According to Oxide's dev, Ashes isn't particularly an async compute showcase; it uses a moderate amount of it, but he acknowledges that console devs are dumping even more compute asynchronously than their bench does. So, depending on the engine and the dev (e.g. Unreal Engine games would probably not even support this, considering its bias), you might expect even greater gains from async compute.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Why are people taking feature-level support to mean hardware support? Since when does supporting DX 12.1 mean that your architecture is optimized for it?

This is getting more and more interesting, and it is of course playing out right before the holiday season, which makes it even more so.
 

eddman

Senior member
Dec 28, 2010
239
87
101
I suppose AnandTech should get on this issue ASAP.

The news is in the wild and people LOVE to hate and jump to conclusions.

I'm quite interested to see what is really going on.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I suppose AnandTech should get on this issue ASAP.

The news is in the wild and people LOVE to hate and jump to conclusions.

I'm quite interested to see what is really going on.
No offense to AnandTech, but if we waited for them, the issue would be over, lol. AnandTech has been late to the party for a while now, sadly.

I'm excited to see the other outlets give us early preliminary analysis, but I hope AnandTech gives us information junkies more in-depth coverage.
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
According to Oxide's dev, Ashes isn't particularly an async compute showcase; it uses a moderate amount of it, but he acknowledges that console devs are dumping even more compute asynchronously than their bench does. So, depending on the engine and the dev (e.g. Unreal Engine games would probably not even support this, considering its bias), you might expect even greater gains from async compute.

The consoles are forced to use it because of their very weak hardware.
On the desktop the CPU/GPU won't even break a sweat; a desktop GPU can run it serially and be several times faster.

Ashes is a game developed for the desktop, and seriously, the AMD 390X gets double the FPS with DX12 (reaching/matching the GTX 980), so what is this gain based on if not async? I don't consider doubled FPS a "moderate" anything.
 

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
The consoles are forced to use it because of their very weak hardware.
On the desktop the CPU/GPU won't even break a sweat; a desktop GPU can run it serially and be several times faster.

Agreed, but as we get closer to real DX12 games and get ready for 4K resolutions, it will be harder for PC GPUs to keep up with rendering, and that's when async compute comes in (in PC land).
Let's not downplay its importance, please.

On the other side, Nvidia might have decided not to implement async compute in its current lineup because, by the time DX12 games are released, Pascal will be here, and I bet they'll address it by then.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
I'm very curious what AMD has in store with Arctic Islands. Could we see a major ramping up of async compute capability? Or maybe something else "hidden" in DX12 that gets exposed with Arctic Islands?

Async compute is more a patch for a somewhat broken system than a feature.
Here an Intel employee explains what an ideal pipeline should look like:

Not to get too into the weeds here, but what exactly is the benchmark doing? You have to be a bit careful when "testing async compute" as the notion isn't even a well-defined concept. Depending on the specifics of the load on the various units on the machine, certain architectures may or may not get performance benefits from async queues, but it's certainly not as simple as "supported or not". Similar notion with async copy as well: you're providing the driver/hardware with some additional parallelism that it may be able to make use of to improve performance, but it depends highly on the specific characteristics of both the net workloads running on the machine at the time, and the architecture.

And let's remember, an ideal architecture would not require additional parallelism to reach full throughput, so while the API is nice to have, seeing "no speedup" from async compute is not a bad thing if it's because the architecture had no issues keeping the relevant units busy without the additional help. It is quite analogous to CPU architectures that require higher degrees of multi-threading to run at full throughput vs. ones with higher IPC.

https://forum.beyond3d.com/threads/dx12-performance-thread.57188/page-5#post-1868411
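
For what it's worth, the "additional parallelism" he's talking about is what D3D12 exposes through a separate compute queue. A minimal sketch, assuming a valid device already exists (error handling omitted; not production code):

Code:
// Minimal D3D12 sketch: create a graphics queue and a separate compute
// queue. Assumes a valid ID3D12Device*; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT; // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute/copy only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to the two queues *may* overlap on the GPU, but
    // whether it actually does is up to the driver/hardware -- which is
    // exactly the point being made above.
}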
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
Agreed, but as we get closer to real DX12 games and get ready for 4K resolutions, it will be harder for PC GPUs to keep up with rendering, and that's when async compute comes in (in PC land).
Let's not downplay its importance, please.

I'm not trying to downplay it. Claiming that because a very weak console makes heavy use of it, your $500 GPU will be worthless as soon as it comes to the desktop, is playing it up.

Claiming that since a $450 card caught up to a $500 card because of async, the $500 card is now worthless (or on the fast track to becoming worthless), is also playing it up.
 

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
Claiming that because a very weak console makes heavy use of it, your $500 GPU will be worthless as soon as it comes to the desktop, is playing it up.

If you're referring to what Robert Hallock said, I can see where you're coming from, since he works for AMD. Both companies do it, and it's pure PR, nothing more.

In the end, two wrongs don't make a right.
 

NomanA

Member
May 15, 2014
128
31
101
Even if this were true, isn't that all we are really interested in on this forum anyway? dGPUs? I really don't see people battling over iGPUs/APUs that much in here.
And "shipped"? Nvidia shipped 80% of dGPUs? Well, what percentage did AMD ship? Can it be more than 20%? I don't see how.

The point is that the 80/20 divide is often mentioned to show the status of the current dGPU market. That's incorrect, because these are quarterly shipment numbers, not the install-base situation for, say, all DX11 GPUs.

The most these quarterly numbers could shift the overall market corresponds to a scenario where every single new dGPU sold goes to someone who was using an earlier dGPU (which is then promptly thrown away instead of handed down). With normal sales and newly built machines added into the mix, the quarterly share's impact on the overall dGPU market is averaged out even further.
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
If you're referring to what Robert Hallock said, I can see where you're coming from, since he works for AMD. Both companies do it, and it's pure PR, nothing more.

In the end, two wrongs don't make a right.
And what exactly was I wrong about?
Stating that the consoles have weak hardware?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Async compute is more a patch for a somewhat broken system than a feature.

Some well-known game developers disagree with Intel. It's interesting how Intel even tries to get in on the discussion of graphics, considering they have never made a good GPU in their lives and their current drivers/GUI are abysmal.

Crytek:
"Crytek Comments On PS4's GPU Compute, Can Be An 'Incredible Performance Win'"

Eidos Montreal - Rise of the Tomb Raider:

"Rise of the Tomb Raider uses async compute to render breathtaking volumetric lighting on Xbox One. Of all the rendering techniques used in the game, the most fascinating is its use of asynchronous compute for the generation of advanced volumetric lights. For this purpose, the developer has employed a resolution-agnostic voxel method, which allows volumetric lights to be rendered using asynchronous compute after the rendering of shadows, with correctly handled transparency composition."

Uncharted 4 by Naughty Dog:
Is that why arguably the best looking game on PS4 is rumored to have the highest use of Asynchronous Compute out of any PS4 game? (Part 1 and Part 2)

Intel's theories do not match reality. Intel might think there is a better way to make a DX12 game, but the reality is that using the AC engines on PS4 provides a huge boost in graphical capabilities.

Oxide clearly stated that they tried to use AC on NV hardware, since this feature was exposed in the drivers, but when they did, performance was abysmal. OTOH, it helped improve performance on GCN products. What that tells us is that if Kepler/Maxwell had strong asynchronous compute engine(s), those GPUs would also have received an additional performance boost for free!

Oxide has stressed that on the console side, the boost from AC on DX12 can be up to 30% on the GPU side vs. running a conventional DX11 path.

"Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware."
http://www.dsogaming.com/news/oxide...to-disable-certain-settings-in-the-benchmark/
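
The vendor-specific path they describe boils down to reading the adapter's PCI vendor ID and branching on it. A hypothetical sketch of that mechanism via DXGI (not Oxide's actual code; the function name is made up):

Code:
// Hypothetical illustration of the vendor-ID check Oxide describes --
// not their actual code.
#include <dxgi.h>

bool ShouldDisableAsyncCompute(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);

    const UINT kVendorIdNvidia = 0x10DE; // NVIDIA's PCI vendor ID
    // Per Oxide: the driver reported async compute as functional, but
    // using it was a disaster for performance and conformance, so they
    // fall back to a serial path on this vendor's hardware.
    return desc.VendorId == kVendorIdNvidia;
}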

More analysis is required on future brand-agnostic (non-GameWorks) DX12 games to see which path developers choose to take with asynchronous compute engines and DX12. Right now it looks like NV got caught with its pants down, advertising a supposedly more advanced DX12 feature set than GCN's while the current NV hardware appears far less able to actually take advantage of DX12. No wonder nV is in full PR damage control with Oxide. That's why it's absurd that certain people have blatantly ignored actual GPU horsepower and VRAM capacity and instead have been trolling the forums for the last 12 months about HDMI 2.0, 4K HEVC and Maxwell's DX12.1 code path, while ignoring that in the sub-$270 bracket, GCN 1.0-1.2 has been wiping the floor with NV's budget cards at every price level when it comes to price/performance and VRAM capacity.

Anyone who went out of his/her way to recommend the GTX 960 over the 280X/290 in the last 12 months to save $50 will have his reputation on the line if more brand-agnostic DX12 games expose weaknesses in Maxwell. Certainly, right now the safest bet is GPU performance + VRAM capacity. In those metrics, Maxwell loses to GCN in the sub-$270 space, so the choice of which GPU to pick should be easy for most gamers.

http://wccftech.com/amd-nvidias-maxwell-is-utterly-incapable-of-performing-async-compute/

I know it's WCCF, but it's good to read about the matter from AMD. It's quite funny that they are the ones doing the explaining for Nvidia.

If more developers confirm this to be true, this is the worst GPU news of 2015, because NV's Fermi/Kepler/Maxwell hardware, coupled with NV's 75%+ market share and PaidWorks, will mean the extra performance asynchronous compute brings won't be realized for a while.

"Oxide effectively summarized my thoughts on the matter. NVIDIA claims “full support” for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching."

I want to see some more DX12 games and interviews/feedback from the actual developers on this issue, though.
I can't wait for Pascal and AMD's 16nm HBM2 GPUs, because this generation has been nothing but a disaster in terms of promises and under-delivering from both camps.

On the other side, Nvidia might have decided not to implement async compute in its current lineup because, by the time DX12 games are released, Pascal will be here, and I bet they'll address it by then.

That would mean NV expects its owners to upgrade GPUs every new generation (every 2 years)? I have no problem with that if they are transparent about it, but instead they/their focus-group members kept hyping up DX12 and a supposedly more extensive feature set as somehow superior to GCN for DX12 games and more future-proof. Sounds like misleading marketing again if their hardware lacks one of the key features DX12 takes advantage of: asynchronous compute.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106

Fottemberg actually has his own site, Bits & Chips, so he claims additional knowledge from industry sources. Something else he said was that once DX12 came around, everything was going to hit the fan because of nVidia's lack of support for certain features. He never elaborated, but maybe he was right.

If this is Maxwell's level of support, I wonder how Kepler is going to do.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Some well-known game developers disagree with Intel. It's interesting how Intel even tries to get in on the discussion of graphics, considering they have never made a good GPU in their lives and their current drivers/GUI are abysmal.

[...]

I think you're misunderstanding. Asynchronous compute lets you use compute cores that are idle because they're waiting on other parts of the pipeline to finish. Asynchronous compute is a good thing, obviously, but a perfect system would not have idle time that you need to specifically target.
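
To put rough, purely made-up numbers on that:

Code:
#include <cstdio>

// Toy occupancy model with made-up numbers: the upper bound on the
// "free" speedup async compute can give is the idle time it can fill.
int main() {
    const double busyA = 0.70; // arch A: graphics keeps units 70% busy
    const double busyB = 0.98; // arch B: already near full occupancy

    std::printf("arch A: up to %.0f%% idle time to reclaim\n", (1.0 - busyA) * 100);
    std::printf("arch B: up to %.0f%% idle time to reclaim\n", (1.0 - busyB) * 100);

    // A small async gain on arch B doesn't make it the worse design --
    // it may simply have had no idle bubbles left to fill.
    return 0;
}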

I don't think he's actually trying to advertise Intel hardware, though; I think it's more of an observation about all current GPUs.

And Intel does actually make some pretty good GPUs now; they're just heavily limited by power and by what they can actually cram into the die.
 