ComputerBase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks

Page 37 - AnandTech Forums
Feb 19, 2009
10,457
10
76
But what's different from DX10/11, where the outcome turned out entirely different? All your data depends on a single benchmark, while you already dismiss the other benchmark as "Nvidia-favoured".

I didn't disapprove of Fable at all. I own an R290X now in my main gaming rig. I had many R290s.

It's great that my R290X is able to beat the 980 in Fable. If that's DX12, I'm happy with that. Think about the crap performance of GCN in UE4 under DX11, then compare it to UE4 under DX12. That's a huge deal.

But I think you will be disappointed if you think that's all there is. Once Fable gets closer to finished, with a more complex scene with NPCs, actual gameplay, spell effects and dynamic foliage based on compute... I have a strong feeling we will see a repeat of Ashes alpha vs. beta 2.

You remember that one, right? In the Ashes alpha, the Fury X and 980 Ti were neck and neck; it turns out async compute was disabled because NV could not compete and requested it be disabled! -_- The fact that Oxide actually complied with NV's request to gimp performance on AMD is amazing, yet some people here still accuse Oxide of being biased against NV.
 
Last edited:
Feb 19, 2009
10,457
10
76
And what is wrong with "64 factors of it to flat surfaces just because" and "let's compute nonsense like a missile for every unit"?!

Really?

Are you comparing the zero visual difference of tessellating a flat surface to the max with actual dynamic lights on a missile in flight? REALLY?!
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I didn't disapprove of Fable at all. I own an R290X now in my main gaming rig. I had many R290s.

It's great that my R290X is able to beat the 980 in Fable. If that's DX12, I'm happy with that. Think about the crap performance of GCN in UE4 under DX11, then compare it to UE4 under DX12. That's a huge deal.

But I think you will be disappointed if you think that's all there is. Once Fable gets closer to finished, with a more complex scene with NPCs, actual gameplay, spell effects and dynamic foliage based on compute... I have a strong feeling we will see a repeat of Ashes alpha vs. beta 2.

You remember that one, right? In the Ashes alpha, the Fury X and 980 Ti were neck and neck; it turns out async compute was disabled because NV could not compete and requested it be disabled! -_- The fact that Oxide actually complied with NV's request to gimp performance on AMD is amazing, yet some people here still accuse Oxide of being biased against NV.

The bench didn't have a 980. Also, did you notice VR? A GTX 980 beating a Nano, and a 980 Ti equal to a Nano CF setup while leaving single Furys in the dust? Despite all the boasting about how AMD would be the best there as well? And yes, those numbers come from AMD.

http://vrworld.com/2016/02/23/valve-vr-test-nvidia-geforce-wins-against-amd-radeon/

But let's call it a day and continue as DX12 titles come out. Quantum Break is a good six weeks or so away, so we won't have to wait that long.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Really?

Are you comparing the zero visual difference of tessellating a flat surface to the max with actual dynamic lights on a missile in flight? REALLY?!

No, I think there is no difference between them. Every strategy game has "dynamic lights on a missile".

And when you look at videos of this game, the "dynamic light" doesn't do anything to the environment: https://www.youtube.com/watch?v=D5Q1QDzQPgM
 
Feb 19, 2009
10,457
10
76
No, I think there is no difference between them. Every strategy game has "dynamic lights on a missile".

And when you look at videos of this game, the "dynamic light" doesn't do anything to the environment: https://www.youtube.com/watch?v=D5Q1QDzQPgM

You're entitled to your opinion, but I think there's a clear difference between having lights on the projectiles, lasers and explosions, and a flat surface being tessellated.

Pretty obvious:
https://youtu.be/ZepjUuYEjmo?t=1m5s

Now look at your desk: it's a flat surface. Let's make it "flatter" by upping the surface triangle count; just imagine it.
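The cost asymmetry being argued here is easy to put rough numbers on. A quad patch tessellated uniformly at factor f produces on the order of 2·f² triangles, so factor 64 on a flat surface burns thousands of triangles for zero visual change. A sketch of that back-of-the-envelope model (the real D3D11 tessellator's output varies with partitioning mode, so treat this as an approximation, not hardware-accurate counts):

```python
def quad_patch_triangles(factor: int) -> int:
    """Approximate triangle count for one quad patch tessellated
    uniformly at the given factor: f x f cells, 2 triangles each.
    Real D3D11 tessellator output varies with the partitioning mode;
    this is a rough model, not an exact count."""
    return 2 * factor * factor

# Factor 1 (no tessellation) vs. the DX11 maximum factor of 64.
for f in (1, 8, 64):
    print(f"factor {f:2d}: {quad_patch_triangles(f):,} triangles per patch")
```

On this model, cranking a flat surface from factor 1 to factor 64 multiplies the triangle load by 4096x per patch, which is the whole point of the complaint above.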
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
The bench didn't have a 980. Also, did you notice VR? A GTX 980 beating a Nano, and a 980 Ti equal to a Nano CF setup while leaving single Furys in the dust? Despite all the boasting about how AMD would be the best there as well? And yes, those numbers come from AMD.

http://vrworld.com/2016/02/23/valve-vr-test-nvidia-geforce-wins-against-amd-radeon/

But let's call it a day and continue as DX12 titles come out. Quantum Break is a good six weeks or so away, so we won't have to wait that long.

Does this benchmark measure latency?

And why would anybody buy 2x Nano (CF) for VR?

since I don’t think anyone will be buying two Nanos to run in Crossfire
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
No, there is no difference at all. Using tessellation on something to hurt the competition and using compute on something to hurt the competition is the same thing.

There is no reason to compute every "dynamic light" when it is just an animation with a simple glow effect.

Every strategy game has lights - Supreme Commander: https://youtu.be/iVXm3Eb7zaY?t=142
I don't even know why Ashes looks so bad for a 2016 game. Even Supreme Commander is at least on the same level and doesn't need "compute for everything"...
 
Last edited:
Feb 19, 2009
10,457
10
76
Does this benchmark measure latency?

Tom's Hardware did the latency test last year.

The program did not use LiquidVR or GameWorks VR, just straight up normal rendering.

It was horrible for all GPUs: 30-40ms motion-to-photon lag, well above the recommended 20ms. Even by NVIDIA's own standard.

http://www.geforce.com/whats-new/ar...us-the-only-choice-for-virtual-reality-gaming

The standard VR pipeline from input in (when you move your head) to photons out (when you see the action occur in-game) is about 57 milliseconds (ms). However, for a good VR experience, this latency should be under 20ms.

Here's the real kicker: NV is actually telling people their GPUs aren't capable of a "good VR experience". For real? Yes.

Combined, and with the addition of further NVIDIA-developer tweaks, the VR pipeline is now only 25ms.

A motion-to-photon latency of ~10ms for LiquidVR with Async Compute is the figure that was thrown around by various folks in the industry.
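The figures above frame VR latency as a simple budget: every pipeline stage adds delay, and the sum has to land under the ~20ms comfort threshold. A quick sketch of that accounting; only the 57ms/25ms totals and the 20ms target come from the quoted article, while the per-stage splits below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope motion-to-photon budget check. The 57 ms and
# 25 ms totals and the 20 ms target come from NVIDIA's article quoted
# above; the per-stage breakdowns are made-up illustrative splits.
VR_TARGET_MS = 20.0  # commonly cited comfort threshold

def motion_to_photon(sensor_ms, cpu_ms, gpu_render_ms, scanout_ms):
    """Total latency is just the sum of the pipeline stages."""
    return sensor_ms + cpu_ms + gpu_render_ms + scanout_ms

naive = motion_to_photon(5, 16, 25, 11)   # assumed split summing to 57 ms
tuned = motion_to_photon(2, 8, 10, 5)     # assumed split summing to 25 ms

for name, total in (("naive", naive), ("tuned", tuned)):
    verdict = "OK" if total <= VR_TARGET_MS else "over budget"
    print(f"{name}: {total} ms -> {verdict}")
```

Note that even the tuned 25ms pipeline is still over the 20ms target, which is exactly the point being made in the post.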
 
May 11, 2008
20,041
1,289
126
Likely. Let's not forget that the 390 has a handful more ROPs than the 970, but the 980 Ti has a full 50% more than the Fury X. That's pixel pushing power. That's a big reason (others being VRAM and OC ability) that I think the Fury X can never catch a 980 Ti.

Even if you take the finest of the battle tested Hawaii, the 390X, it on average trades blows with the 980 - which is impressive given its older architecture and lower price. Essentially they are a wash. The 980 Ti has 37.5% more shaders, 50% more ROPs, 50% more bandwidth, and 50% more memory than the 980. It's a radically faster GPU in every way that counts. Fury X offers 45% more shaders, 0% more ROPs, 33% more bandwidth, and HALF of the memory compared to the 390X. What makes you think it should keep up with the 980 Ti when it cannot match the same gains over the lower card?

It's common sense to see the 970 with its crippled ROPs, bandwidth, and memory lose to the 390. It's not a stretch to see the 390X match the 980 since the 390X and 390 are far closer to each other than the 970 and 980. But it would take a miracle for the Fury X to really be faster than the 980 Ti. We'd need to see the 390 truly running over the 970 making it look like a value card, and we'd need to even see the 390X a full tier or more above the 980. It's asking too much.

And let's not forget that most comparisons of a Fury X and 980 Ti are both at stock. Even if most people do not overclock, the numerous factory-OC versions at almost no price premium offer a 5% to nearly 25% performance advantage, and the Fury X cannot close this gap (and if it partially does thanks to a consumer OC, remember that even the factory-OC 980 Ti's can be OC'd further).

980 Ti will always be the faster card pending a true AMD driver miracle that makes Never Settle look meaningless. And that's ok. If your budget is $650+ then you buy Nvidia right now, and there's not much reason not to for the average consumer unless they also want to buy a cheaper Freesync monitor to go with it.

It's the below-$600 field that AMD is looking fine in, and that needs to be the headline here. In DX11 games on average we have already seen the 390 pull ahead of the 970, the 380 standing tall over the 960 (not to mention the 380X), the 390X fiercely battling the more expensive 980, etc. The news should be that DX12 may extend this gradual lead even more. And since the bulk of Nvidia's dGPU profit in the past year and a half likely comes from 970 and 960 sales, this is what you need to stress to consumers if you want a return to at least the old 60-40 market split.

If I look at the graphs from the OP and put it all in perspective with the Fury X and the 980 Ti, it means that AMD with "less" hardware is only one to two fps slower out of a total of at least 48 to 63 fps.
I think AMD is doing really well.

Of course, OC headroom and everything can be important.
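The spec-scaling argument quoted above (37.5% more shaders, 50% more ROPs, and so on) is easy to sanity-check with a quick script. The figures below are the commonly published reference specs for these cards (shader count, ROPs, memory bandwidth in GB/s, VRAM in GB), quoted from memory, so treat them as approximate:

```python
# Sanity-check the percentage deltas from the post above against
# published specs (approximate reference values, from memory).
specs = {
    "GTX 980":    dict(shaders=2048, rops=64, bw_gbs=224, mem_gb=4),
    "GTX 980 Ti": dict(shaders=2816, rops=96, bw_gbs=336, mem_gb=6),
    "R9 390X":    dict(shaders=2816, rops=64, bw_gbs=384, mem_gb=8),
    "Fury X":     dict(shaders=4096, rops=64, bw_gbs=512, mem_gb=4),
}

def delta(hi, lo, key):
    """Percentage advantage of card `hi` over card `lo` for one spec."""
    return 100.0 * (specs[hi][key] - specs[lo][key]) / specs[lo][key]

for key in ("shaders", "rops", "bw_gbs", "mem_gb"):
    print(f"980 Ti vs 980  {key:8s}: {delta('GTX 980 Ti', 'GTX 980', key):+6.1f}%")
    print(f"Fury X vs 390X {key:8s}: {delta('Fury X', 'R9 390X', key):+6.1f}%")
```

Running this reproduces the post's numbers: the 980 Ti gains 37.5%/50%/50%/50% over the 980, while the Fury X gains only 45%/0%/33%/-50% over the 390X, which is the core of the argument that Fiji's ROP and VRAM deficits cap it below the 980 Ti.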
 
Feb 19, 2009
10,457
10
76
No, there is no difference at all. Using tessellation on something to hurt the competition and using compute on something to hurt the competition is the same thing.

There is no reason to compute every "dynamic light" when it is just an animation with a simple glow effect.

Every strategy game has lights - Supreme Commander: https://youtu.be/iVXm3Eb7zaY?t=142
I don't even know why Ashes looks so bad for a 2016 game. Even Supreme Commander is at least on the same level and doesn't need "compute for everything"...

That's an epic battle in Supreme Commander, nicely done. In Ashes, there are many battles like that, all over a much larger map.

There are reasons to use compute for effects, or all developers would have stuck to plain sprites. I don't pretend to know more than these guys, who are at the forefront of tech. They were, after all, the same bunch who first used DX11 multithreaded rendering, massively boosting the performance of NV GPUs in Civ 5. You guys didn't complain about them then.
 

Dygaza

Member
Oct 16, 2015
176
34
101
This game is really beautiful at 4K, really impressive. Some textures could still use a bit of tweaking, but they're getting there. And all those particle effects and light shows in fights are just incredible. If some people don't like it, they are simply entitled to their opinion.
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
That's an epic battle in Supreme Commander, nicely done. In Ashes, there are many battles like that, all over a much larger map.

Supreme Commander has large maps, too...

There are reasons to use compute for effects, or all developers would have stuck to plain sprites. I don't pretend to know more than these guys, who are at the forefront of tech. They were, after all, the same bunch who first used DX11 multithreaded rendering, massively boosting the performance of NV GPUs in Civ 5. You guys didn't complain about them then.

"Sprites"? We are more than a decade past sprites...
The graphics in Ashes are just outdated. Here is a screenshot from Supreme Commander 2:


Released six years ago, using DX9, it has better texture quality, and the shadows are leagues better. And DX9 didn't support anything like DirectCompute back then...
 

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
Saw it coming way back. And NV got away with lying about hardware specs again, dominating market share.

By the time Pascal launches, they can simply say "With Pascal, we've improved DX12 Async Compute support at the hardware level over Maxwell", and folks will upgrade in droves.

Fottemberg later said not to count on improved Asynchronous Compute capabilities in Pascal.

And if Mahigan is right - and he is right - that Pascal is just Maxwell with FP64, that may be the case.

Expect double the performance of Maxwell, though.

P.S. It will be quite hilarious to see a 6144 CUDA core Pascal GPU with a 600mm2 die size next to a 6144-core GCN4 GPU with a 350mm2 die size.

It is only an analogy, but don't be surprised if it actually turns into reality.
 

Ext3h

Junior Member
Feb 27, 2016
4
0
0
And if Mahigan is right - and he is right - that Pascal is just Maxwell with FP64, that may be the case.

Expect double the performance of Maxwell, though.
Careful about that. The "doubling the performance" part might be true for *some* Pascal chips, especially in combination with the increased FP64 throughput and the introduction of FP16 and improved 8-bit integer arithmetic, but there's a chance that those Pascal cards are not going to be GPUs.

There is also the pure-compute Tesla lineup of cards, which used to be regular GPUs; Nvidia might have finally eliminated the majority of the 3D-graphics-related circuitry on Pascal-based Tesla cards, decreasing size and increasing efficiency and performance that way. That would unfortunately mean you can't use that line of chips as GPUs, and the actual Pascal-based GPUs would achieve neither the efficiency nor the performance Nvidia is claiming for the Tesla models. (Not to mention that there could just as well be one high-performance and one power-efficient Pascal chip; the manufacturing process used for Pascal allows trading size, and thus performance via more cores, for power efficiency, and vice versa.)

Think of it as cherry-picking the most favorable specifications across the entire lineup. So far we think we have more information on Pascal than on Polaris, but we actually just got thrown a few rough numbers, each one deliberately given only weak context and not attributable to a specific chip, so that we would make the mistake of implicitly projecting them onto the entire lineup. Without NV technically needing to spread false information.

Not that they would ever speak up if someone unintentionally overstates the capabilities of their hardware. Only ever if someone understates.
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
There is also the pure-compute Tesla lineup of cards, which used to be regular GPUs; Nvidia might have finally eliminated the majority of the 3D-graphics-related circuitry on Pascal-based Tesla cards, decreasing size and increasing efficiency and performance that way. That would unfortunately mean you can't use that line of chips as GPUs.

Something like this might well happen. That would support the "GP102" rumors floating around, with that being the largest actual GPU chip and GP100 being the compute-only Tesla chip. But if that's true, it debunks all the talk about how it costs forty quadrillion dollars to tape out a new FinFET design. If it really were that insanely expensive, Nvidia would consolidate all these functions in one GP100 chip even if it weren't necessarily the most efficient use of space.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
This thread is not about market share and sales, but about Ashes of Singularity.

That is the topic here.

This thread has been the catch-all thread for "AMD is better at DirectX 12" for weeks, just like the last Ashes thread was the catch-all for "Nvidia doesn't support async in hardware." Honestly, I am not interested in the game, but I read the thread because THIS is where the discussion about the future of GPUs is happening.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
All for naught, it would seem given that NVIDIA is reporting record revenue and substantial y-o-y growth in its gaming GPU sales.

AMD's GPU revenue is so insignificant that the company lumps those sales in with their CPU sales figures, and combined they don't even add up to NVIDIA's quarterly GPU sales.

Hate to break it to everybody, but the majority of the gaming public does not give two craps about the fact that AMD has an upper hand in DX12 games. There are practically no major DX12 games out in the marketplace right now so it's not going to influence the buying decisions of the vast majority of the gaming population.

If DX12 games that utilize Async Compute are ubiquitous within a year or so and NVIDIA still hasn't addressed it in their GPU architecture, then this will be a legit selling point for AMD.
Your problem is that you see ONLY AMD and not the other big players:
Microsoft
Intel
AMD
Samsung
Sony
every ARM vendor
I could go on and name even more, but you get the point: eventually, trying to hold back a game or two, when most of them are headed into one homogeneous big system (mobile + consoles + PC), is and will be a money-wasting machine.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
This thread has been the catch-all thread for "AMD is better at DirectX 12" for weeks, just like the last Ashes thread was the catch-all for "Nvidia doesn't support async in hardware." Honestly, I am not interested in the game, but I read the thread because THIS is where the discussion about the future of GPUs is happening.

You mean how they purposely leave out the other DX12 features where Nvidia does much better? Wait till DX12 games with more focus on those arrive and Nvidia cards do much better. Async Compute isn't a universal feature for all games. It will all even out in the end.
Not really a level discussion, really.


http://cdn.wccftech.com/wp-content/uploads/2015/09/DirectX-12_GTX-980-Ti-and-Fury-X.png
 

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
So a demo made to fit in 4 Kbytes that works better on one card vs. the other is the latest "game changer"?

Can you translate it?

No, it is not a game changer. But it is still scary if it reflects what will happen when you add compute to a game engine, don't you think?
 

Pinstripe

Member
Jun 17, 2014
197
12
81
It's all irrelevant in practice, since the baseline for development will be consoles, and the compute capability of the PS4 and XBO remains minuscule compared to the Hawaii/Fury/Polaris monsters. You'll get some extra checkboxes in selected AMD-sponsored titles, like prettier hair and stuff, all five of them. Total game changer, yeah.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I see as many AMD fans as Nvidia fans in this thread; if not, this would be a one-page thread, not 43 pages long.

Way too much rhetoric here, way too much.

It's like a stupid religious war here.

I'm personally crossing my fingers that AMD returns to form with Polaris. Again, I'd buy a slower AMD card if it were priced appropriately (I had no issue doing that for over a decade, and also no issue giving AMD MORE money when they had the faster card).

But reading some of the pro-AMD comments lately, it's like people aren't living in this world where NV/Intel is destroying AMD. Woof. For AMD's sake (and its devoted users) I hope they hit it out of the park. Or we'll be seeing a bunch of new pro-console posters. Haha.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Can you translate it?

No, it is not a game changer. But it is still scary if it reflects what will happen when you add compute to a game engine, don't you think?

No, I don't. Because it's a demo made for size, not for optimization. You can most likely find other demos from the same event with the exact opposite outcome. It comes down to which card supports which instruction you may need for that single demo. It could, for example, be the same as mining on an NVidia card vs. AMD: all down to a couple of instructions making all the difference there.
 
Last edited: