[PCWorld] Total War: Warhammer DX12 Benchmarks


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Doom isn't a GameWorks title, but this IS a Gaming Evolved one (at least according to the link); you can't say that a Gaming Evolved title wouldn't be more optimized toward AMD's advantages like compute and DX12 async features.

Of course TW: Warhammer is optimized for DX12, compute shaders and GPU multi-threading. That's because strategy games have the biggest CPU bottlenecks, caused by a huge number of draw calls that have traditionally pegged 1-2 CPU cores at 100%. We should be thankful developers are using DX12 more and more, so that 6-10 core CPUs see a boost and slower/older CPUs see a boost too. Also, AMD's advantages under DX12, Async Compute and compute shaders are all open source and are the future of PC games. Hopefully Volta gets onboard the DX12 train so that we can leave the inefficient DX11 API behind.
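The draw-call point can be sketched with a toy model. Everything here is illustrative: the per-call cost and call count are made-up numbers, not real D3D timings; the point is only that DX11-style single-threaded submission serializes on one core, while DX12-style command lists can be recorded on several.

```python
# Toy model of CPU-side draw-call submission (illustrative numbers only).
# DX11-style: one thread records every draw call.
# DX12-style: N threads each record a share into their own command list,
# so wall-clock recording time divides roughly by N.

COST_PER_CALL_US = 2.0  # hypothetical CPU cost to record one draw call, in microseconds

def submit_time_ms(draw_calls: int, threads: int = 1) -> float:
    """Wall-clock time to record all draw calls, split evenly across threads."""
    calls_per_thread = -(-draw_calls // threads)  # ceiling division
    return calls_per_thread * COST_PER_CALL_US / 1000.0

# Strategy games issue huge batch counts, so the difference is dramatic:
dx11 = submit_time_ms(20_000, threads=1)
dx12 = submit_time_ms(20_000, threads=4)
print(f"single thread: {dx11:.1f} ms, four threads: {dx12:.1f} ms")
```

Under these made-up numbers, one thread needs 40 ms just to record the frame (far over a 16.7 ms / 60 fps budget), while four threads need 10 ms, which is why the poster's "1-2 pegged cores" bottleneck eases under DX12.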

http://forums.anandtech.com/showthread.php?p=38238126#post38238126

More and more developers are using these features. Just because NV completely mistimed the rate at which DX12 games would start coming out on the market doesn't mean AMD is somehow cheating. AMD just made a more forward-looking architecture for lower-level APIs and Direct Compute, and then doubled down on async with the PS4/XB1 consoles. We knew this when Eric Demers presented the reasoning for why AMD moved away from VLIW to GCN, all the way back in 2011! AMD literally changed the course of PC game development and DX12 with Mantle and GCN consoles, but it just took years for the strategy to bear fruit. Now AMD's old 2013 Hawaii cards based on "outdated" GCN 1.1 architecture are looking extremely powerful once a modern DX12 API lets them flex their full/true power that was gimped by DX11's inefficient draw calls.

NV just got caught with its pants down, not seeing the direction of PC gaming back in 2011-2012. Kepler and Maxwell were designed for DX11. Pascal tries to address some of these flaws with preemption, but we should see the real deal in Volta. Right now NV is crushing AMD due to a next-generation architecture, GDDR5X and 1.7-1.85GHz boost clocks on the 1080. A brute-force approach.

Seeing the 390/390X beating the 980 in DX12 is insane, because Hawaii was designed as a 780/780 Ti competitor. In modern 2015-2016 games it now destroys them, and with DX12 the cards are making the 980 look bad. Amazing how much more forward-looking Hawaii was; and it was a 438mm2 die against the 780 Ti's 561mm2.

Vega with HBM2 should be a power house for future DX12 games.

I also find it ironic that the same people hyping up 6-10 core BW-E are crapping all over almost all DX12 games. DX12 and Vulkan are what's needed to actually start recommending 6-10 core CPUs in the first place. Otherwise, a max overclocked 6700K will smoke every BW-E CPU in 99% of DX9-DX11 games.

Anyone who wants to take advantage of 6-12 core CPUs in games should be 100% behind Vulkan and DX12. Better yet, criticize NV for poor DX12/Async hardware support and maybe they'll actually focus on this with Volta, making it a much better DX12 GPU architecture. DX12 is here to stay, which means it's in our best interests that more games use it, forcing adoption of W10 and development of even faster DX12 graphics cards. Sooner or later, DX11 will be outdated, just like pre-DX11 APIs faced their doom over time. Time to move on and embrace DX12/Vulkan.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,355
642
121
They even provided the build that was used for testing. Couldn't possibly pick a more pro-AMD scenario. From the article:



Also the curious omission of GM200 and Fiji, and select resolutions tested for select cards.

Maybe NVidia should work with more developers then....
 

brandonmatic

Member
Jul 13, 2013
199
21
81
GTX 1080:


I'm not sure, but it looks like possible thermal throttling from the 1080. Performance might be more consistent with better cooling (or maybe optimized drivers). But the Founders Edition reference cooler doesn't seem to be doing the job.
 

David_k

Member
Apr 25, 2016
70
1
41
Maybe NVidia should work with more developers then....

But then everyone will cry that nVidia owns the market, and will curse gimpworks when his 290X is behind the 780 Ti with default tessellation @ OVER 9000x.
 

BlitzWulf

Member
Mar 3, 2016
165
73
101
I'm not sure, but it looks like possible thermal throttling from the 1080. Performance might be more consistent with better cooling (or maybe optimized drivers). But the Founders Edition reference cooler doesn't seem to be doing the job.

That's definitely a possibility.
I've seen some reviews with some 1080s throttling down to base clock under extended load in some games.

I'm really curious to see what kind of clocks the AIBs can get out of these chips with improved power delivery, and what the power draw and heat will be like.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
$599 for custom cards, and $379 for (custom) Geforce GTX 1070. And sorry, I am not impressed considering this is clearly yet another AMD showcase title and NVIDIA hasn’t released drivers optimized for the game yet.

Dude, AMD hasn't released optimized drivers either.

Check this out!
Author here. Straight from AMD to my inbox to this Reddit post:
"With Total War: WARHAMMER, Radeon users can enjoy the benefits of Async Compute, a feature that supercharges work completion in a compatible Radeon GPU by interleaving tasks across multiple threads to shorten overall render time."
It also supports DX12's explicit multi-GPU.

https://www.reddit.com/r/Amd/comments/4kae18/total_war_warhammer_directx_12_performance/d3dmrae
 

jpiniero

Lifer
Oct 1, 2010
14,840
5,456
136
Drivers aren't going to help much with DX12. The developer is fully in control.

Now AMD's old 2013 Hawaii cards based on "outdated" GCN 1.1 architecture are looking extremely powerful once a modern DX12 API lets them flex their full/true power that was gimped by DX11's inefficient draw calls.

It's probably more raw compute power than Async Compute, etc.

390X = 5.9 TF SP
980 = 4.6
1080 = 8.2 (it could be throttling as mentioned)
1070 = 6.5
Full P10 = 6.6?

Just based upon that, you would have to think that both the 1070 and Full P10 would get in the mid 60s at 1440p.
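That extrapolation can be written out as simple linear-by-TFLOPS scaling. The TFLOPS figures are the ones listed in the post; the 59 fps baseline for the 390X is a hypothetical placeholder chosen purely to illustrate the arithmetic, not a number from the article.

```python
# Back-of-the-envelope FPS estimate by linear single-precision TFLOPS scaling,
# as in the post above. Baseline FPS is a hypothetical placeholder.

TFLOPS = {"390X": 5.9, "980": 4.6, "1080": 8.2, "1070": 6.5, "Full P10": 6.6}

def estimate_fps(card: str, ref_card: str, ref_fps: float) -> float:
    """Scale a reference card's FPS by the ratio of SP TFLOPS."""
    return ref_fps * TFLOPS[card] / TFLOPS[ref_card]

# If (hypothetically) the 390X managed 59 fps at 1440p, linear scaling puts
# both the 1070 and a full Polaris 10 in the mid 60s, matching the guess above.
for card in ("1070", "Full P10"):
    print(card, round(estimate_fps(card, "390X", 59.0), 1))
```

Of course this ignores clocks, memory bandwidth, throttling and architectural differences; it is only the "just based upon that" arithmetic made explicit.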
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Drivers aren't going to help much with DX12. The developer is fully in control.



It's probably more raw compute power than Async Compute, etc.

390X = 5.9 TF SP
980 = 4.6
1080 = 8.2 (it could be throttling as mentioned)
1070 = 6.5
Full P10 = 6.6?

Just based upon that, you would have to think that both the 1070 and Full P10 would get in the mid 60s at 1440p.
Agreed here, and it's what I've observed for some time. Async isn't a magic bullet that increases compute performance. It allows use of resources that would have gone unused, allowing better utilization of available compute.

It should be noted that async would likely benefit cards with more cores more than those that go for fewer cores and ratchet up clock speeds instead. Even if Nvidia were to have built async to the same spec as AMD, the 1080 wouldn't benefit quite as much as, say, a Fury X.
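That "fills otherwise-idle resources" idea can be shown with a toy frame-time model. All numbers are illustrative guesses: the model assumes async can only hide compute work inside whatever fraction of the graphics pass the shader array sits idle, so a wide GPU with more idle units gains more than a narrow, high-clocked one that is already well fed.

```python
# Toy utilization model: async compute soaks up idle shader cycles during the
# graphics pass instead of adding raw power. Illustrative numbers only.

def frame_time_ms(gfx_ms: float, compute_ms: float, idle_fraction: float,
                  async_on: bool) -> float:
    """Without async, compute runs serially after graphics; with async,
    compute overlaps the graphics pass up to the available idle time."""
    if not async_on:
        return gfx_ms + compute_ms
    hidden = min(compute_ms, gfx_ms * idle_fraction)  # work overlapped "for free"
    return gfx_ms + (compute_ms - hidden)

# A wide GPU with 30% idle shader time hides more of a 4 ms compute pass
# than a narrow, high-clocked GPU with only 10% idle time.
wide   = frame_time_ms(12.0, 4.0, 0.30, async_on=True)
narrow = frame_time_ms(12.0, 4.0, 0.10, async_on=True)
print(f"wide: {wide:.1f} ms, narrow: {narrow:.1f} ms, serial: 16.0 ms")
```

Under these made-up figures the wide GPU drops from 16 ms to about 12.4 ms while the narrow one only reaches about 14.8 ms, which is the post's point: the benefit scales with how much hardware was sitting idle.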
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
Good to see AMD get more devs on their side. They've got to build up a good library to start changing perception among regular gamers.

Need to get more popular games though. Hopefully Deus Ex turns some more heads for them. NV is running amok uncontested.
The important distinction here is that "AMD optimized" games seem to be leveraging next-gen technology like DX12 and Async Compute, not unnecessary levels of tessellation and a 3-minute turbo boost in order to cheat the benchmarks.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
But then every one will cry that nVidia owns the market. and will curse gimpworks.. when his 290X is behind the 780 ti with default tessellation @ OVER 9000X

i.e. DOOM, a non-Gameworks game dragged through the mud because AMD couldn't get their ship in order at launch.

Either way, AMD needs Gaming Evolved to combat Gameworks. I say let them fight it out. If I have to own two GPUs, trust me, I will.

The important distinction here is that "AMD optimized" games seem to be leveraging next-gen technology like DX12 and Async Compute, not unnecessary levels of tessellation and a 3-minute turbo boost in order to cheat the benchmarks.

So tessellation, hardly something new for DX11, was talked up to the clouds by AMD when they had the advantage, then buried when NV started to tessellate road barriers and unseen oceans?

Okay. Guess when one company promotes its strengths it's "OMG, awe-inspiring," and when the other one does it's "y u broke mah gaem?!"

Face it, if both companies want to flex their muscles and consumers go with it, you should get ready to buy a second GPU like I will.


Maybe this is why Win10 and DX12 allows multiple GPUs. :sneaky: MSFT you rascals!
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
It also supports DX12's explicit multi-GPU.

Nice, not just Ashes then. That's something for a lot of people to be excited about. Total War should have a larger audience that cares about that.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
So tessellation, hardly something new for DX11, was talked up to the clouds by AMD when they had the advantage, then buried when NV started to tessellate road barriers and unseen oceans?

Okay. Guess when one company promotes its strengths it's "OMG, awe-inspiring," and when the other one does it's "y u broke mah gaem?!"

Face it, if both companies want to flex their muscles and consumers go with it, you should get ready to buy a second GPU like I will.
There is a huge ethical difference between overloading a game with unneeded tessellation (wasting energy in the process) and leveraging a tech like Async Compute to offload the CPU and lower game latency.

Compute shaders are a good idea; invisible tessellation or over-tessellation has no benefit.
 
Last edited:

railven

Diamond Member
Mar 25, 2010
6,604
561
126
There is a huge ethical difference between overloading a game with unneeded tessellation (wasting energy in the process) and leveraging a tech like Async Compute to offload the CPU and lower game latency.

Ethical? Did they deceive you? Is NV's hardware not better at tessellation than AMD's?

Is AMD's hardware not better at Async Compute than Nvidia's?

Roger roger. Good to see AMD is ethical by having their devs use a feature NV bombs out on.

No wait, that's a good business move! (Until NV does get proper Async Compute, and if it works better than AMD's, I hope you aren't posting about ethics if games use too much Async Compute from that point forward.)
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
Ethical? Did they deceive you? Is NV's hardware not better at tessellation than AMD's?

Is AMD's hardware not better at Async Compute than Nvidia's?

Roger roger. Good to see AMD is ethical by having their devs use a feature NV bombs out on.

No wait, that's a good business move! (Until NV does get proper Async Compute, and if it works better than AMD's, I hope you aren't posting about ethics if games use too much Async Compute from that point forward.)
Both Nvidia and AMD customers were cheated by over-tessellation.

- AMD customers' graphics cards produced lower FPS for no actual improvement in visuals or gameplay. Remove the unnecessary tessellation and the game looks the same yet runs way better.

- Nvidia Kepler users got hit just as hard.

- Everyone, including Maxwell users, wasted more power for no FPS or artistic value. Maxwell just took less of a performance hit.

Meanwhile AMD is gaining ground on legitimate technical merits with Async Compute. Native Async Compute isn't about gimping the other cards; it's about making the cards designed for it run more efficiently. There is a huge difference between the two.

Also, it's not like Nvidia discovered tessellation and decided "wow, cool new tech, let's use the shit out of it." AMD (ATI) supported tessellation way before Nvidia, back in 2001.
 
Last edited:

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Both Nvidia and AMD customers were cheated by over-tessellation.

Cheated? I actually enjoyed the tessellation in Batman games. And I had a Radeon 5870 then.

- AMD customers' graphics cards produced lower FPS for no actual improvement in visuals or gameplay. Remove the unnecessary tessellation and the game looks the same yet runs way better.

Seems to be AMD's problem. They added a slider. We all use it.

- Nvidia Kepler users got hit just as hard.

Kepler handled it better than Fermi, and Maxwell better than Kepler, and most likely Pascal better than Maxwell.

When the hardware improves, they use it in more things. Amazing.

- Everyone, including Maxwell users, wasted more power for no FPS or artistic value. Maxwell just took less of a performance hit.

Everyone? I used it for The Witcher 3. I enjoyed it. So much for everyone.

Meanwhile AMD is gaining ground on legitimate technical merits with async compute. Native Async Compute isn't about gimping the other cards, it's about making the cards designed for it run more efficiently. There is a huge difference between the two.

Kudos for AMD, keep it up. Use your strengths. If NV isn't good at it, beat them over the head with it. Not sure why you argue ethics. /shrug

Also, it's not like Nvidia discovered tessellation and decided "wow, cool new tech, let's use the shit out of it." AMD (ATI) supported tessellation way before Nvidia, back in 2001.

Who said they did? I'm well aware who promoted it first. I used Radeons exclusively from 1997 until 2014.

If AMD is going to be more aggressive with their tech - about freaking time.
 

sirmo

Golden Member
Oct 10, 2011
1,014
391
136
Cheated? I actually enjoyed the tessellation in Batman games. And I had a Radeon 5870 then.
I feel like you're not informed about the issue we're arguing over. There is nothing wrong with tessellation. I am talking about invisible tessellation designed purely to overload graphics cards. I could write an essay about it here, but AdoredTV has already done it in video form, so why not take a look:

https://www.youtube.com/watch?v=O7fA_JC_R5s
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
Ethical? Did they deceive you? Is NV's hardware not better at tessellation than AMDs?

Is AMD's hardware not better at Async Compute than Nvidias?

Roger roger. Good to see AMD is ethical by having their devs use a feature NV bombs out.

No wait, that's a good business move! (Until NV does get proper Async Compute and if it works better than AMDs, I hope you aren't posting about ethics if games use too much async compute from that point forward.)

Apples and kumquats.

Excessive amounts of tessellation decrease performance for no visible benefit.

Async Compute increases performance and shows a direct in-game benefit on compliant hardware.

I'm not sure that it's possible to use too much Async Compute. As I understand it, it's only used if the resources are available; if they're not, the normal order is used.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
EDIT:
I feel like you're not informed about the issue we're arguing over. There is nothing wrong with tessellation. I am talking about invisible tessellation designed purely to overload graphics cards. I could write an essay about it here, but AdoredTV has already done it in video form, so why not take a look:

https://www.youtube.com/watch?v=O7fA_JC_R5s

I feel like you are ignoring my argument. I also had a 7970 during Crysis 2 and its now-legendary ocean. I used the slider to turn down tessellation and happily played. I didn't rage across forums that NV was hindering my experience because they can do tessellation better and are thus using more of it (needless or not).

Anyways, this is pointless to continue. AMD does no wrong, ever. I keep forgetting that (probably tattooed somewhere on my body, haha).

Laters!

Apples and kumquats.

Excessive amounts of tessellation decrease performance for no visible benefit.

Async Compute increases performance and shows a direct in-game benefit on compliant hardware.

This is getting tiresome. If AMD uses its strengths, that's good (to you); if NV does, it's bad (to you). Because reasons.

I get it. People don't like the short end of the stick. The exact same thing is happening with Async Compute, just with the roles reversed. Yet it's "apples to oranges, derps!!!!"

/facepalm

I'd love to see both companies duke it out using their strengths. And we're getting there.

I'm not sure that it's possible to use too much Async Compute. As I understand it, it's only used if the resources are available; if they're not, the normal order is used.

How ironic. In another post you tell me "oh, it's possible" to defend your stance, but here it's "I don't think it's possible."

Oh well, this is already far off topic, so I'll just bow out.



On Topic:
Kudos AMD, keep fighting the good fight!
 
Last edited:

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
How ironic. In another post you tell me "oh, it's possible" to defend your stance, but here it's "I don't think it's possible."

Oh well, this is already far off topic, so I'll just bow out.



On Topic:
Kudos AMD, keep fighting the good fight!
Seems to me, in layman's terms, that Async Compute on GCN isn't far off from Intel's HT implementation. If so, the gains could very well be capped at around 20~30%, and the lack of duplicated resources makes sense: why waste die area on specific tasks that games may or may not use? It also depends on Nvidia getting on board with developers!
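The HT analogy can be put into rough numbers. If async only reclaims cycles that would otherwise be idle (rather than adding execution units), the best-case gain is bounded by the idle fraction. The idle percentages below are illustrative guesses, not measured figures.

```python
# If Async Compute behaves like SMT/HT -- filling stalls rather than adding
# execution units -- the speedup is bounded by the idle time it can reclaim.
# Idle fractions below are illustrative, not measurements.

def capped_speedup(idle_fraction: float) -> float:
    """Best-case fractional throughput gain if every idle cycle is reclaimed."""
    busy = 1.0 - idle_fraction
    return 1.0 / busy - 1.0

for idle in (0.15, 0.20, 0.25):
    print(f"{idle:.0%} idle shader time -> at most {capped_speedup(idle):.0%} gain")
```

With 15-25% of shader time idle, the ceiling works out to roughly 18-33%, which is consistent with the 20~30% cap suggested above.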
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,063
7,489
136
So long as you can turn off whatever offending feature to get both vendors to the same baseline, I say game on, AMD/NV. I confess, after years of seeing AMD play Mr. Nice Guy (aka the technically inferior poor guy), I am heartily enjoying all the whining about "AMD-slanted games."

Let's not kid ourselves with the whole "taking the high road" thing either. AMD finally has its ducks in a row and can turn the screws on NV. Is this Lisa "We're not a value brand" Su's doing? Whatever; I hope to see it continue.

NV has been playing chess while AMD has been playing checkers; nice to see them both playing the same game.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
That's funny. You've said nothing I didn't already know. You know what...nevermind.

Do you honestly not see a difference between a feature that creates geometry that goes undisplayed but still has to be rendered (over-tessellation) and a feature that lets the GPU better utilize otherwise-unused resources while it does other work?

One is only there to harm performance.

The other is there to add performance

Blame Nvidia for not following core DX12 specs and claiming they support it.



I see Maxwell claiming it has Async Compute support; ask Nvidia where the drivers are to enable it.

Fun FACT of the day: Async Compute is NOT enabled on the driver-side with public Game Ready Drivers. You need app-side + driver-side!

https://twitter.com/PellyNV/status/702556025816125440

Stop blaming developers for using core DX12 features to make their games run better. That is the whole point of using DX12: optimization!
 