Various Wolfenstein II Benchmarks


psolord

Platinum Member
Sep 16, 2009
2,015
1,225
136
They should've expanded the y-axis. That chart is almost unreadable.

The run-through is about 7 minutes long.

This is proper testing, mate, not the 30-60 second runs most sites do. Although that's understandable when you have to test 20 video cards on 10 CPUs.

The downside, however, is that when you pack 7 minutes' worth of benchmark data onto one chart, it becomes cluttered and very difficult to read.

That's why I posted the rest of the accompanying charts in #11. One chart to rule them all is just not possible.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
This is proper testing, mate, not the 30-60 second runs most sites do. Although that's understandable when you have to test 20 video cards on 10 CPUs.

The downside, however, is that when you pack 7 minutes' worth of benchmark data onto one chart, it becomes cluttered and very difficult to read.

That's why I posted the rest of the accompanying charts in #11. One chart to rule them all is just not possible.

How does that conflict with just expanding the y-axis? Double the range of the y-axis and the chart would instantly be far more readable.

Can't you see how this [chart] is actually telling you something, while this [chart] is practically just noise?
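
To make the point concrete, here's a minimal matplotlib sketch (the frametime numbers are invented, not [H]'s data) showing how the same trace reads on a squashed y-axis versus an expanded one:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented frametime trace: ~60 FPS baseline plus sporadic spikes.
rng = np.random.default_rng(0)
frametimes_ms = 16.7 + rng.normal(0.0, 1.5, 5000)
frametimes_ms[rng.integers(0, 5000, 40)] += 25.0  # occasional stutter

fig, (ax_wide, ax_zoom) = plt.subplots(2, 1, figsize=(10, 6), sharex=True)

# Squashed view: a huge y-range flattens the data into a thin band.
ax_wide.plot(frametimes_ms, lw=0.5)
ax_wide.set_ylim(0, 200)
ax_wide.set_ylabel("ms (0-200)")

# Expanded view: the same data, with the variance actually visible.
ax_zoom.plot(frametimes_ms, lw=0.5)
ax_zoom.set_ylim(10, 50)
ax_zoom.set_ylabel("ms (10-50)")
ax_zoom.set_xlabel("frame #")

fig.tight_layout()
plt.show()
```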
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
Still, quite a few people claimed that Pascal's solution was half-baked as well, specifically that it was rooted in software, which isn't true. This rumor was spread mostly by Mahigan, if you recall.

The notion that AMD GPUs age better is kind of true, and there are reasons for it. The biggest reason, of course, is that AMD GPUs share a similar architecture with the GPUs in the consoles. That's a big advantage, but it has taken a long time to manifest. Also, not every vendor is willing to up the ante when it comes to enabling console-style optimizations for AMD GPUs. Remarkably, NVidia is still easily capable of competing, thanks to their amazing software scheduler.

The second reason is that AMD GPUs always take a long time to reach their optimal performance through driver updates, which creates the illusion that they age better when in reality it just takes longer for their architecture to peak. NVidia is much faster than AMD at pushing driver updates that exploit a new architecture.

I'd wager that Vega at the end of its life cycle will be solidly outperforming the GTX 1080, despite being mostly slower today. The GTX 1080 has already topped out, but Vega still has room to grow. It won't reach GTX 1080 Ti levels of performance, though.

I can't believe you are still pushing this narrative. Nvidia THEMSELVES said their dynamic load balancing was in the drivers. That's the mechanism that allows AC to function in the first place. Mahigan was spot on, because he was repeating what Nvidia told everyone. Nvidia's front-end scheduling is driver-based, and then there is a hardware component after that.

We've known this for years, since before AC was even a thing. The reason Nvidia was so good in DX11 is that they effectively built a mini-OS inside the drivers that was multi-thread capable. It handles scheduling and a few other clever things, like rewriting optimized shaders on the fly, and it uses CPU cycles that DX11 leaves available. It's exactly why Nvidia only reaches DX11 parity, or slightly better, in next-gen APIs: they don't see nearly the gains that AMD does. It's exactly why, when there is a lot of compute back pressure, they start to choke, whereas AMD actually likes back pressure to stay fed. It's exactly why Nvidia is CPU-bound at lower resolutions in the few games where Vega 64 beats the 1080 Ti. On top of that, you can just look at the block diagrams: AMD's ACEs are their front-end scheduling. Nvidia has nothing like them in hardware; it's done in their drivers.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
How does that conflict with just expanding the y-axis? Double the range of the y-axis and the chart would instantly be far more readable.

Can't you see how this [chart] is actually telling you something, while this [chart] is practically just noise?
The second chart is not bad, actually; it has a much longer x-axis.
Vega is much better in terms of minimums; it doesn't stutter as much.
Vega looks like the winner in this game to me. The 1080 Ti is all over the place, and that's not good.
 

amenx

Diamond Member
Dec 17, 2004
4,013
2,285
136
The second chart is not bad, actually; it has a much longer x-axis.
Vega is much better in terms of minimums; it doesn't stutter as much.
Vega looks like the winner in this game to me. The 1080 Ti is all over the place, and that's not good.
Where is the figure for minimums? Or frametimes? That's just an FPS chart over 17,000 frames. The reviewer says under the chart:
The average framerate doesn’t tell the whole story. There is a section of the game, in the beginning of the run-through, that runs a lot smoother on GeForce GTX 1080 Ti. It seems Radeon RX Vega 64 struggles at the beginning, and GTX 1080 Ti is much more playable, once we get past that part then performance jumps up. The scene is where you step off the boat for the first time looking out across Manhattan. That large draw distance seems to be a burden on Vega 64, and you can feel the framerates lag. Once you get into more close quarters combat then the performance jumps up for most of the run-through. There’s one other area where it drops between 40-50 FPS, once again a long draw distance.

Overall, for 4K, we’d stick with the GeForce GTX 1080 Ti for the best gameplay experience. While Vega 64 is good in a lot of parts, there are a few places it chugs at 4K, unless you turn down the quality a notch. Compared to GTX 1080 though, Vega 64 is clearly better.

https://www.hardocp.com/article/2017/11/13/wolfenstein_ii_new_colossus_performance_review/8
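
For what it's worth, if a review only publishes the raw per-frame trace, minimums and 1% lows are easy to derive yourself. A rough sketch, assuming a plain-text log with one frametime in milliseconds per line (the filename and format here are hypothetical):

```python
import numpy as np

# Hypothetical input: a plain-text log with one frametime (ms) per line,
# as produced by FRAPS/OCAT-style capture tools.
frametimes_ms = np.loadtxt("vega64_4k_frametimes.txt")

avg_fps = 1000.0 * len(frametimes_ms) / frametimes_ms.sum()
min_fps = 1000.0 / frametimes_ms.max()

# "1% low": average FPS over the slowest 1% of frames (one common definition).
n_worst = max(1, len(frametimes_ms) // 100)
worst_frames = np.sort(frametimes_ms)[-n_worst:]
low_1pct_fps = 1000.0 / worst_frames.mean()

print(f"avg {avg_fps:.1f} FPS, min {min_fps:.1f} FPS, 1% low {low_1pct_fps:.1f} FPS")
```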
 

psolord

Platinum Member
Sep 16, 2009
2,015
1,225
136
How does that conflict with just expanding the y-axis? Double the range of the y-axis and the chart would instantly be far more readable.

Can't you see how this [chart] is actually telling you something, while this [chart] is practically just noise?

I see what you mean. If the Y (vertical) axis were twice as tall, it would offer more granularity. However, the same problem would persist, since everything would just be stretched out twice as much on the Y axis.

The main problem here is that the settings they are using are just too demanding. 4K at Mein Leben is too much, which is why the frametimes are spiking like crazy. It's the same as the spiking on my 970 run.

However, due to the big performance delta between my systems, the frametimes are much easier to distinguish. The three cards [H] is testing cannot distance themselves from each other in the same pronounced way the 1070 does from the 970.

Thinking about it, more granularity on the X axis would be more helpful, since it shows the run time. The longer the graph, the less cluttered it would be. It would need wider monitors, however, to be displayed correctly.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I can't believe you are still pushing this narrative. Nvidia THEMSELVES said their dynamic load balancing was in the drivers. That's the mechanism that allows AC to function in the first place. Mahigan was spot on, because he was repeating what Nvidia told everyone. Nvidia's front-end scheduling is driver-based, and then there is a hardware component after that.

What you are saying is essentially impossible. Dynamic load balancing requires driver input, of course, as the driver is the nexus between the hardware and the API/game. But in the end it is all done in hardware, as that is the only way the GPU can react fast enough to run concurrent graphics and compute workloads.

And Mahigan was way off. He was peddling all over the net that Pascal was incapable of doing true asynchronous compute (whatever that means), and that, faced with a heavy graphics-plus-compute scenario, it would effectively crap itself. That's where the entire Futuremark Time Spy fiasco came from: he criticized Futuremark by claiming their asynchronous workload was kept minimal to preserve Pascal's performance and stop it from tanking.

Of course, none of that turned out to be true, as Wolfenstein 2 shows: it has very heavy compute workloads and supports asynchronous compute, yet Pascal still gains performance when it is enabled. The fact is, NVidia's asynchronous compute solution is very effective and rivals AMD's, likely with a smaller die-space penalty.

We've known this for years, since before AC was even a thing. The reason Nvidia was so good in DX11 is that they effectively built a mini-OS inside the drivers that was multi-thread capable. It handles scheduling and a few other clever things, like rewriting optimized shaders on the fly, and it uses CPU cycles that DX11 leaves available. It's exactly why Nvidia only reaches DX11 parity, or slightly better, in next-gen APIs: they don't see nearly the gains that AMD does. It's exactly why, when there is a lot of compute back pressure, they start to choke, whereas AMD actually likes back pressure to stay fed. It's exactly why Nvidia is CPU-bound at lower resolutions in the few games where Vega 64 beats the 1080 Ti. On top of that, you can just look at the block diagrams: AMD's ACEs are their front-end scheduling. Nvidia has nothing like them in hardware; it's done in their drivers.

This is all just pure speculation. Nobody other than NVidia understands the full extent of how their software instruction scheduler works and how it affects performance.
 
Last edited:

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
What you are saying is essentially impossible. Dynamic load balancing requires driver input, of course, as the driver is the nexus between the hardware and the API/game. But in the end it is all done in hardware, as that is the only way the GPU can react fast enough to run concurrent graphics and compute workloads.

And Mahigan was way off. He was peddling all over the net that Pascal was incapable of doing true asynchronous compute (whatever that means), and that, faced with a heavy graphics-plus-compute scenario, it would effectively crap itself. That's where the entire Futuremark Time Spy fiasco came from: he criticized Futuremark by claiming their asynchronous workload was kept minimal to preserve Pascal's performance and stop it from tanking.

Of course, none of that turned out to be true, as Wolfenstein 2 shows: it has very heavy compute workloads and supports asynchronous compute, yet Pascal still gains performance when it is enabled. The fact is, NVidia's asynchronous compute solution is very effective and rivals AMD's, likely with a smaller die-space penalty.



This is all just pure speculation. Nobody other than NVidia understands the full extent of how their software instruction scheduler works and how it affects performance.

You are arguing about something that Nvidia has already told everyone how it works. Just because you don't know how it works doesn't make it untrue. It's also a known fact that Nvidia's AC solution is inferior to AMD's: anything beyond "async light" implementations (MS games, Time Spy, etc.) chokes Nvidia cards before AMD's. It's been tested. It's been verified by game devs. The amount of back pressure matters; you just never saw the data or the forum posts. Furthermore, you are arguing against a point that is quite obviously proving out right now: we are waiting for Nvidia to provide a driver specifically so that AC works correctly.
 
Reactions: kondziowy

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You are arguing about something that Nvidia has already told everyone how it works.

And where was this?

It's also a known fact that Nvidia's AC solution is inferior to AMD's: anything beyond "async light" implementations (MS games, Time Spy, etc.) chokes Nvidia cards before AMD's.

So Doom had an "async light" implementation? Because before Wolfenstein 2, Doom was held up by AMD fans as the pinnacle of asynchronous compute implementations. And last time I checked, NVidia's asynchronous compute works fine in that game, and NVidia has the fastest GPUs overall. Same with Gears of War 4, which has an excellent asynchronous compute implementation.

We are waiting for Nvidia to provide a driver specifically so that AC works correctly.

Um, AC was working correctly from launch; only with the November 7 patch was it disabled on NVidia hardware. There are likely some stability issues with it, but I don't think NVidia can be blamed for that.

This game is in many ways a disaster on NVidia hardware, replete with stability issues. It looks like Machine Games didn't optimize for NVidia at all during development, since the game was heavily sponsored by AMD. Well, I hope AMD paid them a lot of money for that, because the vast majority of their customers are running NVidia hardware, and many can't even get the game to work properly.

At a certain point in the game, I had the infamous "crash dump error" and I was unable to continue the game until the November 7 patch fixed it.
 
Last edited:
May 11, 2008
20,068
1,295
126
The problem with some Nvidia owners is that, in their view, a game crash can never be a bug in the driver, because Nvidia is infallible. It is always the game, never Nvidia. Sigh...
 
Reactions: kawi6rr and raghu78

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
The game worked great for me on my GTX 1060. I think I had one or two issues in about 10 hours of playtime. Never a crash or game-breaking issue.
 

pj-

Senior member
May 5, 2015
481
249
116
The problem with some Nvidia owners is that, in their view, a game crash can never be a bug in the driver, because Nvidia is infallible. It is always the game, never Nvidia. Sigh...

Wolfenstein 2 is the first game since I built my current PC over two years ago that has actually caused my entire computer to crash.

It's possible that the root cause was the driver, but it seems pretty sloppy for the developer not to have noticed it beforehand and given Nvidia time to fix it before release.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
I can't believe you are still pushing this narrative. Nvidia THEMSELVES said their dynamic load balancing was in the drivers.

False. Nvidia's dynamic load balancing is in hardware. Whitepaper: http://international.download.nvidi...al/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf

Nvidia's front-end scheduling is driver-based, and then there is a hardware component after that.

False again. Nvidia provides scheduling hints at shader compile time. Scheduling is done in a hardware scheduler.

Nvidia's AC solution is inferior to AMD's: anything beyond "async light" implementations (MS games, Time Spy, etc.) chokes Nvidia cards before AMD's.

Perhaps you should study what Asynchronous Compute is. Or to quote Anandtech -
for async’s concurrent execution abilities to be beneficial at all, there needs to be idle time bubbles to begin with. Throwing compute into the mix doesn’t accomplish anything if the graphics queue can sufficiently saturate the entire GPU

So this "inferior" solution is because Nvidia does a better job keeping their GPUs filled with graphics work. You know, the stuff gamers care about.

Stop spreading lies and misinformation.
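
That AnandTech quote can be turned into a toy model (purely illustrative arithmetic, not real GPU scheduling): overlapping compute with graphics only pays off to the extent that there are idle bubbles for the compute work to fill.

```python
# Toy model: per-frame cost when compute can only hide inside the idle
# "bubbles" the graphics queue leaves behind. Numbers are illustrative.

def serial_ms(graphics_busy, idle_bubbles, compute):
    # No async: the compute work simply runs after the graphics frame.
    return graphics_busy + idle_bubbles + compute

def async_ms(graphics_busy, idle_bubbles, compute):
    # Async: compute fills the bubbles; only the leftover spills past the frame.
    spill = max(0.0, compute - idle_bubbles)
    return graphics_busy + idle_bubbles + spill

for bubbles in (0.0, 2.0, 5.0):
    s = serial_ms(14.0, bubbles, 4.0)
    a = async_ms(14.0, bubbles, 4.0)
    print(f"{bubbles:4.1f} ms of bubbles: serial {s:5.1f} ms, "
          f"async {a:5.1f} ms, gain {100.0 * (s - a) / s:4.1f}%")
```

With zero bubbles, async buys nothing; the bigger the bubbles, the bigger the gain, which is the quote's point in miniature.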
 
Reactions: dogen1 and Carfax83
May 11, 2008
20,068
1,295
126
Wolfenstein 2 is the first game since I built my current PC over two years ago that has actually caused my entire computer to crash.

It's possible that the root cause was the driver, but it seems pretty sloppy for the developer not to have noticed it beforehand and given Nvidia time to fix it before release.

It happens.
It can be that the driver provides functions that do not behave according to what is written in the specification from the Nvidia driver developers. It can be some corner case.
I had some issues with Wolfenstein: The New Order and my RX 480 card.
Luckily, someone (a great person) on the Steam forum posted that it would work with a specific driver, though I noticed it came at the expense of reduced performance.
Of course I went to the AMD community for help, but my own search on Steam solved the issue.

Recently:
After the Windows update to 1709, my Start menu would not work right away after logging in, or would just stop functioning after a while.
There was another issue I had as well, with the console (login screen). It is too early to be sure, but ever since I disabled the console lockout timeout function, I have not had any issues with the Start menu or search box. It is too early to tell if that really was, and is, the solution (knocks on wood and hopes that it is), but it has already been a week since I last had the issue.
Just saying it is not that easy.

Forgot to mention that after playing a game a week ago, after the 1709 update, my computer would ignore CTRL+ALT+DEL and any restart or shutdown request, whether through the GUI or the command shell.
That was the first time in at least 3 years that I had to use the reset button. Go figure.
 
Last edited:

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Where is the figure for minimums? Or frametimes? That's just an FPS chart over 17,000 frames. The reviewer says under the chart:
What do the blue dips below 40-50 FPS mean, then?
How many red dips do you see?

There is this data on one side and the testimony you quoted on the other. Which one do you choose?
For me, it is hard to argue with the data.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The problem with some Nvidia owners is that, in their view, a game crash can never be a bug in the driver, because Nvidia is infallible. It is always the game, never Nvidia. Sigh...

Driver crashes are pretty distinct. Usually the game will lock up and freeze completely, without a crash, BSOD, or black screen. If the game actually crashes due to a driver error and the driver recovers, it will generate an error in Windows that can be seen in the Event Viewer.

None of those things happened. What happened to me is that the game froze with the audio still running. When I hit CTRL+ALT+DELETE to escape and close it with Task Manager, I saw an error that said "Could not write crash dump." So to me that looks more like a game error than a driver error.

Especially when you consider that I never had this error until I got to a specific location in the game, and the November 7 patch actually fixed it while I was still on the same drivers I had used since launch. Only with yesterday's 388.31 release did I update my drivers. I played the vast majority of the game on the 388.10 and 388.13 drivers with no problems until I got to that specific area, and then it took a week for them to release the November 7 patch, which ended up fixing it.
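
On the Event Viewer point above: a TDR recovery ("Display driver stopped responding and has successfully recovered") is logged in the Windows System log as Event ID 4101 from the "Display" source, so you can check for one from a script. A quick sketch using the stock wevtutil tool (Windows only; purely illustrative):

```python
import subprocess

# TDR recoveries land in the Windows System log as Event ID 4101,
# source "Display". Query the five most recent ones, newest first.
query = "*[System[Provider[@Name='Display'] and (EventID=4101)]]"
result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{query}", "/c:5", "/rd:true", "/f:text"],
    capture_output=True, text=True,
)
print(result.stdout or "No TDR events found.")
```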
 
May 11, 2008
20,068
1,295
126
Driver crashes are pretty distinct. Usually the game will lock up and freeze completely, without a crash, BSOD, or black screen. If the game actually crashes due to a driver error and the driver recovers, it will generate an error in Windows that can be seen in the Event Viewer.

None of those things happened. What happened to me is that the game froze with the audio still running. When I hit CTRL+ALT+DELETE to escape and close it with Task Manager, I saw an error that said "Could not write crash dump." So to me that looks more like a game error than a driver error.

Especially when you consider that I never had this error until I got to a specific location in the game, and the November 7 patch actually fixed it while I was still on the same drivers I had used since launch. Only with yesterday's 388.31 release did I update my drivers. I played the vast majority of the game on the 388.10 and 388.13 drivers with no problems until I got to that specific area, and then it took a week for them to release the November 7 patch, which ended up fixing it.

Do you know what they patched, and why they had to patch it?
In other words, what went wrong?
That is the really interesting question.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
http://international.download.nvidi...al/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf

False again. Nvidia provides scheduling hints at shader compile time. Scheduling is done in a hardware scheduler.

Perhaps you should study what Asynchronous Compute is. Or to quote Anandtech -


So this "inferior" solution is because Nvidia does a better job keeping their GPUs filled with graphics work. You know, the stuff gamers care about.

Stop spreading lies and misinformation.

Nvidia's driver, where dynamic load balancing lives (https://www.youtube.com/watch?v=Bh7ECiXfMWQ), is a traffic cop. The AMD equivalent is the ACEs. Both companies' drivers track the amount of work coming in so they can tell their scheduling front ends how many resources to assign for job completion. The difference is that Nvidia's main scheduler is also in the driver, while AMD sends the work to the ACEs. Think about this: you wouldn't be able to adjust workloads at any other point in either company's solution if there weren't a traffic cop figuring out how to manage the stream of incoming jobs. At a high level, all these architectures do is take a job and break it down into smaller pieces for execution. You can see in the block diagrams how a job flows from big to small.

As far as Nvidia doing a better job at keeping their GPU full, there is some important context. AMD's issue is that they designed an architecture for an API that didn't exist until Mantle/DX12/Vulkan. DX11 at its core is single-threaded, and all of the optional band-aids MS added never changed that reality. It's simply incapable of feeding GCN fast enough. That's why GCN got an immediate 20% uplift from next-gen APIs: the API threading improved, and so did the throughput. Again, Nvidia's DX11 driver is a multithreaded mini-OS. That's why it took them three years to write their famous performance driver; that was what was required to support the optional multithreading part of DX11, and it was completely unacceptable from a programming standpoint. Drivers should not under any circumstances be responsible for that. It's exactly why AMD created Mantle, to force the API change. Nvidia should have followed AMD, but they did it anyway for the advantage. Nvidia does a lot of things they shouldn't.
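
The single-threaded-submission argument can be illustrated with a toy model (this is not real driver code; it only shows why serializing per-draw-call CPU work on one thread caps throughput, and why spreading command recording across threads, as DX12/Vulkan allow, helps):

```python
import time
from concurrent.futures import ThreadPoolExecutor

DRAW_CALLS = 2000
CPU_COST_S = 0.0005  # pretend per-draw-call CPU cost in the driver/runtime

def record(call_id):
    # Stand-in for validating/translating one draw call on the CPU.
    time.sleep(CPU_COST_S)
    return call_id

# DX11-style: one immediate context, every call serialized on one thread.
t0 = time.perf_counter()
for i in range(DRAW_CALLS):
    record(i)
serial_s = time.perf_counter() - t0

# DX12/Vulkan-style: command recording spread across worker threads.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(record, range(DRAW_CALLS)))
parallel_s = time.perf_counter() - t0

print(f"single-threaded: {serial_s:.2f}s, multithreaded: {parallel_s:.2f}s")
```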
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
False again. Nvidia provides scheduling hints at shader compile time. Scheduling is done in a hardware scheduler.

Perhaps you should study what Asynchronous Compute is. Or to quote Anandtech -


So this "inferior" solution is because Nvidia does a better job keeping their GPUs filled with graphics work. You know, the stuff gamers care about.

Stop spreading lies and misinformation.

Nvidia's driver, where dynamic load balancing lives (https://www.youtube.com/watch?v=Bh7ECiXfMWQ), is a traffic cop. The AMD equivalent is the ACEs. Both companies' drivers track the amount of work coming in so they can tell their scheduling front ends how many resources to assign for job completion. The difference is that Nvidia's main scheduler is also in the driver, while AMD sends the work to the ACEs. Think about this: you wouldn't be able to adjust workloads at any other point in either company's solution if there weren't a traffic cop figuring out how to manage the stream of incoming jobs. At a high level, all these architectures do is take a job and break it down into smaller pieces for execution. You can see in the block diagrams how a job flows from big to small.

As far as Nvidia doing a better job at keeping their GPU full, there is some important context. AMD's issue is that they designed an architecture for an API that didn't exist until Mantle/DX12/Vulkan. DX11 at its core is single-threaded, and all of the optional band-aids MS added never changed that reality. It's simply incapable of feeding GCN fast enough. That's why GCN got an immediate 20% uplift from next-gen APIs: the API threading improved, and so did the throughput. Again, Nvidia's DX11 driver is a multithreaded mini-OS. That's why it took them three years to write their famous performance driver; that was what was required to support the optional multithreading part of DX11, and it was completely unacceptable from a programming standpoint. Drivers should not under any circumstances be responsible for that. It's exactly why AMD created Mantle, to force the API change. Nvidia should have followed AMD, but they did it anyway for the advantage. Nvidia does a lot of things they shouldn't.

You are wasting your time arguing with him.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
AMD's issue is that they designed an architecture for an API that didn't exist until Mantle/DX12/Vulkan

Yes, AMD designed a GPU in 2008 for APIs that came out seven years later. Nice crystal ball they have. If only that crystal ball could have seen a little further into the future and prevented the Vega disaster.

Ockham would have a different idea about it: more lies.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
DX11 at its core is single-threaded, and all of the optional band-aids MS added never changed that reality. It's simply incapable of feeding GCN fast enough.

Yet it can feed a 1080 Ti or a Titan fast enough. Or a null driver, for that matter.

You keep making things up and posting them as facts, just to be proven wrong.
 