They should've expanded the y axis. That chart is almost unreadable.
The run-through is about 7 minutes long.
This is proper testing, mate. Not the 30-60 seconds that most sites do. Although it is very understandable when you have to test 20 video cards on 10 CPUs.
The downside, however, is that when you pack 7 minutes' worth of benchmark data onto one chart, it does become cluttered and very difficult to read.
That's why I posted the rest of the accompanying charts in #11. One chart to rule them all is just not possible.
Still, quite a few people claimed that Pascal's solution was half-baked as well, specifically that it was rooted in software, which isn't true. This rumor was spread mostly by Mahigan, if you recall.
The notion that AMD GPUs age better is kind of true, and there are reasons for this. The biggest reason of course is that AMD GPUs share a similar architecture with the GPUs in the consoles. That's a big advantage, but it has taken a long time to manifest. Also, not every vendor is willing to up the ante when it comes to enabling console-style optimizations for AMD GPUs. Remarkably, NVidia is still easily capable of competing thanks to their amazing software scheduler.
The second reason is that AMD GPUs always take a long time to reach optimal performance over their life cycle through driver updates, which provides the illusion that they are aging better when in reality it is just taking longer for their architecture to peak. NVidia is much faster than AMD at pushing driver updates that exploit new architectures.
I'd wager that Vega at the end of its life cycle should be solidly outperforming the GTX 1080 despite being mostly slower today. GTX 1080 is already topped out, but Vega still has room to grow. It won't reach GTX 1080 Ti levels of performance though.
The second chart is not bad actually. It has a much longer x axis.
How does that conflict with just expanding the y-axis? Double the range of the y-axis and the chart would instantly be far more readable.
Can't you see how this [chart] is actually telling you something, while this [chart] is practically just noise?
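For what it's worth, the readability point is easy to reproduce: the same FPS trace plotted with different y-axis limits looks like a completely different chart. A minimal matplotlib sketch with invented data (not the review's numbers):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy FPS trace (made-up data): ~17,000 frames hovering around 60 FPS.
rng = np.random.default_rng(0)
frames = np.arange(17000)
fps = 60 + 8 * np.sin(frames / 900) + rng.normal(0, 4, frames.size)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6), sharex=True)

ax1.plot(frames, fps, linewidth=0.5)
ax1.set_ylim(0, 200)   # range much larger than the data: the trace becomes a flat smear
ax1.set_title("Poorly chosen y-axis range")

ax2.plot(frames, fps, linewidth=0.5)
ax2.set_ylim(40, 80)   # range fitted to the data: dips and trends are visible
ax2.set_title("Y-axis range fitted to the data")
ax2.set_xlabel("Frame")

for ax in (ax1, ax2):
    ax.set_ylabel("FPS")

plt.tight_layout()
plt.show()
```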
The second chart is not bad actually. It has a much longer x axis. Vega is much better in terms of minimums, it does not stutter that much. Vega appears to me to be a winner in this game. The 1080 Ti is all over the place. It's not good.
Where is the figure for minimums? Or frametimes? That's just an FPS chart over 17,000 frames. The reviewer says under the chart:
The average framerate doesn’t tell the whole story. There is a section of the game, in the beginning of the run-through, that runs a lot smoother on GeForce GTX 1080 Ti. It seems Radeon RX Vega 64 struggles at the beginning, and GTX 1080 Ti is much more playable, once we get past that part then performance jumps up. The scene is where you step off the boat for the first time looking out across Manhattan. That large draw distance seems to be a burden on Vega 64, and you can feel the framerates lag. Once you get into more close quarters combat then the performance jumps up for most of the run-through. There’s one other area where it drops between 40-50 FPS, once again a long draw distance.
Overall, for 4K, we’d stick with the GeForce GTX 1080 Ti for the best gameplay experience. While Vega 64 is good in a lot of parts, there are a few places it chugs at 4K, unless you turn down the quality a notch. Compared to GTX 1080 though, Vega 64 is clearly better.
https://www.hardocp.com/article/2017/11/13/wolfenstein_ii_new_colossus_performance_review/8
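On the question of where the minimums and frametimes are: those figures are normally derived from the per-frame log behind exactly this kind of FPS-over-frames chart. A rough sketch of one common calculation, assuming a hypothetical log of per-frame times (the filename and format are made up, not the reviewer's data):

```python
import numpy as np

# Hypothetical per-frame log: one frametime in milliseconds per line.
frametimes_ms = np.loadtxt("frametimes_ms.csv")

fps_per_frame = 1000.0 / frametimes_ms
avg_fps = fps_per_frame.mean()

# One common "1% low" definition: the FPS implied by the 99th-percentile
# frametime, i.e. the framerate you are at or above 99% of the time.
one_percent_low = 1000.0 / np.percentile(frametimes_ms, 99)
point1_percent_low = 1000.0 / np.percentile(frametimes_ms, 99.9)

print(f"Average FPS:  {avg_fps:6.1f}")
print(f"1% low FPS:   {one_percent_low:6.1f}")
print(f"0.1% low FPS: {point1_percent_low:6.1f}")
```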
I can't believe you are still pushing this narrative. Nvidia THEMSELVES said their dynamic load balancing is in the drivers. That's the mechanism that allows AC to function in the first place. Mahigan was spot on because he was repeating what Nvidia told everyone. Nvidia's front end scheduling is driver based, and then there is a hardware component after that.
We've known this for years, before AC was even a thing. The reason why Nvidia was so good in DX11 was because they effectively built a mini-OS inside the drivers that was multi-thread capable. It handles scheduling and a few other clever things, like rewriting optimized shaders on the fly. It uses the CPU cycles that DX11 leaves available. It's exactly why Nvidia only gets to DX11 parity or slightly faster in next-gen APIs: they don't see nearly the amount of gains that AMD does. It's exactly why, if there is a lot of compute back pressure, they start to choke, whereas AMD actually likes back pressure to stay fed. It's exactly why Nvidia is CPU bound at lower resolutions in the few games where Vega 64 beats the 1080 Ti. On top of that, you can just look at the block diagrams. AMD's ACEs are their front end scheduling. Nvidia has nothing like them in hardware; it's done in their drivers.
What you are saying is essentially impossible. Dynamic load balancing requires driver input of course, as the driver is the nexus between the hardware and the API/game. But it is definitely all done in hardware in the end as that is the only way the GPU can react fast enough to perform concurrent graphics and compute workloads.
And Mahigan was way off. He was peddling all over the net that Pascal was actually incapable of doing true asynchronous compute (whatever that means), and that if faced with a heavy graphics+compute scenario, would effectively crap itself. That's where the entire Futuremark TimeSpy fiasco came from. He criticized Futuremark by saying that their asynchronous workload was minimal in an effort to preserve Pascal's performance and stop it from tanking.
Of course, none of that turned out to be true, and this is shown by Wolfenstein 2, which has very heavy compute workloads and supports asynchronous compute, yet still gains performance on Pascal when it is enabled. The fact is, NVidia's asynchronous compute solution is very effective and rivals AMD's, likely with a smaller die space penalty.
This is all just pure speculation. Nobody other than NVidia understands the full extent of how their software instruction scheduler works and affects performance.
You are arguing a point about something that Nvidia has already explained how it works.
It's also a known fact that Nvidia's AC solution is inferior to AMD's. Anything beyond "async light" implementations (MS games, Timespy, etc) choke Nvidia cards before AMD.
We are waiting for Nvidia to provide a driver specifically for AC to work correctly.
It is always a combination of factors. PC hardware and software is so complex.
Game worked great for me on my GTX 1060. I think I had one or two issues in about 10 hours of playtime. Never one crash or game-breaking issue.
The problem with some Nvidia owners is that any game crash can never be a bug in the driver because Nvidia is in their view infallible. It is always the game, never Nvidia, Sigh...
I can't believe you are still pushing this narrative. Nvidia THEMSELVES said their dynamic load balancing was in the drivers.
Nvidia's front end scheduling is driver based and then there is a hardware component after that
Nvidia's AC solution is inferior to AMD's. Anything beyond "async light" implementations (MS games, Timespy, etc) choke Nvidia cards before AMD.
for async’s concurrent execution abilities to be beneficial at all, there needs to be idle time bubbles to begin with. Throwing compute into the mix doesn’t accomplish anything if the graphics queue can sufficiently saturate the entire GPU
Wolfenstein 2 is the first game since I built my current pc over two years ago that actually caused my entire computer to crash.
It's possible that the root cause was the driver, but it seems pretty sloppy for the developer to have not noticed it beforehand and given nvidia time to fix it before release.
Where is the figure for minimums? Or frametimes? That's just an FPS chart over 17,000 frames.
What do the blue dips below 50-40 mean, then?
The problem with some Nvidia owners is that any game crash can never be a bug in the driver because Nvidia is in their view infallible. It is always the game, never Nvidia, Sigh...
Driver crashes are pretty distinct. Usually the game will lock up and freeze completely without crashing, BSOD, or black screen. If the game actually crashes due to a driver error and the driver recovers, then it will generate an error in Windows which can be seen with the event viewer.
None of these things happened. What happened to me is the game froze with the audio still running. When I hit CTRL+ALT+DELETE to escape and close it with task manager, I saw that there was an error which said "Could not write crash dump." So that to me is more a game error than a driver error.
Especially when you consider that I never had this error the entire time until I got to a specific location in the game, and the November 7 patch actually fixed it whilst I was still on the same set of drivers I had been using since launch. Only with yesterday's 388.31 release have I updated my drivers. I played the vast majority of the game with the 388.10 and 388.13 drivers with no problems until I got to that specific area, and then it took a week for them to release the November 7 patch, which ended up fixing it.
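The Event Viewer check mentioned above can also be scripted. A rough sketch for Windows using the built-in wevtutil tool; the provider names ("Display", "nvlddmkm") are assumptions about a typical Nvidia setup, and the recovered-driver case usually shows up as the "Display driver stopped responding and has recovered" warning:

```python
import subprocess

# Query the System event log for the 10 most recent display-driver entries,
# newest first, in plain text. If a driver crash was caught and recovered,
# an entry from these providers is where it would normally appear.
query = "*[System[Provider[@Name='Display'] or Provider[@Name='nvlddmkm']]]"
result = subprocess.run(
    ["wevtutil", "qe", "System", "/c:10", "/rd:true", "/f:text", f"/q:{query}"],
    capture_output=True,
    text=True,
)
print(result.stdout or "No matching display-driver events found.")
```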
False. Nvidia's dynamic load balancing is in hardware. Whitepaper: http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf
False again. Nvidia provides scheduling hints at shader compile time. Scheduling is done in a hardware scheduler.
Perhaps you should study what Asynchronous Compute is. Or, to quote Anandtech (the excerpt quoted above):
So this "inferior" solution is because Nvidia does a better job keeping their GPU's filled with graphics work. You know, the stuff gamers care about.
Stop spreading lies and misinformation.
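The "idle time bubbles" point from the Anandtech quote is easy to see with a back-of-the-envelope occupancy model. This is a toy sketch with invented numbers, not a model of any real GPU or of either vendor's scheduling: if graphics already keeps every execution unit busy, running compute concurrently gains nothing, and the gain grows with the amount of idle capacity compute can fill.

```python
GPU_UNITS = 100  # pretend the GPU has 100 identical execution units

def frame_time_serial(graphics_ms, compute_ms):
    """Graphics pass, then compute pass, no overlap."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms, graphics_busy_units):
    """Compute runs concurrently, but only on units graphics leaves idle."""
    idle = GPU_UNITS - graphics_busy_units
    if idle <= 0:
        return graphics_ms + compute_ms          # no bubbles -> no gain
    compute_work = compute_ms * GPU_UNITS        # unit-milliseconds of compute
    overlapped = min(compute_work, idle * graphics_ms)
    leftover_ms = (compute_work - overlapped) / GPU_UNITS  # runs after graphics on the full GPU
    return graphics_ms + leftover_ms

for busy in (100, 90, 70):
    serial = frame_time_serial(10.0, 3.0)
    concurrent = frame_time_async(10.0, 3.0, busy)
    gain = 100 * (serial - concurrent) / serial
    print(f"graphics keeps {busy:3d} of {GPU_UNITS} units busy: "
          f"serial {serial:4.1f} ms, async {concurrent:4.1f} ms ({gain:4.1f}% faster)")
```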
Nvidia's driver, where dynamic load balancing lives (https://www.youtube.com/watch?v=Bh7ECiXfMWQ), is a traffic cop. The equivalent is AMD's ACEs. Both of their drivers track the amount of work coming in to be able to tell their scheduling front ends how many resources to assign for job completion. The difference is that Nvidia's main scheduler is also in the driver; AMD sends the work to the ACEs. Think about this: you wouldn't be able to adjust workloads at any other point in either company's solution if there wasn't a traffic cop figuring out how to manage the stream of jobs coming in. At a high level, all these architectures do is take a job and break it down into smaller pieces for execution. You can see in the block diagrams how a job flows from big to small.
As far as Nvidia doing a better job at keeping their GPU full, there is some context that is important. AMD's issue is that they designed an architecture for an API that didn't exist until Mantle/DX12/Vulkan. DX11 at its core is single threaded. All of the optional band-aids MS added never changed that reality. It's simply incapable of feeding GCN fast enough. That's why GCN got an immediate 20% uplift from next-gen APIs: the API threading improved and so did the throughput. Again, Nvidia's DX11 driver is a mini-OS that was multithreaded. That's why it took them 3 years to write their famous performance driver. That was what was required to satisfy the optional part of DX11, and why it was completely unacceptable from a programmatic standpoint. Drivers should not under any circumstance be responsible for what MS requested. It's exactly why AMD created Mantle to force the API change. Nvidia should have followed AMD, but they did it anyway for the advantage. Nvidia does a lot of things they shouldn't.
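Whichever side of the hardware-versus-driver argument you take, "dynamic load balancing" itself is a simple idea: when one workload drains early, its execution units are handed to the other instead of sitting idle. A toy sketch with invented numbers, deliberately agnostic about where the decision is made:

```python
# Toy comparison of a static graphics/compute partition versus dynamic load
# balancing over one shared pool of execution units. Invented numbers; a
# cartoon of the concept, not of either vendor's hardware or driver.

UNITS = 100
GRAPHICS_WORK = 1200.0   # unit-milliseconds of graphics work this frame
COMPUTE_WORK = 300.0     # unit-milliseconds of compute work this frame

def static_partition(graphics_units):
    """Units are split up front and never reassigned."""
    compute_units = UNITS - graphics_units
    graphics_done = GRAPHICS_WORK / graphics_units
    compute_done = COMPUTE_WORK / compute_units
    # Whichever side finishes early leaves its units idle until the frame ends.
    return max(graphics_done, compute_done)

def dynamic_balance(graphics_units):
    """When one side drains, its units are handed to the other side."""
    compute_units = UNITS - graphics_units
    graphics_done = GRAPHICS_WORK / graphics_units
    compute_done = COMPUTE_WORK / compute_units
    if compute_done < graphics_done:
        first = compute_done
        remaining = GRAPHICS_WORK - graphics_units * first
    else:
        first = graphics_done
        remaining = COMPUTE_WORK - compute_units * first
    return first + remaining / UNITS   # the rest runs on the full pool

split = 70  # initial guess: 70 units to graphics, 30 to compute
print(f"static partition : {static_partition(split):5.2f} ms")
print(f"dynamic balancing: {dynamic_balance(split):5.2f} ms")
```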