ComputerBase & GameGPU: Rise of the Tomb Raider: DX11 vs DX12 + VXAO Tested


ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Possible and practical are two different things. But in terms of DX it's very simple: no hardware, no support.

And as you quote:
However both approaches add performance overhead, and as such usage of conservative rasterization in real time graphics has been pretty limited so far
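For context, this is literal in D3D12: conservative rasterization sits behind a hardware feature tier that the runtime reports. A minimal sketch of how an engine might check and enable it (the helper name is illustrative, not from the article):

```cpp
#include <d3d12.h>

// Sketch: query the conservative rasterization tier and enable the mode on
// an otherwise-prepared graphics PSO description. Illustrative helper only.
bool EnableConservativeRaster(ID3D12Device* device,
                              D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;

    // "No hardware, no support": without a reported tier the mode is invalid.
    if (options.ConservativeRasterizationTier ==
        D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
        return false;

    psoDesc.RasterizerState.ConservativeRaster =
        D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
    return true;
}
```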
 

Det0x

Golden Member
Sep 11, 2014
1,063
3,110
136
Possible and practical are two different things. But in terms of DX it's very simple: no hardware, no support.

So what does this mean in regard to Nvidia's "Asynchronous Compute support"?

Possible =/= practical

For all intents and purposes, Nvidia doesn't support async compute.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
So what does this mean in regard to Nvidia's "Asynchronous Compute support"?

Possible =/= practical

For all intents and purposes, Nvidia doesn't support async compute.

Not practical. Just like mining on NVidia cards.
 

Krteq

Senior member
May 22, 2015
993
672
136
Nope, nV doesn't support DX12/Vulkan Async-Compute at all.

They support "async compute" only in the sense of asynchronous execution of compute kernels via Hyper-Q.
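At the API level the distinction is easy to see: D3D12 "async compute" just means submitting to a dedicated compute queue alongside the graphics queue. A minimal sketch (names are illustrative); whether the GPU actually overlaps the two streams, which is the whole debate here, is up to the hardware and driver:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: create a graphics (direct) queue plus a separate compute queue.
// Any D3D12 device will accept both; concurrent execution is not guaranteed.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}
```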
 

Udgnim

Diamond Member
Apr 16, 2008
3,664
111
106
pretty big visual difference between HBAO+ and VXAO here

although part of the reason why is that it looks like HBAO+ isn't being applied to the vegetation

 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
pretty big visual difference between HBAO+ and VXAO here

although part of the reason why is that it looks like HBAO+ isn't being applied to the vegetation


This is the second comparison I have seen that looks a bit off. We know HBAO+ darkens more than is being shown, and it does darken under foliage, or at least it used to. The other one looked more like VXAO vs. off; this one looks like toned-down AO vs. VXAO.

http://images.nvidia.com/geforce-co...eractive-comparison-003-hbao-plus-vs-off.html

 

Snafuh

Member
Mar 16, 2015
115
0
16
Yeah, VXAO seems like a very good AO solution.

It looks good, but a 20-30% performance hit going from HBAO+ to VXAO? I think a voxel-based solution is pretty hardcore for an effect like AO.
I'd rather have proper GI with this technique.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
And, in all honesty, the only thing showing as genuinely slower on those DX11 vs DX12 charts seems to be Fury.

I would not be hugely surprised if these sorts of charts end up being what most DX12 games look like.

Yes, the API lets you get non-trivial advantages for specific GPU architectures if you really put big effort into optimising at a low level for that architecture.

Given the amount of 'effort' seemingly put into PC ports, would anyone expect them to do that for the ~half dozen architectures there will be in the PC space?
(For GCN 1.1, yes, but that architecture is gone in 3-6 months' time.)

Especially given that all that effort is wasted, or even counterproductive, in 2-3 years' time when all the cards on sale are using different architectures.

I imagine that the CPU benefits should be much safer to get.

PS - Serious question: has anyone checked what all these DX12 changes do on Intel iGPUs? That would be quite an interesting data point. And yes, given trends, we do need to take those reasonably seriously too.

This was my concern with DX12 as well; I wasn't convinced devs would put in the effort required to really take advantage of it. Even still, I'm surprised it's THIS bad. I did not expect significantly worse performance; I just didn't expect to get some of the "possible" features some people were excited about, like being able to use any combination of cards for SLI/CF, or being able to double your usable VRAM when running SLI/CF.

I know DICE got a lot of flak for their implementation of Mantle, where a lot of people had issues, but in hindsight it's probably the best implementation of a "patched-in" API we have yet. It may have had some bugs, but it actually did what it set out to do.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Is this that bad?

The charts overleaf look like a <5% loss for most of the situations/GPUs. Not ideal, but I'm unsure if that counts as hugely significant. Probably balanced by the gains for weaker CPUs.

GCN 1.2 (380/Fury) does look to need more work for some reason.

Certainly if DX12 can be implemented in a manner which only drops ~5% for new architectures, with options for heavy optimisation for some specific ones, it'll be fine.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Is this that bad?

The charts overleaf look like a <5% loss for most of the situations/GPUs. Not ideal, but I'm unsure if that counts as hugely significant. Probably balanced by the gains for weaker CPUs.

GCN 1.2 (380/Fury) does look to need more work for some reason.

Certainly if DX12 can be implemented in a manner which only drops ~5% for new architectures, with options for heavy optimisation for some specific ones, it'll be fine.

Considering it's an API that should do the exact opposite of what it's doing, yes it's that bad. What the heck is the point of DX12 otherwise?

I suppose if you were playing this game in DX11 and had so much performance that it bothered you, DX12 would be useful. I haven't heard of any such complaints, though.
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
DX11 and other similar APIs have been around for a long time, and how they worked and how the drivers worked was well optimized. This is a new API with some major differences in how things work, and what needs to be done to get optimal results is still being worked out.

Shouldn't be too surprising that we get these sorts of results from some of these early implementations.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
It should be obvious to even the trolliest of the trolls, when you consider that the DX11 970 is handily beating the DX12 970, that this DX12 implementation is horrendously broken... And this is coming from an NVidia owner.

DX12 can be awesome, but essentially only on paper. That's the problem when reality knocks on the door. The CPU benefit should always prevail, though.

Considering it's an API that should do the exact opposite of what it's doing, yes it's that bad. What the heck is the point of DX12 otherwise?

I suppose if you were playing this game in DX11 and had so much performance that it bothered you, DX12 would be useful. I haven't heard of any such complaints, though.

The point of DX12 is CPU overhead reduction, especially for the Xbox One and, as shown, for FX and i3 users etc. When you get pure DX12-only games, the freed CPU resources can and will simply be used on more game logic. But you get an API that needs to be optimized carefully for every single uarch, perhaps even for different SKUs of the same uarch, to really shine.

This was my concern with DX12 as well; I wasn't convinced devs would put in the effort required to really take advantage of it. Even still, I'm surprised it's THIS bad. I did not expect significantly worse performance; I just didn't expect to get some of the "possible" features some people were excited about, like being able to use any combination of cards for SLI/CF, or being able to double your usable VRAM when running SLI/CF.

I know DICE got a lot of flak for their implementation of Mantle, where a lot of people had issues, but in hindsight it's probably the best implementation of a "patched-in" API we have yet. It may have had some bugs, but it actually did what it set out to do.

Even you know what happens when conflicts of interest and money are involved.

Personally, I was never in doubt about how DX12 would turn out. It only repeats history: the same thing has happened before whenever lower-level APIs, though harder to code for, were introduced. But the hyperbole prevailed as always, with the five stages of grief to follow.

People with a GCN 1.1 8GB 390/390X will be the big winners in the long run, and everyone else the losers. Assuming they don't have weak CPUs.
 
Last edited:

garagisti

Senior member
Aug 7, 2007
592
7
81
I reckon Assassin's Creed Black Flag didn't have any major performance issues despite being a Gameworks title.
I think you wanted to mention Unity. Oh yes, Unity had some issues.

FWIW, Black Flag was one of the better-written games, looking at Ubi's record with Assassin's Creed since the third one. My deluxe key didn't work on Ubi's site/Uplay. The solution was to buy it again when cheaper, as the DLC bundle still cost more; since then I pay only about $5 or so for their games. It is just not worth it, in my humble opinion.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Considering it's an API that should do the exact opposite of what it's doing, yes it's that bad. What the heck is the point of DX12 otherwise?

I suppose if you were playing this game in DX11 and you just had way too much performance that it bothered you, DX12 would be useful. I haven't heard of any such complaints though.

Leave aside GCN 1.2, which seems to need patching or something but still isn't collapsing.

Otherwise, as far as I can see - and I might be wrong - you're dropping ~5% at worst, and people with lower-powered CPUs are seeing quite big frame rate increases?

You'd obviously prefer +10% to -5% or something - and I imagine once they're really used to it they might manage that - but it's OK. It feels like what you'd expect from DX12 done plausibly competently but without specific architectural optimisations.
(Which is likely how most PC gamers will end up experiencing it.)

The Gears of War remake was much more troubled.

Mind you, I would still like to know how it goes on Intel's IGP.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Leave aside GCN 1.2, which seems to need patching or something but still isn't collapsing.

Otherwise, as far as I can see - and I might be wrong - you're dropping ~5% at worst, and people with lower-powered CPUs are seeing quite big frame rate increases?

You'd obviously prefer +10% to -5% or something - and I imagine once they're really used to it they might manage that - but it's OK. It feels like what you'd expect from DX12 done plausibly competently but without specific architectural optimisations.
(Which is likely how most PC gamers will end up experiencing it.)

The Gears of War remake was much more troubled.

Mind you, I would still like to know how it goes on Intel's IGP.

It shouldn't have a drop at all. It's supposed to be more efficient, so at worst, meaning if you had no CPU bottleneck at any point in the game, you should get the same performance. When you're dropping frames almost across the board, yes, it's that bad. In this game, DX12 isn't doing what it's supposed to do; it's doing the opposite. I know I sound like a broken record, but it's also the second time you've made a nearly identical statement.

It's like going into a job expecting a promotion and a raise, but getting the promotion with a pay cut instead. Not that bad, is it?
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
It shouldn't have a drop at all. It's supposed to be more efficient, so at worst, meaning if you had no CPU bottleneck at any point in the game, you should get the same performance. When you're dropping frames almost across the board, yes, it's that bad. In this game, DX12 isn't doing what it's supposed to do; it's doing the opposite. I know I sound like a broken record, but it's also the second time you've made a nearly identical statement.

It's like going into a job expecting a promotion and a raise, but getting the promotion with a pay cut instead. Not that bad, is it?

Why shouldn't it have a drop at all? It's a new API; if you aren't using it correctly, you may very well get a drop in performance compared to another API you are using correctly. How you write efficient, quality code for the new API may be very different from what you did for the old API. Just because the code works doesn't mean it works well.

Hopefully GDC this week will bring some more progress in the right direction.

I am interested to see what we get from Frostbite and DX12.
 

TheELF

Diamond Member
Dec 22, 2012
3,993
744
126
It shouldn't have a drop at all.
Synchronization between threads is a bitch; you see it in games all the time. The more threads, the more synchronization, the more you drop.
It's the price you have to pay if you can't pull it off with just one core.
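One concrete place this cost shows up in D3D12 itself is cross-queue fences; a minimal sketch (assuming the fence and queues already exist) of the kind of wait that can turn extra parallelism into stalls:

```cpp
#include <d3d12.h>

// Sketch: the graphics queue waits for work the compute queue has finished.
// Every such wait is a potential pipeline bubble -- the price of splitting
// work across queues/threads instead of one serial stream.
void SyncQueues(ID3D12CommandQueue* computeQueue,
                ID3D12CommandQueue* gfxQueue,
                ID3D12Fence* fence, UINT64& fenceValue)
{
    ++fenceValue;
    computeQueue->Signal(fence, fenceValue); // mark compute work complete
    gfxQueue->Wait(fence, fenceValue);       // graphics stalls until then
}
```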
 

flynnsk

Member
Sep 24, 2005
98
0
0
...

The point of DX12 is CPU overhead reduction, especially for the Xbox One... blah blah blah

Stopped reading right there, as it is obvious you DON'T know the point of DX12. Please do yourself and everyone else a favor and actually read up on DirectX 12 before commenting:

https://blogs.msdn.microsoft.com/directx/2014/03/20/directx-12/
https://msdn.microsoft.com/en-us/library/windows/desktop/dn903821(v=vs.85).aspx
https://www.youtube.com/channel/UCiaX2B8XiXR70jaN7NK-FpA
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Stopped reading right there, as it is obvious you DON'T know the point of DX12. Please do yourself and everyone else a favor and actually read up on DirectX 12 before commenting:

https://blogs.msdn.microsoft.com/directx/2014/03/20/directx-12/
https://msdn.microsoft.com/en-us/library/windows/desktop/dn903821(v=vs.85).aspx
https://www.youtube.com/channel/UCiaX2B8XiXR70jaN7NK-FpA

Yes, new features as well. But as the blog also states:
What makes Direct3D 12 better? First and foremost, it provides a lower level of hardware abstraction than ever before, allowing games to significantly improve multithread scaling and CPU utilization.
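To make that multithread-scaling claim concrete, here is a minimal, hypothetical sketch of the D3D12 pattern it refers to: each worker thread records into its own command list and allocator (both are single-threaded objects), and the main thread submits everything in one call. Error handling and the actual draw recording are omitted.

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: parallel command-list recording, the main source of the improved
// multithread scaling the blog mentions. Names are illustrative.
void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&, i] {
            // ... record draw calls for this thread's slice of the scene ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```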

Until DX12, MS only had a high-level API on the Xbox One, while the PS4 got both a low-level and a high-level API. A huge disadvantage for MS when coupled with anemic CPUs.

The showcase for Intel was also better performance/watt in the mobile space, for this exact reason.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Why shouldn't it have a drop at all? It's a new API; if you aren't using it correctly, you may very well get a drop in performance compared to another API you are using correctly. How you write efficient, quality code for the new API may be very different from what you did for the old API. Just because the code works doesn't mean it works well.

Hopefully GDC this week will bring some more progress in the right direction.

I am interested to see what we get from Frostbite and DX12.

That's entirely my point... It clearly isn't being used correctly here.

It should be obvious to even the trolliest of the trolls, when you consider that the DX11 970 is handily beating the DX12 970, that this DX12 implementation is horrendously broken... And this is coming from an NVidia owner.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Is this that bad?

The charts overleaf look like a <5% loss for most of the situations/GPUs. Not ideal, but I'm unsure if that counts as hugely significant. Probably balanced by the gains for weaker CPUs.

GCN 1.2 (380/Fury) does look to need more work for some reason.

Certainly if DX12 can be implemented in a manner which only drops ~5% for new architectures, with options for heavy optimisation for some specific ones, it'll be fine.

DX12 is purely for performance enhancement. If it's not doing that then it's not done properly, either on the software or hardware side (or both, possibly).

As someone else pointed out, AMD and its partners have more experience thanks to their prior work on Mantle. Maybe it'll just take a bit of time for everyone else to work it out.
 

Game_dev

Member
Mar 2, 2016
133
0
0
DX12 is purely for performance enhancement. If it's not doing that then it's not done properly, either on the software or hardware side (or both, possibly).

As someone else pointed out, AMD and its partners have more experience thanks to their prior work on Mantle. Maybe it'll just take a bit of time for everyone else to work it out.

It's not just performance enhancements.


http://www.geforce.com/whats-new/articles/geforce-gtx-is-game-ready-for-windows-10-and-directx-12
 