Doom Vulkan Benchmarks

Page 12

Shivansps

Diamond Member
Sep 11, 2013
Stop with the silly excuses and ask for evidence; the fact that it performs worse than in OGL by a big amount is all the evidence you ever need. There is NO REASON for the performance regression other than a shitty rendering path for that architecture. The way some people are defending this is worrisome.

Salt mine explosion
Doom's Vulkan performance corresponds to the card's theoretical throughput. It's not about drivers but about what an API can deliver in FPS when all resources are used.

If all it can do with "all resources in use" is provide worse performance than the older API that does not use all resources, that is really a redacted API (or redacted implementation) right there.

cursing is not allowed in the technical forums
Markfw900
 
Last edited by a moderator:

MajinCry

Platinum Member
Jul 28, 2015
I'm going to go with NVidia having a redacted driver implementation; gonna side with an actual dev on this one.


cursing is not allowed in the technical forums
Markfw900
 
Last edited by a moderator:

sontin

Diamond Member
Sep 12, 2011
Oh, and on top of the broken Vulkan path, Doom doesn't scale on Pascal cards. A GTX 1080 is ~40% faster than a GTX 980 Ti in Time Spy, but in Doom the same card is only 20% faster: http://www.purepc.pl/karty_graficzn..._radeon_rx_480_test_kart_graficznych?page=0,8

I'm going to go with NVidia having a redacted driver implementation; gonna side with an actual dev on this one.

And this "redacted driver implementation" is only a problem in Doom but not in Talos and Dota 2? Right...


cursing is not allowed in the technical forums
Markfw900
 
Last edited by a moderator:

Hitman928

Diamond Member
Apr 15, 2012
Well, yes, but the fact is, it works better with OGL; they can't even do memory management right?

If Zlatan is correct in his comments in the previous link, it could be that the Nvidia and Intel Vulkan drivers are tying devs' hands when they try to do things like memory management. I'm not a programmer, let alone a graphics programmer, so I can only defer to those who are. Maybe people with more experience with DX12/Vulkan can chime in.

Zlatan said:
Vulkan has a good chance, but most of the IHV implementations are buggy as hell. For example, the NV and Intel Vulkan implementations simply ignore VkImageMemoryBarrier, which is kinda sad. The validation layers are still not good enough to ship a Vulkan game to the market. Only AMD has a stable implementation; they have an absolutely trustworthy Vulkan driver, but the devs also need this from Intel and NV of course. Still, the immature validation layers are the biggest problem.

https://www.khronos.org/registry/vulkan/specs/1.0/man/html/VkImageMemoryBarrier.html
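The linked reference boils down to recording something like the following minimal sketch (illustrative only, assuming a valid command buffer and image created elsewhere); vkCmdPipelineBarrier is the call whose image-barrier parameters are reportedly being ignored:

```c
/* Minimal sketch: transition a color attachment so a later pass can sample it.
 * 'cmd' and 'image' are assumed to be a valid command buffer and image. */
#include <vulkan/vulkan.h>

void transition_for_sampling(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask       = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image               = image,
        .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };

    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,  /* wait for color writes */
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,          /* before fragment reads */
        0,                                              /* no dependency flags   */
        0, NULL,                                        /* no global barriers    */
        0, NULL,                                        /* no buffer barriers    */
        1, &barrier);                                   /* one image barrier     */
}
```

If a driver ignores the barrier's layout transition or access masks, the app is technically correct but can still see corruption or stalls, which is why Zlatan calls the implementations buggy.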
 

krumme

Diamond Member
Oct 9, 2009
Oh, and on top of the broken Vulkan path, Doom doesn't scale on Pascal cards. A GTX 1080 is ~40% faster than a GTX 980 Ti in Time Spy, but in Doom the same card is only 20% faster: http://www.purepc.pl/karty_graficzn..._radeon_rx_480_test_kart_graficznych?page=0,8

And this "shitty driver implementation" is only a problem in Doom but not in Talos and Dota 2? Right...
Enjoy the synthetic Time Spy performance, because it will never hit actual games. E.g. the async beef-up is, for all practical purposes, only a thing GCN cards will see. It takes tons of dev time and cost to implement it properly, and it's only there because the consoles get most of the resources available.

Seeing it for Pascal is a pipe dream. It really doesn't matter that Pascal doesn't have much compute power, because it's not worth it to tap into. The devs have no incentive to pay for 5 to 10% perf on a small PC market. The path might be broken, but it doesn't matter.
 

sontin

Diamond Member
Sep 12, 2011
Enjoy the synthetic Time Spy performance, because it will never hit actual games. E.g. the async beef-up is, for all practical purposes, only a thing GCN cards will see. It takes tons of dev time and cost to implement it properly, and it's only there because the consoles get most of the resources available.

So developers haven't cared to optimize their low-level API paths for nVidia hardware? Great. Hopefully they will start now. :thumbsup:

Seeing it for Pascal is a pipe dream. It really doesn't matter that Pascal doesn't have much compute power, because it's not worth it to tap into. The devs have no incentive to pay for 5 to 10% perf on a small PC market. The path might be broken, but it doesn't matter.

And yet they have enough time and money to release broken DX12/Vulkan paths?
 

Hitman928

Diamond Member
Apr 15, 2012
There's no way for GCN-specific shader intrinsic functions, designed to expose functionality exclusive to the GCN shader cores, to deliver benefits (or even work, frankly) on NVIDIA hardware.

That's not at all pertinent to the subject of GPUOpen. Obviously hardware support must be present for the GCN intrinsic shader optimizations to be of any use, but that doesn't make it any less open.

From my understanding of the design philosophies, AMD has been adding features within their architectures each generation basically for console developers and future APIs to utilize. Under DX11, and even early Mantle/DX12/Vulkan, these features have been pretty much ignored due to lack of API support or lack of time/resources/experience on the devs' part. Now that the next-gen APIs are unlocking the ability to access these features, AMD is trying to get devs to use them to utilize the full functionality of their cards. There are even more features still not utilized in the PC space that will come with further Vulkan and DX12 updates.

Nvidia on the other hand, has built an architectural progression that is streamlined for DX11 and some DX12 features. This has allowed them to continue to be the performance and efficiency leader for the last few generations. However, as devs start to get deeper into the next gen APIs, they may not have much more power to tap into. If there are additional features here, Nvidia needs to either work with the devs to get them into games or do like AMD and document them sufficiently for devs to utilize.

Again, not a graphics guy, but this is my understanding of the current state of the industry.
 
Reactions: 3DVagabond

krumme

Diamond Member
Oct 9, 2009
We ought to have a Vulkan thread. The id engine is interesting, but what makes Vulkan big news is, imho, Android. E.g. getting mentioned in prime time with the Samsung S7 and VR is THE thing.
 

fingerbob69

Member
Jun 8, 2016
Be patient, NV will eventually release a new driver to fix this.


Be patient, NV will eventually release a new GPU architecture to fix this.

There... fixed it for ya :biggrin:

Because @Shivansps ...you're right. It is NOT nVidia's drivers that are at fault... it's the Pascal architecture itself and the way it handles certain DX12/Vulkan features [async]... or not, as the case may be.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
So developers haven't cared to optimize their low-level API paths for nVidia hardware? Great. Hopefully they will start now.

And yet they have enough time and money to release broken DX12/Vulkan paths?

Ask NV why it's that bad. My point is just that it won't get much better with Pascal anyway. It will get e.g. the draw-call benefits in RTS games, but async is going nowhere because of hardware and dev cost.
It takes 3 years to develop an async-ready engine, even for an arch that is made for it like GCN, and only because of consoles. Who believes id made this because of AMD PC graphics?
 

Thala

Golden Member
Nov 12, 2014
Ask NV why it's that bad. My point is just that it won't get much better with Pascal anyway.

Agreed. The train has left for Pascal. Best bet for NVidia is to secure a few more GameWorks titles going forward.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
Agreed. The train has left for Pascal. Best bet for NVidia is to secure a few more GameWorks titles going forward.
I remember buying an NV 6800 card solely to play Doom 3 with all the new (as I recall, DX9-era) glory tacked on. I only used the card for that and played for perhaps 5 hrs. Damn fine graphics, but scary stuff. And I was glad.

Now people are salted like a salt mine because id moves the benchmark of how a game can look on fairly modest hardware using a Vulkan API. That's just good imo.

We need PC gaming to move on from the DX9 age. It's about time.

The salty stuff is just more self-inflicted pain.
 

Unreal123

Senior member
Jul 27, 2016
I must have a special Vulkan on Nvidia. I do not know why I am getting better performance compared to OpenGL.

Vulkan


OpenGL
 
Last edited:

Stuka87

Diamond Member
Dec 10, 2010
I must have a special Vulkan on Nvidia. I do not know why I am getting better performance compared to OpenGL.

Vulkan


OpenGL

Depends on the card and resolution. It's only in some cases that nVidia gets worse FPS in Vulkan.
 

IllogicalGlory

Senior member
Mar 8, 2013
Stop with the silly excuses and ask for evidence; the fact that it performs worse than in OGL by a big amount is all the evidence you ever need. There is NO REASON for the performance regression other than a shitty rendering path for that architecture. The way some people are defending this is worrisome.
There's always OpenGL if your cards aren't up to the task. The more salient thing to present is a DX12 game where Maxwell gains performance, on average. Can you provide one?

Of course not; the best you can hope for is parity and some advantage in CPU-limited situations. Obviously cards designed for a different API aren't going to do so well outside of their intended usage. Perhaps we can flip it around: why don't devs work harder to make up for GCN's DX11 deficits? It's the same argument.

Perhaps the RotR devs, who themselves have a marketing deal with NV, should be under fire as well, but I guess it's no big deal there because AMD is losing. The devs have even taken responsibility for it, yet barely a word from NV supporters.

Failing that, as pointed out above, there are still benefits to be had, just sometimes that doesn't work out to an average performance improvement.

When NV pimped their Vulkan performance in DOOM at the 1000 series unveiling event, as far as I'm concerned, they're taking responsibility for your experience in it. If their cards are coming up short, it's their fault. That's what that demo says to me. AMD also did demos on DOOM, in OpenGL, and they are rightfully blamed for coming up short in that department. Why the double standard? I thought NVIDIA had the best dev relations.

The fact is, Maxwell is now a legacy arch and NV doesn't care about it anymore. They haven't lifted a finger to improve its performance in DX12 (in general) or to speak out about DOOM devs screwing them, or seemingly to offer any comment at all. Their silence makes it clear what's going on. They don't need you defending them. They're a huge company with a massive and effective PR machine, one that made the bold choice never to apologize for the missing 0.5GB on the 970. They can do it by themselves, and not a single word of accusation against id. Last we heard, NV is working with Vulkan to fix the issues. Perhaps we should just take them at their word and evaluate the results when the fix surfaces.

Finally, the low-level optimizations that allow the game to run at a solid 60 on consoles are already done, and since the arch is the same in desktop AMD cards, you can see why they might carry over. If, with a lot of work, performance parity could be achieved with Maxwell in Vulkan, what's even the point when OpenGL works fine on that side? The evidence suggests that Maxwell can't actively gain performance from DX12. Seems like a waste of NV's/id's time to bother with it with the new arch out.
 
Last edited:
Feb 19, 2009
Well, when Tom Petersen was asked about Maxwell's async compute driver update (PCPer interview on YouTube), his response was "no comment". NV is ignoring Maxwell, or trying to sweep it under the rug, hoping everybody forgets about it and upgrades to Pascal ASAP.
 

eddman

Senior member
Dec 28, 2010
There are a lot of oddities going on with DX12/Vulkan when it comes to their performance on Pascal, and there aren't enough titles out there to draw a definite conclusion.

For now, we can only speculate. It could be any or a combination of the following:

1. Regarding async, maybe the majority of current DX12 titles, as Ryan pointed out, are simply not optimized for Pascal's async, hence the reason it loses performance in certain titles, e.g. TW: Warhammer.

2. Since DX12 and Vulkan are low-level APIs, which puts much more importance on engine optimizations for specific architectures, and seeing that GCN is used in both MS and Sony consoles, it's possible that current engines/games are simply not optimized for Pascal under low-level APIs, since there was no need for it up until the arrival of Windows 10.

3. Another possibility could be the lack of optimizations and features in NVidia's drivers for DX12/Vulkan's low-level requirements.

4. ...or, simply, Pascal (which is Maxwell-based) was designed to perform best in DX11 and OpenGL and performs inefficiently in low-level APIs at an architectural level.

P.S. Maxwell does not benefit from async and can even lose performance when it's enabled; that much is known, but it doesn't explain Pascal's poor results.
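Since "async compute" keeps coming up, here is a rough sketch of what it means at the Vulkan API level (my own illustration, nothing from a shipping engine): find a queue family that offers compute but not graphics and submit compute work there, so it can potentially overlap with the graphics queue. Whether any overlap actually happens is up to the hardware and driver, which is exactly the point of contention.

```c
/* Sketch: look for a compute-only queue family, the kind async compute
 * submissions are aimed at. Returns UINT32_MAX if none is exposed.
 * 'phys' is assumed to be a valid VkPhysicalDevice picked elsewhere. */
#include <stdint.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

uint32_t find_dedicated_compute_family(VkPhysicalDevice phys)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, NULL);

    VkQueueFamilyProperties *families = malloc(count * sizeof *families);
    if (!families)
        return UINT32_MAX;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, families);

    uint32_t result = UINT32_MAX;
    for (uint32_t i = 0; i < count; ++i) {
        VkQueueFlags flags = families[i].queueFlags;
        /* compute-capable but not graphics-capable: a candidate for async compute */
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT)) {
            result = i;
            break;
        }
    }
    free(families);
    return result;
}
```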
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
Coming up with the idea that it's the developer's fault for not building an efficient Vulkan render path for Nvidia is quite a joke when you place this event in the grand scheme of things.

We are in H2 2016. Mantle was a warning shot back in late 2013. More than 2 years have passed, time in which the world leader in PC gaming hardware, a company with impressive software development capabilities and extensive ties to the gaming industry, should have spared no effort or funding to develop the absolute best support for their products with regard to the developing new APIs.

Nvidia is a giant. Irresponsible amounts of engineering talent. Absurd R&D funding. They have no excuse.

If they consider DX11 to be their better solution for the time being, I have no qualms with that. It's ok, whatever works with the efficiency they got us accustomed to is fine. But if that is the case, then fellow forumites better stop accusing developers.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
You're seeing the effects of GPUOpen, according to another poster.

AMD unified the libraries between console and desktop so optimizations on console can carry over, or something to that effect.

Someone needs to explain that program to me more, though, because I fail to see how it's open when it's clear AMD gains the most benefit from it.

Not saying I agree with the cause of the situation. Just a reply re GPUOpen.

It's open because all of the source code is available and free to change. Now, I wouldn't expect AMD to take the time to optimize for nVidia, but nVidia can look at the source code and optimize however they want to. They can even give it their own name and call it a GameWorks feature if they want, with no reference to AMD at all.
 

3DVagabond

Lifer
Aug 10, 2009
Not what I'm talking about. See:



http://gpuopen.com/gcn-shader-extensions-for-direct3d-and-vulkan/

There's no way for GCN-specific shader intrinsic functions, designed to expose functionality exclusive to the GCN shader cores, to deliver benefits (or even work, frankly) on NVIDIA hardware.

All nVidia has to do is the same thing. As it says, the extensions "expose additional GCN features to developers". You aren't really saying that AMD should write the code required to expose whatever additional features nVidia hardware offers, are you? You need to petition nVidia to reveal all the additional hardware features their cards have (if any) and release the code to access them. AMD can't do that.
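For illustration, this is roughly how such vendor extensions surface to an engine (a sketch of mine, under the assumption that the GCN intrinsics ship behind optional device extensions such as VK_AMD_gcn_shader): the engine checks for the extension at device creation and simply skips the GCN-specific shader paths when the driver does not report it, which is what happens on non-AMD hardware.

```c
/* Sketch: does this physical device expose a given extension, e.g. the
 * GCN shader extension "VK_AMD_gcn_shader"? On non-AMD drivers it simply
 * won't be in the list, so the engine falls back to plain shaders. */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

bool has_device_extension(VkPhysicalDevice phys, const char *name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(phys, NULL, &count, NULL);

    VkExtensionProperties *props = malloc(count * sizeof *props);
    if (!props)
        return false;
    vkEnumerateDeviceExtensionProperties(phys, NULL, &count, props);

    bool found = false;
    for (uint32_t i = 0; i < count; ++i) {
        if (strcmp(props[i].extensionName, name) == 0) {
            found = true;
            break;
        }
    }
    free(props);
    return found;
}

/* Hypothetical usage at device-creation time:
 *   if (has_device_extension(phys, "VK_AMD_gcn_shader"))
 *       add it to VkDeviceCreateInfo::ppEnabledExtensionNames and compile
 *       the GCN-optimized shader variants; otherwise use the generic path. */
```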
 

3DVagabond

Lifer
Aug 10, 2009
This is exactly the impression the Vulkan render path in Doom was designed to create. Seriously, go listen to what Robert Duffy was saying in the RX 480 marketing/launch video and it becomes immediately clear that they did a lot more optimization work for AMD GCN hardware in the Vulkan rendering path than they did for the NVIDIA cards.

Like I said before, these devs have to do a LOT of low-level optimization to get framerates on consoles to acceptable levels, and AMD is exposing a lot of this hardware functionality to developers with DX12/Vulkan (with extensions that go beyond even what DX12/Vulkan provide). The devs can, as in the case of Doom, take their optimizations/tweaks and bring them to GCN with low-level APIs like DX12 and Vulkan, which certainly helps AMD hardware look better than NVIDIA's.

The AMD marketing has worked, and from a biz perspective AMD did a very good job here. Props to them on that front.

Any source for id, or any dev, using these "beyond the API" features/tweaks?
 


AtenRa

Lifer
Feb 2, 2009
Not saying I agree with the cause of the situation. Just a reply re GPUOpen.

It's open because all of the source code is available and free to change. Now, I wouldn't expect AMD to take the time to optimize for nVidia, but nVidia can look at the source code and optimize however they want to. They can even give it their own name and call it a GameWorks feature if they want, with no reference to AMD at all.

Already done: PureHair in Rise of the Tomb Raider is AMD's TressFX, customized and optimized for both AMD and NVIDIA hardware by the developer, because it's OPEN.
 

Red Hawk

Diamond Member
Jan 1, 2011
So I picked up the game through the QuakeCon sale on Steam. And yes, it's a well-optimized wonder. Smooth 60 FPS on my rig, all settings maxed (except texture page size and shadow quality, since the game insists it needs 5 GB of VRAM for those). Internal metrics report a consistent 16 ms render time per frame or less on both CPU and GPU. Thank you, Vulkan!

How about another question: can Doom run on a nearly 10-year-old CPU? I went ahead and installed it on my brother's rig, with a stock Core 2 Quad Q6600, Radeon R7 260X 2 GB, and 8 GB of 800 MHz DDR2 memory. I tested in the open area at the start of the first UAC mission, not really benchmarking, but keeping my eye on the frame rate counter and CPU/GPU render time meters. Tested at 1440x900, medium settings, TSSAA, 2x decal anisotropic filtering, and all checkmark settings on (the default medium preset turns off checkmark settings like compute, player self shadow, etc.). Both counters seemed to hover around 20 ms, with the GPU fluctuating between 15-20 ms and the CPU hovering between 20-25 ms. The frame rate fluctuated between 30-45 FPS in combat and around 50 FPS while just walking around, playable but certainly not ideal. Turning settings down to low didn't seem to help things all that much. I figured it's really a CPU bottleneck, judging from that and the render times, and that upping the graphics power wouldn't help much. Maybe I'd be able to bump up some settings without loss, but the frame rate wouldn't go up.

...but I went ahead and installed the 2 GB 270X I still have hanging around. And, somewhat to my surprise, I was able both to increase settings and to see a frame rate improvement. I bumped lights, particles, decals, and motion blur to high, and decal AF to 4x. I kept shadows, directional occlusion, and texture page size at medium, since shadows and occlusion tend to be CPU intensive (as far as I'm aware) and it is still a 2 GB card. Frame rate during combat improved to hover more between 40-50 FPS, with CPU time around 20 ms or lower and GPU time at 16 ms or lower. The frame rate keeps to a stable 60 FPS when walking around outside of combat, with both GPU and CPU time sticking to 16 ms or lower. I can attest that gameplay certainly feels smoother on the 270X than on the 260X.

So it does seem that with Vulkan, even an aging CPU like the Q6600 is GPU bottlenecked at medium settings on a low-end card like the 260X. What's causing the difference? The 270X is a considerable improvement in stream processors (896 vs 1280), but more than that, it has nearly double the memory bandwidth thanks to its 256-bit memory bus vs the 260X's 128-bit bus. My guess is it really comes down to the memory bandwidth, as the PS4 has a full 256-bit bus to its GDDR5 memory while the Xbox One has its 32 MB of super-fast ESRAM to make up for its slow DDR3 RAM. The additional stream processors undoubtedly help out as well, especially to let asynchronous compute stretch its legs further.
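As a quick sanity check on the "nearly double the bandwidth" part, here is a back-of-the-envelope calculation. The bus widths are the ones above; the effective GDDR5 data rates are the reference-card figures as best I recall (about 6.5 Gbps per pin for the 260X and 5.6 Gbps for the 270X), so treat the exact numbers as approximations.

```c
/* Back-of-the-envelope GDDR5 bandwidth: bytes per transfer times data rate. */
#include <stdio.h>

static double bandwidth_gbs(int bus_bits, double gbps_per_pin)
{
    return (bus_bits / 8.0) * gbps_per_pin;
}

int main(void)
{
    double r7_260x = bandwidth_gbs(128, 6.5);  /* ~104 GB/s */
    double r9_270x = bandwidth_gbs(256, 5.6);  /* ~179 GB/s */
    printf("260X: %.0f GB/s, 270X: %.0f GB/s, ratio %.2fx\n",
           r7_260x, r9_270x, r9_270x / r7_260x);
    return 0;
}
```

That comes out to roughly 104 GB/s versus 179 GB/s, about a 1.7x jump, which fits the "nearly double" description.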

I just think it's incredible that a nearly 10 year old CPU can run the game. To put this in context, Intel released the 200 MHz Pentium Pro processor in 1997. Imagine trying to run a shooter from 2006, such as Call of Duty 3, on that. Yeesh. Still, this CPU is about at the end of its usefulness. Vulkan runs fine, but a recent AMD driver update to DirectX 12 added a requirement for an instruction set the Q6600 doesn't support, meaning games won't run in DirectX 12 mode on the Q6600 at all. AMD could easily do the same with Vulkan in a future update. Running Doom on Vulkan is Conroe/Kentsfield's last huzzah, you could say.
 