[WCCFtech] AMD and NVIDIA DX12 big picture mode


railven

Diamond Member
Mar 25, 2010
6,604
561
126
I think both companies had a lot of say and input on DX12 with MS. The manufacturers just prioritized different things: NVIDIA probably felt it was better to focus on one area, and AMD focused elsewhere. That lands us here today, where each excels at different things over the competition.

And if this is true, we're going to see a lot of sponsored games lean heavily one way. It will then fall to the two companies to optimize their drivers to compensate for what is missing, and frankly I have little confidence AMD can do that after they went so long without a multi-threaded DX11 driver.

Just look at AOTS.
NV hardware runs fine in DX11 mode and then takes a slight hit in DX12 mode.
AMD hardware is floundering in DX11 mode and requires DX12 to just catch up to NV's DX11 mode.

If this one game, which AMD is pouring resources into, is an example of things to come, then I'm worried that when it's an NV-sponsored game the current trend will still exist and AMD hardware will struggle.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Sooner or later Nvidia will have to deal with designs that have over 4000 stream processors ...

What better way is there to get higher utilization than to feed more work?

I don't see why tons of people are downplaying good asynchronous compute implementations, and I also don't see the parallel between TruForm and asynchronous compute, since the former was just an experiment. Even then, it was only after Nvidia burnt a lot of silicon on their PolyMorph engine that they got better tessellation performance than AMD, and as time went on that went for naught when Fiji closed the gap as far as efficiency went ...

You can move most of the work out of the graphics pipeline with asynchronous shaders ...

Both the vertex shader and pixel shader are just compute shaders in disguise ...
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
And if this is true, we're going to see a lot of sponsored games lean heavily one way. It will then fall to the two companies to optimize their drivers to compensate for what is missing, and frankly I have little confidence AMD can do that after they went so long without a multi-threaded DX11 driver.

Just look at AOTS.
NV hardware runs fine in DX11 mode and then takes a slight hit in DX12 mode.
AMD hardware is floundering in DX11 mode and requires DX12 to just catch up to NV's DX11 mode.

If this one game, which AMD is pouring resources into, is an example of things to come, then I'm worried that when it's an NV-sponsored game the current trend will still exist and AMD hardware will struggle.

I think, for the next 3-4 years, DX12 and GCN in consoles will level the playing field. By default, console ports will come more optimized for AMD than Nvidia because of GCN. Nvidia probably knows this and will adjust Pascal's uarch to address this.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I think, for the next 3-4 years, DX12 and GCN in consoles will level the playing field. By default, console ports will come more optimized for AMD than Nvidia because of GCN. Nvidia probably knows this and will adjust Pascal's uarch to address this.

Or devs don't rely too heavily on the ACEs due to such wide discrepancies between the Xbone/PS4, and the hardware functions in GCN go largely unused.

We're entering the second year of console games and only three are slated to use the new feature, of which we have no PC versions yet to draw conclusions from.

If, say, one of these games does perform better on Maxwell 2, does that mean AotS is the exception? Don't know. Got to wait and see.

I just think people are assuming that because AMD hardware is in the new consoles, this will automatically give AMD an advantage. AMD hardware was in the leading generation-7 console, and it didn't do jack for AMD PC GPU performance.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
As much as I hate to say it, this is the kind of post that rings true. AMD has yet to make any of the techs they produce lead to their success. They're first to the party, sure, but they seem to fall asleep once the party actually starts.

This could be the start of a new thing, but I still remember Bulldozer, and Mantle, and TrueAudio, and tessellation, and unified shaders.

At this point I'm firmly in the "wait and see" crowd. It can go either way, but NV's money pot and greed are not something to underestimate.
Nah, more like they don't know how to market themselves. AMD needs to create fanbois on tech forums to spread the good news around for free ^_^ They need NV-level marketing, so good it creates free internet warriors to work for them. If you visit this forum and other major tech forums, you would know this.

@buzzkiller, read their post history and you would understand exactly why.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
Or devs don't rely too heavily on the ACEs due to such wide discrepancies between the Xbone/PS4, and the hardware functions in GCN go largely unused.

We're entering the second year of console games and only three are slated to use the new feature, of which we have no PC versions yet to draw conclusions from.

If, say, one of these games does perform better on Maxwell 2, does that mean AotS is the exception? Don't know. Got to wait and see.

I just think people are assuming that because AMD hardware is in the new consoles, this will automatically give AMD an advantage. AMD hardware was in the leading generation-7 console, and it didn't do jack for AMD PC GPU performance.

That's the thing. The missing puzzle piece was DX12. It doesn't matter that AMD had hardware in previous consoles, because DX11 had to work differently. DX11 didn't allow developers to leverage the hardware at the level they could on consoles. With Mantle and DX12 being so similar, and the consoles using GCN, the writing is on the wall. There IS an inherent advantage in favor of AMD's GPUs. That doesn't mean AMD's GPUs are better. It just means most of the coding is already done with GCN in mind.
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
Based on what I've gathered in the b3d forums, there is an issue with terminology here. Both Maxwell and Kepler can do asynchronous dispatch of compute threads (what most, including Nvidia, would call async compute).
What they can't do, however, is concurrent async compute and graphics (what Oxide and AMD refer to: doing graphics and compute at the same time, in parallel).

The picture that Silverforce11 posted earlier is a decent explanation: see how the orange blocks are split from everything else in the top timeframe, and in the bottom they are mixed into the blue blocks, filling up resources that would otherwise have gone unused? That is concurrent compute and graphics (at least as far as I can tell).

The picture that Carfax has posted shows that Maxwell definitely can do 32 compute threads in parallel. I'm not sure if it's a good demonstration of async compute, because those threads are the same length and have no reason to stall or pass each other.
What should be fairly easy to tell, however, is that they don't fill up any resources that are unused during the graphics rendering, because the offset between pure compute and compute+graphics is nearly the same no matter how much compute is loaded onto the GPU. That means either Maxwell has no unused resources during graphics rendering (unlikely) or compute is calculated before or after graphics in that program.

No one is disputing that it can do asynchronous computing of purely compute-based instructions.

It's mixed-mode compute+graphics that everyone is concerned about, which is not looking like a possibility.
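To put that distinction in API terms: D3D12 lets an application create separate graphics and compute queues, and concurrency is something the hardware may or may not extract from that. A minimal C++ sketch (my own illustration, not Oxide's code; the command lists are assumed to be recorded elsewhere with matching types, and error handling is omitted):

Code:
// Minimal sketch: submitting graphics and compute work on separate
// D3D12 queues. The API only *allows* the two submissions to overlap;
// whether they actually run concurrently is up to the hardware/driver,
// which is exactly the Maxwell question.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitGraphicsAndCompute(ID3D12Device* device,
                              ID3D12CommandList* gfxList,     // recorded as DIRECT
                              ID3D12CommandList* computeList) // recorded as COMPUTE
{
    // Graphics (DIRECT) queue: accepts draw, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute queue: compute and copy only. Work submitted here may
    // fill shader-core gaps left by the graphics queue on capable GPUs.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // No fence between the two submissions, so the driver is free to
    // run them in parallel (GCN's ACEs) or to serialize them (the
    // concern with Maxwell).
    ID3D12CommandList* gfx[] = { gfxList };
    ID3D12CommandList* cmp[] = { computeList };
    gfxQueue->ExecuteCommandLists(1, gfx);
    computeQueue->ExecuteCommandLists(1, cmp);
}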

As much as I hate to say it, this is the kind of post that rings true. AMD has yet to make any of the techs they produce lead to their success. They're first to the party, sure, but they seem to fall asleep once the party actually starts.

This could be the start of a new thing, but I still remember Bulldozer, and Mantle, and TrueAudio, and tessellation, and unified shaders.

At this point I'm firmly in the "wait and see" crowd. It can go either way, but NV's money pot and greed are not something to underestimate.

I wouldn't say never. Look at how DX9 played out; ATI definitely dominated. Nvidia was behind for quite a while, until the G80 chips.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Go read up on PS4/Xbone game development and how they make this weak hardware push the best-looking games of the year, putting a lot of GameWorks games to shame.


Sorry, some of the games look good, but nowhere close to what games look like on a higher-end PC.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
That's the thing. The missing puzzle piece was DX12. It doesn't matter that AMD had hardware in previous consoles, because DX11 had to work differently. DX11 didn't allow developers to leverage the hardware at the level they could on consoles. With Mantle and DX12 being so similar, and the consoles using GCN, the writing is on the wall. There IS an inherent advantage in favor of AMD's GPUs. That doesn't mean AMD's GPUs are better. It just means most of the coding is already done with GCN in mind.


Only with regard to the Xbox One, because the PS4 does not use DX12 but rather a proprietary API based on certain features of DX12. It's not like you can import all the code and be done.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Then explain why the simultaneous compute and graphics graph is about equal to the individual compute and graphics times added together?

New development. Apparently NVidia's drivers don't have complete asynchronous compute functionality like Oxide thought. Oxide dev Kollock said this:

Regarding Async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

Source

So apparently asynchronous compute is broken on NVidia hardware right now because it hasn't been fully implemented in the drivers yet.. But NVidia definitely has plans to do it..
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I don't see why tons of people are downplaying good asynchronous compute implementations, and I also don't see the parallel between TruForm and asynchronous compute, since the former was just an experiment. Even then, it was only after Nvidia burnt a lot of silicon on their PolyMorph engine that they got better tessellation performance than AMD, and as time went on that went for naught when Fiji closed the gap as far as efficiency went ...

Well I for one wasn't downplaying what AMD has done. In fact, if you read my OP I had nothing but praise for them as I consider it to be a masterful plot to exploit their domination of the console market and close the performance gap with NVidia.

And asynchronous shaders obviously have a lot of uses. My main contention was that AMD's marketing team was trying to make it seem as though asynchronous compute was a major feature of DX12 that required explicit hardware support.

Obviously it doesn't. GPUs have been capable of doing asynchronous compute for some time now; they just needed an API to expose that capability.
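Concretely, the "exposing" in D3D12 amounts to multiple queues plus explicit fences, instead of one implicit stream the driver reorders for you. A hedged C++ sketch of the synchronization half (my own illustration; the queues are assumed to be created as in the earlier sketch):

Code:
// Sketch: cross-queue synchronization with a fence, for the case where
// the compute queue produces data the graphics queue then consumes.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SyncComputeToGraphics(ID3D12Device* device,
                           ID3D12CommandQueue* computeQueue,
                           ID3D12CommandQueue* gfxQueue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // GPU-side signal on the compute queue once its queued work is done...
    computeQueue->Signal(fence.Get(), 1);

    // ...and a GPU-side wait on the graphics queue. Only work submitted
    // to gfxQueue after this call is held back until the fence hits 1;
    // anything without such a dependency is still free to overlap.
    gfxQueue->Wait(fence.Get(), 1);
}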

But of course AMD has gone balls deep into it by adding explicit hardware support, and so they should reap greater performance gains than NVidia or Intel..
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well I for one wasn't downplaying what AMD has done. In fact, if you read my OP I had nothing but praise for them as I consider it to be a masterful plot to exploit their domination of the console market and close the performance gap with NVidia.

And asynchronous shaders obviously have a lot of uses. My main contention was that AMD's marketing team was trying to make it seem as though asynchronous compute was a major feature of DX12 that required explicit hardware support.

Obviously it doesn't. GPUs have been capable of doing asynchronous compute for some time now; they just needed an API to expose that capability.

But of course AMD has gone balls deep into it by adding explicit hardware support, and so they should reap greater performance gains than NVidia or Intel..

Many people don't consider this praiseworthy. More like a backhanded compliment.
 
Feb 19, 2009
10,457
10
76
As much as I hate to say it, this is the kind of post that rings true. AMD has yet to make any of the techs they produce lead to their success. They're first to the party, sure, but they seem to fall asleep once the party actually starts.

GDDR5 helped their success in the 4800 and 5800 series. Then NV caught up.

Potentially, HBM and async compute can help their success in the next few generations, until NV catches up. Not saying it will or won't happen, just that there's the potential.

Oh, and a software scheduler for queues and async compute would negate the latency and performance improvements. That's what "drivers fixing async compute" amounts to when you can't get the hardware itself to do it right.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Many people don't consider this praiseworthy. More like a backhanded compliment.

People can believe whatever they like. AMD is a business, not a charity. They can't afford to be honorable at all times with NVidia and Intel. If they have an advantage, they should ruthlessly exploit it.

With their complete domination of the console market, developers will have no choice but to make heavy use of asynchronous compute in their engines, as GCN requires it to achieve its full potential. And even if GCN did not require it, it's still a good way to maximize performance.

It will be interesting to see what NVidia does with Pascal: whether Pascal has dedicated hardware ACEs like GCN, with their own internal schedulers, or sticks with the more power-efficient approach of Maxwell, where the GMU functions as a single massive asynchronous compute engine, but with additional queues, say up to 64 instead of 32.

Besides, NVidia could stand to lose a bit of market share if you ask me. I don't want to see AMD go down any further, for the sake of healthy competition.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
People can believe whatever they like. AMD is a business, not a charity. They can't afford to be honorable at all times with NVidia and Intel. If they have an advantage, they should ruthlessly exploit it.
That ship has sailed. If AMD had kept Mantle to themselves for a couple of years, then maybe they could have given themselves a major advantage. Now, at best, AMD will be able to exploit DX12 being possibly more GCN-friendly; not a bad thing, but not a knockout blow by any means. Make no mistake, AMD needs to do some serious damage to Nvidia, and fast.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That ship has sailed. If AMD had kept Mantle to themselves for a couple of years, then maybe they could have given themselves a major advantage. Now, at best, AMD will be able to exploit DX12 being possibly more GCN-friendly; not a bad thing, but not a knockout blow by any means. Make no mistake, AMD needs to do some serious damage to Nvidia, and fast.

Keeping Mantle to themselves for two more years wouldn't have changed anything. DX12 was always going to take priority over Mantle, and despite what many AMD supporters think, DX12's development was not accelerated because of Mantle.

DX12 was going to launch with Windows 10 no matter what..
 

geoxile

Senior member
Sep 23, 2014
327
25
91
My assumption:

AMD worked with both MSFT and Sony to develop a close-to-metal API that saw decent gains compared to DX11. MSFT then started tweaking it, and based on this they created DX12. Since both the Xbox and PS4 have similar GCN GPUs, it is only natural that DX12 integrates better with GCN than with anything Intel or Nvidia have designed. That doesn't mean that they won't get better; they will, but it will require them to launch new archs.

Hence why Nvidia and Intel are quiet about it

...AMD's masterful plan is unveiled. Mantle's purpose was to convince partners that such APIs had a future in x86 applications.

I'm going to say your assumption is probably wrong.

http://www.eurogamer.net/articles/d...its-really-like-to-make-a-multi-platform-game

According to the Metro devs, the Xbox One didn't even have a low-level API at launch, and none was ready in time for the devs to implement, even though we've heard of devs implementing DX12 in a matter of weeks with small teams. As far as we know, MS just did zero work on low-level APIs until "recently", whereas Sony seems to have had one ready out of the gate.

Also, Mantle was not designed by AMD. AMD likely didn't do much work with Sony or MS until after early Mantle was designed by EA DICE's Johan Andersson.

http://www.heise.de/newsticker/meld...ber-AMDs-3D-Schnittstelle-Mantle-2045398.html

Johan laid the foundation for the API that would become Mantle and started work on it in 2008 (five years before the interview in 2013, he says).

AMD came in later as they were the only ones to agree with Johan's ideas, but the paradigm behind Mantle is something Johan designed after having worked with consoles for a long time.

My impression is that Sony made their own low-level API for the PS4 right out of the gate. Johan designed his own hypothetical API based on what he had learned, Mantle was then based on that, and DX12 was likely designed after seeing some of those ideas become concrete in Mantle's early designs.

That ship has sailed. If AMD had kept Mantle to themselves for a couple of years, then maybe they could have given themselves a major advantage. Now, at best, AMD will be able to exploit DX12 being possibly more GCN-friendly; not a bad thing, but not a knockout blow by any means. Make no mistake, AMD needs to do some serious damage to Nvidia, and fast.

The sooner Mantle-like APIs become common, the better for AMD.

http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019

This post by a former Nvidia driver dev has some good info on why.

To summarize: with DX11, the IHVs did a ton of handholding via the drivers. DX11 and previous APIs obfuscated the mechanics from game devs, which meant that the IHVs would have to get involved and work in a constant back and forth. Games would be ready for launch with broken renderers that the IHVs would have to fix via driver patchwork to get the game working; if drivers broke something, the devs would have to fix it on their end, and so forth.

Mantle-like APIs do less handholding; the drivers are less complex and have less functionality (don't quote me on that); and the IHVs can do less from the driver-side to fix games. That means more work for devs, and less work for IHVs.

We know that Nvidia is pretty good about handholding games. They always have day-1 patches for big titles and they do a ton of work to fix up a variety of games. That's not even mentioning stories about Nvidia engineers going on-site to actually work on a game, or giving away free testing equipment.

AMD on the other hand has never been very good about supporting devs.

It's clear that the key importance of DX12 and Vulkan is that AMD simply has to do less work to match Nvidia's level of support.
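To make the "less handholding" point concrete: one of the things DX11 drivers quietly did per game was track resource hazards. In D3D12 the app records those barriers itself, and if it gets them wrong there is no driver to paper over it. A minimal C++ sketch (my own illustration; the function name and the particular state transition are hypothetical):

Code:
// Sketch: an explicit resource transition in D3D12. Under D3D11 the
// driver inferred this hazard automatically; here the app must state
// it, which is the "less handholding" trade-off in miniature.
#include <d3d12.h>

void TransitionForShaderRead(ID3D12GraphicsCommandList* cmdList,
                             ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

    // Omit this and you get corruption or a device hang, and unlike
    // DX11, the IHV can't patch it away from the driver side.
    cmdList->ResourceBarrier(1, &barrier);
}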
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The sooner Mantle-like APIs become common, the better for AMD.

http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019

This post by a former Nvidia driver dev has some good info on why.

To summarize: with DX11, the IHVs did a ton of handholding via the drivers. DX11 and previous APIs obfuscated the mechanics from game devs, which meant that the IHVs would have to get involved and work in a constant back and forth. Games would be ready for launch with broken renderers that the IHVs would have to fix via driver patchwork to get the game working; if drivers broke something, the devs would have to fix it on their end, and so forth.

Mantle-like APIs do less handholding; the drivers are less complex and have less functionality (don't quote me on that); and the IHVs can do less from the driver-side to fix games. That means more work for devs, and less work for IHVs.

We know that Nvidia is pretty good about handholding games. They always have day-1 patches for big titles and they do a ton of work to fix up a variety of games. That's not even mentioning stories about Nvidia engineers going on-site to actually work on a game, or giving away free testing equipment.

AMD on the other hand has never been very good about supporting devs.

It's clear that the key importance of DX12 and Vulkan is that AMD simply has to do less work to match Nvidia's level of support.

Optimization of individual games never should have been the job of AMD and Nvidia in the first place. DX12 will put that work back where it belongs, on the shoulders of game developers. Letting DirectX get so out of touch with the underlying hardware that it required all these game-specific optimizations and hacks in the driver was a mistake.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Optimization of individual games never should have been the job of AMD and Nvidia in the first place. DX12 will put that work back where it belongs, on the shoulders of game developers. Letting DirectX get so out of touch with the underlying hardware that it required all these game-specific optimizations and hacks in the driver was a mistake.

Back when there were more than two GPU vendors, you can see where this made perfect sense. It was also like this in the beginning, right after the decline of software rendering. It is funny to see this coming full circle, with the future pointing back to software rendering (think Larrabee).
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Optimization of individual games never should have been the job of AMD and Nvidia in the first place. DX12 will put that work back where it belongs, on the shoulders of game developers. Letting DirectX get so out of touch with the underlying hardware that it required all these game-specific optimizations and hacks in the driver was a mistake.

Honestly, I think games could be even more of a hot mess now. You'll have developers who don't want to, or can't afford to, dedicate the resources to optimizing for the PC and will just try to push code over from the Xbox One. If the game runs, they will leave it. I don't think they will all turn out like this, but with the burden on the developers, and with no way or only limited ways to optimize for a game at the driver level, performance could suffer at times. Just look at how a game like Arkham Knight was handled; that shows pretty well what can happen to a game that isn't properly optimized. Drivers can't fix that.
 
Feb 19, 2009
10,457
10
76
Honestly, I think games could be even more of a hot mess now. You'll have developers who don't want to, or can't afford to, dedicate the resources to optimizing for the PC and will just try to push code over from the Xbox One. If the game runs, they will leave it. I don't think they will all turn out like this, but with the burden on the developers, and with no way or only limited ways to optimize for a game at the driver level, performance could suffer at times. Just look at how a game like Arkham Knight was handled; that shows pretty well what can happen to a game that isn't properly optimized. Drivers can't fix that.

Just never support those developers or publishers who behave that way and treat the PC as a second-class citizen.

For me, that's avoiding all Ubifail and Warner Bros games.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Just never support those developers or publishers who behave that way and treat the PC as a second-class citizen.

For me, that's avoiding all Ubifail and Warner Bros games.

It's tough, though, because Rocksteady had a decent track record, and we didn't find out that they had outsourced the porting of the game until launch. Shadow of Mordor is from WB, and it's fantastic and runs very well.
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
New development. Apparently NVidia's drivers don't have complete asynchronous compute functionality like Oxide thought. Oxide dev Kollock said this:



Source

So apparently asynchronous compute is broken on NVidia hardware right now because it hasn't been fully implemented in the drivers yet.. But NVidia definitely has plans to do it..

Well that seems like good news.

I hope this "issue" gets resolved soon. My GTX 580 is on its way out, requiring more and more voltage to stay stable at stock speeds, so I need a replacement soonish.

Really wish the Fury didn't have only 4 GB of RAM; it seems short-sighted.

It'd be great if I could get another five years out of a card.
 