Vega/Navi Rumors (Updated)


Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
This is ridiculous. It requires developer time to implement architecture-specific features.

What are these "stages" that developers "bypass"?
Why don't you educate yourself first, then comment on forums?

Here is an educational video about Primitive Shaders. The Programmable Geometry Pipeline, which increases the geometry throughput of the Vega architecture, is part of the Primitive Shaders feature.

You are combining both shaders into a single command shader. It saves time from a development point of view.
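To make the idea concrete, here is a toy sketch in Python (my own illustration, not AMD's actual implementation; the thresholds and tests are invented) of the kind of early culling a merged primitive-shader stage can do before any further per-vertex work is spent:

```python
# Toy model of early primitive culling (invented thresholds, not AMD's code):
# reject triangles before any expensive per-vertex attribute work happens.

def signed_area(tri):
    # 2D screen-space signed area; <= 0 means back-facing or degenerate.
    (x0, y0), (x1, y1), (x2, y2) = tri
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def cull(triangles, viewport=(800, 600)):
    survivors = []
    for tri in triangles:
        area = signed_area(tri)
        if area <= 0.0:        # back-facing or zero-area
            continue
        if area < 0.5:         # too small to cover a sample (invented cutoff)
            continue
        xs, ys = zip(*tri)
        if max(xs) < 0 or min(xs) > viewport[0] or \
           max(ys) < 0 or min(ys) > viewport[1]:
            continue           # entirely off screen
        survivors.append(tri)
    return survivors

tris = [
    [(10, 10), (100, 10), (50, 80)],       # visible, front-facing
    [(10, 10), (50, 80), (100, 10)],       # same triangle, wound backwards
    [(0.1, 0.1), (0.6, 0.1), (0.3, 0.5)],  # sub-pixel sliver
    [(-900, 0), (-800, 0), (-850, 50)],    # entirely off screen
]
print(f"{len(cull(tris))} of {len(tris)} triangles survive early culling")
```

The point is that this kind of culling lives in one programmable stage instead of being split across the fixed vertex and geometry stages.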
 

pj-

Senior member
May 5, 2015
481
249
116
Why don't you educate yourself first, then comment on forums?

Here is an educational video about Primitive Shaders. The Programmable Geometry Pipeline, which increases the geometry throughput of the Vega architecture, is part of the Primitive Shaders feature.

You are combining both shaders into a single command shader. It saves time from a development point of view.

How does it save development time if you also have to accommodate Nvidia hardware?
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
How does it save development time if you also have to accommodate Nvidia hardware?
Do you have to do the same things for Nvidia hardware as for AMD? Which hardware is well documented for developers, and which one has very little documentation? How detailed is that documentation?

And finally: how much of the effort to optimize software for Nvidia hardware is done by game developers, and how much is done by Nvidia's own software engineers?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
"That's all"?

How many man-months do you think a game developer is going to spend on architecture-specific enhancements for a tiny fraction of the market? Hint: there's a reason why companies use Gameworks instead of developing their own graphics functions - $$$.

There are more than 90-100 million GCN consoles out there today; add all the GCN PC graphics cards and suddenly Pascal is a tiny fraction of the market. No wonder NV created GameWorks.



edit: Just ask zlatan how much easier it is to optimize a GCN console game for PC GCN graphics cards than for NVIDIA Maxwell, Kepler and Pascal cards.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
There are more than 90-100 million GCN consoles out there today; add all the GCN PC graphics cards and suddenly Pascal is a tiny fraction of the market. No wonder NV created GameWorks.



edit: Just ask zlatan how much easier it is to optimize a GCN console game for PC GCN graphics cards than for NVIDIA Maxwell, Kepler and Pascal cards.

Gameworks is just one weapon in NVidia's arsenal to diminish the effects of console optimizations. The other is their compiler technology. NVidia has among the very best compilers in the industry, and that, I believe, is a large part of how they have been able to maintain their edge over AMD despite so many game engines being heavily optimized for the GCN architecture.

NVidia's compilers use the power of modern multicore CPUs to optimize the instruction stream to their GPUs to maximize performance and hardware utilization. NVidia's dominance in the DX11 era is a testament to the effectiveness of this approach. I believe this also explains why NVidia GPUs suffer a performance penalty when paired with AMD CPUs, because AMD CPUs have lower IPC and less overall performance than their Intel counterparts.

AMD, on the other hand, has relied on hardware scheduling, which can also be highly effective due to being located on die, but which is limited by transistor budget constraints and power usage. As GPUs become faster and more complex, the problem of how to feed them is exacerbated. With hardware scheduling, designers will have to devote more and more die space to improving the capability and performance of their hardware schedulers to keep up with the increased performance and complexity of the GPU.

AMD seems to be struggling with this concept, as Vega's power usage has increased significantly compared to previous generations along with its performance, but not enough to catch up with the competition (which lacks hardware instruction schedulers) in performance per watt.
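As a toy illustration of the software-scheduling side (a sketch of the general idea only; the program, latencies, and one-issue-per-cycle model are invented and are nothing like a real shader compiler), reordering independent operations at compile time can hide latency that naive in-order issue would stall on:

```python
# Toy compile-time instruction scheduling: hoist an independent load so it
# overlaps with other long-latency loads instead of starting after a stall.

# (name, latency_in_cycles, dependencies)
PROGRAM = [
    ("load_a", 4, []),
    ("load_b", 4, []),
    ("add_ab", 1, ["load_a", "load_b"]),
    ("load_c", 4, []),
    ("mul",    1, ["add_ab", "load_c"]),
]

def run(order):
    """Issue one op per cycle in the given order; wait for operands."""
    done_at, cycle = {}, 0
    for name, latency, deps in order:
        ready = max((done_at[d] for d in deps), default=0)
        cycle = max(cycle + 1, ready + 1)   # stall until inputs are ready
        done_at[name] = cycle + latency - 1
    return max(done_at.values())

in_order  = PROGRAM
scheduled = [PROGRAM[0], PROGRAM[1], PROGRAM[3], PROGRAM[2], PROGRAM[4]]

print("program order:", run(in_order), "cycles")   # 11
print("scheduled:    ", run(scheduled), "cycles")  # 7
```

All of the reordering work here happens ahead of time on the CPU, which is the trade-off being described: compiler smarts instead of on-die scheduling hardware.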
 
Last edited:
Reactions: xpea

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You make it sound like using Gameworks by itself makes a game well-optimized on NVIDIA hardware, but that's simply not true. Developers optimize for NVIDIA because they have the majority market share by a significant margin.

I think people forget that over the past several years before the advent of low level APIs, developers lacked the capability to target specific hardware architectures due to the high level of abstraction that DX11 and previous APIs from Microsoft required. So developers would really optimize for DX11, and then it would be up to the drivers to translate those instructions to the GPU. This environment favored NVidia because NVidia is one of the most competent and proficient software developers in the world, especially compared to AMD.

Now with DX12 and Vulkan, developers are finally able to target GPUs on a microarchitectural level, which theoretically should favor AMD more so than NVidia.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
There are more than 90-100 million GCN consoles out there today; add all the GCN PC graphics cards and suddenly Pascal is a tiny fraction of the market.

I've been hearing this song and dance since roughly 2012. It would be nice to finally see it materialize. Every year it's updated with a new asterisk.

Meanwhile, I played Nier: Automata from start through all 26 endings with only two crashes, but while browsing forums for tips, all I kept reading was how the game was basically broken for older NV cards and basically all AMD cards. It took AMD, I believe, a month and a half to fix; the devs didn't do jack.

So much "install base" and AMD still can't get them to use Async Compute in more than the titles they have their hands in.
 
Reactions: xpea and Muhammed

Mopetar

Diamond Member
Jan 31, 2011
8,011
6,455
136
There are more than 90-100 million GCN consoles out there today; add all the GCN PC graphics cards and suddenly Pascal is a tiny fraction of the market.

There's a difference between GCN and Vega. Developers will certainly make some optimizations for GCN in general, but if we use current Vega performance as an indication of how far that gets you, the results aren't nearly good enough. The consoles do very little for AMD (unless the Xbox One X has a lot of the same Vega technologies bolted on, though it's listed as a Polaris derivative), because they don't contain the various new technologies found in Vega, and console developers aren't going to spend time optimizing for something that doesn't exist in the console hardware. If both Microsoft and Sony had held off on upgrades until next year, so that the consoles contained GPUs with Vega technology, you'd have a much better argument.

AMD almost needs to take the same approach as NVidia and build some type of framework or software library that makes it less work for developers to utilize some of Vega's features. It isn't an issue of the features being bad, but of developers needing to spend a lot of time implementing them. Worse yet, they probably end up independently doing a lot of the same work. AMD should try to work with a few major developers (especially those that license their game engines) to incorporate all of those Vega goodies into the major game engines that are going to be used to develop titles over the next several years.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
And finally: how much of the effort to optimize software for Nvidia hardware is done by game developers, and how much is done by Nvidia's own software engineers?

A lot of the effort for optimizing for Nvidia is, indeed, done by Nvidia's driver developers. Game developers like that because it saves them time and budget. Nvidia is willing to do it because it's a competitive advantage. Only AMD loses out. The fact is that, to put it bluntly, AMD needs to hand-hold and baby-sit game developers, to provide them with tons of resources at no cost, if they want to compete. It's that simple - Nvidia does it, so AMD has to do it too, whether it's "fair" or not.

Here is an educational video about Primitive Shaders. The Programmable Geometry Pipeline, which increases the geometry throughput of the Vega architecture, is part of the Primitive Shaders feature.

Please read the Vega architecture release day slide deck and pay close attention to what primitive shaders can and cannot do. They can assist in better culling - removing polygons that are not being displayed. They cannot increase the actual throughput of polygons that need to be drawn. And do you really think Pascal doesn't have some way of automatically culling triangles that don't need to be rendered? The fact remains, GP102 can actually render 6 triangles/clock, and all variants of GCN including Vega can only render a maximum of 4 triangles/clock. When it comes down to games that need to actually display lots of geometry - not throw it away - big Pascal and even big Maxwell have advantages that Vega cannot match, and primitive shaders won't ever be able to fix this even if developers optimize for them (which they won't).
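The raw gap is easy to put into numbers (a back-of-the-envelope sketch; the clock speeds are round assumptions, not measured boost clocks):

```python
# Peak geometry throughput = triangles/clock * clock. Clocks are assumed.
parts = {
    "Vega 10 (4 tri/clk @ ~1.5 GHz, assumed)": (4, 1.5e9),
    "GP102   (6 tri/clk @ ~1.5 GHz, assumed)": (6, 1.5e9),
}
for name, (tri_per_clk, clk_hz) in parts.items():
    print(f"{name}: {tri_per_clk * clk_hz / 1e9:.0f} Gtri/s peak")
```

At similar clocks that is a straight 50% peak-rate advantage for GP102 on geometry that actually has to be drawn, which better culling cannot claw back.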
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
So what AMD needs is GPUOpen?

Granted, AMD really should push for its adoption in engines like CryEngine and UE4.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
There's a difference between GCN and Vega. Developers will certainly make some optimizations for GCN in general, but if we use current Vega performance as an indication of how far that gets you, the results aren't nearly good enough. The consoles do very little for AMD (unless the Xbox One X has a lot of the same Vega technologies bolted on, though it's listed as a Polaris derivative), because they don't contain the various new technologies found in Vega, and console developers aren't going to spend time optimizing for something that doesn't exist in the console hardware. If both Microsoft and Sony had held off on upgrades until next year, so that the consoles contained GPUs with Vega technology, you'd have a much better argument.

AMD almost needs to take the same approach as NVidia and build some type of framework or software library that makes it less work for developers to utilize some of Vega's features. It isn't an issue of the features being bad, but of developers needing to spend a lot of time implementing them. Worse yet, they probably end up independently doing a lot of the same work. AMD should try to work with a few major developers (especially those that license their game engines) to incorporate all of those Vega goodies into the major game engines that are going to be used to develop titles over the next several years.

Does AMD have a compiler that can take PS4/XB1 code and automatically port it to the PC with the console GCN optimizations included, no fuss, plug and play?

If not, why not? PS4/XB1 are just fancy PCs, nothing special about them. If AMD isn't doing this, it's a massive strategic oversight. The hardware development budget for RTG needs to be increased at least 2x-3x if they want to compete... but the software development budget needs to be increased 10x or more; it's a massive bottleneck and one of their biggest obstacles to getting things done in a timely manner.
 
Reactions: crisium

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Gameworks is just one weapon in NVidia's arsenal to diminish the effects of console optimizations. The other is their compiler technology. NVidia has among the very best compilers in the industry, and that, I believe, is a large part of how they have been able to maintain their edge over AMD despite so many game engines being heavily optimized for the GCN architecture.

NVidia's compilers use the power of modern multicore CPUs to optimize the instruction stream to their GPUs to maximize performance and hardware utilization. NVidia's dominance in the DX11 era is a testament to the effectiveness of this approach. I believe this also explains why NVidia GPUs suffer a performance penalty when paired with AMD CPUs, because AMD CPUs have lower IPC and less overall performance than their Intel counterparts.

AMD, on the other hand, has relied on hardware scheduling, which can also be highly effective due to being located on die, but which is limited by transistor budget constraints and power usage. As GPUs become faster and more complex, the problem of how to feed them is exacerbated. With hardware scheduling, designers will have to devote more and more die space to improving the capability and performance of their hardware schedulers to keep up with the increased performance and complexity of the GPU.

AMD seems to be struggling with this concept, as Vega's power usage has increased significantly compared to previous generations along with its performance, but not enough to catch up with the competition (which lacks hardware instruction schedulers) in performance per watt.
I have thought that this might change with Navi. If it is a multi-die GPU, the command processor would be difficult to segment, so they might go the Nvidia route with a software scheduler. That fits very well with their high-core-count CPUs.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
So what AMD needs is GPUOpen?

Granted, AMD really should push for its adoption in engines like CryEngine and UE4.

It's not enough to throw open-source stuff out there. AMD needs to flat-out GIVE developers actual, tangible resources to optimize for their stuff. I mean on the level of "here's one of our top GPU coders, we'll let you borrow him for a couple weeks to work on optimizing your AAA game/engine". If the answer is they can't afford to do this, then they can't afford to compete and might as well drop out - it's as simple as that. Nvidia will do this; if AMD won't, then with their minority market share they are dead in the water and 90% of the way to being completely ignored when it comes to optimizations.
 
Reactions: tonyfreak215

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Does AMD have a compiler that can take PS4/XB1 code and automatically port it to the PC with the console GCN optimizations included, no fuss, plug and play?

If not, why not? PS4/XB1 are just fancy PCs, nothing special about them. If AMD isn't doing this, it's a massive strategic oversight. The hardware development budget for RTG needs to be increased at least 2x-3x if they want to compete... but the software development budget needs to be increased 10x or more; it's a massive bottleneck and one of their biggest obstacles to getting things done in a timely manner.
Sounds good at face value, but as with all things, not everyone might agree.

If you were Sony or Microsoft, would you want game developers to be able to produce probably-better-performing PC ports at will? Why buy a console then?
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
There are more than 90-100 million GCN consoles out there today; add all the GCN PC graphics cards and suddenly Pascal is a tiny fraction of the market. No wonder NV created GameWorks.



edit: Just ask zlatan how much easier it is to optimize a GCN console game for PC GCN graphics cards than for NVIDIA Maxwell, Kepler and Pascal cards.

Clearly, two different markets - otherwise, AMD's dominance in consoles would have turned the tide for them on PCs. This hasn't happened - and this particular argument is shown to be faulty by direct evidence.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Sounds good at face value, but as with all things, not everyone might agree.

If you were Sony or Microsoft, would you want game developers to be able to produce probably-better-performing PC ports at will? Why buy a console then?

PC ports are already there, and already perform better than console ports. That horse has left the barn. Unless there is some explicit agreement with Sony and/or MS barring what I described (unlikely), AMD would have a great deal of upside in creating a good cross-compiler and very little downside.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
If you were Sony or Microsoft, would you want game developers to be able to produce probably-better-performing PC ports at will? Why buy a console then?

I feel this is one of the issues that has led to the console contracts not working out as some expected. Sony is notorious for not sharing their techniques with 3rd-party devs. That's why PhyreEngine was such a surprise to many, but that was Sony trying to get devs to work with PS3 hardware that was already a nightmare.

The PS4 is now the leading console, and Sony's first-party games are most likely using all these fancy features (I mean, Sony was one of the primary driving forces behind how GCN came to be, if my memory serves me right), but Sony games never see the light of day outside of Sony consoles. MSFT was tasked with carrying DX12 into everyone's household, and I'd wager their abysmal performance this generation tore giant holes in AMD's sails.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Why don't you educate yourself first, then comment on forums?

Here is an educational video about Primitive Shaders. The Programmable Geometry Pipeline, which increases the geometry throughput of the Vega architecture, is part of the Primitive Shaders feature.

You are combining both shaders into a single command shader. It saves time from a development point of view.

You seem to have a fundamental misunderstanding of hidden surface removal.

1. In the first couple of minutes of the video it is stated AMD already does it. It has been a feature of all modern GPUs for at least a decade. https://en.wikipedia.org/wiki/Hidden_surface_determination.

Here is a reference to hidden surface removal in Quake from 1996: https://www.gamedev.net/articles/programming/graphics/quake-hidden-surface-removal-r656/.
GPU accelerated hidden surface removal 2005: http://research.nvidia.com/sites/de...-07_GPU-Accelerated-High-Quality/gpuhider.pdf
And finally Hidden surface removal in the ATI X1000 series in 2005: http://techreport.com/review/8864/ati-radeon-x1000-series-gpus/2

2. The video goes on to explain that Vega can perform hidden surface removal earlier in the rendering process, so an attribute fetch isn't needed for culled triangles, potentially saving a memory access (some rough numbers on this after the list). That's it, nothing more.

3. Absolutely at no time in the video is the actual implementation discussed. To say that it means that developers can somehow skip programming steps is completely false.
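To put rough numbers on point 2 (every figure below is an assumption chosen only to show the order of magnitude, not a measurement):

```python
# Rough estimate of attribute-fetch traffic avoided by culling before the
# fetch. All inputs are assumptions for illustration.
tris_per_frame   = 2_000_000   # assumed scene complexity
culled_fraction  = 0.5         # assumed share of triangles culled early
bytes_per_vertex = 32          # assumed: position + normal + UV
fps              = 60

saved_bytes = tris_per_frame * culled_fraction * 3 * bytes_per_vertex * fps
print(f"attribute fetches avoided: ~{saved_bytes / 1e9:.1f} GB/s")   # ~5.8
```

A few GB/s of avoided memory traffic is a real efficiency win, but as point 2 says, it does not raise the peak rate for triangles that must actually be drawn.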
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
There are more than 90-100 million GCN consoles out there today; add all the GCN PC graphics cards and suddenly Pascal is a tiny fraction of the market. No wonder NV created GameWorks.



edit: Just ask zlatan how much easier it is to optimize a GCN console game for PC GCN graphics cards than for NVIDIA Maxwell, Kepler and Pascal cards.

As I said earlier:

So we're back to the "AMD is in the consoles so they will be better on PC" argument.

Which has been true 0% of the time since that argument was started somewhere around eight years ago.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I have thought that this might change with Navi. If it is a multi-die GPU, the command processor would be difficult to segment, so they might go the Nvidia route with a software scheduler. That fits very well with their high-core-count CPUs.

I don't think AMD will go that route, as their compilers likely just aren't good enough. And I base this comment on their DX11 drivers. NVidia is right up there at the very top along with Intel when it comes to compiler technology.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
RTG is what, 3000 employees? I bet that many people at NVIDIA work on driver and software development alone. That's what makes the real difference.

The console argument doesn't work as a general rule because Sony doesn't make their tools available for the PC. We have yet to see checkerboarding implemented in a PC port, just to give one example, while it works very well on GCN GPUs in consoles. You have to deal with DirectX when you're porting to Windows anyway, so a lot of the effort spent on making the game performant on consoles is wasted, and you have to start all over again. Plus, the fact that DX12 is difficult to get right doesn't help much either. So naturally, with NVIDIA's commanding market share and better software environment, their hardware will always have the edge when it comes to DX11, which is the dominant API in use today and will likely remain that way for a while.
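For what it's worth, the core of checkerboard rendering is simple to sketch (a toy in Python/NumPy showing the general technique only; Sony's actual reconstruction also reprojects the previous frame with motion vectors and rejects stale pixels):

```python
import numpy as np

# Toy checkerboard rendering: shade half the pixels each frame in a
# checker pattern and fill the holes from the previous frame.
H, W = 4, 8
frame_prev = np.zeros((H, W))                          # last full frame
shaded = np.arange(H * W, dtype=float).reshape(H, W)   # stand-in shading

yy, xx = np.mgrid[0:H, 0:W]
mask = (xx + yy) % 2 == 0          # pixels shaded this frame

frame = np.where(mask, shaded, frame_prev)   # holes filled from history
print(frame)
```

Half the shading work per frame, at the cost of the reconstruction logic, which is exactly the kind of console technique that never seems to make the jump to PC ports.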
 
Reactions: Kuosimodo

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Now with DX12 and Vulkan, developers are finally able to target GPUs on a microarchitectural level, which theoretically should favor AMD more so than NVidia.

They need Sony to stop being Sony, otherwise it's up to Microsoft, and it's clear they don't want to be in the hardware business much longer.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
RTG is what, 3000 employees? I bet that many people at NVIDIA work on driver and software development alone. That's what makes the real difference.

Exactly. On the CPU side, AMD can just make good x86 processors (Ryzen) and the corresponding chipsets, and the only "software" they have to write is the AGESA microcode. On the GPU side, that doesn't fly - it's a whole ecosystem. Like I said, RTG needs 10x as many software developers as they have if they want to compete seriously.

The console argument doesn't work as a general rule because Sony doesn't make their tools available for the PC. We have yet to see checkerboarding implemented in a PC port, just to give one example, while it works very well on GCN GPUs in consoles. You have to deal with DirectX when you're porting to Windows anyway, so a lot of the effort spent on making the game performant on consoles is wasted, and you have to start all over again. Plus, the fact that DX12 is difficult to get right doesn't help much either. So naturally, with NVIDIA's commanding market share and better software environment, their hardware will always have the edge when it comes to DX11, which is the dominant API in use today and will likely remain that way for a while.

Yes, Sony obviously doesn't care about PC compatibility. This is why AMD needs to take the lead and write a cross-compiler that will take PS4 code and turn it into Vulkan code on Windows. Yes, it will be a lot of work. But if they could do it, then they basically get the GCN performance optimizations they need "for free" on every PC-ported title originally written for the PS4, since the console coders had to do that work anyway to make their code run at acceptable speed on the weaker console APUs.
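The command-stream half of such a layer is conceptually straightforward. Here is a hypothetical sketch (the console-side command names are invented for illustration, since Sony's real PS4 API is not public; the Vulkan names are real entry points used only as string labels):

```python
# Hypothetical command-stream lowering: invented console ops -> Vulkan calls.
CONSOLE_TO_VULKAN = {
    "SetPipeline":     "vkCmdBindPipeline",
    "SetVertexBuffer": "vkCmdBindVertexBuffers",
    "DrawIndexed":     "vkCmdDrawIndexed",
    "DispatchCompute": "vkCmdDispatch",
}

def translate(console_stream):
    """Lower a recorded console command stream to Vulkan-call names."""
    lowered = []
    for op, args in console_stream:
        if op not in CONSOLE_TO_VULKAN:
            raise NotImplementedError(f"no Vulkan lowering for {op}")
        lowered.append((CONSOLE_TO_VULKAN[op], args))
    return lowered

stream = [("SetPipeline", "gbuffer"), ("SetVertexBuffer", 0),
          ("DrawIndexed", 36_000)]
for call, args in translate(stream):
    print(call, args)
```

The genuinely hard parts are everything this sketch omits - translating compiled GCN shader code, memory layouts, and synchronization semantics - which is exactly where that 10x software budget would have to go.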
 
Reactions: crisium and xpea
Mar 11, 2004
23,177
5,641
146
This is at railven.

https://www.techpowerup.com/reviews/AMD/Vega_Microarchitecture_Technical_Overview/4.html

AMD actually added a feature that could help DX11 performance too (conservative rasterization). I'm not sure if it'll actually be backported to support the DX11 implementation or if it's just for DX12. Especially noteworthy is that multi-view rendering is one of the things Nvidia talked up with Pascal, and it's what that video was talking about with the discarding. From what I gather, they basically check rendering from multiple angles and discard as much as they can, preventing them from having to discard a lot of the same stuff over and over. That specifically targets some of the tessellation stuff that Nvidia pulled with Gameworks, where they'd jack the tessellation rate up to a ridiculous level for no benefit (there's a reason the video mentions culling things that are too small to even show up). And AMD has some Vega-specific discard path that is faster or discards a lot more, which apparently isn't implemented so far (although in general Vega has more discard capability than Fiji).
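A quick aside on what conservative rasterization actually means, since the term is easy to gloss over (my own simplified toy below; real hardware expands the triangle's edge equations rather than testing pixel corners, so treat this as the idea, not the method):

```python
# Standard rasterization: covered only if the pixel CENTRE is inside the
# triangle. Conservative rasterization: covered if the triangle touches
# ANY part of the pixel square (approximated here with a corner test).

def edge(p, a, b):
    # > 0 when p is left of the directed edge a->b (CCW triangle assumed)
    return (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0])

def inside(p, tri):
    a, b, c = tri
    return edge(p, a, b) >= 0 and edge(p, b, c) >= 0 and edge(p, c, a) >= 0

def standard_covered(px, py, tri):
    return inside((px + 0.5, py + 0.5), tri)        # centre sample only

def conservative_covered(px, py, tri):
    corners = [(px, py), (px+1, py), (px, py+1), (px+1, py+1)]
    return any(inside(c, tri) for c in corners)     # simplified over-test

tri = [(0.0, 0.0), (0.9, 0.0), (0.0, 0.9)]          # clips a pixel corner
print(standard_covered(0, 0, tri))      # False: centre is missed
print(conservative_covered(0, 0, tri))  # True: the pixel is still touched
```

That over-inclusive coverage is what makes it useful for things like voxelization and occlusion tests rather than plain speed.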

That article also absolutely supports that GCN5/Vega is enabling extra robustness for developers to manage issues in the pipeline (see slide 20). It will obviously be up to developers to really make good use of it, though. But it means that if they optimize, they can both mitigate what would otherwise be stalls or things they'd be stuck waiting on to get processed (i.e., discarding instead of fully processing) and adjust to get more work done (and it seems like they could do this at multiple stages, giving them more flexibility to manage things).

And that is Glo's point (even if I do think he's being pretty optimistic about how much it will improve things; he's talking about the objective max potential there, not saying it will absolutely happen, from what I've seen). There are a lot of features that could make Vega much, much better, but it requires developers putting in the effort, and it will take time to implement. The reason he's probably so optimistic is that, from what I'm seeing, this is all stuff that is part of the DX12 feature set moving forward, meaning Nvidia will almost certainly be doing the same (in some instances it already is), so it's not like developers are likely to ignore it. Unless Nvidia really starts black-boxing stuff and basically doing Nvidia-specific paths (while also putting in the work in the driver), developers putting in the work will benefit both. If I'm not mistaken, developers have even spoken out about how black-boxed stuff just ended up causing their games to be broken on release (not even just for AMD either), and generally when we saw a game with Gameworks (especially the broken ones), the sequel had the feature but implemented by the developer, and performance vastly improved while quality didn't seem to suffer much, if at all.

As for the consoles, the PS4 Pro and Scorpio are really the first ones to offer even some of the new Vega features (but they don't offer the full set that Vega does, meaning it'll be the consoles after these that probably will; and I strongly disagree with your dismissal of how important being in the consoles is for AMD, as it is absolutely responsible for the gains we saw with just some of the DX12/Vulkan features being implemented in games like Doom). And that has been my point: Vega is another forward-looking design. So this implementation of the Vega architecture might be a dud (for various reasons, which could be hardware stuff that can't be fixed or improved with software updates and newer software) and never fully show its potential. But it might also be like the 7970, where utilizing its feature set could bring large gains some time later (meaning, yes, years). Yes, that really doesn't help now (once again, always always always always always buy based on perf/$ when you buy, not future performance), but this stuff is the groundwork for what graphics processing will be doing (Nvidia is already saying they're going to support plenty of this in the future, RPM being one thing they specifically said Volta will bring). So people are dismissing the architecture as a failure when we're not even seeing a lot of its features really utilized (I'd guess that even in games/engines that support some of this stuff, the use is quite limited right now).

That's why both Sony and Microsoft have been saying the Pro and One X are not really next-gen consoles: they're more about extending the current stuff with a few extra features that won't make a big difference right now but will likely be a bigger part of things in the future. So that doesn't actually help Vega much, as it'll still be years before this stuff is common to developers, but that's how they've had to go about their GPU design: be forward-facing and hope that developers make good use of it. If not, they look worse (which is what has been happening, although plenty of that was down to other things not related to their GPU performance, things they've been working to recognize and improve across the board; time will tell how it works out).

I feel this is one of the issues that has led to the console contracts not working out as some expected. Sony is notorious for not sharing their techniques with 3rd-party devs. That's why PhyreEngine was such a surprise to many, but that was Sony trying to get devs to work with PS3 hardware that was already a nightmare.

The PS4 is now the leading console, and Sony's first-party games are most likely using all these fancy features (I mean, Sony was one of the primary driving forces behind how GCN came to be, if my memory serves me right), but Sony games never see the light of day outside of Sony consoles. MSFT was tasked with carrying DX12 into everyone's household, and I'd wager their abysmal performance this generation tore giant holes in AMD's sails.

Would AMD be doing better if Xbox was selling in line with PS4? Possibly. But the thing is, that almost certainly would have come at the expense of PS4 sales, so unless they get a higher margin from Microsoft, it likely wouldn't make a big difference.

From what I've seen, Microsoft functions a lot like AMD in that they try to provide documentation but leave the end results up to the developers (I'd say they've been much more developer-friendly than AMD, though AMD has had more of a focus on the hardware for obvious reasons; Microsoft often has competing software, so they're not going to just give away their unique advantages either, and plenty of game developers just use middleware anyway). They do offer tools, but most of that isn't about eking out maximum performance; it's more about limiting outright problems. In the case of the Xboxes it's likely mostly about CPU threading and managing the memory setup (the 360 using eDRAM and the One eSRAM), and then baseline GPU stuff (didn't they just start talking seriously about DX12 on the One like two years ago? Meaning it very clearly was not designed with DX12 in mind, even though it can support a fair amount of its features). Scorpio definitely has a DX12 focus (it has hardware specifically for scheduling DX12 calls so that they can lessen CPU load), and Microsoft has been talking up their analysis tool, which both helped them figure out what they wanted in Scorpio (hardware-wise) and showed them the performance of the games already on Xbox, to get an idea of where they're bottlenecked. I would almost guarantee that will pay dividends.

Sony definitely is more interested in helping their own ecosystem. I believe they've had a few of their star developers help create tools and offer insight (although often it seems aimed at companies willing to do PS exclusives). Sony thinks their low level stuff on the PS4 is better than even Vulkan, but they are supporting it as well. I don't know if a developer could license DX12 for games on the PS4 or not.
 
Reactions: Kuosimodo

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Dat text wall! I read it all and will be responding to specific parts (at work, so time crunch):

This is at railven.

https://www.techpowerup.com/reviews/AMD/Vega_Microarchitecture_Technical_Overview/4.html

AMD actually added a feature that could help DX11 performance too (conservative rasterization). I'm not sure if it'll actually be backported to support the DX11 implementation or if it's just for DX12. Especially noteworthy is that multi-view rendering is one of the things Nvidia talked up with Pascal, and it's what that video was talking about with the discarding. From what I gather, they basically check rendering from multiple angles and discard as much as they can, preventing them from having to discard a lot of the same stuff over and over. That specifically targets some of the tessellation stuff that Nvidia pulled with Gameworks, where they'd jack the tessellation rate up to a ridiculous level for no benefit (there's a reason the video mentions culling things that are too small to even show up). And AMD has some Vega-specific discard path that is faster or discards a lot more, which apparently isn't implemented so far (although in general Vega has more discard capability than Fiji).

I believe that feature was brought in with Polaris, which overall is a good GPU to me - but more on that in a following paragraph.

That article also absolutely supports that GCN5/Vega is enabling extra robustness for developers to manage issues in the pipeline (see slide 20). It will obviously be up to developers to really make good use of it, though. But it means that if they optimize, they can both mitigate what would otherwise be stalls or things they'd be stuck waiting on to get processed (i.e., discarding instead of fully processing) and adjust to get more work done (and it seems like they could do this at multiple stages, giving them more flexibility to manage things).

And that is Glo's point (even if I do think he's being pretty optimistic about how much it will improve things; he's talking about the objective max potential there, not saying it will absolutely happen, from what I've seen). There are a lot of features that could make Vega much, much better, but it requires developers putting in the effort, and it will take time to implement. The reason he's probably so optimistic is that, from what I'm seeing, this is all stuff that is part of the DX12 feature set moving forward, meaning Nvidia will almost certainly be doing the same (in some instances it already is), so it's not like developers are likely to ignore it. Unless Nvidia really starts black-boxing stuff and basically doing Nvidia-specific paths (while also putting in the work in the driver), developers putting in the work will benefit both. If I'm not mistaken, developers have even spoken out about how black-boxed stuff just ended up causing their games to be broken on release (not even just for AMD either), and generally when we saw a game with Gameworks (especially the broken ones), the sequel had the feature but implemented by the developer, and performance vastly improved while quality didn't seem to suffer much, if at all.

And this is where my bitterness and overall contempt for some posters here stems from. What Glo is basically arguing now, others argued about Async Compute and even about the new feature in Polaris that alleviated some of the Gameworks foul play. Polaris was the next HD 4800; it was going to be 90% of a GTX 1080 in certain scenarios for half the price. It was going to use the new culling features and deliver 85% or better of the GTX 1080's DX11 performance for half the price.

That never materialized, so I'm very skeptical of the same predictions, especially when they come from some of the same posters.

And you are right, NV will most likely follow suit. But as I started saying back in 2014 (when someone was telling me the list of DX12 games was going to be the most important metric - and then in 2015, and then in 2016, you get the gist of it), when this does actually happen I expect NV to jump in once the systems are mature and there are no bugs, and I wouldn't be surprised if NV's implementation is superior, thus making all the hardships of AMD pioneering the tech that much more agonizing.

As for the consoles, the PS4 Pro and Scorpio are really the first ones to offer even some of the new Vega features (but they don't offer the full set that Vega does, meaning it'll be the consoles after these that probably will; and I strongly disagree with your dismissal of how important being in the consoles is for AMD, as it is absolutely responsible for the gains we saw with just some of the DX12/Vulkan features being implemented in games like Doom). And that has been my point: Vega is another forward-looking design. So this implementation of the Vega architecture might be a dud (for various reasons, which could be hardware stuff that can't be fixed or improved with software updates and newer software) and never fully show its potential. But it might also be like the 7970, where utilizing its feature set could bring large gains some time later (meaning, yes, years). Yes, that really doesn't help now (once again, always always always always always buy based on perf/$ when you buy, not future performance), but this stuff is the groundwork for what graphics processing will be doing (Nvidia is already saying they're going to support plenty of this in the future, RPM being one thing they specifically said Volta will bring). So people are dismissing the architecture as a failure when we're not even seeing a lot of its features really utilized (I'd guess that even in games/engines that support some of this stuff, the use is quite limited right now).

You might not know my history on ATF, but I was a hardcore ATI/Radeon fanboy. I carried that badge proudly until I got pushed out by miners, when trying to buy a reference 290 meant paying almost 50% over MSRP. So I got a used GTX 780 Lightning (I didn't even want to give NV my fresh dollars, yet).

The problem is, as a CFX 7970 user, those improvements came far too late. I upgrade regularly (on my own schedule), a pace AMD had no problem keeping up with until recently. But it was the frame-pacing issues and the constant borking of advertised features that got to me, and I sold off my 7970s during that first mining craze (of course, I did make some money myself mining).

So back to this post: the DOOM benefits didn't even come at launch. And if you want an interesting little tale - DOOM was slammed as being an NV-sponsored game because it was featured in the GTX 1080 presentation and had issues with AMD cards at launch. So yeah, I'm glad AMD can get these kinds of gains, but in AMD fashion it's often far too late. By the time DOOM got sorted out on AMD hardware, I had already beaten it and moved on. Yes, others benefited from it, but DOOM is now an old game and it's still at the forefront of what AMD hardware can do. That is a little discouraging. Hitman was another saving grace, and then that flopped. Deus Ex: another example of a botched launch. Just about every DX12 game that was supposed to make AMD shine faltered at launch. While I'm not putting the blame on AMD, this just made me more aware of AMD's lack of resources to court devs.

That's why both Sony and Microsoft have been saying the Pro and One X are not really next-gen consoles: they're more about extending the current stuff with a few extra features that won't make a big difference right now but will likely be a bigger part of things in the future. So that doesn't actually help Vega much, as it'll still be years before this stuff is common to developers, but that's how they've had to go about their GPU design: be forward-facing and hope that developers make good use of it. If not, they look worse (which is what has been happening, although plenty of that was down to other things not related to their GPU performance, things they've been working to recognize and improve across the board; time will tell how it works out).

The Pro and Scorpio are mistakes, in my opinion - mistakes that are going to cost AMD. They prolonged a generation where the base models still have to be supported. We're basically stagnating on 2013 consoles that featured GPU hardware that was already obsolete. Sure, the Pro and Scorpio might look better for consoles, but overall progress will falter. Sony, with their devs, will make beautiful games (looking at you, new Spider-Man), but what good will that do AMD and other devs? Not much.

Would AMD be doing better if Xbox was selling in line with PS4? Possibly. But the thing is, that almost certainly would have come at the expense of PS4 sales, so unless they get a higher margin from Microsoft, it likely wouldn't make a big difference.

It's a dog-eat-dog world.

From what I've seen, Microsoft functions a lot like AMD in that they try to provide documentation but leave the end results up to the developers (I'd say they've been much more developer-friendly than AMD, though AMD has had more of a focus on the hardware for obvious reasons; Microsoft often has competing software, so they're not going to just give away their unique advantages either, and plenty of game developers just use middleware anyway). They do offer tools, but most of that isn't about eking out maximum performance; it's more about limiting outright problems. In the case of the Xboxes it's likely mostly about CPU threading and managing the memory setup (the 360 using eDRAM and the One eSRAM), and then baseline GPU stuff (didn't they just start talking seriously about DX12 on the One like two years ago? Meaning it very clearly was not designed with DX12 in mind, even though it can support a fair amount of its features). Scorpio definitely has a DX12 focus (it has hardware specifically for scheduling DX12 calls so that they can lessen CPU load), and Microsoft has been talking up their analysis tool, which both helped them figure out what they wanted in Scorpio (hardware-wise) and showed them the performance of the games already on Xbox, to get an idea of where they're bottlenecked. I would almost guarantee that will pay dividends.

MSFT's lackluster efforts this generation should have been a red flag for AMD. In Gen 6, MSFT was courting pretty much everyone they could, securing a great roster of titles to bolster their platform. This generation, barely anything - and then they collapsed and started making their own platform exclusives available on PC.

We can argue about what the Xbox One was designed for, but one of its huge marketing points was DX12 - to the point where, just like over on the PC side, I had to read through countless console fanboy wars about when DX12 would be enabled and how the Xbox One would match or eclipse the PS4. That also never happened.

Sony definitely is more interested in helping their own ecosystem. I believe they've had a few of their star developers help create tools and offer insight (although often it seems aimed at companies willing to do PS exclusives). Sony thinks their low level stuff on the PS4 is better than even Vulkan, but they are supporting it as well. I don't know if a developer could license DX12 for games on the PS4 or not.

Sony is a Japanese company through and through (the stereotypes of a xenophobic culture apply). It was only during the PS3's failings that Sony became a little more modest. But now look at them with the PS4: they are back to the "you will get two jobs to afford one" mentality.
 
Reactions: Muhammed and Konan