imaheadcase
Also, the shadows were messed up; they flashed in and out.
Ball is in the court of DICE, they have access to memory management and they aren't memory managing the 285 ideally, it being managed like it's "first gen GCN".

I'm not sure what you are trying to say here... Is it the fault of the game developer, the engine developer, or the driver? What needs to be updated for newer cards to work?
What I want to know is: can we expect something similar with DX12 and new cards? What is the issue with the 285 and Mantle, who is responsible, and why hasn't it been mitigated? My guess is it lies with the engine developers or the game developers. I haven't heard of any problems with the 285 and Civ:BE or DAI using Mantle, but I couldn't find many reviews. I wanted to check for an update from AnandTech on this, but they didn't review the 960, and the Titan X review doesn't include the 285.
Sorry if this seems off-topic, but can you imagine a newer, faster card being released but showing up as slower in DX12 games because "whatever" hasn't been updated yet?
There's nothing surprising in that post and it confirms that low level APIs aren't viable. If AAA games are shipping broken now, that's going to increase tenfold when the driver complexities needed to deal with them are shifted onto game developers.
He says the end result "just works". Sure, as long as it's running on the hardware he originally coded it for. But what happens on future hardware? Is he expecting the whole software industry to constantly patch their games whenever new nVidia/AMD/Intel hardware arrives? We've already seen Thief and BF4 broken with Mantle and the 285.
Low level APIs are only viable for fixed/embedded hardware (e.g. consoles) or back in the primitive days of DOS.
I expect DX12 will have more impact than Mantle/Vulkan simply because of its exposure, but DX11 is going to remain overwhelmingly widespread for the reasons above. No customer is going to accept all of their games breaking (which includes running slower than they did before) whenever they buy a new graphics card.
Ball is in the court of DICE, they have access to memory management and they aren't memory managing the 285 ideally, it being managed like its "first gen GCN".
It could be an issue with the way they coded Mantle support (e.g. hard-coded things that need to be variable), or it could just be that now that DICE is focused on the engine, the people focused on BF4 don't care enough.
Saying it is in DICE's hands, do you mean them as the engine developer or the game developer? Visceral Games developed BF Hardline, using DICE's FB3 engine.
I wonder if Mantle/DX12 will close the performance gap between consoles and what PCs were capable of pulling off.
If you are talking draw calls then yes, that's mostly what it's for. It will allow much easier porting of console games because they won't have to optimize the draw calls for the PC.
GPUs have just advanced so much faster than CPUs lately that the bottleneck for games was shifting to the CPU, unless you wanted to use some really high AA or lighting setting to simply load down the GPU.
I wonder if Mantle/DX12 will close the performance gap between consoles and what PCs were capable of pulling off.
The bottleneck is shifting to the code, IMO.
I don't think it confirms that they are unviable -- developers wouldn't be pushing for this if they weren't prepared for the logical consequences -- but that does seem to be the potential downside of low level APIs that few people are talking about. It sounds like the responsibility for optimizing for new hardware under low level APIs shifts from the IHVs and drivers to the game itself, and thus the game developers. AMD and Nvidia have been consistent with keeping their drivers optimized for individual games, because they need to keep performance up as a selling point. Game developers don't have that motivation, most of their sales happen near release. If new hardware comes out after the game's release, the developer doesn't have as pressing a reason to update their game to better support their hardware. At least, that's how it would seem. We'll have to see what happens in practice.
Environment variable: OKRA_DISABLE_FIX_HSAIL. The HSA Programmer's Reference Manual 1.0P specification was released recently; refer to https://github.com/HSAFoundation/HSA-docs-AMD/wiki for documentation. However, not all users of OKRA have upgraded their code generation to 1.0P HSAIL; some customers are still generating 0.95 HSAIL. For those customers who do not yet generate 1.0P instructions (the majority today), OKRA fixes up the HSAIL instructions to match 1.0P. Note that this is not a universal fix for all possible instructions; it has been done only for a known set of tests we have seen before from our customers (see known issues). But if the OKRA user is already generating 1.0P HSAIL instructions, they should disable OKRA from doing any fixup by setting the environment variable OKRA_DISABLE_FIX_HSAIL=1.
Really hope they will not continue to do that.

Shader code replacement will still be an option with DX12/Vulkan, so improvements by the driver are not going away.
Really hope they will not continue to do that.
The proper way would be to help the developer add changes into the game, not to run arbitrary code instead of what was written.
This would also be a major relief to driver coders.
Yes, but if the devs don't want to work with you, there's no other option.
That, and maybe you find a more effective way to handle things along the way.
In Vulkan, because the driver will have to compile the SPIR-V to its own asm/commands, there are a lot of optimisations possible.
I would say that the bottleneck will be the GPU front end and not the software.
Extensive article at HFR about the possible performance gains with AMD GPUs and DX12.
http://www.hardware.fr/news/14133/gdc-d3d12-amd-parle-gains-gpu.html
Seems that GCN is clearly more advanced than Nvidia's Maxwell, which is limited on parallel commands; also, it's more than obvious that DX12 is literally a carbon copy of Mantle.
That sounds like flamebait and handwaving, but I'll reserve judgement until I read the article. Nvidia is a very smart company; poor design doesn't really slip past them, IMO, excepting their mobile division.
Well, traditionally ATI/AMD usually has the more forward-looking architecture, with the exceptions of G80 vs. R600 and (just because of a lack of tessellation performance) Evergreen vs. Fermi, so it wouldn't be much of a surprise to see history repeating itself. On the other hand, with AMD's shrinking R&D budget, it's unlikely this trend will last forever.
Evergreen/Northern Islands didn't fall short of Fermi just because of tessellation performance. Its VLIW5/4 architecture was well suited for simple graphics rendering and shading, but it fell short in compute and GPGPU tasks.
GCN basically did for AMD what Fermi did for Nvidia, AMD just did it a couple years late.
opening the path for custom-coded shaders "meant to be played on Nvidia" and intentionally gimped to perform poorly on AMD, like FSAA or FXAA or whatever the crappy AA scheme Nvidia cooked up was