DirectX 12 is one of the biggest generational leaps in graphics API history, so how can developer and consumer adoption be accelerated?
The move away from the fixed-function vertex and pixel pipeline led to the birth of OpenGL 2.0 and DirectX 9, which was massive in the sense that it opened up a range of possibilities in both the creative and technical development of games. On the hardware side, we saw a huge change from fixed-function pixel and vertex units to what current graphics hardware still relies on: massive amounts of parallel compute/shader units.
It seems like DX12 and the GCN-style hardware arriving alongside it are the equivalent of the DX9-era hardware and API changes. Over the span of 16 years, graphics APIs have opened up considerably: comparing OpenGL 1.0 and its incredibly simplistic design to Vulkan shows just how much more control (almost all of it) developers now have over the hardware. As time has gone on, the drivers have gotten thinner and are now almost just a means of relaying information to and from the kernel and graphics card(s). This is great for talented indie developers and huge studios, although it doesn't seem practical for people like myself with limited time and limited knowledge of the API. Which leads me to its usage.
I think DX12 could see a massive surge in developer and customer adoption if development environments like Unity and Unreal Engine adopted it as a backend. This brings compatibility issues with it: how do you translate what were once DX11 calls into DX12 calls? The answer is almost universal for every developer: you include an OPTIONAL "glue" layer that translates Unity's and UE4's DX11-facing graphics SDK to DX12, deprecating some DX11 parts to push for DX12-like functionality (a sketch of the idea is below). Although it sounds pointless, this lets game developers transition to DX12 while Unity and UE4 engineers make back-end optimisations to their engines using DX12. It may cause a performance regression in the short run, but significantly better performance and efficiency in the long run.
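To make the glue-layer idea concrete, here's a hypothetical C++ sketch: the engine keeps talking to a DX11-shaped interface, and a swappable backend either forwards calls straight to DX11 as before or batches them into DX12 command lists. None of these names (IRenderBackend, DrawCall, D3D12GlueBackend) are real Unity or UE4 API; it's just the shape of the idea.

```cpp
#include <memory>
#include <vector>

struct DrawCall { /* shaders, buffers, pipeline state ... */ };

// The engine-facing interface stays DX11-shaped; game code never changes.
struct IRenderBackend {
    virtual ~IRenderBackend() = default;
    virtual void SubmitDrawCall(const DrawCall& dc) = 0;
    virtual void Present() = 0;
};

// Legacy path: forwards each call straight to DX11, as today.
struct D3D11Backend : IRenderBackend {
    void SubmitDrawCall(const DrawCall&) override { /* ID3D11DeviceContext::Draw... */ }
    void Present() override { /* IDXGISwapChain::Present */ }
};

// Glue path: buffers DX11-style calls, then records them into DX12 command lists.
struct D3D12GlueBackend : IRenderBackend {
    std::vector<DrawCall> pending;
    void SubmitDrawCall(const DrawCall& dc) override { pending.push_back(dc); }
    void Present() override {
        // Record pending draws into an ID3D12GraphicsCommandList,
        // execute it on the command queue, then clear the batch.
        pending.clear();
    }
};

// The backend is chosen once at startup; everything above it is untouched.
std::unique_ptr<IRenderBackend> CreateBackend(bool useDx12) {
    if (useDx12) return std::make_unique<D3D12GlueBackend>();
    return std::make_unique<D3D11Backend>();
}
```

The point is that the translation cost lives entirely inside the glue backend, which is exactly where engine engineers can later optimise (batching, parallel command-list recording) without breaking game code.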
Adding to that: there will almost certainly be DX11-like wrappers released for DX12 that some developers may choose to use. These let them carry the knowledge they built up developing with DX11 over to a DX12 rendering pipeline, with the added possibility of tapping into newly exposed DX12-specific features and optimisations.
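Microsoft's D3D11On12 is one such layer: it puts a DX11 device on top of a real DX12 device and command queue, so existing DX11 rendering code runs over a DX12 pipeline. A minimal setup sketch (error handling and adapter selection omitted):

```cpp
#include <d3d11on12.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    // Create the underlying DX12 device on the default adapter.
    ComPtr<ID3D12Device> device12;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device12));

    // DX12 makes the command queue explicit; the 11-on-12 layer submits to it.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device12->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // Layer a DX11 device and immediate context on top of the DX12 queue.
    ComPtr<ID3D11Device> device11;
    ComPtr<ID3D11DeviceContext> context11;
    D3D11On12CreateDevice(
        device12.Get(), D3D11_CREATE_DEVICE_BGRA_SUPPORT,
        nullptr, 0,                                            // default feature levels
        reinterpret_cast<IUnknown**>(queue.GetAddressOf()), 1, // queues to submit on
        0, &device11, &context11, nullptr);

    // From here, existing DX11 code using device11/context11 runs over DX12.
}
```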
An evolving high-level API like DX will eventually have to expose more and more of its internals, and that's exactly what we've been seeing through every iteration of DX. Hacky driver tweaks from manufacturers to boost game performance are all but abolished with DX12, and the API can finally be used the way it was meant to be, without any "ifs" or "buts". Now IHVs are forced to adapt to what developers are doing: developers write the software, and hardware vendors build the hardware that best fits that software. Good examples of this are Imagination Technologies' TBDR (tile-based deferred rendering) and AMD's discard accelerator, which adapt the hardware to cope with unoptimised software via hardware optimisations rather than gimmicky software tricks (per-game driver updates).
I see a group of misinformed users here who think small, anomalous regressions in DX12 performance are a bad sign, forgetting how the shift to DX9 and DX9-class hardware was necessary to move the graphics industry along, despite that change making graphics cards less efficient (fixed-function units vs a CPU-like instruction processor) and increasing developer workload and costs significantly.
So, a couple of discussion questions so I can get a sense of people's thoughts on DX12:
What are everybody's thoughts on increasing adoption rates by moving game engines to DX12, and on DX12 giving developers more control over what they can do? Personally, I'm completely in favour of it, even though I won't be using or learning DX12 anytime soon (I might get some reading done on Vulkan though).
Should IHVs be blamed for crappy DX12 performance, or should the API be blamed for forcing features onto cards/architectures that can't handle them?
TBDR and "discard accelerator" are some great ways of optimising all sorts of funky rendering techniques via hardware instead of IHVs giving devs a checklist of what they can and cannott do on their hardware, should this be the way of the future considering DX12's thinner drivers will make it more difficult to optimise crappy code or should the current bloated DX11 drivers be the guideline with DX12 too?