Even Project CARS didn't market NVIDIA that heavily.
Now I think some people's doubts about Oxide should be cleared up: their main priority is clearly AMD.
Maybe what the Oxide guy said has something to do with it.
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995
You have to wonder how they suddenly found this performance just before the benchmark launched. AMD should probably take a closer look at the game to see if there's any more cheating there.
There is a minimum motion-to-photon lag of 36 ms with Maxwell cards. This is well above the 20 ms limit advised by John Carmack for a non-puking VR experience. AMD seems to be achieving around 11 ms of motion-to-photon lag.
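To put those numbers in context: motion-to-photon latency is roughly the sum of the pipeline stages between a head movement and the photons changing on the display. A toy budget sketch (all stage timings below are hypothetical placeholders chosen to match the totals quoted above, not measurements of any real GPU):

```python
# Illustrative motion-to-photon latency budget.
# All stage timings are made-up placeholders, NOT real measurements.
def motion_to_photon_ms(stages):
    """Total latency is simply the sum of the pipeline stages (ms)."""
    return sum(stages.values())

maxwell_like = {           # hypothetical breakdown adding up to 36 ms
    "sensor_sample": 2.0,
    "driver_queue": 11.0,  # extra buffering in the driver path
    "render": 11.0,        # roughly one 90 Hz frame
    "scanout": 12.0,
}
gcn_like = {               # hypothetical breakdown adding up to 11 ms
    "sensor_sample": 2.0,
    "driver_queue": 0.0,   # low-latency path skips the queue
    "render": 5.0,
    "scanout": 4.0,
}

print(motion_to_photon_ms(maxwell_like))  # 36.0 -> above the 20 ms limit
print(motion_to_photon_ms(gcn_like))      # 11.0 -> comfortably under it
```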
I think gamers are learning an important lesson: there's no such thing as "full support" for DX12 on the market today.
There have been many attempts to distract people from this truth through campaigns that deliberately conflate feature levels, individual untiered features and the definition of "support." This has been confusing, and caused so much unnecessary heartache and rumor-mongering.
Here is the unvarnished truth: Every graphics architecture has unique features, and no one architecture has them all. Some of those unique features are more powerful than others.
Yes, we're extremely pleased that people are finally beginning to see the game of chess we've been playing with the interrelationship of GCN, Mantle, DX12, Vulkan and LiquidVR.
I sense some very unhappy Maxwell owners. And some very, very happy Pascal owners!
Do you want to link or mention who said that?
Performance Optimizations
Batman: Arkham Knight - Performance and quality/stability updates
Ashes of the Singularity - Performance optimizations for DirectX® 12
http://support.amd.com/en-us/kb-articles/Pages/latest-catalyst-windows-beta.aspx
New beta drivers from AMD. Anyone run the benchmark on these yet?
Apparently this was mentioned on the Nvidia subreddit:
Related perhaps to DX12/VR?
AsyncCompute should be used with caution, as it can cause more unpredictable performance and requires more coding effort for synchronization.
Looks like Epic is warning developers about using AC haphazardly, as there are many pitfalls. Oxide should take note here.
If that Oxide dev didn't even know that Maxwell supported asynchronous compute, then what else doesn't he know?
This just goes to show that low-level APIs are nothing to fool around with, and should only be used by very experienced graphics programmers.
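The "more coding effort for synchronization" part is the crux: once compute runs on its own queue, the programmer, not the driver, must fence it against the graphics work that consumes its results. A minimal sketch of that fence pattern, using Python threads as stand-ins for GPU queues (all names here are illustrative, not any real graphics API):

```python
import threading

# Stand-ins for two hardware queues. The Event plays the role of a GPU
# fence the graphics queue must wait on before reading compute output.
fence = threading.Event()
results = {}

def compute_queue():
    results["shadow_mask"] = "computed"  # e.g. an async compute job
    fence.set()                          # signal the fence when done

def graphics_queue():
    fence.wait()                         # omitting this wait = a race and a corrupted frame
    results["frame"] = f"composited with {results['shadow_mask']}"

t_gfx = threading.Thread(target=graphics_queue)
t_cmp = threading.Thread(target=compute_queue)
t_gfx.start(); t_cmp.start()
t_cmp.join(); t_gfx.join()
print(results["frame"])  # composited with computed
```

The point of the sketch: with a single queue the driver serializes everything for you; with multiple queues every producer/consumer pair needs an explicit fence like this, which is exactly the extra effort Epic is warning about.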
Jawed said: D3D12 should be used with caution as it requires more coding effort.
If that Oxide dev didn't even know that Maxwell supported asynchronous compute, then what else doesn't he know?

How long did that support last is the question! Just until the first beta benchmark. NV dropped the support immediately after the results were in, demanding that the feature be disabled as it is no longer supported.
Do you realize Dan Baker was a chief engineer and designer of DX9/10 at MS for 6 years?
YES, they should use it with caution, no doubt about it. People less talented at programming will use off-the-shelf engines like Unity, UE4, CryEngine, Frostbite, etc.
PS: Oxide didn't know about the state of async compute on Maxwell because the driver was exposing it to them, "faking" its functionality. When they tried it, it was a disaster, so they disabled it at NV's request.
Very unpredictable boost of up to 30%. Mind-blowing. It's like a semi-generational leap in performance thanks to a single feature supported by all AMD GCN GPUs and APUs.
Epic's long experience at creating a pitfall of an engine is certainly something to keep in mind. They are designing their next-gen engine not to use more than 4 CPU threads haphazardly. Something like AC is well beyond their scope, for sure, as creating graphical effects is apparently too hard for them, so they used closed-source libraries from NV...
That particular dev statement only shows that NVIDIA doesn't have async compute in the strict sense of the word: they tried to implement AC on NV hardware and it went horribly. You can state anything on paper, but if in practice the feature is FUBAR, you can safely say it is not what was claimed. This is the case with AC on Maxwell. It kind of is there, but you'd better not use it because the results are underwhelming.
That 30% number was quoted on an unspecified console, though, and isn't the PS4 GPU configuration far heavier on dedicated async compute engines than comparable PC GPUs?
Not to discount the increase, but given the hardware configuration it's perhaps not surprising (and not directly comparable to PC either).
You hold beyond3d in high esteem right? Well you should know that the application that MDolenc created tests explicitly for Asynchronous Compute capability.
And guess what, it shows that Maxwell 2 does use Asynchronous Compute. Both Maxwell and GCN have asynchronous compute capability, but GCN obviously has a stronger implementation.
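For what it's worth, the basic methodology of a test like MDolenc's is simple: time graphics alone, compute alone, and both submitted together. If the combined time is close to the larger of the two rather than their sum, the work genuinely overlapped. A toy sketch of that inference, with made-up timings rather than real GPU measurements:

```python
def overlap_verdict(graphics_ms, compute_ms, combined_ms, tolerance=0.15):
    """Infer async compute capability from three timings (ms).

    combined ~= max(graphics, compute)  -> workloads ran concurrently
    combined ~= graphics + compute      -> workloads were serialized
    """
    serial = graphics_ms + compute_ms
    concurrent = max(graphics_ms, compute_ms)
    if combined_ms <= concurrent * (1 + tolerance):
        return "concurrent"
    if combined_ms >= serial * (1 - tolerance):
        return "serialized"
    return "partial overlap"

# Made-up example timings, purely to show the two verdicts:
print(overlap_verdict(10.0, 8.0, 10.5))  # concurrent
print(overlap_verdict(10.0, 8.0, 18.0))  # serialized
```

This is also why the graphs in that thread are read the way they are: if adding compute work stretches the total time by the full compute cost, the GPU is serializing behind the graphics queue.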
On the first graph (Maxwell) you can see that it does run multiple compute tasks at once (31), but not simultaneously with graphics, as that adds additional time.
Do you realize Dan Baker was a chief engineer and designer of DX9/10 at MS for 6 years?

And this gives him access to secret NVIDIA insider info how?
The overall time in ms is still lower for NVIDIA than it is for AMD, especially for the workloads at queue depth 31 and under.
Yes, that's correct. The 30% was for consoles, but the dedicated ACEs in the PS4 are the same as in comparable PC GPUs. Does that mean you can achieve a similar performance gain on the PC, though? I'm not certain.
The PS4 uses hUMA, which should theoretically increase the gains from AC compared to the PC, which uses NUMA.