"So I am right that compute and async crippled Nvidia's performance; however, those developers knew that."

You're right in saying that yes, Nvidia has horrendous async and compute support and needs to fix it.
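For anyone following along, "async compute" here just means submitting compute work on a separate queue so it can overlap with graphics work. A minimal D3D12 sketch of that queue setup (my illustration, not code from anyone in this thread; error handling omitted):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// "Async compute" in D3D12 terms: compute work submitted on its own
// COMPUTE-type queue so it can overlap with graphics work on the
// DIRECT queue.
void createQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfx,
                  ComPtr<ID3D12CommandQueue>& compute)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfx));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compute));
}

The API only promises that you can submit to both queues independently; whether they actually execute concurrently is up to the GPU and driver, which is the whole dispute here.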
I remember you said on Overclock.net that you worked for AMD until recently and then left.
"Do you have a link?"

The Neogaf QB performance forum. There are so many posts and complaints there. Even an MS rep and Phil responded that they are aware of the issues and are monitoring them. They said in that forum that the VSync fix will not arrive until mid-May.
Yes, there are tons of posts on Neogaf complaining, because Neogaf mostly uses Nvidia cards and Nvidia cards suck at DX12.
So yes, you're right: Nvidia has issues in DX12 games like Quantum Break, which leads to people complaining. So yes, Nvidia should fix its async performance.
Why does Nvidia suck so much at compute, desperado?
From the QB rep:
" While the 390 is the faster card running that game - there is a problem there with VSync where the 970 is just missing the multiplier and the 390 is just hitting it - resulting in a massive but artificial performance differential"
Of course: CUDA.
"CUDA is a programming language. Nobody is denying NVIDIA's inherent software superiority. The question being asked to you is: why does NVIDIA's hardware suck when it comes to parallel computing tasks?"

That I do not know.
"What I mean to say is that you are seeing the effect of a low-level API. There are too many complicated systems and different setups, and that is causing problems. On some benchmark sites you will see the GTX 980 Ti and Fury X on par with each other, and on other sites you will see one ahead of the other by a large margin; the same goes for the R9 390 and GTX 970. People think that DX12 is very easy to code. It is not, and it will take 10x more effort and 10x more time to port a stable, good DX12 game compared to DX11."

So what have the console guys been using all this time? I thought it was low-level APIs.
They do one closed-box system where everyone is using the same specs.
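On the "10x more effort" point above: a lot of that effort is bookkeeping the DX11 driver used to do silently. As one illustrative example (a sketch of the standard pattern, not anyone's shipping code), here is the explicit fence-and-event handshake D3D12 requires just to know the GPU has finished:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// The explicit CPU/GPU handshake D3D12 demands: signal a fence on the
// queue, then block on an OS event until the GPU reaches it. D3D11's
// driver did this kind of tracking behind your back.
void waitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    queue->Signal(fence.Get(), 1);            // GPU writes 1 when it gets here
    if (fence->GetCompletedValue() < 1) {     // GPU not there yet: wait
        fence->SetEventOnCompletion(1, done);
        WaitForSingleObject(done, INFINITE);
    }
    CloseHandle(done);
}

A console is one fixed hardware spec, so all of this hand-tuning has exactly one target; on PC it has to hold up across every vendor and driver, which is where the benchmark scatter comes from.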
"So tell me, Olivon: why is the quote you shared with us showing up as attributed to some random forum member at Overclockers?
https://forums.overclockers.co.uk/showpost.php?p=29369925&postcount=97
Could that be why you're unwilling to share the link with us?"

I copied it from Neogaf. I do not know about the Overclockers post.
So every benchmark for UWP is fake?
That I do not know.
NVIDIA GPUs are just horrible at DX12. They lose performance relative to DX11 in every single title, except in a handful of cases where a weak CPU is used.
"As we can see, and contrary to other titles, the benefits of DX12 are easily noticeable in HITMAN. In DX12, our GTX 980 Ti was used to its fullest during the built-in benchmark, and we got an average framerate of 88 fps. For comparison purposes, the benchmark ran at 78 fps in DX11."
Then could you please link us?