Because the game makes heavy use of compute via async, people got to blame the company, but they didn't realise that the volumetrics were compute going through async, thus making those with a mid-class CPU and a top Nvidia card get choked to death, while those with an i7 4xx+ are fine with it.
Holy gaben, what a ! And all because Nvidia can't quite keep up? Blame it on Nvidia, fanboys.
If the problem was just Nvidia performance, OK, but the port is pretty bad, with serious frame pacing issues and horrible performance even on AMD hardware compared to the Xbox One version.
quick port + UWA + DX12 = fail
But as long as Nvidia is even worse, it's all good for some, I guess.
There is no frame pacing issue. That's the thing. We cannot use the traditional tools to decipher frame data under UWP games and DX12.
The game is absolutely stable running on AMD hardware. It's a mess running on NVIDIA hardware. That's what all the videos show.
Did you check the Digital Foundry video? They don't use traditional tools; they use a capture card and analyze the video, the same way they do for consoles. The game is a mess, with stutters and forced double-buffered vsync, and the best it can do is 50 FPS even with a 980 Ti at 720p low.
yes great port :thumbsdown:
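For what it's worth, here's roughly what that capture-card method boils down to: a minimal sketch, not DF's actual tooling, assuming a fixed 60 Hz lossless capture and OpenCV, with the file name and the duplicate-frame threshold made up for illustration.

```cpp
// Derive per-frame times from a fixed-rate video capture by counting how long
// each unique image is held. Assumptions: 60 Hz capture, OpenCV available.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::VideoCapture cap("quantum_break_capture_60hz.mp4"); // hypothetical file
    if (!cap.isOpened()) { std::printf("could not open capture\n"); return 1; }

    const double capture_ms = 1000.0 / 60.0;  // one captured frame at 60 Hz
    cv::Mat prev, cur, diff;
    std::vector<double> frame_ms;             // effective game frame times
    int held = 1;                             // capture frames the image was repeated

    while (cap.read(cur)) {
        if (!prev.empty()) {
            cv::absdiff(cur, prev, diff);
            cv::Scalar m = cv::mean(diff);
            double avg = (m[0] + m[1] + m[2]) / 3.0;
            if (avg < 0.5) {                  // near-identical: game repeated a frame
                ++held;
            } else {                          // a new game frame was presented
                frame_ms.push_back(held * capture_ms);
                held = 1;
            }
        }
        cur.copyTo(prev);
    }

    // Values well above 16.7 ms, or an uneven mix of 16.7/33.3 ms, show up as stutter.
    for (double ms : frame_ms) std::printf("%.1f ms\n", ms);
    return 0;
}
```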
You do understand it makes heavy async/compute usage, right?
The same thing you will see with Deus Ex...
Not on all hardware... Also, using a capture card as you say is moot, given that the best one is from Elgato and they have said they have tons of issues with DX12 (apparently they have a problem on mcfi that they are waiting for MS to fix "someday").
What does that have to do with a crap port that has horrible frame pacing and performance issues on all hardware? Not to mention the stupid use of mandatory double-buffered vsync!?
So what have the console guys been using all this time? I thought it was low-level APIs.
Sent from my SM-G930T using Tapatalk
Did you check the Digital Foundry video? They don't use traditional tools; they use a capture card and analyze the video, the same way they do for consoles. The game is a mess, with stutters and forced double-buffered vsync, and the best it can do is 50 FPS even with a 980 Ti at 720p low.
yes great port :thumbsdown:
As good as the Batman game last year, I guess.
Because they make GPUs for PCs that are actually capable of running games, and have no need for the GPU to run the whole game.
Look at that: even the i3-2100 hits 56 FPS average, and that's with SLI, which is degrading performance.
And not only that, but the ancient dual core only runs at 60% usage.
For anyone that complained about AI and degrading gameplay until now, the party is just starting: new games will be graphics pr0n with a minimum of gameplay.
You do understand it makes heavy async/compute usage, right?
The same thing you will see with Deus Ex...
Nvidia cards have comparable FP32 compute performance... it isn't accurate saying they have less compute... async compute is where AMD has an advantage.
Get one of those, play a game on it and tell us how that goes...
How much is it then? 30%? 120%?
Nvidia has less compute, period.
They have more pixel pushing, tessellation and other stuff that helps a lot in other games, but less compute. I'm not saying that one is better than the other, I'm just saying what I see here.
Games that rely heavily on compute will run better on GPUs with more compute; it's not about lack of async or bad drivers or any of the other things everybody is saying.
And it cannot be fixed on the current gen, you can't fix something into existence; next gen will have a lot more compute and it will do a lot better in these games.
Can you show me a game that makes use of light async/compute in which Nvidia shows gains with async on vs off?
Sent from my C6833 using Tapatalk
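To put a rough number on the "how much" question: peak FP32 throughput is just shader count × clock × 2 FLOPs per cycle (FMA). A back-of-the-envelope sketch, using approximate clocks (real boost behaviour varies, so treat it as ballpark only):

```cpp
// Rough theoretical FP32 throughput: shaders * clock (GHz) * 2 FLOPs (FMA).
// Clocks are approximate boost/reference values, so the numbers are ballpark.
#include <cstdio>

int main() {
    double gtx980ti_tflops = 2816 * 1.075 * 2.0 / 1000.0; // ~6.1 TFLOPS
    double furyx_tflops    = 4096 * 1.050 * 2.0 / 1000.0; // ~8.6 TFLOPS
    std::printf("980 Ti ~%.1f TFLOPS, Fury X ~%.1f TFLOPS (~%.0f%% more on paper)\n",
                gtx980ti_tflops, furyx_tflops,
                (furyx_tflops / gtx980ti_tflops - 1.0) * 100.0);
    return 0;
}
```

On paper that's closer to 40% than 120%, though sustained clocks and how well each architecture keeps its shaders fed (which is where async compute comes in) move the real-world gap around.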
I didn't see any stutters on the R9 390 side. Looked perfectly smooth to me. Only the cut scenes looked stuttery.
The cutscenes are funny: they are locked at 30 FPS, and even the in-game 30 FPS lock is broken.
UWP will be great in time. Sure, there are some growing pains, but PC games need stricter control. I'm sick of games leaving their crap everywhere when I try to uninstall, needing admin rights (which always makes me uneasy), and not having universal controls.
They might have used it before it was ready, but if UWP forces all PC games to have an unlocked framerate, proper alt-tab support, and to just generally be more stable, then that is great, and that is what MS is trying to do. They just seem to have started using it a little too early.
The cut scenes are also internet sourced.
http://arstechnica.com/gaming/2016/...et-connection-for-streaming-cut-scenes-on-pc/
After watching that video proof, I have to recant my OP about how broken the game is on all hardware. It's clearly not that broken on AMD.
There seems to be an issue with PresentMon here as well, because it suggests a lot of stutter, but it's not very noticeable on the 390 in that 60 fps video. On pcgameshardware.de's video with their 980, the stutter was obvious.
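On the PresentMon point, the raw log is easy enough to sanity-check against the video: as far as I know its CSV output includes an MsBetweenPresents column (the frame-to-frame present interval), so you can pull a simple stutter metric out of it. A minimal sketch, with a made-up file name:

```cpp
// Read a PresentMon CSV and report a crude stutter metric from MsBetweenPresents.
// Column name follows PresentMon's CSV output as I understand it; file name is hypothetical.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("presentmon_log.csv");
    std::string line;
    if (!std::getline(in, line)) return 1;

    // Locate the MsBetweenPresents column in the header row.
    int col = -1, i = 0;
    std::stringstream header(line);
    for (std::string field; std::getline(header, field, ','); ++i)
        if (field == "MsBetweenPresents") col = i;
    if (col < 0) { std::printf("column not found\n"); return 1; }

    std::vector<double> ms;
    while (std::getline(in, line)) {
        std::stringstream row(line);
        std::string field;
        for (int j = 0; std::getline(row, field, ','); ++j)
            if (j == col) ms.push_back(std::stod(field));
    }
    if (ms.empty()) return 1;

    // Crude stutter metric: frames that took more than 1.5x the median frame time.
    std::vector<double> sorted = ms;
    std::sort(sorted.begin(), sorted.end());
    double median = sorted[sorted.size() / 2];
    int spikes = 0;
    for (double v : ms) if (v > 1.5 * median) ++spikes;
    std::printf("frames: %zu, median: %.2f ms, spikes >1.5x median: %d\n",
                ms.size(), median, spikes);
    return 0;
}
```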
So basically it's Quantum Broken for NV.
As for whether it uses Async Compute, I am almost certain it does. All that fancy tech on the Xbone's weak hardware would be impossible without it.
DF mentions the Volumetric Lighting setting tanks NV performance; that is a clear indicator, as volumetric lighting is a compute effect. TR and other console games run volumetric lighting, SSAO (& PureHair) in AC mode. There's got to be AC in play, else the performance wouldn't tank so heavily on NV GPUs.
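For anyone wondering what "running an effect in AC mode" means at the API level: it's basically just submitting that work to a second, compute-only queue so the GPU can, in theory, overlap it with graphics work. Whether the overlap actually happens is down to the hardware scheduler, which is where GCN and Maxwell behave differently. A bare-bones D3D12 sketch of the queue setup (not Remedy's code, just the general idea):

```cpp
// Minimal sketch of "async compute" under D3D12: a COMPUTE-type command queue
// alongside the DIRECT (graphics) queue. Windows + d3d12.lib required.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + everything else
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue

    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A game would record its volumetric-lighting / SSAO dispatches on command
    // lists submitted to computeQueue, fenced against the work on gfxQueue.
    return 0;
}
```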
Pascal is an improvement on Maxwell though, so we expect that.
The question is, will Pascal's driver improvements trickle down like with GCN, or will Maxwell be left behind, further crippling its performance in DX12 games?
Could this be a generation where Nvidia has more CPU/driver overhead compared to AMD?