Well those results weren't what I expected.
More so the performance loss on a weaker CPU, considering all the hoopla about how it's supposed to help slower CPUs.
Optimization is probably ongoing, as with The Talos Principle.
Then limit your fps. You're going to have framerates all over the place, no matter what CPU you have, and no matter what API the game is using.
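For what it's worth, a frame limiter is conceptually tiny; a minimal C++ sketch (the 60 fps cap and frame count are just numbers I picked, not anything from the article):

#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::microseconds(16667); // ~60 fps cap (assumed target)
    auto next_frame = clock::now();
    for (int frame = 0; frame < 60; ++frame) {
        // render_frame();  // stand-in for whatever the game does per frame
        next_frame += frame_budget;
        std::this_thread::sleep_until(next_frame); // never start a frame early -> even pacing
    }
}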
Limiting fps or vsync on/off does nothing; G-Sync is flawless in that respect, 0 stuttering, compared to using OpenGL or Vulkan.
Why is it so far behind in The Talos Principle?
Valve really needs to update their engine. It's like 10 years old.
"I've never seen any stuttering in any game with gsync on and can notice it as soon as it's off."
That's just wrong. G-Sync is just a trade-off, far from perfect. The issue is that G-Sync displays the frame when it is finished; however, the frame is not necessarily finished when the game engine intended it to be displayed.
In essence, the game engine predicts when the next frame will be displayed and transforms the geometry accordingly. When the prediction is wrong you see stutter, even with G-Sync.
To avoid wrong predictions you need to enable vsync and make sure each frame is computed within the refresh interval. In that case the game engine will always predict correctly and you will see a perfectly smooth animation, which is not possible with G-Sync.
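Toy model of the prediction problem in C++ (all numbers are made up, purely to illustrate): the engine animates assuming a steady cadence, G-Sync scans the frame out whenever it happens to finish, and the mismatch is the stutter.

#include <algorithm>
#include <cstdio>

int main() {
    const double refresh = 16.7; // ms per refresh at ~60 Hz (assumed panel)
    // assumed render times; the third frame blows past its prediction
    const double frame_ms[] = {16.7, 16.7, 28.0, 16.7};
    for (double ft : frame_ms) {
        double predicted_dt = refresh;            // engine moved the geometry for this step
        double actual_dt = std::max(ft, refresh); // G-Sync displays the frame when it's done
        std::printf("animated %.1f ms ahead, shown %.1f ms later -> %.1f ms of judder\n",
                    predicted_dt, actual_dt, actual_dt - predicted_dt);
    }
}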
"the most interesting thing on this test is the 980 ti result for DirectX11 vs OpenGL..."
Yup, the trend of AMD's cards aging better will continue until Nvidia releases a new architecture.
In replays the system already knows what comes next, so you can channel all your resources into rendering frames; in real-time gaming the card has to wait for input from all players to know what to do next.
At least that's my thinking; if someone with lots of cores could tell us the CPU utilization difference between real play and a replay, we could make sure.
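Roughly what I'm picturing, as a toy C++ sketch (the 30 ms input wait and the function names are invented for illustration):

#include <chrono>
#include <thread>

void simulate_tick() { /* game logic for one tick */ }

// Live match: the next tick can't be simulated until every player's input arrives.
void live_game(int ticks) {
    for (int t = 0; t < ticks; ++t) {
        std::this_thread::sleep_for(std::chrono::milliseconds(30)); // waiting on the network
        simulate_tick();
    }
}

// Replay: all inputs are already recorded, so the CPU can churn through ticks flat out.
void replay(int ticks) {
    for (int t = 0; t < ticks; ++t)
        simulate_tick();
}

int main() {
    live_game(10); // ~300 ms, mostly waiting
    replay(10);    // effectively instant
}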
Yay, works as promised: no driver thread = the other threads get to share the freed-up resources = moar FPS.
https://www.youtube.com/watch?v=-cDAWW8nkGU
I've never seen any stuttering in any game with G-Sync on, and I can notice it as soon as it's off.
This is on an XB270HU monitor with two 980 Tis.
But G-Sync is designed to eliminate tearing, not stuttering. You could still have a quick drop from 100 to 40 fps and notice it.
Why are we on about G-Sync in a Vulkan thread? We should start a new thread if you want to continue the conversation, I would think.
One possible explanation might be that the DX12 implementation of DOTA 2 can scale to more threads than the DX11 implementation. For instance 8 threads instead of 4 threads.
This would only be an advantage for the i7 CPU, since the x4 CPU is limited to running 4 threads at a time either way (only 4 cores and no SMT).
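A trivial C++ sketch of that cap (the 8 engine threads are my assumption of what a DX12 renderer might spawn):

#include <algorithm>
#include <cstdio>
#include <thread>

int main() {
    unsigned hw = std::thread::hardware_concurrency(); // 8 on a 4C/8T i7, 4 on the x4
    unsigned engine_threads = 8;                       // assumed render thread count
    unsigned parallel = std::min(engine_threads, hw ? hw : 1u);
    std::printf("%u render threads spawned, only %u run at the same time\n",
                engine_threads, parallel);
}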
It's not just about the number of threads but also about how much work each thread gets to complete in a given time; even though the x880 and an i5 have the same number of cores, each of the i5's cores is much faster.
Because a replay is basically rendering software, like Cinebench, it is very probable that it can use as many cores as are available.
Anyways, like I said, we need someone with a lot of cores to show us the threads of game vs replay if we are to make any sense out of this.
The total amount of work that needs to be done per frame ought to be the same for both CPUs, and thus if they are running the same number of threads (4 or fewer), then the work per thread per frame should also be the same.
Also, this shouldn't have anything to do with per-core performance either. Yes, an i7 is obviously faster than an x880 per core, but it is faster per core in both DX11 and DX12; the per-core performance shouldn't change based on the API.
Also, this has nothing to do with it being a replay, since it's obviously also a replay with DX11, so no difference there.
Yes, the work that needs to be done is the same, so if one 4-core machine has twice the speed (per core) it will finish it in half the time of the other 4-core machine.
(or do twice the work in the same time)
The per-core performance doesn't change based on the API, but!
DX11 has to use brute-force single-thread CPU power to get its work done; with DX12, and probably Vulkan as well, the same load can be split up into several threads (each one still relies on brute-force single-thread CPU power, but it can use more cores).
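Something like this toy C++ sketch (not actual API code; the workload is a made-up stand-in for per-frame submission work): the same total load either runs on one thread, DX11-style, or gets chopped into chunks for several cores.

#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for per-frame CPU work such as building draw submissions.
void do_work(long items) {
    volatile long sink = 0;
    for (long i = 0; i < items; ++i) sink += i;
}

int main() {
    const long total = 80000000; // same total work per frame either way
    auto t0 = std::chrono::steady_clock::now();
    do_work(total); // "DX11": one thread does everything
    auto t1 = std::chrono::steady_clock::now();

    const unsigned n = 4; // "DX12/Vulkan": split across 4 cores
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(do_work, total / n);
    for (auto& th : pool)
        th.join();
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("one thread: %.0f ms, %u threads: %.0f ms\n",
                ms(t1 - t0).count(), n, ms(t2 - t1).count());
}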
Yes, but DX11 is still able to use only one of the i7's (or any CPU's) cores, while Vulkan can (up to) quadruple the processing power (+ whatever HT contributes).
The only question is whether Vulkan can use all the available processing power even while really gaming, or only when it already has all the data available, like in a replay, so it can render at full speed since it doesn't have to wait for anything.