wouldn't triple buffering the mouse lag more?
In theory triple buffering should improve the situation, but in practice I've found it can make things worse.
I would imagine you would want to use D3DOverrider to completely eliminate buffering.
If you eliminate buffering you couldn't render anything. Two buffers are needed at a minimum to ensure there's no flicker during animation.
[/quote]
Sorry, I meant make it single buffered. What kind of theory says triple buffering has less input lag? Triple buffering reduces stutter and image lag at the cost of increased input lag.
Time = start
Single buffer:
Frame 1 is displayed.
Frame 2 is rendering.
Frame 3 will not render until frame 2 is displayed. (or it will and discard, depending on vsync)
Double buffer:
Frame 1 is displayed.
Frame 2 is rendering.
Frame 3 will render ASAP.
Frame 4 will not begin rendering until frame 2 is displayed. (or it will and discard, depending on vsync)
Triple buffer:
Frame 1 is displayed.
Frame 2 is rendering.
Frame 3 will render ASAP.
Frame 4 will render ASAP.
Frame 5 will not begin rendering until frame 2 is displayed. (or it will and discard, depending on vsync)
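The three cases above reduce to simple arithmetic. A minimal Python sketch (my own illustration, assuming a 60Hz refresh and a card that renders faster than the refresh rate):

```python
# With vsync on and an N-buffer setup, up to N-1 completed frames can
# sit queued ahead of the frame currently on screen. Each queued frame
# occupies one full 1/60 s refresh slot, so input sampled now appears
# at worst N-1 refreshes later.

REFRESH_HZ = 60  # assumed 60 Hz monitor

def worst_case_queue_lag_ms(num_buffers: int) -> float:
    """Extra input lag (ms) from frames queued ahead of the display,
    assuming the GPU outpaces the refresh rate."""
    queued_frames = num_buffers - 1  # one buffer is always on screen
    return queued_frames * 1000.0 / REFRESH_HZ

for n, name in [(1, "single"), (2, "double"), (3, "triple")]:
    print(f"{name} buffer: up to {worst_case_queue_lag_ms(n):.1f} ms added lag")
# single buffer: up to 0.0 ms added lag
# double buffer: up to 16.7 ms added lag
# triple buffer: up to 33.3 ms added lag
```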
Without vsync, each frame is rendered ASAP, so the faster the video card, the less time passes between frames. With vsync, each frame is timed to be 1/60th of a second apart.
Time meaningfully progresses every 1/60th of a second, when a new frame is sent to the monitor. If your video card is fast enough to render game X at 240fps, then it will render and DISCARD about 3 frames before the next monitor refresh, meaning that regardless of the buffer, the next displayed frame WILL reflect your input. If your video card is vsynced, it would instead have rendered 3 "future" frames that are each 1/60th of a second apart, and EACH is going to be displayed, resulting in X/60ths of a second of input lag.
vsync off (250fps, frames 1/250th of a second apart):
time=0 : Frame 1 is displayed.
time=1/250s: Frame 2 created.
time=1.5/250s: input from user
time=2/250s: Frame 3 created.
time=3/250s: Frame 4 created.
time=4/250s: Frame 5 created.
time=1/60s: Frame 5 begins sending to monitor.
time=5/250s: Frame 6 enters the buffer while frame 5 is still being sent, resulting in tearing: the top half of the screen shows frame 5 and the bottom half shows frame 6.
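To put numbers on the vsync-off case: frames finish every 1/250th of a second and the newest one is swapped in immediately, even mid-scanout (hence tearing). A sketch of the input-to-display lag, using the assumed timings from the timeline above:

```python
# vsync off, 250 fps: frame 1 is shown at t=0 and frame k+1 finishes
# rendering at k * FRAME_TIME. Input arrives at t = 1.5/250 s; the
# first frame completed after that is swapped in the moment it's done.

FRAME_TIME = 1 / 250          # assumed render time per frame
INPUT_TIME = 1.5 * FRAME_TIME # assumed moment of user input

first_k = next(k for k in range(1, 100) if k * FRAME_TIME >= INPUT_TIME)
display_time = first_k * FRAME_TIME      # swapped in immediately
lag_ms = (display_time - INPUT_TIME) * 1000
print(f"frame {first_k + 1} shows the input after {lag_ms:.1f} ms")
# frame 3 shows the input after 2.0 ms
```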
vsync on (250fps CAPABLE card working at 60fps) :
time=0 : Frame 1 is displayed.
time=1/250s: Frame 2 created
time=1.5/250s: input from user
time=2/250s: Frame 3 created.
time=3/250s: Frame 4 created.
time=1/60s: Frame 2 (which is missing the latest input) begins sending to the monitor; when it finishes, frame 5 will begin rendering.
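The same arithmetic for the vsync-on case: the card pre-renders frames ahead, but each queued frame is held for a full 1/60th-of-a-second refresh slot. The input lands in frame 3, which is not scanned out until the second refresh (again a sketch under the assumed timings above):

```python
# vsync on, 250-fps-capable card: frame n (n >= 2) finishes rendering
# at (n-1) * RENDER_TIME, but frame n is only displayed in refresh
# slot (n-1) * REFRESH (frame 1 is on screen at t=0).

RENDER_TIME = 1 / 250          # assumed render time per frame
REFRESH = 1 / 60               # assumed monitor refresh interval
INPUT_TIME = 1.5 * RENDER_TIME # assumed moment of user input

first_n = next(n for n in range(2, 100)
               if (n - 1) * RENDER_TIME >= INPUT_TIME)
display_time = (first_n - 1) * REFRESH   # held until its refresh slot
lag_ms = (display_time - INPUT_TIME) * 1000
print(f"frame {first_n} shows the input after {lag_ms:.1f} ms")
# frame 3 shows the input after 27.3 ms
```

So the same input that appeared after ~2 ms with vsync off takes ~27 ms to reach the screen once frames are queued behind vsync.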
Basically, in a very high FPS situation, input lag is introduced by triple and double buffering (2/60ths and 1/60th of a second respectively), but tearing is eliminated. At low FPS the input lag is lessened because it is less likely that frames are rendered ahead (the video card just isn't fast enough), though it can still occur during FPS spikes. Tearing, however, is completely gone.
If you think vsync reduces input lag, you are confusing input lag with lag in general. Or your CPU is choking, and capping the framerate frees it up for quicker calculations.
PS. "Input lag" can mean one of two things: you gave input but it did not show on the next image (it took X milliseconds before the gun animation started), or you gave a command and it did not REGISTER with the computer until some time later (I clicked first but died).
Oh wow, I just understood the stutter in quad GPU AFR rendering.. it all makes perfect sense now...
Anyway, if you are suffering from cases where you shot first and still died, then you want to unburden your CPU as much as possible, in which case vsync + triple buffering means you are doing the LEAST amount of work per image displayed, resulting in a snappier system that will detect your click more quickly.
When I said "image lag" earlier I meant stutter between pictures caused by low FPS. Example: Crysis at 5fps lags and looks like a slideshow.