G-SYNC is still going to end up using double buffering, because it needs to render into one buffer while it scans out another.
Only on the GPU side (as usual).
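To illustrate the GPU side, here's a minimal double-buffering sketch (the names and structure are my own, purely illustrative, not NVIDIA's or Direct3D's actual internals):

```
// Minimal double-buffering sketch (hypothetical names). The GPU renders
// into the back buffer while the display controller scans out the front
// buffer; Present() swaps the two.
#include <utility>

struct Framebuffer { /* pixel storage lives here */ };

struct SwapChain {
    Framebuffer* front;  // currently being scanned out to the monitor
    Framebuffer* back;   // currently being rendered into by the GPU

    void present() {
        // With G-SYNC, this swap also triggers an immediate scanout
        // instead of waiting for the next fixed refresh interval.
        std::swap(front, back);
    }
};
```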
On the monitor side, the 768MB of memory likely isn't for framebuffer delay, but for the color processing they mentioned -- which I interpret to mean LUTs and historical (past) framebuffers, used to calculate overdrive-compensated/rate-compensated LCD refreshes.
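Some back-of-the-envelope arithmetic (my own speculation, not nVidia's figures) on what 768MB could plausibly hold:

```
#include <cstdio>

int main() {
    // Assuming a 1080p panel at 3 bytes/pixel (speculative figures, mine).
    const double frame_mb  = 1920.0 * 1080.0 * 3.0 / (1024.0 * 1024.0); // ~5.9 MB
    const double module_mb = 768.0;
    std::printf("One 1080p framebuffer: %.1f MB\n", frame_mb);
    std::printf("Frames that fit in 768MB: %.0f\n", module_mb / frame_mb);
    // ~129 frames' worth: plenty of room for several past framebuffers
    // plus large per-refresh-rate overdrive LUTs.
    return 0;
}
```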
During normal situations (frame rendering time intervals between 1/144sec and 1/30sec), G-SYNC works like this:
1. GPU finishes generating frame into back buffer.
2. Direct3D Present() call triggers immediate delivery to the monitor.
3. Frame delivery occurs at full dotclock (144Hz), regardless of current frame rate / refresh rate.
4. While GPU is delivering the frame to the monitor, the monitor is painting the refresh simultaneously (in real time, as the data comes over the cable).
5. The refresh completes in 1/144sec.
6. The whole refresh is immediately visible to human eyes about 1/144sec after the Direct3D Present() call. (plus ~2ms for pixel transition time)
Below 30fps, a repeat refresh occurs.
Above 144fps, frame delivery is delayed. (Technically, proper triple buffering is also possible at this point, to reduce lag.) See the sketch below.
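Here's that scheduling logic in C++ (my own reading of the behavior above; the constants and function names are hypothetical, not nVidia's):

```
// Hypothetical sketch of G-SYNC refresh scheduling as described above;
// not nVidia's actual logic. Times are in seconds since the last refresh.
const double MIN_INTERVAL = 1.0 / 144.0;  // panel's fastest refresh
const double MAX_INTERVAL = 1.0 / 30.0;   // panel's longest hold time

// Called when the game issues Present(); returns how long to wait
// before starting the 1/144sec scanout of the new frame.
double delayBeforeScanout(double timeSinceLastRefresh) {
    if (timeSinceLastRefresh < MIN_INTERVAL) {
        // Faster than 144fps: frame delivery is delayed until the
        // panel can accept another refresh.
        return MIN_INTERVAL - timeSinceLastRefresh;
    }
    return 0.0;  // normal case: scanout begins immediately
}

// Polled on the monitor side when no new frame has arrived.
bool needsRepeatRefresh(double timeSinceLastRefresh) {
    // Below 30fps: the panel repaints the previous frame so the
    // LCD pixel charge doesn't decay visibly.
    return timeSinceLastRefresh >= MAX_INTERVAL;
}
```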
Current 120Hz/144Hz ASUS/BENQ monitors already do realtime painting of the refresh directly off the cable. See my
high-speed video proof (non-LightBoost portion). The display is refreshed while the data is coming off the cable. Also, I measured an input lag of about 2.8ms for the top edge of the screen in 120Hz non-LightBoost mode on my VG278H, using an oscilloscope+photodiode as well as my prototype Blur Busters Input Lag Tester (which agrees with the oscilloscope+photodiode); that lag is probably due to the ASUS VG278H's 2ms of pixel transition. My XL2411T in non-LightBoost mode is similar (~3ms for top edge, ~7ms for center, ~11ms for bottom edge).
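Those measurements line up with simple scanout arithmetic (my own calculation, ignoring blanking intervals):

```
#include <cstdio>

int main() {
    // At 120Hz, the top-to-bottom scanout takes roughly 1/120sec.
    const double scanout_ms    = 1000.0 / 120.0;  // ~8.3 ms
    const double transition_ms = 2.0;             // VG278H pixel transition

    std::printf("top:    ~%.1f ms\n", 0.0 * scanout_ms + transition_ms); // ~2 ms
    std::printf("center: ~%.1f ms\n", 0.5 * scanout_ms + transition_ms); // ~6 ms
    std::printf("bottom: ~%.1f ms\n", 1.0 * scanout_ms + transition_ms); // ~10 ms
    // Close to the ~3ms/~7ms/~11ms I measured on the XL2411T.
    return 0;
}
```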
G-SYNC is going to continue to do exactly the same thing (realtime paint of the panel, off the cable), but eliminate the fixed refresh intervals. Meaning, you get any refresh rate, but with a scanout time of 1/144sec (cable transmission and the realtime panel refresh cycle both complete in 1/144sec). In fact, it can lower the latency of a fixed refresh rate, due to the accelerated frame delivery and top-to-bottom panel refresh cycle (a 60Hz refresh scanned out in 1/144sec). This will even
reduce input lag for 60Hz emulators, as the calculation below shows.
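A rough calculation (mine) of the emulator benefit:

```
#include <cstdio>

int main() {
    // A 60Hz frame scanned out at 60Hz speed vs. at 144Hz dotclock speed.
    const double slow_ms = 1000.0 / 60.0;   // ~16.7 ms top-to-bottom
    const double fast_ms = 1000.0 / 144.0;  // ~6.9 ms top-to-bottom
    // Screen center becomes visible once half the scanout has completed,
    // so the accelerated scanout saves roughly:
    std::printf("center-of-screen lag saved: ~%.1f ms\n",
                0.5 * (slow_ms - fast_ms));  // ~4.9 ms
    return 0;
}
```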
The powerful FPGA, the 768MB of memory, and everything else, are needed for realtime unbuffered processing (or perhaps a single scanline buffer, ~10 microseconds) to paint the panel in realtime, without color flicker/modulations across variable refresh rates, including complex variable-refresh-compensated overdrive algorithms. Remember that 60Hz vs 120Hz vs 144Hz produces different colors on today's monitors (poorer colors at 144Hz). So keeping all variable refresh rates at exactly the same color (less than 0.25% error, to avoid color flicker during rapid 30fps->144fps->30fps->144fps modulations), while keeping fast response, avoiding flicker caused by refresh rate transitions, avoiding interaction problems with FRC and/or inversion, and running variable-refresh overdrive algorithms, all seems to be a major mathematical engineering feat. And doing all of this without degrading averaged color quality too much. Some good Ph.D.s probably went into G-SYNC. nVidia said "color processing", and that's an accurate, albeit simplified, statement.
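For the curious, here's a speculative sketch of what variable-refresh-compensated overdrive might look like (entirely my own guess at the technique; the names, table sizes, and structure are hypothetical):

```
#include <cstdint>

// Speculative sketch, not nVidia's algorithm. Classic overdrive looks up
// an overshoot drive value from (previous pixel, target pixel). With
// variable refresh, the lookup must also depend on how long the previous
// refresh was held, or colors shift between refresh rates.

// One 256x256 overdrive table, calibrated for one specific frame interval.
struct OverdriveLUT {
    double  frameIntervalSec;   // interval this table was tuned for
    uint8_t table[256][256];    // [previous][target] -> drive value
};

// Blend between the two LUTs bracketing the actual frame interval
// (lo.frameIntervalSec < actual < hi.frameIntervalSec), so a rapid
// 30fps->144fps->30fps swing doesn't cause visible color flicker.
uint8_t overdriveValue(const OverdriveLUT& lo, const OverdriveLUT& hi,
                       double actualIntervalSec,
                       uint8_t previous, uint8_t target) {
    double t = (actualIntervalSec - lo.frameIntervalSec) /
               (hi.frameIntervalSec - lo.frameIntervalSec);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    double blended = (1.0 - t) * lo.table[previous][target] +
                     t         * hi.table[previous][target];
    return static_cast<uint8_t>(blended + 0.5);
}
```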