Indeed, the advantages of Forward+ lie elsewhere.
For example, the ability to render transparent surfaces using the same lighting methods as everything else is a big win.
The problem with deferred rendering (DR) is that it brings a list of cons: a heavier performance hit when handling multiple materials, and, because it discards the geometry data after the G-buffer pass, no straightforward way to apply proper multisample antialiasing (MSAA).
One solution is to run a compute shader that applies the lighting to the forward-rendered image, instead of the usual "render everything once for each light source in the scene!" approach. This way you save a great deal of passes, you save memory by not needing the G-buffer (the geometry data is always present in a forward renderer instead of being discarded), and you get back the proper MSAA that's been part of every traditional GPU design in the last 10 years. Finally, multiple materials can be used without the big performance & memory hit of DR. All it takes is compute time for the new compute shader.
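To make that concrete, here is a minimal sketch of the tile pass at the heart of Forward+, written in CUDA as a stand-in for a DirectCompute/OpenGL compute shader. All names here are illustrative, and I'm assuming the lights have already been projected to screen-space bounding circles so the culling test stays trivial; a real implementation culls against each tile's 3D sub-frustum and min/max depth. Also, a real Forward+ renderer splits the work in two: the compute shader only builds the per-tile light lists, and the ordinary forward pixel shader reads those lists while shading (which is exactly why hardware MSAA keeps working). The sketch folds both phases into one kernel for brevity.

#include <cuda_runtime.h>

#define TILE 16                   // one 16x16-pixel tile per thread block
#define MAX_LIGHTS_PER_TILE 64    // hard cap on the per-tile light list

struct ScreenLight {              // hypothetical: light pre-projected to screen space
    float cx, cy, radius;         // bounding circle, in pixels
    float r, g, b;                // color * intensity
};

__global__ void forwardPlusShade(const ScreenLight* lights, int numLights,
                                 float3* outColor, int width, int height)
{
    __shared__ int tileList[MAX_LIGHTS_PER_TILE];
    __shared__ int tileCount;

    int px  = blockIdx.x * TILE + threadIdx.x;
    int py  = blockIdx.y * TILE + threadIdx.y;
    int tid = threadIdx.y * TILE + threadIdx.x;

    if (tid == 0) tileCount = 0;
    __syncthreads();

    // Phase 1: all 256 threads cooperatively cull the global light list
    // against this tile's screen rectangle (circle-vs-rectangle overlap).
    float minX = blockIdx.x * TILE, maxX = minX + TILE;
    float minY = blockIdx.y * TILE, maxY = minY + TILE;
    for (int i = tid; i < numLights; i += TILE * TILE) {
        ScreenLight L = lights[i];
        float nx = fminf(fmaxf(L.cx, minX), maxX);   // closest point on the rect
        float ny = fminf(fmaxf(L.cy, minY), maxY);
        float dx = L.cx - nx, dy = L.cy - ny;
        if (dx * dx + dy * dy <= L.radius * L.radius) {
            int slot = atomicAdd(&tileCount, 1);
            if (slot < MAX_LIGHTS_PER_TILE) tileList[slot] = i;
        }
    }
    __syncthreads();

    // Phase 2: each thread shades its own pixel using only the culled list.
    // Real shading (surface normal, BRDF, material) is reduced to a plain
    // distance falloff here for brevity.
    if (px >= width || py >= height) return;
    float3 c = make_float3(0.f, 0.f, 0.f);
    int count = min(tileCount, MAX_LIGHTS_PER_TILE);
    for (int k = 0; k < count; ++k) {
        ScreenLight L = lights[tileList[k]];
        float dx = L.cx - px, dy = L.cy - py;
        float falloff = fmaxf(0.f, 1.f - (dx * dx + dy * dy) / (L.radius * L.radius));
        c.x += L.r * falloff;  c.y += L.g * falloff;  c.z += L.b * falloff;
    }
    outColor[py * width + px] = c;
}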
It's pretty obvious why NV isn't promoting Forward+ lighting engines to game developers: their existing GK104 GPU generation tanks whenever compute shaders have real work to do. Meanwhile, because traditional MSAA doesn't work properly with deferred lighting engines, forcing it results in a disproportionately large performance hit, especially on AMD cards. The end result is that both NV and AMD GPUs take a huge performance hit with MSAA, though NV tends to do a little better. The problem is that we gamers suffer either way, because even with NV there is still a >30% performance hit most of the time.
A Forward+ lighting model plus compute shaders lets you bring back the traditional MSAA approach while minimizing the performance hit at the same time.
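For what it's worth, the host-side launch for a sketch like the one above is nothing exotic, just the screen cut into 16x16 tiles (names again illustrative):

int tilesX = (width  + TILE - 1) / TILE;   // round up so the grid covers the screen
int tilesY = (height + TILE - 1) / TILE;
forwardPlusShade<<<dim3(tilesX, tilesY), dim3(TILE, TILE)>>>(
    d_lights, numLights, d_outColor, width, height);

Note that nothing in this culling/shading pass cares about the MSAA sample count; in a real Forward+ renderer the multisampling lives entirely in the ordinary forward geometry pass, where the hardware resolve has always worked.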
http://forums.anandtech.com/showpost.php?p=33832466&postcount=5
I am personally of the view that software developers should adopt superior programming methods that maximize image quality and performance when such alternatives are available, even if it means GPU makers have to rethink their hardware designs (e.g., focus more on compute). Recently, the intermediate solution has been the introduction of FXAA/MLAA/TXAA filters, which, especially in TXAA's case, blur details in games. These are not great solutions because they are trying to patch what is a fundamentally flawed approach to game engine design in the first place. IMO, instead of trying to come up with hybrid AA filters, developers should revamp how the engines are coded in the first place so that traditional MSAA can be properly applied to the entire scene. When a 2013 game suffers a 33-52% performance hit from forcing just 4xMSAA, it is a problem.
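To illustrate why these filters blur, here is a deliberately crude sketch of the core mechanism behind a post-process AA filter; this is not NVIDIA's actual FXAA code, just the basic idea. The filter only ever sees the final image, so any high-contrast pixel, whether a real geometric edge or fine texture detail, gets blended with its neighbors. MSAA, by contrast, resolves extra geometry samples and leaves texture detail alone.

#include <cuda_runtime.h>

// Perceptual brightness of a pixel (standard Rec. 601 luma weights).
__device__ float luma(float3 c) { return 0.299f * c.x + 0.587f * c.y + 0.114f * c.z; }

__global__ void naivePostAA(const float3* src, float3* dst, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
        dst[y * width + x] = src[y * width + x];   // pass border pixels through
        return;
    }

    float3 c = src[y * width + x];
    float3 n = src[(y - 1) * width + x];
    float3 s = src[(y + 1) * width + x];
    float3 w = src[y * width + x - 1];
    float3 e = src[y * width + x + 1];

    // Estimate local contrast from the luma of the 4-neighborhood.
    float lo = fminf(luma(c), fminf(fminf(luma(n), luma(s)), fminf(luma(w), luma(e))));
    float hi = fmaxf(luma(c), fmaxf(fmaxf(luma(n), luma(s)), fmaxf(luma(w), luma(e))));

    // High contrast => assume an aliased edge and blend toward the local
    // average. The filter cannot tell a geometric edge apart from texture
    // detail, which is exactly where the blur comes from.
    if (hi - lo > 0.1f) {
        dst[y * width + x] = make_float3((c.x + n.x + s.x + w.x + e.x) * 0.2f,
                                         (c.y + n.y + s.y + w.y + e.y) * 0.2f,
                                         (c.z + n.z + s.z + w.z + e.z) * 0.2f);
    } else {
        dst[y * width + x] = c;
    }
}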
This pretty much shows that traditional MSAA has been surpassed in every way.
SMAA done properly with subsamples and temporal information looks a lot better and so does TXAA especially when in motion.
While TXAA reduces pixel crawl, it blurs the entire picture and thus reduces detail. Without tweaks in SweetFX, TXAA's native filter quality is atrocious; it's like playing a game with the wrong prescription glasses.
Clear examples of why TXAA is inferior to MSAA are Black Ops 2 and The Secret World on the PC:
COD BO2 8xMSAA
COD BO2 4xTXAA
Now if you want to spend $800+ on high-end GPUs and make your game look more like a blurry console title, by all means. In that case, I'd rather save $400 and get a PS4...
Your opinion that MSAA is outdated is even contradicted by the IQ in Crysis 3, where MSAA clearly provides superior image quality to FXAA or TXAA:
FXAA 4x (anti-aliasing doesn't even work)
TXAA 4x (significant detail texture quality degradation/blurring)
MSAA 4x (way better IQ than FXAA without any of the blurfest of TXAA).
I'm interested in the 2xTXAA setting.
See above. TXAA is a blurfest, as usual.
Lol, at high settings the HD 7970 GHz can't even beat a GTX 660 Ti.
Ironically, none of the single-GPU cards are playable either. 28 fps minimum and 36 fps average on a 680? This again goes back to the unreasonable performance hit of MSAA in this deferred lighting engine.
At 1080p VHQ with 4xMSAA, neither the GTX 680 nor the HD 7970 GE seems to be fast enough with current drivers.