That covers the areas that are similar to AMD's LiquidVR, but Nvidia has one feather in its cap that AMD doesn't currently have an answer for, and that's Multi-Resolution Shading. This is something I got to preview last year, and it's another clever use of technology. Nvidia's Maxwell 2.0 architecture (sorry, GM107/GM108 and Kepler users, you're out of luck!) has a feature called viewport multicasting, or multi-projection acceleration. Nvidia first talked about this at GDC 2015, but few details were provided at the time; now, however, we have a concrete example of what the company can do with this feature.
Normally, graphics rendering takes place from a single viewport; if you want to render from a different perspective, you have to recalculate a bunch of state and set everything up for a new viewport, and that takes time. Viewport multicasting allows Nvidia's GM20x architecture to render to multiple viewports in a single pass, and what that means for VR is that Nvidia can split the image into a 3×3 grid of nine viewports, each rendered at its own scale. Why would they want to do that? Because a lot of the data normally rendered for VR gets discarded during the preparation for the VR optics: the barrel-distortion warp applied to match the headset's lenses compresses the edges of the image, so detail rendered at full resolution in the periphery is largely thrown away.
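To make the pixel savings concrete, here's a rough back-of-the-envelope sketch of that 3×3 layout in C++. The 60 percent center split, the 50 percent peripheral scale, and the per-eye target size are illustrative assumptions rather than Nvidia's shipped values, and the real VRWorks interface looks nothing like this; the point is simply how much shading work the periphery gives back.

```cpp
// Sketch of the 3x3 multi-resolution viewport layout (illustrative only,
// not Nvidia's VRWorks API). The center cell keeps full resolution; the
// eight peripheral cells are shaded at a reduced scale.
#include <cstdio>

int main() {
    const float fullW = 1512.0f, fullH = 1680.0f; // assumed per-eye target size
    const float centerFrac = 0.6f;  // assumed: center cell spans 60% of each axis
    const float edgeScale  = 0.5f;  // assumed: periphery shaded at half density

    const float edgeFrac = (1.0f - centerFrac) * 0.5f;
    const float frac[3]  = { edgeFrac, centerFrac, edgeFrac };

    // Each column/row is scaled only along its own axis, so the nine
    // viewports still tile the render target without gaps.
    float colW[3], rowH[3];
    for (int i = 0; i < 3; ++i) {
        float s = (i == 1) ? 1.0f : edgeScale; // middle column/row stays 1:1
        colW[i] = fullW * frac[i] * s;
        rowH[i] = fullH * frac[i] * s;
    }

    float renderedW = 0.0f, renderedH = 0.0f;
    for (int i = 0; i < 3; ++i) { renderedW += colW[i]; renderedH += rowH[i]; }

    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            printf("viewport[%d][%d]: %4.0f x %4.0f px\n", r, c, colW[c], rowH[r]);

    const float full = fullW * fullH, rendered = renderedW * renderedH;
    printf("shaded: %.0f of %.0f pixels (%.0f%% saved)\n",
           rendered, full, 100.0f * (1.0f - rendered / full));
    return 0;
}
```

With those assumed numbers, the eight outer viewports give back about 36 percent of the shaded pixels per eye, and those are precisely the pixels the lens warp would have compressed away anyway.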