It's all extrapolation, but it's based on the 480 generally winning in DX12 games and the presumption that, like in the Kepler and Maxwell eras, AMD's chips will eventually pull ahead of their NV competitors (290 vs 970 vs 780, for example).
The DX12 bit is credible IMO, but the second part is iffy.
The RX 480 has considerably more theoretical compute performance than the 1060 (5.8 TFlops vs 4.4 TFlops). DX12 and Vulkan, in general, help AMD cards make more use of their theoretical advantages in actual gaming. Nvidia has the edge in DX11 for a variety of reasons related to the drivers and hardware architecture.
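Those TFLOPs figures can be sanity-checked from the reference specs (shader count and boost clock, as publicly listed for each card): peak FP32 throughput is just shaders x clock x 2 FLOPs per cycle (one fused multiply-add). A quick sketch:

```python
# Back-of-the-envelope check of the theoretical compute figures quoted above.
# Shader counts and boost clocks are the reference specs for each card;
# the factor of 2 is one fused multiply-add (FMA) per shader per cycle.

def fp32_tflops(shaders: int, boost_clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPs."""
    return shaders * boost_clock_ghz * 2 / 1000.0

rx_480 = fp32_tflops(2304, 1.266)    # RX 480:   ~5.8 TFLOPs
gtx_1060 = fp32_tflops(1280, 1.708)  # GTX 1060: ~4.4 TFLOPs
print(f"RX 480:   {rx_480:.1f} TFLOPs")
print(f"GTX 1060: {gtx_1060:.1f} TFLOPs")
```

Of course, these are theoretical peaks; the whole point is that DX11 rarely lets AMD cash that advantage in, while DX12/Vulkan often do.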
The other bit to bring up is that DX12 is built on top of AMD-developed tech and the Mantle API, so of course it tends to favor AMD. This is also why older AMD cards favor DX12 and show even more significant gains there compared to their contemporaries, like Maxwell.
The big async compute fly in nVidia's ointment is just one of those features that greatly benefits well-tooled DX12 implementations, but nVidia is still dragging its feet on it a year on. They continue to break promises about making it "available through drivers" on Maxwell, and while it appears to be present in Pascal's hardware (see the AT review of the 1080), it either isn't fully implemented or isn't implemented at all at the moment.
Mantle and its related development tools have been open to nVidia for some years now, but picking them up wasn't in their interest, I suppose, given their market dominance through advertising. Meanwhile, AMD lagged through that period precisely because it was designing hardware for tomorrow (now today): built around low-level APIs and compute, which matter far more in consoles, closed systems that need every bit of efficiency to last 4+ years on aging hardware.
When you consider that the vast bulk of the gaming market is console-based now, with most AAA titles primarily being console-to-PC ports, you can see why developers would prefer hardware that is more efficient to target for their primary revenue source and more efficient to port from when bringing games to the PC. Add to that the fact that only AMD will be in Sony's and MSFT's upcoming console refreshes (MSFT has 2--one supposedly targeting 4K) for the next ~4 or 5 years, though it appears both companies are aiming for shorter refresh cycles. If the refreshes do get shorter, that could mean an even brighter future for AMD *if* they retain those contracts.
Anyway, given where the market stands today, it isn't mere speculation that AMD is clearly ahead of nVidia in modern-API and likely future game performance over the next 4 years (at least); it's simply an acceptance of the facts.
Not that nVidia is a slouch--I suspect they will continue to design for brute-force performance with a 6-month-to-1-year lifespan on each card they put out. They will probably keep winning at the higher end and with shorter refreshes, as they always do, without doing much to improve their architecture. But if they keep winning the mindshare game and customers still want to toss out ~$800 per year for 5-10% better performance on each yearly refresh, why should nVidia care? At least with AMD, you can take the Russian Sensation route: mine with your cards, accept your 5-10% less performance year-by-year, and pay ~$800 less for that privilege ($800 based on nVidia's staggering price creep over the last 3 generations, plus their new taxes).