No one says Pascal cannot get better. It's a matter of magnitude/scaling. The RX 480 shows a 40-48% increase in performance, and the Fury X even more at 52-66%. GCN - the more parallel architecture - has Command Processors and ACE engines designed for high throughput. We have seen enough DX12 benches to see that DX11 never allowed these hardware benefits to come into play because of how work is issued to the GPU. The extra hardware in GCN sat underutilized because serial APIs like DX11 were never designed to feed it.
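The difference in submission models can be sketched roughly in D3D12-style pseudocode (the pattern, not a compilable listing - `parallel_for`, `threadLocalCommandList`, and the scene structures are made-up helpers; `ExecuteCommandLists` and `DrawIndexedInstanced` are the real API calls):

```cpp
// Hypothetical sketch, not a compilable listing.

// DX11: one immediate context, so every draw funnels through a single
// thread and the driver serializes submission behind the scenes.
void RenderFrameD3D11(ID3D11DeviceContext* imm) {
    for (auto& obj : scene)
        imm->DrawIndexed(obj.indexCount, 0, 0);
}

// DX12: worker threads record their own command lists in parallel, then
// the main thread submits them all at once. Compute work can go to a
// separate queue that hardware schedulers (GCN's ACEs) overlap with
// graphics instead of waiting behind it.
void RenderFrameD3D12(ID3D12CommandQueue* gfxQueue,
                      ID3D12CommandQueue* computeQueue) {
    std::vector<ID3D12CommandList*> lists(numThreads);
    parallel_for(0, numThreads, [&](int t) {       // hypothetical helper
        auto* cl = threadLocalCommandList(t);      // hypothetical helper
        for (auto& obj : scenePartition[t])
            cl->DrawIndexedInstanced(obj.indexCount, 1, 0, 0, 0);
        cl->Close();
        lists[t] = cl;
    });
    gfxQueue->ExecuteCommandLists(numThreads, lists.data());
    computeQueue->ExecuteCommandLists(1, &asyncComputeList); // async compute
}
```

The point of the sketch: under DX11 the API itself is the serial choke point, so GCN's multiple ACEs have nothing to drain in parallel; under DX12/Vulkan both recording and queue submission can actually exploit that hardware.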
If you go way back to 2011 when Eric Demers unveiled GCN, he explained all of this. It's just too bad it took five years to see that he was 100% correct. DX11 is simply outdated, and hopefully by the time PS5/XB2 come out, DX12/Vulkan can put DX11/OpenGL out to pasture.
And that's before even discussing the immense CPU-overhead benefits that DX12 already shows in strategy games.
If NV finally designs a parallel + hardware Async Compute architecture for 2018-2019, the entire gaming industry will benefit as all major modern GPUs and consoles will be on the same page.
Under no circumstances should a Fury X be beating a 1070 by 26% in a modern game. We also already see that NV's architectures have utilization issues by comparing the 1070 vs. the 1080: the extra shader and TMU throughput on the 1080 largely goes to waste. That means that while Pascal is better utilized under DX11 than GCN is, Pascal itself is starting to run into its own bottlenecks.
The DX11 API is already creating a CPU bottleneck for 1080 SLI, based on Guru3D and TPU benches. This isn't about AMD vs. NV but the fact that DX11 is simply outdated and last gen.