The pixel fill-rate in the slide matches the values listed in the 290X slides (http://www.legitreviews.com/wp-content/uploads/2014/04/r9-specs-645x498.png). As has been stated before, pixel fill-rate shouldn't be an issue even in very high-res scenarios (3x 1440p, 4K, etc.). They have to save transistors somewhere, and if more pixel fill-rate doesn't get you anything, I'd say it's a wise decision not to waste silicon on it. Though, please correct me if I'm wrong, as I too was expecting a ROP increase, as has been the norm with each new GPU arch.
It is not correct to compare theoretical pixel fill-rates on paper, whether AMD vs. AMD or AMD vs. NV. Every time this point is brought up, it's either brushed aside or purposely ignored. For example, some gamers tried to compare GM200's 96 ROPs to AMD's 128 ROPs and used GPU clock speeds to support their position that AMD will need 128 ROPs clocked at 1-1.05GHz to match GM200's 96 ROPs clocked at 1.2GHz Boost. That's not how it works at all. Different GPU architectures utilize ROPs differently. Just because two GPUs have the same number of ROPs doesn't tell us their actual real-world throughput. We can only guesstimate, and often we will be wrong.
"Even with the same number of ROPs and a similar theoretical performance limit (29.6 vs 28.16), 7970 is pushing 51% more pixels than 6970 is."
~ AT
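To make the gap between the "paper math" and reality concrete, here's a minimal sketch (Python, purely illustrative; the ROP counts and clocks are the ones quoted above, and the formula is just the textbook one-pixel-per-ROP-per-clock peak, not a claim about real throughput):

```python
# Theoretical pixel fill-rate is just ROPs x core clock.
# All numbers below come from the posts/slides quoted here.

def theoretical_fill_rate(rops: int, clock_ghz: float) -> float:
    """Peak fill-rate in GPixels/s: one pixel per ROP per clock."""
    return rops * clock_ghz

# The GM200-vs-AMD comparison some gamers were making:
gm200 = theoretical_fill_rate(96, 1.2)    # 115.2 GP/s
amd   = theoretical_fill_rate(128, 1.05)  # 134.4 GP/s

# The AnandTech example: near-identical paper limits...
hd7970 = theoretical_fill_rate(32, 0.925)  # 29.6 GP/s
hd6970 = theoretical_fill_rate(32, 0.880)  # 28.16 GP/s
paper_gap = hd7970 / hd6970 - 1            # ~5% on paper

# ...yet AT measured the 7970 pushing 51% more actual pixels.
measured_gap = 0.51
print(f"paper: +{paper_gap:.1%}, measured: +{measured_gap:.0%}")
```

The two AT cards differ by ~5% on paper yet 51% in practice, which is exactly why paper fill-rate comparisons across architectures tell us so little.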
We have already seen the 285 outperform the 290 in pixel fill-rate tests. Even though these tests aren't 100% pixel fill-rate limited, since they are also memory bandwidth dependent, the point is that a card with 32 ROPs can put down the same or higher pixel fill-rate than a 64-ROP card. For that reason we can't just compare Hawaii's 64 ROPs to Fury's 64 ROPs; I bet it's not that simple at all. It's not out of the question that AMD's real-world pixel fill-rate has increased 70-100% with the same number of ROPs. After all, based on the theoretical differences between Hawaii's and Tahiti's ROP throughput, Hawaii's ROPs were highly inefficient. We need to wait for benches, but I am confident Fury's pixel fill-rate performance will trounce the 290X's by a lot.
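On the 285-vs-290 point, here's a hugely simplified model of what a fill-rate test actually measures: the card hits whichever limit is lower, the ROP rate or the memory bandwidth needed to write the pixels out. The structure is the standard min-of-two-bottlenecks argument; every number below is a hypothetical placeholder, not a measured value:

```python
# Simplified bottleneck model of a pixel fill-rate test.
# All inputs below are hypothetical, for illustration only.

def effective_fill_rate(rops: int, clock_ghz: float,
                        bandwidth_gbps: float, bytes_per_pixel: float,
                        compression_ratio: float = 1.0) -> float:
    """GPixels/s achievable: min of ROP limit and memory limit."""
    rop_limit = rops * clock_ghz
    mem_limit = bandwidth_gbps * compression_ratio / bytes_per_pixel
    return min(rop_limit, mem_limit)

# With a heavy (e.g. blended 64-bit) pixel format, a 64-ROP card can
# be memory-bound, while a 32-ROP card with framebuffer compression
# closes the gap -- the 285-vs-290 situation described above.
big   = effective_fill_rate(64, 1.00, 320, 16)                       # 20.0, memory-bound
small = effective_fill_rate(32, 0.92, 176, 16, compression_ratio=1.8)  # 19.8
print(big, small)
```

With a fat pixel format the 64-ROP card is memory-bound, so a 32-ROP card with good framebuffer compression can land in roughly the same place; that's why the 285/290 result isn't a paradox.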
As such, I don't see how AMD can do anything... short of just removing random stuff from memory.
Unless you have a 4K monitor and plan on buying two $550+ cards and keeping them for more than 3 years, this is probably a non-issue. 980 SLI beats the Titan X at 4K in almost all games where SLI scales, including GTA V. The chance that a single Fury X will become VRAM limited before it becomes GPU limited is very slim. We really need to wait for benchmarks and overclocking results, because these matter a lot more than 4GB vs. 6GB of VRAM. A lot of gamers who buy a pair of $550+ cards will be reselling them and going with 14nm/16nm HBM2 cards, as I don't think too many elite PC gamers spend $1100-1300 on GPUs and keep them for 5 years. It's just not a very good upgrading strategy.
It's also game dependent. If someone is planning to get a $200 racing wheel to play Project CARS and can't live without PhysX in Batman (say this gamer's favourite franchise), well, the 980 Ti wins automatically. Similarly, someone might choose 2x Fury cards because they don't want 600W+ of power exhausted into their case. Also, knowing NV's driver support for Fermi and Kepler, it's a tall order to bet that NV's card will age better over time. History is heavily against NV in this regard, as time has shown that AMD's GCN cards age a lot better than Fermi and Kepler did. What guarantee is there that NV will spend as much time optimizing for Maxwell once they release Pascal next year? We know AMD has no choice but to optimize GCN, since their Financial Analyst Day 2015 slides show they still plan to use GCN through 2016. From a driver perspective, that makes Fury a 'safer' bet unless one plays GW titles predominantly.