I'm not sure why the discussion keeps coming back to die size. For consumers, die size doesn't matter. I'd bet 90% of video card buyers have no idea what a process node is.
Because we are discussing architectures!
If you want to discuss graphics cards, it doesn't matter whether an architecture is newer or older either, does it?
When you discuss graphics cards, you discuss performance, price, power consumption (and the related heat and noise), and features (DX10 vs. DX11, etc.).
When you start discussing architectures, on the other hand, die size does matter.
If die size doesn't matter when comparing the HD6970 to the GTX580, then the fact that Cayman is supposedly a newer architecture while the GTX580 is a refresh of GF100 doesn't matter either.
OK, you don't like discussing architectures. Then don't.
If you do like discussing architectures, then die size matters, performance/watt matters (something NVIDIA itself highlighted when talking about future architectures), and so on.
And we, or at least I and others, were talking about architectures here.
If card X is as fast as card Y but half the size, which one is the more efficient architecture? Of course, if card X costs twice as much as card Y, then card Y is most likely the better buy.
See the difference between discussing architectures and cards?
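To make that X-vs-Y point concrete, here's a toy sketch of the two ways of scoring the same pair of cards. All the numbers are made up purely to mirror the hypothetical example above:

```python
# Hypothetical numbers, chosen only to mirror the X vs. Y example above.
card_x = {"fps": 100, "die_mm2": 200, "price": 400}  # same speed, half the die, twice the price
card_y = {"fps": 100, "die_mm2": 400, "price": 200}

def perf_per_mm2(card):
    # Architecture view: performance per unit of die area.
    return card["fps"] / card["die_mm2"]

def perf_per_dollar(card):
    # Consumer view: performance per unit of money.
    return card["fps"] / card["price"]

print(perf_per_mm2(card_x), perf_per_mm2(card_y))        # 0.5 0.25 -> X is the more efficient architecture
print(perf_per_dollar(card_x), perf_per_dollar(card_y))  # 0.25 0.5 -> Y is the better buy
```

Same two cards, opposite winners, depending on which metric you care about.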
The most curious part is that you clearly do understand why die size and performance/mm^2 matter.
Like I said, if you're expecting the HD6970 to beat a GTX580 by 15-20%, that would put the 6970 roughly 50-60% faster than an HD5870. That's a big jump on the same manufacturing node. If AMD can pull that off, then the 4D design has tremendous potential for the HD7000 series on 28nm.
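For what it's worth, those percentages are internally consistent. A quick back-of-the-envelope check, using only the figures quoted above (not measured benchmarks):

```python
# If HD6970 = 1.15-1.20x GTX580 and HD6970 = 1.50-1.60x HD5870,
# the implied GTX580 lead over the HD5870 falls out by division.
low  = 1.50 / 1.15 - 1  # ~0.30
high = 1.60 / 1.20 - 1  # ~0.33
print(f"implied GTX580 lead over HD5870: {low:.0%} to {high:.0%}")
# -> implied GTX580 lead over HD5870: 30% to 33%
```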
As you said in this post, if the architecture scales well there is potential on 28nm, even if the improvement this time around is smaller than 60%.
And as for the 8800GT vs. HD3870 example, that comparison wasn't quite the same, since if both cards had been on the same node the die-size difference would have been smaller, and we know how the HD4800 series turned out.