In the land of team-based discussion and fanboyism everything is black or white.
I think it's perfectly fair, and most likely, that both the process and the architecture have failings and successes. Clearly the GF process is dense, and yielding fairly well. Clearly, TSMC has much, much more experience manufacturing modern GPUs. I very much doubt GF's process for GPUs is at TSMC's level. I doubt anyone in the entire industry is as good at GPUs as TSMC. I also think it's perfectly likely that nVidia sent more engineers for more time to sit and work with the TSMC engineers, given their proportionately larger R&D budget. It isn't either/or.
There is a lot of back and forth about the foundries. Strangely, the sentiment before Polaris launched was completely at odds with what many are saying now. The TSMC 16nm node was supposedly a disadvantage for Nvidia; lists were being posted all over about the many ways the node Nvidia was stuck with was inferior. I remember reading some gloating about how Nvidia was stuck and helpless, since they would never be able to use GloFo after burning bridges with their Samsung lawsuit.
The reason I am bringing this up? The irony amazes me.
The chatter now is completely flipped: it's now being insisted that the TSMC process is not only an Nvidia advantage, but that the nodes alone are responsible for the vastly different results of Polaris vs Pascal.
There is no way to ignore that Nvidia and AMD used different nodes from different foundries. But given that AMD seems to have plenty of fully enabled Polaris chips, the node is yielding well... or so it seems.
You can't blame everything on the node; the foundry does not design chips. The node offers what it offers, and the foundry has to produce as many useful chips per wafer as possible. The chip design should not be downplayed. It is not some small part of the outcome; it's actually at the top here. The node produces the chips, but their characteristics are a product of their design.
Okay, examples: 40nm TSMC.
Go back in time. AMD launched the 5870. Wow, what an amazing improvement, in every single way, over the 55nm 4800 series. The TSMC 40nm node was freaking amazing. Right?
Well, Nvidia got very different results with their GTX 480. Had they launched on different foundries, I guess we could keep using this argument, but they did not. Also, Nvidia had to redesign Fermi at the transistor level, and the next launch of the very similarly structured GF110 had very different success. The 580 was higher clocked, with more active cores, while using less power. The Fermi blunder had Nvidia totally rethink their strategy, putting together a newly created engineering department that would focus on nodes and die shrinks for their architectures.
Then 28nm: it should be obvious that Nvidia was getting higher clock speeds while also being more efficient. They were using smaller, less beefed-up chips to compete. This was 1000% by design. This was the direction Nvidia took. There are Maxwell chips that can clock at 1500-1600 MHz. That was due to chip design.
I am not suggesting AMD was better or worse, just stating that there was clearly a different focus last gen. Nvidia had extracted higher clocks and lower consumption from the 28nm node. AMD had good designs, don't get me wrong; they just took different routes. AMD used HBM, for example, which greatly reduced the power consumption of their wide bus. It was cutting edge, the path they took.
I am not talking about just the layout, the uarch. The layout and core design are how the chip handles data on a large scale. Those blocks are made up of billions of transistors, the tiny components that make up the chip the foundry will produce. Any chip can be improved at the transistor level; it is painstaking, but it can be well worth it.
If we look at Intel, their node shrinks have offered only minor improvements. It is baffling when you consider that these tiny, incremental gains are the combined result of architectural changes and smaller and smaller nodes. It's really clear that Intel has gained very little since 32nm.
In that light, the 480 is a major improvement, massive compared to Intel's meager gains. AMD has gained a tremendous amount in that light.
The node shrinks today are nowhere close to what we had in the past. These shrinks are incremental, and the big gains are only to be had from architectural and chip-level design. We've seen Nvidia talk about this, and they proved it without a doubt with Maxwell. I would expect Nvidia to keep focusing on the things that have given them such success in the past. Watching their 1080 reveal, they spent the first segment talking about the advancement in transistor-level voltage spikes compared to Maxwell. They talk about their drive for efficiency and speed; this surely would be a focus, as it was with Maxwell. It's clear to me they would keep making efforts and strides in the areas that pay off big time. They were the ones who said, years ago, that we can't depend on nodes for results.
Surely, Nvidia had to take the Samsung-licensed GloFo 14nm node seriously. They would have had to. I believe they probably spent a fortune on getting the best efficiency and highest clocks possible. It's naive to think otherwise. Why would they just rely on the node? 14nm offers a lot.
Just looking at Tonga and Fury clocks, the 480 is a nice step up with a good reduction in power consumption.
Looking at Maxwell boost clocks vs GCN last gen, it seems that Nvidia found ways to achieve higher clocks on the same node. Nvidia also claims that Pascal was engineered for speed, and I think that focus had something to do with the result. As for the MHz gap between the 480 and the 1070, it seems this gap started last gen, when both were on the same node.