They did this by making the die size gigantic.
What's your point? Should car manufacturers not use direct injection, turbocharging/supercharging and premium fuel to achieve improvements in internal combustion engine efficiency? Once the graphics card is actually inside your case, do you care if the die size is 300mm2 or 600mm2? The competing solution also has a 601mm2 die, so I am not seeing your point at all.
So?
If the performance is 50% better, and power usage remains the same, that's still an impressive showing without a node shrink.
Ya, I think some gamers forget that ATI/NV used to achieve 50-100% increases in GPU performance by fitting more transistors into a smaller space thanks to die shrinks. With this generation that approach doesn't work, which meant growing the die size was the only way to go. Otherwise, the performance improvement wouldn't have been 50% but more like 10-15%, or alternatively would have required an all-new architecture from AMD. That is completely unrealistic since new architectures take 3-5 years to design from scratch (both NV and AMD have admitted to this). Also, since 28nm is a short-lived node, the cost-benefit of designing an all-new architecture just for it would have been questionable. If AMD can achieve a 50% increase in perf/watt and performance without an all-new architecture and without a node shrink, that's good enough to last them 18-20 months. At that point they will move on to a 14nm/16nm node shrink + HBM2. I don't really know what else people were expecting: a 5632-shader Fiji card on 28nm?
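To put rough numbers on the "grow the die instead of shrinking" point, here's a quick back-of-the-envelope sketch. The Hawaii and Fiji figures are approximate public numbers, and the proportional-scaling reasoning is my own simplification, not anything AMD has stated:

```python
# Back-of-the-envelope check of the "no shrink -> bigger die" argument.
# Die area, transistor and shader counts are approximate public figures
# for Hawaii (R9 290X) and Fiji, both on the same 28nm node.

hawaii = {"area_mm2": 438, "transistors_b": 6.2, "shaders": 2816}
fiji   = {"area_mm2": 596, "transistors_b": 8.9, "shaders": 4096}

area_growth   = fiji["area_mm2"] / hawaii["area_mm2"]
xtor_growth   = fiji["transistors_b"] / hawaii["transistors_b"]
shader_growth = fiji["shaders"] / hawaii["shaders"]

print(f"Die area:    +{(area_growth - 1):.0%}")    # ~ +36%
print(f"Transistors: +{(xtor_growth - 1):.0%}")    # ~ +44%
print(f"Shaders:     +{(shader_growth - 1):.0%}")  # ~ +45%

# On the same node, extra transistors can only come from extra area
# (plus density tweaks). A real node shrink would have fit that same
# transistor budget in roughly the old footprint, which is why staying
# on 28nm leaves die growth as the only lever for a ~50% jump.
```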
What's with you and FurMark? It almost feels like you're just using it because it helps confirm what you already thought. NV clearly has a TDP advantage these days; you're actually hurting your own argument by quoting useless information in an attempt to support it.
In his dismissal of a 'broken' Gigabyte G1, he doesn't realize that the Gigabyte G1 980 isn't a broken card; it was deliberately designed to handle 350-400W of power because it was made for overclocking. Many other cards are built this way, but that doesn't mean they draw 350W+ in games out of the box. For a lot of hardcore overclockers, the inclusion of dual 8-pin connectors is a plus because they have peace of mind that they will not be power limited when overvolting/overclocking. When a card is designed for high-end overclocking, the PCB, VRMs and power circuitry are all beefed up. FurMark, as a power virus, pushes those components to the limits they were engineered for, so naturally we would see a 980 G1 draw 350W of power because it can do so if asked, not because it operates that way in games.
That's why if we were to take a 980Ti Classified vs. a reference 980Ti, it's only natural that the Classified would draw a lot more power in FurMark, since it is over-engineered to handle that much power compared to the reference card. His constant use of FurMark as an indicator of maximum power usage in games ignores how the power virus actually works, and ignores how after-market GPUs are designed and why their VRMs/MOSFETs/power circuitry/digital power delivery are often upgraded. The actual ASIC used on those after-market boards has very similar characteristics to what goes into a reference 980Ti (sure, there could be some differences in top clock speeds due to ASIC quality binning). Therefore, what creates such a large discrepancy in peak power usage in FurMark is not a difference in the GM204 chips between a Gigabyte card and a reference PNY card, but the actual board and all the related components around the ASIC. You of course understand all of that.
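A toy model of the same idea, with made-up wattage figures purely for illustration (none of these are measurements of any real card):

```python
# Toy model of why FurMark separates boards while games don't.
# All wattage numbers below are illustrative assumptions.

def observed_draw(workload_demand_w: float, board_limit_w: float) -> float:
    """A card draws what the workload asks for, capped by what the
    board's power delivery (VRMs, connectors, power limit) allows."""
    return min(workload_demand_w, board_limit_w)

GAME_DEMAND_W = 190     # a typical game load (assumed)
FURMARK_DEMAND_W = 450  # a power virus asks for far more than any game

boards = {"reference 980": 200, "overbuilt G1-style 980": 360}

for name, limit_w in boards.items():
    game = observed_draw(GAME_DEMAND_W, limit_w)
    furmark = observed_draw(FURMARK_DEMAND_W, limit_w)
    print(f"{name}: game ~{game}W, FurMark ~{furmark}W")

# Under a real game both boards draw almost the same power; only the
# power virus exposes the headroom the overbuilt board was designed
# to have. FurMark measures the board, not typical gaming draw.
```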
This is the first time in a long time that I disagree with you on something, but that post really bothers me. Just because the other guy is being obtuse (and quite possibly working for his check from a certain company) is no reason for you to insult Americans in general... not everybody from the US or Canada is a world traveler like us; no need to insult those who live in the sticks and don't get out much.
Ya, sorry, my bad. I shouldn't have generalized like that. I know a lot of Canadians/Americans do travel the world and work abroad, and many of them were born in foreign countries too. I just found his dismissal of a foreign site that has been used widely on our forum for a long time at odds with the general consensus of many other gamers who do find that site reputable.
Computerbase even called AMD out on poor texture filtering IQ in their drivers years ago, and the same goes for PCGamesHardware. These sites have a good track record of being more objective than many of the US-based sites that seem to clearly favour advertising money, journalistic review-sample privileges, etc., which of course today means a lot more favourable reviews of NV products.
I hope that 390 is really good, but based upon past experience this seems like a faint hope.
I doubt that, because the 390/390X seem to be just improved versions of the 290/290X silicon using current GDDR5 tech. The Fiji cards are the ones that should show the real breakthrough from the 290X era for AMD, since AMD grew the die size and implemented HBM.