I guess I should have been a little more explicit, but the theory would also include that Ampere wasn't originally intended to be the consumer line, or that even though both lines were called Ampere there would be some differences between them. Sure, it makes more sense to use a single fab, but if there aren't enough wafers then it just isn't possible, and that necessitates using multiple foundries.
Of course there are other explanations for why NVidia might do that as well. There's a lot of hardware that GA100 doesn't include that would need to go into the consumer cards (e.g., ray tracing), and there's some extra stuff in GA100 that wouldn't go into the consumer cards. Even though the underlying architecture can be the same, there would be enough differences that doing tape-outs at two different foundries wouldn't be an issue.
You also have to look at it from the perspective of what NVidia might have to do, not what they'd ideally like to do. It's been known for a while that they didn't get a lot of wafers at TSMC, though this may have changed more recently, along with their plans for them. If it comes down to a limited number of wafers, then they don't have much of a choice but to tape out designs on two different nodes.
It's the same as AMD releasing Radeon VII at CES. They probably originally intended to be able to announce Navi, but it wasn't ready and they needed a 7 nm GPU to go along with their 7 nm CPUs. So they took an enterprise card and made a limited edition consumer part out of it. No one is going to claim that they planned for that all along, but circumstances made it necessary.
Like I said, this was a theory designed entirely to square with all of the rumors, not based on any particular insider knowledge. It's an exercise in assuming that all the big rumors were true and figuring out how things had to go down for those rumors to have been true, as opposed to looking at the actual outcome and just deciding that any rumors which didn't support it were false all along.
I wouldn't put any particular faith in it and it doesn't really matter. Arguing over what might have been isn't terribly useful. This was just an interesting thought experiment on my part.
I think everyone here is underestimating Nvidia's ability to execute as a company.
Over the last 6 months, Nvidia has been executing incredibly well and their stock and revenue growth is reflecting this.
This is also showing in their outlook for Q2. While AMD is expecting around a 20% improvement year on year, Nvidia is expecting around a 40% increase year on year. AMD stock fell while Nvidia stock grew after their quarterly reports were announced.
I think the strong AMD bias on these forums tends to inflate expectations for AMD while magnifying their strengths. The opposite happens for Nvidia on this forum, where their strengths and successes are minimized while their faults and mistakes are magnified. This carries over to extrapolations as well. Radeon VII, for example, while built on 7nm, only increased fp32 by 9% compared to Vega 64 while consuming the same power, yet people on this forum were still really impressed by it. On the other hand, people on this forum look at the power consumption numbers and the least impressive parts of A100's increases and extrapolate to make negative predictions about Nvidia's gaming cards. Looking at where Nvidia's growth is happening and where it is going to take place in Q2, you can tell A100 is going to be a smashing success, with Nvidia's datacenter and professional visualization revenue eclipsing AMD's entire revenue.
Taking their financial success into account, and their ability to generally not miss performance expectations by a mile (Fury X and Vega), you can see Nvidia is running exceptionally well, and Turing is competing very well against AMD even while AMD has a node advantage. There is a reason why Nvidia's CEO has won not just one of the best-CEO-of-the-year awards, but the very top honor of best CEO of the year. An honor even Lisa Su has not achieved.
One has to realize that there likely was not enough capacity in 2018 for Nvidia to launch Turing on 7nm regardless of AMD buying wafers and eating up that capacity. Turing would likely have launched in June if it weren't for the oversupply of Pascal cards.
If Nvidia had launched their cards on 7nm, they would have run into big supply constraints. Nvidia's cost of goods sold is as high as AMD's, which means the amount they spend on wafers isn't all that different from AMD's. So launching a product on 7nm, well before AMD launched their 7nm products, would have caused Nvidia severe supply constraints, since AMD already ran into supply constraints for products launched in 2019 when availability would have been better.
For their next generation, Nvidia isn't going to mess around and underestimate AMD. We have to remember, Nvidia is nothing like Intel in terms of execution. They are aggressive and do a much better job of guarding their markets.
Nvidia knew how many 7nm wafers they needed for their products years ago and would likely have purchased them well ahead of time. I doubt that they made their products on Samsung's 8nm, screwed up, and are now redoing everything on TSMC 7nm; that would be simple incompetence. Considering the capacity freed up at TSMC by Apple's move to 5nm, and TSMC wanting to retain Nvidia as a customer, Nvidia likely planned to use TSMC 7nm from the start, as TSMC is what they have always used for the launch of most of their chips.
Nvidia knows how to execute, and their 3.66 billion dollar outlook for the second quarter shows this; their outlooks have been very accurate aside from the mining debacle. AMD is only giving a modest 1.85 billion dollar outlook for Q2.