Doubtful. The 2080Ti already uses TU102, and even if the moniker alone weren't enough to justify this reasoning, I'm sure the sheer die size at 754mm2 means there isn't a bigger chip around the corner.
In fact, some people are speculating that Nvidia probably wanted to brand the 2080Ti as the Titan of this generation, but were forced to release it as part of the "mainstream"/gaming cards because the 2080 was completely underwhelming.
It's certainly an interesting thought.
The Quadro RTX 6000/8000 has 4608 cores, the Ti has 4352. Not sure if that means you could have a card beyond the Ti?
Probably not; Nvidia will be releasing 7nm GPUs next year. The RTX 20 series is just a stopgap.
If it was just a stopgap they would not have gone as far as building three different dies.
They might add 7nm to the 2000 series next year, but 2000 series won't be replaced or significantly upgraded next year.
Nvidia had no choice but to tape out 3 dies due to the huge differences in die size between GPU tiers caused by all the extra RTX hardware.
For reference:
TU102: 754mm2 RTX2080Ti
TU104: 545mm2 RTX2080
TU106: 445mm2 RTX2070
GP102: 471mm2 GTX1080Ti
GP104: 314mm2 GTX1080/1070
It's much cheaper to mass produce GP104 314mm2 vs TU104 545mm2 chips, so it made sense to cut some GP104 chips down instead of taping out a marginally smaller chip for the GTX1070.
The RTX2080's TU104 chip is already a lot more expensive to produce than the GTX1080Ti's GP102 chip, so it would be ridiculous to sell it @ RTX2070 prices.
Whether or not Nvidia calls their 7nm GPUs a 2000 series doesn't matter. Fact is, the current 12nm 2000 series GPUs will be eclipsed by 7nm versions next year. Even if it's a straight die shrink with no IPC gains, at least they'll clock higher and use less power.
Switching to 7nm will improve yields and cut costs for Nvidia since the current Turing chips are so massive. It doesn't matter if AMD is uncompetitive; it's in Nvidia's own financial interest to switch to 7nm as soon as yields are at acceptable levels.
IMO, the die size delta between the 104 and 106 is not significant enough to warrant the up-front costs on a stopgap, short-term run. Those dies are planned for a long run.
A bigger chip means fewer chips per wafer, but it also means a lower yield per wafer. Look up the wafer yield formula; chip cost does not scale linearly with chip size.
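To illustrate that nonlinearity, here's a rough sketch using a simple Poisson yield model against the die sizes listed above. The wafer cost and defect density are assumed, illustrative figures, not actual TSMC numbers:

```python
import math

WAFER_DIAMETER_MM = 300   # standard 300 mm wafer
WAFER_COST_USD = 6000     # assumed wafer cost, illustrative only
DEFECT_DENSITY = 0.001    # assumed defects per mm^2 (0.1 per cm^2), illustrative

def dies_per_wafer(die_area_mm2):
    """Gross dies per wafer, with a standard correction for edge loss."""
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2):
    """Poisson yield model: probability that a die has zero defects."""
    return math.exp(-die_area_mm2 * DEFECT_DENSITY)

def cost_per_good_die(die_area_mm2):
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2)
    return WAFER_COST_USD / good_dies

for name, area in [("GP104", 314), ("TU104", 545), ("TU102", 754)]:
    print(f"{name} ({area}mm2): {dies_per_wafer(area)} gross dies/wafer, "
          f"{yield_fraction(area):.0%} yield, "
          f"~${cost_per_good_die(area):.0f} per good die")
```

With these assumed numbers, TU102 at 754mm2 comes out to roughly four times the cost per good die of GP104 at 314mm2, even though it's only about 2.4x the area; that's the nonlinearity at work.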
You can believe whatever you want, but Nvidia would be stupid not to switch to 7nm next year with die sizes this big.
I totally get die size cost issues.
But you seem to be missing the other part of the equation: the up-front costs. Numbers I saw for mask cost alone on a ~14nm chip were about $80 million. You have to sell a huge pile of cards to pay that off out of unit profits.
TU104 is really not that much bigger than the Vega 64/56 die, and AMD used the same die for both tiers.
If NVidia was really planning for a short term run, it doesn't make sense to spend an extra 80 million to have unique dies at each tier if you weren't really going to have time to amortize the hefty up front costs.
It would make MUCH more sense on a short term run to use one die and spread the up front costs across two tiers.
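The amortization argument can be made concrete with back-of-the-envelope arithmetic. Both figures here are assumptions for illustration, not actual Nvidia numbers:

```python
# Trade-off sketch: a dedicated smaller die saves silicon cost on every
# unit sold, but adds its own up-front design/mask cost. The break-even
# volume is where the per-unit savings repay that fixed cost.
UPFRONT_COST_USD = 80_000_000   # assumed up-front cost of one extra die design
SAVING_PER_UNIT_USD = 40        # assumed silicon saving per card vs. cutting down a bigger die

break_even_units = UPFRONT_COST_USD / SAVING_PER_UNIT_USD
print(f"Break-even volume: {break_even_units:,.0f} units")  # 2,000,000 units
```

Under these assumptions, a tier that won't move a couple of million units before being replaced is better served by one shared die; only a long production run justifies the separate tape-out.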
IMO, the first 7nm parts will most likely be the 2060/2050. There's much less risk doing a small die on a new process.
Later, after the process is more mature (and less expensive) and TU102-106 have been amortized, they can do 7nm parts in that class.
Where did you get the $80mill figure? An intel engineer says only $2-3mill for 16/14nm finfet: https://www.quora.com/How-much-does-it-cost-to-tapeout-a-28-nm-14-nm-and-10-nm-chip
The reason the Vega 64/56 use the same die is that the 64 is the full-fat GPU with all shaders enabled, while the 56 uses salvaged dies with defective shaders disabled.
The same is true for all big dies like TU102: full fat Quadro RTX6000 & cut down 2080Ti.
TU104: Full fat Quadro RTX5000 & cut down RTX2080
Nvidia is already spreading up-front costs (which are minimal) across different tiers.
This is what I was remembering:
https://semiengineering.com/finfet-rollout-slower-than-expected/
"But perhaps the biggest issue is cost. The average IC design cost for a 28nm device is about $30 million, according to Gartner. In comparison, the IC design cost for a mid-range 14nm SoC is about $80 million."
The point being upfront costs were $30 million at 28nm and have increased to $80 million at 14nm, a lot of it due to a massive increase in masking layers.
Regardless of how profitable a company is, each project must have its own profitable business case. That's actually how profitable companies stay profitable: they maximize every business case.
If you are making a stopgap part, it makes a heck of a lot more sense to just do it with one design when the dies are so close in size/purpose, since you know it's for the short term, and you can save tens of millions of dollars in up-front costs.
IMO we won't see a 7nm respin/replacement of TU104/106 within a year, because it appears that Nvidia is counting on an amortization cycle long enough to warrant separate parts.
Time will tell.
You're misunderstanding the article. The $80mill is for designing a 14nm SoC from the ground up, not just the tape-out. Gartner's Wang said: "A high-end SoC can be double this amount, and a low-end SoC with re-used IP can be half of the amount."
When they're talking about reusing IP to reduce costs it means most of what they're talking about is design cost, not tape out.
Turing design costs are already fixed; Nvidia designs everything in house and then scales it down for the smaller dies. The only extra cost for producing different chips is the tape-out, which is only $2-3mill.
I can confirm the $2-3 million figure for a mask set, or at least that it's in the ballpark; maybe a little low depending on the foundry.
Thank you for the confirmation. $80 million for a tape out? LOL.