Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,065
7,491
136
Wow, what a news day.

Conflicted feelings on the rumors: on one hand, a huge jump in performance could mean some serious discount sales on RTX 2xxx series hardware, which I am definitely in the market for; on the other, as has been mentioned, it could also mean the P/P ratio doesn't really move and things just keep getting more expensive.

The rumor dump posted by Konan above is absolutely mouthwatering, even with the amount of salt that goes with it. But in the event that NV has managed a Maxwell -> Pascal level jump again (which isn't particularly unique, given they did something similar with their prior node drop, Fermi -> Kepler), maybe AMD should name Big Navi the "6300 XT" and then brag about how NV can barely keep up with their entry-level cards
 

uzzi38

Platinum Member
Oct 16, 2019
2,703
6,405
146
AMD didn't underestimate Intel even after 3 years of Intel screwing up 10nm. Remember the Rome launch? AMD mentioned back then that they designed Rome to be competitive with Ice Lake-SP and didn't think they'd match on per-core performance until Milan.

I cannot see them underestimating Nvidia who - to their credit - do not stop innovating.

Also, lmao at the idea that AMD have decided to increase ROP count since it was last leaked. Navi21 first taped out last year. Over 6 months later, AMD submits a patch to the Linux kernel showing the ROP count to be 64. And within weeks we're supposed to believe it was upped to 96?

Also double lol at Big Navi's board power at 350-375W.
 

Gideon

Golden Member
Nov 27, 2007
1,714
3,938
136
All the leaks seem to be spilling over today!
...

This rumor, while some of it might be accurate, seems to have quite a lot of BS in it.

1. He didn't even get his Time Spy scores right.

RX 5700 XT is 3950 - 4150
RTX 2080 Ti is 6300 - 7000

according to him:
RTX 3070 Ti (GA104) is 7000 - 7800
Big Navi is (1.5 x Navi) = ~6000 (that's considerably slower than even Coreteks suggested)

But still, Big Navi will somehow "trade blows" with GA104? These cards would have a score gap comparable to the 2080 vs the 2080 Ti (see the quick numbers at the end of this post).

2. AMD adding ROPs to a design "mid-flight"?

3. "AMD underestimating Nvidia" - If AMD only manages to get 50% more performance from 2x the CUs with a 505 mm2 die on a 7nm+ node (capable of clocking PS5 to 2200 Mhz game-clock) @ 330-375W TDP - then that's not underestimating. That's utterly dismal execution, especially after 50% perf/watt gains talk at the time they had silicon of RDNA2 designs.

They would essentially be totally topped out with nothing left in the tank. It wouldn't matter if Nvidia were 3x the perf of the 2080 Ti; they couldn't really do anything about it. So even if everything about AMD in this rumor is true, the "underestimating" part makes no sense.

Also the "AMD were full of confidence" part. You have a design unable to beat a 250W 2080 Ti (with your design @ 350W btw) You know the competitor is doing 350W designs on a better node than before and you're "full of confidence" ? If that were true, everybody higher up ion the AMD graphics team should be fired for incompentence.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
14,841
5,456
136
Conflicted feelings on the rumors: on one hand, a huge jump in performance could mean some serious discount sales on RTX 2xxx series hardware, which I am definitely in the market for, but as has been mentioned it could also mean P/P ratio doesn't really move and things just keep getting more expensive.

If you are gonna do that, it's going to be used... looks like the supply is basically gone. Almost to the point where it better be a hard launch.

Also 50% faster than the 2080 Ti FE isn't that amazing, when OC'd 2080 Ti's are already 15% faster.
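For scale, taking that 15% at face value: 1.50 / 1.15 ≈ 1.30, so the rumored card would only be about 30% faster than an already-overclocked 2080 Ti.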

Big Navi is (1.5 x Navi) = ~6000 (that's considerably slower than even Coreteks suggested)

Isn't Coreteks suggesting that what AMD releases this year is ~2080 Ti? The ~3080/3090 cards would be released next year.
 

Gideon

Golden Member
Nov 27, 2007
1,714
3,938
136
Isn't Coreteks suggesting that what AMD releases this year is ~2080 Ti? The ~3080/3090 cards would be released next year.

I was referring to this article:

Unfortunately for us PC enthusiasts it seems the news aren’t great regarding the so-called “NVidia Killer”. The performance level of the high-end GPUs based on the codenamed Sienna Cichlid ASICs is around the same as NVidia’s last-gen top SKU, the 2080ti, according to these reports. A best case scenario supposedly shared by AMD is that at most gamers should expect to get about 15% over the 2080ti in a select AMD “optimized” titles. According to the same sources AMD is aiming to launch the “Big Navi” as a direct competitor to the soon to be launched RTX 3080, and not as a competitor to NVidia’s highest performing part as has been widely reported (let alone beat it). Some have suggested that “Big Navi” would be up to 50% faster in traditional rasterization than the RTX 2080ti but according to AMD’s partners that will not be the case.

This states in no uncertain terms that he's talking about N21(Sienna Cichlid), which should be the biggest chip.

Overall, if the leaks about 2xFP32 perf are true then yeah, AMD will get smoked one way or another.

If the Navi die-size info is correct, I would have expected more from it: at least, say, ~70% more perf than the 5700 XT (particularly at those TDPs and with rumors of possibly watercooled cards). I've always considered 90-100% to be the upper bound of possibility, a "moonshot" with major architectural improvements. 150% of 5700 XT performance at 350W and 2x the die size, after the PS5/Xbox specs and all the hype, qualifies as a fail for sure.
 

Saylick

Diamond Member
Sep 10, 2012
3,389
7,153
136
Out of curiosity, do you guys believe there's actually a co-processor on the back of the PCB, opposite of the GPU die? If so, won't it help explain part of the high power draw of the entire board?
 

CakeMonster

Golden Member
Nov 22, 2012
1,428
535
136
Out of curiosity, do you guys believe there's actually a co-processor on the back of the PCB, opposite of the GPU die? If so, won't it help explain part of the high power draw of the entire board?

Nope, why would they do that to themselves? Unnecessary complexity in a situation where NV almost certainly (unfortunately) has the upper hand performance-wise.
 

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
Wasn't the Coreteks youtuber claiming that on the back of the PCB there's a coprocessor doing the RT stuff?
Out of curiosity, do you guys believe there's actually a co-processor on the back of the PCB, opposite of the GPU die? If so, won't it help explain part of the high power draw of the entire board?
I'd say no. That's a youtuber showing how poor a grasp of technical matters he has despite supposedly being a computer hardware enthusiast.

The back of a major ASIC is typically packed with passive components: capacitors for decoupling, power delivery buffering, and the like. You couldn't fit an entire additional ASIC in that space with them in place, and doing without them is probably inadvisable; they're there for a reason, after all. Besides, the PCB is riddled with vias there as well for the front-side GPU, so how would you fit hundreds at least (more likely 1000+) of additional solder pads for another major flip-chip substrate on the reverse side? It seems to require some magic tricks IMO, or hitherto undiscovered technical innovations never demonstrated in consumer computer hardware before.

Plus, cooling. There's not much space on the reverse side of the PCB as it is; if you stick another ASIC on there, the vertical height of the chip package will chew up much of what little is available. You might then be able to squeeze a very thin vapor chamber on top to transport the heat away, maybe, but with no room for an actual heatsink that doesn't help you very much. Add to that the fact that two ASICs both doing heavy computing tasks, sitting back to back, would be grilling each other with their respective heat output. It would significantly increase heat density in one small spot on the PCB and thus the cooling difficulties, which will already be significant for a rumored 300+ watt part.

Also, inter-chip communication suffers from high latency and high power usage compared to intra-chip communication. It would complicate the 3D rendering pipeline a lot to send some of the work off-chip and then hold partially completed pixels in flight in buffer memory on the main GPU until results come back from the reverse-side "accelerator".

All this seems very unlikely honestly, from an engineering and logic standpoint.

I don't see Nvidia launching both a 3080 and a 3090/3080 Ti at the same time if they completely destroy the competition and their older cards. IMO they would return to releasing an x80 one year and the Ti the next, similar to the Maxwell and Pascal releases.
If 3080 and 3090 both use the same ASIC there doesn't seem much point in holding off the release of the latter for months. The chip is ready, might as well sell it to people who want to buy it. What on earth would waiting really gain them when the bleeding edge gamers already know there's a better offering waiting around the corner and being artificially held back by NV for reasons of nothing except pure greed? Not very many are going to be insane enough to buy a really expensive 3080 just to tide them over for a few months.
 
Reactions: xpea and Saylick

linkgoron

Platinum Member
Mar 9, 2005
2,334
857
136
If 3080 and 3090 both use the same ASIC there doesn't seem much point in holding off the release of the latter for months. The chip is ready, might as well sell it to people who want to buy it. What on earth would waiting really gain them when the bleeding edge gamers already know there's a better offering waiting around the corner and being artificially held back by NV for reasons of nothing except pure greed?

Like releasing the Titan X just two months (+ a week) after the 1080 release? There's always something better coming, especially when you buy the GX104 card.

Not very many are going to be insane enough to buy a really expensive 3080 just to tide them over for a few months.

If they use the same GPU, of course there's no need. However, it's not like Nvidia haven't done this before.

The 980 was released 9 months before the 980 Ti, with a Titan released 6 months after the 980. The 1080 was released 10 months before the 1080 Ti, with a Titan released 2 months after the 1080. If the 3080 is as good as those were, I don't see why Nvidia would change their strategy. IMO, the only reason the 2080 Ti was released at the same time as the 2080 was that the 2080 wasn't really worth it as an upgrade over the 1080 Ti.

Edit: As stated in my previous post, the above is based on the speculation that Nvidia are going to completely demolish both their previous gen and Navi 2 with just their 3080 card. If Nvidia believe that AMD can be competitive with big Navi, they'll have a large incentive to release better cards. For example, the 980ti was probably released a bit early to counter Fiji, and the Super cards were clearly a counter to Navi.
 
Last edited:

Bouowmx

Golden Member
Nov 13, 2016
1,142
550
146
Concerning the rear-side co-processor, remember back in the Tesla days (GeForce 8800), NVIDIA graphics cards had an extra chip, the NVIO, that acted as a southbridge of sorts.
https://www.beyond3d.com/content/reviews/1/3
NVIO marshalls and is responsible for all data that enters the GPU over an interface that isn't PCI Express, and everything that outputs the chip that isn't going back to the host. In short, it's not only responsible for SLI, but the dual-link DVI outputs (HDCP-protected), all analogue output (component HDTV, VGA, etc) and input from external video sources.
Not implying that the alleged Ampere co-processor would do the same thing, but I just wanted to bring up some history.
 
Reactions: psolord

CakeMonster

Golden Member
Nov 22, 2012
1,428
535
136
The 980 was released 9 months before the 980ti with a Titan released 6 months after the 980. The 1080 was released 10 months before the 1080ti with a Titan released 2 months after the 1080.

All of those line up well with the +/- 12 month upgrade cycle.

The Titan was priced outside the accepted top-gamer-card niche at the time. So it was 980 -> 980 Ti -> 1080 -> 1080 Ti -> 2080 Ti for most people, with a 20-30% improvement a year. Pretty reliable until 2019, which offered no upgrade option (the price hike of the 2080 Ti could possibly be justified retroactively for buyers, since they have had the top card for 2 whole years).

This makes it more likely that two models can arrive at once, since a lineup with 2-year-old chips is already confirmed to be discontinued. I'm pretty sure NV also knows that the gamers who spent money on that 20-30% upgrade every 12 months for so many years expect more than that 20-30% number when 2 years have gone by. They will probably still hike the price even more, but I think they will deliver on performance.
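Rough compounding on that expectation: two yearly steps of 20-30% each stack to about 1.2^2 ≈ 1.44 up to 1.3^2 ≈ 1.69, so gamers two years removed from their last upgrade will be looking for something in the region of 45-70% over a 2080 Ti, not another 20-30%.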
 

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
However, it's not like Nvidia haven't done this before.
That they've done it before isn't in and of itself evidence they will do it again. Also, it's worth mentioning again that Turing is old now, and it wasn't exactly a huge performance increase even back when it first launched. People are ready to upgrade now. Forcing them to wait even longer would be pointless when the previous gen is already 2+ years long in the tooth.

Delaying might in fact end up hurting NV financially in this situation... Someone who is itching to upgrade and buys a 3080 for 1X money if it solo-launches might have bought a 3090 for 1.5X money if it'd been available, and then they probably won't bother spending that 1.5X money when the 3090 does launch later on.

That's .5X money NV left on the table - how will JHH afford his next mansion if they make such basic blunders?!
 

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
I'd say no. That's a youtuber showing how poor a grasp of technical matters he has despite supposedly being a computer hardware enthusiast.

The back of a major ASIC is typically packed with passive components, capacitors for decoupling and power delivery buffering and the like. You couldn't fit an entire additional ASIC there in that space with them in place, and doing without them is probably inadvisable. They're there for a reason after all. Besides, the PCB is riddled with vias there as well for the front side GPU, how would you fit hundreds at least (more likely 1000+) of additional solder pads for another major flip-chip substrate on the reverse side? Seems to require some magic tricks IMO or hereto undiscovered technical innovations never demonstrated in consumer computer hardware before.

Plus, cooling. There's not much space on the reverse side of the PCB as it is, if you stick another ASIC onto there the vertical height of the chip package will chew up much of what little is available. You might then be able to squeeze a very thin vapor chamber on top to transport off the heat generated, maybe, but with no room for an actual heat sink that doesn't help you very much. Add to the fact two ASICs both doing heavy computing tasks sitting back to back would be grilling each other with their respective heat output. It would significantly increase heat density in one small spot on the PCB and thus cooling difficulties, which will already be significant for a rumored 300+ watt part.

Also, inter-chip communication suffers from high latency and high power usage compared to intra-chip communication. It would complicate the 3D rendering pipeline a lot to send some of the work off-chip and then hold partially completed pixels in flight in buffer memory on the main GPU until results come back from the reverse-side "accelerator".

All this seems very unlikely honestly, from an engineering and logic standpoint.


If 3080 and 3090 both use the same ASIC there doesn't seem much point in holding off the release of the latter for months. The chip is ready, might as well sell it to people who want to buy it. What on earth would waiting really gain them when the bleeding edge gamers already know there's a better offering waiting around the corner and being artificially held back by NV for reasons of nothing except pure greed? Not very many are going to be insane enough to buy a really expensive 3080 just to tide them over for a few months.
Having done board layouts for 2000 ball 600W BGA ASICs I agree with your assessment on the challenges they'd face putting another chip on the other side. You can help mitigate routing and component issues using blind and buried vias, and moving to even more layers could help some but it would still be a big challenge to have two large chips on opposite sides of the board. It's not impossible of course, but I'm not sure why you would do it. If they have a coprocessor why not just stick it top side where power delivery, cooling and z-height issues are so much easier to deal with?
 
Reactions: Stuka87 and FaaR

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
You can help mitigate routing and component issues using blind and buried vias, and moving to even more layers could help some
How many more layers could you even fit in a PCB which has to be insertable into a PCIe slot? Of course, you could just frankenstein a separate edge connector onto a thicker PCB, I suppose, but considering motherboard makers are loath to go above even 4 layers, I'm sure they won't be happy to pile on the layers for graphics cards, even though they're considerably smaller in size...
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
How many more layers could you even fit in a PCB which has to be insertable into a PCIe slot? Of course... You could just frankenstein on a separate edge connector onto a thicker PCB I suppose, but considering motherboard makers are loath to go above even 4 layers I'm sure they won't be happy to pile on the layers for graphics cards even though they're considerably smaller in size...

You could easily have extra layers that end before the edge connector. But it would add to cost, for no real gain.
 
Reactions: FaaR

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
How many more layers could you even fit in a PCB which has to be insertable into a PCIe slot? Of course... You could just frankenstein on a separate edge connector onto a thicker PCB I suppose, but considering motherboard makers are loath to go above even 4 layers I'm sure they won't be happy to pile on the layers for graphics cards even though they're considerably smaller in size...
LOL... It's way more than 4. Mine was 16, and that's not atypical even in a standard 1/16" thick PCB you'd put in a PCIe slot. Edit: Just double-checked, that one was 10.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,765
4,668
136
Yes, because there can only be one leaker.
Let me remind you what you posted a few days ago:

So, this guy is tweeting fakes just to correct himself a few days prior to the launch. Great source, btw.

Compare this to what Kopite nailed, which turned out to be true:

I think between you and Kopite, it's Kopite who has far higher credibility.

You embarrass yourself with every word when you doubt what he is writing.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
However, I heard Nvidia is very iffy about making another Titan, as the sales on them are very dismal. People would rather go for the full Quadro line than a Titan, as Quadros have better workstation performance, and the marginally better Titan gaming experience is not worth it.

The Titan V was simply too expensive for anything gaming; its only use case was workstations for people who needed the compute/tensor cores for cheaper than Quadro/Tesla pricing and didn't need more memory. Of course that market is tiny. Researchers in corporations usually don't have a workstation next to their desk; they connect to a compute server and work that way. And due to NV licensing you can only put Quadros or Teslas in servers.

On top of that, I expect big OEMs not to offer Titans in workstations at all, only Quadros, because of the higher margins for them, and because customers don't know any better that a 50% cheaper GPU would work just as well.
 