Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
292
136
How much is the Samsung 7nm EUV process expected to provide in terms of gains?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
That's pretty close to what they are doing. The shader count is only 20% more because they also wanted to get the die size down (TU102 is 754 mm2). They would have hit their perf and perf/w target just fine had it been on SS7.

Can I assume that the "only 20% shader increase" also just comes from rumors, or is there anything published by NVidia? But sure, if you are willing to clock something like 30% higher, your efficiency gains will go down the drain.

In any case, I am certain that your prediction of a flat perf/W number compared to Turing is far off.
 
Last edited:

uzzi38

Platinum Member
Oct 16, 2019
2,702
6,405
146
No one has considered the other alternative? NVidia is out for blood and looking to annihilate AMD, so they are pushing wattage higher?

"Poor Volta"

AMD's hype is saying 50% better performance per watt than RDNA. That would put a 300-watt card roughly on par with those 3DMark scores you are talking about, which most people I saw thought were the 3080, not the Ti/3090.

The problem is that AMD has been hysterically dishonest with their hype in the GPU segment in the past; they threw down "poor Volta" years ago and have since introduced three new tiers, none of which has matched the hype they claimed.

This time they claimed "nVidia killer"; perhaps they just ticked JHH off enough that he's out to humiliate them this generation and reinforce the widely accepted view that AMD just can't compete.

Do I think the above is going to happen? No, but it's far more likely than what you people are setting yourselves up for.

Go back and check the Twitter claims again: the same guy saying it's 100% SS8 was previously saying no SS8, all SS7. Then he swapped to SS8, and now he comes out with this 600-watt power plug? Sounds like someone is mole hunting.

Even if we assume this is Samsung 8nm: Samsung 10nm was comparable to TSMC's 10nm, and supposedly this is on 8nm EUV, which is a very large step up over "12nm". That would still be a full node drop, and simply looking at history, a full node drop for nVidia has resulted in *massive* gains every time. Meanwhile AMD is claiming a 50% performance-per-watt gain on a slightly refined node, something they have never come close to demonstrating they could do.

Despite all of this, you guys are truly trying to convince yourselves that your scenario has a serious chance of ending up right?

Upside to all of this: everyone has pages' worth of posts, so when these products hit we can all see exactly how good everyone is at speculating.

I'm on mobile and quoting sections is a bit of a pain, so I'm gonna be a bit lazy. I'll edit this into proper forum formatting in the morning; it's late and I'm half asleep.

> No one has considered the alternative...

Yeah, of course we have. Actually, Kopite isn't the source of the retaping rumours; those existed before he said anything, and the alternative where Nvidia actively tries to prevent AMD from taking any crowns is what I thought was happening. I thought NV were switching from 7LPP to N7.

> AMD is saying 50% power efficiency gen on gen... on par with what most people thought is the 3080

What most people think is irrelevant, as it's spawned from baseless expectations of next gen.

It's also wrong.

> Prior bad marketing from RTG

You know the whole 50% perf/W statement? Let me run through what it's based on:

One RDNA chip and one RDNA2 chip. Both at the same clocks they will release at. Both using the memory configs they will release with. Both compared at the same CU count; the larger die will have CUs disabled for the test if needed.

In terms of power pulled by the GPU core alone, the 5700 and Series X should both be roughly on par with each other, within the 120-130W area. The 5700 has a max boost that's 100MHz lower than the Series X and a "game clock" that is 200MHz lower than the Series X... and 16 fewer CUs on top.

That clock difference doesn't seem like much until you realise it's still 70MHz above the much more power-hungry 5700 XT.

Why am I bringing this up? Because at the 1825MHz of the Series X, it's clear that there's actually a greater than 50% perf/W efficiency gain. I'm too lazy to do the actual maths properly, but rough guesstimation says it's more like a 60-65% perf/W gain.
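For what it's worth, that guesstimate is easy to reproduce as a back-of-envelope calculation. A minimal sketch, assuming performance scales roughly with CU count × clock (optimistic, so treat the result as a ceiling) and using the ~125W core-only midpoint from above for both chips:

```python
# Back-of-envelope perf/W estimate from the figures above. Assumption (mine, not
# measured): performance scales ~linearly with CU count * clock, which is
# optimistic, so this is a rough upper bound rather than a measurement.
navi10_cus, navi10_ghz, navi10_w = 36, 1.625, 125.0  # RX 5700 game clock (GHz), ~core-only watts
xsx_cus, xsx_ghz, xsx_w = 52, 1.825, 125.0           # Series X GPU, assumed similar core power

gain = ((xsx_cus * xsx_ghz) / xsx_w) / ((navi10_cus * navi10_ghz) / navi10_w) - 1
print(f"estimated perf/W gain: {gain:.0%}")          # ~62% with these inputs
```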

You can bring up the past, but it's generally not advisable to live in it. I'd instead look at the evidence we have today.

> Claimed "Nvidia Killer"

Man, if Leather Jacket Man got seriously annoyed at what might potentially be an internal codename at AMD, and not one they ever actually made public, then maybe he should stop slinging out complete rubbish like claiming next-generation consoles are weaker than a 2080MQ.

> Twitter guy said no 8SS only 7SS but now is saying only 8SS.

Firstly, the names are 8LPP and 7LPP, ffs.

Secondly, we've discussed what happened already. Something something 7LPP is a flop, something something retape. You get the idea.

> 8EUV

Doesn't exist.

> Full node shrink yadda yadda

Yeah, and you'd have seen major perf/W improvements if clocks weren't ramped up to extreme degrees, but alas.

Also, 8LPP isn't a full node shrink either way. It's extremely close, but not quite a full node shrink.
 

MrTeal

Diamond Member
Dec 7, 2003
3,584
1,743
136
4A per pin doesn't make much sense. That's less power than two standard 8-pins deliver today. It just creates confusion, and is inferior to boot. So why would they go with a connector like that?

Btw, totally unrelated... Gauge must be the dumbest measuring unit ever. It makes even less sense than the Fahrenheit temperature scale. How do you get a thicker wire than 1 gauge; go negative...?
Aught. It starts going the other way then.

4/0 (4 aught) > 2/0 > 1 AWG > 4 AWG > 16 AWG, etc.

Does that make it less dumb?
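To put numbers on the 4A-per-pin point above, here's a rough sanity check. The pin split is my assumption, not from the rumor: 6 of the 12 pins carry +12V and the rest are grounds, matching the usual PCIe aux connector layout.

```python
# 12-pin connector at the rumored 4 A/pin vs two standard 8-pin PCIe connectors.
# Assumption: 6 of the 12 pins carry +12 V, the other 6 are grounds.
twelve_pin_w = 6 * 4.0 * 12.0      # pins * amps * volts = 288 W
two_8pin_w = 2 * 150.0             # official 150 W rating per 8-pin
print(twelve_pin_w, two_8pin_w)    # 288.0 vs 300.0 -- indeed slightly less
```

And for the curious, the aught sizes stop looking arbitrary once you know AWG is just a geometric formula in which k/0 means gauge 1-k:

```python
# AWG diameter: d(mm) = 0.127 * 92 ** ((36 - n) / 39)
# Aught sizes are negative gauge numbers: 1/0 -> 0, 2/0 -> -1, 4/0 -> -3.
def awg_diameter_mm(n: int) -> float:
    return 0.127 * 92 ** ((36 - n) / 39)

for label, n in [("4/0", -3), ("2/0", -1), ("1 AWG", 1), ("4 AWG", 4), ("16 AWG", 16)]:
    print(f"{label:>6}: {awg_diameter_mm(n):5.2f} mm")
# 4/0 is ~11.7 mm; 16 AWG is ~1.3 mm -- bigger wire, smaller (or negative) number.
```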
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Nobody's going to be annihilating anyone in the GPU space. That's not how it works.

Legit question: are you a teenager? I'm asking because that statement is comical if you are old enough to remember S3, Matrox, Intel (which used to make discrete cards), Rendition, SGI and many others, including 3Dfx. If there were a graveyard for dead graphics companies, they'd have a lot of "nVidia was here" on the tombstones.

Who cares about whether AMD or Nvidia wins?

Anyone who partakes in a speculation thread with any honesty does, if for no other reason than to see how close their predictions were.

the alternative where Nvidia actively tries to prevent AMD from taking any crowns is what I thought was happening

No, think more like: AMD can't compete with the third-tier offering from nVidia using big Navi. AMD caught Intel because Intel was asleep at the wheel, and that's the only reason; they don't have that luxury on the GPU side.

It's also wrong

That's AMD's claim, and it's disturbing for two reasons. One, AMD lies through their teeth, overpromising and laughably underdelivering; two, even if it were true, they'd have almost no chance of being competitive. Combine the two... Hopefully Intel steps up soon.

Yeah, and you'd have seen major perf/W improvements if clocks weren't ramped up to extreme degrees, but alas.

This is based on the same guy that is now saying every single claim *HE MADE* was wrong, but this time you can trust him, and you do?

But aside from that: if they have, say, a 50% performance-per-watt improvement and they then clock it too high, absolute performance doesn't go down; that's not how it works. In order to erase the performance-per-watt gain they'd need to go to liquid-nitrogen levels of cooling; efficiency may be dropping, but performance is still going up.
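A toy model of that dynamic, with made-up numbers (the V-f curve and the baseline figures are hypothetical, purely to illustrate the direction each quantity moves):

```python
# Toy model: pushing clocks past the sweet spot erodes perf/W while absolute
# performance keeps rising. Assumes power ~ f * V^2 with voltage climbing
# linearly with clock -- hypothetical numbers, not leaked specs.
base_f, base_v, base_w = 1.8, 0.90, 220.0   # GHz, volts, watts (made up)

for f in (1.8, 2.0, 2.2):
    v = base_v * (1 + 0.5 * (f / base_f - 1))          # assumed V-f curve
    watts = base_w * (f / base_f) * (v / base_v) ** 2  # P ~ f * V^2
    perf = f / base_f                                  # perf ~ clock on the same chip
    print(f"{f:.1f} GHz: perf x{perf:.2f}, {watts:.0f} W, perf/W x{perf * base_w / watts:.2f}")
# perf rises monotonically; perf/W falls -- both can be true at once.
```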

I use 8SS to denote which company. It's just easier.
 

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
If people have to invent a logic that would explain away every single rumor because the news is bad, you can bet those rumors are actually correct.

Two things this industry has taught us recently:
1) Bad news spreads at the speed of light.
2) Good news is kept under wraps as tightly as possible.
 
Last edited:
Reactions: Meghan54

jpiniero

Lifer
Oct 1, 2010
14,833
5,445
136
Can I assume that the "only 20% shader increase" also just comes from rumors, or is there anything published by NVidia?

Rumors, yes, but as mentioned it makes sense if you figure they wanted to get the *102 die size back down to the 470 mm2 range, which they would have had it been on SS7.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If people have to invent a logic that would explain away every single rumor because the news is bad, you can bet those rumors are actually correct.

So which set of rumors are you talking about?

Have you actually considered that your source said there would be no consumer Ampere this year and that it was going to be on SS7nm, then it had to be completely redone and swapped to SS8nm, and all of a sudden it's coming two quarters sooner...? Really?

Now we have people claiming performance per watt is going to be static on a full node drop...
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
I wonder if we should consider that there might be a need to accommodate (or accurately represent) greater simultaneous utilization of all functional units.

The current 250W TDP rating for the 2080 Ti might only be for a pure raster workload, and even then it's understated. Third-party testing of power consumption is also mostly limited to raster-only workloads; with a brief search I can only find one tech news outlet, Babeltech, which briefly examined power usage under ray tracing and DLSS versus raster, and that was a very limited examination.

If Ampere has greater simultaneous usage of raster, RT and tensor units in more scenarios, due to a greater prevalence of software support, it might mean a more pressing need to spec power consumption (power governors, gating, "optimal" clockspeeds, delivery, heatsink) higher to account for that.

For instance, it could be specced optimally assuming all units are in use while leaving the same headroom in for raster-only scenarios, which would mean the latter runs outside the ideal efficiency point. Or it could be a scenario in which raster-only consumption is lower, but you'd be leaving too much ray-trace/tensor performance on the table unless you specced overall power limits higher to account for them.

It'll be interesting in that we might need to look at power consumption and perf/watt gains for raster only, raster+RT, raster+tensor, and raster+tensor+RT, as opposed to the traditional raster-only scenario.

Not to mention even more complicated scenarios, such as with the separated INT and FP units.
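Something like the sketch below is the shape of test matrix that would call for. Every number in it is a hypothetical placeholder; the point is how the per-workload perf/W comparison would be tabulated, not the values.

```python
# Hypothetical test matrix for per-workload perf/W. All figures are placeholders,
# not measurements -- they only illustrate the shape of the comparison.
runs = {
    "raster only":      {"fps": 100.0, "watts": 250.0},
    "raster+RT":        {"fps":  60.0, "watts": 280.0},
    "raster+tensor":    {"fps":  95.0, "watts": 270.0},
    "raster+RT+tensor": {"fps":  75.0, "watts": 300.0},
}

for workload, r in runs.items():
    print(f"{workload:>16}: {r['fps'] / r['watts']:.3f} fps/W")
```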
 
Reactions: nnunn

FaaR

Golden Member
Dec 28, 2007
1,056
412
136
Legit question: are you a teenager? I'm asking because that statement is comical if you are old enough to remember S3, Matrox, Intel (which used to make discrete cards), Rendition, SGI and many others, including 3Dfx. If there were a graveyard for dead graphics companies, they'd have a lot of "nVidia was here" on the tombstones.
Legit question right back: why do you insist on removing the quote header so people can't tell who you're quoting? Stop that! It's really not a good idea, especially in a case like this where you're quoting two entirely different people's posts!

And no, I'm not a teenager. Sure as hell wish I was, though... You fall into the fallacy hole that goes: just because something happened in the past means it can/will again. No, that is not really the case. Rendition and Matrox (who are actually still around, btw; they exclusively make commercial gear these days) and so on were the chaff that was separated out long, looong ago.

Now there's a stable, mature market with a few big players in it, like typically happens in late-stage capitalism. U.S. telecom has a few big players, and it's the same in my country too; smart-device OSes have two big players, iOS and Android; desktop OSes have three, Windows, Linux and macOS. And so on. Computer graphics has long been a triad, with Intel actually leading by sales numbers - by far. Yeah, integrated, so what? Merely a technicality.

AMD may be doing better than ever now, actually; I haven't seen any historical comparisons versus their previous heyday, the span between them launching x64 CPUs and Intel getting back in gear with the Core 2 series. In the professional marketspace they've never been this technically dominant versus Intel, I believe. Big Blue still leads sales numbers by far, though, of course. And now they have new consoles about to launch in just a few months, plus their first new high-end consumer GPU in several years.

...And you think they're about to be annihilated? lol HOW exactly? lol Sorry, but you're not being realistic here. If JHH wanted AMD annihilated, he's over three years late. The company has long passed its weakest point, and literally all the curves are pointing upwards for them. Or well, they were before corona happened anyway; I haven't bothered looking for any current numbers. But that is the same for everyone, plus this will pass.
 
Reactions: ozzy702

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
I don't think Nvidia really worry that much about AMD, tbh; they compete mostly with themselves. All they think about is how much better new card Y needs to be to get people to spend on an upgrade from old card X. Given that they've got a process upgrade and decent architectural enhancements to ray tracing/AI, I can't see that being too much of a challenge.

As to why they don't care about AMD - well, AMD are behind architecturally by quite a bit (Nvidia on 16nm is more efficient than AMD on 7nm), they don't have the features (no tensor cores, and their RT is gonna be 1st gen vs Nvidia's 2nd gen), they don't have the production capacity (AMD will use most of their wafer allocation for CPUs and consoles), they don't have the software (pretty much every proper new gen of card needs a year for the drivers to work properly - Navi, Vega, ...), and they don't have the mindshare (outside of forums like this, most people favour Nvidia).
 
Last edited:
Reactions: ozzy702 and FaaR

jpiniero

Lifer
Oct 1, 2010
14,833
5,445
136
I don't think Nvidia really worry that much about AMD, tbh; they compete mostly with themselves. All they think about is how much better new card Y needs to be to get people to spend on an upgrade from old card X. Given that they've got a process upgrade and decent architectural enhancements to ray tracing/AI, I can't see that being too much of a challenge.

nVidia is in a little bit of a tight spot with SS7 blowing up. I think the AMD optimism has gotten ahead of itself (this tends to happen with AMD...), but it does seem like AMD has fixed the design.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Rumors, yes, but as mentioned it makes sense if you figure they wanted to get the *102 die size back down to the 470 mm2 range, which they would have had it been on SS7.

It only makes sense if at least one of the assumptions had been confirmed. So again we have two more assumptions: 1) what die size targets they have, and 2) that the *102 is not going to SS7 or N7. In the past, NVidia has not shown any tendency to sacrifice power for die size (at least not to the point where efficiency stays flat when moving to a smaller node).
This holds in particular since the rumors have been so unreliable so far.
 

Krteq

Senior member
May 22, 2015
993
672
136
In the past, NVidia has not shown any tendency to sacrifice power for die size (at least not to the point where efficiency stays flat when moving to a smaller node).
This holds in particular since the rumors have been so unreliable so far.

Hmm, short memory? Do you remember "Thermi" (moving from Tesla 65/55nm to Fermi 40nm)?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I don't remove the quote header; I manually type the quotes out. I do things that way because I don't care who posts, I only care about the message.

You fall into the fallacy hole that goes: just because something happened in the past means it can/will again.

They have a term for people who think that because something happened, it can never happen again: they're called morons. Did you mean to type something wildly different?

Your follow-up comments on late-stage capitalism... yikes. There are not three big players in the OS market; there is one, legally classified as a monopoly on at least two different continents. There are a few telecoms in the US because the government broke up the monopoly that existed; it used to just be Ma Bell.

...And you think they're about to be annihilated?

I was offering a counterpoint to the insanity in this thread. NVidia is going to drop a full node and their performance is barely going to budge, along with their performance per watt? I'd say my counter scenario is far more likely. Also, see Pontiac: a segment of a large corporation can be destroyed without the company vanishing (or you could use Sega if you dislike the bailout component of the Pontiac example).

Globally there are two mobile OSes for the time being; that's the only market you brought up where there is at least some open-market competition (telco competition was manufactured by the government), and that market had one of the players go from 0% to 74% (first numbers from a search) in less than fifteen years. There are a lot of market segments you could have used; the ones you picked range from poor to laughable, though.

Also, the last time AMD was strong on the CPU side started a while prior to x64; the Athlon was smoking the P3/P4 parts, but they never gained appreciable market share because of production limitations, and their platform had too many issues to be a safe bet for professional deployment. I was a big advocate for the Athlon back in the day; those posts are archived.

During that time AMD made the biggest blunder I can think of off the top of my head: they were going to purchase either ATi or nVidia, and they went with ATi because the nVidia deal would've mandated that JHH become CEO of the new company.

Now AMD has a market cap of $65B and nVidia is at $250B. AMD was big enough to buy them outright, but they wanted idiots in charge, who drove the company into the pits of the tech industry for the better part of two decades. Now their CPU division is doing well, at least until Intel gets their fabs going.

BTW, mentioning Intel has the highest sales in graphics: if we are going to be silly about it, no, Qualcomm utterly crushes them in unit volume with Adreno. In dollars of graphics sold, nVidia crushes everyone, by a huge margin.

As far as all curves looking up for AMD: their graphics division has been an embarrassment for years; only crazy rumors make it look any better, and those have been absurdly wrong for so long now that I question how a rational person takes them seriously.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
nVidia is in a little bit of a tight spot with SS7 blowing up. I think the AMD optimism has gotten ahead of itself (this tends to happen with AMD...), but it does seem like AMD has fixed the design.
Well, not as tight a spot as they should have been in with Turing, which was 12nm (really 16nm) vs 7nm.
 

exquisitechar

Senior member
Apr 18, 2017
666
904
136
Well, not as tight a spot as they should have been in with Turing, which was 12nm (really 16nm) vs 7nm.
It's much tighter, because RDNA2 is a huge improvement and will threaten them from top to bottom, instead of barely putting up a fight in the mid-range and low-end like RDNA did (and being worthless in laptops). NV chose the wrong time to stumble a bit.

Honestly, all these discussions seem pointless right now. There’s the camp that believes in the numerous leaks concerning Ampere RTX and the camp that doesn’t, and that’s all it boils down to. All will be elucidated at launch, which appears to be close.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
nVidia is in a little bit of a tight spot with SS7 blowing up. I think the AMD optimism has gotten ahead of itself (this tends to happen with AMD...), but it does seem like AMD has fixed the design.

Why? Pascal on 16nm is faster than Navi. nVidia is two node shrinks behind AMD and now nVidia is in a "tight spot"? Sure.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,541
2,541
146
I am excited to see how the new cards are. That said, I think going to a 12-pin would be silly if true, though I doubt it is. If it were, I would probably go with an AMD card, assuming they don't do the same thing.

Either way, we are expecting a huge uplift in performance from both manufacturers. And all this doom and gloom is silly; neither AMD nor Nvidia is vanishing soon. Remember, competition and choices are good.
 
Reactions: Elfear and Konan

Tup3x

Golden Member
Dec 31, 2016
1,009
997
136
It's much tighter, because RDNA2 is a huge improvement and will threaten them from top to bottom, instead of barely putting up a fight in the mid-range and low-end like RDNA did (and being worthless in laptops). NV chose the wrong time to stumble a bit.
We'll see about that when both are out. NVIDIA has definitely made improvements too. At best AMD gets closer to NVIDIA but at worst the gap stays similar. It's too early to tell.

That being said, I think even a direct Turing die shrink would be competitive. They've had so much time now that it's definitely going to be more than that, though.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
NV chose the wrong time to stumble a bit.

For me personally too, because I'm in the market for a new card, and given my job in the data science area an NV GPU would be preferable simply due to deep learning/CUDA, which simply doesn't work on AMD+Windows and is at best masochistic on Linux+AMD. Since I have a 290X now, I know the pain of high power use; my room heats up a lot when gaming.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
At best AMD gets closer to NVIDIA but at worst the gap stays similar. It's too early to tell.

I am sure Intel was thinking that exact same thing two years ago. Ryzen 1 was a big jump over the previous FX chips, but didn't beat Intel. Ryzen 2 came out, and well, we all know how that is going. Navi 1 was step one (akin to Ryzen 1). Navi 2 is the next big jump. Obviously it's quite possible that nVidia will also have a big jump and still come out ahead. But to say AMD will only get close in a best-case scenario is shortsighted.
 
Reactions: Glo.

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136

Summary of his info.

The full die has 5376 ALUs, but it's impossible to tell if this particular sample uses that ALU config. The sample uses the leaked heatsink, and draws slightly above 300W of power while boosting to around 2 GHz. Still 12 GB of GDDR6 memory, clocked at 18 Gbps. The leaked scores that put it below a heavily OCed RTX 2080 Ti might be for this particular sample, and put it 30-35% above the RTX 2080 Ti FE. A 12-pin power connector is very possible.

It can boost to 2.2 GHz without a power limit, but it uses 400W of power while delivering those clock speeds.
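Those two data points are at least internally consistent with the usual roughly cubic power-versus-clock behaviour once voltage scales with frequency. A quick check (my arithmetic, not the leaker's):

```python
# Leak consistency check: power scaling roughly with clock cubed (P ~ f * V^2, V ~ f).
p_stock, f_stock, f_oc = 300.0, 2.0, 2.2
print(f"{p_stock * (f_oc / f_stock) ** 3:.0f} W")  # ~399 W, close to the claimed 400 W
```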

40% faster rasterization performance in 4K.

September is still planned as a launch for gamers.
 

exquisitechar

Senior member
Apr 18, 2017
666
904
136

Summary of his info.

The full die has 5376 ALUs, but it's impossible to tell if this particular sample uses that ALU config. The sample uses the leaked heatsink, and draws slightly above 300W of power while boosting to around 2 GHz. Still 12 GB of GDDR6 memory, clocked at 18 Gbps. The leaked scores that put it below a heavily OCed RTX 2080 Ti might be for this particular sample, and put it 30-35% above the RTX 2080 Ti FE. A 12-pin power connector is very possible.

It can boost to 2.2 GHz without a power limit, but it uses 400W of power while delivering those clock speeds.

40% faster rasterization performance in 4K.

September is still planned as a launch for gamers.
Meh. Honestly, this sounds like MLID just put together leaks like kopite's and added a few things. I can't take him seriously given his record.
 

MrTeal

Diamond Member
Dec 7, 2003
3,584
1,743
136
Sure, 400W. So for 40% more performance over the 2080 Ti, Ampere needs 45% more power.

Yeah, right.
The video said 300W stock and 400W if overclocked, no?
Still, a 40% performance increase at basically the same power isn't too far out of line. That's about the PPW improvement of the Radeon VII vs Vega 64.
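Putting numbers on both readings of the leak (my arithmetic; the ~275W 2080 Ti baseline is what the "45% more power" framing implies):

```python
# Perf/W vs the 2080 Ti under both readings of the leak. The ~275 W baseline is
# what the "45% more power" claim implies (400 / 1.45 ~ 275).
ti_w, perf_gain = 275.0, 1.40

for label, watts in (("stock, 300 W", 300.0), ("OC, 400 W", 400.0)):
    ppw = perf_gain / (watts / ti_w) - 1
    print(f"{label}: perf/W {ppw:+.0%}")
# stock: +28%, OC: -4% -- the flat-perf/W reading only holds for the OC case.
```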
 