Question 'Ampere'/Next-gen gaming uarch speculation thread


Ottonomous

Senior member
May 15, 2014
559
292
136
What gains is the Samsung 7nm EUV process expected to provide?
How will the RTX components be scaled/developed?
Any major architectural enhancements expected?
Will VRAM be bumped to 16/12/12 for the top three?
Will there be further fragmentation in the lineup? (Keeping Turing at cheaper prices, while offering 'beefed up RTX' options at the top?)
Will the top card be capable of >4K60, at least 90?
Would Nvidia ever consider an HBM implementation in the gaming lineup?
Will Nvidia introduce new proprietary technologies again?

Sorry if imprudent/uncalled for, just interested in the forum members' thoughts.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,155
136
If they made a fully unlocked RTX 3070 Super with the full-fat GA104 die, complete with the 'S' design, and offered a $90 discount exclusively to 90s kids, e.g. me, then I would buy it on the spot, right then and there. It would make for some great marketing!
Would you like a back and foot massage from JHH while you're at it?
 

jpiniero

Lifer
Oct 1, 2010
14,838
5,456
136
Still 2:1 for Samsung atm. Doesn't make any sense to me though regarding GA102 and GA104, because the better the product, the better the process should be, IMO.

Or rumors about no (more) capacity for NV are true.

Time to market matters too. Has to be quicker to scale up the GA107/GA108 SS8 design than port the whole thing from scratch to TSMC.
 

MrTeal

Diamond Member
Dec 7, 2003
3,585
1,743
136
The 3070 is a really good deal. If the 3060 sells for around 275-350, then I think they can close the gap, especially if it performs midway between a 2070 and a 2070 Super. AMD will have to do some funky town magic to get sales, especially when it's likely their product won't be better than 80% as good.


Honest, hand on heart, I haven't been excited for a GPU launch in at least 8 years. This is amazing. If AMD releases anything as good and forces nv to change up their pricing, that sweet 3080 or a later rumored 20 GB model from AIBs may make its sweet tootsie roll way into my build. Where it'll be treated with love, humanity and respect. If they force nv's hand, it may see a nice price cut. I'll go with a conservative $80. $620 isn't all that bad!
How is the 3070 a really good deal? It has 67% of the shaders and memory bandwidth of the 3080 with 8GB instead of 10GB, and it's 71% of the price.
The 2070 had 78% of the shaders of the 2080 with the same amount of VRAM and bandwidth for 75% of the price FE to FE and even that was marginal.
The 1070 was 75% for shaders and 72% for bandwidth with the same amount of VRAM for only 64% of the price.
The 970 was 81% of the 980's shaders with roughly the same bandwidth and VRAM outside the 0.5GB issue. It was only 60% of the 980's price.

x70 value relative to the x80 has been going down for generations, and this is the worst one yet.
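If anyone wants to check that arithmetic, here's a quick back-of-the-envelope script; the shader counts and FE launch prices are the commonly quoted figures, so treat them as assumptions rather than gospel:

```python
# x70-vs-x80 value across generations: shaders-per-dollar of the x70
# relative to the x80, using commonly quoted specs and FE launch prices.
cards = {
    # generation: (x70 shaders, x80 shaders, x70 price, x80 price)
    "970/980":   (1664, 2048, 330, 550),
    "1070/1080": (1920, 2560, 449, 699),
    "2070/2080": (2304, 2944, 599, 799),
    "3070/3080": (5888, 8704, 499, 699),
}

for gen, (s70, s80, p70, p80) in cards.items():
    shader_ratio = s70 / s80   # fraction of the x80's shaders
    price_ratio = p70 / p80    # fraction of the x80's price
    value = shader_ratio / price_ratio
    print(f"{gen}: {shader_ratio:.0%} of the shaders at {price_ratio:.0%} "
          f"of the price -> {value:.2f}x the x80's shaders per dollar")
```

Under those numbers the relative value comes out to roughly 1.35x, 1.17x, 1.04x, 0.95x, which is the generational decline in one line of arithmetic.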
 
Reactions: psolord

CP5670

Diamond Member
Jun 24, 2004
5,527
604
126
Actual used VRAM or just "reserved" VRAM? You can't trust GPU-Z etc. to report the VRAM actually required; some games just request it all but only use some.

It's reported by EVGA Precision while the game is running. It appears to be the actual memory use, since it changes depending on the map you're in and how long the game has been running. It's possible that games are keeping unnecessary textures in memory that don't get flushed out until it actually fills up, but they do seem to be using the extra memory if it's available.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,155
136
How is the 3070 a really good deal? It has 67% of the shaders and memory bandwidth of the 3080 with 8GB instead of 10GB, and it's 71% of the price.
The 2070 had 78% of the shaders of the 2080 with the same amount of VRAM and bandwidth for 75% of the price FE to FE and even that was marginal.
The 1070 was 75% for shaders and 72% for bandwidth with the same amount of VRAM for only 64% of the price.
The 970 was 81% of the 980's shaders with roughly the same bandwidth and VRAM outside the 0.5GB issue. It was only 60% of the 980's price.

x70 value relative to the x80 has been going down for generations, and this is the worst one yet.
At $500, which will likely go down due to an RDNA2 launch, and coupled with the competition having abysmal driver support both historically and even now, it's a win. If that 3070 drops down to $450, which it likely will, I can't see it being a bad deal. I'm not going to spend $450 on an RDNA2 card only to deal with problems. RTG is a lost cause and has been for years when it comes to being on par with or ahead of the competition, and their software team is terrible. I love what AMD is doing with their processors, but their GPUs are a joke, and have been for many years.

AMD has up to two weeks to announce their product stack. If they don't announce in this time period, they're signalling their product sucks compared to Ampere, even if Ampere is on a power-hungry node that isn't as good as TSMC's. Once Ampere sales begin, which will be prefaced by reviews, people aren't going to sit around waiting to see when AMD pulls their fingers out of their ears. AMD claims that desktop will launch before consoles, but that's anywhere from now until possibly mid-November. They still need to get the word out in the next 2 weeks. If they announce and launch next month or even early November, it's a lost cause. Like, what was the point of waiting so long for a POS product stack? The only saving grace for AMD will be Zen 3 this year.
 

MrTeal

Diamond Member
Dec 7, 2003
3,585
1,743
136
At $500, which will likely go down due to an RDNA2 launch, and coupled with the competition having abysmal driver support both historically and even now, it's a win. If that 3070 drops down to $450, which it likely will, I can't see it being a bad deal. I'm not going to spend $450 on an RDNA2 card only to deal with problems. RTG is a lost cause and has been for years when it comes to being on par with or ahead of the competition, and their software team is terrible. I love what AMD is doing with their processors, but their GPUs are a joke, and have been for many years.

AMD has up to two weeks to announce their product stack. If they don't announce in this time period, they're signalling their product sucks compared to Ampere, even if Ampere is on a power-hungry node that isn't as good as TSMC's. Once Ampere sales begin, which will be prefaced by reviews, people aren't going to sit around waiting to see when AMD pulls their fingers out of their ears. AMD claims that desktop will launch before consoles, but that's anywhere from now until possibly mid-November. They still need to get the word out in the next 2 weeks. If they announce and launch next month or even early November, it's a lost cause. Like, what was the point of waiting so long for a POS product stack? The only saving grace for AMD will be Zen 3 this year.
I... am not sure how any of that explains the 3070 being a good value. It's just two paragraphs of AMD sucks and their drivers suck and RDNA2 will suck. Regardless of where AMD lands with their stack, the 3070 is likely going to be the first x70 card since they moved to this nomenclature from the old GTX 200-series system that actually has worse performance per dollar than the x80. Even as far back as Maxwell, the 970 had 50% better PP$ in games than the 980 and actually was a great value. The 3090 is a beastly halo that doesn't figure into most purchasing decisions and the 3080 looks like it will bring a lot of performance to the table for its price, but the 3070 is just meh. With how cut down it is, it should be a $400 card.
 

JasonLD

Senior member
Aug 22, 2017
486
447
136
The 3090 is a beastly halo that doesn't figure into most purchasing decisions and the 3080 looks like it will bring a lot of performance to the table for its price, but the 3070 is just meh. With how cut down it is, it should be a $400 card.

I think actual retail prices will adjust accordingly. Given the expected 3080 demand, its price is going to be a lot higher.
 

blckgrffn

Diamond Member
May 1, 2003
9,198
3,185
136
www.teamjuchems.com
At $500, which will likely go down due to an RDNA2 launch, and coupled with the competition having abysmal driver support both historically and even now, it's a win. If that 3070 drops down to $450, which it likely will, I can't see it being a bad deal. I'm not going to spend $450 on an RDNA2 card only to deal with problems. RTG is a lost cause and has been for years when it comes to being on par with or ahead of the competition, and their software team is terrible. I love what AMD is doing with their processors, but their GPUs are a joke, and have been for many years.

AMD has up to two weeks to announce their product stack. If they don't announce in this time period, they're signalling their product sucks compared to Ampere, even if Ampere is on a power-hungry node that isn't as good as TSMC's. Once Ampere sales begin, which will be prefaced by reviews, people aren't going to sit around waiting to see when AMD pulls their fingers out of their ears. AMD claims that desktop will launch before consoles, but that's anywhere from now until possibly mid-November. They still need to get the word out in the next 2 weeks. If they announce and launch next month or even early November, it's a lost cause. Like, what was the point of waiting so long for a POS product stack? The only saving grace for AMD will be Zen 3 this year.

I hear what you are saying... but... ¯\_(ツ)_/¯ I disagree a lot.

I've got an amazing (for gaming) Freesync 2 monitor and I really like the current AMD driver package: the built-in overlays, the per-game tuning that I can do (with Freesync 2 Premium controlling every aspect of the monitor, including color and brightness levels, I can really dial in different games to match their aesthetics), and there seems to be a ton of functionality (like power tuning) that on my nvidia cards I have to install 3rd-party software to control. I'd really rather not; I install as little software as possible, so that is just a personal preference. I am staying in the AMD ecosystem for the monitor integration - I assume anyone with a sweet g-sync monitor feels similar when contemplating changing over, it just can't be worth it.

I plan on buying an RDNA 2 card at launch for $500-$600, and it will come with a whole exclusive software ecosystem and a UHD drive attached: I am choosing the PS5 SKU.

I'll wait for the first round of price cuts and maybe even midcycle refreshes before bothering with my PC. HDR is my killer graphics feature atm and I already have that.

Comparing any of these cards to the PCs that will be the next consoles makes them look like terrible values, IMO, in a way that I don't really think has been done before - the Jaguar was way too gross in the last gen, and before that the consoles were way too weird to be even closely compared.
 
Reactions: psolord and A///

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
^^ That's pretty brutal, but to be honest I am inclined to agree more or less.

If Navi2 doesn't have an equal to both DLSS 2.0 and Hardware Raytracing of at least the same performance per tier, they're almost beyond redemption. Early Raytracing was more of a fun but not very practical gimmick, especially on the 2070 and below, and early DLSS was a blurry mess that invited well-deserved mocking and memes.

Now? Cyberpunk, probably the biggest PC release since Witcher 3 or Skyrim, is completely optimized for tons of sophisticated Raytracing effects, and has DLSS 2.0, which is almost unthinkably effective in practice, giving well more than double the effective performance per resolution level (e.g. 1080p output at 720p levels of framerate, 4K at 1080p-1440p performance numbers, etc.). If that becomes widely adopted, it could mean a $179 3050ti equalling or even beating a 300+W $700+ full Navi20, and annihilating the 5700XT, 1080ti, etc.
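The "more than double" part is mostly pixel counting: shading cost scales roughly with pixels rendered, and DLSS renders internally at a lower resolution before upscaling. A rough sketch (the internal resolutions are assumed, quality/performance-mode style scale factors):

```python
# Rough pixel-count intuition for DLSS upscaling: shading cost scales
# roughly with pixels rendered, so the speedup is ~ the pixel ratio,
# minus the (ignored here) fixed cost of the upscale pass itself.
modes = {
    "4K out / 1440p internal":   ((3840, 2160), (2560, 1440)),
    "4K out / 1080p internal":   ((3840, 2160), (1920, 1080)),
    "1080p out / 720p internal": ((1920, 1080), (1280, 720)),
}
for name, (out_res, int_res) in modes.items():
    ratio = (out_res[0] * out_res[1]) / (int_res[0] * int_res[1])
    print(f"{name}: {ratio:.2f}x fewer pixels shaded")
```

2.25x to 4x fewer pixels shaded is where the "well more than double" framerate claims come from, before the upscale overhead takes its cut.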

If Navi was moving from, say, 16nm down to 7nm, I might have some hope they could pull some real magic and get the necessary leap. However, the 5700XT was already 7nm, and if the rumored full Navi 20 is basically just a doubled 5700XT, that won't be nearly enough to compete. It would be missing features, and there's simply no way they could run a doubled 5700XT die on 7nm at the same clocks, meaning the actual performance leap would be more like 150% rather than 200%. Not to mention the potential increase in memory bandwidth wouldn't be enough unless they returned to a 512-bit bus, which would make things even more expensive.
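On the bandwidth point, the arithmetic is straightforward; the configurations below are illustrative assumptions, not leaks:

```python
# Memory bandwidth = per-pin rate (Gbps) x bus width (bits) / 8.
def bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gbs(14, 256))  # 448.0 GB/s -> 5700XT as shipped
print(bandwidth_gbs(14, 512))  # 896.0 GB/s -> doubling via a 512-bit bus
print(bandwidth_gbs(16, 384))  # 768.0 GB/s -> a faster 384-bit alternative
```

A doubled 5700XT wanting roughly 2x its 448 GB/s lands right at that 512-bit (or faster-memory) problem.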

So the window is tight before the 3060 and 3050/3050ti start really bringing the pain, unless AMD really shocks us and Navi is more than a typical more-of-the-same kind of release.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
It's reported by EVGA Precision while the game is running. It appears to be the actual memory use, since it changes depending on the map you're in and how long the game has been running. It's possible that games are keeping unnecessary textures in memory that don't get flushed out until it actually fills up, but they do seem to be using the extra memory if it's available.

That's not true memory usage. For many games, the card will buffer up memory until it's full. The way to test the memory actually required is to limit how much memory can be used: for instance, benchmarks that compared 4GB and 8GB cards that are otherwise the same. Many games ran identically on both, even if memory consumption was higher on the 8GB card.
 
Reactions: amenx

itsmydamnation

Platinum Member
Feb 6, 2011
2,864
3,418
136
^^ That's pretty brutal, but to be honest I am inclined to agree more or less.

If Navi2 doesn't have an equal to both DLSS 2.0 and Hardware Raytracing of at least the same performance per tier, they're almost beyond redemption. Early Raytracing was more of a fun but not very practical gimmick, especially on the 2070 and below, and early DLSS was a blurry mess that invited well-deserved mocking and memes.

Now? Cyberpunk, probably the biggest PC release since Witcher 3 or Skyrim, is completely optimized for tons of sophisticated Raytracing effects, and has DLSS 2.0, which is almost unthinkably effective in practice, giving well more than double the effective performance per resolution level (e.g. 1080p output at 720p levels of framerate, 4K at 1080p-1440p performance numbers, etc.). If that becomes widely adopted, it could mean a $179 3050ti equalling or even beating a 300+W $700+ full Navi20, and annihilating the 5700XT, 1080ti, etc.

If Navi was moving from, say, 16nm down to 7nm, I might have some hope they could pull some real magic and get the necessary leap. However, the 5700XT was already 7nm, and if the rumored full Navi 20 is basically just a doubled 5700XT, that won't be nearly enough to compete. It would be missing features, and there's simply no way they could run a doubled 5700XT die on 7nm at the same clocks, meaning the actual performance leap would be more like 150% rather than 200%. Not to mention the potential increase in memory bandwidth wouldn't be enough unless they returned to a 512-bit bus, which would make things even more expensive.

So the window is tight before the 3060 and 3050/3050ti start really bringing the pain, unless AMD really shocks us and Navi is more than a typical more-of-the-same kind of release.
I really don't get logic like this, it's really not hard.

Look at the R&D levels for the 3 years before a product comes to market; that is what you're ignoring.
When AMD started designing Navi1x, AMD had its lowest R&D levels in like 15 years: $220M a quarter, up to $370M a quarter by release.
Now Navi2x will be $320M a quarter at the start, up to $460M a quarter by release,
and Navi3x will be higher than that.

You need lots of money to bring features and a product stack to market, and AMD is spending significantly more to do that.
So to base something off short-term history while ignoring the inputs that determine that history is just asking to be wrong... isn't it?
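Running the quoted figures through the arithmetic (these are the $/quarter numbers from the post above, not audited financials):

```python
# AMD quarterly R&D spend over each design cycle, $M, as quoted above.
cycles = {
    "Navi1x": (220, 370),  # start of design -> release
    "Navi2x": (320, 460),
}
for name, (start, end) in cycles.items():
    print(f"{name}: ${start}M -> ${end}M per quarter "
          f"(+{end / start - 1:.0%} over the cycle)")

# Navi2x started its design with ~45% more quarterly R&D than Navi1x:
print(f"Start-of-design delta: +{320 / 220 - 1:.0%}")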
 

A///

Diamond Member
Feb 24, 2017
4,352
3,155
136
I... am not sure how any of that explains the 3070 being a good value. It's just two paragraphs of AMD sucks and their drivers suck and RDNA2 will suck. Regardless of where AMD lands with their stack, the 3070 is likely going to be the first x70 card since they moved to this nomenclature from the old GTX 200-series system that actually has worse performance per dollar than the x80. Even as far back as Maxwell, the 970 had 50% better PP$ in games than the 980 and actually was a great value. The 3090 is a beastly halo that doesn't figure into most purchasing decisions and the 3080 looks like it will bring a lot of performance to the table for its price, but the 3070 is just meh. With how cut down it is, it should be a $400 card.
I'll say it in plain English. The days of really cheap, high YoY performance gains due to uarch maturity are long gone on GPUs. Even NVidia with their vast resources used clever marketing with their 1.9x claim: up to 1.9x more efficient, and up to 2x performance. Even if real-world performance is subpar, they got their money and are OK because they said "up to". Every time a fab drops a node size, the costs per wafer go up. NVidia can cover if not subsidize their FE cards because their recent acquisitions make them a lot of money and opened up more revenue streams. The AIBs got screwed over this time.

Historical figures are just that... historical figures. They don't apply to a new generation that's several uarchs newer and costs way more in R&D than it did 5-8 years ago. It's like those investor "gurus" who keep swinging and recommending buys on Intel, stating Intel did so well from 1980 to 1997 and they've done well historically. Living in the past is nice, but it's not indicative of future success or price. If you're not into the pricing, that's fine. That's your opinion. To me it's a great deal because I was expecting a 700 USD 3070. 500 USD that's due to come down with AMD's announcement is fine by me. If you want to wait and get AMD, that's your free choice, and I won't bust your choice. What will you do if AMD prices around the same and offers the same performance?
 
Last edited:

CP5670

Diamond Member
Jun 24, 2004
5,527
604
126
I've got an amazing (for gaming) Freesync 2 monitor and I really like the current AMD driver package, the built in overlays, the per game tuning that I can do (with Freesync 2 Premium controlling every aspect of the monitor, including color and brightness levels I can really dial in different games with their matching aesthetics) and there seems to be a ton of functionality (like power tuning) that with my nvidia cards I am installing 3rd party software to control. I'd really rather not, I install as little software as possible so that is just a personal preference. I am staying in the AMD ecosystem for the monitor integration - I assume anyone with sweet g-sync monitor feels similar when contemplating changing over, it just can't be worth it.

This is an important aspect of which card to buy, more so than just performance. I have used quite a few cards from both companies in the past but have a clear preference for Nvidia these days because I'm much more familiar with their third party tools and game fixes. Both companies have their driver issues, but I know more about the Nvidia issues and how to work around them. Any card I get has to work with not only the latest AAA titles but my existing library of games going back over 20 years. I'm less inclined to spend time fixing issues than I used to be and want things to just work with minimum effort.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
I really don't get logic like this, it's really not hard.

Look at the R&D levels for the 3 years before a product comes to market; that is what you're ignoring.
When AMD started designing Navi1x, AMD had its lowest R&D levels in like 15 years: $220M a quarter, up to $370M a quarter by release.
Now Navi2x will be $320M a quarter at the start, up to $460M a quarter by release,
and Navi3x will be higher than that.

You need lots of money to bring features and a product stack to market, and AMD is spending significantly more to do that.
So to base something off short-term history while ignoring the inputs that determine that history is just asking to be wrong... isn't it?

Well I have no idea, just the rumors we've had to work with, basically that big Navi is a gigantic version of the 5700XT, but no dedicated Raytracing or Deep Learning cores.

If they're competitive, and have answers to DLSS 2.0 and competitive Raytracing, I'll be super happy, as that would be good news for absolutely everyone. But this late in the game (2 years after the RTX launch??) I think we would have at least some decent leaks by now, no?
 

A///

Diamond Member
Feb 24, 2017
4,352
3,155
136
I hear what you are saying... but... ¯\_(ツ)_/¯ I disagree a lot.

I've got an amazing (for gaming) Freesync 2 monitor and I really like the current AMD driver package: the built-in overlays, the per-game tuning that I can do (with Freesync 2 Premium controlling every aspect of the monitor, including color and brightness levels, I can really dial in different games to match their aesthetics), and there seems to be a ton of functionality (like power tuning) that on my nvidia cards I have to install 3rd-party software to control. I'd really rather not; I install as little software as possible, so that is just a personal preference. I am staying in the AMD ecosystem for the monitor integration - I assume anyone with a sweet g-sync monitor feels similar when contemplating changing over, it just can't be worth it.

I plan on buying an RDNA 2 card at launch for $500-$600, and it will come with a whole exclusive software ecosystem and a UHD drive attached: I am choosing the PS5 SKU.

I'll wait for the first round of price cuts and maybe even midcycle refreshes before bothering with my PC. HDR is my killer graphics feature atm and I already have that.

Comparing any of these cards to the PCs that will be the next consoles makes them look like terrible values, IMO, in a way that I don't really think has been done before - the Jaguar was way too gross in the last gen, and before that the consoles were way too weird to be even closely compared.

Yeah, this is a great reply. Not someone telling me it's a crap value because X cost such-and-such 5 years ago. Give me a break (not at you). I get that people are heavily invested into the AMD environment outside of a simple processor. I don't invest in G-Sync either. I buy a quality monitor that fits my needs. I'm not a huge gamer. What I do know is my time is worth money. I don't have the enthusiasm to bat for the underachieving team. Don't want to deal with issues. Nv has their fair share of problems, but they address them quickly, and they're not as severe as some like to make them out to be. I've owned close to a dozen ATI and AMD cards over the last 20 years. Only a few were good, either matching nv or beating them outright. Software was always on the edge of being good or terrible. Usually in the middle.

Anyway, I'm going for the PS5, too. XBox is nice but I've always been a Sony dude and their controller is muscle memory to me. I've never gotten the hang of using the Xbox controllers.
^^ That's pretty brutal, but to be honest I am inclined to agree more or less.

If Navi2 doesn't have an equal to both DLSS 2.0 and Hardware Raytracing of at least the same performance per tier, they're almost beyond redemption. Early Raytracing was more of a fun but not very practical gimmick, especially on the 2070 and below, and early DLSS was a blurry mess that invited well-deserved mocking and memes.

Now? Cyberpunk, probably the biggest PC release since Witcher 3 or Skyrim, is completely optimized for tons of sophisticated Raytracing effects, and has DLSS 2.0, which is almost unthinkably effective in practice, giving well more than double the effective performance per resolution level (e.g. 1080p output at 720p levels of framerate, 4K at 1080p-1440p performance numbers, etc.). If that becomes widely adopted, it could mean a $179 3050ti equalling or even beating a 300+W $700+ full Navi20, and annihilating the 5700XT, 1080ti, etc.

If Navi was moving from, say, 16nm down to 7nm, I might have some hope they could pull some real magic and get the necessary leap. However, the 5700XT was already 7nm, and if the rumored full Navi 20 is basically just a doubled 5700XT, that won't be nearly enough to compete. It would be missing features, and there's simply no way they could run a doubled 5700XT die on 7nm at the same clocks, meaning the actual performance leap would be more like 150% rather than 200%. Not to mention the potential increase in memory bandwidth wouldn't be enough unless they returned to a 512-bit bus, which would make things even more expensive.

So the window is tight before the 3060 and 3050/3050ti start really bringing the pain, unless AMD really shocks us and Navi is more than a typical more-of-the-same kind of release.

I'm assuming this reply was to me. AMD needs to deliver not just on raw performance, but also on the software stack behind what's driving nv's further ascension. We've been treated to preview clips of the upcoming consoles and nothing beyond what was told at Hot Chips. It was an interesting presentation, but I wasn't fond of paying $100 for it.

Yeah, the software stack this and future generations will experience is going to be interesting. I see a lot of people giving it grief, but given enough time to mature, it should become even more impressive. There's already talk of redoing classic games from the last 20 years and bringing them back to life. Not a cheap effort since everything is redone, but still very cool to see some of your favorite old games revived instead of looking like blurry blocks. The GCN to RDNA flip shed some archaic uarch baggage, I'll give them that. The 5700XT Nitro+ is an amazing card. I tested one for eight weeks among various AMD AIB offerings and it rocked. It was also the only card out of 5 that didn't exhibit issues. It's a beautiful card, too. That said, I see it as poor value unless you could get it on sale, and it does go on sale from time to time. If RDNA2 cannot address baseline performance without fancy software improving visual feedback during gameplay, then it's a lost war for AMD. If they can't compete in the midrange or even the low end, then I'm not sure what they can do. If nv releases a 3060, they'll price it as cheap as they can, maybe even sell it at cost plus a little extra just to stick it to AMD.
 
Reactions: Arkaign

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
12 layer PCBs.

Reduced distances between GPU die and memory.

Failure to reach 21Gbps as GDDR6X is supposed to allow at launch.

There are already several signs that GDDR6X has major limitations. Add on the fact that, between data travel and module power, you're looking at notably over 100W on the 3090, and yes, the stuff is absolutely cursed.

Just because Nvidia didn't want to have to take the HBM tax on themselves. But they don't need to worry, as AIBs can deal with all the extra costs. No biggie for them
Well, it was worth it to NV to get the price right vs HBM. Even stuff like a 12-layer PCB for routing and reduced noise was worth it.
Does the GA102 have extra pins for HBM? Shoveling all that data from HBM into the same number of pins needed for GDDR6X would create problems as well.
Anyway, obviously it was a b**ch; a bit like Fermi's GDDR signalling problems.
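For scale, here's the per-pin arithmetic behind the signalling pain; the rates and bus widths are public spec-sheet figures, assumed here for illustration:

```python
# Total bandwidth vs per-pin signalling rate: GDDR6X pushes ~6x the
# per-pin rate HBM2e would need, which is what the 12-layer PCB and
# tight die-to-memory routing are fighting.
configs = {
    "3090 GDDR6X (384-bit)":       (19.5, 384),
    "HBM2e, 2 stacks (2 x 1024b)": (3.2, 2048),
}
for name, (gbps_per_pin, width_bits) in configs.items():
    total = gbps_per_pin * width_bits / 8
    print(f"{name}: {total:.0f} GB/s total at {gbps_per_pin} Gbps per pin")
```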
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,864
3,418
136
Well I have no idea, just the rumors we've had to work with, basically that big Navi is a gigantic version the 5700XT, but no dedicated Raytracing or Deep Learning cores.
Now you're just inventing things.
This whole dedicated-vs-not-dedicated thing is crap.
Raytracing is super heavy on bandwidth anyway; having a "dedicated" unit attached to the exact same register file as the ALUs means both solutions are primarily limited by the same damn thing!
On the GEMM front, AMD seem to have decided to take a generalised ALU design rather than a dedicated-unit path. What that means is that if your algorithm fits NV's hardware you get awesome near-peak numbers; if not, you get hurt more. But let's not pretend like NV didn't have DLSS (v1.9 as it was called) working on generic FP32 ALUs. So I find it kind of strange that ALUs that could do all sorts of AI-targeted packed math are deemed unsuitable by the unwashed.

It's just like the Sparse GEMM optimisations NV brought with A100: now every fanboy is quoting the peak speed without even bothering to try and understand what NV was optimising in the first place!
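For anyone wondering what that sparse peak actually measures: A100's sparse mode assumes 2:4 structured sparsity, i.e. exactly 2 of every 4 weights are zero, letting the hardware skip half the MACs. A minimal numpy sketch of that pruning pattern (illustrative only; real deployments go through NVIDIA's own libraries):

```python
import numpy as np

# Prune a weight matrix to the 2:4 structured-sparsity pattern:
# in every group of 4 consecutive weights, zero the 2 smallest.
# The doubled "sparse" peak only applies to matrices in this shape.
def prune_2_of_4(w: np.ndarray) -> np.ndarray:
    out = w.copy()
    groups = out.reshape(-1, 4)                       # groups of 4
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]  # 2 smallest per group
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return out

w = np.random.default_rng(0).standard_normal((4, 8))
ws = prune_2_of_4(w)
print("nonzero fraction:", np.count_nonzero(ws) / ws.size)  # exactly 0.5
```

If a network can't be pruned to that exact pattern without losing accuracy, the headline sparse TFLOPS number simply doesn't apply, which is the point about quoting peaks without understanding them.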
 

MrTeal

Diamond Member
Dec 7, 2003
3,585
1,743
136
I'll say it in plain English. The days of really cheap, high YoY performance gains due to uarch maturity are long gone on GPUs. Even NVidia with their vast resources used clever marketing with their 1.9x claim: up to 1.9x more efficient, and up to 2x performance. Even if real-world performance is subpar, they got their money and are OK because they said "up to". Every time a fab drops a node size, the costs per wafer go up. NVidia can cover if not subsidize their FE cards because their recent acquisitions make them a lot of money and opened up more revenue streams. The AIBs got screwed over this time.

Historical figures are just that... historical figures. They don't apply to a new generation that's several uarchs newer and costs way more in R&D than it did 5-8 years ago. It's like those investor "gurus" who keep swinging and recommending buys on Intel, stating Intel did so well from 1980 to 1997 and they've done well historically. Living in the past is nice, but it's not indicative of future success or price. If you're not into the pricing, that's fine. That's your opinion. To me it's a great deal because I was expecting a 700 USD 3070. 500 USD that's due to come down with AMD's announcement is fine by me. If you want to wait and get AMD, that's your free choice, and I won't bust your choice. What will you do if AMD prices around the same and offers the same performance?
Yeah, this is a great reply. Not someone telling me it's a crap value because X cost such-and-such 5 years ago. Give me a break (not at you). I get that people are heavily invested into the AMD environment outside of a simple processor. I don't invest in G-Sync either. I buy a quality monitor that fits my needs. I'm not a huge gamer. What I do know is my time is worth money. I don't have the enthusiasm to bat for the underachieving team. Don't want to deal with issues. Nv has their fair share of problems, but they address them quickly, and they're not as severe as some like to make them out to be. I've owned close to a dozen ATI and AMD cards over the last 20 years. Only a few were good, either matching nv or beating them outright. Software was always on the edge of being good or terrible. Usually in the middle.
I think you're misunderstanding what I am saying. I'm saying the 3070 isn't the good-value card relative to the other cards in this generation. Historically the x70 is the value card: you get a good % of the performance of the x80 for much less money. Here, that doesn't seem like it will be the case. The 3080 will actually offer better value this generation.

If you were thinking you were going to spend $700 on the 3070, why not just get the 3080? The gap between it and the 3070 is the same +50% we typically see between the x80 and the x80 Ti, which is huge for the extra $200.
 

Jaskalas

Lifer
Jun 23, 2004
33,574
7,636
136
When you're choking at the $500 card, whatever is higher may as well not exist, regardless of the performance/dollar ratio. #PeopleOnBudget
Hard limits are a factor, which is why the XX60 and XX70 are always sold in much higher volume.

Those of us with Polaris/Pascal are going to enjoy the RTX 30 generation when it is finally time to upgrade.
 
Last edited:
Reactions: psolord

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Now you're just inventing things.
This whole dedicated-vs-not-dedicated thing is crap.
Raytracing is super heavy on bandwidth anyway; having a "dedicated" unit attached to the exact same register file as the ALUs means both solutions are primarily limited by the same damn thing!
On the GEMM front, AMD seem to have decided to take a generalised ALU design rather than a dedicated-unit path. What that means is that if your algorithm fits NV's hardware you get awesome near-peak numbers; if not, you get hurt more. But let's not pretend like NV didn't have DLSS (v1.9 as it was called) working on generic FP32 ALUs. So I find it kind of strange that ALUs that could do all sorts of AI-targeted packed math are deemed unsuitable by the unwashed.

It's just like the Sparse GEMM optimisations NV brought with A100: now every fanboy is quoting the peak speed without even bothering to try and understand what NV was optimising in the first place!

It doesn't matter what they call it, or how they do it. The time is up: Navi 2 needs the implementation and support to equal Nvidia 3000 in Cyberpunk and beyond, or they're frankly screwed. Don't miss the forest for the trees. It could be called Nvidia's Donkey Juice and Radeon Wheat Goblins; the fact is they have to deliver or Navi2 is dead on arrival. Let's hope they do. We've seen what monopolistic industry-segment behavior is like, and no thanks to that.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,155
136
for much less money.
Historical costs, right? You clearly chose to ignore what I said and keep driving this point home. The days of an x70 card costing around 280-400 are long gone. You need to sell at a price point that covers both production of the card and some R&D, and still make a profit. In the 900-series days when 28nm was used, a large wafer may have cost around $3K; a 7nm wafer today costs about $7-9K, depending on who the fab is and whether the company seeking that fab's expertise can negotiate a cheaper price with a longer agreed purchase contract. The GTX980 GPU was about 400 mm². The new GPU in the 3080 is reportedly about 630 mm². The larger the die, the higher the chance of defects. The higher number of defects, especially on a subpar process like Samsung's "7nm" (a custom-modified Samsung 8nm for NVidia), is going to cost way more from the die size alone, never mind the drop from 28nm to "7nm".
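To put rough numbers on the die-size point, here's a sketch combining the wafer prices and die sizes above with a guessed defect density; all of it is illustrative, not foundry data:

```python
import math

WAFER_D_MM = 300  # standard wafer diameter

def gross_dies(area_mm2):
    """Classic gross-dies-per-wafer approximation."""
    return (math.pi * (WAFER_D_MM / 2) ** 2 / area_mm2
            - math.pi * WAFER_D_MM / math.sqrt(2 * area_mm2))

def poisson_yield(area_mm2, d0_per_cm2):
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100)

for name, area, wafer_cost in [("GTX 980-class (~400mm2, $3k wafer)", 400, 3000),
                               ("GA102-class  (~630mm2, $8k wafer)", 630, 8000)]:
    good = gross_dies(area) * poisson_yield(area, d0_per_cm2=0.1)
    print(f"{name}: ~{good:.0f} good dies/wafer -> ~${wafer_cost / good:.0f} per good die")
# Caveat: big dies with defects usually get salvaged as cut-down SKUs
# (the 3080 itself is a cut-down GA102), which softens the real cost.
```

Under those assumptions the big die costs roughly five to six times as much per good die, which is the point about size compounding the wafer-price increase.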

I'd love a killer modern dual-GPU card under $500 that uses less than 210 watts, works flawlessly, and isn't hamstrung by the dual part of the equation, but we're not in the late 2000s anymore. If you still don't want to buy it, that's fine. It's your choice. I'm not an Nv shill and I'm not being paid by them. I was never employed by them. Wait for AMD's product stack. See what piques your fancy. If not, don't forget AIBs will be selling overclocked variants in the coming months. You might find something you like that fits your power and price budget.
 

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
So, honest question: what happens if scalpers use bots to actually buy the majority of the RTX 3080s at launch? They go on eBay and other sites for $1000 and then what? People just buy them without a care?
 
Reactions: spursindonesia

MrTeal

Diamond Member
Dec 7, 2003
3,585
1,743
136
Historical costs, right? You clearly chose to ignore what I said and keep driving this point home. The days of an x70 card costing around 280-400 are long gone. You need to sell at a price point that covers both production of the card and some R&D, and still make a profit. In the 900-series days when 28nm was used, a large wafer may have cost around $3K; a 7nm wafer today costs about $7-9K, depending on who the fab is and whether the company seeking that fab's expertise can negotiate a cheaper price with a longer agreed purchase contract. The GTX980 GPU was about 400 mm². The new GPU in the 3080 is reportedly about 630 mm². The larger the die, the higher the chance of defects. The higher number of defects, especially on a subpar process like Samsung's "7nm" (a custom-modified Samsung 8nm for NVidia), is going to cost way more from the die size alone, never mind the drop from 28nm to "7nm".

I'd love a killer modern dual-GPU card under $500 that uses less than 210 watts, works flawlessly, and isn't hamstrung by the dual part of the equation, but we're not in the late 2000s anymore. If you still don't want to buy it, that's fine. It's your choice. I'm not an Nv shill and I'm not being paid by them. I was never employed by them. Wait for AMD's product stack. See what piques your fancy. If not, don't forget AIBs will be selling overclocked variants in the coming months. You might find something you like that fits your power and price budget.
Again, I’m not comparing historical costs on the x70 model. I’m comparing its positioning within the product stack. The x70 is historically the better value card, along with being cheaper. You’d get mid 70s to 80 percent of the performance of the x80, usually with the same RAM. Here, that’s not the case. The 3080 is just the better value. The increase in performance looks to be greater than the increase in price, and that’s not even taking into account what looks to be a much better cooler on the 3080.

If you have a hard cap of $500 or you just want the Ampere features and need 3070 level performance or can’t wait for the 3060 but don’t think you’ll need 3080, that’s fine. I’m sure they’ll sell a lot of them. I’ll stick by my statement that the 3070 isn’t the best value and the 3080 is the better value this generation though.
 

MrTeal

Diamond Member
Dec 7, 2003
3,585
1,743
136
So, honest question: what happens if scalpers use bots to actually buy the majority of the RTX 3080s at launch? They go on eBay and other sites for $1000 and then what? People just buy them without a care?
Still waiting on actual reviews, but I’d comfortably spend $700 on a 3080 from a proper first party seller. Sucks to downgrade from the 1080 Ti on RAM, but I’m not sure it’ll be a real issue.

The chance of me buying a $1000 one off eBay is 0. I’d rather wait for a 3080 Super refresh in a year or buy a Navi card if it looks good.
 