Does the RTX series create an opening for AMD?


LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
The opening might be with Intel. They can wait with their architecture and see which way the wind blows with NV's ray tracing push.
 
Reactions: ub4ty

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
It would be nice to have an HD 4870 moment again.
Or even better, an HD 5870 moment in Q1 2019.
 
Reactions: amenx

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
The opening might be with Intel. They can wait with their architecture and see which way the wind blows with NV's ray tracing push.

While it will be great to have Intel as a third player, they aren't even showing up until 2020, which is about when you would expect a 3000 series from NVidia on a refined, higher-yielding 7nm process.

Expecting Intel to jump from disappointing IGPs to a serious competitor for the RTX 3000 series seems like a fairy tale. I definitely think the odds are higher that AMD will have something good by then than that Intel will go from crappy IGPs to outdoing AMD and taking advantage of some opening at NVidia.

Also, I really think the opening is a small one, mainly against the RTX 2000 series. NVidia may have really pushed die size to get that RT HW in this generation, but the proportional penalty for that RT HW will likely be smaller when working with the bigger transistor budget of a refined 7nm. That means the opening is a lot less significant, and there is also some inevitability to ray tracing.

Basically the farther forward you go, the less overhead tax you pay for RT HW, and the more you need that RT HW. So IMO, the opening is really gone by 2020.

Only AMD can have any chance of exploiting it in 2019.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
Whatever AMD has in the works is likely too far along to change course.

Theoretically, with multi-GPU under DX12, one card could do the RT grunt work.

Maybe someone could come up with an RT co-processor GPU for low cost?

Of course, if RT doesn't become popular, then it's a moot point.
 
Reactions: ub4ty

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
Whatever AMD has in the works is likely too far along to change course.

Theoretically, with multi-GPU under DX12, one card could do the RT grunt work.

Maybe someone could come up with an RT co-processor GPU for low cost?

Of course, if RT doesn't become popular, then it's a moot point.
But, but, but.

Weren't we told it took Nvidia 10 years of R&D to get this out? Now Intel will do it in less than 2? For sure, this release is causing a lot of cognitive dissonance.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
But, but, but.

Weren't we told it took Nvidia 10 years of R&D to get this out? Now Intel will do it in less than 2? For sure, this release is causing a lot of cognitive dissonance.
Intel has some real-time ray tracing GPU experience already, though.

Back in the Larrabee days, Intel was running real-time RT demos of things like Quake 4.

https://en.wikipedia.org/wiki/Quake_Wars:_Ray_Traced
 
Reactions: gdansk and ub4ty

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
Whatever AMD has in the works is likely too far along to change course.

It looks to me that the central argument of this thread was that since NVidia is devoting so much die area to ray tracing, AMD focusing on rasterization could catch them out on standard games: "... focusing performance increases in standard rasterized tasks..."

AMD could release a card that didn't waste die area on any specific RT HW, and thus be better at standard games.

Not that there was an opening for AMD (or Intel) to catch NVidia on Ray Tracing itself.

As far as Intel's experience goes, that was running some x86 algorithm on Intel CPUs and then Larrabee (which was also x86).

I really doubt much from that can hold a candle to dedicated RT HW and a DL network to denoise.
 
Reactions: GodisanAtheist

jpiniero

Lifer
Oct 1, 2010
14,847
5,457
136
This would be nice to have an HD4870 moment again.
Or even better an HD5870 moment in Q1 2019.

It doesn't sound like there's going to be a gaming version of Vega 20, and if there is, it'd probably have to be very expensive, likely more than the 2080's fake MSRP of $699, which could be real by then.

Navi might be interesting, but it sounds like at least in 2019 it's only targeting Vega 64-type performance. That might work if they are targeting $300 or less for it.
 

Timmah!

Golden Member
Jul 24, 2010
1,463
729
136
I have been waiting for OctaneRender support on AMD for almost a decade. It is so sad that they apparently produce compute-heavy cards like Vega, with even bigger potential compute performance than Nvidia's gaming-centered stuff, yet you can never use it because AMD has been incapable of working with the Octane devs to help them release a working product... see this:

https://render.otoy.com/forum/viewtopic.php?f=9&t=66456

Safe to say, I have lost all hope in AMD.
Meanwhile, I can get Turing with its RTX “gimmick”, which will probably speed up Octane 5x to 8x...

https://www.youtube.com/watch?v=6l2vQ8eRbiY&feature=youtu.be

Those Nvidia prices kill me, but the only other choice I have is not to buy, because AMD simply won't provide an alternative.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
If AMD pushed the biggest chip they could make profitably at $250 on 12nm, with no new features that require new work from devs, they could absolutely take advantage of this opening. They would have had to design it years ago, though, due to lead times.

A super GDDR6 Polaris with Vega-level features, as big as can be made with a respectable margin at $250 on 12nm, basically. A pure sweet-spot GPU. It would sell serious volume, especially if they keep pushing clock speed like they did on Vega. Basically a 2018 version of the 4870/4850 stack.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
The "designed years ago" part is a problem.

The only magic they could maybe pull out of their hat is the console APUs' GPUs as a separate dGPU.

Apparently the PS4 Pro is 2304:64:256-bit and the Xbox One X is 2560:32:384-bit. Even if clocked at desktop speeds of 1.3GHz+, unfortunately either would just be a modest bump over the RX 580's 2304:32:256-bit.
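For a rough sense of scale, here is a back-of-the-envelope sketch of what those shader counts mean for peak FP32 throughput, using the standard GCN formula of 2 FLOPs per shader per clock; the 1.3 GHz figure is the hypothetical desktop clock from the post above, and the RX 580's ~1.34 GHz boost clock is an added assumption.

```python
# Back-of-the-envelope peak FP32 throughput: shaders * 2 FLOPs/clock * clock.
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

gpus = {
    "PS4 Pro GPU @ 1.3 GHz (hypothetical)":   (2304, 1.30),
    "Xbox One X GPU @ 1.3 GHz (hypothetical)": (2560, 1.30),
    "RX 580 @ ~1.34 GHz boost (assumed)":      (2304, 1.34),
}

for name, (shaders, clock) in gpus.items():
    print(f"{name}: {peak_tflops(shaders, clock):.1f} TFLOPS")
# Roughly 6.0, 6.7, and 6.2 TFLOPS -- a modest bump over the RX 580 at best.
```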
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
You'd think they'd have some sort of Polaris successor in the pipeline that is close to the ideal for this opening, if only because $200-250 has been the sweet spot for many years now and Polaris is starting to get old. My guess is that it might still be too small. A 980-style ~400mm2 die with a modest-sized bus, but at a modest price instead of $550, would be a killer GPU. All rasterization hardware. You could have a fancy compute-plus-the-fixins' card line too, but I think there is an opening for a no-bells-and-whistles, straight-up rasterization card.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
It doesn't sound like there's going to be a gaming version of Vega 20, and if there is, it'd probably have to be very expensive, likely more than the 2080's fake MSRP of $699, which could be real by then.

Navi might be interesting, but it sounds like at least in 2019 it's only targeting Vega 64-type performance. That might work if they are targeting $300 or less for it.

Take Vega-64
Port it to 7nm TSMC (40% speed increase at same power + double the density)

And you get the following,

7nm Vega-64 with 30% higher clocks at half the die size.
= close to RTX2080 performance with a die size of only ~250mm2

Sell it at $599, end of story.
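A quick sketch of that scaling math; the Vega 64 baseline figures (~495 mm2 die, ~1.55 GHz boost clock) are added assumptions, while the 2x density and +30% clock numbers are taken from the post above.

```python
# Rough scaling sketch for a hypothetical 7nm Vega 64 port.
VEGA64_DIE_MM2 = 495      # assumed Vega 10 die size (~495 mm^2)
VEGA64_BOOST_GHZ = 1.55   # assumed Vega 64 boost clock

DENSITY_SCALING = 2.0     # "double the density" claimed above for 7nm
CLOCK_UPLIFT = 1.30       # "+30% clocks" claimed above

die_7nm = VEGA64_DIE_MM2 / DENSITY_SCALING
clock_7nm = VEGA64_BOOST_GHZ * CLOCK_UPLIFT

print(f"Estimated 7nm die size: ~{die_7nm:.0f} mm^2")   # ~248 mm^2, i.e. the ~250 mm^2 above
print(f"Estimated boost clock:  ~{clock_7nm:.2f} GHz")  # ~2.0 GHz, roughly +30% raw throughput
```

Whether a ~30% clock uplift actually lands it near an RTX 2080 depends on how well Vega's performance scales with clock speed, which is an open question.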
 
Reactions: VirtualLarry

Rifter

Lifer
Oct 9, 1999
11,522
751
126
I have little faith left in AMD's GPU department after the Vega flop, but I really hope they have a comeback planned.
 
Reactions: Ranulf

amenx

Diamond Member
Dec 17, 2004
4,013
2,285
136
Take Vega-64
Port it to 7nm TSMC (40% speed increase at same power + double the density)

And you get the following,

7nm Vega-64 with 30% higher clocks at half the die size.
= close to RTX2080 performance with a die size of only ~250mm2

Sell it at $599, end of story.
And Nvidia's 7nm will quickly come out in response.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
And Nvidia's 7nm will quickly come out in response.

No, it will not.
First, because there will not be a lot of 7nm volume in early 2019 to satisfy NV's sales.
And secondly, because NV will not launch a new GPU earlier than September 2019.

If AMD could release a 7nm Vega in Q1 2019, they would have a window of 6-9 months.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,453
10,121
126
If AMD could release a 7nm Vega in Q1 2019, they would have a window of 6-9 months.
But how long ago would AMD have had to have the product design in the pipeline, if tape-out was to occur in time to take rasterization card market share from Nvidia?
 

slashy16

Member
Mar 24, 2017
151
59
71
I think AMD is finished competing with Nvidia in the $300 USD+ market. The only architecture they have that is competitive is Polaris. They need to get Polaris on 12nm or 7nm with higher clocks to compete with the 1050/1060/2060. Vega was absolute trash, and everyone who adopted one was absolutely ripped off by AMD.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
But how long ago would AMD have had to have the product design in the pipeline, if tape-out was to occur in time to take rasterization card market share from Nvidia?

It would be shocking if AMD didn't have something in the pipeline for 2019.

The question will be how competitive it is.
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
But how long ago would AMD have had to have the product design in the pipeline, if tape-out was to occur in time to take rasterization card market share from Nvidia?
It's obvious that AMD wanted to be early on 7nm; their CPUs are going that route. Is it too much of a stretch to think that all the R&D for 7nm circuitry would also be used by the GPU division? Traditionally they were early to new nodes, most times leading Nvidia, and they must certainly know that this is one way to reduce the lead Nvidia has in architecture, which, by the way, has existed for several years now. This is not too outrageous to imagine.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
It looks to me that the central argument of this thread was that since NVidia is devoting so much die area to ray tracing, AMD focusing on rasterization could catch them out on standard games: "... focusing performance increases in standard rasterized tasks..."

AMD could release a card that didn't waste die area on any specific RT HW, and thus be better at standard games.

Not that there was an opening for AMD (or Intel) to catch NVidia on Ray Tracing itself.

As far as Intel's experience goes, that was running some x86 algorithm on Intel CPUs and then Larrabee (which was also x86).

I really doubt much from that can hold a candle to dedicated RT HW and a DL network to denoise.
It's the same thing for both of them:
You can fill rooms with denoising algorithms.

I want Nvidia to clarify its gigaray claims. I feel Jensen took more than artistic license when coming up with the figure, and it's likely the 'upsampled' tensor core result. AMD essentially built a versatile, truly asynchronous pipeline that could be repurposed. Nvidia BS'd theirs and thus needs dedicated cores.

I want an apples-to-apples, detailed comparison between Nvidia's and AMD's real-time hybrid ray tracing solutions. No marketing nonsense and no cornball claims coming from the denoised/upsampled output. How much is the ray trace portion actually processing, in raw compute numbers? If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it. So, they likely are not. Ray trace cores are just a bunch of ALUs in the SM that take the place of the double-precision compute portion of Volta. They're clocked and locked at the same rates as the rest of the SM. So, they're not doing anything magical in them.
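To put the "raw compute numbers" point in perspective, here is a small sketch of what a quoted rays-per-second figure buys you before any denoising or upscaling; the 10 gigarays/s value is NVidia's marketing figure for the RTX 2080 Ti, and the resolutions and frame rate are illustrative assumptions.

```python
# How many rays per pixel per frame does a quoted rays/second figure allow,
# before any denoising or upscaling is applied?
def rays_per_pixel(rays_per_second: float, width: int, height: int, fps: int) -> float:
    return rays_per_second / (width * height * fps)

QUOTED_RAYS_PER_SEC = 10e9  # NVidia's "10 Gigarays/s" marketing figure for the RTX 2080 Ti

for width, height in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    rpp = rays_per_pixel(QUOTED_RAYS_PER_SEC, width, height, fps=60)
    print(f"{width}x{height} @ 60 fps: ~{rpp:.0f} rays per pixel per frame")
# ~80 at 1080p, ~45 at 1440p, ~20 at 4K -- and clean results need many rays (plus bounces)
# per pixel, which is exactly why a denoised/upsampled output is not the same thing as
# raw ray throughput.
```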
 

amenx

Diamond Member
Dec 17, 2004
4,013
2,285
136
It's the same thing for both of them:
You can fill rooms with denoising algorithms.

I want Nvidia to clarify its gigaray claims. I feel Jensen took more than artistic license when coming up with the figure, and it's likely the 'upsampled' tensor core result. AMD essentially built a versatile, truly asynchronous pipeline that could be repurposed. Nvidia BS'd theirs and thus needs dedicated cores.

I want an apples-to-apples, detailed comparison between Nvidia's and AMD's real-time hybrid ray tracing solutions. No marketing nonsense and no cornball claims coming from the denoised/upsampled output. How much is the ray trace portion actually processing, in raw compute numbers? If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it. So, they likely are not. Ray trace cores are just a bunch of ALUs in the SM that take the place of the double-precision compute portion of Volta. They're clocked and locked at the same rates as the rest of the SM. So, they're not doing anything magical in them.
All that matters are the benchmarks on review day and, once RT games are out, tested, and analyzed, what we actually get in terms of image quality vs. performance. Consumers care little about what's under the hood, only the actual performance and what RT delivers for them. If anything is out of place, critics will have a field day, Nvidia will be savaged left and right, and they know it.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
All that matters are the benchmarks on review day and, once RT games are out, tested, and analyzed, what we actually get in terms of image quality vs. performance. Consumers care little about what's under the hood, only the actual performance and what RT delivers for them. If anything is out of place, critics will have a field day, Nvidia will be savaged left and right, and they know it.
What matters are raw performance numbers if I am making a compute comparison.
Performance numbers derive from what's under the hood, not from what tickles you when you play a video game.
So, Nvidia has to justify how it arrived at its gigaray figure by detailing what's under the hood.
I don't think you realize what I'm talking about, so re-read my post.

Rays are processed at a certain rate per second, just like you have TFLOP numbers for GPUs.
There are rays produced by the ray trace pipeline, and then they are upsampled and denoised later.
Jensen could be talking about this latter result. If he is, then apples to apples, you also go with the upsampled and denoised result of AMD video cards, which effectively do the same foolishness. I am not interested in this, or I would have it split into another category. I want to know the raw compute number.

Nvidia has done this garbage before, so it's not above them.
A benchmark is a benchmark. No one knows what underlies the number Jensen gave; it is actually scene-dependent.

No one cares about game performance except with RTX off, and how bad of a hit you take with it ON.
There's nothing to compare the RTX-on output to qualitatively... So, it's all about performance, and that performance relates to what's under the hood.
 

del42sa

Member
May 28, 2013
65
65
91
Take Vega-64
Port it to 7nm TSMC (40% speed increase at same power + double the density)

And you get the following,

7nm Vega-64 with 30% higher clocks at half the die size.
= close to RTX2080 performance with a die size of only ~250mm2

Sell it at $599, end of story.

Vega as a gaming GPU is pretty much a dead chip. Why even bother with a very expensive shrink if they have a newer GPU architecture in the pipeline? Nonsense...
 