The opening might be with Intel. They can wait with their architecture and see which way the wind blows with NV's ray tracing push.
But, but, but. Well, I mean that Intel is supposedly developing a new GPU, and they have time to modify it and add some sort of RT core if RT catches on quickly.
Whatever AMD has in the works is likely too far along to change course.
Theoretically, with explicit multi-GPU under DX12, one card could do the RT grunt work (see the sketch after this post).
Maybe someone could come up with an RT co-processor GPU for low cost?
Of course if RT doesn't become popular, then it's a moot point.
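A minimal sketch of what that split could look like under DX12's explicit multi-adapter model, assuming a Windows 10 box with two GPUs: enumerate the adapters, create an independent device on each, and hand the second one the RT-style compute work. The structure is my own illustration, not any shipping engine's approach, and the cross-adapter copy/sync plumbing is omitted.

```cpp
// Enumerate GPUs and create one D3D12 device per hardware adapter.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip the WARP software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"GPU %u: %s\n", i, desc.Description);
            devices.push_back(device);
        }
    }

    // With two independent devices, the split is up to the app: e.g. record the
    // ray-tracing-style compute pass on a queue from devices[1] and move results
    // back to devices[0] through a cross-adapter shared heap (not shown here).
    return devices.size() >= 2 ? 0 : 1;
}
```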
Intel has some real-time ray tracing GPU experience already, though.
Aren't we told it took Nvidia 10 years of R&D to get this out? Now Intel can do it in less than 2? For sure, this release is causing a lot of cognitive dissonance.
It would be nice to have an HD4870 moment again.
Or even better, an HD5870 moment in Q1 2019.
It doesn't sound like there's going to be a gaming version of Vega 20, and if there is, it'd probably have to be very expensive, likely more than the 2080's fake MSRP of $699 (which could be real by then).
Navi might be interesting, but it sounds like, at least in 2019, it's only targeting Vega 64-class performance. That might work if they're targeting $300 or less for it.
Take Vega 64.
Port it to TSMC 7nm (40% speed increase at the same power, plus double the density).
You get a 7nm Vega 64 with 30% higher clocks at half the die size.
That's close to RTX 2080 performance from a die of only ~250mm2.
Sell it at $599, end of story.
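Rough numbers behind that, assuming Vega 10's ~486mm2 die on 14nm and Vega 64's ~1.55 GHz boost clock (my figures, taking the density and clock claims above at face value):

486mm2 ÷ 2 ≈ 243mm2, and 1.3 × 1.55 GHz ≈ 2.0 GHz.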
And Nvidia's 7nm will quickly come out in response.
If AMD could release a 7nm Vega in Q1 2019, they would have a window of 6-9 months.
But how long ago would AMD have had to have the product design in the pipeline for tape-out to occur in time to take rasterization card market share from Nvidia?
It's obvious that AMD wanted to be early on 7nm. Their CPUs are going that route. Is it too much of a stretch to think that all the R&D for 7nm circuitry would also be used by the GPU division? Traditionally they were early to new nodes, most times leading Nvidia, and they must certainly know that this is one way to reduce the lead Nvidia has had in architecture for several years now. This is not too outrageous to imagine.
It looks to me that the central argument of this thread was that since Nvidia is devoting so much die area to ray tracing, AMD focusing on rasterization could catch them out on standard games: "... focusing performance increases in standard rasterized tasks..."
AMD could release a card that didn't waste die area on any specific RT HW, and thus be better at standard games.
Not that there was an opening for AMD (or Intel) to catch NVidia on Ray Tracing itself.
As far as Intel's experience goes, that was running ray tracing algorithms on x86 Intel CPUs and then on Larrabee (which was also x86).
I really doubt much from that can hold a candle to dedicated RT HW plus a DL network to denoise.
Vega was absolute trash and everyone who adopted one was absolutely ripped off by AMD.

Not for professional / compute usage and tasks. It was quite a GPU miner too.
All that matters is the benchmarks on review day, and then, once RT games are out, tested, and analyzed, what we actually get in terms of image quality vs. performance. Consumers care little about what's under the hood, only the actual performance and what RT actually delivers for them. If anything is out of place, critics will have a field day, Nvidia will be savaged left and right, and they know it.
It's the same thing for both of them: you can fill rooms with denoising algorithms.

I want Nvidia to clarify its Gigaray claims. I feel Jensen took more than artistic license when coming up with the figure, and it's likely the 'upsampled' tensor-core result. AMD essentially built a versatile, truly asynchronous pipeline that could be repurposed; Nvidia b.s.'d theirs and thus needs dedicated cores.

I want an apples-to-apples, detailed comparison between Nvidia's and AMD's real-time hybrid ray tracing solutions. No marketing nonsense and no cornball claims coming from the denoised/upsampled output. How much is the ray-trace portion actually processing, in raw compute numbers? If Nvidia truly were doing 10x as much processing as AMD, Jensen wouldn't shut his mouth about it. So they likely are not. Ray-trace cores are just a bunch of ALUs in the SM that take the place of the double-precision compute portion of Volta. They're clocked and locked at the same rates as the rest of the SM, so they're not doing anything magical.

What matters are raw performance numbers if I am making a compute comparison.
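For scale, here's my own back-of-envelope on the headline 10 Gigarays/s figure, assuming 4K at 60 fps (neither assumption comes from Nvidia's disclosures):

10×10⁹ rays/s ÷ (60 frames/s × 3840 × 2160 pixels) ≈ 20 rays per pixel per frame.

Whether that's raw BVH-traversal throughput or the post-denoise effective rate is exactly the question.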