lifeblood
Senior member
Oct 17, 2001
12th December confirmed.
Hopefully it will be the full Navi 14 with all 24 CUs enabled. That should be an interesting contest.
I wonder if that was the original plan. Let’s be honest, AMD’s GPU division has been marking time since the original Polaris. AMD focused (i.e., spent all available R&D money) on Zen while leaving the GPU division on financial life support. The 500 series was just a zero-effort refresh, and the RX 590 was another zero-effort refresh on a slightly tweaked process. Vega and Radeon VII were underfunded, halfhearted efforts for the sole purpose of crying out “We’re not dead yet!”. I suspect Navi has always been the real comeback effort.
What I’m curious about is whether RDNA v1 is just an intermediate step, another “We’re not dead yet!” moment until they can get RDNA v2 out the door, or if v1 was really part of the original plan. Was v1 the end-all-be-all, or did they originally plan on v1 being the low-to-mid architecture and v2 being the mid-to-high architecture? How did Nvidia’s ray tracing affect their plans?
In the end it doesn’t matter, obviously RDNA v1 is a good architecture which, unlike Vega, is very definitely a step forward in performance per watt and in raw performance as well.
I suspect they originally planned on RDNA v1 being the console architecture, but then ray tracing threw a spanner in the works when the console makers demanded it in their GPUs. At that point there was a rush to add it onto RDNA.
In the end it doesn’t matter, obviously RDNA v1 is a good architecture
I wouldn't call matching performance/watt while holding a node advantage "good architecture"...
Radeon RX 5500 XT with 1408 ALUs confirmed.
So, gunning for the 1660 Super then? Either cheaper, better performance/watt, or what? More VRAM (8GB GDDR6?)
What is also confirmed: the RX 5500 XT will have a $199 MSRP.
Might be worth it... for mining Grin... if it has 8GB GDDR6. Would be the cheapest card with 8GB GDDR6 on the market.
By who?
To my knowledge that has not been confirmed; it is guesswork from people looking at Chinese pricing, which includes sales tax, and directly converting the price.
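For what it's worth, here is a rough back-of-the-envelope of why a straight conversion overstates things. The exchange rate, VAT rate, and listing price below are all assumptions for illustration, not confirmed figures:

```python
# Rough sketch: why a Chinese retail price doesn't map 1:1 onto a US MSRP.
# Every number here is an assumption for illustration, not a confirmed figure.

CNY_PER_USD = 7.0         # assumed exchange rate, late 2019
CHINA_VAT = 0.13          # assumed 13% VAT baked into Chinese retail prices
listing_price_cny = 1599  # hypothetical Chinese listing price

naive_usd = listing_price_cny / CNY_PER_USD        # straight conversion
pre_tax_cny = listing_price_cny / (1 + CHINA_VAT)  # back the VAT out first
adjusted_usd = pre_tax_cny / CNY_PER_USD           # US MSRPs exclude sales tax

print(f"naive conversion:    ${naive_usd:.0f}")    # ~$228
print(f"VAT-adjusted guess:  ${adjusted_usd:.0f}") # ~$202
```

So the same listing can suggest two noticeably different "MSRPs" depending on whether the tax is backed out first.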
I'll say this now: $199 is DoA if that's the MSRP. Simple as.
If it is $199, and it matches the 1660 Super, how is it DoA? Typically a card that matches performance and is cheaper is a good thing.
Because it's not going to match the 1660 Super. Performance-wise it's up against the 1650 Super.
Apart from the outliers, it is around the GTX 1660. And those outliers are making it look worse than the RX 5500 actually is.
If you look not only at the performance summary on TechPowerUp but genuinely at every game, you will see that it is closer to the GTX 1660 than to the GTX 1650.
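To illustrate the outlier point with made-up numbers (not actual benchmark results): a couple of bad titles can pull the overall average well below where the card sits in a typical game.

```python
# Made-up per-game results: one card relative to another (1.0 = parity).
# Most games cluster near parity, but two outliers sit around 0.8x.
relative_perf = [1.00, 0.99, 1.02, 0.98, 1.01, 0.97, 1.00, 0.80, 0.82]

mean = sum(relative_perf) / len(relative_perf)
median = sorted(relative_perf)[len(relative_perf) // 2]

print(f"average across all games: {mean:.2f}")   # ~0.95
print(f"median (typical game):    {median:.2f}") # 0.99
```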
TPU is not the best place to draw conclusions about any GPU's performance...
Radeon RX 5500 XT variant with the same core count as the standard variant, clocked to hell at 1.9-2.0 GHz, for 1.06x the performance. Sounds about right.
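A quick sanity check on that 1.06x figure, with assumed clocks (these are not confirmed specs, just illustrative): if the XT keeps the same 22 CU configuration and performance scales roughly with clock, a bump of that size lands right around 1.06x.

```python
# Back-of-the-envelope: same CU count, so ideal scaling is roughly the clock ratio
# (ignoring memory bandwidth limits). Both clocks are assumptions for illustration.
base_clock_ghz = 1.8  # hypothetical sustained game clock, standard RX 5500
xt_clock_ghz = 1.9    # hypothetical sustained game clock, RX 5500 XT

print(f"ideal clock-for-clock scaling: {xt_clock_ghz / base_clock_ghz:.2f}x")  # ~1.06x
```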
Why don't we wait for release drivers, and for the GPU itself to show up against the competition, before we call it a bad value, eh?
And 8GB VRAM vs 4GB.
It'd be worth picking up at $170, but at $199 it's pretty bad value.
Really confusing, considering the larger Navi 10 is used at full count in the 5700 XT; yields should be great for a chip of only 160 mm² compared to 251 mm².
Even if they are saving the absolute best dies for Apple, there should still be enough for a PC SKU at 24 CU/12 WGP.
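To put rough numbers on the yield argument, here is a minimal sketch using the simple Poisson die-yield model; the defect density is an assumed value for illustration, not a published TSMC N7 figure.

```python
from math import exp

# Poisson die-yield model: fraction of defect-free dies = exp(-D0 * A).
# D0 is an assumed defect density; real N7 numbers aren't public here.
D0 = 0.15  # assumed defects per cm^2

for name, area_mm2 in [("Navi 14", 160), ("Navi 10", 251)]:
    area_cm2 = area_mm2 / 100.0
    yield_fraction = exp(-D0 * area_cm2)
    print(f"{name}: {area_mm2} mm^2 -> ~{yield_fraction:.0%} defect-free dies")
```

Even with this crude model the smaller die yields noticeably better, and partially defective dies can still be salvaged by fusing off a CU or two, which is the point: fully enabled Navi 14 should not be scarce.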
Why don't we wait for release drivers, and for the GPU itself to show up against the competition, before we call it a bad value, eh?
Because the 5500 in OEMs has been tested, which should tell you enough, but it's also easy enough to look at the 5700 and 5700 XT and extrapolate how the 5500 series will perform.
And the differences between reference and AIB cards were big enough to say they were exactly the same GPU, eh?
The major differences between reference and AIB models are in the performance of the coolers, in terms of acoustics and temps; the difference in actual performance is minor at best.