AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
As no one has brought this up yet, I'm just throwing it out here...

Perhaps all the performance optimization features are enabled (culling, rasterizer, etc...), and it needs each and every one of them just to keep up with Fury in performance per clock?
This is my worst nightmare if true, but perhaps they just pulled a Bulldozer? In their quest to get higher clocks for NCUs vs CUs, they failed to get them anywhere near the design target, yet took all the performance hits from this new high-clock design?



Don't get me wrong, I would absolutely hate it if it turned out to be true, but NCUs actually being slower per clock than CUs (in current games, before other optimizations) would at least explain the performance we're seeing.

After all, AMD has had working silicon for at least 7-8 months. If the consumer version had big driver improvements in store, it would make no sense to hide them. What would happen, they'd sell two fewer Frontier Editions to gamers?

If anything is in store for the gaming version, it will be that the top model is watercooled and can reach the 1600 MHz mark more easily. But overall it seems they missed their planned clock targets (be it due to process or something else).
You would be correct if we were seeing a per-clock, per-core downgrade compared to Fiji in compute.

All we may actually be seeing at this point is a slight per-clock, per-core increase in compute performance vs Fiji.
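As a rough sanity check of the per-clock question (illustrative numbers only, assuming roughly linear scaling with clock and identical CU counts):

```cpp
#include <cstdio>

int main() {
    // Fury X reference clock vs. an assumed Vega FE sustained clock.
    // Per-clock parity with Fiji (both 64 CU) would predict performance
    // scaling roughly linearly with clock speed.
    const double furyx_mhz  = 1050.0;
    const double vegafe_mhz = 1400.0; // assumed sustained clock, not the 1600 peak
    printf("Predicted speedup over Fury X at per-clock parity: %.2fx\n",
           vegafe_mhz / furyx_mhz); // ~1.33x
    return 0;
}
```

If the reviewed cards land near that ~1.33x line, the "it's just Fiji per clock" reading fits.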
 
Reactions: Gideon

Tup3x

Golden Member
Dec 31, 2016
1,012
1,002
136
I think there's definitely a reason why they are doing this water-cooling nonsense too. They need to extract every bit of juice out of the cards to be competitive (air clearly isn't enough). If that weren't the case, they wouldn't bother with water-cooled variants.

Currently things don't look good at all. Even if Vega suddenly gained a lot of extra performance, it still wouldn't be anywhere near enough to catch the 1080 Ti. I guess Pascal was an utter surprise for AMD last year.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
I think there's definitely a reason why they are doing this water-cooling nonsense too. They need to extract every bit of juice out of the cards to be competitive (air clearly isn't enough). If that weren't the case, they wouldn't bother with water-cooled variants.

Currently things don't look good at all. Even if Vega suddenly gained a lot of extra performance, it still wouldn't be anywhere near enough to catch the 1080 Ti. I guess Pascal was an utter surprise for AMD last year.
Theoretically, from what I understand, the architectural improvements (improved load balancing, twice the geometry throughput thanks to primitive shaders, tile-based rasterization, and the new memory system) should make Vega twice as fast as it is right now.

Vega should fly with high-resolution (5K, 8K) textures thanks to all of those improvements. Why is that not happening? I have absolutely no idea. It's either software or drivers.
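To make the culling point concrete, here is a minimal CPU-side sketch (not AMD's implementation) of the back-face test a primitive shader could run before a triangle ever reaches the rasterizer; discarding such triangles early is where the claimed geometry-throughput gains would come from:

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Signed area of a screen-space triangle: a non-positive value means the
// triangle winds clockwise (back-facing, with CCW front faces) or is
// degenerate, so it can be culled before rasterization ever sees it.
float signedArea(Vec2 a, Vec2 b, Vec2 c) {
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

bool cullable(Vec2 a, Vec2 b, Vec2 c) {
    return signedArea(a, b, c) <= 0.0f;
}

int main() {
    Vec2 a{0, 0}, b{1, 0}, c{0, 1};
    printf("CCW triangle culled? %s\n", cullable(a, b, c) ? "yes" : "no"); // no
    printf("CW triangle culled?  %s\n", cullable(a, c, b) ? "yes" : "no"); // yes
    return 0;
}
```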
 
Reactions: Bacon1

Samwell

Senior member
May 10, 2015
225
47
101
Currently things don't look good at all. Even if Vega suddenly gained a lot of extra performance, it still wouldn't be anywhere near enough to catch the 1080 Ti. I guess Pascal was an utter surprise for AMD last year.

Maybe they can get to the 1080 Ti FE or even beat it, but that will only happen with factory-OCed water-cooled cards with a 375 W TDP. AMD is way too silent after these terrible benches.

Maybe they have a bug in their hardware that they can't fix easily? Like the draw-stream binning rasterizer being so broken that they can't fix it without a major redesign, which would take too much time, so they leave it disabled until the next GPUs.
 

Karnak

Senior member
Jan 5, 2017
399
767
136


Seems like tiled rendering is confirmed; this isn't good old GCN anymore.

Makes it all even weirder than before.


edit: tried trianglebin on my 290. It behaves completely differently; Vega is definitely tile-based.
Here is someone with a Fury. It makes no sense at all that Vega with TBR looks the same as Fiji without it. A months-old Fiji driver running on Vega, anyone?

https://gfycat.com/InsecureEagerKingbird
https://www.reddit.com/r/Amd/comments/6kdwea/vega_fe_doesnt_seem_to_be_doing_tiled/djleqwq/
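For anyone unfamiliar with what trianglebin visualizes: a binning rasterizer first sorts triangles into fixed-size screen tiles and then shades tile by tile, which changes the visible fragment ordering. A minimal CPU-side sketch of the binning step (illustrative tile size; not AMD's actual DSBR):

```cpp
#include <algorithm>
#include <cstdio>

struct Vec2 { float x, y; };

constexpr int TILE = 32; // illustrative tile size in pixels

// Map a triangle's screen-space bounding box to the tiles it may touch.
// A binning rasterizer queues the triangle in each of these tiles and
// later rasterizes tile by tile instead of triangle by triangle.
void binTriangle(Vec2 a, Vec2 b, Vec2 c, int screenW, int screenH) {
    int minTx = std::max(0, (int)std::min({a.x, b.x, c.x}) / TILE);
    int minTy = std::max(0, (int)std::min({a.y, b.y, c.y}) / TILE);
    int maxTx = std::min((screenW - 1) / TILE, (int)std::max({a.x, b.x, c.x}) / TILE);
    int maxTy = std::min((screenH - 1) / TILE, (int)std::max({a.y, b.y, c.y}) / TILE);
    for (int ty = minTy; ty <= maxTy; ++ty)
        for (int tx = minTx; tx <= maxTx; ++tx)
            printf("triangle binned into tile (%d, %d)\n", tx, ty);
}

int main() {
    binTriangle({10, 10}, {90, 20}, {40, 70}, 1920, 1080); // touches 9 tiles
    return 0;
}
```

On an immediate-mode GPU like the 290, fragments appear in triangle-submission order; on a binning design the order follows the tile walk, which is what the gfycat capture is showing.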
 
Reactions: Bacon1

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
Maybe they can get to the 1080 Ti FE or even beat it, but that will only happen with factory-OCed water-cooled cards with a 375 W TDP. AMD is way too silent after these terrible benches.

Maybe they have a bug in their hardware that they can't fix easily? Like the draw-stream binning rasterizer being so broken that they can't fix it without a major redesign, which would take too much time, so they leave it disabled until the next GPUs.
It's funny.

Why do you blame software problems on hardware? Has the Ryzen launch not taught anybody anything?
 
Reactions: Bacon1

beginner99

Diamond Member
Jun 2, 2009
5,224
1,598
136
I'm still on #teamrespin. That would explain a lot: performance, power draw, delay, low volume.

It's obviously only a guess, one many independently made, because it makes sense. Let's wait and see :>

Yeah, it's the only thing that really makes sense. They had hardware back in January, probably earlier, so six months already. And now one additional month of driver optimization should magically result in a huge performance uplift? Any gain is welcome, but what is needed is >20%.
 
Reactions: Phynaz and Head1985

Samwell

Senior member
May 10, 2015
225
47
101
It's funny.

Why do you blame software problems on hardware? Has the Ryzen launch not taught anybody anything?

Still not having your biggest feature of the generation working in the drivers one month before launch is just bad. The last month should be only for game optimization; everything else should already work. And releasing your highly anticipated, heavily hyped new architecture in such a state, with such drivers? You could say it's not for gaming, but then why do they publish gaming drivers on their homepage? AMD's launches are often bad, but this Vega launch seems to be worse than all the others before it.
 

Peicy

Member
Feb 19, 2017
28
14
81
Custom GTX 1080 Ti models consume 250-290 W, some of them ~320 W at peak. Some better GTX 1080 models draw ~230 W.

I really don't think people who spend $1000+ on a gaming PC care about ~60 W, especially when entire system consumption stays at 500 W or less. And I really don't know anyone who owns a $500+ GPU with an average 500 W PSU. People who have bought a GTX 980 or GTX 1080 are using 80+ Gold 700 W+ PSUs, even if entire system power consumption is ~400 W. Don't make a big deal of this 300 W.

The fact that the guy managed to run all the tests without issues on a 550 W PSU tells a lot. Sure, it's not good for a PSU to sit at 90-100% load, but good 700-750 W PSUs cost $100 and will be under 60-70% load (peak).
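For what it's worth, the load math checks out (assumed whole-system draw, illustrative only):

```cpp
#include <cstdio>

int main() {
    // Assumed whole-system draw under gaming load for a 300 W GPU build.
    const double systemDrawW = 450.0;
    const double psusW[] = {550.0, 700.0, 750.0};
    for (double psu : psusW)
        printf("%.0f W PSU -> %.0f%% load\n", psu, 100.0 * systemDrawW / psu);
    // 550 W -> 82%, 700 W -> 64%, 750 W -> 60%
    return 0;
}
```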

The difference is far greater than 60 W at the moment. Also, of course performance per watt matters, even when you have an 850 W PSU.

If you really expected Vega to be as efficient as Pascal, then I don't know what to say.
Factory-OCed 1080 and 1080 Ti models do draw a good amount of power... they are also much faster than Vega is at the moment. The TDP of the most basic, barebones 1080 (FE) is 180 W; the TDP of air-cooled Vega, which does not seem to hit 1600 MHz, is 300 W while delivering roughly the performance of a 1080 at the moment. That's not good.

Also, the water-cooled version has a TDP of 375 W and may or may not hit 1600 MHz stable. That's not good either in terms of performance/watt if they can't conjure up a miracle with performance gains through drivers.
The difference is far greater than 60 W at the moment, too. Also, of course performance per watt matters, even when you have an 850 W PSU.
They wanted to improve performance/watt. Naturally, you would expect more than a few percent from an architecture that's been hyped up so much by the company creating it.


I really wish there were some software optimization left that they just haven't enabled yet. But how likely is that? They have had working silicon for 7, 8, 9 months and still have not figured out how to use the biggest architectural advances? That just doesn't seem likely to me.



Also something funny: I used the word "m a g i c" without spaces earlier, and I couldn't post because it's considered spam xD
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
Still not having your biggest feature of the generation working in the drivers one month before launch is just bad. The last month should be only for game optimization; everything else should already work. And releasing your highly anticipated, heavily hyped new architecture in such a state, with such drivers? You could say it's not for gaming, but then why do they publish gaming drivers on their homepage? AMD's launches are often bad, but this Vega launch seems to be worse than all the others before it.
Drivers may expose the possibilities to the application, but the application may not be able to use the feature, because it was not designed to do so in the first place.

That is the main problem here.

We are looking at a more complex picture than most people think.
 

Peicy

Member
Feb 19, 2017
28
14
81
Drivers may expose the possibilities to the application, but the application may not be able to use the feature, because it was not designed to do so in the first place.

That is the main problem here.

We are looking at a more complex picture than most people think.
Are you thinking about game-specific optimization here?
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
Are you thinking about game-specific optimization here?
No. Specific hardware optimization in games.

Without it, it may just be another Fiji, apart from the improved load balancing, which should be automatic.

Primitive shaders and the memory system are the two specific optimizations that developers have to implement in their games for Vega to utilize them. At least that is what I understand at this point.

It is interesting that applications are able to "see" NONE of Vega's features. Primitive shaders - not apparent. A performance improvement from the High Bandwidth Cache Controller and the new memory system - not apparent. Tile-based rasterization - not apparent.

In its current state the GPU behaves exactly like an OC'ed Fiji, and nothing more. Which is far odder than people think.
 

jpiniero

Lifer
Oct 1, 2010
14,847
5,457
136
They wanted to improve performance/watt. Naturally, you would expect more than a few percent from an architecture that's been hyped up so much by the company creating it.

Having to increase voltage significantly to be able to increase clock speed can certainly undermine that.
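A rough illustration of why: dynamic power scales approximately with frequency times voltage squared (P ~ f * V^2), so clock gains that need extra voltage get expensive fast. Toy numbers, purely illustrative:

```cpp
#include <cstdio>

int main() {
    // Classic CMOS dynamic-power approximation: P_dyn ~ f * V^2.
    // Illustrative case: +14% clock that requires +10% voltage.
    const double fScale = 1.14, vScale = 1.10;
    printf("Power scales by %.2fx for a 14%% clock gain\n",
           fScale * vScale * vScale); // ~1.38x power for ~14% performance
    return 0;
}
```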
 

Peicy

Member
Feb 19, 2017
28
14
81
No. Specific hardware optimization in games.

Without it, it may just be another Fiji, apart from the improved load balancing, which should be automatic.

Primitive shaders and the memory system are the two specific optimizations that developers have to implement in their games for Vega to utilize them. At least that is what I understand at this point.

It is interesting that applications are able to "see" NONE of Vega's features. Primitive shaders - not apparent. A performance improvement from the High Bandwidth Cache Controller and the new memory system - not apparent. Tile-based rasterization - not apparent.

In its current state the GPU behaves exactly like an OC'ed Fiji, and nothing more. Which is far odder than people think.
It's possible that many features need to be enabled on a per-game basis or need developer input. That would not be a good thing, though, since AMD's driver team is not that big compared to Nvidia's, and the resources of game development teams are stretched as it is.
Then again, why wouldn't they work with Futuremark and implement specific features in 3DMark, the single most used gaming-centric benchmark, early on? That would just seem really, really dumb.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
What you are talking about has nothing to do with drivers.

Drivers can only make the features visible to the application; it is the developer's job to use them. If they do not use them, the hardware will not benefit from them.

It is a form of hardware vendor lock-in: specific optimization for specific hardware. It is only possible in Vulkan, DirectX 12, and Metal.

They did work with one company before the Vega release: Bethesda. Prey should be a Vega-optimized game. So far it baffles me that NOBODY has thought about this and tested Prey.
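As a sketch of what that developer-side opt-in looks like: in Vulkan an application must enumerate and explicitly enable a vendor extension before using it; the driver merely exposing a feature does nothing by itself. The extension name below is hypothetical (AMD never published one for primitive shaders) and is used purely for illustration:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical extension name -- no such public extension exists.
constexpr const char* kHypotheticalExt = "VK_AMD_primitive_shader";

bool deviceExposesExtension(VkPhysicalDevice dev, const char* name) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0) return true;
    return false;
}

int main() {
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t devCount = 0;
    vkEnumeratePhysicalDevices(instance, &devCount, nullptr);
    std::vector<VkPhysicalDevice> devs(devCount);
    vkEnumeratePhysicalDevices(instance, &devCount, devs.data());

    // Even if the query succeeded, the app would still have to pass the
    // name to vkCreateDevice via ppEnabledExtensionNames to use it.
    for (auto dev : devs)
        printf("extension exposed: %s\n",
               deviceExposesExtension(dev, kHypotheticalExt) ? "yes" : "no");

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```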
 

SpaceBeer

Senior member
Apr 2, 2016
307
100
116
@Peicy
Is it confirmed that air-cooled Vega FE can't hold 1600 MHz? Shouldn't custom models (better heatsink, 2-3 fans) be able to hit 1600-1650 MHz stable at 300 W power draw? So if the 56 CU model has the same performance as the GTX 1080 (and the full 64 CU chip is 10% better) at the same price and only ~70 W higher TDP, is that really such a big deal? How can 230 W be amazingly efficient and 300 W a total disaster?

This looks a lot like the Fiji launch, where the 56 CU Fury wasn't much faster than the 44 CU Grenada (~10% at 1440p), even though it had a more than 20% advantage in raw power (TFLOPs). So probably the biggest (not to say the only) mistake AMD made with Vega is that they didn't launch a mid-size chip first (44-48 CUs, 300-350 mm², probably GDDR) with performance around the GTX 1070 level, probably slightly faster. It would be cheaper to produce, test the architecture, gain more customers...
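That raw-throughput gap is easy to reproduce (FP32 rate = CUs x 64 lanes x 2 ops x clock; reference clocks assumed at ~1000 MHz for Fury and ~1050 MHz for Grenada):

```cpp
#include <cstdio>

int main() {
    // FP32 TFLOPs = CUs * 64 lanes * 2 ops (FMA) * clock (GHz) / 1000.
    const double fury    = 56 * 64 * 2 * 1.000 / 1000.0; // ~7.17 TFLOPs
    const double grenada = 44 * 64 * 2 * 1.050 / 1000.0; // ~5.91 TFLOPs
    printf("Fury %.2f vs Grenada %.2f TFLOPs -> %.2fx raw advantage\n",
           fury, grenada, fury / grenada); // ~1.21x, yet only ~10% faster in games
    return 0;
}
```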
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I don't know why everyone is getting so angry about Vega.

If RX Vega is between a 1070 and 1080 for $450 it will sell. (And be a good deal) Most people don't buy $700 cards.
That would simply put it at the relative price/perf level of the GTX 1070 and 1080. The GTX 1080 is faster for slightly more money, with much better perf/watt as well as better mining potential.
The GTX 1070 is slightly slower, but cheaper, with better mining potential and better perf/watt.

That's the current state of things. At current performance levels, even $450 would be a bad deal purely on a GPU-vs-GPU basis.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
If Vega did not bring any optimizations, then the Raven Ridge APU's GPU would not be 40% faster than Bristol Ridge's while running at 30% lower core clocks.

A 3.0/3.3 GHz 4C/8T + 11 CU (800 MHz) 35 W TDP APU vs a 4C/4T + 8 CU (1108 MHz) 65 W TDP APU, and the GPU in Raven Ridge is 40% faster, according to AMD.

The TFLOPs throughput of both APUs is exactly the same: 1.13 TFLOPs. So what would explain that, other than geometry - that is, gaming - performance?
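The quoted figures do check out with the standard GCN throughput formula:

```cpp
#include <cstdio>

// FP32 rate = CUs * 64 lanes * 2 ops (FMA) * clock, using the clocks and
// CU counts quoted above.
double tflops(int cus, double ghz) { return cus * 64 * 2 * ghz / 1000.0; }

int main() {
    printf("Raven Ridge:   11 CU @ 0.800 GHz -> %.2f TFLOPs\n", tflops(11, 0.800));
    printf("Bristol Ridge:  8 CU @ 1.108 GHz -> %.2f TFLOPs\n", tflops(8, 1.108));
    // Both print 1.13, so any gaming gap can't come from raw shader throughput.
    return 0;
}
```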
 
Reactions: Bacon1

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Why? So many seem to think AMD must beat a 1080 Ti to accomplish anything. AMD just needs something to sell in the sub-$500, 1070/1080 and upcoming 2070/2080 segment. I certainly don't understand how their flagship can appear to be so far behind Nvidia's top card after all these delays, but it really doesn't matter in the long term. AMD now has Ryzen to prop them up as they make improvements. They just need actual products in certain high-selling market segments. 1080 performance for $400 will last them another year if they can deliver that.

The problem is that Nvidia, on a generational change, bumps each performance level down one chip. Thus, if Vega can barely beat the 1080 (despite being a much larger chip with >60% higher power usage), then it will be competing not with GV104 but with GV106. That means it will have to go up against a ~$250 "GTX 2060" (or 1160 or whatever) that has 1080 performance plus maybe 5-10% and uses ~120 W. There's no way AMD can make a profit on that, and how many people will want a card that uses literally OVER TWICE the power of the equivalent competition?

AMD is now more than a full generation behind Nvidia and they seem to be regressing. This is Bulldozer-level worrying - they need to purge the R&D team and start over from a clean sheet of paper.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
If Vega did not bring any optimizations, then the Raven Ridge APU's GPU would not be 40% faster than Bristol Ridge's while running at 30% lower core clocks.

A 3.0/3.3 GHz 4C/8T + 11 CU (800 MHz) 35 W TDP APU vs a 4C/4T + 8 CU (1108 MHz) 65 W TDP APU, and the GPU in Raven Ridge is 40% faster, according to AMD.

The TFLOPs throughput of both APUs is exactly the same: 1.13 TFLOPs. So what would explain that, other than geometry - that is, gaming - performance?

Vega the architecture is not the same thing as Vega 10 the chip.
It's possible that the terrible performance observed from Vega 10 is due to bottlenecks, like having only 4 shader engines (there should be 8) and 64 ROPs (there should be at least 96, if not 128). Raven Ridge would then not have these problems, nor would a smaller Vega 11 chip with a more balanced layout. Still, it's hard to imagine why they would not have fixed problems that were well known in 2015 when the Fury X was released. Even Raja tacitly admitted (in an interview with TechReport) that there were bottlenecks, pleading the 28nm reticle limit as an excuse. Now that's gone and the transistor budget went way up, but the exact same bottlenecks remain... how does that happen?
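To put the ROP argument in numbers, a crude peak-fillrate estimate (ROPs x clock; reference/boost clocks, ignoring memory bandwidth):

```cpp
#include <cstdio>

// Peak pixel fillrate = ROPs * clock. An upper bound that ignores memory
// bandwidth, but it shows why 64 ROPs looks thin at this performance tier.
double gpixPerSec(int rops, double ghz) { return rops * ghz; }

int main() {
    printf("Fury X        : %5.1f Gpix/s\n", gpixPerSec(64, 1.050)); //  67.2
    printf("Vega @ 1.6 GHz: %5.1f Gpix/s\n", gpixPerSec(64, 1.600)); // 102.4
    printf("GTX 1080 Ti   : %5.1f Gpix/s\n", gpixPerSec(88, 1.582)); // 139.2
    return 0;
}
```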
Of course, the other possibility is that they're lying about Raven Ridge's GPU performance, like they've been lying about Vega all along. One thing we've learned from this fiasco is that Raja is as much a liar as JF-AMD was. Not a word coming out of his mouth can be trusted. The CPU division has improved massively on both the technical and marketing sides. RTG - not so much.
 
Reactions: crisium

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
Vega the architecture is not the same thing as Vega 10 the chip.
It's possible that the terrible performance observed from Vega 10 is due to bottlenecks, like having only 4 shader engines (there should be 8) and 64 ROPs (there should be at least 96, if not 128). Raven Ridge would then not have these problems, nor would a smaller Vega 11 chip with a more balanced layout. Still, it's hard to imagine why they would not have fixed problems that were well known in 2015 when the Fury X was released. Even Raja tacitly admitted (in an interview with TechReport) that there were bottlenecks, pleading the 28nm reticle limit as an excuse. Now that's gone and the transistor budget went way up, but the exact same bottlenecks remain... how does that happen?
Of course, the other possibility is that they're lying about Raven Ridge's GPU performance, like they've been lying about Vega all along. One thing we've learned from this fiasco is that Raja is as much a liar as JF-AMD was. Not a word coming out of his mouth can be trusted. The CPU division has improved massively on both the technical and marketing sides. RTG - not so much.
Scheduling is solved. There is no bottleneck in terms of load or scheduling of the GPU, which is actually observable in the tests done by reviewers.

Edit: Or not. It may also be that the improved load balancing is part of the primitive shaders feature, and possibly requires reworking the application in order to utilize it.

The programmable geometry pipeline doubles geometry performance, but requires reworking the application.

The High Bandwidth Cache Controller also requires reworking the application to utilize it.

So, there you have it. Possible reasons why AMD Radeon Vega is performing the way it is right now.

The good part is that after applications are reworked, we should see an improvement of between 50 and 100% over the GPU's current level of performance, with the biggest gains in high-resolution texture workloads, because of Vega's improved culling techniques.
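For intuition on the HBCC point: AMD describes it as treating local VRAM as a cache over a larger address space, with pages faulted in on demand. A loose CPU-side analogy (this is the familiar OS mechanism, not AMD's implementation; the file path is a placeholder) is demand-paged file mapping:

```cpp
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Map a large file; the OS faults pages into RAM only on first touch,
    // much as HBCC is described as faulting VRAM pages in on demand.
    int fd = open("big_dataset.bin", O_RDONLY); // placeholder path
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    const unsigned char* base = static_cast<const unsigned char*>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Touch one byte per 64 MiB: only those pages actually become resident.
    unsigned long sum = 0;
    for (off_t off = 0; off < st.st_size; off += 64 << 20)
        sum += base[off];
    printf("sampled checksum: %lu\n", sum);

    munmap((void*)base, st.st_size);
    close(fd);
    return 0;
}
```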
 
Last edited:

SpaceBeer

Senior member
Apr 2, 2016
307
100
116
@JDG1980
If nVidia puts GTX 1080 performance at ~$250 in the next 6-12 months, they will shoot themselves in the foot, since no one with a 1080p monitor (95% of users) would bother to spend more than that on a GPU, even if they are looking for a high-FPS experience in popular multiplayer games.
 

Mopetar

Diamond Member
Jan 31, 2011
8,024
6,484
136
The problem is that Nvidia, on a generational change, bumps each performance level down one chip. Thus, if Vega can barely beat the 1080 (despite being a much larger chip with >60% higher power usage), then it will be competing not with GV104 but with GV106. That means it will have to go up against a ~$250 "GTX 2060" (or 1160 or whatever) that has 1080 performance plus maybe 5-10% and uses ~120 W. There's no way AMD can make a profit on that, and how many people will want a card that uses literally OVER TWICE the power of the equivalent competition?

If Vega is a complete mess, Nvidia will probably hold back consumer Volta, as they are under no pressure and have better margins in the professional and HPC markets.
 

Krteq

Senior member
May 22, 2015
993
672
136
So, the draw-stream binning rasterizer is not currently supported by the driver. What else is unsupported at the moment? What has AMD been doing since January?
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
So, the draw-stream binning rasterizer is not currently supported by the driver. What else is unsupported at the moment? What has AMD been doing since January?
It can be supported by the driver.

Nobody knows why the application is not using it. It is suggested on the Beyond3D forum that it may just be "another" technique the AMD GPU has to improve performance, one that developers are not required to use. Is this the reason?

I don't know.
 