AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


urvile

Golden Member
Aug 3, 2017
1,575
474
96
I just got a Gigabyte AORUS Extreme with an attached WB and it is a GREAT GPU. Pricey, but great in a custom loop. It is in my 5960X rig below. I moved my GTX 1080 with a WB to my Ryzen 1800X rig, since it will match the Vega 64 right now.

I have a 3440x1440 screen and I don't think Vega is going to cut it, but I know the AORUS will because I have seen the benchmarks. Decisions, decisions. All I know is I need to retire my trusty Fury X cards.
 

urvile

Golden Member
Aug 3, 2017
1,575
474
96
Unless you're desperate to replace immediately, at least wait until we get full in-depth reviews from multiple sources, allowing you to make a truly informed decision. There should be considerable benchmarking done at 1440p.

Yeah, I will be waiting for Vega 64 benchmarks. My screen has FreeSync, so I would prefer Vega. It's only about a week off, I think. The waiting is killing me...
 

Krteq

Senior member
May 22, 2015
993
672
136
Mike Mantor of RTG says HBM isn't required for a product to be classified as HBCC-compatible. When asked whether notebook and APU Vega would use HBM2, given that ALL Vega chips are going to use the HBCC feature, he said not necessarily, and that other memory types (like GDDR5) will qualify.
Huh, can you please post a link to that video/interview?

Software matures with time, compared to perfect software from the get-go; yeah, I know which one is supreme alright!
Well, in a decade with AMD/ATi cards/drivers I have never seen as many game-breaking bugs, crashes and hangs caused by the driver, as I have seen with NV cards/drivers lately. That's all I have to say about the "supreme software" claim.
 
Reactions: Kuosimodo

Snarf Snarf

Senior member
Feb 19, 2015
399
327
136
nothing else does, though... real-world applications show an average performance of 90% for a fraction of the price.

In fact, that Tom's review linked above even goes as far as to state that SpecView shouldn't be considered valid, because it's outdated and both camps artificially inflate their numbers in some benches, so it doesn't reflect reality. In the tests done there, Vega competes very well, in fact beating the P6000 in some tests while also being 1/5th the price.

As for what @Glo. is talking about, Raja talked briefly about it in his AMA on Reddit, and his answer very much read like "when coded for specifically in Vulkan or DX12, Vega will shine," so it is still possible that there is a lot to tap into; the real question is who is actually going to bother?

If there were a patch coming for AAA titles, there would be announcements just like the Ashes Ryzen patch. DICE and Bethesda have said nothing about a special Vega-optimized patch for any games, which means either that one isn't coming, or that it's not coming any time soon. So the question is, if AMD's biggest game developer friends aren't going to push a Vega patch out, who is actually going to bother to write Vega-specific code paths for Primitive Shaders?

@Glo. I appreciate your enthusiasm for the underlying workings of GPUs, but we all need to accept that while AMD has been technically superior in some ways before, it doesn't often translate into increased performance in mainstream products. Primitive Shaders may well deliver a 40% performance increase in geometry-bound scenarios, but with a 20% market share and not enough engineers to send out to help game developers write this code, it's also entirely possible we may never see a true implementation of Primitive Shaders. This is why people are arguing with you: a lot of them are sick of AMD having great technology that doesn't translate into performance that matters to them. I personally have no problem with your optimism regarding AMD's uarch prowess; I just think that when you're the little guy and you have to ask people to help you out, it usually doesn't work out well for you.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Blender, which I have already quoted, does not agree with you.
Blender is not a game; it's a pure ALU workout that suits AMD's architecture well. Games, though, not so much.
Clock speed does not give Vega that much, because of software and drivers.
Clock-speed gains have nothing to do with drivers. That's a purely hardware thing.
If Vega has a longer pipeline, if the Draw Stream Binning Rasterizer is not working with Vega FE,
(sigh) The Draw Stream Binning Rasterizer was working in AMD's Vega 64 slides; they tested the game with it and it didn't substantially increase anything. In fact, AMD went out of their way to indicate so to Anand.
if load balancing is not working because of the not-ready BIOS of the GPU, and its voltage control over different parts of the GPU
This is related to power consumption, not performance, and guess what? AMD gave its TDP with all of these enabled, and it's still freaking high (295 W for air, 345 W for water); we've seen it in Vega FE too.
If you praise Nvidia for such superior hardware, why can't you see that AMD solving one bottleneck, geometry throughput, by 2.5 times will increase the Vega architecture's performance dramatically?
Because AMD themselves can't see that. I would be a fool to believe each new piece of hardware is going to pick up 20% more performance through drivers. That's unprecedented and just doesn't happen. The FX 5000 didn't manage it, the HD 2900 didn't manage it. Grow up.
Currently Vega is registering 4 triangles per clock. What will happen to performance when developers implement Primitive Shaders and it registers 10 triangles per clock? What effect will this have on the performance of the GPU? Simple question, simple answer.
Probably 10% more, or even less than that. Experience tells you that doing something in hardware is far better than doing it in software. Primitive Shaders will use ALUs to reach high polygon throughput, which means stressing parts of the system that would otherwise be free to do something else. I.e., it's a trade-off, not an immediate win like how NV does it.
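To put rough numbers on the disagreement: whether the answer is "10%" or "40%" mostly comes down to how geometry-bound a frame actually is. A rough Amdahl's-law style sketch (all fractions hypothetical, Python just for the arithmetic):

```python
# Hypothetical illustration: a jump from 4 to 10 triangles/clock only speeds
# up the portion of frame time that is actually geometry-limited.

def frame_speedup(geometry_fraction, geometry_speedup):
    # geometry_fraction: share of frame time that is geometry-bound (0..1)
    # geometry_speedup: factor by which that portion gets faster (e.g. 10/4)
    new_time = (1 - geometry_fraction) + geometry_fraction / geometry_speedup
    return 1 / new_time

for frac in (0.1, 0.2, 0.4):
    print(f"{frac:.0%} geometry-bound -> {frame_speedup(frac, 10 / 4):.2f}x overall")
# ~1.06x, ~1.14x, ~1.32x: the payoff depends almost entirely on how
# geometry-bound the game in question really is.
```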
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Tile-based rasterization allows you to lift this problem, because you are basically culling unused parts of the displayed image.

That's not good enough when (1) it requires per-application support from developers, and (2) GM200 and GP102 can actually render (not just cull) 6 triangles/clock to Vega's 4.
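The raw triangle-rate arithmetic behind that point, as a quick sketch (boost clocks are approximate, so treat these as ballpark figures):

```python
# Ballpark peak triangle rates; clock speeds are approximate boost clocks.
def gtris_per_sec(tris_per_clock, clock_ghz):
    return tris_per_clock * clock_ghz

print("Vega 64:", gtris_per_sec(4, 1.5), "Gtris/s")   # ~6.0
print("GP102  :", gtris_per_sec(6, 1.6), "Gtris/s")   # ~9.6
```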

But with small additions. Vega is the foundation for all future AMD GPUs, at least on an architectural level.

If this is true, then AMD has no future in gaming GPUs.

Remember Bulldozer. You don't get to be a special snowflake when you have <20% market share.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I do not see a reason why an architecture that has higher throughput, but requires software optimization, would be slower than a weaker architecture in optimized software.

The only optimized software will be what AMD pays for. Why else would a profit-oriented game developer focus on a tiny number of users? Remember, all of AMD's discrete GPUs combined don't even add up to 20% market share. Vega will have essentially zero market share at launch.
 

xpea

Senior member
Feb 14, 2014
449
150
116
And one last bit. Primitive Shaders allow GCN5 to register 10 triangles/clock with 4 shader engines. This alone should make Vega 40% faster per clock than Fiji. All of the architectural improvements, together with higher clock speeds, should make Vega two times faster than the Fury X, and around 30% faster than the GTX 1080 Ti.
I keep hearing that, but you are dreaming. A primitive shader will improve geometry performance, yes, but it will take resources from the shader array, thus lowering ALU throughput. Basically, you use your resources for a different purpose. No free lunch here. The 1080 Ti will still be much faster...
 

DeeJayBump

Member
Oct 9, 2008
60
63
91
.....If there were a patch coming for AAA titles, there would be announcements just like the Ashes Ryzen patch. DICE and Bethesda have said nothing about a special Vega-optimized patch for any games, which means either that one isn't coming, or that it's not coming any time soon. So the question is, if AMD's biggest game developer friends aren't going to push a Vega patch out, who is actually going to bother to write Vega-specific code paths for Primitive Shaders?...

Or it could mean that any Vega-specific optimizations/patches [game-engine/dev-side or otherwise] simply won't be disclosed until they are released and/or available to the gaming public.
 
Reactions: Kuosimodo

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The only optimized software will be what AMD pays for. Why else would a profit-oriented game developer focus on a tiny number of users? Remember, all of AMD's discrete GPUs combined don't even add up to 20% market share. Vega will have essentially zero market share at launch.

This proves you do not understand how game developers work today. Most AAA titles are developed console-first and the PC version is basically a port. So when developers add features to games, they do it to get the maximum performance from the consoles: the PS4 Pro and Xbox One X. Features like rapid packed math are found in the PS4 Pro, Xbox One X, and Vega graphics cards. Unsurprisingly, rapid packed math is the first major PS4 Pro / Xbox One X / Vega feature that Bethesda and Ubisoft are adding in titles like Wolfenstein II: The New Colossus and Far Cry 5.
 

Rasterizer

Member
Aug 6, 2017
30
48
41
Because AMD themselves can't see that. I would be a fool to believe each new piece of hardware is going to pick up 20% more performance through drivers. That's unprecedented and just doesn't happen.

Well, I wouldn't necessarily say that in this case, unless you think it's impossible that RTG will fix the 20% memory bandwidth regression in Vega vs. Fiji as tested on Vega FE...
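For context, the theoretical peaks from the published memory specs are actually close, which is part of why a ~20% measured gap looks like something fixable in software/firmware rather than a hardware limit. A minimal back-of-the-envelope check (data rates are the published specs):

```python
# Theoretical peak bandwidth from bus width and per-pin data rate.
def peak_gb_per_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

fiji_hbm  = peak_gb_per_s(4096, 1.0)    # Fury X: 4096-bit HBM1 @ 1.0 Gbps   -> 512 GB/s
vega_hbm2 = peak_gb_per_s(2048, 1.89)   # Vega FE: 2048-bit HBM2 @ ~1.89 Gbps -> ~484 GB/s

print(fiji_hbm, vega_hbm2)  # only ~5% apart on paper
```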

 

Elixer

Lifer
May 7, 2002
10,376
762
126
To sum up this wall of text: in DX11 games, Vega will at best be around GTX 1080 performance. With properly optimized software in DX12 and Vulkan, it will be 30% faster than the GTX 1080 Ti.
If you take a game that is optimized for both GCN & GameWorks (good luck with that!) in DX12/Vulkan, you aren't going to see Vega be 30% faster than a 1080 Ti.

This proves you do not understand how game developers work today. Most AAA titles are developed console-first and the PC version is basically a port. So when developers add features to games, they do it to get the maximum performance from the consoles: the PS4 Pro and Xbox One X. Features like rapid packed math are found in the PS4 Pro, Xbox One X, and Vega graphics cards. Unsurprisingly, rapid packed math is the first major PS4 Pro / Xbox One X / Vega feature that Bethesda and Ubisoft are adding in titles like Wolfenstein II: The New Colossus and Far Cry 5.
That isn't how game developers work. They are not going to add a specific feature for company A's product unless company A pays them; it is that simple.
The rules for consoles are completely different than on PCs; you can't really compare what they are doing with consoles to PCs.

So, from a game developer's point of view, they want the widest market out there, and they are not going to add a special code path that excludes the 99.9% of the market that doesn't do 2 FP16 ops per cycle, without getting $$$ for it.

FP16 is supported by Polaris, GCN 1.2, Maxwell (actually faster than Pascal there), and Pascal (but NVIDIA really gimped it on GP104); however, those parts don't do 2 ops per cycle.
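To put rough numbers on why 2 FP16 ops per cycle matters at all, a quick sketch (ALU counts are the published specs; clocks are approximate boost clocks, so treat the TFLOPS as illustrative):

```python
# Theoretical throughput: ALUs * clock * 2 (FMA) * ops-per-cycle.
def tflops(alus, clock_ghz, ops_per_cycle=1):
    return alus * clock_ghz * 2 * ops_per_cycle / 1000.0

vega64_fp32  = tflops(4096, 1.5)                    # ~12.3 TFLOPS
vega64_fp16  = tflops(4096, 1.5, ops_per_cycle=2)   # ~24.6 TFLOPS with packed math
gtx1080_fp32 = tflops(2560, 1.73)                   # ~8.9 TFLOPS
gtx1080_fp16 = tflops(2560, 1.73) / 64              # GP104 runs native FP16 at 1/64 rate

print(vega64_fp32, vega64_fp16, gtx1080_fp32, gtx1080_fp16)
```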
 

DeeJayBump

Member
Oct 9, 2008
60
63
91
RE: Vega-specific game engine optimizations/patches

A few more thoughts. AMD themselves (in the recent Chris Hook interview with [H]) have stated that Vega is late. What if, although not yet publicly disclosed, some of the major game-engine devs were ready with some of their Vega-specific optimizations/code for Vega RX's originally expected release? And if that is the case, what if, in the time that has passed since Vega RX's originally expected release, those same game-engine devs have further optimized their engines for Vega performance?
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
A few more thoughts. AMD themselves (in the recent Chris Hook interview with [H]) have stated that Vega is late. What if, although not yet publicly disclosed, some of the major game-engine devs were ready with some of their Vega-specific optimizations/code for Vega RX's originally expected release? And if that is the case, what if, in the time that has passed since Vega RX's originally expected release, those same game-engine devs have further optimized their engines for Vega performance?
I would have expected that with Prey, since the marketing coincides with Vega. It doesn't look like it has any of the special Vega features, though.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
RE: Vega-specific game engine optimizations/patches

A few more thoughts. AMD themselves (in the recent Chris Hook interview with [H]) have stated that Vega is late. What if, although not yet publicly disclosed, some of the major game-engine devs were ready with some of their Vega-specific optimizations/code for Vega RX's originally expected release? And if that is the case, what if, in the time that has passed since Vega RX's originally expected release, those same game-engine devs have further optimized their engines for Vega performance?

I'd have hoped these partners would have given AMD a heads-up and let AMD use it in their slides. Nothing like some OFFICIAL AMD slides showing highly positive numbers to get people going and looking forward to a huge launch. Instead they chose to leave AMD hanging, showing a slide deck with, even at its best, not very encouraging numbers, followed by two weeks of sour grapes and failed-parade cleanup.

AMD needs better partners if they do have secret sauce somewhere.
 

Snarf Snarf

Senior member
Feb 19, 2015
399
327
136
I'd have hoped these partners would have given AMD a heads-up and let AMD use it in their slides. Nothing like some OFFICIAL AMD slides showing highly positive numbers to get people going and looking forward to a huge launch. Instead they chose to leave AMD hanging, showing a slide deck with, even at its best, not very encouraging numbers, followed by two weeks of sour grapes and failed-parade cleanup.

AMD needs better partners if they do have secret sauce somewhere.

If this is anything like the Ryzen launch, the game devs are going to say they didn't get hardware in their hands to even start working on it until the embargo lifted. Probably 3-4 weeks for the really fast devs to get something out, maybe?
 
Reactions: estarkey7

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Well, I wouldn't necessarily say that in this case, unless you think it's impossible that RTG will fix the 20% memory bandwidth regression in Vega vs. Fiji as tested on Vega FE...

I don't know if you realize this, but the bandwidth score in that benchmark is for PCIe, not the GPU's own memory. So the difference in scores could be due to the CPU or motherboard used, since the PCIe controller is typically located on the CPU die these days.
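An order-of-magnitude sanity check on that distinction (the PCIe figures are the standard published specs; the HBM2 number is the theoretical peak from above):

```python
# PCIe 3.0 x16: 16 lanes * 8 GT/s, with 128b/130b encoding overhead.
pcie3_x16_gb_s = 16 * 8 / 8 * 128 / 130   # ~15.8 GB/s per direction
vega_hbm2_gb_s = 484                      # on-card HBM2 theoretical peak

print(round(pcie3_x16_gb_s, 1), vega_hbm2_gb_s)
# Host<->GPU transfers are roughly 30x slower than local memory, so a
# "bandwidth" score that measures PCIe says little about the GPU's own
# memory system.
```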
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Because AMD themselves can't see that. I would be a fool to believe each new piece of hardware is going to pick up 20% more performance through drivers. That's unprecedented and just doesn't happen. The FX 5000 didn't manage it, the HD 2900 didn't manage it. Grow up.

Probably 10% more, or even less than that. Experience tells you that doing something in hardware is far better than doing it in software. Primitive Shaders will use ALUs to reach high polygon throughput, which means stressing parts of the system that would otherwise be free to do something else. I.e., it's a trade-off, not an immediate win like how NV does it.

Even though you're telling people to "grow up," you're the one exhibiting a childlike degree of certainty. Big driver improvements are rare, but they've happened before. GeForce 3 improved by about that much. Tahiti probably gained 35-40% or so thanks to drivers. Remember the 7970 launch? It wasn't that much faster than the GTX 580 at the time, even losing in some benchmarks. Now it's nipping at the heels of the OG Titan. Even Hawaii probably got close to a 20% improvement over its lifetime.

The unique situation with Vega is that we know in advance that the drivers are substandard, given that AMD actually had to disable functionality for the Frontier Edition.

Will Vega see relatively large gains from drivers? Probably. Are those gains going to be in the realm of 20% or higher? Quite possible, but probably not right away. Will Vega pick up additional performance versus its rivals thanks to its forward looking features? Obviously. Packed math is bound to see wide adoption eventually and that alone will give a huge boost. Will any of this happen in time to help AMD's market position? Probably not. Is the launch far too close to the Volta generation for comfort? Unfortunately, it looks that way.
 

Krteq

Senior member
May 22, 2015
993
672
136
Probably 10% more, or even less than that. Experience tells you that doing something in hardware is far better than doing it in software. Primitive Shaders will use ALUs to reach high polygon throughput, which means stressing parts of the system that would otherwise be free to do something else. I.e., it's a trade-off, not an immediate win like how NV does it.
I keep hearing that, but you are dreaming. A primitive shader will improve geometry performance, yes, but it will take resources from the shader array, thus lowering ALU throughput. Basically, you use your resources for a different purpose. No free lunch here. The 1080 Ti will still be much faster...
Guys, do you even know how geometry is processed through the pipeline stages currently? You are already using Hull, Domain, and Geometry shaders, processed on the ALUs.

A primitive shader is in fact a combination of all of these, merged into a single simplified shader. So it's the opposite of your claims; it's actually less ALU-demanding than the current approach.
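A toy sketch of that claim (purely conceptual Python, not AMD's API or real driver code; the per-stage costs and the culling rate are made up just to illustrate the shape of the argument):

```python
import random

# Hypothetical per-primitive ALU "cost" of the classic front end: vertex,
# hull, domain and geometry stages all run before primitives are culled.
def cost_classic(prim):
    return 4

# Hypothetical merged primitive-shader stage: a cheap position-only pass
# first, with back-facing/off-screen primitives discarded before the rest
# of the shading work is done.
def cost_merged(prim):
    return 1 if prim["culled"] else 2

# Assume half the triangles in a frame end up culled (an illustrative number).
frame = [{"culled": random.random() < 0.5} for _ in range(100_000)]

print("classic stages  :", sum(cost_classic(p) for p in frame))
print("primitive shader:", sum(cost_merged(p) for p in frame))
# With early culling, the merged path does less total ALU work per frame.
```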
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
That isn't how game developers work. They are not going to add a specific feature for company A's product unless company A pays them; it is that simple.
The rules for consoles are completely different than on PCs; you can't really compare what they are doing with consoles to PCs.

So, from a game developer's point of view, they want the widest market out there, and they are not going to add a special code path that excludes the 99.9% of the market that doesn't do 2 FP16 ops per cycle, without getting $$$ for it.

FP16 is supported by Polaris, GCN 1.2, Maxwell (actually faster than Pascal there), and Pascal (but NVIDIA really gimped it on GP104); however, those parts don't do 2 ops per cycle.

I hate to break it to you, but you don't know how game development works. Developers will include a feature if they feel the benefits outweigh the costs, nothing more and nothing less. Getting assistance from an IHV increases the benefit. You're looking at the subject in absolute terms, but nothing could be further from the truth. You have to judge each situation individually.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Tahiti probably gained 35-40% or so thanks to drivers. Remember the 7970 launch? It wasn't that much faster than the GTX 580 at the time, even losing in some benchmarks. Now it's nipping at the heels of the OG Titan. Even Hawaii probably got close to a 20% improvement over its lifetime.
Unsubstantiated performance numbers, with no source or truth to them. Tahiti close to the OG Titan? Yeah, in a dream world perhaps. The 7970 remains just as fast as the GTX 680 (GK104).
https://www.youtube.com/watch?v=BfSi-Z8r12M

What you're talking about is the regression of GK110 (780, 780 Ti, Titan) compared to the Hawaii cards. Notice it's a regression: AMD cards didn't improve with drivers, the big Kepler cards just regressed due to insufficient compute performance.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Unsubstantiated performance numbers, with no source or truth to them. Tahiti close to the OG Titan? Yeah, in a dream world perhaps. The 7970 remains just as fast as the GTX 680 (GK104).

While I saw them somewhere or other recently, 7970 vs. Titan benchmarks with current drivers are quite hard to find, and not actually on topic, so you'll excuse me if I get right to the point of the discussion instead of taking an hour or so to dig them up.

While the 7970 most certainly has not remained merely as fast as the GTX 680, the 680 is also a moving target, itself getting driver improvements over the same time span, so that doesn't fit our purpose. While finding a perfectly like-for-like comparison is pretty difficult, a quick search of AT reviews found a couple of more or less good points of comparison:

BF3 1080p, ultra, 4XAA:

Dec 2011: 48.8 FPS

Oct 2013: 68.7 FPS

That's a 41% speedup in less than two years, and keep in mind the drivers were still improving at that point.

Another example, albeit less like-for-like: compare the 7970 trading blows with the 580 at launch in Civ 5, versus beating the ever-loving snot out of it in Civ:BE by the time of the Fury X launch.

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/24

http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/16

I don't think you realize exactly how bad the GCN launch drivers were. Now, I'm not saying that Vega is necessarily going to see the same massive boost that Tahiti did, since that was obviously an outlier, and it's even possible that it won't see as much as a 20% boost, as you say. But to be absolutely certain that there isn't much performance left to extract with more mature drivers, and to tell people who disagree with you on that point to "grow up," is very wrong-headed, to say the least.
 