AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]

Status
Not open for further replies.


french toast

Senior member
Feb 22, 2017
988
825
136
Xbox One X does not have rapid packed math, by the way.
But the point is a legit one: all AMD uarchs going forward will be GCN5 or contain many features from it, from consoles to APUs to PC, so devs will have to optimise.
Also, some variations of Maxwell and Pascal have an FP16 boost, as will likely Volta, to go along with the PS4 Pro and every Vega product from now on, so it will get optimised no matter what.
I can see Vega matching the 1080 Ti in about a year, not beating it by 30% though.
 

beginner99

Diamond Member
Jun 2, 2009
5,224
1,598
136
At the hardware level, there is absolutely no reason why Vega would not be faster than the GTX 1080 Ti.
There is: effective memory bandwidth. Effective meaning taking color compression into account, where NV is way, way ahead of AMD. While the GTX 1080 Ti has similar raw bandwidth to Vega, its effective bandwidth is much higher.

Why wouldn't AMD make such claims themselves, that it's faster than the 1080 Ti in future titles? Why are they pricing it like a GTX 1070/1080? Either it's not true, or no such future titles are anywhere on the horizon. I'm sure you are aware that many cool features in GPUs never actually got used because they required software optimization?
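For context, "effective bandwidth" is just raw bandwidth scaled by how much the delta color compression saves on average. A back-of-the-envelope sketch (the compression factors below are made-up placeholders to illustrate the point, not measured values):

```python
def raw_bandwidth_gbs(transfer_rate_gtps, bus_width_bits):
    """Raw memory bandwidth in GB/s: transfer rate (GT/s) x bus width (bits) / 8."""
    return transfer_rate_gtps * bus_width_bits / 8

gtx_1080_ti = raw_bandwidth_gbs(11.0, 352)   # GDDR5X @ 11 Gbps on a 352-bit bus
rx_vega_64  = raw_bandwidth_gbs(1.89, 2048)  # HBM2 @ ~945 MHz (DDR) on a 2048-bit bus

# Raw numbers land within a rounding error of each other (~484 GB/s each),
# so any real-world gap comes from how well compression stretches that
# bandwidth. These factors are hypothetical, purely for illustration:
effective_nv  = gtx_1080_ti * 1.6
effective_amd = rx_vega_64 * 1.3
```

The point of the sketch is that with near-identical raw figures, the better compression ratio is the entire difference in "effective" bandwidth.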
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Heh, so nothing concrete then. Well, we will see in less than a week
Yeah, nothing concrete, just AMD's official word to every press outlet out there, AMD's marketing slides, Vega FE results, RX Vega leaks, and AMD's blind tests against the 1080 without FPS counters. Pretty much those. I guess what remains is for it to be written in the Bible or passed as state law to count as concrete, eh?
They're painting the best possible picture that they can, and "trade blows" is literally the BEST Vega 64 will do.
Yep, and they are getting pretty desperate in their charts: picking DX12-only games, then picking the one title where NV has a recent driver bug that introduces major FPS drops (COD: Infinite Warfare), and then claiming NV has bad min FPS there. These are desperate tactics.

Issues and updates:

  • [Call of Duty Infinite Warfare] Major FPS drop in Call of Duty Infinite Warfare after updating to R384 drivers [1955894]
https://forums.geforce.com/default/...lay-driver-feedback-thread-released-7-24-17-/
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
I'm sure you are aware that many cool features in GPUs actually never got used because they required software optimization?
People hyped async compute to the moon and beyond! 3 years later, what does it give? 10-15% in best-case scenarios; in truth it averages about 5-10%. Now async is gone and primitive shaders are supposed to give 70% more performance! HYPE!
 
Reactions: tviceman

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Yeah, nothing concrete, just AMD's official word to every press outlet out there, AMD's marketing slides, Vega FE results, RX Vega leaks, and AMD's blind tests against the 1080 without FPS counters. Pretty much those. I guess what remains is for it to be written in the Bible or passed as state law to count as concrete, eh?

Yep, and they are getting pretty desperate in their charts: picking DX12-only games, then picking the one title where NV has a recent driver bug that introduces major FPS drops (COD: Infinite Warfare), and then claiming NV has bad min FPS there. These are desperate tactics.


https://forums.geforce.com/default/...lay-driver-feedback-thread-released-7-24-17-/

It's useless. Even after the reviews splatter the web, people will still cling to upcoming mythical magic drivers. RX 480 has gained 5% more than a GTX 1060 since the 1060 came out. 5%. The console GCN optimization factor is already baked in. There are no more surprise massive driver improvements until the truly next gen consoles come. Vega will always only trade blows with the 1080, and it'll do it with 100+ more watts of power use.
 
May 13, 2009
12,333
612
126
It's useless. Even after the reviews splatter the web, people will still cling to upcoming mythical magic drivers. RX 480 has gained 5% more than a GTX 1060 since the 1060 came out. 5%. The console GCN optimization factor is already baked in. Vega 64 will always only trade blows with the 1080, and it'll do it with 100+ more watts of power use.
Let's look again in another year or two and see the 580 vs 1060. Nvidia will drop driver support and the 1060 will fall off a cliff performance-wise.
 
May 13, 2009
12,333
612
126
Also, all this bickering about AMD dropping the ball and this "these won't sell" talk is really a moot point. They will sell every one they make for the foreseeable future due to miners, unfortunately.
 
Reactions: rgallant

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Why does anyone believe that Vega will not be faster than GTX 1080 Ti? At the hardware level, there is absolutely no reason why Vega would not be faster than the GTX 1080 Ti.

Of course there is: fewer ROPs (64 vs 88), a narrower front end (4 triangles/clock vs 6), potentially less memory bandwidth (thermal throttling), and somewhat less effective delta color compression.

And no, primitive shaders aren't a magic solution to any of this. Primitive shaders help more effectively cull what WON'T be displayed. When it comes to what WILL be displayed, the 1080 Ti has a massive (~50%) edge in geometry and pixel throughput.
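The front-end gap can be quantified with simple arithmetic: peak triangle setup rate is triangles-per-clock times clock speed. A quick sketch (boost clocks are ballpark assumptions, not spec-sheet values):

```python
def geometry_rate_mtris(tris_per_clock, clock_mhz):
    """Peak triangle setup rate in millions of triangles per second."""
    return tris_per_clock * clock_mhz

# Boost clocks here are rough ballpark figures for illustration.
gtx_1080_ti = geometry_rate_mtris(6, 1582)  # 6 tri/clk front end
rx_vega_64  = geometry_rate_mtris(4, 1630)  # 4 tri/clk front end

edge = gtx_1080_ti / rx_vega_64 - 1  # fractional advantage of the 1080 Ti
```

Vega's higher clock claws back a little, but the 1080 Ti still comes out roughly 45-50% ahead on raw setup rate, which is where the "~50% edge" figure comes from.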
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Let's look again in another year or two and see the 580 vs 1060. Nvidia will drop driver support and the 1060 will fall off a cliff performance-wise.
The one dropping support all over is AMD; they already dropped Win8 support for all of their GPUs (including Polaris & Vega).

Note that AMD is no longer supporting Windows 8/8.1 with newer RX 400 and 500 series (and also upcoming RX Vega). When asked about this, AMD provided the following statement: "With the declining use of Windows 8.1 we have focused our efforts on providing the best experience on the operating systems that the vast majority of our users are now using. Although we no longer officially support Windows 8.1, users can try the Windows 7 driver on an "as-is" basis."
https://www.techpowerup.com/235525/amd-releases-the-crimson-relive-edition-17-7-2-whql-drivers
 
Reactions: psolord

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
There is: effective memory bandwidth. Effective meaning taking color compression into account, where NV is way, way ahead of AMD. While the GTX 1080 Ti has similar raw bandwidth to Vega, its effective bandwidth is much higher.

Why wouldn't AMD make such claims themselves, that it's faster than the 1080 Ti in future titles? Why are they pricing it like a GTX 1070/1080? Either it's not true, or no such future titles are anywhere on the horizon. I'm sure you are aware that many cool features in GPUs never actually got used because they required software optimization?
Memory bandwidth is actually directly tied to tile-based rasterization. The feature was inactive in games, which PCPer's testing proved, and AMD engineers said it was disabled so they could push the GPU out the door, so it's no wonder effective memory bandwidth suffered so much in Vega.

A lot of features related to the memory controller were disabled, and you think Vega FE is representative of the architecture's performance and of the Vega GPU?

Of course there is: fewer ROPs (64 vs 88), a narrower front end (4 triangles/clock vs 6), potentially less memory bandwidth (thermal throttling), and somewhat less effective delta color compression.

And no, primitive shaders aren't a magic solution to any of this. Primitive shaders help more effectively cull what WON'T be displayed. When it comes to what WILL be displayed, the 1080 Ti has a massive (~50%) edge in geometry and pixel throughput.
That is what I am saying: what happens when developers implement Primitive Shaders and the Programmable Geometry Pipeline and we compare the front ends of both, 10 (GCN) vs 6 (CUDA)?


Those features are tied together. The Programmable Geometry Pipeline increases geometry throughput, and it is part of the Primitive Shaders feature. Primitive Shaders are explicit culling of unused geometry.

ROPs do not make a difference to GPU performance when you have culling techniques and tile-based rasterization.
 

leoneazzurro

Golden Member
Jul 26, 2016
1,015
1,610
136
Yep, and they are getting pretty desperate in their charts, picking DX12-only games

I'll give you a little hint: as time passes, more new games and AAA titles will use DX12/Vulkan. Surprised?
DX11 is still here because there are still some DX11-only devices out there, but DX12-capable hardware is now massively present in the market.
And it makes no sense from a programmer's perspective to support the old APIs forever, especially when the new APIs have characteristics that can make your game run more efficiently.
 

Tup3x

Golden Member
Dec 31, 2016
1,012
1,002
136
Heh, so nothing concrete then. Well, we will see in less than a week
I guess you missed this:

That's average FPS. Obviously cherry-picked numbers, like all marketing slides. Trading blows with the GTX 1080 means that sometimes Vega wins and sometimes the GTX 1080 wins. Basically both are very similar. That's what AMD said and that's what everyone should be expecting.

That slide... basically they mean that even the minimums are sometimes higher with the GTX 1080. Also, that's a bit deceptive, since the GTX 1080 is easily within the G-Sync range.
 
Reactions: Muhammed

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I guess you missed this:

That's average FPS. Obviously cherry-picked numbers, like all marketing slides. Trading blows with the GTX 1080 means that sometimes Vega wins and sometimes the GTX 1080 wins. Basically both are very similar. That's what AMD said and that's what everyone should be expecting.

That slide... basically they mean that even the minimums are sometimes higher with the GTX 1080. Also, that's a bit deceptive, since the GTX 1080 is easily within the G-Sync range.
I would agree that there is nothing concrete in that slide.
 
Reactions: Kuosimodo

Muhammed

Senior member
Jul 8, 2009
453
199
116
DX11 is still here because there are still some DX11 only devices out there but now DX12 capable hardware is massively present in the market.
DX11 is still here because only 13 games support DX12; 90% of games being released are DX11-only. DX12 is already a thorn in developers' sides, with them complaining about it on several occasions, stating that DX11 is more comfortable and consistent to work with. From the looks of it, DX11 is staying with us a while longer.



The idea behind new-generation "close-to-the-metal" APIs such as DirectX 12 and Vulkan, has been to make graphics drivers as less relevant to the rendering pipeline as possible. The speakers contend that the drivers are still very relevant, and instead, with the advent of the new APIs, their complexities have further gone up, in the areas of memory management, manual multi-GPU (custom, non-AFR multi-GPU implementations)
https://www.techpowerup.com/231079/is-directx-12-worth-the-trouble
If you take the narrow view that you only care about raw performance you probably won’t be that satisfied with amount of resources and effort it takes to even get to performance parity with DX11
https://www.pcgamesn.com/microsoft/ubisoft-dx12-performance

I am not saying DX12 is all bad; all I am saying is AMD picked their favorite DX12 titles while leaving out NV's favorites, like RoTTR, Halo Wars 2, Gears 4, etc. They also cherry-picked their settings (testing Deus Ex & Sniper 4 at High instead of Ultra). It's well known that DX12 games are evenly split, with some titles favoring AMD and others favoring NVIDIA.
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
965
534
136
www.youtube.com
A mashup of average minimum (1st percentile?) rates isn't sufficient. Neither is an average of averages. Although large, I believe the single best way to compare performance across two graphics cards is a graph of 1st/99th percentile figures (however you want to define them) alongside averages, like this:
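For anyone wondering how those percentile figures relate to raw frame times, here's a minimal sketch (the 99th percentile of frame times corresponds to the "1% low" FPS figure reviewers quote):

```python
def percentile_fps(frame_times_ms, pct):
    """FPS at the given frame-time percentile.
    pct=50 is the median; pct=99 gives the '1% low' (worst 1% of frames)."""
    times = sorted(frame_times_ms)                     # slowest frames last
    idx = min(len(times) - 1, len(times) * pct // 100)
    return 1000.0 / times[idx]

def average_fps(frame_times_ms):
    """Average FPS = total frames / total time, not the mean of per-frame FPS."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# Example run: 99 smooth 10 ms frames plus one 50 ms stutter.
run = [10.0] * 99 + [50.0]
```

With that run, the average still looks close to 100 FPS while the 1% low collapses to 20 FPS, which is exactly why averages alone hide stutter.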
 

Rasterizer

Member
Aug 6, 2017
30
48
41
Don't know if you realize this, but the bandwidth score in that benchmark is for PCI-e and not the GPU. So the difference in scores could be due to the CPU or motherboard used, since the PCI-e controller is typically located on the CPU die these days.
PCI-e's bandwidth is so much greater than the numbers being looked at here that it didn't even occur to me to consider whether the bandwidth bottleneck could lie outside the HBM2 memory speed itself.
 

Tup3x

Golden Member
Dec 31, 2016
1,012
1,002
136
I would agree that there is nothing concrete in that slide.
Aye (it's a rather ridiculous slide), but it gives an idea of the performance and of where AMD thinks Vega stands compared to the competition.

It could be, though, that the average is clearly behind the GTX 1080, so they give us these really weird numbers. If it were higher, they would have flooded us with average FPS benches.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
I guess you missed this:

That's average FPS. Obviously cherry-picked numbers, like all marketing slides. Trading blows with the GTX 1080 means that sometimes Vega wins and sometimes the GTX 1080 wins. Basically both are very similar. That's what AMD said and that's what everyone should be expecting.

That slide... basically they mean that even the minimums are sometimes higher with the GTX 1080. Also, that's a bit deceptive, since the GTX 1080 is easily within the G-Sync range.
Erm, it blatantly says that it accounts for minimums, not averages...
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
And when any company says trade blows, to me that means it will lose more than it wins. If it was faster overall or faster in more games, then a company wouldn't say "trade blows," they would say "faster." They're painting the best possible picture that they can, and "trade blows" is literally the BEST Vega 64 will do.

The phrase trading blows connotes equivalence. AMD is saying they will win some games and lose others.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
The phrase trading blows connotes equivalence. AMD is saying they will win some games and lose others.

Whenever a company says something about their product, they tend to say it in the most positive light aka best case scenario. If RX Vega truly traded blows with a 1080 across the board, AMD would be showing more canned benchmarks with more averages, and not limiting the scope of their comparisons to blind tests, 4-5 games, and minimum frame rates only.
 
Reactions: crisium

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Let's look again in another year or two and see the 580 vs 1060. Nvidia will drop driver support and the 1060 will fall off a cliff performance-wise.

I'm not in the camp that says Nvidia crippled or neglected Kepler; I think Kepler suffered from a combination of aging badly due to design decisions and AMD getting a massive boost in that time frame as the console contracts using GCN came to fruition. HOWEVER, I'd be interested in revisiting performance. Currently the RX 580 is about 5% faster than a GTX 1060.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
DX11 is still here because only 13 games support DX12; 90% of games being released are DX11-only. DX12 is already a thorn in developers' sides, with them complaining about it on several occasions, stating that DX11 is more comfortable and consistent to work with. From the looks of it, DX11 is staying with us a while longer.



The idea behind new-generation "close-to-the-metal" APIs such as DirectX 12 and Vulkan, has been to make graphics drivers as less relevant to the rendering pipeline as possible. The speakers contend that the drivers are still very relevant, and instead, with the advent of the new APIs, their complexities have further gone up, in the areas of memory management, manual multi-GPU (custom, non-AFR multi-GPU implementations)
https://www.techpowerup.com/231079/is-directx-12-worth-the-trouble

It's a good thing that AMD is pushing technology forward with features like HBCC so programmers don't have to worry about memory management.

In terms of mGPU, AFR is a garbage implementation. It's high-latency, and most of the work has to be done in the driver. AMD and Nvidia have to find the fast path for code that oftentimes isn't written properly. Why should AMD or Nvidia do that? A driver's role is not to fix bad code or to make up for an API's shortcomings. DX12, Vulkan, and Mantle a) reset the appropriate responsibilities for each component of GPU game development, and b) define a framework for how to do it correctly, which forces stakeholders to do things like mGPU correctly or not do them at all.
 