Vega/Navi Rumors (Updated)

Status
Not open for further replies.

Elixer

Lifer
May 7, 2002
10,376
762
126
Hmm... more rumors, since we don't have enough already!
Though, it must be said that GN isn't being sampled with review cards.
AMD Moves Vega 56 Embargo Forward, Asks Reviewers to Prioritize Over 64
...
As of today, AMD noted that RX Vega 56 cards have been shipped to reviewers, along with a request that reviewers specifically “prioritize coverage” of RX Vega 56 over RX Vega 64 under time-constrained conditions. This clearly indicates AMD’s faith in RX Vega 64 and 56, demonstrating that 56 should more reasonably compete with nVidia at the ~$400 price-point, while 64 will undoubtedly be more fiercely embattled at $500-$600. AMD has timed RX Vega strategically so that it launches following Threadripper, where most reviewers have had attention focused for the past week. Cards have been received over the past day or so, leaving little time for deep testing. RX Vega 56 cards are to arrive by the weekend.
...
  • Unboxing embargo lift on Saturday, August 12, 11AM EDT: AMD permits unboxing photos or videos only.
  • Performance embargo lift for Vega 56 on August 14, 9AM EDT.
  • RX Vega 64 already in hands of some reviewers, with unboxing embargo lift on Saturday, August 12, 11AM EDT.
  • RX Vega 64 performance embargo lift on August 14, 9AM EDT.
  • Update: Vega 56 launches on 8/28.

http://www.gamersnexus.net/news-pc/3016-amd-moves-vega-56-embargo-forward-prioritizes-over-64
 
Reactions: Muhammed

coercitiv

Diamond Member
Jan 24, 2014
6,400
12,849
136
more rumors, since we don't have enough already!
From what I understand, Vega 64 has been in the hands of some reviewers for a few days already, while Vega 56 is due to arrive over the weekend. AMD asking reviewers to prioritize Vega 56 in the limited time before Aug. 14 sounds more like making sure Vega 56 gets decent coverage rather than pushing it into the spotlight.

Though, it must be said that GN isn't being sampled with review cards.
Although I agreed with the cleanup operation AMD did a while ago to straighten up some reviewers, I'm not sure I like what I hear now. There's a fine line between exclusive content deals and pressuring others for positive coverage.
 

Kallogan

Senior member
Aug 2, 2010
340
5
76
There won't be enough vega 56 for everybody. Good luck people.

As for vega 64, who cares about that power hog
 

Elixer

Lifer
May 7, 2002
10,376
762
126
Although I agreed with the cleanup operation AMD did a while ago to straighten up some reviewers, I'm not sure I like what I hear now. There's a fine line between exclusive content deals and pressuring others for positive coverage.
I see no pressure here, seems pretty simple.
GN wasn't supposed to disclose information yet, and they knew it, and they tried to play games with the data that they weren't supposed to disclose.

Frankly, while it was a nice gesture, I also don't think any "review" site should have gotten that swag from AMD (or Intel, or Nvidia, or...). That just sends the wrong message.

As for the Vega 64 cards having already been sent, we only recently learned that, and as for Vega 56, this looks like they needed to get at least one "win" here, but we all have a strong suspicion that Vega 56 will just be a paper launch, with the majority of the cards being Vega 64.
 

coercitiv

Diamond Member
Jan 24, 2014
6,400
12,849
136
I see no pressure here, seems pretty simple.
GN wasn't supposed to disclose information yet, and they knew it, and they tried to play games with the data that they weren't supposed to disclose.
Linus Tech was allowed to post thermal data on Aug 5. Whether this kind of exclusive deal crosses the fine line I was talking about remains to be seen with future launches. I just said I'm not sure I like where this is going.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
You seem to have a fundamental misunderstanding of hidden surface removal.

1. In the first couple of minutes of the video it is stated that AMD already does it. It has been a feature of all modern GPUs for at least a decade. https://en.wikipedia.org/wiki/Hidden_surface_determination.

Here is a reference to hidden surface removal in Quake from 1996: https://www.gamedev.net/articles/programming/graphics/quake-hidden-surface-removal-r656/.
GPU accelerated hidden surface removal 2005: http://research.nvidia.com/sites/de...-07_GPU-Accelerated-High-Quality/gpuhider.pdf
And finally Hidden surface removal in the ATI X1000 series in 2005: http://techreport.com/review/8864/ati-radeon-x1000-series-gpus/2

2. The video goes on to explain that Vega can perform hidden surface removal earlier in the rendering process, so an attribute fetch isn't needed for culled triangles, potentially saving a memory access. That's it, nothing more.

3. Absolutely at no time in the video is the actual implementation discussed. To say that it means that developers can somehow skip programming steps is completely false.
I have not said this. The video explains that it bypasses a few steps in development.

I brought it up to give you a point of view on what is being discussed. I have talked with game developers about this, and they said that this feature (Primitive Shaders) will simplify the development process.

I have said specifically that the Programmable Geometry Pipeline will increase geometry throughput, not Primitive Shaders. Yesterday I once again asked my fellow game devs about this, and it turned out I misunderstood them. It appears that Primitive Shaders are part of the Programmable Geometry Pipeline, not the other way around. Primitive Shaders save resources on geometry that can be used elsewhere in the pipeline.

But the Programmable Geometry Pipeline is going to increase the geometry throughput, which is exactly what is said in the slide you quoted.

I know all of what you have written. The thing is, you are coming from the software side, not the hardware side.
Please read the Vega architecture release day slide deck and pay close attention to what primitive shaders can and cannot do. They can assist in better culling - removing polygons that are not being displayed. They cannot increase the actual throughput of polygons that need to be drawn. And do you really think Pascal doesn't have some way of automatically culling triangles that don't need to be rendered? The fact remains, GP102 can actually render 6 triangles/clock, and all variants of GCN including Vega can only render a maximum of 4 triangles/clock. When it comes down to games that need to actually display lots of geometry - not throw it away - big Pascal and even big Maxwell have advantages that Vega cannot match, and primitive shaders won't ever be able to fix this even if developers optimize for them (which they won't).
I have specifically said: the Programmable Geometry Pipeline increases the geometry throughput.

Vega, without the Programmable Geometry Pipeline implemented, can do 4 triangles per clock. With the Programmable Geometry Pipeline implemented, it can do up to 11 triangles per clock.

Endnotes of this presentation: Data based on engineering design of Vega. Fury X has 4 Geometry Engines, and a peak of 4 polygons per clock. Vega is designed to handle up to 11 polygons per clock with 4 Geometry Engines.

Pascal does not have Geometry Pipeline culling, but has very robust Rasterization culling.
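
To make the "cull before the attribute fetch" point concrete, here is a rough sketch of the difference between culling after the per-vertex fetch and culling before it. This is my own illustration, not AMD's actual primitive shader implementation; the triangle data, the back-face test and the fetch counting are all invented for the example.

Code:
# Illustrative sketch only: the difference between culling after the per-vertex
# attribute fetch and culling before it. The triangle data, the back-face test
# and the fetch counting are invented for this example; this is not AMD's
# actual primitive shader implementation.
import random

def signed_area_2d(p0, p1, p2):
    # Twice the signed screen-space area; <= 0 means back-facing or degenerate.
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def fetch_attributes(tri):
    # Stand-in for the per-vertex attribute fetch (normals, UVs, colours...).
    return {"normals": tri["normals"], "uvs": tri["uvs"]}

def render_cull_late(triangles):
    # Classic path: fetch attributes for every triangle, cull afterwards.
    fetches, drawn = 0, []
    for tri in triangles:
        attrs = fetch_attributes(tri)              # memory traffic even for culled tris
        fetches += 1
        if signed_area_2d(*tri["positions"]) > 0:
            drawn.append((tri, attrs))
    return drawn, fetches

def render_cull_early(triangles):
    # Early-cull path: test positions first, fetch attributes only for survivors.
    fetches, drawn = 0, []
    for tri in triangles:
        if signed_area_2d(*tri["positions"]) > 0:
            attrs = fetch_attributes(tri)          # fetch only what will be rasterised
            fetches += 1
            drawn.append((tri, attrs))
    return drawn, fetches

random.seed(0)
tris = []
for _ in range(10000):
    pts = [(random.random(), random.random()) for _ in range(3)]
    tris.append({"positions": pts, "normals": [(0, 0, 1)] * 3, "uvs": [(0, 0)] * 3})

_, late_fetches = render_cull_late(tris)
_, early_fetches = render_cull_early(tris)
print("attribute fetches, cull late: ", late_fetches)    # always 10000
print("attribute fetches, cull early:", early_fetches)   # roughly half, since ~50% face away

The only point is that the earlier a triangle gets discarded, the less data has to be fetched and shaded for geometry that never reaches the screen.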
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
and it turned out I misunderstood them. It appears that Primitive Shaders are part of the Programmable Geometry Pipeline, not the other way around.
LOL, so your prediction of 70% more Vegan performance is based on faulty data, wonderful! I believe you will find a way for that prediction to come back in full force though!
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
LOL, so your prediction of 70% more Vegan performance is based on faulty data, wonderful! I believe you will find a way for that prediction to come back in full force though!
Nope. I only thought that the Programmable Geometry Pipeline was part of the Primitive Shaders feature, and it's the other way around. The other data is correct.

If you cull at least 50% of the geometry used to render a scene, if you double the triangles registered each cycle compared to Fiji, and if on top of all of this you have a 1.6 GHz core clock compared to 1050 MHz, the effects on performance will be massive.
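
Taking those figures at face value (they are the "up to" peaks from AMD's slides and endnotes, not measured results), the back-of-the-envelope math looks like this:

Code:
# Back-of-the-envelope peak geometry math using the figures quoted in this
# thread (4 vs. "up to" 11 polygons per clock, 1050 MHz vs. ~1.6 GHz, ~50% of
# triangles culled). These are slide-deck peaks under ideal conditions, not
# measured game performance.

fiji_tris_per_clock = 4
fiji_clock_hz = 1.05e9

vega_base_tris_per_clock = 4          # without the new geometry path
vega_peak_tris_per_clock = 11         # AMD's "up to" figure
vega_clock_hz = 1.6e9
cull_fraction = 0.5                   # claimed share of triangles discarded early

fiji_peak = fiji_tris_per_clock * fiji_clock_hz
vega_base = vega_base_tris_per_clock * vega_clock_hz
vega_peak = vega_peak_tris_per_clock * vega_clock_hz

print(f"Fiji peak:             {fiji_peak / 1e9:.2f} Gtris/s")
print(f"Vega, base pipeline:   {vega_base / 1e9:.2f} Gtris/s ({vega_base / fiji_peak:.2f}x Fiji, clock alone)")
print(f"Vega, 'up to' figure:  {vega_peak / 1e9:.2f} Gtris/s ({vega_peak / fiji_peak:.2f}x Fiji)")

# If ~50% of submitted triangles are discarded early, only the visible half
# reaches the rest of the pipeline.
submitted = 10e6                      # a hypothetical 10M-triangle frame
print(f"Of {submitted / 1e6:.0f}M submitted triangles, ~{submitted * (1 - cull_fraction) / 1e6:.0f}M reach rasterisation")

These are front-end peaks only; how much of that a game ever sees depends on how geometry-bound it actually is.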
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Clearly, two different markets - otherwise, AMD's dominance in consoles would have turned the tide for them on PCs. This hasn't happened - and this particular argument is shown to be faulty by direct evidence.
I don't buy this dominating discourse that amd hardware is fantastic and nv software is superior and that consoles didn't have an impact.

For gaming it looks to me nv hardware from even kepler on was damn lean and efficient. Tailor made for gaming loads. Maxwell just improved that enormously. Damn sharp hardware for today's loads. The idea that pascal is just 14nm maxwell is stupid. Looking at an arch at a high level on paper says nothing. Think bd. Nv arch and hardware is just flat out better suited to gaming loads in a landscape where dx11 rules.

Now consoles are low level api and porting to pc means you want to cover all segments including dx11 cards. Gcn loses a lot of its power here, but if we look at how gcn aged it's pretty evident consoles played a big part. Perf improved a lot with time. Engines were adapted. If it wasn't for the consoles amd would have been run over by a huge train.
 
Last edited:

french toast

Senior member
Feb 22, 2017
988
825
136
LOL, so your prediction of 70% more Vegan performance is based on faulty data, wonderful! I believe you will find a way for that prediction to come back in full force though!
Although they would argue otherwise, I would say vegans don't perform 70% better.
Meat eaters will always be top of the food chain, my friend.

Jokes aside, I'm not sure I like the fact that GN has no review sample yet; they have been critical of AMD lately (for good reason in some cases, not all), so I hope it's not related to that.
 
Reactions: Muhammed

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Mining hides the fact that amd betting on and building compute into their gpus has been a failure.
Trying to make the ocean boil.
Had they gone 100% gaming, and with a near 100M console market to back it up, they could have had most of that market.
Instead they tried to sit on multiple chairs at the same time.
Still do today. They even added more chairs.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I don't buy this dominating discourse that amd hardware is fantastic and nv software is superior and that consoles didn't have an impact.

Anyone with a brain knows that the consoles definitely had an impact, particularly with the greatly accelerated emphasis on using compute which really took off with the launch of the current gen consoles. The thing is, the impact had a far weaker magnitude than many had anticipated. I remember some battles I had with forum members way back then about this very issue, and I remember telling them the same thing I alluded to earlier.

In PC land, game developers could never really exploit architectural optimizations like they could with consoles due to the thick abstraction layer of the API, i.e. DX11. This largely prevented AMD's console monopoly from being used as a means to enable low-level optimizations for GCN, which is exactly what would have given AMD an edge.

If DX12 and Vulkan had been around back then, the impact would have been MUCH greater, but still I doubt it would have been enough to stop NVidia. What NVidia has been able to achieve with their DX11 drivers in terms of performance is nothing short of amazing. When you hear that developers are having difficulty matching NVidia's DX11 driver with DX12, then you know it's something remarkable.
 

Tup3x

Golden Member
Dec 31, 2016
1,012
1,002
136
No.

Read something about the architecture of GCN5 and how it compares to GCN4 and GCN3. This type of post shows your competence level, or lack of it. I'm interested in discussion, not rebuttals like this.

I already provided a lot of points of interest for people who are truly interested in understanding what is happening.

I will ask you a simple question. How much faster will Vega be in games with the Programmable Geometry Pipeline implemented, when it doubles the geometry performance? If you can answer this question, we can discuss more.
Doubling the geometry performance doesn't mean doubling the performance in that game. Not even close. So I really do not know where you are pulling all these humongous gains from. Vega also has a serious disadvantage when it comes to fill rate, so I don't see it ever doing better than the GTX 1080 Ti in games unless there's something really shady going on and a total lack of optimisations for green cards. Even if Vega suddenly matched Pascal clock for clock in geometry performance, it would still have a massive clock speed disadvantage.

To me it looks like they are just moving from bottleneck to bottleneck, and/or some new things they introduced are still lacking (on a hardware level).

By the way, AMD said that the 150W Vega has similar (slightly lower, if their chart is accurate) perf/W to the GTX 1080. What does that say about the expected performance of that Vega Nano?

In an ideal situation Vega might beat the GTX 1080 by some margin (while using more power), but there really is nothing that points to it catching the GTX 1080 Ti.
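
The "doubling geometry doesn't double game performance" point is basically Amdahl's law applied to a frame. Here is a toy model; the frame-time shares are invented purely for illustration, since real games vary widely and are often shader- or fill-rate-bound instead:

Code:
# Toy Amdahl's-law model of a frame: only the geometry-limited share of the
# frame time benefits from a faster front end. The frame-time shares below are
# invented for illustration; real games vary widely and are often shader- or
# fill-rate-bound instead.

def frame_speedup(geometry_share, geometry_speedup):
    # Overall frame speedup when only the geometry portion gets faster.
    return 1.0 / ((1.0 - geometry_share) + geometry_share / geometry_speedup)

for share in (0.10, 0.15, 0.30, 0.50):
    s = frame_speedup(share, 2.0)     # double the geometry throughput
    print(f"geometry = {share:.0%} of frame time -> {(s - 1) * 100:4.1f}% faster frame")

# Even if geometry somehow took half the frame, 2x geometry throughput buys
# only ~33% more performance; at a more typical 10-15% it is single digits.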
 
Reactions: Muhammed

Lodix

Senior member
Jun 24, 2016
340
116
116
Again, in the slide above, the elusive 14nm+ shows up; it sure would be nice to get something official about what this node is.
Since they are licensing 14nm from Samsung, I guess it is their 14nm LPU, which brings 15% better performance at the same power consumption.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
Doubling the geometry performance doesn't mean doubling the performance in that game. Not even close. So I really do not know where you are pulling all these humongous gains from. Vega also has a serious disadvantage when it comes to fill rate, so I don't see it ever doing better than the GTX 1080 Ti in games unless there's something really shady going on and a total lack of optimisations for green cards. Even if Vega suddenly matched Pascal clock for clock in geometry performance, it would still have a massive clock speed disadvantage.

To me it looks like they are just moving from bottleneck to bottleneck, and/or some new things they introduced are still lacking (on a hardware level).

By the way, AMD said that the 150W Vega has similar (slightly lower, if their chart is accurate) perf/W to the GTX 1080. What does that say about the expected performance of that Vega Nano?

In an ideal situation Vega might beat the GTX 1080 by some margin (while using more power), but there really is nothing that points to it catching the GTX 1080 Ti.
I have asked a simple question: how much faster do you guys think the GPU will be when software implements the Vega features? I did not imply it will be 2 times faster.

Yes, AMD has said that in CURRENT software the 150W Vega has similar performance per watt to the GTX 1080.

Does any of the current software use any of the Vega features that have the most meaningful impact on performance?


Doubling the geometry registered each clock by GCN solves the biggest problem it had in geometry throughput. And then on top of it, you have a culling technique that supposedly can cull at least 50% of the geometry not used in scenes.

About the last paragraph: what if Vega already beats the GTX 1080 by some margin? What if, on top of that, you add the hardware features and Primitive Shaders to software? Is it impossible for Vega to gain 40% in performance with them?
 
Reactions: Magic Hate Ball

french toast

Senior member
Feb 22, 2017
988
825
136
Since they are licensing 14nm from Samsung, I guess it is their 14nm LPU, which brings 15% better performance at the same power consumption.
Yea I've been hoping for this outcome for ages, however I have a sneaking feeling that AMD will go the cheap and easy route and make "4th gen" on 14nm LPP (*cough).
This would mean RX 480 to RX 580 type gains, which wouldn't be great if true, but it would make some sense judging by AMD's financials over the last few years and the fact that they have paid considerable costs to launch Zen 2 on 7nm in late 2018; a 14nm LPU conversion probably is not cheap.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136

In this video Scott Wasson says that Primitive Shaders, because they bypass the Fixed Function Shaders, increase the geometry throughput available to the cores. So everything I was saying is correct, in the end. This talk is from January 2017. The part about Primitive Shaders starts at the 35:40 mark.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136

In this video Scott Wasson says that Primitive Shaders, because they bypass the Fixed Function Shaders, increase the geometry throughput available to the cores. So everything I was saying is correct, in the end. This talk is from January 2017. The part about Primitive Shaders starts at the 35:40 mark.

Using primitive shaders is a lot more work for developers. It's like expecting them to hand-code in assembler for your product. It is questionable how many titles will see this. I expect none, other than those where AMD pays for it.
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
With Volta now way off, Vega is even more of a serious consideration for me. Should be full reviews on Monday with some leaks over the weekend.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
Using primitive shaders is a lot more work for developers. It's like expecting them to hand-code in assembler for your product. It is questionable how many titles will see this. I expect none, other than those where AMD pays for it.
It actually means less work for developers than they would have with the Fixed Function Shaders.

And this feature is the base of Vega and of every next-generation AMD GPU architecture.
 

zinfamous

No Lifer
Jul 12, 2006
110,810
29,564
146
It actually means less work for developers than they would have with the Fixed Function Shaders.

And this feature is the base of Vega and of every next-generation AMD GPU architecture.

Whether or not this is true, it only matters if developers actually do it, which they seem not to be doing, generation after generation of GCN, where the argument has always been the same: "These cards would be great, if the devs actually utilized the hardware properly!"

So, is Vega the first generation of GCN where those pesky devs actually code the way AMD needs them to code to make their hardware shine "the way it is supposed to," or not? Honest question, because I want to see AMD succeed there, but this argument has never materialized into reality. You must have the patience of a saint.
 
Reactions: Zstream

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
Whether or not this is true, it only matters if developers actually do it, which they seem not to be doing, generation after generation of GCN, where the argument has always been the same: "These cards would be great, if the devs actually utilized the hardware properly!"

So, is Vega the first generation of GCN where those pesky devs actually code the way AMD needs them to code to make their hardware shine "the way it is supposed to," or not? Honest question, because I want to see AMD succeed there, but this argument has never materialized into reality. You must have the patience of a saint.
This is not a question for me. I do not have an answer for it.
I cannot say "Yes, they will use those features" or "No, they won't use those features", because I do not have a crystal ball.

All I can discuss, and all I am discussing, is the potential that is in the hardware.
 

zinfamous

No Lifer
Jul 12, 2006
110,810
29,564
146
This is not a question for me. I do not have an answer for it.
I cannot say "Yes, they will use those features" or "No, they won't use those features", because I do not have a crystal ball.

All I can discuss, and all I am discussing, is the potential that is in the hardware.

I know.

But it's the same discussion between users (you and me) generation after generation of GCN. I am saying that the value of this point is significantly diminished, because it hasn't once materialized into real-world data. For whatever reason (the work for devs really is not as easy as purported, the economic incentives simply aren't there, or the improvements really aren't that significant), the devs still aren't doing that work, and so it's a point that only matters when we see actual results.
 