Vega/Navi Rumors (Updated)


Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
So, why would you spend "$500-$700" (or more?) on Vega when you can spend $250-$300 on an RX 580 if you are only going to play BF1 & Doom?
Because no one buys a PC to only ever play two games? Obviously Vega is going to be able to sustain higher framerates in more demanding current and future games.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
Freesync would disguise any dips in performance.

Sort of. It's mainly to prevent the tearing caused by the variation, but you'd still have fewer frames being displayed. If FPS doesn't matter, then the entirety of both companies' high-end offerings doesn't matter.

Humans tend to notice variability more than two different but consistent values, so if AMD is slower but more consistent, that can be a good thing. Either way, their point stands: if you can't tell the difference, then Vega is a buy, and you have to come up with a use case different from what they presented. For instance, I won't buy a Freesync monitor until it can do frame strobing at the same time. I find persistence to be more of an issue than tearing. My monitor is 120 Hz, my computer can generally deliver those frames relatively consistently, and I haven't noticed tearing since I went to a high-refresh monitor. In that use case, AMD's argument isn't valid.
 
Reactions: Kuosimodo

Elixer

Lifer
May 7, 2002
10,376
762
126
Because no one buys a PC to only ever play two games? Obviously Vega is going to be able to sustain higher framerates in more demanding current and future games.
Yes, so doing the blind test on only two games "where you can't tell the difference" makes no sense.
If they had allowed a much broader range of games to be blind-tested, they would have had a more truthful picture of things, not that gimmick line they gave us.
 
Reactions: Crumpet

Elixer

Lifer
May 7, 2002
10,376
762
126
From that article again, we got this:
Asus MX38VQ retails for $1099 (MSRP). [http://www.anandtech.com/show/11018/asus-announces-designo-curve-mx38vq]
Asus ROG PG348Q retails for $1299 (MSRP) [https://www.pcmag.com/review/344451/asus-rog-swift-pg348q]. G-Sync "tax" is $200.
A GTX 1080's MSRP is $499, for a total of $1299 + $499 = $1798.

They said the difference is around $300 between systems.

So the RX Vega they had in that system would be $399 ($1099 + $399 + $300 = $1798).

Somehow, I don't think that will be the case.
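If you want to sanity-check that arithmetic, here's a minimal sketch in Python; the MSRPs and the ~$300 gap are just the figures quoted above, not confirmed prices:

# Back-of-the-envelope check of the pricing argument above.
# All figures are the quoted MSRPs and AMD's claimed ~$300 system gap,
# not confirmed prices.
GSYNC_MONITOR = 1299     # Asus ROG PG348Q (G-Sync) MSRP
FREESYNC_MONITOR = 1099  # Asus MX38VQ (FreeSync) MSRP
GTX_1080 = 499           # GTX 1080 MSRP
CLAIMED_GAP = 300        # AMD's "around $300 cheaper" claim

nvidia_system = GSYNC_MONITOR + GTX_1080                       # $1798
implied_vega = nvidia_system - CLAIMED_GAP - FREESYNC_MONITOR  # $399

print(f"Nvidia system: ${nvidia_system}, implied RX Vega price: ${implied_vega}")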
 
Last edited:

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
Yes, so doing the blind test on only two games "where you can't tell the difference" makes no sense.
If they had allowed a much broader range of games to be blind-tested, they would have had a more truthful picture of things, not that gimmick line they gave us.
Yep, agreed. It's all marketing fluff.
 

Maverick177

Senior member
Mar 11, 2016
411
70
91
Yep, agreed. It's all marketing fluff.

I agree. I wonder how far ahead Nvidia is right now.

At the time of this post:

Vega consumes 400W to achieve a 1.6 GHz clock.
At 1.6 GHz, it is still slower than a reference GTX 1080, which consumes 200W.
It is 15 months late compared to the GTX 1080.
The only saving grace for Vega at this point is price, which Nvidia can adjust accordingly at any time.

Raja please go.
 
Last edited:

Veradun

Senior member
Jul 29, 2016
564
780
136

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Someone on the HardOCP forums pointed out that the real-world bandwidth of Vega FE is probably limited by the Synopsys memory controller they are using for HBM. Testing from PCGH.de showed only a bit over 300 GB/sec of bandwidth, despite the fact that the stacks should be able to put through 480 GB/sec at the clock rate they run at. The Synopsys HBM controller supports only up to 307 GB/sec of bandwidth.

This may be part of the explanation for why Vega underperforms so badly in gaming. They were originally going for 512 GB/sec of memory bandwidth but ended up with less than 60% of that, so whatever simulations they ran assumed nearly twice as much memory bandwidth as the card actually got. Hynix/Micron/Samsung/whoever they're getting HBM2 from underperformed, and apparently Synopsys, from whom they're licensing the memory controller, underperformed as well (or they didn't read the specs carefully enough?). This is one of the drawbacks of running an operation on a small R&D budget: you have to rely on third-party vendors and can't always control the quality of their work.
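As a sanity check on those numbers: HBM2 bandwidth is just the per-pin data rate times the 1024-bit interface per stack. A minimal sketch follows; the 1.89 Gb/s per-pin rate (945 MHz DDR) for Vega FE is an assumption based on commonly reported figures, not an official spec:

# Theoretical HBM2 bandwidth: per-pin rate (Gb/s) x 1024-bit bus per stack.
# The 1.89 Gb/s Vega FE figure is an assumption from commonly reported
# specs, not an official number.
def hbm2_gbs(pin_rate_gbps, stacks, bus_width=1024):
    """Aggregate bandwidth in GB/s for a given per-pin rate and stack count."""
    return pin_rate_gbps * bus_width / 8 * stacks

print(hbm2_gbs(1.89, stacks=2))  # ~484 GB/s -- the ~480 figure above
print(hbm2_gbs(2.00, stacks=2))  # 512 GB/s  -- the original design target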
 
Reactions: Konan

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
I agree. I wonder how far ahead Nvidia is right now.

At the time of this post:

Vega consumes 400W to achieve a 1.6 GHz clock.
At 1.6 GHz, it is still slower than a reference GTX 1080, which consumes 200W.
It is 15 months late compared to the GTX 1080.
The only saving grace for Vega at this point is price, which Nvidia can adjust accordingly at any time.

Raja please go.

All this. Volta is going to be completely unanswered. AMD has really been churning out turds the past two years. I rocked a 7950 and 7970, along with a 390, and all were terrific cards, especially the 7000 series, so I'd love to see AMD catch up with NVIDIA, but it seems like the gap just keeps getting wider. Hopefully the HBM gamble pays off one day, because we're two years in and it's still a turd in a punch bowl while NVIDIA's GDDR5 and GDDR5X cards win on both clock speed and power consumption.
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
Someone on the HardOCP forums pointed out that the real-world bandwidth of Vega FE is probably limited by the Synopsys memory controller they are using for HBM. Testing from PCGH.de showed only a bit over 300 GB/sec of bandwidth, despite the fact that the stacks should be able to put through 480 GB/sec at the clock rate they run at. The Synopsys HBM controller supports only up to 307 GB/sec of bandwidth.

This may be part of the explanation for why Vega underperforms so badly in gaming. They were originally going for 512 GB/sec of memory bandwidth but ended up with less than 60% of that, so whatever simulations they ran assumed nearly twice as much memory bandwidth as the card actually got. Hynix/Micron/Samsung/whoever they're getting HBM2 from underperformed, and apparently Synopsys, from whom they're licensing the memory controller, underperformed as well (or they didn't read the specs carefully enough?). This is one of the drawbacks of running an operation on a small R&D budget: you have to rely on third-party vendors and can't always control the quality of their work.

AMD has always struggled with memory controllers. Hopefully their CPU division will bring enough cash into the equation that some real R&D can be focused on the GPU side.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Someone on the HardOCP forums pointed out that the real-world bandwidth of Vega FE is probably limited by the Synopsys memory controller they are using for HBM. Testing from PCGH.de showed only a bit over 300 GB/sec of bandwidth, despite the fact that the stacks should be able to put through 480 GB/sec at the clock rate they run at. The Synopsys HBM controller supports only up to 307 GB/sec of bandwidth.

That makes no sense. If you go to Videocardz.com, where the article originated, someone says this:

They are likely talking about aggregate bandwidth per controller, which would be half a stack if they are using two controllers per HBM stack like they did with Fiji; or it could be a single-controller solution per stack this time.

Two stacks should then achieve 614 GB/s.

We'd all like to point to a single cause of the problem, but I don't see one. Likely lots of things are messed up everywhere. Everything is a trade-off, and having the right balance is what makes a good product. AMD's Infinity Fabric and Intel's mesh for Skylake-SP/X are both trade-offs regarding scalability/cost/performance. You can't say they are flat-out bad; they may just be bad for your particular scenario.

AMD has always struggled with memory controllers

This.

Ethereum mining is sensitive to memory performance. Vega FE gets 37 MH/s; Fury chips got a little over 30 MH/s, and we blamed HBM for that. Guess what a Pascal-based Tesla gets? 70 MH/s.

Nvidia knows how to make better memory controllers. Back in the chipset days, the performance leaders were Intel and Nvidia, and even with integrated memory controllers AMD were, and are, behind. Having a massive R&D budget seems like a waste until you look at details like these: a little advantage here, a little advantage there, and suddenly it looks like a big hill.

Unfortunately, if you look at the PCGH tests, that's exactly it: Vega isn't behind just in memory controllers. Overall, the marketing lied to us just like with Polaris, and there's no reason it won't continue with Navi.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,355
642
121
From the article I linked above...


With this logic...
They could have pitted a Radeon RX Vega vs the Radeon RX 580, my money would still bet on a similar outcome: that you wouldn’t really be able to tell the difference with variable refresh rate.

So, why would you spend "$500-$700" (or more?) on Vega when you can spend $250-$300 on an RX 580 if you are only going to play BF1 & Doom?

This is why this whole "blind" test is a load of hooey.

You know this isn't true, so I don't even know why you bother with it. There's a reason we want the fastest GPU possible to sync up with Freesync.

I think what you quoted is the main takeaway. VRR is the MOST important feature for gamers right now. I've been saying it since G-Sync debuted: you NEED a VRR monitor.

Also, I think it's pretty obvious at this point that AMD is going for the ecosystem and will price RX Vega based on this. As a Freesync owner, and given the tests they've done, it's their best selling point. I'm stuck getting Vega simply due to the ecosystem cost.

All this. Volta is going to be completely unanswered. AMD has really been churning out turds the past two years. I rocked a 7950 and 7970, along with a 390, and all were terrific cards, especially the 7000 series, so I'd love to see AMD catch up with NVIDIA, but it seems like the gap just keeps getting wider. Hopefully the HBM gamble pays off one day, because we're two years in and it's still a turd in a punch bowl while NVIDIA's GDDR5 and GDDR5X cards win on both clock speed and power consumption.
Waiting for Volta is definitely something I wouldn't blame anyone for doing right now. Vega is a year late.
It only makes sense if you want the price discount for Freesync, but you're also getting a performance-lag discount, in that AMD is essentially a year behind Nvidia performance-wise.

It's just a trade-off based on how fast you need the performance and how price-conscious you are.
 

Elixer

Lifer
May 7, 2002
10,376
762
126
...Whatever simulations and other stuff they ran assumed nearly twice as much memory bandwidth as they actually got. Hynix/Micron/Samsung/whoever they're getting HBM2 from underperformed, and apparently Synopsys, from whom they're licensing their memory controller, underperformed as well (or they didn't read the specs carefully enough?) This is one of the drawbacks of running an operation on a small R&D budget: you have to rely on third-party vendors and can't always control the quality of their work.
I could have sworn AMD was using Rambus tech for their HBM2 memory controller, but, yeah, according to http://www.nasdaq.com/press-release...gbs-bandwidth-for-graphics-and-20170725-00873 Synopsys got the win.

In any case, JEDEC specifies a maximum data rate of 2000 Mb/s per pin for HBM2, for a total bandwidth of 256 GB/s per package.
Synopsys says they can now handle 2400 Mb/s per pin, or 307 GB/s per package (20% higher than JEDEC).
That means that with two packages, JEDEC's max is 512 GB/s, and two of Synopsys's packages could achieve 614 GB/s.

Something is just not adding up here; Vega FE was using two packages, I thought...

If they have the double whammy of bad yields (because they are running voltages higher than JEDEC defines) and still have to down-clock from 2000 Mb/s to 1200 Mb/s per pin (for ~300 GB/s total), then HBM2 is turning out to be a disaster for them so far.
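Running the quoted per-pin rates through the bandwidth formula makes the mismatch easy to see; this is just a sketch of the arithmetic above, nothing more:

# Per-package HBM2 bandwidth from the per-pin data rate (1024-bit interface).
def per_package_gbs(pin_rate_mbps):
    return pin_rate_mbps * 1024 / 8 / 1000  # GB/s

print(per_package_gbs(2000))      # 256.0 -- JEDEC max per package
print(per_package_gbs(2400))      # 307.2 -- Synopsys, 20% above JEDEC
print(2 * per_package_gbs(2000))  # 512.0 -- two packages at JEDEC max
print(2 * per_package_gbs(1200))  # 307.2 -- two packages down-clocked to 1200 Mb/s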
 

Elixer

Lifer
May 7, 2002
10,376
762
126
You know this isn't true, so I don't even know why you bother with it. There's a reason we want the fastest GPU possible to sync up with Freesync.
That IS my point.
Both Freesync and G-Sync were made to compensate for the times when the video card can't handle everything on screen smoothly.
So, as gamers, we want the fastest GPU possible, so we're *not* forced to rely on tech that compensates for the card's shortcomings, but that isn't what these marketing guys want the world to think when their product is at a disadvantage.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I just hope the "launch" at SIGGRAPH puts an end to nonsense blind testing, and that it is not some kind of paper launch where only a few outlets get test cards, on the condition that they only test Doom, in the dark...
 
Reactions: SickBeast

Maverick177

Senior member
Mar 11, 2016
411
70
91
I just hope the "launch" at SIGGRAPH puts an end to nonsense blind testing, and that it is not some kind of paper launch where only a few outlets get test cards, on the condition that they only test Doom, in the dark...

This is AMD you're talking about; they'll find a way to eff it up somehow. Guaranteed.
 
Reactions: tential

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
AMD is waving their hands furiously, trying to show everyone that they can give a similar gameplay experience to a GTX 1080. All of which means that, generally speaking, the GTX 1080 is going to be faster. Anyone intent on buying a 300W card for the same price as a 180W equivalent (or better) that has been out for 13+ months is showing some serious dedication to rewarding piss-poor engineering. Volta is going to drop in the December '17 - March '18 time frame, with a GTX 2070 shaping up to be 15-20% faster for half the power consumption. If you waited 13+ months for Vega to reward you with nothing, why not wait another 6 months for Volta to actually deliver?
 

Elfear

Diamond Member
May 30, 2004
7,115
690
126
AMD is waving their hands furiously, trying to show everyone that they can give a similar gameplay experience to a GTX 1080. All of which means that, generally speaking, the GTX 1080 is going to be faster. Anyone intent on buying a 300W card for the same price as a 180W equivalent (or better) that has been out for 13+ months is showing some serious dedication to rewarding piss-poor engineering.

Rewarding "piss poor engineering" or saving yourself $300? If you have an unlimited budget, an aftermarket 1080Ti and a nice 21:9 G-Sync monitor look like the best bet right now but not many people have unlimited budgets.

Volta is going to drop in the December '17 - March '18 time frame, with a GTX 2070 shaping up to be 15-20% faster for half the power consumption. If you waited 13+ months for Vega to reward you with nothing, why not wait another 6 months for Volta to actually deliver?

March is 8 months away. Isn't that the same logic you've been arguing against with waiting for Vega (i.e. why wait that long for the same or slightly better performance)? That's besides the fact that you'd still have to cough up $300 or so in extra benjamins for a G-Sync monitor.

I'm as disappointed as everyone else if RX Vega performs like the rumors suggest, but I also recognize that there are logical reasons to buy Vega (many of which were outlined previously in a discussion you participated in).
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
What is surprising to me is how AMD's marketing seems to be working on quite a few people here... Guess the marketing team isn't as bad as some say.
The marketing team is also viral, so I wouldn't really say that the marketing is "working" on quite a few people here... I would say to you that a lot of these people are actually AMD viral marketing.


This is trolling. It's not allowed.
Take your tin-foil hat conspiracy tales elsewhere.



esquared
Anandtech Forum Director
 
Last edited by a moderator:
Reactions: tviceman

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
The marketing team is also viral, so I wouldn't really say that the marketing is "working" on quite a few people here... I would say to you that a lot of these people are actually AMD viral marketing.

Eh, I doubt it.

That's a pretty big accusation without much proof...
 

maddie

Diamond Member
Jul 18, 2010
4,787
4,771
136
Someone on the HardOCP forums pointed out that the real-world bandwidth of Vega FE is probably limited by the Synopsys memory controller they are using for HBM. Testing from PCGH.de showed only a bit over 300 GB/sec of bandwidth, despite the fact that the stacks should be able to put through 480 GB/sec at the clock rate they run at. The Synopsys HBM controller supports only up to 307 GB/sec of bandwidth.

This may be part of the explanation for why Vega underperforms so badly in gaming. They were originally going for 512 GB/sec of memory bandwidth but ended up with less than 60% of that, so whatever simulations they ran assumed nearly twice as much memory bandwidth as the card actually got. Hynix/Micron/Samsung/whoever they're getting HBM2 from underperformed, and apparently Synopsys, from whom they're licensing the memory controller, underperformed as well (or they didn't read the specs carefully enough?). This is one of the drawbacks of running an operation on a small R&D budget: you have to rely on third-party vendors and can't always control the quality of their work.
Whoever wrote that has probably reached the wrong conclusion.

They quote:
"Synopsys, Inc. (Nasdaq: SNPS) today introduced its complete DesignWare® High Bandwidth Memory 2 (HBM2) IP solution consisting of controller, PHY and verification IP, enabling designers to achieve up to 307 GB/s aggregate bandwidth, which is 12 times the bandwidth of a DDR4 interface operating at 3200 Mb/s data rate."

Each HBM2 stack has one memory controller, and bandwidth scales linearly with the number of stacks/memory controllers. How could you say this is the max bandwidth if you don't know the number of stacks used?

Vega has 2 stacks = 2 memory controllers = 614 GB/s max assuming everything works properly.


I now see our very own lolfail9001 is the poster on that HOCP forum thread.

Edit: Just saw IntelUser2000 made a similar post.
 