Vega/Navi Rumors (Updated)

Status: Not open for further replies.

Elixer

Lifer
May 7, 2002
10,376
762
126
On a second reading of that TAM slide I noticed the brackets on the right of each bar, so:

the total addressable market is expected to grow, and AMD is expected to have products for the whole TAM only in 2018.

Exactly; that slide has NOTHING to do with the performance of any particular product.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
I have a Ryzen 7 1800X. That OC in the unnamed-card run, with the CPU at 4.225 GHz, is BS. My max stable clock is 4.05 GHz. I think the GTX 1080 Ti result with the Ryzen 1800X at 4 GHz is right on.
 

Pookums

Member
Mar 6, 2017
32
13
36
It's OC'ed to hell and beyond: +150 MHz on the core, +150 MHz on the HBM2, probably sucking in close to 500 W! The CPU is OC'ed a little bit as well. The 1080 Ti is not OC'ed, though; it can easily reach 2050 MHz at a much lower ~350 W. So yeah, not a fair comparison at all. Despite all that, it still couldn't beat the 1080 Ti.

I am guessing the watercooled RX Vega will ship at 1650 MHz, and less than that if they plan on an air-cooled one.

Gamers Nexus had 1700 MHz core and 1100 MHz HBM2 at 400 W with their custom water build for the FE. It's possible Vega could run at 500 W with the proper pin arrangement and BIOS tweaks, as the cards appear to be thermally limited, not necessarily architecturally limited. I'm not sure why someone would want 500 W, but it might be possible.

RX Vega could possibly clock much higher for less wattage if HBM2, and building around it, actually turned out to be the issue. While it should technically take less power and produce less heat, a design could be flawed or use improper voltages, resulting in spiking temperatures and power draw. Since RX Vega will use one 8-Hi stack rather than two, there is a scenario where that could benefit clocks, power draw, and performance.

A 10% increase in clocks results in a ~33% increase in graphics test scores?

I'm confused, unless those are full and cut GPUs.

Vega FE throttles hard. The lower score was likely from the typical 1400 MHz rather than its registered 1600 MHz.

----------------------------------------------------------------

Vega's design issue has more to do with Pascal's huge jump, and less to do with what was likely AMD's initial performance aim. AMD likely expected 1070-level or slightly greater performance to be a rough approximation of Nvidia's high end when they began the design (before the release of either architecture).

Using standard OC vendor cards (which typically OC within a certain TDP envelope, not to max potential) with traditional boost or "one and done" autoclocking disagrees with the common assumption that Maxwell was the biggest architectural leap and Pascal a poor one, at least as far as gaming is concerned. A site that I find matches performance time and time again, whether it's SSD/HDD/RAM/CPU/GPU, is http://cpu.userbenchmark.com/. When I average reviews together for hardware, it tends to match the numbers coming out of this one website closest. It also shows direct comparisons.

Let's use the 80 editions as an example (this way we avoid the various Titans and their differing or nonexistent placement depending on release):

Nvidia Recent history:

- GTX 480 to GTX 580: +20% (http://gpu.userbenchmark.com/Compare/Nvidia-GTX-580-vs-Nvidia-GTX-480/3150vs3157)
- GTX 580 to GTX 680: +40% (http://gpu.userbenchmark.com/Compare/Nvidia-GTX-580-vs-Nvidia-GTX-680/3150vs3148)
- GTX 680 to GTX 780: +35% (http://gpu.userbenchmark.com/Compare/Nvidia-GTX-780-vs-Nvidia-GTX-680/2164vs3148)
- GTX 780 to GTX 980: +30% (http://gpu.userbenchmark.com/Compare/Nvidia-GTX-780-vs-Nvidia-GTX-980/2164vs2576)
- GTX 980 to GTX 1080: +65%!! (http://gpu.userbenchmark.com/Compare/Nvidia-GTX-1080-vs-Nvidia-GTX-980/3603vs2576)
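
For a rough sense of scale, compounding those listed averages (back-of-the-envelope only, taking the percentages at face value) puts the GTX 1080 at roughly 4.9x the GTX 480:

Code:
# Compound the generational jumps listed above:
# 480 -> 580 -> 680 -> 780 -> 980 -> 1080
gains = [0.20, 0.40, 0.35, 0.30, 0.65]
total = 1.0
for g in gains:
    total *= 1 + g
print(f"GTX 1080 vs GTX 480: ~{total:.1f}x")  # prints ~4.9x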

I also decided to search through some recent comparisons and jump to some old review pages just to make sure. In terms of average gaming performance, the jumps appear to match UserBenchmark's suggestions.

I do not understand where this idea that Pascal was a poor jump or architectural change occurred, but it's not true. It might not have made any real IPC improvements per clock or introduced major architectural additions; however, the typical and easy-to-gain boost clocks and the improvements in perf/watt were huge.

Someone custom water cooling and greatly increasing the power envelope of the 980 (or equivalent 900-series cards) might have been able to increase the maximum headroom beyond the average effective speeds in the numbers above, but the vast majority of owners will not do this (or be able to do this), and the resulting power draw would be enormous in comparison to Pascal.

This is the sole reason AMD is so far behind (aside from the money that went into Zen instead). AMD likely expected something around a 35% effective jump during the initial design stages. Pascal, however, made a double generational jump in terms of gaming performance (again, see above). If Pascal had only been a single 35% generational jump, Vega would be performing to expectation. This also leads to the other often-repeated lie that the 60s become 80s and the 70s become Titans. No, generally speaking GTXs replace the card directly above them, plus or minus 10%: the 60s become 70s, the 70s become 80s, and the 80s become Titans. Only Pascal truly broke this trend.

Just for sake of argument:

http://gpu.userbenchmark.com/Compare/Nvidia-GTX-1080-vs-AMD-Vega-Frontier-Edition/3603vs3929

Similar in performance, but generally speaking the FE performs slightly worse. There aren't many benchmarks for it, so it's hard to gauge its exact standing (also consider its tendency to throttle), but this was reflected in the reviews, which showed the poorest performers (the throttlers) 10% above the 1070 and the best performers on equal footing with the 1080.

Why AMD chose to continuously delay the release for a year is odd, but now the question is whether they can play with the power envelope and the single 8-Hi HBM2 stack for the RX, along with driver support, to drive the card to its maximum (though power-hungry) performance, and hope they can manage 10-15% above the 1080 when all is said and done. I see no scenario where it will match a 1080 Ti (that would require a three-generation jump for AMD), so they might as well push the limits of the card as much as possible.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
Since RX Vega will use one 8-Hi stack rather than two

Stopped reading there...

RX Vega will use 2 stacks; it has to, because with 1 it would be 100% bandwidth limited in any scenario. It will use two 4-Hi stacks. And yes, the 4-Hi stacks might clock better, so there's no need to overvolt, and hence less power use. Or overvolt to a higher frequency than 8-Hi and hence be less bandwidth starved.
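
For what it's worth, here is the back-of-the-envelope arithmetic behind that point, assuming HBM2's 1024-bit bus per stack and a Vega FE-like ~1.9 Gbps pin speed (commonly cited figures, not AMD-confirmed for RX Vega):

Code:
# HBM2 bandwidth scales linearly with stack count:
# GB/s = stacks * (bus bits per stack / 8) * pin speed in Gbps
def hbm2_bandwidth_gbs(stacks, pin_gbps=1.89, bus_bits=1024):
    return stacks * (bus_bits / 8) * pin_gbps

print(hbm2_bandwidth_gbs(2))  # ~484 GB/s, the Vega FE figure
print(hbm2_bandwidth_gbs(1))  # ~242 GB/s, halved with a single stack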
 

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
If you look "under the hood", comparing chips based on codename, we see that all but one of those comparisons is previous gen high-end chip vs current gen midrange chip.

If we're looking at architecture, we should I think be looking at the chips based on die sizes and position in the internal stack, rather than looking at the consumer-facing branding.

In other words, we should revise the comparisons as follows:
GTX 560 Ti vs GTX 680: +69% (GF114, 332 mm^2 vs GK104, 294 mm^2)
GTX 680 vs GTX 980: +76% (GK104, 294 mm^2 vs GM204, 398 mm^2)
GTX 980 vs GTX 1080: +63% (GM204, 398 mm^2 vs GP104, 314 mm^2)

All of these show roughly a 70% leap in performance, but looking at it that way, it is the 980 to 1080 that's the smallest leap. Still very good, mind you; AMD wishes their generational increases were this consistent and timely. GM204 is a much bigger die than GK104, but it is a bit of a special case, as there was no node leap between those two. The performance difference really is remarkable, considering.
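
Folding the quoted die areas back into those gains (a crude per-mm^2 normalization, nothing more) makes the same point numerically: GM204's same-node jump is the outlier, and GP104's gain leans heavily on the node:

Code:
# Per-mm^2 view of the same comparisons (gains and die areas from this post)
steps = [
    ("GF114 -> GK104", 0.69, 332, 294),
    ("GK104 -> GM204", 0.76, 294, 398),
    ("GM204 -> GP104", 0.63, 398, 314),
]
for name, gain, old_mm2, new_mm2 in steps:
    per_mm2 = (1 + gain) / (new_mm2 / old_mm2) - 1
    print(f"{name}: {per_mm2:+.0%} per mm^2")  # +91%, +30%, +107%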
 

Pookums

Member
Mar 6, 2017
32
13
36
Stopped reading there...

RX Vega will use 2 stacks; it has to, because with 1 it would be 100% bandwidth limited in any scenario. It will use two 4-Hi stacks. And yes, the 4-Hi stacks might clock better, so there's no need to overvolt, and hence less power use. Or overvolt to a higher frequency than 8-Hi and hence be less bandwidth starved.

You stopped reading at the least related comment in that whole message; even though, given the context of the material discussed, it was used as a positive rather than a negative, you chose to take it negatively? Either two 4-Hi stacks or one 8-Hi stack will clock better than the FE's 16 GB configuration and use less power. Two 4-Hi stacks would be nice, but with how tight-lipped AMD has been, I do not believe we have been privy to whether they are using two 4-Hi stacks or one 8-Hi stack. Both have been discussed in rumors over the past few months. Even though one 8-Hi stack provides less bandwidth, it could likely be clocked somewhat higher than two 4-Hi stacks running simultaneously (not immensely, mind you). While bandwidth would certainly be lower, the faster speed could be more beneficial overall, as I doubt that in most games the memory bandwidth is even close to being saturated on any of the recent architectures. The architecture itself, and how quickly it communicates with memory (not necessarily its maximum possible throughput), is often the bottleneck.

It would be interesting if there were two versions of RX Vega, one for each scenario, to compare. Some have argued it's the bandwidth that is the bottleneck in Vega; others disagreed. While 3 GB and smaller models of other card architectures have bottlenecked recently, 4+ GB models have maintained adequate performance for their architecture. As a result, I doubt either one 8-Hi or two 4-Hi stacks would make a great deal of difference at this point in time. Since most games also do not like to exceed 4 GB of video memory, I'm not sure it would necessarily have a great impact.

Regardless, I used one 8-Hi stack in my comment as it's the configuration I've seen pop up in rumor threads more often than 4-Hi stacks, on this forum and a few others. Nothing more, nothing less.
I care more about the simple fact that having 8 GB instead of the FE's 16 GB should allow RX Vega to exceed Vega FE performance in games, as a result of higher attainable clocks, less heat, and the unlikelihood of the card's bandwidth being saturated in games.

Despite all that, I still see no scenario where RX Vega competes with the 1080 Ti. I'd like to be pleasantly surprised, but that would seem to take some last-minute magic given what we know of the FE.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Paper launch or no, folks like AnandTech always get a week or two with the cards to write their reviews, so on launch day at NDA time they can all simultaneously give us their take on the new hardware.

So the RX "must" be out in the wild in the hands of reviewers now, or very shortly. And even paper launches have joke levels of stock, so what are the odds an e-tailer employee in some remote corner of the world hasn't snagged one and run some benches?

Unless there literally aren't any in existence outside AMD, this is one closely guarded launch.

That's not what I meant.

Cards are usually on shelves when the reviewers' embargo lifts. With no leaks around, that means cards will be in the wild later than "one week or so after launch at SIGGRAPH". My best guess is we are looking at shelf availability and the release of reviews at the beginning of September.

If I remember correctly, Vega FE isn't aimed at the same market as the Quadro cards. Quadro cards are designed for accelerating design software (e.g. AutoCAD), and are guaranteed to a specific accuracy. It's that latter part that's important as that is one of the things that you pay for. Vega FE is more akin to Nvidia's Titan line... capable of professional tasks, but also fine for consumers (i.e. "pro-sumer").

Vega FE is a jack-of-all-trades card, but it's definitely marketed as a PRO card: you can find its product page in the PRO section of the Radeon website (the same section as the Radeon PRO WX cards), and it sports the PRO driver. It is indeed similar to the Titan, but while the Titan is the flagship gaming card also used for GPU computing, the FE seems more like a card created for GPU computing but usable for gaming (and they explicitly told us so).
 

Pookums

Member
Mar 6, 2017
32
13
36
If you look "under the hood", comparing chips based on codename, we see that all but one of those comparisons is previous gen high-end chip vs current gen midrange chip.

If we're looking at architecture, we should I think be looking at the chips based on die sizes and position in the internal stack, rather than looking at the consumer-facing branding.

In other words, we should revise the comparisons as follows:
GTX 560 Ti vs GTX 680: +69% (GF114, 332 mm^2 vs GK104, 294 mm^2)
GTX 680 vs GTX 980: +76% (GK104, 294 mm^2 vs GM204, 398 mm^2)
GTX 980 vs GTX 1080: +63% (GM204, 398 mm^2 vs GP104, 314 mm^2)

All of these show roughly a 70% leap in performance, but looking at it that way, it is the 980 to 1080 that's the smallest leap. Still very good, mind you; AMD wishes their generational increases were this consistent and timely. GM204 is a much bigger die than GK104, but it is a bit of a special case, as there was no node leap between those two. The performance difference really is remarkable, considering.

Going by that comparison has issues of its own, for other reasons associated with variances in TDP and the size of the cuts relative to the full die. However, ignoring that: as you said, there was no 800 series, so there was only one Maxwell generation rather than two. Kepler and Fermi both have two productions in the examples above, so when jumping to a new architecture Nvidia had time to refine further from the previous one. Pascal had no such intermediate step to help it compared to the original Maxwell, but managed that huge leap anyway. As a result, even using that data, it would be best to jump from the 4 series to the 6, to the 9, to the 10, and then divide the first two sets by two, representing two iterations of improvement in the previous architectures. In the end Pascal should still win by a solid margin. This also means Maxwell would end up as the worst architectural leap, defying a common belief. Ironic.
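
To make that "divide the first two sets by two" idea concrete (a sketch only; splitting each two-refresh jump into equal compounded steps rather than literally halving it):

Code:
import math

# Split each two-iteration architectural jump into equivalent compounded single steps
two_step = {"Fermi -> Kepler": 0.69, "Kepler -> Maxwell": 0.76}
for name, gain in two_step.items():
    per_step = math.sqrt(1 + gain) - 1
    print(f"{name}: ~{per_step:.0%} per iteration")  # ~30% and ~33%
print("Maxwell -> Pascal: ~63% in a single step")    # Pascal still clears both by a solid margin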

Regardless, if you ignore stage refinements as well, all of them make large leaps in architectural performance, as you noted.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
The sensible way to do it is surely 980 -> 980 Ti (or Titan if one must) -> 1080 -> 1080 Ti -> pending Volta releases, etc.?

That's all been very consistently spaced out, and I'm quite sure AMD were expecting it to continue when designing Vega, both with Pascal and with Volta & beyond.

The tricky bit is matching it!
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Really?

Here are two of my Firestrike benchmarks at 4.2 GHz:

http://www.3dmark.com/fs/12570880
http://www.3dmark.com/fs/12313850

And now that I know of a few more things I can tweak, I bet I could bench at 4.225.

Crumpet, is your 1800X at 4.2 GHz able to run the following without crashing?
Intel Burn Test at least 10x
AIDA64 stress test, all components, at least 15 minutes
Asus RealBench stress test at least 30 minutes
AIDA64 memory benchmark
Cinebench R15?

Not an all-inclusive suite, but close enough to ferret out problems.

Perhaps I was too harsh with the BS comment. What I meant is that the settings for the CPU run with the unnamed GPU (hint hint: probably an RX Vega running over a 1700 MHz core clock) appear to be a near-death set of settings for both the CPU and the GPU, and it still cannot catch a Ryzen 1800X at 4 GHz (many can do this with adequate cooling) paired with a stock GTX 1080 Ti.

From the little info we have on RX Vega (and goodness, it is really sparse this late in the game), it appears to be a GPU that, pushed to the max, can just beat a GTX 1080 but cannot catch a GTX 1080 Ti.

Perhaps drivers will eventually narrow the gap, or even surpass a GTX 1080 Ti, but by then Nvidia will most likely have regained the lead.
 

Crumpet

Senior member
Jan 15, 2017
745
539
96
Crumpet, is your 1800X at 4.2 GHz able to run the following without crashing?
Intel Burn Test at least 10x
AIDA64 stress test, all components, at least 15 minutes
Asus RealBench stress test at least 30 minutes
AIDA64 memory benchmark
Cinebench R15?

Not an all-inclusive suite, but close enough to ferret out problems.

Perhaps I was too harsh with the BS comment. What I meant is that the settings for the CPU run with the unnamed GPU (hint hint: probably an RX Vega running over a 1700 MHz core clock) appear to be a near-death set of settings for both the CPU and the GPU, and it still cannot catch a Ryzen 1800X at 4 GHz (many can do this with adequate cooling) paired with a stock GTX 1080 Ti.

From the little info we have on RX Vega (and goodness, it is really sparse this late in the game), it appears to be a GPU that, pushed to the max, can just beat a GTX 1080 but cannot catch a GTX 1080 Ti.

Perhaps drivers will eventually narrow the gap, or even surpass a GTX 1080 Ti, but by then Nvidia will most likely have regained the lead.

No, I can only keep it relatively stable at 4.15, though I was brand spanking new to overclocking at the time I was doing these runs and have since learned quite a bit more and may attempt them again.

My point was that it was too quick to call BS on what is admittedly a probably fake result, because Ryzen can be pushed out of its comfort range for short benchmarks such as Firestrike.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Let's look at what happened on the last node shrink.

The biggest 28nm Kepler chip (GK110, 561mm^2) outperformed the biggest 40nm Fermi-refresh chip (GF110, 520mm^2) by 85%. (Link - compare GTX 780 Ti to GTX 580.) Adjust for Kepler's larger die size, and you have a ~78% improvement in performance per mm^2.

Based on this, AMD should reasonably have expected that the GTX 1080 Ti might be as much as 70-80% better than the GTX 980 Ti. (Actual gains range from 54% to 85%, depending on resolution, with higher resolutions showing bigger improvements.) Now, it's true that Nvidia "exceeded expectations" with regard to clock speeds, which meant they could save die space by putting in fewer shaders at a higher clock. But the actual performance gains we got at each consumer GPU tier still roughly matched what we should expect from a node shrink.
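
As a sanity check on that adjustment (a sketch; the exact method behind the ~78% isn't stated, so both obvious ways of netting out the area difference are shown):

Code:
# Normalize the GK110-vs-GF110 gain by die area (figures from this post)
perf_gain = 0.85         # GTX 780 Ti over GTX 580
gk110, gf110 = 561, 520  # die areas in mm^2

# (a) divide the performance ratio by the area ratio
print(f"{(1 + perf_gain) / (gk110 / gf110) - 1:.0%}")  # ~71%
# (b) subtract the fractional area increase
print(f"{perf_gain - (gk110 - gf110) / gf110:.0%}")    # ~77%, closest to the quoted ~78%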

I just don't believe that AMD targeted a measly +30% gain from Fiji->Vega. It makes no sense. They could never have reasonably expected that to be competitive with Nvidia's 16nm high end.
 

french toast

Senior member
Feb 22, 2017
988
825
136
I don't know about anyone else, but those low-bandwidth rumours are really fishy; something does not add up here.
Why have they continued to market this thing for so long when they knew all along its performance would be so disappointing?

On the other hand, if they had a magic driver, why wouldn't they show everyone the true performance, only 2 weeks out, to generate more sales and stop this negative speculation?

I can't put my finger on it; it's either the greatest secret sandbagging cliffhanger turnaround ever, or it's the worst marketing for the worst GAMING GPU in history, nothing in between.
It seems like it can only realistically be the latter at this point, unfortunately, but all of this is extremely weird, I must say.
Usually when a company knows they have a shoddy product that will disappoint, they quietly launch it, straight-to-DVD style.
Bizarre.
 

Krteq

Senior member
May 22, 2015
993
672
136
I can't put my finger on it; it's either the greatest secret sandbagging cliffhanger turnaround ever, or it's the worst marketing for the worst GAMING GPU in history, nothing in between...
Do you remember the Evergreen launch?
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
I don't know about anyone else, but those low-bandwidth rumours are really fishy; something does not add up here.
Why have they continued to market this thing for so long when they knew all along its performance would be so disappointing?

On the other hand, if they had a magic driver, why wouldn't they show everyone the true performance, only 2 weeks out, to generate more sales and stop this negative speculation?

I can't put my finger on it; it's either the greatest secret sandbagging cliffhanger turnaround ever, or it's the worst marketing for the worst GAMING GPU in history, nothing in between.
It seems like it can only realistically be the latter at this point, unfortunately, but all of this is extremely weird, I must say.
Usually when a company knows they have a shoddy product that will disappoint, they quietly launch it, straight-to-DVD style.
Bizarre.

Signs are pointing to it being a disappointing launch.

Remember that even if they had a magic driver, they would have to provide it to the benchmarkers so that their tests reflect it. Which means the magic driver would already need to exist, and if so, why not use it in marketing?

It may be that in 6 months' time RX Vega will perform closer to the 1080 Ti than the 1080, but I don't think that will be the case at launch.
 

beginner99

Diamond Member
Jun 2, 2009
5,223
1,598
136
Two 4-Hi stacks would be nice, but with how tight-lipped AMD has been, I do not believe we have been privy to whether they are using two 4-Hi stacks or one 8-Hi stack
Yes, we know, because (repeating myself) anything less than 2 stacks won't work due to memory bandwidth, which depends on the number of stacks.

Besides that, RX Vega has a 2048-bit memory controller; I wonder if it could even operate with only 1 stack. I highly doubt it. But again, it doesn't matter, because it needs 2 stacks for memory bandwidth. Even a 4 GB version will absolutely have 2 stacks.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Vega's design issue has more to do with Pascal's huge jump, and less to do with what was likely AMD's initial performance aim.

Pascal's performance jump was a two-node jump (28 nm to 16 nm), so 65% is right in line with two typical single-node performance jumps: ~30% x 2.
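
A quick check of that arithmetic (compounding rather than simply adding the two jumps):

Code:
# Two compounded ~30% single-node jumps
per_node = 0.30
two_nodes = (1 + per_node) ** 2 - 1
print(f"~{two_nodes:.0%}")  # ~69%, in line with the ~65% GTX 980 -> 1080 figure above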
 

Krteq

Senior member
May 22, 2015
993
672
136
Besides that, RX Vega has a 2048-bit memory controller; I wonder if it could even operate with only 1 stack. I highly doubt it. But again, it doesn't matter, because it needs 2 stacks for memory bandwidth. Even a 4 GB version will absolutely have 2 stacks.
Actually, the Vega 10 MEMIO interface has 2 blocks of 1024 bits.
 

french toast

Senior member
Feb 22, 2017
988
825
136
Signs are pointing to it being a disappointing launch.

Remember that even if they had a magic driver, they would have to provide it to the benchmarkers so that their tests reflect it. Which means the magic driver would already need to exist, and if so, why not use it in marketing?

It may be that in 6 months' time RX Vega will perform closer to the 1080 Ti than the 1080, but I don't think that will be the case at launch.
This is what I think; it's only 2 weeks out, we would have seen something already.
 