Vega/Navi Rumors (Updated)

Page 152 - AnandTech Forums
Status
Not open for further replies.

tential

Diamond Member
May 13, 2008
7,355
642
121
That wasn't very accurate information, and if it was, I'm sure they would have mentioned it; instead it was "over 60fps".
Yes.
Not denying how much conjecture it took to come up with something like that. But maybe it's not just "barely" over 60 fps but quite a bit over? Who knows....
Another lame GPU launch by AMD.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
If it was beating the 1080 Ti, that demo and now the Raja Twitter posts would make very little sense, like trying to look worse on purpose.

They aren't launching for 2 months, so why would they want Nvidia to know what performance they have? They aren't going to stop people from buying other cards now, and they don't want the hype to die down before they are released. We'll see people test the Frontier Edition cards soon anyway and get a rough idea of how well they'll game.
 
Reactions: Jackie60

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
They aren't launching for 2 months, so why would they want Nvidia to know what performance they have? They aren't going to stop people from buying other cards now, and they don't want the hype to die down before they are released. We'll see people test the Frontier Edition cards soon anyway and get a rough idea of how well they'll game.
I really doubt that anything Nvidia does or doesn't do has anything to do with Vega. Nvidia has already played all their cards for Pascal, and consumer Volta is a long way off.
 
Reactions: tviceman
Mar 10, 2006
11,715
2,012
126
I really doubt that anything Nvidia does or doesn't do has anything to do with Vega. Nvidia has already played all their cards for Pascal, and consumer Volta is a long way off.

Consumer Volta isn't that far off. We know GV102 is coming in early 2018, and GV104 could very well come before that.
 
Reactions: Phynaz

Magee_MC

Senior member
Jan 18, 2010
217
13
81
Consumer Volta isn't that far off. We know GV102 is coming in early 2018, and GV104 could very well come before that.

How do we know that? I've heard GV100 announced by NV, but I haven't heard anything else. Is that speculation, or did I miss an announcement?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
How do we know that? I've heard GV100 announced by NV, but I haven't heard anything else. Is that speculation, or did I miss an announcement?

A GDDR6 memory maker said that an upcoming card in development, scheduled for an early 2018 release, has a 384-bit bus and would have an effective bandwidth of 768 GB/s at a 16 Gbps VRAM speed. Nothing official was or is announced, but connecting the dots and looking at release cadences lends a massive amount of credence to an Nvidia-based GPU.
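The 768 GB/s figure checks out arithmetically: effective bandwidth is just bus width times per-pin data rate. A quick back-of-envelope sketch in Python (the function name is mine, purely illustrative):

```python
def effective_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Effective memory bandwidth in GB/s: bus width (bits) times per-pin rate (Gbps), over 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

# The SK Hynix figures: a 384-bit bus at 16 Gbps per pin
print(effective_bandwidth_gbs(384, 16))  # 768.0
```

The same formula gives the 1080 Ti's 484 GB/s from its 352-bit bus at 11 Gbps.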
 

SteveGrabowski

Diamond Member
Oct 20, 2014
7,130
6,001
136
A GDDR6 memory maker said that an upcoming card in development, scheduled for an early 2018 release, has a 384-bit bus and would have an effective bandwidth of 768 GB/s at a 16 Gbps VRAM speed. Nothing official was or is announced, but connecting the dots and looking at release cadences lends a massive amount of credence to an Nvidia-based GPU.

Do you have a link on that? Not that I'd be looking at a GV102 card, but if there is one in early 2018 that means there will probably be a GV104 card a few months earlier, and that's what I'd be looking at upgrading to. So that would be awesome to see GV104 in 2017.
 

Elixer

Lifer
May 7, 2002
10,376
762
126
Do you have a link on that? Not that I'd be looking at a GV102 card, but if there is one in early 2018 that means there will probably be a GV104 card a few months earlier, and that's what I'd be looking at upgrading to. So that would be awesome to see GV104 in 2017.
https://www.skhynix.com/eng/pr/pressReleaseView.do?seq=2086&offset=1

Seoul, April 23, 2017 – SK Hynix Inc. (or ‘the Company’, www.skhynix.com) today introduced the world’s fastest 2Znm 8Gb(Gigabit) GDDR6(Graphics DDR6) DRAM. The product operates with an I/O data rate of 16Gbps(Gigabits per second) per pin, which is the industry’s fastest. With a high-end graphics card, this DRAM processes up to 768GB(Gigabytes) of graphics data per second. SK Hynix has been planning to mass produce the product for a client to release high-end graphics card by early 2018 equipped with high performance GDDR6 DRAMs.

GDDR is specialized DRAM for processing an extensive amount of graphics data quickly according to what graphics cards command in PCs, workstations, video players and high performance gaming machines. Especially, GDDR6 is a next generation graphics solution under development of standards at JEDEC, which runs twice as fast as GDDR5 having 10% lower operation voltage. As a result, it is expected to speedily substitute for GDDR5 and GDDR5X. SK Hynix has been collaborating with a core graphics chipset client to timely mass produce the GDDR6 for the upcoming market demands.

“With the introduction of this industry’s fastest GDDR6, SK Hynix will actively respond to high quality, high performance graphics memory solutions market” said senior vice president Jonghoon Oh, the Head of DRAM Product Development Division. “The Company would help our clients enhance their performance of high-end graphics cards” he added.

GDDR6 is regarded as one of necessary memory solutions in growing industries such as AI(Artificial Intelligence), VR(Virtual Reality), self-driving cars, and high-definition displays over 4K to support their visualization. According to Gartner, average graphics DRAM density in graphics cards is to be 2.2GB this year and 4.1GB in 2021 with CAGR of 17%.
 

Elixer

Lifer
May 7, 2002
10,376
762
126
AdoredTV says he thinks the reason is driver development.

Actually quite believable in AMD's case. Remember, Vega's back end is quite different from GCN's. So, it could take AMD a while to get its gaming performance up to scratch.
Don't think this would be the case.
AMD's driver team would have been working to get all features implemented for quite a long time. That is how they find bugs and determine whether they have to re-tape or not.
Optimization is another story, and as we can see from the Polaris launch, it does take time for AMD to squeeze performance out of their cards.

This really just smells like a capacity/yield issue at the fab (GloFo strikes again), and/or there simply isn't enough HBM2 to satisfy demand. With Nvidia soaking up all they can get for their 'pro' cards, they can afford to pay more for HBM2 and still make a good profit.
That is the problem with depending on HBM2 with no alternative (GDDR5X) in sight.

Both Polaris and Vega were developed in China, which may explain why they are so underwhelming.
Eh? That is hogwash.
The problem is that if it takes two Vega GPUs to beat a single top-tier Nvidia card, then Vega is still coming up short.
You are making assumptions. We have no idea what AMD was trying to prove with what they did.
No one wants to screw around with all the drawbacks of multi-GPU if they can help it. Unless AMD has rolled out some secret sauce that fixes things and causes two GPUs to transparently appear as one big GPU in 100% of applications, then this is basically admitting a shortfall.
Nobody really wants crossfire/SLI if they can help it. The picture is pretty much the same for the nvidia camp.

If one Vega could beat "any single GPU" then Raja would have said so.
He failed, and should be fired. So should the Chinese development team, and R&D brought back to the USA.
Again with the assumptions, and no, he wouldn't have. Since he is being very tight-lipped, they don't want to say anything of value yet... While I think this is a mistake, I don't run AMD.
Polaris was also developed by the same people, and that turned out fine.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Or it could be that the FE has seen a better-than-expected response in the professional market, and the high demand for it has affected the volume of RX Vega that was supposed to ship. This could explain why they're launching the FE earlier, which is a first for AMD.
 
Reactions: Jackie60

Paratus

Lifer
Jun 4, 2004
16,849
13,785
146
A couple thoughts.

1) If GV102 is getting a 384-bit bus with 768 GB/s, I'd expect Navi to get a 4-stack of HBM2 and push 960 GB/s to 1 TB/s.

(I expect Vega to be fine with 2 stacks and 480-512 GB/s)

2) Anyway, my estimate of Vega's performance seems to put it at or above the 1080 Ti. Let's assume a 1650 MHz clock speed and a 520 mm^2 die size based on rumors.
  • Assuming a linear increase, Fury X with a 57% increase in clock and a 50% increase in transistors would be ~36% faster than the 1080 Ti @ 4K
  • RX 580 with a 23% clock speed increase and a 124% increase in die size would be 32% faster than the 1080 Ti @ 4K
  • RX 570 with a 28% clock speed increase and twice the SPs, TMUs, and ROPs would be 99% of the 1080 Ti @ 4K and 21% faster at 1080p (1080p is probably the better estimate, as the 570 drops off significantly at 4K vs the Ti or Fury)
Now, performance doesn't normally increase linearly, so that leaves 21-36% to give back for nonlinear performance increases and resources going to non-gaming-related tasks while still remaining at or above the 1080 Ti.

So looking at Prey and some limited Fury X clock speed scaling, while assuming ALL architecture improvements have no gaming impact, Vega should be at least 50% faster than Fury.

That would put a single Vega comfortably above 60fps let alone what two should do.

If it truly takes two Vegas then AMD screwed the pooch.
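The linear-scaling estimates above boil down to one multiplication; a sketch in Python with a placeholder baseline, since the actual TPU FPS numbers aren't reproduced here:

```python
def linear_estimate(base_fps: float, clock_scale: float, width_scale: float) -> float:
    """Optimistic upper bound: assume FPS scales linearly with both clock speed and unit count/transistors."""
    return base_fps * clock_scale * width_scale

# Hypothetical Fury X baseline of 40 FPS at 4K (a placeholder, not real benchmark data):
# +57% clock, +50% transistors => ~94 FPS under the linear assumption
print(linear_estimate(40.0, 1.57, 1.50))
```

The post then discounts such a result by 21-36% for nonlinear scaling and non-gaming resources, which is exactly the "give back" step above.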
 
Reactions: Kuosimodo

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
I really doubt that anything Nvidia does or doesn't do has anything to do with Vega. Nvidia has already played all their cards for Pascal, and consumer Volta is a long way off.

They can do pre-emptive price drops instead of having to wait. They could sell the 1080 for $300 if they wanted to completely price AMD out. There is a reason AMD hasn't dropped speeds and pricing for RX.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
We all know what the excuse will be if the Frontier Edition is slower than some expect in games... "The drivers aren't ready! Wait for RX!"

Excuse? It's literally what AMD has already said. Frontier is likely going to ship with workstation drivers, not the normal consumer ones with all the game optimizations and "cheats" that would break scientific work.
 
Reactions: Erithan13

SpaceBeer

Senior member
Apr 2, 2016
307
100
116
They can do pre-emptive price drops instead of having to wait. They could sell the 1080 for $300 if they wanted to completely price AMD out. There is a reason AMD hasn't dropped speeds and pricing for RX.
The main problem with this strategy is that Nvidia would actually earn less than they do now. Because if the GTX 1080 were $300, the GTX 1060 6GB would have to be around $150. Some potential GTX 1060 buyers would actually buy a GTX 1080 or 1070 instead, but most of them would still go for the GTX 1060 and save some money.

I've seen some figures from one of the EU's PC retailers, where the cheapest Skylake i3-6100 outsold all of the 2c/2t Skylake Celerons and Pentiums combined by a factor of 2. Now that Intel has released 2c/4t Pentiums in the €60-80 range, the G4560 alone outsells the i3-7100 by a factor of 3. We know all 2c/4t CPUs have (almost) the same production cost, so Intel actually earns $40-$50 less per 2c/4t CPU than they did over the last 5 years.

Therefore, Nvidia won't lower the prices of Pascal cards (significantly) until Volta is released, no matter what AMD does with Vega.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Market cap is simply the valuation of the company and isn't how "large" the company is on a scale of getting things done.

Not disagreeing with you that amd is a smaller company with less resources.

Market cap just isn't the correct metric to use for the context of your quote since you're talking about what amd has to work with which would be assets, and not their market cap which fluctuates based on investor sentiment.
I know, it was simply the most easily Googled number for a 1-minute post (rather than spending 10 minutes going through quarterly financial reports hoping to find numbers they don't advertise in the first place). IIRC, AMD still has quite a bit of debt and not much cash on hand, especially compared to Nvidia (which has been a very, very profitable company for years).

Just to reiterate: for all of those out there disappointed with Vega being "late": I get it (especially if you need a new GPU ASAP), but at this point we should all be pretty amazed at AMD's ability to stay competitive in both the CPU and GPU markets when comparing them to the size of their competitors. That isn't saying that we should put up with lackluster performance or constant delays, but it is saying that we should all chill out just a little.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
Don't think this would be the case.
AMD's driver team would have been working to get all features implemented for quite a long time. That is how they find bugs and determine whether they have to re-tape or not.

Sure, but this is not as easy as it looks. For the earlier GCNs the driver wasn't a problem. Every new generation introduced some minor changes, but the concept of the design was unchanged, so development was straightforward: they just had to support some new features. Vega is much more different, because the architecture introduces some "never done before" changes, and the software team has never seen these in hardware before, yet they have to come up with efficient implementations. The driver may not work well at the start, but it should be efficient enough to release the hardware, and from then on they can find the 15-20 percent of "untouched speed" still in the GPU over the coming months. So overall the implementation must be relatively close to the hardware maximum, usually 80-85 percent. Until they get results that close to the simulations, they will not release the hardware.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Two Vegas are faster than a single 1080 Ti.

So are two 1060s. Is he saying Vega is as fast as a GTX 1060?

Two 1060s would actually be slower in Prey than a single 1080 Ti; you need two 1070s to beat a single 1080 Ti. 4K bench.
A couple thoughts.

1) If GV102 is getting a 384-bit bus with 768 GB/s, I'd expect Navi to get a 4-stack of HBM2 and push 960 GB/s to 1 TB/s.

(I expect Vega to be fine with 2 stacks and 480-512 GB/s)

2) Anyway, my estimate of Vega's performance seems to put it at or above the 1080 Ti. Let's assume a 1650 MHz clock speed and a 520 mm^2 die size based on rumors.
  • Assuming a linear increase, Fury X with a 57% increase in clock and a 50% increase in transistors would be ~36% faster than the 1080 Ti @ 4K
  • RX 580 with a 23% clock speed increase and a 124% increase in die size would be 32% faster than the 1080 Ti @ 4K
  • RX 570 with a 28% clock speed increase and twice the SPs, TMUs, and ROPs would be 99% of the 1080 Ti @ 4K and 21% faster at 1080p (1080p is probably the better estimate, as the 570 drops off significantly at 4K vs the Ti or Fury)
Now, performance doesn't normally increase linearly, so that leaves 21-36% to give back for nonlinear performance increases and resources going to non-gaming-related tasks while still remaining at or above the 1080 Ti.

So looking at Prey and some limited Fury X clock speed scaling, while assuming ALL architecture improvements have no gaming impact, Vega should be at least 50% faster than Fury.

That would put a single Vega comfortably above 60fps let alone what two should do.

If it truly takes two Vegas then AMD screwed the pooch.

Regarding Prey, there is some evidence to suggest that a single Vega FE is actually comfortably beating the 1080 Ti.

A redditor counted 2.7 tears per frame, which would mean that 2x Vega FE was running at 2.7 times the refresh rate of the monitor/projector used. If we assume 90% crossfire scaling, that would put a single Vega FE at about 1.4 times the refresh rate. So if the refresh rate was 30 Hz, we're looking at 43 FPS, and if it was 60 Hz, we're looking at 85 FPS. 43 FPS seems highly implausible, since that would mean Vega FE has 0% improvement over Fury X, which leaves 85 FPS as the most plausible.

Furthermore, the Prey benchmark you posted from TPU was done with the old launch version (including the launch patch); with the new 1.2 patch, performance has gone down significantly (SSR was broken before and thus not taking its normal performance hit; in other words, the older version was effectively running with SSR off). If AMD was using the newest version in their demo, the 85 FPS they got would effectively be equal to about 100-110 FPS under the older version.

100-110 FPS would be 20-30% faster than a 1080 Ti.
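The tear-counting arithmetic can be sketched like this (the 90% crossfire scaling and both refresh-rate scenarios are the assumptions stated above; the function is illustrative):

```python
def single_gpu_fps(tears_per_frame: float, refresh_hz: float, cf_scaling: float = 0.9) -> float:
    """Back out a single card's FPS from tearing seen on a dual-GPU demo.
    Two cards render at tears_per_frame times the refresh rate; with 90% crossfire
    scaling they do (1 + 0.9) cards' worth of work, so divide by (1 + cf_scaling)."""
    return tears_per_frame / (1 + cf_scaling) * refresh_hz

print(round(single_gpu_fps(2.7, 30)))  # 43 FPS if the display was 30 Hz
print(round(single_gpu_fps(2.7, 60)))  # 85 FPS if it was 60 Hz
```

Note how sensitive the conclusion is to the unknown refresh rate: the single-card estimate doubles between the two scenarios.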
 
Reactions: Magic Hate Ball

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
Don't think this would be the case.
AMD's driver team would have been working to get all features implemented for quite a long time. That is how they find bugs and determine whether they have to re-tape or not.
Optimization is another story, and as we can see from the Polaris launch, it does take time for AMD to squeeze performance out of their cards.

That is what I meant: Vega is underperforming because the drivers are not up to scratch yet.

If you think about what a major change Vega is, it's a big job. And they also have to work on both DX11 and DX12 drivers. Over the past few years, they have optimized GCN so much that it is probably as good as it gets. But with Vega, there is a lot of work still to do. I think this is why people are currently underwhelmed with the Vega demos shown so far: they haven't really tapped the potential of the card.

Yes, wishful thinking maybe, but with precedent. Look at pretty much every AMD product ever released, especially graphics cards: as time goes by and drivers mature, they get better.

This really just smells like a capacity/yield issue for the fab (Gloflo strikes again), and/or HBM2 is just not enough of it to satisfy demand. With nvidia soaking up all they can get for their 'pro' cards, they can afford to pay more for HBM2, and still make a good profit.
That is the problem with depending on HBM2 with no alternative (GDDR5X) in sight.

I wonder why AMD is not using GDDR5X? A licensing issue, maybe? It seems like a good technology: a cost-efficient way to implement higher memory bandwidth without requiring highly specialized HBM2.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Two 1060s would actually be slower in Prey than a single 1080 Ti; you need two 1070s to beat a single 1080 Ti. 4K bench.


Regarding Prey, there is some evidence to suggest that a single Vega FE is actually comfortably beating the 1080 Ti.

A redditor counted 2.7 tears per frame, which would mean that 2x Vega FE was running at 2.7 times the refresh rate of the monitor/projector used. If we assume 90% crossfire scaling, that would put a single Vega FE at about 1.4 times the refresh rate. So if the refresh rate was 30 Hz, we're looking at 43 FPS, and if it was 60 Hz, we're looking at 85 FPS. 43 FPS seems highly implausible, since that would mean Vega FE has 0% improvement over Fury X, which leaves 85 FPS as the most plausible.

Furthermore, the Prey benchmark you posted from TPU was done with the old launch version (including the launch patch); with the new 1.2 patch, performance has gone down significantly (SSR was broken before and thus not taking its normal performance hit; in other words, the older version was effectively running with SSR off). If AMD was using the newest version in their demo, the 85 FPS they got would effectively be equal to about 100-110 FPS under the older version.

100-110 FPS would be 20-30% faster than a 1080 Ti.
If one card could do 85 fps and beat a 1080 Ti, they wouldn't have used two of them, and they would be shouting about it from the rooftops.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
If one card could do 85 fps and beat a 1080 Ti, they wouldn't have used two of them, and they would be shouting about it from the rooftops.
Maybe, but they might also be keeping quiet to avoid raising expectations too high by showing off best-case-scenario games.
I wonder why AMD is not using GDDR5X? A licensing issue, maybe? It seems like a good technology: a cost-efficient way to implement higher memory bandwidth without requiring highly specialized HBM2.
Hasn't this been discussed to death already? HBM has two drawbacks: cost (the need for the interposer seems to be the main thing here, even if the HBM stacks themselves are a bit more expensive than a similar amount of GDDR5(X)) and availability. The first is obviously not that big a problem (they sold Fiji on a bigger interposer two years ago for $550, and that was with four stacks of HBM1 and their first attempt at an interposer), and the second will solve itself in time.

At the same time, the areas where HBM beats GDDR5(X) are clearly important to AMD: power usage (for equivalent amounts of RAM, HBM1 was said to use around half the power, and HBM2 is supposedly even lower), board complexity and size (a big advantage in the AIB market for several reasons, and huge in compute/enterprise/content creation/datacenter, where you want as many GPUs as possible in as small a space as possible), die size (a single 1024-bit HBM bus segment is barely bigger than a 64-bit GDDR5 bus segment), and bandwidth without needing a massive bus on the PCB.

Now, if they could implement a decently sized GDDR5X controller on the chip as well without making the die significantly bigger, they probably would. But that's impossible. The question then becomes: do they ditch HBM altogether (which would have many drawbacks), make a separate mid-to-high-end die with GDDR5X (which would likely be of comparable size due to the VRAM interface, i.e. no cheaper to produce), or implement both on the same die (making that die even bigger)? My guess would be that AMD sees more possible gains than losses in using only HBM2 for the upper midrange to high end (e.g. ~1070 - 1080 Ti+) than in any other potential solution, especially for non-gaming uses, which is where they're looking to make some actual money.
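To put rough numbers on the bandwidth point: two HBM2 stacks reach GDDR5X-class bandwidth at a fraction of the per-pin signalling rate, which is what allows the narrow PCB routing. The pin rates below are assumptions within the respective spec ranges, not confirmed Vega figures:

```python
# Aggregate bandwidth (GB/s) = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte
hbm2_two_stacks = 2 * 1024 * 2.0 / 8   # 512.0 GB/s: a 2048-bit bus at only 2 Gbps per pin
gddr5x_384bit = 384 * 10.0 / 8         # 480.0 GB/s: a 384-bit bus needs 10 Gbps per pin
print(hbm2_two_stacks, gddr5x_384bit)
```

The slow, very wide HBM bus is why it saves power and board area, while the fast, narrow GDDR5X bus needs long, carefully routed traces on the PCB.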
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
If one card could do 85 fps and beat a 1080 Ti, they wouldn't have used two of them, and they would be shouting about it from the rooftops.

The whole point of the demo was to show off Threadripper and its I/O capabilities, not Vega. As such, you would want to add enough GPUs that there is no GPU bottleneck, and then show that Threadripper is still capable of keeping up. It's still a crappy demo, mind you; it should instead have been done as a comparison against a slower CPU, like the DOTA 2 streaming demo, which didn't feature FPS numbers either but still did a perfect job of getting the message across.

Either way, though, this is all irrelevant. I just found this 60 FPS recording of the demo, and it quite clearly shows that the refresh rate was 30 Hz, not 60 Hz, so all of the numbers I mentioned above should be cut in half.

So unless crossfire scaling was terrible, a single Vega FE is basically the same speed as a Fury X (possibly 20-30% faster if they were running the 1.2 version of Prey). I can only assume these drivers must be extremely premature; otherwise it just doesn't make any sense*.

*Another possibility, of course, is that the frame with 2.7 tears was very unrepresentative of the performance.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
I have to ask (I haven't watched the stream): was the demo live? If not, does the playback Hz necessarily match the recorded FPS of the video? It would be an utterly lazy oversight if the projector was set to 30 Hz or couldn't do 60, but it's possible, I suppose. And if it was live, I assume whoever was playing wasn't using the projector as the main display, but as a mirror. The question then is how the signal was split. I'd assume they used some sort of HDMI splitter (rather than Windows display controls; can Windows even render the same game mirrored on two displays?), which again raises the question of whether the splitter might be "downsampling" a 60 Hz signal to 30 Hz by, for example, skipping every other frame.

Also, I know I'm not very sensitive to that kind of stuff, but how does the recording "clearly show" that the refresh rate was 30Hz?
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
Well, they are artists' impressions of the GPU, which may or may not be correct. But you're right: basically, the GeForce variants will have most of them stripped or crippled, just like their Pascal counterparts, e.g. a 1:16 FP64 rate etc., kept just for software support/compatibility purposes.

I'm thinking the gaming Vega will be the same.
Maxwell and later gaming chips are actually 1:32 FP64.

On the AMD side it's 1:16.
 