Vega/Navi Rumors (Updated)


guachi

Senior member
Nov 16, 2010
761
415
136
Keep in mind the complexity of driver development for a new GPU architecture. Maybe they just didn't have a finished driver to show off a competitive product in a straight up comparison like they did with their CPU months before launch.

I hadn't thought of that. If I remember correctly, we've seen recent comments from AMD about ensuring the Vega drivers are good.

The initial testing by websites is what matters. 5% faster in tested games a month after release isn't particularly helpful.
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
Connecting them with fabric is not coming until Navi in 2019, and even if it is ready now, I don't know how they'd fit two 500mm^2 dies and four stacks of HBM on an interposer.

Is there a limit to the size of the interposer? I haven't heard that there is.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Didn't spot that this was a reply directed at me. This doesn't qualify as a board shot, though. Sure, it shows the PCB extending the length of the shroud. It also has a matte rubberized cover, not the brushed metal aesthetic shown in their renders, not to mention the 6+8 power delivery rather than 8+8. Engineering sample? Non-final board? No matter what, the core of it is that we don't know anything about what's on the board. From what we know of Vega's die/package size, that board could be 1/3 or more bare PCB to make room for a honking huge cooler.


Well, whatever Vega's performance is, place TWO of those on one card using fabric....
We have no indication that Vega has the capability of having multiple GPUs act as one over Infinity Fabric. While this is theoretically possible with the technology, and no doubt something AMD is working towards, there is unfortunately nothing indicating that this is ready for Vega. Navi, on the other hand, might have it - who knows? And as has been stated by multiple people above: a fast interconnect does not make two GPUs appear or act like one to the OS and software. That's an entirely different ball game.

AMD has nothing to fear from Pascal, which it will most likely out-muscle. GP102 does not match the full Vega SKU.
Yes, Pascal is an architecture that, given the constraints of GP100 (GP102), is not as powerful a design as Vega.
Please explain how you know this. Please. I've been asking you this for months, and you seem entirely unable to come up with an argument. Do you have any basis for this outside of your own gut feeling?

Not sure we needed a whole post of you trying to understand Pascal. We already know how well the Titan Xp & Ti handle 4K, so we already know Pascal's gaming limit, and it doesn't handle 4K gaming with aplomb. Because if it did, nobody would be looking toward Vega or Volta.
Really? You seem to have a very limited grasp of how tech enthusiasts think, and how new games are always ever more demanding. The way you're talking, everyone would stop looking at new GPUs if one were to appear on the market capable of fulfilling their wishes. That's bull, plain and simple. There's always a new, hot, game with amazing graphics launching in a few months that promises to crush even the fastest GPUs. Thus, gamers and tech enthusiasts are always looking for the next piece of shiny silicon.
Or, just have not admitted to yourself, (I am right) that Vega X2 is real. But just can't bring yourself around to connecting all the dots & admitting it.
You are the ones hiding from reality.
Vega x2 is real...
Vega X2 is real.
Slightly OT here, but as someone who does textual analysis for a living, I find it fascinating that you consistently use language that conveys belief rather than knowledge. Your writing has striking similarities with the manners of speech used in various charismatic religions. Makes me wonder whether this is conscious or not, and regardless of this, what your thinking behind your way of expressing yourself is. As you seem singularly unwilling to answer critical questions with anything but reiterations of your unfounded statements, unfortunately we're not getting any closer to this. But nevertheless, it's truly fascinating.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Is there a limit to the size of the interposer? I haven't heard that there is.
Well, interposers can exceed the reticle size of the production process, but the bigger they get, the more expensive they are. An interposer is made from a silicon wafer, after all, and those don't come cheap, even when manufactured on a very mature ~60nm process. And given that Intel (IIRC) is specifically looking at splitting the interposer into multiple smaller parts to cut costs for their uses, I'd say cost rising with size is a very real concern. One of the main concerns for Fiji was the size of the interposer needed to fit a ~600mm^2 GPU plus four stacks of HBM.
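To put some rough numbers on the cost side, here's a quick sketch using the standard gross-die-per-wafer approximation. The interposer areas are illustrative guesses (Fiji's was around 1000mm^2), not actual product figures.

```python
import math

# Toy estimate of how many interposers fit on a 300 mm wafer as the
# interposer grows. Fewer per wafer means more silicon cost per unit,
# before yield losses (which also worsen with area).
WAFER_DIAMETER_MM = 300.0

def interposers_per_wafer(area_mm2: float) -> int:
    """Standard gross-die-per-wafer approximation for a square die."""
    radius = WAFER_DIAMETER_MM / 2
    gross = (math.pi * radius**2 / area_mm2
             - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * area_mm2))
    return max(int(gross), 0)

for area in (1000, 1500, 2000, 2500):
    print(f"{area:>5} mm^2 interposer -> ~{interposers_per_wafer(area)} per wafer")
```

Roughly halving the count per wafer roughly doubles the silicon cost per interposer, which is presumably why splitting it into smaller pieces looks attractive.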
 
Reactions: Magee_MC

jpiniero

Lifer
Oct 1, 2010
15,177
5,717
136
How can Infinity Fabric possibly balance work between GPU resources AND be as fast as a single die? It's counterintuitive.

It wouldn't be. It'd be a tradeoff. If/when they do this, you would consider it to be a single GPU composed of multiple dies.
 

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
Is there a limit to the size of the interposer? I haven't heard that there is.
Maybe it's possible (though expensive and crazy), but there's no need for a 500mm^2 die if AMD intended to use Vega in such a way. It seems pretty clear to me that RTG is going for the highest single die performance they thought was appropriate.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
AMD's tried X2 cards in the past, and they haven't worked.

What you think Vega has isn't coming until Navi.
I don't care. I want one. If they make one I'll get one this time around if I can find 8-10 games I want to play that support it. Shouldn't be hard.
 

GoodRevrnd

Diamond Member
Dec 27, 2001
6,801
581
126
Well, don't go out on a limb there, or anything!


I'm just saying, there are a couple in this thread furiously jerking themselves off, certain it will be 30% faster than a Ti, and several others equally certain it'll land well behind a 1070.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Is there a limit to the size of the interposer? I haven't heard that there is.

There sure is one, but Nvidia just announced an 815mm^2 chip to be put on an interposer along with 4 stacks of HBM2, so it's probably possible to get an interposer large enough for two Vegas and 4 stacks. Who knows.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
There sure is one, but Nvidia just announced an 815mm^2 chip to be put on an interposer along with 4 stacks of HBM2, so it's probably possible to get an interposer large enough for two Vegas and 4 stacks. Who knows.
They had to use two exposures for that, which increases costs.
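To put rough numbers on that, a dual-Vega interposer would land well past a single reticle. All figures below are assumptions (the ~500mm^2 die size comes from this thread; the reticle limit and HBM2 footprint are public ballpark numbers), so treat this as a sanity check, not a spec.

```python
import math

# Back-of-the-envelope check of why a dual-Vega interposer would need
# reticle stitching (multiple exposures). All areas are rough assumptions.
RETICLE_LIMIT_MM2 = 26 * 33   # ~858 mm^2, a typical lithography field size
VEGA_DIE_MM2 = 500            # ~500 mm^2 per this thread's estimate
HBM2_STACK_MM2 = 92           # rough footprint of one HBM2 stack
SPACING_OVERHEAD = 1.2        # guessed margin for die spacing and routing

needed = (2 * VEGA_DIE_MM2 + 4 * HBM2_STACK_MM2) * SPACING_OVERHEAD
print(f"Estimated interposer area: ~{needed:.0f} mm^2")
print(f"Single-exposure limit:     ~{RETICLE_LIMIT_MM2} mm^2")
print(f"Exposures needed, at best: {math.ceil(needed / RETICLE_LIMIT_MM2)}")
```

Under these assumptions you'd need at least two stitched exposures, which is exactly the cost adder described above.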
 
Mar 11, 2004
23,280
5,722
146
There sure is one, but Nvidia just announced an 815mm^2 chip to be put on an interposer along with 4 stacks of HBM2, so it's probably possible to get an interposer large enough for two Vegas and 4 stacks. Who knows.

Why couldn't they just make the interposer the board? That's more or less the next step, isn't it: make the interposer and the board the same thing, and shrink both to as small a size as possible. I think one reason AMD is sticking with HBM is that it has a lot of potential to shrink their form factor, enabling a smaller footprint (especially desired on the consumer side) and higher compute density (favorable in the pro/enterprise space).

They could split the typical board into sections, so instead of AMD sending the chip (or, with HBM, the whole interposer assembly) to be soldered onto the board, they'd send something like an MXM-style module that the partners then slot into whatever form factor they're going with.

I've actually been wondering for some time why they haven't done that for the connectors alone, since I think there are quite a few benefits. For starters, it would open things up so that coolers could push more of the heat out of the case. It would also mean you could easily swap in a new connector if you needed one (instead of resorting to adapters), and you could put the connectors wherever you want; they could even add some to, say, the front panel. It would be something you could keep between cards, saving some cost on each one. Not only that, but many mainboards now come with video connectors already, which would make the transition even easier. And they already make headless cards for use in different environments.

Would it be possible to make a chip that interfaces on its sides/edges, so that it has two faces, effectively doubling the area you could attach a heatsink to? Or, on a dual card, put the GPUs on opposite sides of the board. Wasn't one of Nvidia's dual cards built with the two GPUs on separate boards entirely? (Yep, the 7950GX2.) Those were basically two single-slot cards sandwiched together (and sharing video outputs). Which, again, moving the video ports off the card would enable single-slot cards that push heat out of the case better, too.
 
Last edited:

Jackie60

Member
Aug 11, 2006
118
46
101
As one of the few people to have owned and experienced Titan (Pascal) SLI on a 5960X at 4.6GHz at 4K for 6 months, I am desperate for 1080 Ti performance or better with 70-90% CrossFire scaling. I can say without doubt that while a single 1080 Ti can do 4K Ultra quite well in some games, "aplomb" isn't the word I'd use. You need two to shine at 4K in GTA V, for example (I don't buy the idea that you just turn off MSAA at 4K), and while games such as SWBF run well, The Witcher 3 also needs two to look decent at 4K, and those are not 2017 games.

You keep repeating this statement, without arguing for it, as if it's anything but the complete nonsense that it is. (First off, Pascal is an architecture, not a chip, but let's leave that aside and talk about GP102, which is the most relevant chip here.) The 1080 Ti is a card with a cut-down GPU, not really pushed to its limits in terms of clocks (at least in FE form), yet it handles 4K60 on Ultra settings with aplomb, even with the stock blower cooler. It is - still - the most powerful consumer-facing GPU out there. And sadly, we don't really have any data showing that Vega can clearly beat it. Do we have indications that Vega can match it? Arguably, yes. But beat it? No. Not that we've seen. If you have information to contradict this, show it to us.

Now, before you start accusing me of being an Nvidia shill (as you have before, repeatedly, despite the fact that I've never owned an Nvidia GPU in my life), please try not to take this personally, and look at the data we have, the information we have, and what reasonable estimates can be made from them. What is making you so sure that Vega must beat GP102 across the board, when AMD hasn't shown us this? Previously, your argument has boiled down to "it's a newer architecture, so it must be faster," which isn't even remotely logical when comparing different architectures from different manufacturers. That's like arguing that VIA's newest octa-core CPUs must have better IPC than Skylake since they're newer. And again, I'm not saying that AMD's position or level of advancement in the GPU space is comparable to VIA's in the CPU space, but simply pointing out that there is no relation whatsoever - logical, causal, architectural - between Vega and Pascal performance, outside of the fact that they're meant to compete and as such ought to be in the same ballpark. If you have any actual arguments that contradict this, I would love to hear them. No doubt about it. But I have yet to see any.

You have repeatedly stated that you're a "realist". If so, please show us, either through data or argumentation, how Pascal is not a 4K chip, and how Vega will beat Pascal. Please.
 

Jackie60

Member
Aug 11, 2006
118
46
101
AMD's tried X2 cards in the past, and they haven't worked.

What you think Vega has isn't coming until Navi.

I've owned the 5970, 6990, and 295X2, and I can assure you they did work. Sometimes I experienced microstutter, but generally they were good cards. If they can produce a 2017 iteration with dual Vega, without any microstutter and with plenty of memory, I'm in, assuming we get excellent scaling and it does better than 1080 SLI / 1080 Ti SLI.
 
Reactions: w3rd

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
I have to laugh about CrossFiring Vega. I CrossFired Polaris RX 480s to stay competitive with a single GTX 1080, but the CF RX 480s run out of gas against the single GTX 1080 at higher resolutions in some benchmarks.

I'll be thrilled if a single Vega can replace both RX 480s and stay competitive in the zone between the GTX 1080 and the GTX 1080 Ti across the various resolutions of a benchmark.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
I've owned the 5970, 6990, and 295X2, and I can assure you they did work. Sometimes I experienced microstutter, but generally they were good cards. If they can produce a 2017 iteration with dual Vega, without any microstutter and with plenty of memory, I'm in, assuming we get excellent scaling and it does better than 1080 SLI / 1080 Ti SLI.
The 5970 was a 294W card. The 6990 was a 375W card. The 295X2 was a 500W card. Of those power numbers, I'd say only the first is even remotely acceptable in today's market. The GPU market has developed a distinct preference for efficiency rather than performance at any power draw - largely due to the impossibility of cooling >300W GPUs effectively without an AIO or crazy-loud fans. If the choice is between noise, case compatibility issues, and power/thermal throttling, the vast majority of the market will say no thanks to all three. Not to mention that single GPUs are inherently more efficient, due to less redundant hardware and better scaling even when CF works at its best. As such, any GPU manufacturer is better off making the biggest, widest die they can fit inside a ~300W TDP rather than cobbling together a dual-GPU card. A single-die card is also inherently easier to engineer and produce. Infinity Fabric will probably change this once they can use it to make multiple dies work together as one GPU, but we have no indication that we're there yet. As such, I hope dual Vega is kept in the workstation/compute segment, where scaling is far easier/better, noise is no concern, and prices are expected to be very high.

Remember: if AMD can make a 250W Vega card that performs at a level X, a dual-GPU version would need to be a 500W card to potentially perform at 2X with perfect scaling (which doesn't happen). That card would, for various reasons, sell in the low thousands worldwide, at best - a net loss for AMD, no matter the sale price. And if they scale it back, you'd get, for example, 1.5X perfect-scaling performance at 375W, which is still problematically high. And when scaling doesn't work as it should (likely the majority of games, regardless of whether AMD is putting more resources into driver development these days), you'd be stuck with 0.75X performance instead. Uh-oh. Succinctly put: the potential pitfalls and drawbacks of multi-GPU cards are too many for them to be viable in the consumer space in 2017. If they release one, I hope it's presented as a crazy side project, not a flagship. Anything else would only harm the AMD brand.
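A minimal sketch of these scenarios in Python; the 250W baseline and the scaling factors are the hypotheticals from the paragraph above, not measured numbers.

```python
# Minimal sketch of the scaling scenarios above. The 250 W single-GPU
# baseline and the scaling factors are this post's hypotheticals, not
# measured figures.
BASE_POWER_W = 250.0
BASE_PERF = 1.0  # "X" in the text

scenarios = [
    # (description,               power_w, performance in X)
    ("single GPU",                  250.0, 1.00),
    ("dual GPU, perfect scaling",   500.0, 2.00),
    ("dual GPU, scaled back",       375.0, 1.50),
    ("dual GPU, scaling broken",    375.0, 0.75),  # CF not working in a game
]

for name, power, perf in scenarios:
    perf_per_watt = (perf / BASE_PERF) / (power / BASE_POWER_W)
    print(f"{name:<27} {power:4.0f} W  {perf:.2f}X  perf/W vs. single: {perf_per_watt:.2f}")
```

Even with perfect scaling, the dual card only matches the single card's perf/W; when scaling breaks, it halves it.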
 

coercitiv

Diamond Member
Jan 24, 2014
6,633
14,075
136
Remember: if AMD can make a 250W Vega card that performs at a level X, a dual-GPU version would need to be a 500W card to potentially perform at 2X with perfect scaling (which doesn't happen). That card would, for various reasons, sell in the low thousands worldwide, at best - a net loss for AMD, no matter the sale price. And if they scale it back, you'd get, for example, 1.5X perfect-scaling performance at 375W, which is still problematically high. And when scaling doesn't work as it should (likely the majority of games, regardless of whether AMD is putting more resources into driver development these days), you'd be stuck with 0.75X performance instead.
While I completely agree with your other arguments against a dual-chip card, the power scaling example is a bit stretched: scaling down from 2X to 1.5X means a 25% drop in clocks, which in turn can yield as much as a 50% reduction in power usage. As an example to support that, the Fury Nano gave up around 10-15% performance for around a 25% drop in power usage.

If AMD makes a 250W Vega card that is likely clocked beyond its efficiency range for the chip-and-process combo and performs at level X, they can easily make a 275W dual-GPU card with 1.5X scaling. With smart power management, they can even make it perform at 1X when a dual setup is not supported.
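A rough sketch of that clock/power relationship: the cubic exponent comes from the common approximation that dynamic power scales with f * V^2 and that voltage scales roughly with frequency near the top of the V/f curve. The exact exponent varies by chip and is an assumption here.

```python
# Rough sketch: dynamic power ~ f * V^2, and V scales roughly with f near
# the top of the voltage/frequency curve, so P ~ f^3 there. The exponent
# is an assumption; real silicon varies.
def relative_power(clock_fraction: float, exponent: float = 3.0) -> float:
    return clock_fraction ** exponent

for clock_drop in (0.10, 0.15, 0.25):
    remaining = relative_power(1.0 - clock_drop)
    print(f"{clock_drop:.0%} clock drop -> ~{1 - remaining:.0%} power reduction")

# 25% off the clocks leaves (0.75)^3 ~ 0.42 of the power - better than a
# 50% reduction, in line with the Fury Nano trade-off mentioned above.
```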

The cost problem is still there, though, and it hurts the most; it doesn't go away until a dual-GPU card becomes more (actually less) than just two cards stitched together like conjoined twins who sometimes refuse to play together.
 
Reactions: Mopetar

davide445

Member
May 29, 2016
132
11
81
Reading the news, it seems Vega is delayed due to an HBM shortage.
Another possibility, IMHO, is that AMD is delaying to avoid cryptocurrency mining.

Mining is really not good business for AMD: its end customers are left waiting for cards, the market is later flooded with worn-out second-hand units, and fewer developers own the card to optimize software for it.

So maybe it's not entirely due to the HBM shortage that AMD is planning to release the professional cards first. Those are less interesting to miners because of the high price, so they will go to their end users and developers.

Let miners exhaust the stock of Polaris cards and possibly squeeze out the last coins, and only afterwards introduce the retail cards.

I won't be surprised if there's a limit of no more than 2 cards per purchase, and possibly the cards will be made less optimized for mining, if that's feasible.
Another good move would be to fund the development of a cryptocurrency optimized for CUDA-core mining, in the same way Ethereum is by design unoptimized for FPGA mining.
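To illustrate that last point: memory-hard proof-of-work schemes like Ethereum's Ethash resist FPGA/ASIC implementations by making each hash depend on many unpredictable reads from a large dataset, so throughput is bound by memory bandwidth rather than raw logic. Below is a toy Python sketch of the idea; the dataset size, constants, and mixing function are arbitrary stand-ins, not the real Ethash algorithm.

```python
import hashlib

DATASET_WORDS = 1 << 20   # toy size; Ethereum's DAG is multiple gigabytes
MASK64 = (1 << 64) - 1

# Deterministic pseudo-random dataset, built once up front.
dataset = [(i * 2654435761) & 0xFFFFFFFF for i in range(DATASET_WORDS)]

def memory_hard_hash(nonce: int, rounds: int = 64) -> str:
    acc = nonce
    for _ in range(rounds):
        # Each lookup depends on the previous result, so the random reads
        # can't be precomputed or pipelined away - throughput is bound by
        # memory bandwidth, which GPUs have plenty of and small FPGAs don't.
        acc = ((acc * 0x100000001B3) ^ dataset[acc % DATASET_WORDS]) & MASK64
    return hashlib.sha256(acc.to_bytes(8, "little")).hexdigest()

print(memory_hard_hash(42)[:16])
```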
 

OatisCampbell

Senior member
Jun 26, 2013
302
83
101
Reading the news, it seems Vega is delayed due to an HBM shortage.
Another possibility, IMHO, is that AMD is delaying to avoid cryptocurrency mining.

Mining is really not good business for AMD: its end customers are left waiting for cards, the market is later flooded with worn-out second-hand units, and fewer developers own the card to optimize software for it.

So maybe it's not entirely due to the HBM shortage that AMD is planning to release the professional cards first. Those are less interesting to miners because of the high price, so they will go to their end users and developers.

Let miners exhaust the stock of Polaris cards and possibly squeeze out the last coins, and only afterwards introduce the retail cards.

I won't be surprised if there's a limit of no more than 2 cards per purchase, and possibly the cards will be made less optimized for mining, if that's feasible.
Another good move would be to fund the development of a cryptocurrency optimized for CUDA-core mining, in the same way Ethereum is by design unoptimized for FPGA mining.
This theory doesn't make sense, in my opinion.
AMD doesn't care who buys the cards; they just want to sell as many as they can, for as much as possible. If they could launch a top-to-bottom lineup of Vega-based parts today, they would.
A design respin or no RAM are the only plausible explanations.
 
Last edited:

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
This theory doesn't make sense, in my opinion.
AMD doesn't care who buys the cards; they just want to sell as many as they can, for as much as possible. If they could launch a top-to-bottom lineup of Vega-based parts today, they would.
A design respin or no RAM are the only plausible explanations.
I agree that that theory makes no sense, but a respin? Now? That's crazy; it's at least six months too late for that. Volume production of any mass-produced, high-volume component starts around a quarter before (hard) launch, two months at the latest.
 

OatisCampbell

Senior member
Jun 26, 2013
302
83
101
I agree that that theory makes no sense, but a respin? Now? That's crazy; it's at least six months too late for that. Volume production of any mass-produced, high-volume component starts around a quarter before (hard) launch, two months at the latest.
What if rev 02, or whatever was supposed to be the late-June launch, was found in March not to have enough usable chips per wafer? A few tweaks refine the chip and process, and rev 03 becomes the volume batch in September/October?

Wasn't the original launch for this supposed to be Q4 last year? It's not like they quit working on the chips once they launch or make engineering samples, so it would make sense to me to launch with the higher-margin prosumer or professional (but lower-volume) parts first.

If they launch a high-volume part on low-yield chips, they risk enraging the fans and getting the "phantom edition" bad press.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
All this talk of delays is part of the regular rumor mill making hay. Only those new to the scene will believe everything they read. Sure it's a possibility, and that's why they are selling it. Another (much more likely) possibility is that everything is roughly on schedule.

As far as HBM availability goes, it seems like AMD is pulling a 1080 with "limited" GDDR5X: announce early and milk the impatient buyers with an FE edition while demand is huge.
 

MangoX

Senior member
Feb 13, 2001
595
111
106
This theory doesn't make sense, in my opinion.
AMD doesn't care who buys the cards; they just want to sell as many as they can, for as much as possible. If they could launch a top-to-bottom lineup of Vega-based parts today, they would.
A design respin or no RAM are the only plausible explanations.

I agree, it just doesn't make any sense. AMD and their partners would be thrilled to sell every single card they can produce.
 