Vega/Navi Rumors (Updated)

Status
Not open for further replies.

Elixer

Lifer
May 7, 2002
10,376
762
126
Last edited:

cytg111

Lifer
Mar 17, 2008
23,535
13,109
136
You're just making a fool of yourself. You stated that higher clocks mean less latency. That's one of the most unintelligent technical comments I've seen on this forum in a while.

Higher clocks do NOT = less latency. Your backtracking on the issue is laughable.
? Why doesn't it? As CAS latency is specified in cycles, it stands to reason that the more cycles you have per second, the lower your total latency will be. Or am I reading the dispute incorrectly?
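The arithmetic behind this question is easy to sanity-check. A minimal Python sketch of the cycles-to-nanoseconds conversion (the DDR3-1600 figures are illustrative examples, not numbers from the thread):

```python
def cas_latency_ns(cas_cycles, clock_mhz):
    """Convert a CAS latency given in clock cycles to absolute nanoseconds.

    One cycle lasts 1000 / clock_mhz ns, so at a fixed cycle count a
    higher clock means a shorter absolute wait.
    """
    return cas_cycles * 1000.0 / clock_mhz

# Same CAS cycle count at two clocks: the faster clock waits less.
print(cas_latency_ns(9, 800.0))   # 11.25 ns (e.g. DDR3-1600, CL9)
print(cas_latency_ns(9, 1066.0))  # ~8.44 ns at a faster clock, same CL
```

This is the "everything else being equal" case; the dispute below is about whether everything else ever stays equal in practice.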
 

nurturedhate

Golden Member
Aug 27, 2011
1,761
757
136
? Why doesn't it? As CAS latency is specified in cycles, it stands to reason that the more cycles you have per second, the lower your total latency will be. Or am I reading the dispute incorrectly?
It's an argument of generalization versus absolutes. Higher clocks does not guarantee lower latency.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
You're just making a fool of yourself. You stated that higher clocks mean less latency. That's one of the most unintelligent technical comments I've seen on this forum in a while.

Higher clocks do NOT = less latency. Your backtracking on the issue is laughable.
Backtracking? Cute. I did no such thing. Faster clock speeds mean lower latency, assuming the cycle count is equal. That's just fact.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Backtracking? Cute. I did no such thing. Faster clock speeds mean lower latency, assuming the cycle count is equal. That's just fact.

No it's not. I'm done debating it as that's tech 101. No need to waste text on debating your garbage.
 

cytg111

Lifer
Mar 17, 2008
23,535
13,109
136
It's an argument of generalization versus absolutes. Higher clocks does not guarantee lower latency.
I don't understand what there is to dispute; it's pretty simple math in the context of some very real physical properties: higher clocks give lower latency within the same specs. On a wire, electrons travel at a very fixed speed, so there are physical boundaries to what latency you can achieve... Done. No more dispute.
Guess the point is that from generation to generation the resulting latency will be the same in the end (minus optimizations, shorter routes, faster controllers); what you gain is bandwidth.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
High clocks don't equal low latency, because high clocks typically improve throughput only; they don't improve latency, and usually actually make it worse.

So GDDR5X has higher clocks than GDDR5, but much higher latency as well. GDDR5X is still faster, however, because it gets a little more work done per cycle, and across that many cycles it adds up to a pretty large sum. In other words, it is faster because it is wider and hides its latency well through parallelization, even though its latencies are higher.

In fact, with RAM you can't increase clocks without increasing latencies; this is just how RAM cells work: faster clocks mean a longer time for the RAM to respond.
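The throughput-versus-latency trade-off described above can be illustrated with made-up numbers. These are NOT real GDDR5/GDDR5X timings, just a sketch of the shape of the argument: the newer part starts each access slower yet moves far more data overall.

```python
def latency_ns(cycles, clock_mhz):
    # Absolute access latency for a timing specified in clock cycles.
    return cycles * 1000.0 / clock_mhz

def peak_bandwidth_gbs(clock_mhz, bus_bits, transfers_per_clock):
    # Peak bandwidth in GB/s for a simple wide DDR-style bus.
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits // 8) / 1e9

# Older part: lower clock, tighter timings (illustrative numbers).
old_lat = latency_ns(cycles=15, clock_mhz=1500.0)                     # 10.0 ns
old_bw = peak_bandwidth_gbs(1500.0, bus_bits=256, transfers_per_clock=4)

# Newer part: higher clock, looser timings, more transfers per clock.
new_lat = latency_ns(cycles=22, clock_mhz=2000.0)                     # 11.0 ns (worse!)
new_bw = peak_bandwidth_gbs(2000.0, bus_bits=256, transfers_per_clock=8)

# Slower to start each access, but much faster overall.
assert new_lat > old_lat and new_bw > old_bw
```

With enough requests in flight, the extra bandwidth hides the higher per-access latency, which is the parallelization point made above.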
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
True; however, I'm not sure HBM is the answer there either.
They built the new cache controller to speed up operations, so it is possible they will switch to something like eDRAM.
OEMs are not likely to pay a premium just for having an HBM package.
It all boils down to cost.

If that's the case then explain why they are spending so much developing HBM? Not just the actual physical cost of development but the cost in not pursuing current lower cost and more easily available tech. They're just going all in on HBM. And since they are the company most able to leverage high performance APU's due to their graphics and X86 IP who is going to usurp them by going in a different direction? Sure, nVidia is leveraging current tech better for gaming but they can't compete with AMD and Intel on the CPU front. And Intel lacks the graphics IP and expertise to compete with AMD/RTG.

AMD usurped MS on graphics API's. They're driving the "moar cores" right through Intel, which is another layer to their goal. I think they'll have their way with HBM too. Just because it's another step in their overall strategy and they haven't failed up until now. And now, they're going to have more money thanks to Ryzen and the soon to come Threadripper and Epyc CPU's. AMD has been heavily limited financially and still have managed to be the biggest influence in the PC market. Everything is pointing in the direction they are aiming it.

I'll add that this has all come about due to the console wins. Another "cost" they ate to get to where they want to be.
 
Reactions: tonyfreak215

Elixer

Lifer
May 7, 2002
10,376
762
126
If that's the case then explain why they are spending so much developing HBM? Not just the actual physical cost of development but the cost in not pursuing current lower cost and more easily available tech. They're just going all in on HBM. And since they are the company most able to leverage high performance APU's due to their graphics and X86 IP who is going to usurp them by going in a different direction? Sure, nVidia is leveraging current tech better for gaming but they can't compete with AMD and Intel on the CPU front. And Intel lacks the graphics IP and expertise to compete with AMD/RTG.
They do think HBM is the way to go, that is pretty obvious.
That doesn't mean it made fiscal sense before, with the Fury line, or now with the Vega line. They didn't go with a tiered approach like nvidia, "pro" end gets HBM and they can make lots of profit, and a consumer range where they can use the much cheaper GDDR5/X.
They seem to be playing very long term, and will eat the costs.
I'll add that this has all come about due to the console wins. Another "cost" they ate to get to where they want to be.
Speaking of console wins, I am sure you noticed that none of the players want HBM, and that is clearly for cost reasons.

That delta is just too high for what you get.

There are not enough fabs mass producing HBM yet to make sense for any major player.
Once the cost of HBM goes lower than competing tech, then you will see it making more sense to be using HBM instead of other memory technologies, but right now, there isn't any huge advantage, and only a huge tech challenge both to make & implement.
 
Reactions: Phynaz

Muhammed

Senior member
Jul 8, 2009
453
199
116
The irony!
The AMD Blind Reality Check Challenge


The next challenge given to gamers was two high-end systems, both running AMD Radeon HD 7970 ‘Tahiti’ DirectX 11 graphics cards in an Eyefinity display setup. The Intel system was powered by an Intel Core i7-2700K ‘Sandy Bridge’ processor with an ASRock P67 Fatal1ty motherboard and 8GB of AMD DDR3 performance memory. The AMD system was powered by the FX-8150 ‘Bulldozer’ processor on an ASRock 990FX Fatal1ty motherboard and the same 8GB of AMD DDR3 performance memory. This demo focused on processor performance, not graphics performance. The Intel Core i7-2700K retails for $369.99 and the AMD FX-8150 retails for $269.99, so the question here was whether gamers could tell a difference between the systems.

The AMD Reality Check Results:

  • System A (Intel Core i7-2700K): 40 Votes
  • System B (AMD FX-8150): 73 Votes
  • No Difference: 28 Votes
It appeared that AMD was looking for a "no difference" win here with the setup, but the gamers voted for the system with the AMD FX-8150 ‘Bulldozer’ processor in it.



http://www.legitreviews.com/amd-reality-check-at-fx-gamexperience_1838#Hc71mld4EcZJvekS.99
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
They do think HBM is the way to go, that is pretty obvious.
That doesn't mean it made fiscal sense before, with the Fury line, or now with the Vega line. They didn't go with a tiered approach like nvidia, "pro" end gets HBM and they can make lots of profit, and a consumer range where they can use the much cheaper GDDR5/X.
They seem to be playing very long term, and will eat the costs.

AMD can probably only afford to pursue one direction.

Speaking of console wins, I am sure you noticed that none of the players want HBM, and that is clearly for cost reasons.

That delta is just too high for what you get.

There are not enough fabs mass producing HBM yet to make sense for any major player.
Once the cost of HBM goes lower than competing tech, then you will see it making more sense to be using HBM instead of other memory technologies, but right now, there isn't any huge advantage, and only a huge tech challenge both to make & implement.

HBM is too high end right now for consoles, that's true. But that's irrelevant. Custom chips are a separate division.

What does that have to do with the post you replied to though? Who is going to usurp HBM on APU's from AMD? Or do you predict they'll simply fail to achieve their goal? Because, considering what they've done so far moving the industry where they want it with no money I only see it getting easier from here on out.

AMD is a CPU company 1st and foremost. They only acquired ATI as a means to an end. Not simply to make graphics cards. They were lucky that they did with the way Bulldozer worked out. GPU's probably kept them afloat, but that was a bump in the road (that turned out to be a major hurdle) they had to overcome. If all they wanted to do was make GPU's they'd be doing the same thing nVidia is. The fact that they don't isn't a failure but just a different strategy to fulfill their endgame.

IMO, of course.
 
Reactions: tonyfreak215

Elixer

Lifer
May 7, 2002
10,376
762
126
What does that have to do with the post you replied to though? Who is going to usurp HBM on APU's from AMD? Or do you predict they'll simply fail to achieve their goal? Because, considering what they've done so far moving the industry where they want it with no money I only see it getting easier from here on out.
On AMD? Nothing that I can see; as you mentioned, it doesn't seem like they can afford to look at other tech.
From the other players? That is a good question, there is newer tech coming in the pipeline.
If Intel starts using their MCDRAM (proprietary of course) on processors other than the Xeon Phi, that could very well bring a game changer for them in the desktop space.
Other companies are looking at HMC, STT-MRAM, and I forgot the other one, but, as I mentioned, everything comes down to cost.

AMD is paying Hynix for the HBM2 memory; then they must ship that to UMC for the front-end TSV work, then to ASE, which does the back-end assembly services, then back to GloFo for testing and other stuff.
There are quite a few failure points there, not to mention extra costs, which is why Vega can't be cheap, and I bet it is one of their most expensive GPUs yet.
This is getting to be a bit off topic with the newer tech, so I'll stop here, and swing it back to the PDXLAN Vega RX stunt.

As was previously mentioned, they are indeed doing the bulldozer type of "blind" tests, and this irks people for no good reason.
If you compare Vega RX to another unreleased product, Threadripper, the difference & direction is pretty much opposite.

On one hand, we know the full specs of Threadripper: we know core counts, we even know price. They didn't do "blind" tests with Threadripper, since they knew they had a winning product and had no need to play such games.
On the other hand, Vega RX is cloaked in crazy marketing schemes, there is no solid information if you want to believe it doesn't have the same specs as the FE, and telling people @PDXLAN that they can't release any information on an unreleased product is untrue--look at Threadripper again.

So, if you are showcasing one unreleased product, yet using smoke & mirrors on the other, what does AMD expect people to think?
 
Last edited:

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
High clocks don't equal low latency, because high clocks typically improve throughput only; they don't improve latency, and usually actually make it worse.

That's completely wrong. Everything else being equal, higher clocks will lower latency. It's the things we do to attain high clocks in the first place, like loosening timings, that can drive up latency.
 
Reactions: french toast

leoneazzurro

Golden Member
Jul 26, 2016
1,010
1,605
136
On AMD? Nothing that I can see; as you mentioned, it doesn't seem like they can afford to look at other tech.
From the other players? That is a good question, there is newer tech coming in the pipeline.
If Intel starts using their MCDRAM (proprietary of course) on processors other than the Xeon Phi, that could very well bring a game changer for them in the desktop space.
Other companies are looking at HMC, STT-MRAM, and I forgot the other one, but, as I mentioned, everything comes down to cost.

AMD is paying Hynix for the HBM2 memory; then they must ship that to UMC for the front-end TSV work, then to ASE, which does the back-end assembly services, then back to GloFo for testing and other stuff.
There are quite a few failure points there, not to mention extra costs, which is why Vega can't be cheap, and I bet it is one of their most expensive GPUs yet.
This is getting to be a bit off topic with the newer tech, so I'll stop here, and swing it back to the PDXLAN Vega RX stunt.

As was previously mentioned, they are indeed doing the bulldozer type of "blind" tests, and this irks people for no good reason.
If you compare Vega RX to another unreleased product, Threadripper, the difference & direction is pretty much opposite.

On one hand, we know the full specs of Threadripper: we know core counts, we even know price. They didn't do "blind" tests with Threadripper, since they knew they had a winning product and had no need to play such games.
On the other hand, Vega RX is cloaked in crazy marketing schemes, there is no solid information if you want to believe it doesn't have the same specs as the FE, and telling people @PDXLAN that they can't release any information on an unreleased product is untrue--look at Threadripper again.

So, if you are showcasing one unreleased product, yet using smoke & mirrors on the other, what does AMD expect people to think?

Difference is that Threadripper has been formally announced from AMD, stating prices and availability.
RX Vega has not.
 

eek2121

Diamond Member
Aug 2, 2005
3,051
4,273
136
They do think HBM is the way to go, that is pretty obvious.
That doesn't mean it made fiscal sense before, with the Fury line, or now with the Vega line. They didn't go with a tiered approach like nvidia, "pro" end gets HBM and they can make lots of profit, and a consumer range where they can use the much cheaper GDDR5/X.
They seem to be playing very long term, and will eat the costs.

Speaking of console wins, I am sure you noticed that none of the players want HBM, and that is clearly for cost reasons.

That delta is just too high for what you get.

There are not enough fabs mass producing HBM yet to make sense for any major player.
Once the cost of HBM goes lower than competing tech, then you will see it making more sense to be using HBM instead of other memory technologies, but right now, there isn't any huge advantage, and only a huge tech challenge both to make & implement.

Your lack of understanding of the way that a distribution chain works astounds me. If you think that AMD paid $48 for 4 GB of HBM you are mistaken. AMD is a much bigger company than most and as a result, they paid far less than the list price per X units. Just like they are doing with Vega. If Nvidia jumped on the HBM2 bandwagon tomorrow they'd pay even less than AMD. That's the way things work. I encourage you to spend some time in the wholesale/distributor/retail industry before spreading your misinformation here. Only then will you begin to understand the fallacy of your posts.
 
Reactions: tonyfreak215

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
That's completely wrong. Everything else being equal, higher clocks will lower latency. It's the things we do to attain high clocks in the first place, like loosening timings, that can drive up latency.

Except with memory, not everything else is equal. What our fellow forum member was trying to say is:

1) To raise memory clocks, AMD had to give up access latency.
2) Rising access latency is usually somewhat compensated by increased bandwidth (think GDDR5 -> GDDR5X, DDR3 -> DDR4), but in this case AMD also cut the bus in half, so their chip has to eat the full price of the increased latencies without a corresponding increase in bandwidth.

And no, raising memory clocks from 500 to 900+ does not come for free in power and latency (and price?).
 
Reactions: Muhammed

Muhammed

Senior member
Jul 8, 2009
453
199
116
That's completely wrong. Everything else being equal, higher clocks will lower latency. It's the things we do to attain high clocks in the first place, like loosening timings, that can drive up latency.
Again, everything else will not be equal; increasing clocks will come at the expense of increased latency. That's how RAM is designed.

Difference is that Threadripper has been formally announced from AMD, stating prices and availability.
RX Vega has not.
RX Vega was teased and announced years before TR, and yet we don't know anything about it 7 days before launch. TR was announced like 3 months ago, and yet we have videos of Lisa Su herself testing TR against the i9-7900X in Cinebench, with hard numbers and an open testing policy. NONE of that happened with RX Vega, or even Vega FE, which had another blind test vs the Titan Xp to obscure its bad gaming performance. All of this doesn't inspire any confidence in RX Vega.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
On AMD? Nothing that I can see; as you mentioned, it doesn't seem like they can afford to look at other tech.
From the other players? That is a good question, there is newer tech coming in the pipeline.
If Intel starts using their MCDRAM (proprietary of course) on processors other than the Xeon Phi, that could very well bring a game changer for them in the desktop space.
Other companies are looking at HMC, STT-MRAM, and I forgot the other one, but, as I mentioned, everything comes down to cost.

AMD is paying Hynix for the HBM2 memory; then they must ship that to UMC for the front-end TSV work, then to ASE, which does the back-end assembly services, then back to GloFo for testing and other stuff.
There are quite a few failure points there, not to mention extra costs, which is why Vega can't be cheap, and I bet it is one of their most expensive GPUs yet.
This is getting to be a bit off topic with the newer tech, so I'll stop here, and swing it back to the PDXLAN Vega RX stunt.

As was previously mentioned, they are indeed doing the bulldozer type of "blind" tests, and this irks people for no good reason.
If you compare Vega RX to another unreleased product, Threadripper, the difference & direction is pretty much opposite.

On one hand, we know the full specs of Threadripper: we know core counts, we even know price. They didn't do "blind" tests with Threadripper, since they knew they had a winning product and had no need to play such games.
On the other hand, Vega RX is cloaked in crazy marketing schemes, there is no solid information if you want to believe it doesn't have the same specs as the FE, and telling people @PDXLAN that they can't release any information on an unreleased product is untrue--look at Threadripper again.

So, if you are showcasing one unreleased product, yet using smoke & mirrors on the other, what does AMD expect people to think?

I'll leave it in the spoiler to get back on topic. That still remains to be seen, but it at least makes sense of AMD's HBM strategy. It should be obvious that GDDR5X/6 isn't going to do what they want. We both believe they can't afford both. /case closed.

At the very least, it looks like Vega still isn't ready for prime time. At the very worst, it's not going to perform well and they are trying to sugar-coat it as much as possible. We don't know for certain, but it is worrisome for sure.

As far as the marketing goes it really appears that they are two different companies doing the marketing. It might just be because they are operating from an inferior position with Vega. Or they might just be really really leery of nVidia ruining their coming out party and don't want to give them any ammo. I do see other possibilities though other than +300W@<1080 performance like some have preordained.

I wouldn't judge too harshly with the FE, though. The FE does appear to be a very competent card for what they are marketing it as. We'll have to see how it performs in pure compute tasks once the ROCm drivers are out (which I don't believe are ready yet). In the professional apps, though, it offers good perf/$, and possibly flexibility for gaming once drivers are finished and all of the features are accessible.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
Except with memory, not everything else is equal. What our fellow forum member was trying to say is:

1) To raise memory clocks, AMD had to give up access latency.
2) Rising access latency is usually somewhat compensated by increased bandwidth (think GDDR5 -> GDDR5X, DDR3 -> DDR4), but in this case AMD also cut the bus in half, so their chip has to eat the full price of the increased latencies without a corresponding increase in bandwidth.

And no, raising memory clocks from 500 to 900+ does not come for free in power and latency (and price?).
HBM1 was more or less a first-generation beta test of the idea. HBM2's latency in nanoseconds should be equal or lower, unless they shot access latencies through the roof to reach that clock speed. I can't find specifications, but I could have sworn I've seen some somewhere.

You can have higher access latencies in cycles but, due to the increase in clocks, end up with lower latencies in nanoseconds.
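The cycles-versus-nanoseconds distinction here is just a unit conversion, sketched below with hypothetical timings (real HBM1/HBM2 CAS figures aren't given anywhere in the thread):

```python
def latency_ns(cas_cycles, clock_mhz):
    # Latency in cycles times the cycle time (1000 / clock_mhz ns)
    # gives the absolute latency.
    return cas_cycles * 1000.0 / clock_mhz

# Hypothetical generation jump: the newer part needs MORE cycles...
old_gen_ns = latency_ns(cas_cycles=7, clock_mhz=500.0)    # 14.0 ns
new_gen_ns = latency_ns(cas_cycles=12, clock_mhz=1000.0)  # 12.0 ns

# ...but each cycle is half as long, so the absolute wait still drops.
assert new_gen_ns < old_gen_ns
```

Whether a real generational jump lands on the lower-ns side or the higher-ns side depends entirely on how much the cycle count loosens relative to the clock gain, which is exactly what the two sides of this argument are disputing.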
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
But the bottom line is faster performance or you wouldn't do it.

Faster performance IF the bus width is the same. In this case AMD can't even reach the spec sheet's 1GHz at nasty voltages, so what makes you think the same engineers somehow achieved a miracle of latencies?
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
People are arguing semantics. Sure you can run tighter timings at lower clocks. So "technically" you add latency, as in loosen timings, as you increase clocks. But the bottom line is faster performance or you wouldn't do it.

Do you guys not understand the difference between access latency and bandwidth? I can't believe we're having this discussion in 2017.

For God's sake, people, learn the basics before arguing about GPUs. The people still remaining here want to discuss technical merit, not garbage from so-called consumers.

 