Vega/Navi Rumors (Updated)

Status
Not open for further replies.

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
If so, why use HBM over the latest GDDR5X? You get all that expense, interposer complexity, memory size restrictions and (if I remember correctly) more latency, for a little higher bandwidth. Seems the worst of both worlds - either go GDDR5X and have something that's cheap and flexible, or do HBM properly with 4 stacks and the appropriately huge performance.

1. HBM2 has the same bandwidth as HBM1 but at half the controller width. That means you save die space on the memory controller = smaller die = cheaper to produce.
2. HBM2 has 2x the capacity per stack vs HBM1. That means it is cheaper to implement the same capacity because you need fewer memory chips: 2x 2GB for HBM2 vs 4x 1GB for HBM1.
3. Needing fewer memory chips decreases the complexity and size of the interposer = higher yields, smaller size = cheaper.
4. Compared with GDDR5X, HBM2's overall cost is not that much higher. With GDDR5X you need a 384-bit memory controller for less bandwidth than HBM2; that increases your die size and your graphics card PCB complexity and size = higher BOM = higher cost.

Generally HBM2 is way better than GDDR5X in almost every metric.
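
The bandwidth side of point 4 can be sanity-checked with back-of-envelope arithmetic. The per-pin data rates below are my assumptions based on launch-era parts (10 Gb/s per pin for GDDR5X, ~2 Gb/s per pin for HBM2), not figures stated in this thread:

```python
# Theoretical peak bandwidth: bus width (bits) * per-pin rate (Gb/s) / 8 bits per byte.

def bandwidth_gbps(bus_width_bits, pin_rate_gbps):
    """Theoretical bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

gddr5x = bandwidth_gbps(384, 10)        # 384-bit GDDR5X bus at 10 Gb/s per pin
hbm2   = bandwidth_gbps(2 * 1024, 2)    # two 1024-bit HBM2 stacks at ~2 Gb/s per pin

print(f"GDDR5X 384-bit: {gddr5x:.0f} GB/s")   # 480 GB/s
print(f"HBM2, 2 stacks: {hbm2:.0f} GB/s")     # 512 GB/s
```

So two HBM2 stacks already edge out a 384-bit GDDR5X bus, with a narrower on-die controller footprint.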
 

cytg111

Lifer
Mar 17, 2008
23,991
13,517
136
Until AMD seriously closes the performance per watt gap with NVidia, they will never have the performance lead...

Quoting midrange cards (1060/480) and then concluding that "Until AMD seriously closes the performance per watt gap with NVidia, they will never have the performance lead" is a bit of a stretch, no? Define performance lead?
No one cares if a 480 eats 100, 150 or 200 watts.
Or are you extrapolating those perf/watt numbers to conclude that within no acceptable TDP does AMD have a chance at the high end? If so, based on the 480, I agree with you. But I do think that GloFo 14nm is a different animal today than it was 6 months ago, and Vega is another animal as well, not two Polaris 10 dies glued together.
 
Reactions: Bacon1

linkgoron

Platinum Member
Mar 9, 2005
2,408
977
136
I cannot tolerate such distortion and revisionist history here, which completely disregards the facts. First off, Tahiti didn't smash anything. For most of its life span, Kepler easily competed with, and mostly outperformed, Tahiti. It also trounced Hawaii in the form of the GTX 780 Ti.
According to TPU, even at launch the 780 Ti did not "trounce" the 290X, but beat it by 8%.

Vega needs to at least match the GTX 1080 for it to even be viable. The only benchmark we've seen so far is from Doom, but Doom has the advantage of using shader intrinsic functions, which significantly increase performance for Radeons since they can use the same shaders as the consoles. So I wouldn't expect Doom to be an accurate predictor of performance for games in general.
Do you think that there is a real possibility that a ~500mm^2 Radeon won't beat a 1080? The question is by how much, and will it even be relevant, as it'll probably be facing a 2080/1180 or a 1080 Ti.
 
Reactions: Det0x and Bacon1

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Quoting midrange cards (1060/480) and then concluding that "Until AMD seriously closes the performance per watt gap with NVidia, they will never have the performance lead" is a bit of a stretch, no? Define performance lead?
No one cares if a 480 eats 100, 150 or 200 watts.

I'm talking about how much performance they can extract out of each watt. Ever wonder why AMD GPU designs always seem to have a lot more ALUs than comparable NVidia designs? That's a consequence of AMD GPUs having lower performance per watt than NVidia.

Basically, AMD GPUs need more hardware, and consequently more wattage, to get the same amount of work done as NVidia GPUs. As long as this imbalance exists, AMD will never beat NVidia in overall performance.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I'm talking about how much performance they can extract out of each watt. Ever wonder why AMD GPU designs always seem to have a lot more ALUs than comparable NVidia designs? That's a consequence of AMD GPUs having lower performance per watt than NVidia.

Basically, AMD GPUs need more hardware, and consequently more wattage, to get the same amount of work done as NVidia GPUs. As long as this imbalance exists, AMD will never beat NVidia in overall performance.

Ehmm, no.

The GTX 480 had 480 cores.
The HD5870 had 1600 cores.

I don't believe I need to tell you that the HD5870 had higher perf/watt than the GTX 480.
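
For a rough sense of those numbers, here is a sketch of peak FP32 throughput per watt using the public launch specs (480 cores at a 1401 MHz shader clock and 250 W TDP for the GTX 480; 1600 SPs at 850 MHz and 188 W TDP for the HD5870). Theoretical peaks only, not measured game performance:

```python
# Peak single-precision throughput per watt: 2 FLOPs (FMA) per core per clock,
# so cores * clock-in-GHz gives billions of operations per second (GFLOPS).

def gflops_per_watt(cores, clock_ghz, tdp_w):
    return 2 * cores * clock_ghz / tdp_w

gtx480 = gflops_per_watt(480, 1.401, 250)    # Fermi shader-clock domain
hd5870 = gflops_per_watt(1600, 0.850, 188)

print(f"GTX 480: {gtx480:.1f} GFLOPS/W")     # ~5.4
print(f"HD5870:  {hd5870:.1f} GFLOPS/W")     # ~14.5
```

On paper efficiency, the wide-but-slow HD5870 came out well ahead, which is the point being made: more ALUs does not automatically mean worse perf/watt.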
 
Reactions: Bacon1

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Do you think that there is a real possibility that a ~500mm^2 Radeon won't beat a 1080? The question is by how much, and will it even be relevant, as it'll probably be facing a 2080/1180 or a 1080 Ti.

I think that, more than likely, it will beat the GTX 1080. If it doesn't, it will be seen as a massive failure. Thing is, AMD always seems to need several months to get their drivers up to snuff and make their new architectures shine brightest. By the time that lag is resolved, many people will have already made up their minds.

That's what happened to Tahiti, Hawaii, Fiji and even Polaris to an extent. At launch, all of these architectures were underperforming.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Is this another way of saying perf/watt?

No, overall performance is just an expression, or result, of perf/watt. Perf/watt is what the engineers go after when designing these GPUs, because it's what determines the actual performance characteristics. Overall performance is what we consumers see in the benchmarks.
 

Mem

Lifer
Apr 23, 2000
21,476
13
81
I'm talking about how much performance they can extract out of each watt. Ever wonder why AMD GPU designs always seem to have a lot more ALUs than comparable NVidia designs? That's a consequence of AMD GPUs having lower performance per watt than NVidia.

Basically, AMD GPUs need more hardware, and consequently more wattage, to get the same amount of work done as NVidia GPUs. As long as this imbalance exists, AMD will never beat NVidia in overall performance.

You are over-analyzing. Gamers don't really care about wattage; most have decent PSUs (I know I do, i.e. a Seasonic 850W). They look at price-to-performance, and the 480 is up there with the best in that range.

Also, don't forget cooling and wattage have dropped for AMD with the 480 for obvious reasons, so is it really even a factor nowadays unless you have a very low-wattage PSU? (Some would argue that if you can afford a decent 480 card, then you can afford a decent PSU for all your hardware.)

I can tell you wattage is not even on my list when I choose between an Nvidia and an AMD card (yes, I went XFX RX 480 GTR Black Edition, awesome card), and that goes for CPUs as well, regardless of AMD or Intel.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Ehmm, no.

The GTX 480 had 480 cores.
The HD5870 had 1600 cores.

I don't believe I need to tell you that the HD5870 had higher perf/watt than the GTX 480.

Well, you had to go way back to prove your point.

And I don't believe I need to tell you that the HD5870 was AMD's last GPU that truly gave NVidia a run for its money. Meaning, when AMD prioritizes perf/watt, they are more likely to succeed.
 
Reactions: happy medium

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
1. HBM2 has the same bandwidth as HBM1 but at half the controller width. That means you save die space on the memory controller = smaller die = cheaper to produce.
2. HBM2 has 2x the capacity per stack vs HBM1. That means it is cheaper to implement the same capacity because you need fewer memory chips: 2x 2GB for HBM2 vs 4x 1GB for HBM1.
3. Needing fewer memory chips decreases the complexity and size of the interposer = higher yields, smaller size = cheaper.
4. Compared with GDDR5X, HBM2's overall cost is not that much higher. With GDDR5X you need a 384-bit memory controller for less bandwidth than HBM2; that increases your die size and your graphics card PCB complexity and size = higher BOM = higher cost.

1+2+3: memory needs go up. With Vega 20 you are at 4 stacks again.

4. Yet it's already established that it's cheaper. GP100 vs GP102.

Generally HBM2 is way better than GDDR5X in almost every metric.

It certainly isn't. It's the other way around. Same reason GDDR6 is coming.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
No, overall performance is just an expression, or result, of perf/watt. Perf/watt is what the engineers go after when designing these GPUs, because it's what determines the actual performance characteristics. Overall performance is what we consumers see in the benchmarks.

Makes no sense whatsoever. Of course you can design a higher-performance architecture even with a perf/watt penalty. It just means that your final design will consume more power in order to beat the competition in performance.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You are over-analyzing. Gamers don't really care about wattage; most have decent PSUs (I know I do, i.e. a Seasonic 850W). They look at price-to-performance, and the 480 is up there with the best in that range.

Also, don't forget cooling and wattage have dropped for AMD with the 480 for obvious reasons, so is it really even a factor nowadays unless you have a very low-wattage PSU? (Some would argue that if you can afford a decent 480 card, then you can afford a decent PSU for all your hardware.)

I can tell you wattage is not even on my list when I choose between an Nvidia and an AMD card (yes, I went XFX RX 480 GTR Black Edition, awesome card), and that goes for CPUs as well, regardless of AMD or Intel.

I know gamers don't care about wattage. Neither do I, for that matter, as you can see by looking at my sig. However, the fact remains that performance per watt is what determines the final performance of a GPU. All you need to do is look at Kepler, Maxwell and especially Pascal. NVidia is able to get much more performance out of each watt than AMD because their hardware is more efficient. For AMD to match, much less beat, NVidia, they have to make GPUs that are much bigger, hotter and more power hungry than the competition.

Now generally speaking, gamers don't care about wattage, especially if said GPUs are A LOT faster than the competition. But what happens when said GPUs consume significantly more power, but only match or are just barely faster than the competition?

Then it's like the HD 5870 vs the GTX 480 all over again.
 
Last edited:

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131

So exactly like VideoCardz predicted at the time this thread started.

- Vega 11 replacing Polaris 10/11 (professional market) - makes me wonder if this is some sort of desktop version of Scorpio's iGPU
- High-end Vega 10 comes first (H1-2017): 12 TFLOPs vs Fiji's 8.6 TFLOPs (+40%)
- Vega 10 has the exact same number of stream processors as their 2015 flagship, so they need higher clocks and architecture improvements to move up performance
- Vega 20 looks like a 7nm shrink of Vega 10, same number of NCUs again (64 NCUs): targets H2-2018, so Vega 10 and then Vega 20 will have to face Volta-based products if they arrive in early 2018
- Navi in 2019: Navi 10 positioned as the faster GPU, Navi 11 replacing Vega 11
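
The +40% figure in those bullets follows directly from shader count and clock. Assuming 4096 stream processors for Vega 10 (the same count as Fiji, per the third bullet; Fiji's count is public, Vega 10's is the rumored figure), a quick check:

```python
# Peak FP32 throughput: 2 FLOPs (FMA) per shader per clock.

def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

fiji = tflops(4096, 1.05)                 # Fury X at its ~1050 MHz clock
print(f"Fiji: {fiji:.1f} TFLOPs")         # ~8.6, matching the bullet above

# Clock Vega 10 would need to hit 12 TFLOPs at the same shader count:
needed_ghz = 12 * 1000 / (2 * 4096)
print(f"Vega 10 needs ~{needed_ghz:.2f} GHz")   # ~1.46 GHz
```

In other words, if the shader count really is unchanged, the entire +40% has to come from a roughly 1.46 GHz clock plus whatever per-clock architectural gains Vega brings.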


Bacon1 said:
According to AMD, Vega 10 is the small chip and Vega 11 is the large chip.

Wrong.
 
Last edited:

cytg111

Lifer
Mar 17, 2008
23,991
13,517
136
No, overall performance is just an expression, or result, of perf/watt. Perf/watt is what the engineers go after when designing these GPUs, because it's what determines the actual performance characteristics. Overall performance is what we consumers see in the benchmarks.

If a 480 ties a 1060 in game X at Y frames per second, then who cares how many smurfs are running around pulling levers and pressing buttons in there? The metric "overall performance" is rather vague.
 

jpiniero

Lifer
Oct 1, 2010
15,161
5,695
136
The rumors say the 1080 Ti is going to be $899. I imagine the best consumer Vega will be at least that, especially if it's faster.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I know gamers don't care about wattage. Neither do I, for that matter, as you can see by looking at my sig. However, the fact remains that performance per watt is what determines the final performance of a GPU. All you need to do is look at Kepler, Maxwell and especially Pascal. NVidia is able to get much more performance out of each watt than AMD because their hardware is more efficient. For AMD to match, much less beat, NVidia, they have to make GPUs that are much bigger, hotter and more power hungry than the competition.

Now generally speaking, gamers don't care about wattage, especially if said GPUs are A LOT faster than the competition. But what happens when said GPUs consume significantly more power, but only match or are just barely faster than the competition?

Then it's like the HD 5870 vs the GTX 480 all over again.

Dude, you went too far, hold your brakes. ATM the difference between the RX 480 and GTX 1060 is much smaller than it used to be; remember the Radeon R9 390 vs the competing GTX 980. In 2018 or 2019, no one will care about perf/watt when the difference between the two is minimal.
 

zinfamous

No Lifer
Jul 12, 2006
111,131
30,082
146
So exactly like VideoCardz predicted at the time this thread started.

- Vega 11 replacing Polaris 10/11 (professional market) - makes me wonder if this is some sort of desktop version of Scorpio's iGPU
- High-end Vega 10 comes first (H1-2017): 12 TFLOPs vs Fiji's 8.6 TFLOPs (+40%)
- Vega 10 has the exact same number of stream processors as their 2015 flagship, so they need higher clocks and architecture improvements to move up performance
- Vega 20 looks like a 7nm shrink of Vega 10, same number of CUs again (64 CUs): will have to face Volta-based products in 2018
- Navi in 2019: Navi 10 positioned as the faster GPU, Navi 11 replacing Vega 11




Wrong.

If those slides are true, that also means 16GB on Vega 10. I heard as recently as this morning that 8GB stacks of HBM2 don't even exist.

Who is right?

And lol--you have some post from someone else saved in a clipboard somewhere to distribute out as you see fit? That's, well, a bit petty, no?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Makes no sense whatsoever. Of course you can design a higher-performance architecture even with a perf/watt penalty. It just means that your final design will consume more power in order to beat the competition in performance.

Yes, but this hasn't been happening. The perf/watt gap between AMD and NVidia is now so large that AMD has not had the performance crown in a very long time. Their products are not only slower, but they also use significantly more energy. It would be one thing if their products were significantly faster than NVidia's yet used more power. But this is not the case.
 

Thedarkchild

Junior Member
Dec 30, 2016
7
16
16
Well, you had to go way back to prove your point.

And I don't believe I need to tell you that the HD5870 was AMD's last GPU that truly gave NVidia a run for its money. Meaning, when AMD prioritizes perf/watt, they are more likely to succeed.
Yet they didn't really succeed. Even with the HD4xxx, HD5xxx and HD6xxx series, when they were on equal footing, they didn't succeed. Why? Because Nvidia sold more GPUs even with worse performance, price and performance per watt. They had three of a kind. Even now, when AMD is in a bad state and strapped for cash, they don't have that "dark triad" in the GPU market: worse performance, worse performance per watt AND worse performance per dollar. Making money on bad chips while your opponent doesn't has resulted in the current situation, where one player in the market just doesn't have enough money for R&D to compete.

My problem is the same as it was with the ATi 9800 series. It whooped Nvidia's ass. Completely dismantled it. It was on the market months before Nvidia, it had better performance in old and current titles and much better performance in future ones (DX9, Shader 2.0). With AA and AF activated it left the FX 5800 Ultra in the dust, at a lower price and without the heavy factory overclock and gigantic coolers that FX cards came with. I'm not even going to go into the details of Nvidia's AA/AF implementation at that time (horrid). Just an all-around ass-whooping (like the 8800 GT against ATi a few years later). But you know what happened? People bought more FX cards than ATi 9xxx cards. Mental.

What can AMD do about it? When the competitor had worse performance, worse perf per watt and worse perf per dollar, it still outsold them. Now that AMD has perf per dollar and performance in the midrange market, but worse perf per watt, it's suddenly a huge issue that cannot be overlooked. Even though they have advantages of their own (FreeSync, CF, better performance in new APIs, more VRAM, lower price, etc.), they're still at a disadvantage because the card uses 40W more.

What history has shown is that even with an all-around better card and better prices, AMD still won't get more than 40% of the market, let alone 50%+.
 

Mem

Lifer
Apr 23, 2000
21,476
13
81
But what happens when said GPUs consume significantly more power, but only match or are just barely faster than the competition?

Then you look at other factors: pricing, RAM size, and the features you get with brand A and brand B. One of the factors for me was pricing; features on the card, for example SLI/Crossfire support; RAM size; and the length of the card was an important issue for me as well. Driver support with AMD has always been better than Nvidia's with regards to ageing.

I also like the fan swap in seconds on the XFX GTR 480 (another cool feature I like): it makes the heatsink and fans easier to clean, let alone replace.

I look at the whole picture, and it was time for me to upgrade my ageing 280X (I had an Nvidia 560 before that).

Nvidia and AMD have always had models over the years that go neck and neck with each other, nothing new there. However, AMD always seems to have better value for money, not surprising when you are the underdog, but nobody mentions that?

I did look at the 1060, but it was slightly pricier, had 2GB less RAM and no dual-card support, and other factors made my decision.
 
Last edited: