[TT] Pascal rumored to use GDDR5X..


littleg

Senior member
Jul 9, 2015
355
38
91
You pointed out that the 7970 GHz eventually pulled away from the 680, as befits a high-end card vs. a midrange card.

Both cards retailed for $499, but the 680, which took the crown at release, has failed to keep up.

Ergo, the 7970 GHz, which was basically equivalent when released, was a better value over time than the 680, per your own comment.

This has been repeated with the 780 when it was the high-end gaming card, and to a lesser extent with the 980.

My other point was that if NV is going to use GDDR5X for Pascal instead of HBM2, then in my opinion it's due to a strong business case rather than a technical reason.

There's no reason for NV to skip HBM's superior power efficiency, form factor and bandwidth for Pascal unless there is a significantly limited supply or a significant risk of delaying Pascal.

AMD doesn't have the luxury of waiting another round for HBM2, NV does.

Aside from potentially constrained initial supply, cost may be an issue for the low and mid range cards.

We don't have any data on the relative costs of GDDR5X vs HBM, but if there is a non-trivial additional cost for HBM then that would drive the price of low and mid-range cards up.

Both vendors will have price points they want to hit in each segment for optimal volume and profit. It may be the case that it's not really feasible at the moment to offer a card in the lower segments of the market with HBM and still hit those price points.
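
Just to make the price-point argument concrete, here's a toy back-of-envelope in Python. Every number in it is invented purely for illustration; nobody outside AMD/NV and the memory vendors knows the real BOM figures:

```python
# Purely hypothetical bill-of-materials sketch. Every number here is invented
# just to show how a memory-cost premium can break a segment's price point.
def card_feasible(gpu_cost, memory_cost, board_and_cooler, margin, target_msrp):
    """True if build cost plus the desired margin still fits under the target MSRP."""
    build_cost = gpu_cost + memory_cost + board_and_cooler
    return build_cost * (1 + margin) <= target_msrp

# Midrange card aimed at a $199 price point with a 40% combined margin (all assumed).
print(card_feasible(gpu_cost=60, memory_cost=25, board_and_cooler=45,
                    margin=0.40, target_msrp=199))  # cheaper GDDR5-class memory -> True
print(card_feasible(gpu_cost=60, memory_cost=55, board_and_cooler=45,
                    margin=0.40, target_msrp=199))  # pricier HBM stack + interposer -> False
```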
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Odd. Long time AMD fan, you know how it was during Fermi? Every AMD user and their mother were hollering and hooting about power efficiency. That flew out the window the day the HD 7970 got taken out by the GTX 680. And it was a dead talking point when the HD 7970 GHz basically traded it all in to beat the GTX 680.


My point is, all these talking points go back and forth ad nauseam. No one here is explicitly asking for GDDR5X. All I see is a bunch of "if HBM is really delayed, this seems like a reasonable stopgap." You know what else I don't see? "If HBM is delayed, that's fine, I hope they delay all the cards until HBM is readily available."

I won't be surprised if the roles swap and AMD ends up more efficient than Nvidia again. But I'll laugh internally when the AMD camp goes back to bragging about efficiency like there's no tomorrow.

680 vs 7970 efficiency is load dependent. Don't forget mining: the 7970 was miles more efficient.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I meant, more so, that we might see Hawaii/Grenada [Tahiti/Tonga?] with a node shrink make another showing.

A die-shrunk Tonga would actually be a pretty decent pipe cleaner for AMD's 16nm FF+ efforts. If they resisted the temptation to overclock the heck out of the card and instead ran it at GCN's optimal clock rate (which is 800-900 MHz on 28nm; I suppose it could be different on 16nm), they could get the power consumption under 75W. That would make it the most powerful card by far that doesn't need a PCIe power connector, making it perfect for the OEM upgrade market. The success of the GTX 750 Ti indicates that there's a lot of volume there.
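
As a rough sanity check on that sub-75W idea: dynamic power scales roughly with switched capacitance times voltage squared times frequency. The sketch below uses entirely assumed numbers (reference board power, voltages, clocks, node capacitance scaling), not measured Tonga figures:

```python
# Rough dynamic-power scaling sketch: P_dyn ~ C * V^2 * f (leakage ignored).
# Every input is an assumption for illustration, not a measured Tonga figure.
def scaled_power(p_ref, v_ref, f_ref, v_new, f_new, cap_scale):
    """Scale a reference board power by voltage^2, frequency, and a switched-capacitance
    factor standing in for the node shrink."""
    return p_ref * cap_scale * (v_new / v_ref) ** 2 * (f_new / f_ref)

# Hypothetical: a ~190 W 28nm Tonga-class card at 1.15 V / 970 MHz, re-run at
# 0.95 V / 850 MHz on a finer node assumed to cut switched capacitance by ~35%.
print(round(scaled_power(p_ref=190, v_ref=1.15, f_ref=970,
                         v_new=0.95, f_new=850, cap_scale=0.65)))  # ~74 W
```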

That said, along with the shrink, they should add HDMI 2.0 support to the output module, and update the UVD block to support HEVC decoding (both 8-bit and 10-bit) in hardware. The GTX 960 has these features so AMD needs to keep up. Besides, a card like this would be perfect for HTPCs.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Odd. Long time AMD fan, you know how it was during Fermi? Every AMD user and their mother were hollering and hooting about power efficiency. That flew out the window the day the HD 7970 got taken out by the GTX 680. And it was a dead talking point when the HD 7970 GHz basically traded it all in to beat the GTX 680.

Actually, Tahiti was about on par with GK104 in terms of perf/watt. The vanilla HD 7970 had very similar perf/watt to the GTX 680. Yes, it's true that the 7970 GHz Edition trailed behind (GCN is most efficient around 800-900 MHz, and pushing it beyond that hurts perf/watt). And Nvidia started pushing Kepler a bit more for the 700 series, too. Look at this chart: as of May 2013, the 7970 GHz Edition was 95% as efficient as the GTX 770.
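
For anyone wondering where a figure like "95%" comes from, it's just average FPS per board watt, normalized to the other card. The numbers below are placeholders, not the actual May 2013 chart data:

```python
# Perf/watt comparison the way review charts usually compute it: average FPS / board power.
# FPS and power figures below are placeholders, not the actual May 2013 data.
def perf_per_watt(avg_fps, board_power_w):
    return avg_fps / board_power_w

hd7970_ghz = perf_per_watt(avg_fps=60.0, board_power_w=250.0)  # hypothetical numbers
gtx_770    = perf_per_watt(avg_fps=58.0, board_power_w=230.0)  # hypothetical numbers
print(f"7970 GHz Edition is {hd7970_ghz / gtx_770:.0%} as efficient as the GTX 770")
```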

As an architecture, Kepler was overall inferior to GCN; it sometimes matched or beat GCN in game performance at the time, but that was due to better software DX11 optimizations, not anything inherent in the hardware. On anything compute-related, GCN was clearly better (except perhaps double-precision on the Titan/Tesla). Today, GCN beats Kepler in just about everything. Of course, Maxwell is better still; but it's a mistake to read history backwards and assume that because Nvidia has the perf/watt crown now, they must have had it in 2013 as well. They didn't.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Actually, Tahiti was about on par with GK104 in terms of perf/watt. The vanilla HD 7970 had very similar perf/watt to the GTX 680. Yes, it's true that the 7970 GHz Edition trailed behind (GCN is most efficient around 800-900 MHz, and pushing it beyond that hurts perf/watt). And Nvidia started pushing Kepler a bit more for the 700 series, too. Look at this chart: as of May 2013, the 7970 GHz Edition was 95% as efficient as the GTX 770.

As an architecture, Kepler was overall inferior to GCN; it sometimes matched or beat GCN in game performance at the time, but that was due to better software DX11 optimizations, not anything inherent in the hardware. On anything compute-related, GCN was clearly better (except perhaps double-precision on the Titan/Tesla). Today, GCN beats Kepler in just about everything. Of course, Maxwell is better still; but it's a mistake to read history backwards and assume that because Nvidia has the perf/watt crown now, they must have had it in 2013 as well. They didn't.

I'm just going to use this post since I'm not gonna go and requote everyone.



Everyone arguing is proving my point (not you, person I quoted). AMD had better perf/watt going into 28nm and THEY LOST IT with the GTX 680 vs 7970. THAT IS ALL I SAID. How quickly people jumped to posting charts and proving my point is exactly what I meant. When AMD had that metric, it was an AMD dogma. I know, I used it a lot against NV-users. Once NV got the metric, perf/watt didn't matter. It went to "well, it's faster." Which is what I said when the GHz basically traded it all in to win.

Look at how the Nano is basically treated. My original post to the person I responded to was that being Green (as in eco-friendly) around here was never about anything besides scoring points. (I was agreeing with him.)
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
As an architecture, Kepler was overall inferior to GCN; it sometimes matched or beat GCN in game performance at the time, but that was due to better software DX11 optimizations, not anything inherent in the hardware. On anything compute-related, GCN was clearly better (except perhaps double-precision on the Titan/Tesla). Today, GCN beats Kepler in just about everything. Of course, Maxwell is better still; but it's a mistake to read history backwards and assume that because Nvidia has the perf/watt crown now, they must have had it in 2013 as well. They didn't.

That's not totally true. Kepler and GCN both have strong points and neither is truly better overall. This is because designing a GPU involves trade-offs. For example, you mention compute: GCN's compute is actually on average worse than a fully enabled Kepler's (GK180GL). The reason one may think Kepler is inferior at compute is that nVIDIA disabled or left out a large number of compute units in their consumer designs. If you look at the Quadro K6000, it was a very competitive card.

Besides compute, Kepler also offers superior texturing performance, geometry performance, texture compression (since Fermi) and superior efficiency. Now you may debate me on that last point, but remember this: nVIDIA was able to successfully shrink Kepler into a phone SoC, something I don't think was/is possible with GCN 1.1/1.2.

GCN, on the other hand, was indeed the more forward-looking architecture, supporting more features than Kepler. GCN also has very strong ALUs, but this wasn't without trade-offs.

GPU designs, much like CPU designs, are rarely clean sweeps in favor of one design over the other; they usually involve trade-offs.
 
Last edited:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
That's not totally true. Kepler and GCN both have strong points and neither is truly better overall. This is because designing a GPU involves trade-offs. For example, you mention compute: GCN's compute is actually on average worse than a fully enabled Kepler's (GK180GL). The reason one may think Kepler is inferior at compute is that nVIDIA disabled or left out a large number of compute units in their consumer designs. If you look at the Quadro K6000, it was a very competitive card.

Besides compute, Kepler also offers superior texturing performance, geometry performance, texture compression (since Fermi) and superior efficiency. Now you may debate me on that last point, but remember this: nVIDIA was able to successfully shrink Kepler into a phone SoC, something I don't think was/is possible with GCN 1.1/1.2.

GCN, on the other hand, was indeed the more forward-looking architecture, supporting more features than Kepler. GCN also has very strong ALUs, but this wasn't without trade-offs.

GPU designs, much like CPU designs, are rarely clean sweeps in favor of one design over the other; they usually involve trade-offs.

Shrinking GCN into an SoC is definitely possible. It might be slower than Kepler, but beyond that the thing limiting it is AMD's budget.
 

Goatsecks

Senior member
May 7, 2012
210
7
76
You can say the same about people talking about power consumption... go to an electrical forum.

Apart from the fact that efficiency directly impacts thermals, and indirectly impacts acoustics and overclocking potential.

Meanwhile, the mining craze was a pain in the ass for gamers who wanted to get their hands on 7950s.

Edit: Forgot to mention mobile GPUs. Pretty self-explanatory.
 
Last edited:

ocre

Golden Member
Dec 26, 2008
1,594
7
81
My other point was that if NV is going to use GDDR5X for Pascal instead of HBM2, then in my opinion it's due to a strong business case rather than a technical reason.

There's no reason for NV to skip HBM's superior power efficiency, form factor and bandwidth for Pascal unless there is a significantly limited supply or a significant risk of delaying Pascal.

AMD doesn't have the luxury of waiting another round for HBM2, NV does.

This thread has gone silly.
100%, GDDR5X wouldn't be the first choice from a strictly technological perspective. But it is very apparent that most of us here lack a deeper understanding when it comes to chip design and manufacturing.

The truth is simple: Intel isn't selling the best chip they can possibly make, and neither is AMD. There are compromises and trade-offs every step of the way, be it time, cost, feasibility, power consumption, etc. There is no chip in production without compromise.

Trade-offs and compromise are the reality for every chip maker. They spend every day weighing the pros and cons, often making tough decisions.
But this one isn't even all that complicated.

We often talk about these things with very little vision into the big picture. Let's just talk about a few of the relevant hurdles that are most likely in play in this specific decision. The technological advantage HBM2 has over GDDR5X is blatantly obvious, especially when we consider power consumption. But if it isn't feasible, it all means nothing to a chipmaker. The time frame to HBM2 mass production is a concern that most of us get, but you must also consider that the ramp-up speed would be a huge issue for Nvidia as well. Nvidia's sales volumes are huge, so depending on a technology that can't deliver chips in the quantities they need would be suicidal. It only makes sense to look for other options.
 

Goatsecks

Senior member
May 7, 2012
210
7
76
This thread has gone silly.
100%, GDDR5X wouldn't be the first choice from a strictly technological perspective. But it is very apparent that most of us here lack a deeper understanding when it comes to chip design and manufacturing.

The truth is simple: Intel isn't selling the best chip they can possibly make, and neither is AMD. There are compromises and trade-offs every step of the way, be it time, cost, feasibility, power consumption, etc. There is no chip in production without compromise.

Trade-offs and compromise are the reality for every chip maker. They spend every day weighing the pros and cons, often making tough decisions.
But this one isn't even all that complicated.

We often talk about these things with very little vision into the big picture. Let's just talk about a few of the relevant hurdles that are most likely in play in this specific decision. The technological advantage HBM2 has over GDDR5X is blatantly obvious, especially when we consider power consumption. But if it isn't feasible, it all means nothing to a chipmaker. The time frame to HBM2 mass production is a concern that most of us get, but you must also consider that the ramp-up speed would be a huge issue for Nvidia as well. Nvidia's sales volumes are huge, so depending on a technology that can't deliver chips in the quantities they need would be suicidal. It only makes sense to look for other options.

Begone, you with your turgid common sense and logic. God dammit, we want to lampoon Nvidia for not using HBM2 in their paperweights, let alone their GPUs.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
You can say the same about people talking about power consumption... go to an electrical forum.

Wait, so power consumption (and heat) doesn't affect the cases you can properly fit the card in, how much you can OC it, the power supply you'd need and so on?

OK then...
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Wait, so power consumption (and heat) doesn't affect the cases you can properly fit the card in, how much you can OC it, the power supply you'd need and so on?

OK then...

You need to look at the whole conversation and then maybe get your sarcasm meter tuned up.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Actually, Tahiti was about on par with GK104 in terms of perf/watt. The vanilla HD 7970 had very similar perf/watt to the GTX 680. Yes, it's true that the 7970 GHz Edition trailed behind (GCN is most efficient around 800-900 MHz, and pushing it beyond that hurts perf/watt). And Nvidia started pushing Kepler a bit more for the 700 series, too. Look at this chart: as of May 2013, the 7970 GHz Edition was 95% as efficient as the GTX 770.

As an architecture, Kepler was overall inferior to GCN; it sometimes matched or beat GCN in game performance at the time, but that was due to better software DX11 optimizations, not anything inherent in the hardware. On anything compute-related, GCN was clearly better (except perhaps double-precision on the Titan/Tesla). Today, GCN beats Kepler in just about everything. Of course, Maxwell is better still; but it's a mistake to read history backwards and assume that because Nvidia has the perf/watt crown now, they must have had it in 2013 as well. They didn't.

Kepler and GCN were competitive with each other on the desktop, but in the other half of the market (laptops) Kepler wiped out AMD. It's laptops where performance/watt really matters, hence I think it's fair to say Kepler was a generation ahead of anything AMD had in performance/watt.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Kepler and GCN were competitive with each other in desktop, but in the other half of the market (laptops) it wiped out AMD. It's laptops where performance/watt really matters, hence I think it's fair to say Kepler was a generation ahead of anything AMD have in performance/watt.

Only Tahiti and Hawaii, with their emphasis on high DP performance/mm2, were really worse, but the rest of the range was generally competitive in performance/watt.
 