NVIDIA Pascal Thread


Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Or Nvidia chips just suck and that's why apple doesn't want them anymore.

Apple. Likes amd more now.

I'd think it's more to do with the prices.

Regarding the GTX480/HD5870 days, AMD had great market share back then, built up from the HD4000 series. I guess those who stuck with nVIDIA were the ones who didn't care about power efficiency because they wanted more performance.

But like exar333 has pointed out, GPUs started hitting the power envelope limit and they couldn't do what they'd done for years, e.g. increase all units by 50%, move to a new node to reduce power, add in some new features, etc. Now they have to really look at ways to reduce power consumption because node jumps aren't enough and there are limits to how well you can cool a GPU. Plus, normally when it's power efficient, it's quiet and produces less heat (all of which reduces cost!), so it's a no-brainer to aim for a high perf/watt metric.
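To put that old recipe into rough numbers, here is a minimal back-of-the-envelope sketch in Python; the 50% unit increase comes from the post, while the 250W power budget is just an assumed illustrative cap, not any specific card.

```python
# Back-of-the-envelope sketch of the old scaling recipe described above.
# All specific figures here are illustrative assumptions, not measurements.

units_scale = 1.5        # "increase all units by 50%"
power_budget_w = 250.0   # assumed high-end board power cap (illustrative)

# Performance roughly tracks unit count at equal clocks.
perf_gain = units_scale  # ~1.5x

# To stay inside the same power budget, per-unit power has to drop by the
# same factor the unit count grew.
required_per_unit_power_drop = 1.0 - 1.0 / units_scale  # ~33%

print(f"Unit count x{units_scale:.2f} -> ~{perf_gain:.2f}x performance")
print(f"Per-unit power must fall ~{required_per_unit_power_drop:.0%} "
      f"to stay within {power_budget_w:.0f} W")
# Once a node shrink alone no longer delivers that ~33%, the architecture
# has to find the rest -- hence the focus on perf/watt.
```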
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
If anyone had said in 2011 that 28nm would still be mainstream until 1H 2016, he would have been labelled a madman.
 
Feb 19, 2009
10,457
10
76
If anyone had said in 2011 that 28nm would still be mainstream until 1H 2016, he would have been labelled a madman.

Yeah, and that's about how long we can expect 14/16FF to last. Even Intel is having problems with their next node, so I don't expect the other foundries to be having an easy time there.

It's official: http://www.techspot.com/news/64197-intel-officially-kills-off-tick-tock-era-extended.html

I laugh when TSMC claims 7nm by 2018. Those jokers fail every single time they make such predictions about future nodes, yet they keep on making the same massively optimistic dud predictions.
 
Last edited:

SteveGrabowski

Diamond Member
Oct 20, 2014
7,130
6,001
136
You're right, they improved performance by about 35% in many games when you compare the GTX 970 to the GTX 770, keeping the GTX 770 TDP, on the same node! Now tell me, how is it unrealistic to expect the GTX 1070 to beat the GTX 980 Ti/Titan X by 30% with a new architecture and a jump from 28nm to 16nm, while keeping GTX 970 TDP?

Not only almost a 35% performance gain, but a significantly lower TDP. The 770 was pretty power hungry at 230W compared to the 160W or so for the 970. Not saying I care much about power consumption, but it's a nice extra. I'll be disappointed if the 70-series Nvidia card and the 90-series AMD card can't beat the 980 Ti by 10%-15% or so after a big node shrink. 30% may be tough; that would be in line with what the 680 offered over the 580 when Nvidia went from 40nm to 28nm.
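As a quick sanity check on those figures, here is a minimal Python sketch; the ~35% gain and the 230W/160W TDPs are the rough values quoted above, not measurements.

```python
# Rough perf/watt comparison using the approximate figures from the post.
gtx770 = {"perf": 1.00, "tdp_w": 230}   # baseline
gtx970 = {"perf": 1.35, "tdp_w": 160}   # ~35% faster at a lower TDP

ppw_770 = gtx770["perf"] / gtx770["tdp_w"]
ppw_970 = gtx970["perf"] / gtx970["tdp_w"]

print(f"Perf/watt improvement 770 -> 970: ~{ppw_970 / ppw_770:.2f}x")  # ~1.94x
# Even a 10-15% win over a 980 Ti at roughly 970-class TDP would stack
# another sizeable perf/watt jump on top of this.
```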
 
Last edited:

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
Not only almost a 35% performance gain, but a significantly lower TDP. The 770 was pretty power hungry at 230W compared to the 160W or so for the 970. Not saying I care much about power consumption, but it's a nice extra. I'll be disappointed if the 70-series Nvidia card and the 90-series AMD card can't beat the 980 Ti by 10%-15% or so after a big node shrink. 30% may be tough; that would be in line with what the 680 offered over the 580 when Nvidia went from 40nm to 28nm.


GTX 770 was 230W because it was an overclocked 195W GTX 680. There are 250W aftermarket GTX 970s, so it's simply a matter of board design and power circuitry, and how much you want to go beyond the efficiency curve on a fixed die size to gain more performance.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
7,130
6,001
136
GTX 770 was 230W because it was an overclocked 195W GTX 680. There are 250W aftermarket GTX 970s, so it's simply a matter of board design and power circuitry, and how much you want to go beyond the efficiency curve on a fixed die size to gain more performance.

A 250W 970? I thought the 970 was limited to 110% TDP or so. My EVGA 970 SC can only go up to 110% TDP.
 


xorbe

Senior member
Sep 7, 2011
368
0
76
A 250W 970? I thought the 970 was limited to 110% TDP or so. My EVGA 970 SC can only go up to 110% TDP.

Yeah, they're rated 145W IIRC... that's a +105W jump, which would need a redesigned power section. With almost half the cores of a Titan X, I find it hard to believe you could draw the same power with half the hardware. Double the heat density would be very hard to cool and keep temps down.
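The heat-density worry is easy to put in numbers. A minimal Python sketch using the figures from this exchange (the 145W rating, a 250W target, and "half the hardware" of a 250W Titan X as a very rough stand-in for die area):

```python
# Illustrative heat-density arithmetic using the figures in this exchange.
rated_power_w = 145.0   # stock GTX 970 rating mentioned above
target_power_w = 250.0  # the hypothetical 250W aftermarket card
relative_hw = 0.5       # "almost half the cores of a Titan X", rough area proxy

print(f"Extra power over the stock rating: "
      f"+{target_power_w - rated_power_w:.0f} W")  # +105 W

# Watts per unit of hardware: a 250W Titan X vs. a 250W card with half the hardware.
titan_x_density = 250.0 / 1.0
half_hw_density = target_power_w / relative_hw
print(f"Relative heat density: ~{half_hw_density / titan_x_density:.1f}x")  # ~2x
# Twice the heat per unit of silicon is what would make temps hard to keep down.
```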
 

stuff_me_good

Senior member
Nov 2, 2013
206
35
91
If you don't think performance/watt and low power consumption are key, then you are living in a bubble. Note what the focus of AMD's entire Polaris marketing is: low power consumption and high performance/watt.

Then why didn't it matter before Kepler, when every AMD product completely destroyed everything team green had on efficiency?
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91

This.

In 2011, AMD had 60% of the mobile (discrete) GPU market. Now they have 34.6%. AMD had a clear efficiency lead from 2008-2012, until Kepler was launched in 2012. To me, it is VERY clear that efficiency leads directly translated to mobile GPU leadership. Desktop, not as much (obviously) but cards didn't really start hitting the TDP limits until we saw Fermi. NV apparently learned their lesson there.

Anyone here saying efficiency didn't matter and folks didn't care about it is amazingly ignorant. Sure, NV fanboys didn't care, but those of us who select GPUs based on performance and overall efficiency sure did (I LOVED my 5870).

Putting BD aside (a hot mess on the CPU side) I don't think AMD threw efficiency out the window for GPUs, but didn't focus on it like NV did. It does sound like they are now, but that's after losing almost half their mobile GPU market and are only ~20% on the desktop side. They have a lot of ground to make up.

http://www.forbes.com/sites/greatspeculations/2011/03/28/amd-stiff-arms-nvidia-for-top-spot-in-notebook-gpus/#1971005d312b

http://www.mercuryresearch.com/graphics-pr-2015-q3.pdf
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
A 250W 970? I thought the 970 was limited to 110% TDP or so. My EVGA 970 SC can only go up to 110% TDP.
Yeah, and what does that have to do with actual power consumption? Since Maxwell handles voltage bumps automatically as you OC the card, power consumption starts to spike up.
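For what it's worth, the spike follows from the usual dynamic-power approximation, where power scales roughly with frequency times voltage squared. A minimal illustrative Python sketch; the clock and voltage figures below are invented, not actual GTX 970 operating points.

```python
# Rough dynamic-power approximation: P ~ f * V^2 (constants folded away).
# Clock and voltage numbers are purely illustrative, not real 970 values.
def relative_power(freq_mhz, volts, base_freq_mhz=1200.0, base_volts=1.00):
    """Power relative to the baseline operating point."""
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

oc_same_volts = relative_power(1400, 1.00)  # ~1.17x from frequency alone
oc_with_bump = relative_power(1400, 1.15)   # ~1.54x once voltage gets bumped too

print(f"OC, same voltage:  ~{oc_same_volts:.2f}x power")
print(f"OC + voltage bump: ~{oc_with_bump:.2f}x power")
# The voltage term is squared, which is why automatic voltage bumps make
# consumption spike well past what the nominal TDP slider suggests.
```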
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
This.

In 2011, AMD had 60% of the mobile (discrete) GPU market. Now they have 34.6%. AMD had a clear efficiency lead from 2008-2012, until Kepler was launched in 2012. To me, it is VERY clear that efficiency leads directly translated to mobile GPU leadership. Desktop, not as much (obviously) but cards didn't really start hitting the TDP limits until we saw Fermi. NV apparently learned their lesson there.

Anyone here saying efficiency didn't matter and folks didn't care about it is amazingly ignorant. Sure, NV fanboys didn't care, but those of us who select GPUs based on performance and overall efficiency sure did (I LOVED my 5870).

Putting BD aside (a hot mess on the CPU side) I don't think AMD threw efficiency out the window for GPUs, but didn't focus on it like NV did. It does sound like they are now, but that's after losing almost half their mobile GPU market and are only ~20% on the desktop side. They have a lot of ground to make up.

http://www.forbes.com/sites/greatspeculations/2011/03/28/amd-stiff-arms-nvidia-for-top-spot-in-notebook-gpus/#1971005d312b

http://www.mercuryresearch.com/graphics-pr-2015-q3.pdf
The irony now is that NVIDIA needs to throw a lot of that efficiency out the window in order to hold off Intel from making headway into the HPC market.

This is what Pascal will achieve by re-integrating those FP64 units NV threw out with Maxwell. Come Volta, we'll see even more hardware redundancy added to NVIDIA's architecture (potentially re-introducing hardware-side scheduling, if that isn't already going to make a comeback with Pascal).

I also expect larger caches in both Pascal and Volta. Further eroding that efficiency lead people praised Kepler and then Maxwell for bringing to the table. We may also see dedicated Asynchronous Compute engines making their way into Volta.

Basically, NVIDIA are heading towards a more GCN-like architecture due to pressures on both the supercomputer end as well as the consumer end with DX12 and Vulkan (let's not forget VR).

AMD will be refining GCN with Polaris and Vega. Introducing efficiency with architectural tweaks aimed at boosting IPC coupled with a 14LPP process.

AMD are chasing VR and looking to also make a return to the HPC market with Zen, high bandwidth point to point interconnect, CUDA to OpenCL conversion and Polaris/Vega based Firepro's.

It will be interesting to see how the three companies look by Q1 2017.
 

Timmah!

Golden Member
Jul 24, 2010
1,463
729
136
This.

In 2011, AMD had 60% of the mobile (discrete) GPU market. Now they have 34.6%. AMD had a clear efficiency lead from 2008-2012, until Kepler was launched in 2012. To me, it is VERY clear that efficiency leads directly translated to mobile GPU leadership. Desktop, not as much (obviously) but cards didn't really start hitting the TDP limits until we saw Fermi. NV apparently learned their lesson there.

Anyone here saying efficiency didn't matter and folks didn't care about it is amazingly ignorant. Sure, NV fanboys didn't care, but those of us who select GPUs based on performance and overall efficiency sure did (I LOVED my 5870).

Putting BD aside (a hot mess on the CPU side) I don't think AMD threw efficiency out the window for GPUs, but didn't focus on it like NV did. It does sound like they are now, but that's after losing almost half their mobile GPU market and are only ~20% on the desktop side. They have a lot of ground to make up.

http://www.forbes.com/sites/greatspeculations/2011/03/28/amd-stiff-arms-nvidia-for-top-spot-in-notebook-gpus/#1971005d312b

http://www.mercuryresearch.com/graphics-pr-2015-q3.pdf

If you chose the 5870 over the GTX480/580, then your decision was definitely not based on both efficiency and performance, because NVIDIA cards at the time, while less efficient, were superior performers.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
The CPUs on the Drive PX2 module are there for a reason. Pascal GPUs do not have hardware scheduling, and therefore it's the job of the CPUs.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
If you chose the 5870 over the GTX480/580, then your decision was definitely not based on both efficiency and performance, because NVIDIA cards at the time, while less efficient, were superior performers.

Fantastically ignorant post.

If you would like to know, I grabbed the 5870 at launch, and (of course you wouldn't have put in the time to actually research this) the 480/580 didn't even exist then.

FAIL
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The CPUs on the Drive PX2 module are there for a reason. Pascal GPUs do not have hardware scheduling, and therefore it's the job of the CPUs.

So it's not on AMD's either. The only compute part that can is Knights Landing.
 

Glo.

Diamond Member
Apr 25, 2015
5,765
4,671
136
So it's not on AMD's either. The only compute part that can is Knights Landing.

The only GPU that has proper hardware scheduling is Fiji, and maybe future Polaris/Vega GPUs. Knights Landing is not exactly a GPU.

But overall you are correct
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
The irony now is that NVIDIA needs to throw a lot of that efficiency out the window in order to hold off Intel from making headway into the HPC market.

This is what Pascal will achieve by re-integrating those FP64 units NV threw out with Maxwell. Come Volta, we'll see even more hardware redundancy added to NVIDIA's architecture (potentially re-introducing hardware-side scheduling, if that isn't already going to make a comeback with Pascal).

I also expect larger caches in both Pascal and Volta. Further eroding that efficiency lead people praised Kepler and then Maxwell for bringing to the table. We may also see dedicated Asynchronous Compute engines making their way into Volta.

Basically, NVIDIA are heading towards a more GCN-like architecture due to pressures on both the supercomputer end as well as the consumer end with DX12 and Vulkan (let's not forget VR).
AMD will be refining GCN with Polaris and Vega. Introducing efficiency with architectural tweaks aimed at boosting IPC coupled with a 14LPP process.

AMD are chasing VR and looking to also make a return to the HPC market with Zen, high bandwidth point to point interconnect, CUDA to OpenCL conversion and Polaris/Vega based Firepro's.

It will be interesting to see how the three companies look by Q1 2017.

Great post!

I definitely agree with your points above. At some point, GPUs will need to be flexible enough in their cache structures, memory, and compute to allow for different configurations. Not saying that cannot happen now, but a lot of different configurations means more work in driver optimization. Part of me thinks this is why NV is doing what they are doing each 'new' gen: focusing on a few new areas to really 'do well' and optimizing accordingly. Gone are the days of most people buying expensive cards and keeping them for many years, so they sort of cater to that group. The lower- to mid-range group probably keeps their cards longer, and that's currently an advantage for AMD.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The only GPU that has proper hardware scheduling is Fiji, and maybe future Polaris/Vega GPUs. Knights Landing is not exactly a GPU.

But overall you are correct

And your claim about PX2 couldn't be more wrong. But I guess you already knew that.
 

Creig

Diamond Member
Oct 9, 1999
5,171
13
81
If you chose the 5870 over the GTX480/580, then your decision was definitely not based on both efficiency and performance, because NVIDIA cards at the time, while less efficient, were superior performers.
IIRC, at the time of release the 580 was only 10%-15% faster than the 5870. And the 580 had an MSRP of $499 while the 5870 was much cheaper at $379. So you got 85%-90% of the performance of a 580 at only 75% of the cost.
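Put as plain arithmetic, here is a minimal Python sketch using only the MSRPs and the 10%-15% figure quoted above (not benchmark data):

```python
# Price/performance arithmetic from the figures quoted above.
hd5870_price, gtx580_price = 379, 499   # launch MSRPs mentioned in the post
gtx580_perf_range = (1.10, 1.15)        # "10%-15% faster" than the 5870

price_ratio = hd5870_price / gtx580_price
print(f"5870 price vs 580: ~{price_ratio:.0%} of the cost")  # ~76%

for gtx580_perf in gtx580_perf_range:
    rel_perf = 1.0 / gtx580_perf        # 5870 performance relative to the 580
    perf_per_dollar = rel_perf / price_ratio
    # In the same ballpark as the 85%-90% figure above.
    print(f"If the 580 is {gtx580_perf:.2f}x: the 5870 gives ~{rel_perf:.0%} "
          f"of the performance, ~{perf_per_dollar:.2f}x the perf per dollar")
```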
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
The CPUs on the Drive PX2 module are there for a reason. Pascal GPUs do not have hardware scheduling, and therefore it's the job of the CPUs.

It is far more likely that they are there so that Drive can execute general-purpose code much more effectively than would be possible with a GPU only.

It is very likely that the CPU on board is running a lot of the software involved in such a product with the GPU performing heavy lifting.
 