Or Nvidia chips just suck and that's why Apple doesn't want them anymore. AMD is willing to accept peanuts for their chips these days. Their margins are abysmal.
Apple likes AMD more now.
Well, I was thinking more 780 to 980.
If anyone had said in 2011 that 28nm would still be mainstream through 1H 2016, he would have been labelled a madman.
You're right, they improved performance by about 35% in many games when you compare GTX 970 to GTX 770, keeping the GTX 770 TDP, on the same node! Now tell me, how is it unrealistic to expect GTX 1070 to beat GTX 980 Ti/Titan X by 30% with new architecture and jump from 28nm to 16nm, keeping GTX 970 TDP?
Not only almost 35% performance gain, but significantly lower tdp. The 770 was pretty power hungry at 230W compared to the 160W or so for the 970. Not saying I care much about power consumption, but it's a nice extra. I'll be disappointed if the 70 series Nvidia card and 90 series AMD card can't beat the 980 Ti by 10%-15% or so after a big node shrink. 30% may be tough, that would be in line with what the 680 offered over the 580 when Nvidia went from 40 nm to 28 nm.
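The perf/watt claim in the posts above can be checked with a quick back-of-the-envelope calculation, using the rough figures quoted in the thread (~230W for the 770, ~160W for the 970, 970 ~35% faster):

```python
# Rough perf/watt comparison of GTX 770 vs GTX 970 (both on 28 nm),
# using the approximate figures quoted in the thread.
tdp_770 = 230.0   # watts (roughly; it's an overclocked GTX 680)
tdp_970 = 160.0   # watts (figure cited above)
perf_gain = 1.35  # GTX 970 ~35% faster than GTX 770

# perf/W = performance / power, so the ratio of ratios gives the improvement.
perf_per_watt_gain = perf_gain * (tdp_770 / tdp_970)
print(f"perf/W improvement: {perf_per_watt_gain:.2f}x")  # ~1.94x
```

In other words, on the same node Maxwell nearly doubled Kepler's perf/watt, which is why expectations for the 16nm jump ran so high.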
GTX 770 was 230W because it was an overclocked 195W GTX 680. There are 250W aftermarket GTX 970s, so it's simply a matter of board design and power circuitry, and how much you want to go beyond the efficiency curve on a fixed die size to gain more performance.
A 250W 970? I thought the 970 was limited to 110% TDP or so. My EVGA 970 SC can only go up to 110% TDP.
Looks like it was a 30% improvement at launch at both 1920x1080 and 2560x1600.
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980/26.html
If you don't think performance/watt and low power consumption are key, then you are living in a bubble. Note what the focus of AMD's entire Polaris marketing is: low power consumption and high performance/watt.
Then why didn't it matter before Kepler, when all AMD products completely destroyed everything team green had on efficiency?
It did.
Yeah, and what does that have to do with actual power consumption? As Maxwell automatically handles voltage bumps as you OC the card, power consumption starts to spike up.
The irony now is that NVIDIA need to throw a lot of that efficiency out the window in order to hold off Intel from making headway into the HPC market.
This.
In 2011, AMD had 60% of the mobile (discrete) GPU market. Now they have 34.6%. AMD had a clear efficiency lead from 2008-2012, until Kepler was launched in 2012. To me, it is VERY clear that efficiency leads directly translated to mobile GPU leadership. Desktop, not as much (obviously) but cards didn't really start hitting the TDP limits until we saw Fermi. NV apparently learned their lesson there.
Anyone here saying efficiency didn't matter and folks didn't care about it is amazingly ignorant. Sure, NV fanboys didn't care, but those of us who select GPUs based on performance and overall efficiency sure did (I LOVED my 5870).
Putting BD aside (a hot mess on the CPU side), I don't think AMD threw efficiency out the window for GPUs, but they didn't focus on it like NV did. It does sound like they are now, but that's after losing almost half their mobile GPU market share, and they are only at ~20% on the desktop side. They have a lot of ground to make up.
http://www.forbes.com/sites/greatspeculations/2011/03/28/amd-stiff-arms-nvidia-for-top-spot-in-notebook-gpus/#1971005d312b
http://www.mercuryresearch.com/graphics-pr-2015-q3.pdf
If you chose the 5870 over the GTX 480/580, then your decision was definitely not based on both efficiency and performance, because Nvidia's cards at the time, while less efficient, were superior performers.
The CPUs on the Drive PX2 module are there for a reason. Pascal GPUs do not have hardware scheduling, and therefore it is the job of the CPUs.
So it's not on AMD's either. The only compute part that can is Knights Landing.
The irony now is that NVIDIA need to throw a lot of that efficiency out the window in order to hold off Intel from making headway into the HPC market.
This is what Pascal will achieve by re-integrating the FP64 units NV threw out with Maxwell. Come Volta, we'll see even more hardware redundancy added to NVIDIA's architecture (potentially re-introducing hardware-side scheduling, if that's not already going to make a comeback with Pascal).
I also expect larger caches in both Pascal and Volta, further eroding that efficiency lead people praised Kepler and then Maxwell for bringing to the table. We may also see dedicated Asynchronous Compute engines making their way into Volta.
Basically, NVIDIA are heading towards a more GCN-like architecture due to pressure on both the supercomputer end and the consumer end, with DX12 and Vulkan (let's not forget VR).
AMD will be refining GCN with Polaris and Vega, introducing efficiency through architectural tweaks aimed at boosting IPC, coupled with a 14LPP process.
AMD are chasing VR and also looking to make a return to the HPC market with Zen, a high-bandwidth point-to-point interconnect, CUDA-to-OpenCL conversion, and Polaris/Vega-based FirePros.
It will be interesting to see how the three companies look by Q1 2017.
The only GPU that has proper hardware scheduling is Fiji, and maybe future Polaris/Vega GPUs. Knights Landing is not exactly a GPU.
But overall you are correct.
IIRC, at the time of release the 580 was only 10%-15% faster than the 5870. And the 580 had an MSRP of $499 while the 5870 was much cheaper at $379. So you got 85%-90% of the performance of a 580 at only about 75% of the cost.
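That value math works out as a quick sketch, using the MSRPs and the 10%-15% performance gap cited above:

```python
# Price/performance check: GTX 580 ($499 MSRP) vs HD 5870 ($379 MSRP),
# assuming the 580 was 10-15% faster, as stated in the post above.
msrp_580, msrp_5870 = 499.0, 379.0

price_ratio = msrp_5870 / msrp_580  # ~0.76: the 5870 cost ~76% as much
perf_if_15 = 1 / 1.15               # ~0.87: 5870 gives ~87% of 580 perf
perf_if_10 = 1 / 1.10               # ~0.91: 5870 gives ~91% of 580 perf
print(f"cost: {price_ratio:.0%}, perf: {perf_if_15:.0%}-{perf_if_10:.0%}")
```

So roughly 87%-91% of the performance for about three quarters of the price, which matches the post's 85%-90% claim closely.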