moonbogg
Lifer
- Jan 8, 2011
It's easy. Whaddya talking about?
Well, you got half his point. Understanding that, can you see why it might be confusing for someone who's a non-native speaker of English?
Well, that was all GM204 brought to the table: better perf/W. nVidia made a killing with it.
The irony is that the slang/jargon use of "literally" is done to create hyperbole, hyperbole being defined as "exaggerated statements or claims not meant to be taken literally." I think this qualifies as meta. It also probably creates a paradox every time it's done. You could be destroying worlds in other dimensions!
You're right, they improved performance by about 35% in many games when you compare the GTX 970 to the GTX 770, at the GTX 770's TDP, on the same node! Now tell me, how is it unrealistic to expect the GTX 1070 to beat the GTX 980 Ti/Titan X by 30% with a new architecture and a jump from 28nm to 16nm, while keeping the GTX 970's TDP?
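For what it's worth, the arithmetic behind that expectation can be sketched in a few lines. The performance indices below are assumed round numbers standing in for review averages, not measured benchmarks:

```python
# Back-of-the-envelope sketch of the uplift claims above. All numbers are
# illustrative assumptions (relative performance indices), not benchmarks.
def uplift(new, old):
    """Relative performance gain: 0.35 means +35%."""
    return new / old - 1.0

gtx770 = 1.00         # baseline index (assumed)
gtx970 = 1.35         # ~35% faster at the same TDP, same 28nm node
hoped_1070 = 1.30     # hoped-for index vs a 980 Ti baseline of 1.00

print(f"970 over 770: {uplift(gtx970, gtx770):+.0%}")
print(f"hoped 1070 over 980 Ti: {uplift(hoped_1070, 1.00):+.0%}")
```

Since both the 770→970 step and the hoped-for 980 Ti→1070 step hold TDP roughly constant, the whole gain shows up as a perf/W improvement.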
If the Pascal launch really happens in May, shouldn't we already have tons of information 5 weeks out? There are no hints of mass production starting. Also, GP100 utilizes HBM2. How can it be released (even as a Quadro card) without a reasonable HBM2 production process?
I don't know... Ever since Nvidia became more efficient in the performance/watt category, energy usage has suddenly become THE most important metric to lots of people when buying a video card. Despite the fact that it didn't seem to matter whatsoever during the years that AMD/ATi was more economical in the energy-sipping category.

But people don't buy cards for performance/watt. They buy for more performance. They care about the perf/watt ratio only if they can have more performance at the same wattage, not equal performance at lower wattage. Why would someone waste another 400 USD/EUR on a 1070 if they already paid 700 for a 980 Ti? To save 100 watts?
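To put that "to save 100 watts?" question in numbers, here is a rough payback sketch. Beyond the card price and power delta quoted above, every input (hours of use, electricity tariff) is an assumption, not data from the thread:

```python
# Hedged back-of-the-envelope for the "to save 100 watts?" question.
# Inputs marked "assumed" are illustrative, not sourced figures.
card_price_usd = 400.0   # hypothetical GTX 1070 price from the post
power_saved_w  = 100.0   # claimed wattage saved vs a 980 Ti
hours_per_day  = 4.0     # assumed daily gaming time
price_per_kwh  = 0.15    # assumed electricity tariff (USD/kWh)

kwh_saved_per_year = power_saved_w / 1000.0 * hours_per_day * 365
usd_saved_per_year = kwh_saved_per_year * price_per_kwh
payback_years = card_price_usd / usd_saved_per_year

print(f"~{usd_saved_per_year:.0f} USD/yr saved; "
      f"payback in about {payback_years:.0f} years")
```

Under these assumptions the electricity savings take well over a decade to cover the card, which is the post's point: nobody sidegrades for wattage alone.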
Two new MacBooks shipping in late Q2 according to DigiTimes. I bet we will find either Pascal (GP106?) or Polaris (Polaris 10?) inside one of the new 15'' MBPs.
www.digitimes.com/news/a20160322PD204.html
This is really a simplistic answer to a more complex question.
Look back at previous flagships and try to find 2x 8-pin connectors 10 years ago. Oh, you won't find them! (Even the 9800 Pro was <50 W TDP.) Efficiency didn't matter a whole lot over the following 10 years, as GPUs just kept adding die space and more power connectors while TDP went up. Eventually, they couldn't add more without focusing on a more efficient architecture. A lot of "new" GPUs 5-6 years ago were essentially the same as the old ones, just 50% bigger and on a new process node. Boom! More performance and more power usage, but slightly more efficient...
The reason efficiency became important is that it was required to (1) share architectures between mobile and desktop and (2) allow the MOST performance in the power envelope available.
I think efficiency is great as long as we get options. By that I mean we can get the best possible performance at all levels, low and high power.
You seem bitter that AMD picked the wrong time to shift the focus away from efficiency. That was a HUGE mistake and has cost them dearly. They are coming back around and have made efficiency front and center, and that will be critical to winning back designs for mobile and desktop market share.
If you think "efficiency" is just fanboyism, you are dead wrong. Like NV or not, they made a compelling decision to focus on efficiency and to market it accordingly.
I think his point was that AMD/ATi used to have the more power-efficient architecture back in the 5870/Fermi days, and the same people who now can't praise Nvidia enough for this very reason didn't seem to care much about it back then.
Which may or may not be true; the only thing that's a given is that there were fanboys back then and there are fanboys now. Fanboys of both camps.
I need to be sincere.
AMD and nVIDIA won't release any new card on the new process until July of this year. They will revamp EVERY card they sell, even nVIDIA's GT1010 and AMD's R5 430. I doubt we will see any rebrands this time. I mean, a whole new process changes everything.
I was really expecting GDDR5 to die, leaving GDDR5X or HBM1 as the new minimum. Sadly, that is not the case.
I won't buy the first cards unless they come with GDDR5X or HBM1 at minimum. Why? Because otherwise the improvements won't be dramatic.
Now to this topic.
I see this path in terms of price and performance:
Old -- New -- Price
--- -- Titan Pascal -- USD 1500
Titan X -- GTX 1080 Ti -- USD 900
GTX 980 Ti -- GTX 1080 -- USD 700
GTX 980 -- GTX 1070 -- USD 500
And so on...
I think after NVIDIA's little legal stunt, they won't be getting any further business with Apple for years to come. Expect it to be Polaris, for better or for worse.