OFFICIAL KEPLER "GTX680" Reviews


Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
The NV 680:
1) Faster
2) Cheaper
3) Smaller die
4) Cooler
5) More power efficient
6) Quieter
All the things that AMD used to lead in, NV now leads. Perfect sized card. I will take 2.
 

tincart

Senior member
Apr 15, 2010
630
1
0
This is a great looking card. I hope the rest of the product stack comes out soon; I want another great deal at $200. Unless there are some great Pitcairn deals in the future, it looks like there will be another Nvidia card in my next build.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
But overall performance comes from a combination of things. Assuming infinite memory bandwidth and PCIe bandwidth, performance comes down to how much work can be done per clock and how many clock cycles can be crammed into a given amount of time. The 680 is faster. Not hugely faster, but faster at the reference speeds.

Like I said earlier, I don't know what there is to argue about. A GPU comes out later than another, at a similar price point, and is faster... I think we would expect that more often than not.
Yes, the 680 is faster out of the box. The only reason is that it has 15% higher clocks.
AMD can release a 7975 @ 1.1GHz and it will be the faster card out of the box.
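A quick back-of-the-envelope version of that clock argument, as a minimal sketch in Python. The 925MHz and ~1.05GHz figures are the reference/boost clocks quoted in this thread; the ~5% overall performance edge is purely an assumed, illustrative number, since it varies by review, game and resolution:

Code:
# Compare the clock advantage with an assumed overall performance advantage
# to see what it implies about per-clock throughput.
hd7970_clock_ghz = 0.925         # reference HD 7970 core clock
gtx680_clock_ghz = 1.05          # rough boosted GTX 680 clock (assumed here)
assumed_perf_edge = 0.05         # hypothetical "680 is ~5% faster overall"

clock_edge = gtx680_clock_ghz / hd7970_clock_ghz - 1
per_clock_ratio = (1 + assumed_perf_edge) / (1 + clock_edge)

print(f"Clock advantage:           {clock_edge:.1%}")        # ~13.5%
print(f"Implied per-clock vs 7970: {per_clock_ratio:.2f}x")  # below 1.0x

If the performance edge is smaller than the clock edge, the implied per-clock throughput comes out below the 7970's, which is the whole point of the "it's only the clocks" argument.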
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I would suggest you learn to read?

That's just rude. Using my favorite performance charts, the GTX 680 is just 1% faster at 2560 with AA and 3% faster without. That's easily within the margin of error, so I see his point. The situation looks just like the GTX 590 and the 6990: using average metrics the GTX 590 is a bit faster, but they trade blows on a game-by-game basis. I wouldn't call the GTX 590 the indisputable king of the hill. It is still the fastest card on the market, but for all intents and purposes the 6990 is just as good.
And like the GTX 590, which was never designed for the clocks it was released at (as evidenced by its inadequate power delivery system), the GTX 680 too was never supposed to be a top-end solution. The reason for the first was that NV was surprised by the 6990; they probably thought AMD wouldn't break the ATX specification. The reason for the latter was greed and the market situation. They saw that they could price their mid-range card at a high-end price and get away with it, and even get praised for offering better value than the competing design, which is true.

http://www.computerbase.de/artikel/...force-gtx-680/12/#abschnitt_leistung_mit_aaaf

It's clearly faster at 1080p though.
 

OCGuy

Lifer
Jul 12, 2000
27,224
36
91
Sure, 925MHz against 1.05-1.1GHz. The 680 is what, 10% faster with 15% higher clocks?

There is a reason AMD released the stock clocks where they are at.

I think what you mean to point out is that even though nV is clocked higher, it is cooler, quieter, and uses less power.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Hardware.fr says (via Google Translate):

"It is interesting to note that without Boost GPU, the GeForce GTX 680 would have been content to match the Radeon HD 7970. Overclocking the GeForce GTX 680, and especially her memory, the greatest benefit to situations in which it was behind. The average gain is 13% with a peak at 18% in Alan Wake. The Radeon HD 7970 enjoys a 15% gain with this relatively conservative overclocking, enabling it to make up some of the advance of the GeForce GTX 680. Note that this is an average earnings and not a performance index that uses a weighting relative to the single GPU card the best performing in each game and gives a different result than one point."

http://www.hardware.fr/articles/857-23/performances-gpu-boost-overclocking.html

In their conclusion, they said:

"We must therefore assume that this is the range in which performance will find the GeForce GTX 680 once placed in your system, and performance may actually be closer to those of a Radeon HD 7970. In level of overclocking, the GK104 has less room than the Radeon HD 7900 GPU, GPU Boost already drawing largely in it. By cons, overclocking the memory can be generous and offer a very high yield on a GeForce GTX 680 which is somewhat lacking in memory bandwidth, providing a nice performance gain especially in situations where it is a little bit withdrawal. To beat the GeForce GTX 680 and boosted, it will then go through a massive overclocking of the Radeon HD 7970, with changes in blood increased GPU and all nuisances."

They tried maxing out GPU Boost and found that it can be unstable. They also used a relatively tame overclock of the 7970, probably without any overvolting like GPU Boost uses, and the 7970 mostly caught up.

So without GPU Boost the 680 would merely be on par with the 7970.

With GPU Boost (which acts as overclock + overvolt) the 680 managed to beat the 7970.

A modest overvolt and a reasonably common level of OC should put the 7970 back on par with the 680.

Where the 680 would win out, though, is that it apparently has lower wattage even when both cards are going balls out. I.e., the max-GPU Boosted OC+OV 680 draws less wattage than the modestly overvolted, max-stable-OC'd 7970. And then there is the Adaptive Vsync, Physx, CUDA, and $50 lower price tag.

Bottom line: without GPU Boost, both cards are on par with each other. They once again trade blows when both are pushed to roughly the same stress level (variable overvoltage and max mostly-stable-OC vs. a slight bump and max stable OC for the 7970), especially when the memory-starved 680 gets a memory OC. Where the 7970 stumbles is that it draws more power in the process, has a smaller feature set, and costs $50 more, which seems a high price to pay for 3GB of VRAM that isn't really needed. Should be interesting to see how AMD prices the 1.5GB version of the 7970, but due to the above, the 7970 1.5GB version should sell for about $425, maybe less.
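As a rough sanity check on how those quoted gains interact, here's a minimal sketch; the starting gap of a boosted 680 over a stock 7970 is an assumed placeholder, while the 13% and 15% gains are the Hardware.fr averages quoted above:

Code:
# Combine the overclocking gains quoted from the Hardware.fr article.
assumed_stock_gap = 0.05   # hypothetical: boosted GTX 680 starts ~5% ahead
gain_680 = 0.13            # average gain from the 680 core+memory OC (article)
gain_7970 = 0.15           # average gain from the conservative 7970 OC (article)

gap_after_oc = (1 + assumed_stock_gap) * (1 + gain_680) / (1 + gain_7970) - 1
print(f"Remaining gap after both OCs: {gap_after_oc:+.1%}")   # roughly +3%

With those numbers the conservative 7970 overclock claws back most, but not quite all, of the 680's lead, which matches the article's conclusion.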
 
Last edited:

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
There is a reason AMD released the stock clocks where they are at.

I think what you mean to point out is that even though nV is clocked higher, it is cooler, quieter, and uses less power.
Yes, hell has frozen over.
NV has the more power efficient card for gaming. NV did an excellent job.

Here is an answer from Dave Baumann why Tahiti had its clocks so low.
http://forum.beyond3d.com/showthread.php?p=1626557#post1626557

Timing.

Although there was a relatively short time period between the releases of the chips, Verde's and Pitcairn's bring-up, and to some extent qualification, have a reasonable level of leveraging going on, so they are a little shortened in terms of initial engineering wafers back to product shipping. Actually setting the product "boundaries" for Tahiti happened a while ago, on initial engineering material and only a few wafers out from the fab; Pitcairn and Verde, on the other hand, had their product boundaries set when Tahiti production starts were already occurring, and there is a very quick evolution in terms of understanding things with the new process / chips.

I guess the question you want to ask is whether, now that we know things have evolved, we are going back to re-look at Tahiti....
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Yes, hell has frozen over.
NV has the more power efficient card for gaming. NV did an excellent job.

Here is an answer from Dave Baumann why Tahiti had its clocks so low.
http://forum.beyond3d.com/showthread.php?p=1626557#post1626557

Selling a pre-OC'd 7970 (perhaps binned) is not a great solution... it's a stopgap at best. See my post right above yours. Yes, it will bring the 7970 back on par with the 680, but it will also draw more power in the process, at least for the 3GB VRAM version and probably also the 1.5GB VRAM version. And it will have a smaller feature set, too. AMD has got to lower prices or else lose market share.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Selling a pre-OC'd 7970 (perhaps binned) is not a great solution... it's a stopgap at best. See my post right above yours. Yes, it will bring the 7970 back on par with the 680, but it will also draw more power in the process, at least for the 3GB VRAM version and probably also the 1.5GB VRAM version. And it will have a smaller feature set, too. AMD has got to lower prices or else lose market share.
I am not sure how much power a 1.1GHz 7970 would need. It depends on whether the voltage goes up or not.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Selling a pre-OC'd 7970 (perhaps binned) is not a great solution... it's a stopgap at best. See my post right above yours. Yes, it will bring the 7970 back on par with the 680, but it will also draw more power in the process, at least for the 3GB VRAM version and probably also the 1.5GB VRAM version. And it will have a smaller feature set, too. AMD has got to lower prices or else lose market share.

What smaller feature set? Are you talking about just PhysX? And since when is power consumption so important for high-end cards? The 7970 is not bad when it comes to power consumption; it draws much less power than the GTX 580, so its power consumption is not unreasonably high like the GTX 480's was. Even AMD's dual-GPU card drew less power. Even though the power consumption gap was huge between Fermi and Cypress, Fermi was still considered the better card, which it undoubtedly is. The GTX 680 is just particularly good at that metric. But I agree that at current prices the GTX 680 is the better buy.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
I am not sure how much power a 1.1GHz 7970 would need. It depends on whether the voltage goes up or not.


I'll try and look at my Kill-o-Watt tonight. I can say that I don't notice much difference between 925MHz and 1100MHz as far as fan noise and temps go. I'm sure it'll use a little more power, but things are great until I crank the voltage... then the fan becomes annoying quickly. I haven't stopped at 1200MHz because that's as fast as I can go; I'm at 1200MHz because the card becomes just plain too loud.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I am not sure how much power a 1.1GHz 7970 would need. It depends on whether the voltage goes up or not.

Yeah with binning it is possible to suppress wattage. For instance, look at the GTX 460 FTW edition they got AT to use in their 68x0 review. http://www.anandtech.com/show/3988/the-use-of-evgas-geforce-gtx-460-ftw-in-last-nights-review

But binning costs time/money/effort and even if successful, they are at best on par with the 680 performance. Still not beating it. Still with higher wattage. Still with a smaller feature set, including lack of Adaptive Vsync. And what happens to the price of the non-binned SKUs?

Plus NVDA has a stronger brand name, and many people are either too fearful/lazy/ignorant or whatever to overclock and they will be happier letting GPU Boost take care of the nitty gritty details.

This all adds up to bad news for AMD. I do not see how AMD can hold its market share by merely matching 680's performance and price, but not its feature set or power draw. A $500 7970-1.5GB version isn't good enough. Even $450 is probably not good enough. $425 or less might be good enough, imho. The 3GB variant can probably be priced higher for those nuts who actually want that much VRAM. But most people will read the reviews and conclude that the 680's 2GB VRAM is enough for them.


What smaller feature set? Are you talking about just PhysX? And since when is power consumption so important for high-end cards? The 7970 is not bad when it comes to power consumption; it draws much less power than the GTX 580, so its power consumption is not unreasonably high like the GTX 480's was. Even AMD's dual-GPU card drew less power. Even though the power consumption gap was huge between Fermi and Cypress, Fermi was still considered the better card, which it undoubtedly is. The GTX 680 is just particularly good at that metric. But I agree that at current prices the GTX 680 is the better buy.

A lot of small advantages add up to a significant one. I'm talking about ease of GPU Boosting for people who don't know how to OC or are scared to void their warranty and want the software to hold their hand. Lower power draw. Physx/CUDA. Adaptive Vsync (this is important to me and I bet a lot of others). Probably also lower temps and noise but that depends on which models you are comparing.

Then there is $50 but that is not a feature per se.

I do not believe Fermi was considered better than Cypress by the vast majority of people, though it's possible that a smaller majority or a minority of people did. Many others, though, joked about its power/thermal/watts and lousy price/perf relative to the competition. And back then AMD had single-GPU Eyefinity all to itself.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I'll try and look at my Kill-o-Watt tonight. I can say that I don't notice much difference between 925MHz and 1100MHz as far as fan noise and temps go. I'm sure it'll use a little more power, but things are great until I crank the voltage... then the fan becomes annoying quickly. I haven't stopped at 1200MHz because that's as fast as I can go; I'm at 1200MHz because the card becomes just plain too loud.

Based on the reviews I've seen you will need a bit more than 1.1GHz to match a similarly pushed 680's performance. More like 1.15 or 1.175GHz. If you're just aiming to match stock 680 performance (such as it is, since GPU Boost is still on), then you probably don't even need to hit 1.1GHz, something lower will do.
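For what it's worth, here's roughly where a figure like 1.15GHz comes from if you assume performance scales linearly with core clock. The size of the gap to close is an assumed number, and real scaling is usually sub-linear (memory bandwidth gets in the way), so treat the result as a lower bound:

Code:
# Estimate the HD 7970 core clock needed to close an assumed gap to a pushed
# GTX 680, under the optimistic assumption of linear clock scaling.
stock_clock_ghz = 0.925    # reference HD 7970 core clock
assumed_gap = 0.25         # hypothetical: a pushed 680 ends up ~25% ahead

required_clock = stock_clock_ghz * (1 + assumed_gap)
print(f"Clock needed under linear scaling: {required_clock:.3f} GHz")  # ~1.16 GHz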
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Yeah with binning it is possible to suppress wattage. For instance, look at the GTX 460 FTW edition they got AT to use in their 68x0 review. http://www.anandtech.com/show/3988/the-use-of-evgas-geforce-gtx-460-ftw-in-last-nights-review

But binning costs time/money/effort and even if successful, they are at best on par with the 680 performance. Still not beating it. Still with higher wattage. Still with a smaller feature set, including lack of Adaptive Vsync. And what happens to the price of the non-binned SKUs?

Plus NVDA has a stronger brand name, and many people are either too fearful/lazy/ignorant or whatever to overclock and they will be happier letting GPU Boost take care of the nitty gritty details.

This all adds up to bad news for AMD. I do not see how AMD can hold its market share by merely matching 680's performance and price, but not its feature set or power draw. A $500 7970-1.5GB version isn't good enough. Even $450 is probably not good enough. $425 or less might be good enough, imho. The 3GB variant can probably be priced higher for those nuts who actually want that much VRAM. But most people will read the reviews and conclude that the 680's 2GB VRAM is enough for them.




A lot of small advantages add up to a significant one. I'm talking about ease of GPU Boosting for people who don't know how to OC or are scared to void their warranty and want the software to hold their hand. Lower power draw. Physx/CUDA. Adaptive Vsync (this is important to me and I bet a lot of others).

I do not believe Fermi was considered better than Cypress by the vast majority of people, though it's possible that a smaller majority or a minority of people did. Many others, though, joked about its power/thermal/watts and lousy price/perf relative to the competition.

A 7970 @ 1.1GHz is faster than the 680.
http://www.hardware.fr/articles/857-23/performances-gpu-boost-overclocking.html
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
blastingcap, you are being overly dramatic. Tahiti is still a small chip. The whole card costs maybe $40 more to produce, so their margins are still reasonable given that it's their high-end card. Back in 2008 Nvidia competed against RV770 (323mm²) with GT200 (576mm²/460mm²), which also had a way more complex PCB (448-bit vs 256-bit memory bus), and somehow still made money. That was 323mm² vs 576/460mm², plus a more complex PCB to boot. Now the difference is way smaller. The GTX 680 is more efficient per mm² than Tahiti, and it would be an epic failure of similar magnitude to the GeForce FX if it wasn't. Take into account that the GTX 680 was stripped of almost all compute features: it has only 1/24 DP performance and a static scheduler.
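A rough perf-per-area sketch of that claim. The die sizes below are the commonly quoted figures for GK104 and Tahiti, and the relative performance number is an assumed illustration rather than review data:

Code:
# Rough performance-per-mm² comparison of GK104 (GTX 680) vs Tahiti (HD 7970).
gk104_area_mm2 = 294.0     # commonly quoted GK104 die size
tahiti_area_mm2 = 365.0    # commonly quoted Tahiti die size
assumed_rel_perf = 1.05    # hypothetical: GTX 680 ~5% faster overall

perf_per_mm2_ratio = assumed_rel_perf * (tahiti_area_mm2 / gk104_area_mm2)
print(f"GK104 perf/mm² vs Tahiti: {perf_per_mm2_ratio:.2f}x")   # roughly 1.3x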

I do not believe Fermi was considered better than Cypress by the vast majority of people, though it's possible that a smaller majority or a minority of people did. Many others, though, joked about its power/thermal/watts and lousy price/perf relative to the competition. And back then AMD had single-GPU Eyefinity all to itself.
Maybe I didn't remember that correctly; I considered it a better card than Cypress, and maybe that skewed my memory. I still had Cypress because I bought it a full 5 months before Fermi even came out, and upgrading was pointless back then because at launch the GTX 480 was just 15% faster than Cypress, and that performance advantage came at a huge penalty in power consumption. The difference was much more relevant than it is between the GTX 680 and Tahiti now, because it made SLI GTX 480s very loud, too loud for most people. For single-card systems it was basically a non-issue.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
blastingcap, you are being overly dramatic. Tahiti is still a small chip. The whole card costs maybe $40 more to produce, so their margins are still reasonable given that it's their high-end card. Back in 2008 Nvidia competed against RV770 with GT200 (576mm²/460mm²), which had a way more complex PCB (448-bit vs 256-bit), and somehow still made money. That was 323mm² vs 576/460mm², plus a more complex PCB to boot. Now the difference is way smaller. The GTX 680 is more efficient per mm² than Tahiti, and it would be an epic failure of similar magnitude to the GeForce FX if it wasn't. Take into account that the GTX 680 was stripped of almost all compute features: it has only 1/24 DP performance and a static scheduler.

Where did I say anything about profit or cost? I was talking about market share. I am talking about the consumer's point of view. Whether either company makes money or not is irrelevant to the consumer, at least in the short term. At current prices Tahiti is a hard sell. It needs a major price cut, if not the 3GB VRAM version then definitely the 1.5GB VRAM version. Else many consumers will simply get a GTX 680 instead.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Based on the reviews I've seen you will need a bit more than 1.1GHz to match a similarly pushed 680's performance. More like 1.15 or 1.175GHz. If you're just aiming to match stock 680 performance (such as it is, since GPU Boost is still on), then you probably don't even need to hit 1.1GHz, something lower will do.
That's the point. AMD can release a faster out-of-the-box card in a matter of days or weeks.
 