[HardOCP] GALAXY GTX 660 Ti GC OC vs. OC GTX 670 & HD 7950


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
They have updated their high-AA benches. As expected, the gap vs. the 670 grows to 15-20%.

Not sure if they are serious thinking it's valid to bench a highly overclocked 660 Ti against a "reference"-BIOS 7950 which doesn't even hold 925MHz constantly...

I hate to say it, but AT's coverage of the 7950 is disappointing. I was expecting a full evaluation of the HD7950 vs. the 660 Ti based on new prices and the after-market SKUs available from both sides. The $300 segment is right now starting to hit that price-performance sweet spot a lot of enthusiasts are looking for. The 7950 vs. 660 Ti coverage should have been more detailed since those cards will sell more units than the 670/680/7970 series. Not only did they completely ignore after-market HD7950s (countless new 7950 versions have launched since January 31st), but they also didn't retest the 7950 OC with the latest drivers like HardOCP did, or, for that matter, even test the 7950 B thoroughly. Using a 7950 "B" reference card overvolted to 1.25V, when it's the only card out of almost 20 on Newegg that uses this voltage, is mind-boggling. That's like taking the worst possible, loudest, and hottest 7950 card and using it as a measuring stick for the other 95% of 7950s...

Computerbase found out that if you just move PowerTune to +10% in the CCC, you get the full 925MHz boost almost all the time. Time and time again I have to go to these European websites to find out how videocards actually work. This should have been explored by AT.

And that's beside the fact that some after-market 7950 cards already ship with 950MHz clocks. AT didn't test those against after-market 660 Tis, even though 3GB 660 Ti cards sell for $340.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Now you may say, well, GTX480 and 580 stayed at $500, but remember AMD's cards were $370. If HD8970 is $550, GTX780 will be $650+.

Dammit Russian, don't you read my posts! http://forums.anandtech.com/showpost.php?p=33860889&postcount=73

I keep saying GK110's flagship SKU is going to cost more than what we are used to seeing. I think the first SKU based off GK110 will have 1-2 SMXs disabled and will be $599 (GTX 780). A few months later Nvidia will finally get the uncut GK110 GeForce out, and that will probably be $649-699 (GTX 785), depending on where GTX 780 prices are at.
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
I did not know about Borderlands/Borderlands 2. I don't think it's in AT's benchmark suite, is it?
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
I hate to say it, but AT's coverage of the 7950 is disappointing. I was expecting a full evaluation of the HD7950 vs. the 660 Ti based on new prices and the after-market SKUs available from both sides. The $300 segment is right now starting to hit that price-performance sweet spot a lot of enthusiasts are looking for. The 7950 vs. 660 Ti coverage should have been more detailed since those cards will sell more units than the 670/680/7970 series. Not only did they completely ignore after-market HD7950s (countless new 7950 versions have launched since January 31st), but they also didn't retest the 7950 OC with the latest drivers like HardOCP did, or, for that matter, even test the 7950 B thoroughly. Using a 7950 "B" reference card overvolted to 1.25V, when it's the only card out of almost 20 on Newegg that uses this voltage, is mind-boggling. That's like taking the worst possible, loudest, and hottest 7950 card and using it as a measuring stick for the other 95% of 7950s...

Computerbase found out that if you just move PowerTune to +10% in the CCC, you get the full 925MHz boost almost all the time. Time and time again I have to go to these European websites to find out how videocards actually work. This should have been explored by AT.

And that's beside the fact that some after-market 7950 cards already ship with 950MHz clocks. AT didn't test those against after-market 660 Tis, even though 3GB 660 Ti cards sell for $340.

I would hate to think anything bad about AnandTech, because this is the only place I frequent, but their review of the 660 Ti just seemed so one-sided I couldn't even take it seriously. It's a very, very sad state of affairs when you get outdone by Tom's Hardware. Seriously...
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I did not know about Borderlands/Borderlands 2. I don't think it's in AT's benchmark suite, is it?

No, Borderlands 1 isn't a particularly demanding game. Borderlands 2 will likely perform similarly, except when enabling PhysX. I'm sure PhysX in this game will start another big flame debate, but I personally think it looks like a decent implementation/addition. I have no idea how much of a performance hit it will have on various GeForce cards, though.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
GK110 has been verbally confirmed to be a GeForce product eventually, and will be on the market in the professional sector in Q4. It is only a matter of time before it will be a GeForce product. I do not need to provide links for what is both common sense and common knowledge.
Common sense is not a data point when talking about possible release dates.
It is my personal prediction that it will be released as a very high priced GeForce product in December, but it is probably more likely it will be out Q1 next year.
Which would mean that GK110, or whatever it's called, will be released as a professional product earlier, since it is well established that Nvidia is going to satisfy the professional market first. So we are apparently going to see GK110 very soon. I find this unlikely, which is why I asked for links (with info from Nvidia, not a third-party guess), but there does not seem to be anything out there.
And Sea Islands has been confirmed by AMD to be a 2013 product. This has been linked to over and over again on this forum, and also shown in AT articles.
Very possible, though if Sea Islands is released this year, it will be basically on the last day, as a preview launch.
Prices, I'm not going to speculate on or really care about. If it's the fastest thing around, I'm pretty sure they could even sell it for $900 and people would still buy more than they can produce, given TSMC's well-known capacity.
That could mean they will sell 5 of them.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126

I do. I think you have a very good track record of predicting pricing, performance, and NV's launch strategies. :thumbsup:

Myself, I don't know what to think though. If the GTX780 is 40-50% faster than the 680, but then NV plans to launch Maxwell in 2014 and that's supposed to add another 40-50% on top of the 780? That seems too optimistic, no? It took AMD more than 3.5 years to nearly double the performance of the 5870 (~6950 level) with the 7970 GHz edition. In 2 years, NV increased performance by just 40-50% from the GTX480 (~GTX570). I am having a hard time believing that the GTX780 will add 40-50% on top of the 680 and then Maxwell adds another 50% on top of the 780. We would have this then:

GTX680 (March 2012) --> end of 2014 (2x the performance increase). W000t!
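
To put numbers on my skepticism, here's a quick back-of-the-envelope sketch of how those gains would have to compound. To be clear, the 40-50% per-generation steps are rumored figures from this thread, not confirmed specs:

```python
# Compounding two rumored generational gains on top of the GTX 680.
# The 40-50% per-step figures are speculation from this thread, not specs.
for step in (0.40, 0.45, 0.50):
    total = (1 + step) ** 2   # GTX 680 -> GTX 780 -> Maxwell
    print(f"{step:.0%} per generation -> {total:.2f}x the GTX 680 overall")
# 40% -> 1.96x, 45% -> 2.10x, 50% -> 2.25x by end of 2014.
```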
 
Last edited:
Feb 19, 2009
10,457
10
76
Depends on whether the migration below 28nm is on schedule or not... *not* is probably more realistic. And what's Maxwell going to be on?

On paper at least, GK110 should be >50% faster than GK104. We'll just wait and see.
 

hyrule4927

Senior member
Feb 9, 2012
359
1
76
Computerbase found out that if you just move PowerTune to +10% in the CCC, you get the full 925MHz boost almost all the time. Time and time again I have to go to these European websites to find out how videocards actually work. This should have been explored by AT.

I had been wondering if PowerTune could be used for a more consistent boost. Looks like it takes about 5 seconds to adjust this and ensure a consistent 925MHz boost.

How is it that not a single English hardware site has thought to increase the PowerTune limit in their review?

Also, considering that AMD isn't afraid to give these things 1.25V, I'm very interested to see just how far a solid card would overclock on that voltage.
 
Last edited:

Hypertag

Member
Oct 12, 2011
148
0
0
I do. I think you have a very good track record of predicting pricing, performance, and NV's launch strategies. :thumbsup:

Myself, I don't know what to think though. If the GTX780 is 40-50% faster than the 680, but then NV plans to launch Maxwell in 2014 and that's supposed to add another 40-50% on top of the 780? That seems too optimistic, no? It took AMD more than 3.5 years to nearly double the performance of the 5870 (~6950 level) with the 7970 GHz edition. In 2 years, NV increased performance by just 40-50% from the GTX480 (~GTX570). I am having a hard time believing that the GTX780 will add 40-50% on top of the 680 and then Maxwell adds another 50% on top of the 780. We would have this then:

GTX680 (March 2012) --> end of 2014 (2x the performance increase). W000t!

You are being extremely pessimistic with your GTX 780 performance estimates. At first, you were saying just a "50%" increase from the GTX 680. Now you are saying it will be "40-50%". Look, you could be correct with that estimate if Nvidia is literally incapable of making functional GK110 dies after waiting for the 28nm process to improve for an entire year. This is where I get the "idea" that you are biased toward AMD.

Your last post basically detailed how you are "convinced" that the 8970 will be "basically the same" as GK110, when we know nothing about it and at least have a white paper on GK110. You assumed many things, such as GK110 being crippled, GK110 being just as "bad" at 2560x1440, and the 8970 being more efficient.

I think we will see at least a 55-60% increase from GK110 over GK114. If they can get all 15 units, then I expect at least 60-70% more. Basically, I do not see any mathematical way you can look at the GK110 specifications and conclude that it will be "40%" faster. It will have 87.5% more shading potential (hindered slightly by clock speeds), and it will have 50% more memory bandwidth (yeah, 448-bit would have been better). Those specifications do not translate to "40%" better than GK114. As far as the clock speed limits go, note that you can save a lot of power by reining in turbo boost a bit. These GK114 cards are using 150-175 watts at 1.15GHz or so. Plenty of room to trim it down slightly without getting into GF100 mobile territory (~400MHz?).
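
For what it's worth, here is where my percentages come from. The shader counts and bus widths are from the white paper and public GK104 specs; both clocks are placeholder assumptions on my part, since GeForce GK110 clocks haven't been announced:

```python
# Raw spec ratios behind the 87.5% shading / 50% bandwidth figures above.
# Shader counts and bus widths are public; both clocks are my assumptions.
gk104_shaders, gk110_shaders = 1536, 2880   # 8 SMX vs 15 SMX, 192 cores each
gk104_bus, gk110_bus = 256, 384             # bits, same memory clock assumed

print(f"shading potential: +{gk110_shaders / gk104_shaders - 1:.1%}")  # +87.5%
print(f"memory bandwidth:  +{gk110_bus / gk104_bus - 1:.1%}")          # +50.0%

# Clocks hinder that somewhat; e.g. against a GTX 680 boosting to ~1100MHz:
for gk110_clock in (900, 1000):
    ratio = (gk110_shaders * gk110_clock) / (gk104_shaders * 1100)
    print(f"GK110 @ {gk110_clock}MHz: {ratio - 1:+.0%} raw shader throughput")  # +53% / +70%
```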
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I do. I think you have a very good track record of predicting pricing, performance, and NV's launch strategies. :thumbsup:

Myself, I don't know what to think though. If the GTX780 is 40-50% faster than the 680, but then NV plans to launch Maxwell in 2014 and that's supposed to add another 40-50% on top of the 780? That seems too optimistic, no? It took AMD more than 3.5 years to nearly double the performance of the 5870 (~6950 level) with the 7970 GHz edition. In 2 years, NV increased performance by just 40-50% from the GTX480 (~GTX570). I am having a hard time believing that the GTX780 will add 40-50% on top of the 680 and then Maxwell adds another 50% on top of the 780. We would have this then:

GTX680 (March 2012) --> end of 2014 (2x the performance increase). W000t!

Hahaha, no, I am not always very accurate. I think Maxwell's compute monster won't hit until the end of 2014 or early 2015. If 20nm is widely available before then, we'll get Kepler shrinks. Also, I would imagine that Maxwell's GM104 will come out ahead of GM100/GM110.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
A couple thoughts of mine:

1) If the die is so massive, and NV is able to sell the GTX680 for $500, why would they sell a GPU 2x larger for $500 if it has so much more performance? Unless the 8970 gives them strong competition, NV has no incentive to price the 780 at $499. If it's 50%+ faster than GK104, has almost double the die size, and has the performance lead, NV can crank it to $649+. They did it before with the GTX280 and even more so with the 8800GTX. The only reason NV dropped GK104 at $500 is because it's not really any faster than AMD's card, so they couldn't realistically charge much more. If NV has a substantially faster chip, they'll add the NV premium in no time -- they did in the past every single time they had a large lead. Now you may say, well, GTX480 and 580 stayed at $500, but remember AMD's cards were $370. If HD8970 is $550, GTX780 will be $650+.

2) At 50% faster than GK104, using 1080P as a yardstick starts to become questionable. At that level of performance, 1080P is no longer stressing the GPUs in most games (i.e., not every game will be like Crysis 3 or Metro Last Light).

At 2560x1600, HD7970 = GTX680 (TPU, Computerbase), HD7970 Ghz leads by 9% to 12%, depending on the source.

If the 8970 has 2560 SPs, 48 ROPs, and even higher clocks, a 25-30% gain is possible. Suddenly, you are looking at 36% to 45% faster than a GTX680 at 1600P. The GTX780 will probably win overall, but at higher resolutions it'll be close, I bet. And honestly, the market for people who want to spend $500+ on yet another new-generation GPU to play console-ported games at 1080P is shrinking every generation. The GTX670/680 should be fast enough for most games at 1080P; why even bother upgrading unless you must use SSAA or need 60 fps minimum at all times? I think the 8970 and 780 will be aimed strictly at 2560x1600 users. Unless games start to get a lot more demanding soon, 1080P testing for the new flagship generation may become even less relevant.

Skyrim already shows us that 1080P is almost entirely CPU-limited for current-generation GPUs, while at 1600P the picture is entirely different. Right now a 925MHz 7970 = 680 at high resolutions. That means something like an 1100MHz, 2560 SP, 48 ROP 8970 may have a chance.

They'll start testing with some feature cranked that tanks performance. They can just use more samples for AO and/or GI or increase reflection depth and drive the requirements up enough to justify a new card. It's all they've been doing with these console ports for years now and it's worked just fine.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I hate to say it, but AT's coverage of the 7950 is disappointing. I was expecting a full evaluation of the HD7950 vs. the 660 Ti based on new prices and the after-market SKUs available from both sides. The $300 segment is right now starting to hit that price-performance sweet spot a lot of enthusiasts are looking for. The 7950 vs. 660 Ti coverage should have been more detailed since those cards will sell more units than the 670/680/7970 series. Not only did they completely ignore after-market HD7950s (countless new 7950 versions have launched since January 31st), but they also didn't retest the 7950 OC with the latest drivers like HardOCP did, or, for that matter, even test the 7950 B thoroughly. Using a 7950 "B" reference card overvolted to 1.25V, when it's the only card out of almost 20 on Newegg that uses this voltage, is mind-boggling. That's like taking the worst possible, loudest, and hottest 7950 card and using it as a measuring stick for the other 95% of 7950s...

Computerbase found out that if you just move PowerTune to +10% in the CCC, you get the full 925MHz boost almost all the time. Time and time again I have to go to these European websites to find out how videocards actually work. This should have been explored by AT.

And that's beside the fact that some after-market 7950 cards already ship with 950MHz clocks. AT didn't test those against after-market 660 Tis, even though 3GB 660 Ti cards sell for $340.

They aren't the only ones who took the easy way out of reviewing the 7950B. Look at the card Hardware Canucks reviewed; they did a real hatchet job.

There is no worse design for O/C'ing and O/Volting.

I understand why they tested this card the way they did. They had to admit they were wrong and overreacted prior to the 7950B launch with SKYMTL's tirade. So, they went and got a "reference" 7950 at retail and tested it. (How often do reviewers buy their own review samples? :sneaky: ) There was no motivation for them to try and show it in a positive light after they were embarrassed because of it.

It's just not AT's SOP to maintain reviews of current performance. They do release testing, and that's generally about it. No sites do what we really want them to. [H] only did their review at the request of their members. (Which was really good of them, and they deserve kudos for it.) It still isn't anything exhaustive, though. It's just an O/C'd review of the requested cards.

I blame the vendors for this. They need to request that reviewers test certain aspects of their products, or suffer the consequences when their stuff doesn't get to show its true potential.
 

Toonces

Golden Member
Feb 5, 2000
1,690
0
76
So, they went and got a "reference" 7950 at retail and tested it. (How often do reviewers buy their own review samples? :sneaky: )

Hardware Canucks was founded by the Canadian e-tailer NCIX and had a very comfortable relationship with them. Not sure how close they are these days, but it could very well be that SKYMTL simply asked for a card. :\
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Hardware Canucks was founded by the Canadian e-tailer NCIX and had a very comfortable relationship with them. Not sure how close they are these days, but it could very well be that SKYMTL simply asked for a card. :\

Is this the way HC usually gets their review samples?
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
They aren't the only ones who took the easy way out of reviewing the 7950B. Look at the card Hardware Canucks reviewed; they did a real hatchet job.

There is no worse design for O/C'ing and O/Volting.

I understand why they tested this card the way they did. They had to admit they were wrong and overreacted prior to the 7950B launch with SKYMTL's tirade. So, they went and got a "reference" 7950 at retail and tested it. (How often do reviewers buy their own review samples? :sneaky: ) There was no motivation for them to try and show it in a positive light after they were embarrassed because of it.

It's just not AT's SOP to maintain reviews of current performance. They do release testing, and that's generally about it. No sites do what we really want them to. [H] only did their review at the request of their members. (Which was really good of them, and they deserve kudos for it.) It still isn't anything exhaustive, though. It's just an O/C'd review of the requested cards.

I blame the vendors for this. They need to request that reviewers test certain aspects of their products, or suffer the consequences when their stuff doesn't get to show its true potential.

The funny thing is that both consumers and reviewers will say they are not affected by marketing/"service", and that rationality and reason prevail. At the same time, AMD and especially NV are spending tons of $ and talent on that same marketing. I am pretty sure they do it for a reason, because they can easily track the results.

I have to say I am pretty tired of non-OC vs. OC results, results at resolutions the cards are not intended for (and not actually bought for), and comparisons of cards in different price classes.

The marketing is blurring the picture in all those reviews. I don't know if I am just an old man, but wasn't this better when we were young?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You are being extremely pessimistic with your GTX 780 performance estimates. At first, you were saying just a "50%" increase from the GTX 680. Now you are saying it will be "40-50%". Look, you could be correct with that estimate if Nvidia is literally incapable of making functional GK110 dies after waiting for the 28nm process to improve for an entire year. This is where I get the "idea" that you are biased toward AMD.

I am not saying that the 8970 will perform the same as GK110, but it could be closer than you think. I can see GK110 winning at 1080P, but it's less clear how well it will do at high resolutions. For 2 generations now (480/580/680), NV has been struggling at 2560x1600, and especially with triple monitors. Right now the 7970 GE is 9-12% faster at 1600P. That means even if GK110 is 50-60% faster than the GTX680, if the 8970 is 25-30% faster than the 7970 GE, you end up with:

2560x1600 (probably a reasonable resolution to test GK110 at, to remove CPU limits):
GTX680 (100%) ---> HD7970 GE (109-112%) ---> HD8970 (136-146%) ---> GK110 (150-160%). That could be close, depending on clock speeds and whether NV will actually release a full-fledged GK110 for mass production.
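
Here is that chain worked out explicitly. Only the 7970 GE deltas are measured (TPU, Computerbase); the 8970 and GK110 gains are my speculated ranges from above:

```python
# GTX 680 = 100% at 2560x1600. Only the 7970 GE deltas are measured;
# the 8970 and GK110 gains are speculated ranges, not benchmarks.
hd7970ge_lo, hd7970ge_hi = 1.09, 1.12   # 9-12% over the GTX 680 at 1600P
hd8970_lo = hd7970ge_lo * 1.25          # +25% over the 7970 GE
hd8970_hi = hd7970ge_hi * 1.30          # +30% over the 7970 GE
gk110_lo, gk110_hi = 1.50, 1.60         # 50-60% over the GTX 680

print(f"HD8970: {hd8970_lo:.0%}-{hd8970_hi:.0%} of a GTX 680")  # ~136%-146%
print(f"GK110:  {gk110_lo:.0%}-{gk110_hi:.0%} of a GTX 680")    # 150%-160%
```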

But do you really think NV will launch a 16 SMX / 3072 SP GK110 (or was it 15 SMX = 2880 SPs?) as an 1150MHz GPU Boost card? I don't have the data, but I somehow doubt that the GK110 in the K20 is running at the same clocks as the GTX680.

I am not AMD-biased. I just think it's very optimistic to believe that the GTX780 will be 50-70% faster and that Maxwell, 15 months later, will then add another 50% on top. I would love for that to happen, but I am staying more cautious this time. Personally, I am just not buying into the hype anymore. After the 7970/680 generation, I am not going to fall for the hype and easily believe we'll see more than double the increase in GPU speed over the next 24 months, after NV gave us the 580 (just 15% faster than the 480) and then the 680 (just 30-35% faster than the 580). So it took NV 2 years to increase performance by 50%, but suddenly they'll increase it by more than 2x in the next 2 years? I am trying to be realistic here. If NV delivers that, great! Just to understand your projection then: you are expecting the GTX780 to be 20-30% faster than the 8970 (because you are saying it'll be 55-60% faster than a GTX680)? So you think NV is about to unleash another G80 and leave AMD in the dust?

Every time I say something that doesn't put NV in the best light possible, I get told I am AMD-biased. I switch sides all the time depending on price/performance and make my recommendations appropriately. Here are some samples of my messages with other members on our forum going way back.






I've recommended so many NV cards over the years, I can't even keep track.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Depends on whether the migration below 28nm is on schedule or not... *not* is probably more realistic. And what's Maxwell going to be on?

On paper at least, GK110 should be >50% faster than GK104. We'll just wait and see.

Keep in mind power consumption. While the specs on paper look like that, they might not be able to clock it at 1GHz+ and keep power usage acceptable. Time will tell, but I think that's going to be a limitation.
 

Hypertag

Member
Oct 12, 2011
148
0
0
I am not AMD-biased. I just think it's very optimistic to believe that the GTX780 will be 50-70% faster and that Maxwell, 15 months later, will then add another 50% on top. I would love for that to happen, but I am staying more cautious this time. Personally, I am just not buying into the hype anymore. After the 7970/680 generation, I am not going to fall for the hype and easily believe we'll see more than double the increase in GPU speed over the next 24 months, after NV gave us the 580 (just 15% faster than the 480) and then the 680 (just 30-35% faster than the 580). So it took NV 2 years to increase performance by 50%, but suddenly they'll increase it by more than 2x in the next 2 years? I am trying to be realistic here. If NV delivers that, great! Just to understand your projection then: you are expecting the GTX780 to be 20-30% faster than the 8970 (because you are saying it'll be 55-60% faster than a GTX680)? So you think NV is about to unleash another G80 and leave AMD in the dust?

I don't think Maxwell falls into the calculation at all. The limiting factors for increasing performance beyond GK110 are how good the 20nm process is and how expensive it is. I am not optimistic about sub-28nm processes at all at this point. Nvidia might repeat the 2012 strategy in 2014 with Maxwell, since it has proven to be wildly successful. In any event, it would be fairly easy to get 50% more performance than GK110 with 20nm and another mega-die. I just doubt that it could be produced profitably on 20nm in 2014.

As far as the bias accusation, I think that future product speculation shows where things stand. All we have is a white paper to go off of, and basic speculation. Kepler has proven itself to be wildly, wildly more efficient compared to GCN, and things should scale up. AMD isn't going to make a 575mm^2 die. I see this as a really efficient architecture scaled massively upward versus an architecture that is less efficient on a smaller die. Knowing that, you speculated that AMD's next offering would "be close I bet". Defining "close" is a bit tough. I could argue that the GTX 670 is "close" to the GTX 680's performance. I could argue that the GTX 460 is "close" to the GTX 580's performance. I guess you can spin "close" to be nearly anything. I just don't see the evidence for a 380mm^2 die on an inefficient architecture beating a 575mm^2 die on a more efficient architecture. (Yes, GK110 will have 15 times as much space occupied by double-precision hardware, so there is a slight loss from that when comparing strictly gaming performance.)

Edit: Remember, Fermi was wildly less efficient compared to VLIW5/4, and it won using its die size advantage.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Kepler has proven itself to be wildly, wildly more efficient compared to GCN, and things should scale up. AMD isn't going to make a 575mm^2 die. I see this as a really efficient architecture scaled massively upward versus an architecture that is less efficient on a smaller die.

You are forgetting that GCN has both full GPGPU compute parts with double-precision performance and crippled GCN parts such as Pitcairn. Pitcairn, while still based on GCN, is a pure gaming chip, much like the crippled Kepler GK104 is. OK, now compare those two pure gaming chips, and Pitcairn is actually more efficient in performance/watt than any of the 660Ti/670/680 parts.

So it's not that the Kepler architecture is more efficient than GCN; it's just that NV made a gaming card in the 660Ti/670/680 series (good strategy), while AMD made a general-purpose GPU, like Fermi, that also happens to be fast for games. Can Kepler do this against an i7-3960X? I don't think so.



You know a bunch of people here run distributed GPU computing projects such as Folding@home? You know how much faster AMD cards are in distributed computing projects with double-precision workloads: like 10-20x faster than NV cards at F@H. The minute you leave games, Kepler is toast. It's a step back from GF100 Fermi in many ways. It's no wonder it looks so efficient for games. There is no secret sauce in Kepler, though. All it is is Fermi with gutted GPGPU functionality and half the shader clocks. To compensate for the loss in shader clock speed, NV rebalanced the chip by adding more TMUs and shaders. The GTX680 is basically a 2nd-generation Fermi card made specifically for games. If AMD crippled the HD7970 the same way, you'd have a 2048 SP Pitcairn-style gaming chip, and the GTX680's performance/watt wouldn't look so great. Actually, the HD8870 is probably going to be that card. It's going to show us how fast GCN is without all the extra fat.

Double precision and a dynamic scheduler for compute aren't free. They cost transistor space and thus power consumption. Add real compute and double-precision functionality to GK110 and you'll see its performance/watt come under pressure. They might manage to keep it flat just because of how much faster the GTX780 is rumoured to be, but I expect that card to have a 250W TDP. It's not going to be "free" from a power consumption point of view. The HD7970 and GTX480/580 all paid a penalty for having fully functional GPGPU architectures. I don't see how GK110 can escape that either.
 
Last edited:

Hypertag

Member
Oct 12, 2011
148
0
0
Put a number on "wildly".


Well, if we accept that the 7970 GHz edition is 10% faster than GK114 (which I don't even agree with), then that means Tahiti is 10% "better" than GK114. However, that doesn't get into efficiency. Tahiti at 1.05GHz consumes significantly more than 110% of GK114's power. It achieves a 10% "victory" while using significantly more than 10% more transistors. It manages a 10% "victory" while having an absolutely gargantuan 50% memory bandwidth advantage. So I don't see anything "efficient" about its "10%" "victory". It is just using a die size advantage to grind out a narrow win, like Fermi did. And that is a die size advantage it won't even dream of having versus GK110.
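
To put rough numbers on that, normalize the claimed 10% win against the resources spent. The die sizes below are the commonly cited figures, and the 10% performance delta is the premise I'm disputing, not my own measurement:

```python
# Normalizing the claimed 10% "victory" against the resources spent.
# Die sizes are commonly cited figures; the perf ratio is the premise above.
tahiti_perf, gtx680_perf = 1.10, 1.00   # 7970 GHz vs GTX 680 (claimed)
tahiti_die, gtx680_die = 365, 294       # mm^2
tahiti_bus, gtx680_bus = 384, 256       # memory bus width in bits

print(f"perf per mm^2:    {tahiti_perf / tahiti_die:.4f} vs {gtx680_perf / gtx680_die:.4f}")
print(f"perf per bus bit: {tahiti_perf / tahiti_bus:.4f} vs {gtx680_perf / gtx680_bus:.4f}")
# Tahiti's 10% win costs roughly 24% more die area and 50% more bus width.
```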

Edit: To give some type of response to the last post: obviously GK110 will be pushing 250-300 watts. I am assuming Nvidia learned from the GTX 480 to put stock coolers on power-hungry cards that don't sound like vacuum cleaners. I don't mind if it is power-hungry, as long as it delivers performance. If it's 40% better at 300 watts, then it is crap. If it's 60% better at 275 watts, then it's pretty good. It could even be better than that.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Keep in mind power consumption. While the specs on paper look like that, they might not be able to clock it at 1GHz+ and keep power usage acceptable. Time will tell, but I think that's going to be a limitation.

You are absolutely right: TDP will be the deciding factor in what Nvidia can squeeze out of GK110. If they don't mind going with a 265-275 watt TDP, they will have an absolute performance monster on their hands.

GTX580: 512 shaders @ 770MHz = 394,240 shader-MHz
GTX560 Ti: 384 shaders @ 820MHz = 314,880 shader-MHz

If I use Crysis Warhead @ 1920x1200 as a reference point (this graph), the GTX580 is 40% faster. I know it wasn't always 40% faster, and in many cases it was more than 40% faster (due to the GTX560 Ti's 1GB of VRAM), but in shader-bound situations where VRAM isn't an issue, 35-40% was pretty typical. The GTX580 was only pushing 25% more shader power based on its cores multiplied by clock frequency. Therefore, the GTX580's cores were actually operating at a higher output efficiency per shader than the GTX560 Ti's (in part due to higher memory bandwidth and more ROPs, no doubt).

Without taking into account the increased ROP performance that will come with a 384-bit bus, and the headroom freed up by the higher memory bandwidth, if GK110 can keep its output efficiency 1:1 with GK104, then a 13 SMX GK110 (2,496 shaders) can operate at 900MHz and be 40% faster in shader-bound situations. A fully unlocked GK110 at 900MHz would be 60% faster.

EDIT: If anything, my estimates are conservative, since they do not take into account the additional ROPs.
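
Here is the same cores-times-clock arithmetic in one place. The GTX 580/560 Ti numbers are actual specs; the 900MHz GK110 clock and the ~1058MHz GTX 680 boost clock are my working assumptions:

```python
# Shader throughput = cores x clock, the yardstick used above.
# GTX 580 / 560 Ti figures are real specs; both Kepler clocks are assumptions.
def shader_throughput(cores, mhz):
    return cores * mhz

gtx580 = shader_throughput(512, 770)     # 394,240 shader-MHz
gtx560ti = shader_throughput(384, 820)   # 314,880 shader-MHz
print(f"GTX 580 vs 560 Ti: +{gtx580 / gtx560ti - 1:.0%} shader power")  # +25%

gtx680 = shader_throughput(1536, 1058)   # assuming a typical GTX 680 boost clock
for smx in (13, 15):                     # cut-down vs fully unlocked GK110
    gk110 = shader_throughput(smx * 192, 900)
    print(f"{smx} SMX GK110 @ 900MHz: +{gk110 / gtx680 - 1:.0%} vs GTX 680")
# Roughly the 40% / 60% figures above, before the ROP/bandwidth gains.
```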
 
Last edited: