Kitguru : Nvidia to release three GeForce GTX 800 graphics cards this October

Page 15 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
No, what you're doing is taking random numbers and making a comparison.
Between the GTX480 and the 290X there is 2x the performance and a 3¾-year timeframe between them.

Between a GTX880 and a 7970 there will be 3 years and roughly a 2x performance gap.

Using a 5870 doesn't make sense because the architecture was already outdated in 2010. A GTX480 is 60% faster in BF4 than the 5870...

Way to divert the attention towards the 5870 and miss all the other combinations of GPU upgrades. The upgrade paths below are something a PC gamer could easily have chosen, but you ignored them.

Upgrade path #1: GTX480 (March 2010) --> GTX780 (May 2013) = roughly 3 years, performance increase of 2.1x
http://www.computerbase.de/2013-05/nvidia-geforce-gtx-780-test/3/

Upgrade path #2: GTX580 (Nov 2010) --> GTX780 Ti (Nov 2013) = 3 years, performance increase of 2x
Upgrade Path #3: HD6970 (Dec 2010) --> R9 290X (Oct 2013) = less than 3 years, performance increase of 2.3x
Upgrade Path #4: HD6970 (Dec 2010) --> 7970 (Dec 2011) = In barely more than 1 year, performance increase of > 70%!
http://www.computerbase.de/2013-12/grafikkarten-2013-vergleich/10/

If you don't like the 5870 comparison, pick any other generation; not every gamer purchased an HD5870, which was the point of my post. If you don't like the 5870 --> 7970Ghz comparison, then use GTX480 to 780, or 580 to 780Ti, or 580 to R9 290X. It makes no difference; they all convey the same message: a PC gamer who bought some flagship card in the last 3-4 years and upgraded again within the next 3 years would have received a 2-2.3x performance increase. The 880 looks set to fall way short of that compared to the 680 it replaces.
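For what it's worth, the arithmetic behind these upgrade paths is easy to check; here's a quick sketch (the multiples and timespans are the ones quoted above, and the annualized rate is just the n-th root of the total multiple):

```python
# Performance multiple and implied annualized gain for each upgrade path.
# Multiples and timespans are taken from the post above.
paths = {
    "GTX480 -> GTX780":    (2.1, 38 / 12),  # Mar 2010 -> May 2013
    "GTX580 -> GTX780 Ti": (2.0, 36 / 12),  # Nov 2010 -> Nov 2013
    "HD6970 -> R9 290X":   (2.3, 34 / 12),  # Dec 2010 -> Oct 2013
    "HD6970 -> 7970":      (1.7, 12 / 12),  # Dec 2010 -> Dec 2011
}

def annualized(multiple: float, years: float) -> float:
    """Yearly growth rate implied by a total multiple over `years`."""
    return multiple ** (1 / years) - 1

for name, (mult, years) in paths.items():
    print(f"{name:20s} {mult:.1f}x in {years:.1f}y "
          f"= {annualized(mult, years):.0%}/year")
```

The three ~3-year paths all land around 26-34% per year, which is the baseline the 880 is being measured against here; the one-year 6970 --> 7970 jump is the outlier at 70%.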

And btw, there is no evidence to support your claim that the 880 will double the performance of a 7970. The 7970 is faster than the R9 280, which means that to double a 7970, the 880 would need to be faster than an R9 295X2. And since most of us overclock, it gets even worse for the 880: almost all 7970s could hit 1050MHz (7970Ghz speeds), so for a GTX880 OC to double a 7970 OC, the 880 OC would need to be 20% faster than a 295X2! Doubtful.

http://www.computerbase.de/2014-08/grafikkarten-2014-im-vergleich-sommerausgabe/2/

For all intents and purposes there isn't even a point in discussing the 7970 and 7970Ghz as separate cards, since most 7970 owners on our forum have overclocked well beyond 7970Ghz speeds. Using the R9 280X/770 as a baseline is the bare minimum for users such as myself looking to upgrade. If you don't like ComputerBase, other sites corroborate how implausible your claim is: an 880 OC would need to be nearly as fast as 780 SLI / Titan Z to double the performance for a 7970 OC owner, or 37% faster than a 780Ti:
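The doubling requirement is easy to sanity-check. A minimal sketch, with illustrative relative-performance indices (placeholders, not ComputerBase's figures):

```python
# If an "880 OC" must deliver 2x a 7970 OC, how does that compare against
# the fastest existing cards? Index numbers are illustrative assumptions.
perf = {                      # relative index, 7970 OC (~GHz Ed.) = 1.00
    "7970 OC":  1.00,
    "780 Ti":   1.45,         # assumed
    "R9 295X2": 1.70,         # assumed
}
target = 2 * perf["7970 OC"]              # the doubling requirement
vs_295x2 = target / perf["R9 295X2"] - 1
vs_780ti = target / perf["780 Ti"] - 1
print(f"vs 295X2: {vs_295x2:+.0%}")       # ~+18% with these inputs
print(f"vs 780Ti: {vs_780ti:+.0%}")       # ~+38% with these inputs
```

With these assumed indices the target lands in the same ballpark as the post's "20% faster than a 295X2 / 37% faster than a 780Ti" framing.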




I would have believed a 35-37% increase if the 880 were a 2560 CUDA core chip, but recent leaks indicate it could be a 1920 CUDA core chip instead.

Even when NV updated GF114 (560Ti) to GK104 (680), the performance increase was 90%. Is the 880 going to be 90% faster than the 680? You're saying yes, since you're suggesting it will double the 7970.

Nobody says they can't go 20nm. The problem is they can't go 20nm and get better price/performance per transistor.

Soon, moving to 20nm will not cost much more per transistor than 28nm. And since Maxwell is much more energy efficient, NV doesn't need to make a 550mm2 chip to outperform GK110 by 30-40%. 20nm will make it viable for NV to make a smaller chip that easily outperforms the 780Ti while using less power. The question is whether they are willing to take a hit on gross margins in 2014. If not, such a chip is likely to show up in 2015, not 2014.

When they estimate that there is no benefit to moving to lower nodes, they are comparing a relatively mature 28nm generation to the beginning of a new generation, which is of course cutting edge and carries higher costs. But if they wait long enough, 20nm yields pick up and wafer prices drop. Eventually it won't be that expensive to move from mature 28nm prices to "middle of the life-cycle" 20nm prices. It just means NV/AMD will have to wait longer than before to transition to lower nodes.
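The lifecycle argument can be made concrete with a toy cost model. Every number below (wafer prices, die sizes, defect densities) is an illustrative assumption, not TSMC pricing or real yield data:

```python
import math

# Toy cost model for the node-lifecycle argument above. All inputs are
# illustrative assumptions, not foundry data.
def cost_per_good_die(wafer_price, wafer_area_mm2, die_mm2, defects_per_mm2):
    dies = wafer_area_mm2 // die_mm2                   # ignore edge loss
    yield_frac = math.exp(-die_mm2 * defects_per_mm2)  # simple Poisson yield
    return wafer_price / (dies * yield_frac)

wafer = math.pi * 150 ** 2   # usable area of a 300mm wafer, mm^2

# Same logic chip, roughly half the area after a full shrink (assumed)
mature_28nm = cost_per_good_die(5000, wafer, 400, 0.001)   # cheap wafers, high yield
early_20nm  = cost_per_good_die(8000, wafer, 220, 0.004)   # pricey wafers, low yield
late_20nm   = cost_per_good_die(6000, wafer, 220, 0.0015)  # after the ramp

print(f"28nm, mature : ${mature_28nm:.0f} per good die")
print(f"20nm, early  : ${early_20nm:.0f} per good die")
print(f"20nm, late   : ${late_20nm:.0f} per good die")
```

With these made-up inputs, the early 20nm chip costs more per good die than the mature 28nm one despite being nearly half the size, and only wins after the ramp, which is exactly the waiting game described above.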

In fact, NV continues to show increases in gross margins, with last quarter's a whopping 56%. So it sounds to me like NV just doesn't want to go back to 40-48% margins, which is why they are not embracing 20nm so early. NV could probably launch a 20nm GM200 by July 2015, I imagine at $699, but it could mean a reduction in margins from 56% to 50%, still very good but hard to justify to their investors. After selling the mid-range 680 for $500, establishing a new $700 flagship price with the 780Ti, and moving into the semi-professional space with the Titan series, NV is going to be very reluctant to go back to old GPU pricing. Of course, it's not feasible to launch a 550mm2 20nm GM200 this year at any reasonable price in the consumer space. Maybe they will send some samples out first, like the GK110 orders for Oak Ridge that shipped well before GK110 reached the consumer space as the $1K Titan.

----

Fudzilla has an article that yet again contradicts the 28nm rumors. They are now saying the 800 series will launch on the 20nm node, and that NV will even have 800-series mobile parts ready before the end of the holiday season, also on 20nm.

Now we have all these possibilities:

300mm2 20nm
300mm2 28nm
430mm2 20nm
430mm2 28nm
GTX880 < 780Ti
GTX880 > 780Ti

With the launch so close, the leaks are all over the place. :hmm:
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
With the launch so close, the leaks are all over the place. :hmm:

Similar to GK104's launch. No one thought GK104 could compete with Tahiti, performance expectations were all over the place, and potential pricing models were too. It wasn't until ~1 week before launch that legitimate benchmarks were leaked.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Similar to GK104's launch. No one thought GK104 could compete with Tahiti, performance expectations were all over the place, and potential pricing models were too. It wasn't until ~1 week before launch that legitimate benchmarks were leaked.

There was a leak from ChipHell, lost among hundreds of others, that had the correct CUDA core counts for the 680 and 780 (1536 and 2304). Maybe they got lucky.

Anyway, here are Xbitlabs' 3DMark scores. Based on preliminary 870 scores, R9 290X and 780Ti are faster.



780Ghz edition is also in the 5100 points range


The 870 and 880 would beat the reference 780/780Ti at lower prices. This would allow NV to claim the 880 as the fastest card, justifying the 880 naming over the 780Ti. At $400-450, these cards would be a good value against the $450-500 780 and $600-650 780Ti. If they overclock by 20-30% like the 750Ti, they would sell like hot cakes before GM200 launches in 2015.

If the 870 comes in at 165-175W and is slightly faster than the 780/R9 290, it would instantly obsolete both of those cards at $399, since no one will want to buy a card that uses 55-80W more power for similar performance. The 880 only needs to beat the 780Ti by 5-10% at 185-190W, and at $499-549 it would also obsolete that card. I don't think these prices and performance increases are stellar, but they are enough to make the 870/880 better than the 290/290X/780/780Ti, which is all NV really needs to do to make them attractive.
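Framed as performance per watt, using the paragraph's rough figures (the index normalization is mine, not measured data):

```python
# Perf/W comparison from the paragraph's rough numbers: an 870 ~5% faster
# than an R9 290 at ~170W, vs. the 290 at ~250W (the post's power range).
cards = {                    # name: (relative perf, board power in watts)
    "R9 290":  (1.00, 250),
    "GTX 870": (1.05, 170),
}
base = cards["R9 290"][0] / cards["R9 290"][1]
for name, (perf, watts) in cards.items():
    print(f"{name}: {(perf / watts) / base:.2f}x perf/W vs R9 290")
```

With these assumed inputs the rumored 870 would deliver roughly 1.5x the perf/W of an R9 290, which is the "instantly obsolete" argument in numbers.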

 
Last edited:

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
Is 3DMark 2013 the same as 3DMark 2011?

Edit - I guess the 11 means DX11, how many different versions are out there?
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
There was a leak from ChipHell, lost among hundreds of others, that had the correct CUDA core counts for the 680 and 780 (1536 and 2304). Maybe they got lucky.

The core count leaks were nailed way before launch, but reliable benchmarks didn't surface until a week or so before the GTX680 launched. There was Nvidia's vague "2012 is going to be a great year" tweet, and there was Charlie's "Nvidia wins this round" article, but there were no concrete benchmarks until just before launch, and many people didn't think GK104, being considerably smaller than Tahiti, would be able to compete like it did.

Anyways, the core count from the Coolaler leaks is funny. Current logic dictates that GM204 will be 20 SMXs (previously, both GK104 and GF114 had 4x the cores of their Gx07 counterparts), making it 2560 cores. The 870 GPU-Z shot shows WAY fewer. At the same time, Charlie (for better or worse for bringing this up) said GM204 would have four SKUs. Could GM204 power the GTX 880 Ti, 880, 870, and 860 cards? Or is GPU-Z reading the card wrong? Or is the conventional wisdom of GM204 having 20 SMXs wrong?
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Is 3DMark 2013 the same as 3DMark 2011?

Edit - I guess the 11 means DX11, how many different versions are out there?

3DMark 99
3DMark 2000
3DMark 2001
3DMark 03
3DMark 05
3DMark 06
3DMark Vantage
3DMark 11
3DMark 2013, confusingly just called 3DMark
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
So I'm assuming 3DMark (2013) scores cannot be compared to 3DMark 11 scores?

And then comes the confusing part where you aren't sure if you should compare just the graphics score, the combined score, etc...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Anyways, the core count from the Coolaler leaks is funny. Current logic dictates that GM204 will be 20 SMXs (previously, both GK104 and GF114 had 4x the cores of their Gx07 counterparts), making it 2560 cores. The 870 GPU-Z shot shows WAY fewer. At the same time, Charlie (for better or worse for bringing this up) said GM204 would have four SKUs. Could GM204 power the GTX 880 Ti, 880, 870, and 860 cards? Or is GPU-Z reading the card wrong? Or is the conventional wisdom of GM204 having 20 SMXs wrong?

NV is going to up the core count last minute to give us 30% more performance just like they did the magical GTX670/680 700mhz pre-leaked clocks --> 1058mhz clock speed a week before launch :biggrin:
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
It's not faster than a 780, just tested and I get a higher score despite having an older CPU:

http://www.3dmark.com/3dm11/8611022

That's at stock speeds.

Yeah the performance is obviously not going to be mind-blowing. I think we really will see an 8800GTX-to-9800GTX situation, with the 880 a little faster than a 780Ti at 1080p but slower at 2560x1600/1440 and 4K.

These cards will be marketed on price and performance against a $700 780Ti or $1000 Titan Black, is my guess. Smaller die, so they can be sold cheaper.

Perhaps we'll see an 880Ti down the road with a different chip behind it that solidly outperforms the 780Ti, or perhaps a Titan 2. I think a Titan 2 is far more likely; $1000+ initial flagships are likely the new reality for the real beastly chips from nvidia.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
NV is going to up the core count last minute to give us 30% more performance just like they did the magical GTX670/680 700mhz pre-leaked clocks --> 1058mhz clock speed a week before launch :biggrin:

Hahaha, who knows. IF GM200 exists on 28nm, which I think is a strong likelihood if Nvidia is skipping 20nm altogether, then GM204 might only be 5-15% faster than GTX780 TI. That'd be disappointing, unless perf/$ improves significantly and GM200 comes in at 40-50% faster than a 780 TI for $599.

Regardless, with the leaks out there I have enough information on hand to know that I am not upgrading this cycle. I want a 75-100% increase over a stock gtx 780.
 

Subyman

Moderator <br> VC&G Forum
Mar 18, 2005
7,876
32
86
The 870 and 880 would beat the reference 780/780Ti at lower prices. This would allow NV to claim the 880 as the fastest card, justifying the 880 naming over the 780Ti. At $400-450, these cards would be a good value against the $450-500 780 and $600-650 780Ti. If they overclock by 20-30% like the 750Ti, they would sell like hot cakes before GM200 launches in 2015.

If the 870 comes in at 165-175W and is slightly faster than the 780/R9 290, it would instantly obsolete both of those cards at $399, since no one will want to buy a card that uses 55-80W more power for similar performance. The 880 only needs to beat the 780Ti by 5-10% at 185-190W, and at $499-549 it would also obsolete that card. I don't think these prices and performance increases are stellar, but they are enough to make the 870/880 better than the 290/290X/780/780Ti, which is all NV really needs to do to make them attractive.


The 870/880 would surely obsolete the older cards, but with that small a performance improvement they wouldn't attract a new purchase. Anyone who bought 780 performance for 780 money isn't going to spend 780 money on 780 performance + 5-10%. At that small an increase, only people compelled to get something new will buy.

We're also only talking about 5-10%, the kind of gain that can only be felt in benchmarks. To get a real, seat-of-the-pants improvement we need 30%+.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Regardless, with the leaks out there I have enough information on hand to know that I am not upgrading this cycle. I want a 75-100% increase over a stock gtx 780.

I think a lot of people will start to want these increases based on the following factors:

1) GPUs have gotten more expensive and the pace of performance increases has slowed. If someone is now waiting 2-3 years to upgrade, they aren't going to be excited about a 50% increase over that span when past 2-3-year gaps brought 2-2.3x. You feel like you are now getting 50% less for your $ than before. Lower power consumption is not enough of an incentive to upgrade for NA users.

2) $500-550 used to buy a flagship card and $300-350 a mid-range card. This all changed with the 680/7970 and 780/780Ti. $400-500 now buys only the 2nd-best card (R9 290/780), not the flagship. In the past you could get a 6950 2GB/GTX470/570 and OC it to flagship level; now getting closer to the flagship means spending more (unless we are talking R9 290 at 4K vs. 780Ti). Whenever a consumer is asked to pay more, they want more return for the $ spent.

3) Resolution increases require a larger jump in performance before you feel the difference. 1080p is still very prevalent, but as more people move to 2560x1440 IPS monitors as a stopgap to 4K, a 30-40% increase in performance is worth very little if you are chugging at 30 fps in a game; in % terms it sounds impressive, but it hardly changes playability for the $500 spent on a new card. The higher the resolution, the worse an outdated card performs, and the larger the % jump required from your existing card to really improve gameplay. There is a very large performance difference between 1080p and 1440p on existing hardware, which lets 1080p users keep trucking on mid-range 680/770/7970 cards for longer without killer games like Crysis 3 crushing them (at 1080p, cards like the 770/R9 280X are enough for most gamers not chasing 120 fps).

4) Many are eagerly awaiting next-generation PC games and cheaper 4K monitors. Both are taking their sweet time, which puts less pressure on existing owners to upgrade.

But the 670/680 surprised, and maybe the 870/880 will do the same.
 
Last edited:

CrazyElf

Member
May 28, 2013
88
21
81
Right now, I upgrade every 2nd generation. The way things are going though, maybe 3rd generation or 4th generation may be worth it.

The question is, will GPUs ultimately end up like CPUs, with only single digit percentage improvements per year?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Way to divert the attention towards the 5870 and miss all the other combinations of GPU upgrades. The upgrade paths below are something a PC gamer could easily have chosen, but you ignored them.

Upgrade path #1: GTX480 (March 2010) --> GTX780 (May 2013) = roughly 3 years, performance increase of 2.1x
http://www.computerbase.de/2013-05/nvidia-geforce-gtx-780-test/3/

Upgrade path #2: GTX580 (Nov 2010) --> GTX780 Ti (Nov 2013) = 3 years, performance increase of 2x
Upgrade Path #3: HD6970 (Dec 2010) --> R9 290X (Oct 2013) = less than 3 years, performance increase of 2.3x
Upgrade Path #4: HD6970 (Dec 2010) --> 7970 (Dec 2011) = In barely more than 1 year, performance increase of > 70%!
http://www.computerbase.de/2013-12/grafikkarten-2013-vergleich/10/

Stop using AMD's outdated pre-GCN architectures. In Battlefield 4 a GTX580 is 35% faster than the 6970. On the other hand, a 7970GHz - available 20 months later - is only 50% faster despite the move to 28nm and a much higher core clock.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Right now, I upgrade every 2nd generation. The way things are going though, maybe 3rd generation or 4th generation may be worth it.

The question is, will GPUs ultimately end up like CPUs, with only single digit percentage improvements per year?


I don't think you understand what's going on. CPU IPC increases by 5-10% every tick/tock (WITH INTEL), but performance per watt jumps by tremendous amounts. Since all chips in a single generation use the same architecture scaled up and down, the PPW has to be good or you can forget getting into portable devices, period. Anyone who looks only at performance or desktop IPC doesn't get it:

Intel, AMD and NV all design one architecture for all of their designs, so to speak. Kepler ranges from the TK1 SOC with a 2W TDP all the way up to the 780Ti. It is all the same architecture, and a scalable one that goes from the smallest chip to the biggest.

What does this require? Excellent performance per watt. If your PPW sucks, you're not getting into mobile devices. This is what happened to AMD for many years: they lost nearly all of the mobile dGPU market share to Kepler because their GCN 1.0 mobile dGPUs had very poor performance per watt in comparison. The point is scalable architectures. All of these companies (AMD, Intel, NV) design one architecture and scale it up and down. Side note: I think this is why AMD made Tonga. Is it exciting for the desktop? Not really, but the efficiency should be very good. AMD *KNOWS* they need good performance per watt to make plays in the portable market.

So essentially Intel is doing what they have to do to remain a viable company 10 or 15 years from now. If they focused on power-sucking monstrosity CPUs, that would manifest itself when scaled down into a mobile chip with poor PPW. Intel cannot do that if they want to retain their MacBook Air/Pro and ultrabook sales. That is a large chunk of their profit, even larger than desktop (by far).
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
That said... I still see generation-to-generation increases on the dGPU side being far larger than for CPUs. GPUs are making huge advances in performance per watt and absolute performance, while CPUs make smaller jumps of roughly 10% IPC per year. Maybe Intel could do another Conroe at some point with a breakthrough in IPC, but it is what it is. Intel is doing what they should be doing: creating chips with good performance per watt. Without good PPW they would be a dead company 5-10 years from now. It's what they have to do, because their performance relative to the other SOC vendors is way, way better.

This is also what GPU vendors are doing. They know the end game is performance per watt. But like I said, I don't see the increases stopping with dGPUs... I think we'll see big jumps again once 20nm hits.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,572
248
106
The question is, will GPUs ultimately end up like CPUs, with only single digit percentage improvements per year?

No - GPU performance can still be easily increased by throwing more cores at it, because it's such a parallel workload. Of course that assumes new nodes.

CPUs are stagnant not because of nodes, but because it's harder to extract more performance out of a few powerful cores - you can't just throw a bunch of cores at the problem like you can with a GPU.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
No - GPU performance can still be easily increased by throwing more cores at it, because it's such a parallel workload. Of course that assumes new nodes.

CPUs are stagnant not because of nodes, but because it's harder to extract more performance out of a few powerful cores - you can't just throw a bunch of cores at the problem like you can with a GPU.

This too. Very good point. CPUs are stagnating because clockspeed limitations are very real; it is what it is. Software hasn't caught up with multithreaded processing yet - it isn't something intentional by developers, it is genuinely very difficult to program a piece of software to use 100% of every available thread, at least from what I've read. Generally speaking, the more threads you have, the less efficiency you get per thread.

This same problem doesn't apply to GPUs, as noxious stated. GPUs specialize in parallel workloads.
 

CrazyElf

Member
May 28, 2013
88
21
81
I don't think you understand what's going on. CPU IPC increases by 5-10% every tick/tock (WITH INTEL), but performance per watt jumps by tremendous amounts. Since all chips in a single generation use the same architecture scaled up and down, the PPW has to be good or you can forget getting into portable devices, period. Anyone who looks only at performance or desktop IPC doesn't get it:

Try me then.

Single threaded performance is simply not getting much faster. Certainly not on a per watt basis.

Let's take a "good" Sandy Bridge: a chip that could hit, say, 5GHz at 1.45V. A typical 4790K today can reach roughly 4.7-4.8GHz. That 4790K isn't much faster, maybe 5-6%, than the 5GHz Sandy Bridge. New instruction sets are the exception; you see much better scaling there. Haswell, for example, has very good AES scaling compared to the Bridges. But using those new instruction sets pushes power consumption way, way up.
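As back-of-envelope math, perf ≈ IPC × clock; the clocks are the post's examples, and the cumulative IPC uplift is a rough assumed figure:

```python
# Back-of-envelope version of the Sandy Bridge vs. Haswell comparison:
# perf ~ IPC x clock. Clocks are the post's examples; the cumulative
# SB -> Haswell IPC uplift is an assumed rough figure.
sb_clock, hw_clock = 5.0, 4.8   # GHz, good OC samples from the post
ipc_uplift = 1.10               # assumed cumulative IPC gain, SB -> Haswell

relative = (hw_clock / sb_clock) * ipc_uplift
print(f"4790K @ {hw_clock}GHz vs SB @ {sb_clock}GHz: {relative - 1:+.0%}")
```

With these inputs the net gain comes out around +6%, in line with the post's 5-6% estimate: the IPC gain is mostly eaten by the lower achievable clock.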

Now it's not a perfect parallel. Single-threaded performance is, at least on the desktop, what matters most. Servers are different, and certain desktop applications, like video encoding and file compression, scale very well with cores. For that reason Intel has been progressively adding cores to its Xeon line.

For GPUs it's different. They're very parallel; a GPU is closer to a CPU with thousands of cores. Not a perfect analogy, but close enough.




Intel, AMD and NV all design one architecture for all of their designs, so to speak. Kepler ranges from the TK1 SOC with a 2W TDP all the way up to the 780Ti. It is all the same architecture, and a scalable one that goes from the smallest chip to the biggest.

What does this require? Excellent performance per watt. If your PPW sucks, you're not getting into mobile devices. This is what happened to AMD for many years: they lost nearly all of the mobile dGPU market share to Kepler because their GCN 1.0 mobile dGPUs had very poor performance per watt in comparison. The point is scalable architectures. All of these companies (AMD, Intel, NV) design one architecture and scale it up and down. Side note: I think this is why AMD made Tonga. Is it exciting for the desktop? Not really, but the efficiency should be very good. AMD *KNOWS* they need good performance per watt to make plays in the portable market.

No disagreements here.

When I asked my question, I was asking strictly from the perspective of: how much more efficient can GPU architectures get?

We seem to have hit rock bottom on cost per transistor at 28nm. Maybe 450mm wafers could buy a node or two, but not much more; past that there will only be a few one-offs. The only other way to get performance is through better architecture.

Perhaps another way to say it: given that die shrinks are no longer economical, how many more years of rapid GPU scaling can better architecture deliver? I ask because, even though GPUs are parallel, previous generations generally improved by adding more cores (I am generalizing; other things were done too, but cores were one of the big levers). If you're stuck on one node for an extended period, you can no longer do that unless you make a bigger die, which a mature process permits. But how big can a die get? Intel's Tukwila, I believe, was close to 700mm^2. Could a 1000mm^2 die be economical, even on a very mature process?
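On the 1000mm^2 question, a simple Poisson yield model gives a rough feel (the defect density for a "very mature" process is a guess, not foundry data):

```python
import math

# Die yield vs. area under a simple Poisson model: yield = exp(-area * D0).
# D0 is an illustrative guess for a very mature process, not real data.
D0 = 0.0005                  # defects per mm^2 (assumed)
yields = {a: math.exp(-a * D0) for a in (250, 550, 700, 1000)}
for area, y in yields.items():
    print(f"{area:4d} mm^2 die -> ~{y:.0%} yield")
```

Yield degrades exponentially with area, so the economics get steadily worse, and in practice the lithography reticle caps a single die at roughly 850mm^2 anyway, so a monolithic 1000mm^2 die isn't really on the table.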


So essentially intel is doing what they have to do in order to be a viable company 10 or 15 years from now. If they focused on power sucking monstrosity CPUs, that would manifest itself when scaled down into a mobile chip with poor PPW. Intel cannot do that if they want to retain their sales of macbook air/pro and ultrabook devices. That is a large chunk of their profit, even larger than desktop (by far).

That is true.

The issue is, let's face it. We enthusiasts are only a small percentage of the market.

The only thing that matters is volume (the general public) and high margin processors (Xeons).

Single-threaded raw performance is not so much unimportant as just not a high priority right now. For Intel, it's servers and the volume stuff (OEMs).

Edit:
This too. Very good point. CPUs are stagnating because clockspeed limitations are very real; it is what it is. Software hasn't caught up with multithreaded processing yet - it isn't something intentional by developers, it is genuinely very difficult to program a piece of software to use 100% of every available thread, at least from what I've read. Generally speaking, the more threads you have, the less efficiency you get per thread.

This same problem doesn't apply to GPUs, as noxious stated. GPUs specialize in parallel workloads.


This too is an issue. It's the reason why I still think single-threaded performance is the most important thing. A lot of existing software is still single-threaded.

The other question is how much can be parallelized. Some things probably cannot be; Amdahl's law comes into play here.
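Amdahl's law in two lines, since it came up: with a fraction p of the work parallelizable across n cores, speedup = 1 / ((1-p) + p/n).

```python
# Amdahl's law: even a small serial fraction caps the speedup ceiling,
# no matter how many cores you throw at the problem.
def amdahl(p: float, n: int) -> float:
    """Speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.95, 0.99):
    print(f"p={p:.0%}: 8 cores -> {amdahl(p, 8):.2f}x, "
          f"1000 cores -> {amdahl(p, 1000):.1f}x")
```

This is the CPU/GPU split in the thread in one formula: graphics workloads sit near p ≈ 1 and keep scaling with cores, while typical desktop software, with a meaningful serial fraction, hits a hard ceiling regardless of core count.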


@Blackened and Noxious
Are we in agreement or do you disagree with this post?
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Stop using AMD's outdated pre-GCN architectures. In Battlefield 4 a GTX580 is 35% faster than the 6970. On the other hand, a 7970GHz - available 20 months later - is only 50% faster despite the move to 28nm and a much higher core clock.

Latest BF4 Dragons teeth.

GTX580 is ~17% faster than HD6970 at 1080p. At 1600p the difference is ~18%.

HD7970 is 30% faster than GTX580 at 1080p and ~35% faster at 1600P.

Why only BF4 ???

http://gamegpu.ru/action-/-fps-/-tps/battlefield-4-dragon-s-teeth-test-gpu.html


 

dangerman1337

Senior member
Sep 16, 2010
333
5
81
*snip*

What am I missing? The 780 on Guru3D runs 12,100.
Let's take a look at the test bed: http://www.guru3d.com/articles_pages/msi_geforce_gtx_780_gaming_oc_6_gb_graphics_card_review,11.html

Mainboard: MSI Big Bang XPower II (MS-7737)
Processor: Core i7 3960 Extreme (Sandy Bridge-E) @ 4.6 GHz on all six cores
Graphics Card: GeForce GTX 780 - MSI Gaming OC Edition (6GB)
Memory: 8192 MB (2x 4096 MB) DDR3

Does 3DMark11 have its results influenced by multicore much?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Does 3DMark11 have its results influenced by multicore much?

CPU speed does affect 3DMark 11 scores, yes. In particular, the physics score is heavily influenced by the CPU, and overclocking it will inflate the score. Guru3D overclocked their testbed to 4.6GHz.

The 3DMark 11 Performance preset is basically a test of low-end graphics.

The Extreme preset is a test of high-end graphics with all the bells and whistles thrown in to really push a GPU to its limits.

Essentially, the 780 and 870 are in the same range. The Coolaler testbed CPU in the leak was not overclocked, so Guru3D would have a higher physics score (thanks to the OC'd CPU), which would inflate the final score. I think the 880 will be quite a good jump over the 870, just my personal theory though. I can't remember if the 880 was rumored to have 25xx cores, but I think that's what it was? If so, the 870 is getting 780-level performance with ~1600 cores.

Back to the Performance and Extreme presets: Performance renders at 720p from what I remember (someone refresh my memory), while Extreme tests a higher resolution. Either the 880 is increasing its performance gap over the 780 as resolution increases, or the overclocked CPU testbed is inflating the Guru3D score in the Performance test.
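To illustrate how a CPU-heavy sub-score drags the overall number, here's a weighted-harmonic-mean sketch; the weights and scores are invented for illustration and are not Futuremark's actual 3DMark 11 formula:

```python
# How a composite benchmark score can be inflated by the CPU alone.
# Weighted harmonic mean with made-up weights, NOT Futuremark's formula.
def overall(graphics, physics, w_gpu=0.85, w_cpu=0.15):
    return 1.0 / (w_gpu / graphics + w_cpu / physics)

stock_cpu = overall(12000, 8000)
oc_cpu    = overall(12000, 11000)   # same GPU, overclocked CPU
print(f"same GPU, stock CPU: {stock_cpu:.0f}")
print(f"same GPU, OC'd CPU : {oc_cpu:.0f}")
```

With these made-up weights, overclocking only the CPU lifts the final score by several percent even though the graphics score is unchanged, which is why comparing leaked scores against Guru3D's 4.6GHz testbed needs a grain of salt.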
 
Last edited: