9600GT SLi review


BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
Now, if you disabled 1 cluster, this would mean your simulation of the GT has 28 TMUs compared to the 56 TMUs found on the G92
I was never trying to simulate other cards, only to demonstrate the performance reduction by going from 128 SPs to 64 SPs on my card.

As for your commentary, it's correct but that was never the issue.

My issue is with Azn trumpeting texture fillrate without a shred of evidence that modern games are bottlenecked in such a way, while I provided plenty of evidence to demonstrate why shading is more important.

My other issue is how he changes his tune to suit the results.

First he was running around saying that disabling 64 SPs would have no performance hit, which is what he expected. Then when I showed evidence to the contrary he changed his tune to say he did expect a performance hit due to a reduction in texturing capability, even though the GTS tested in the first place had much less texture fillrate than my Ultra.

Furthermore we know CoD 4 is a relatively primitive engine in terms of shader usage so that lends my comments even more credibility.
 

lopri

Elite Member
Jul 27, 2002
13,211
597
126
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
The 8800gt on the other hand would still have 18000 MTexels/s even if 48 shaders were disabled.
But again that texture fillrate advantage is only relevant in games not using FP rendering.

I tested Bioshock again (which does use FP rendering) using the lowest shader clock I could go to (1224 MHz, default 1512 MHz).

I got 47 fps vs 53 FPS, or a 13% performance gain for an increase of 24% in shader clock.

And again the difference would likely be higher if I wasn't running 4xAA combined with TrAA.

Why is it that Sickbeat was able to downclock his SPs to 800MHz but you can only downclock yours to 1224MHz?

Try clocking your SPs to 800MHz or so and run the bench again. That wouldn't cut your texture fillrate in half.

When you disable 1/2 of the SPs on your Ultra, your FP16 texture fillrate is cut in half as well.


So I was right all along. BFG's texture fillrate was cut in half when he disabled 1/2 of his SPs, and that is what hampers his performance, not the loss of SPs.
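For reference, the arithmetic behind this claim looks roughly like the sketch below (the TMU counts and clocks are illustrative assumptions, not exact Ultra specs):

```python
# Back-of-the-envelope texel fillrate: peak bilinear fillrate ~ TMUs * core clock.
# On G80/G92 the TMUs live inside the SP clusters, so disabling clusters removes
# TMUs too, while merely lowering the shader clock leaves them all active.

def texel_fillrate_mtexels(tmus, core_mhz):
    """Peak bilinear texture fillrate in MTexels/s."""
    return tmus * core_mhz

# Illustrative 8800 Ultra-style numbers (assumed, not exact specs):
full_card      = texel_fillrate_mtexels(tmus=32, core_mhz=612)  # all clusters enabled
half_clusters  = texel_fillrate_mtexels(tmus=16, core_mhz=612)  # half the clusters disabled
shader_downclk = texel_fillrate_mtexels(tmus=32, core_mhz=612)  # only the shader domain lowered

print(full_card, half_clusters, shader_downclk)  # 19584 9792 19584
# Disabling half the clusters halves the texture fillrate; downclocking the
# shader domain alone doesn't touch it.
```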
But again you need to demonstrate that the reduction of texturing hurts performance more than the reduction of shading ability.

You would also need to demonstrate how your multi-texturing results are relevant in today's games.

So far you have failed to do either while I have provided ample evidence to demonstrate shaders becoming more and more widespread compared to texturing.

We also have commentary from ATi in the article I linked to that confirms what I'm saying.

Why don't you provide proof instead by downclocking your SPs? If I had a higher-end card I would provide it for you, but I do not. Comparing the 8800gt and 9600gt in Bioshock, the performance difference is there, but it's nothing like the 50% reduction in frame rates you posted. It's more like 15%.

http://www.firingsquad.com/har...performance/page14.asp
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: lopri
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)

Oh really? Where? I believe future games will use more shaders, so the 8800gt will be faster, but currently the difference is very small.

You can see this in 9600gt vs 8800gt.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: keysplayr2003
Originally posted by: v8envy
Keys, one quick question. How can you get at your keyboard with those enormous balls of yours in the way?

Definitely appreciate your testing. This is incredibly interesting.

LMAOOOO!!

I guess you could say that, it's not the end of the world for me if I kill my card. I'd be a bit upset, but not the end of the world. I'd go out and get another one. Don't get me wrong, I like my money as much as the next guy, it's just that I currently have the means to get another card. So I am not as frightened to try things.

When I disabled an ROP, I got SUPER garbage on the Windows startup screen, then a BSOD, then a reboot. I panicked, but worked it out. So now I can tell everyone NOT to try disabling them. Shader units do not seem to have any problems being disabled, however. Actually, I think they appreciate the rest!

Keys

I didn't get a BSOD, but the third time I underclocked the SPs from 1625 to 815 it caused a hard lock and I had to reach for the reset button (first time in a year or so)... plus some weird slowdowns, but no biggie. If you think about it, the price of the card is only a few days' worth of work.

It also makes no sense for anything to get damaged; I have never heard of such a thing damaging a card. You are using software to turn off parts of the card, and as soon as you cut the power and turn it back on, the card is back to normal. (It is, after all, a self-contained computer, with its own software, RAM, BIOS, CPU, etc.)

BTW, you mentioned me testing it... I only tested Company of Heroes in its built-in time demo, which is the same as it was during DX9. I remember it being pretty accurate back then, but now it reported more than 400% more fps than I was getting in actual gameplay, so I am not too sure about using it.

I will see about testing Bioshock, since BFG mentioned it shows a pretty hefty change from 64 to 128 SPs.
Or I would, if someone could tell me how to get NVStrap to work with Vista x64.
 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
Originally posted by: Azn
Originally posted by: lopri
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)

Oh really? Where? I believe that future games would use more shader so 8800gt would be faster but currently the difference is very little.

You can see this in 9600gt vs 8800gt.



Even if that happens, by that time the 8800GT will be struggling to play games at fluid rates anyway, so even if your theory is correct it's meaningless.

In fact I'm going to keep my eye on newer games and keep comparing the 8800GT with the 9600GT, and I'm pretty confident that the 8800GT won't start getting faster and faster compared to the 9600.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
Now, if you disabled 1 cluster, this would mean your simulation of the GT has 28 TMUs compared to the 56 TMUs found on the G92
I was never trying to simulate other cards, only to demonstrate the performance reduction by going from 128 SPs to 64 SPs on my card.

But you didn't only demonstrate going from 128 SPs to 64 SPs. You cut your texture fillrate in half as well.


My issue is with Azn trumpeting texture fillrate without a shred of evidence that modern games are bottlenecked in such a way, while I provided plenty of evidence to demonstrate why shading is more important.

What evidence did you bring? You didn't even have a clue your texture fillrate was cut in half when you disabled your SPs and tested your games. Only when I said something did you come up with the claim that texture fillrate makes no performance impact (the usual run-around cat and mouse game). You haven't shown us anything, BFG. You argue about things that were never said, or about sentence structure. That's about it.


My other issue is how he changes his tune to suit the results.

Kind of like how the 3DMark multi-texture test doesn't matter and COD4 is a primitive engine? Only when it's favorable to winning an argument, right? :disgust:


First he was running around and saying disabling 64 SPs having no performance hit is what he expected. Then when I showed evidence to the contrary he changed his tune to say he did expect a performance hit due to a reduction of texturing capability, even though the GTS tested in the first place had much less texture fillrate than my Ultra.

Where was my comment about disabling 64 SPs having no performance impact? You make up a story just so you can sound credible.

I might have said little impact, but I did not say NO impact.


Furthermore we know CoD 4 is a relatively primitive engine in terms of shader usage so that lends my comments even more credibility.

Is this why the GeForce 7 gets weak performance and the X1000 series does better in this game, and why my budget 8600gts with a 128-bit bus does as well as a 7900gtx?

Why don't you tell Activision that COD4 is primitive? They will laugh in your face, I bet.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: SniperDaws
Originally posted by: Azn
Originally posted by: lopri
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)

Oh really? Where? I believe that future games would use more shader so 8800gt would be faster but currently the difference is very little.

You can see this in 9600gt vs 8800gt.



if that does by the time it happens the 8800GT will be struggling to play games at fluid rates anyway so even if your theory is correct its meaningless.

infact im going to keep my eye on newer games and keep comparing the 8800GT with the 9600GT and im pretty confident that the 8800GT wont start getting faster and faster compared to the 9600.

Sniper, again, I'm happy for you about your card.

But your confidence will be broken when the 8800gt does pull ahead as more shaders are used in games.
 

Cheex

Diamond Member
Jul 18, 2006
3,123
0
0
With the 9800GTX boasting such a lame 3DMark06 score...14014...assuming it is real...9600GT SLI is looking even better.

Will they ever allow Tri-SLI on the 9600GT??...That would be perfect!!

I don't think so though...just hoping.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Look, we've done enough talking. There are some pretty straight-forward ways to test most of the 'theories' being floated around here.

To sum up, there are a couple of fundamental questions to be answered:

1) Why does the 9600GT perform as closely as it does to the 8800GT, given the extreme reduction in shaders?

2) How heavily are most modern games tied to shader (versus texture) performance?

With regard to question #1, two theories are being floated. First, that the 9600GT is simply a cut-down, but tweaked G92 with higher clockspeeds, and that it performs closely with the 8800GT because the extra shading power of the 8800 just isn't required for a lot of games. Second, that the G92's architectural improvements amount to more than a few simple 'tweaks' and that the existing shaders are just more efficient somehow.

This is an easy question to answer as long as we have two people, one with a 9600GT and another with a G92 (any flavor except GT 256MB), who are willing to run benchmarks. First, disable enough 'units' on the G92 to make it even with the 9600GT in terms of shaders and texture units. Then equalize the clocks (core, shader & memory). This should yield parts with the same number of shaders and texture units, with equal clocks. We benchmark and look at the results.

Addressing question #2 is simple as well. Simply take an 8800GT 512MB and reduce the shader clock in increments equal to approximately 1/7th of the maximum value, then compare the results against disabling one 16-SP 'unit' at a time. There are seven 16-shader 'units' in the 8800GT, so reducing the shader power by 1/7th of the total should be equivalent to disabling one of the seven units. The only difference between the two is that texture units are reduced when actually disabling units, while they remain active when only reducing the shader clock speed.

Voila, you have a way to compare both the 9600GT's architecture to G92, and you have a way to look at texturing power versus shader power. It's not perfect, as shaders and texture units are tied, but you can easily get a number of useful combinations.

You could, for example, disable three units one time and four units another, while (when disabling four units) running the shader clock one-third higher so that the three remaining units match the shader power of the four-unit configuration. That gives a part with equal shader power but slightly fewer texture units.
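To make that plan concrete, here's a rough sketch of the matched configurations it would produce (the stock clock and cluster counts below are assumptions for an 8800GT; adjust to the actual card):

```python
# Sketch of the proposed test matrix. Assumed stock 8800GT figures (7 clusters
# of 16 SPs, 1500MHz shader clock) - substitute your own card's values.
STOCK_CLUSTERS = 7
SPS_PER_CLUSTER = 16
STOCK_SHADER_MHZ = 1500

def shader_power(clusters, shader_mhz):
    """Relative shader throughput ~ active SPs * shader clock."""
    return clusters * SPS_PER_CLUSTER * shader_mhz

for k in range(4):  # 0..3 steps down from stock
    # Option A: keep every cluster enabled, drop the clock by k/7 of stock.
    downclocked_mhz = STOCK_SHADER_MHZ * (1 - k / STOCK_CLUSTERS)
    a = shader_power(STOCK_CLUSTERS, downclocked_mhz)
    # Option B: disable k clusters and leave the clock at stock.
    b = shader_power(STOCK_CLUSTERS - k, STOCK_SHADER_MHZ)
    print(f"step {k}: downclock-only {a:.0f} vs {STOCK_CLUSTERS - k} clusters {b:.0f}")

# The two options give identical shader power at every step, but only option B
# also removes TMUs - so any fps gap between matched steps points at texturing.
```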

Let's stop talking and start benching. I've got an 8800GT 512MB and a monitor that goes up to 19x12, and I'm game. Does anyone have a 9600GT with a monitor that goes up to 19x12?
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: dreddfunk
Look, we've done enough talking. There are some pretty straight-forward ways to test most of the 'theories' being floated around here.

To sum up, there are a couple of fundamental questions to be answered:

1) Why does the 9600GT perform as closely as it does to the 8800GT given the extreme reduction in shaders.

2) How heavily are most modern games linked to shader (versus texture) performance.

With regard to question #1, two theories are being floated. First, that the 9600GT is simply a cut-down, but tweaked G92 with higher clockspeeds, and that it performs closely with the 8800GT because the extra shading power of the 8800 just isn't required for a lot of games. Second, that the G92's architectural improvements amount to more than a few simple 'tweaks' and that the existing shaders are just more efficient somehow.

This is an easy question to answer as long as we have two people, one with a 9600GT and another with a G92 (any flavor except GT 256MB), who are willing to run benchmarks. First, disable enough 'units' on the G92 to make it even with the 9600GT in terms of shaders and texture units. Then equalize the clocks (core, shader & memory). This should yield parts with the same number of shaders and texture units, with equal clocks. We benchmark and look at the results.

To address question number two is simple as well. Simply take an 8800GT 512MB and reduce the shader clock in increments equal to approximately 1/7th the maximum value, then compare the results against disabling one 16sp 'unit' at a time. There are seven 16 shader 'units' in the 8800GT. Reducing the shader power by 1/7th the total value should be equal to disabling one of the 16 shader units. The only difference between the two will be the fact that texture units are reduced when actually disabling units, while they will still be active when only reducing the shader clock speed.

Voila, you have a way to compare both the 9600GT's architecture to G92, and you have a way to look at texturing power versus shader power. It's not perfect, as shaders and texture units are tied, but you can easily get a number of useful combinations.

You could, for example, disable three units one time, and four units another, while (when disabling four units) ensuring that you have a 1/7th greater shader clock speed. This will have a part with equal shader power, but with slightly fewer texture units.

Let's stop talking and start benching. I've got a 8800GT 512MB and a monitor that goes up to 19x12, and I'm game. Does anyone have a 9600GT with a monitor going up to 19x12?

#1 has already been tested if you read a few pages back, and it's been shown that you don't need 112 SPs to get near 8800gt performance.

The 9600gt itself is proof of #1 anyway.

#2 I can't test because I don't have an 8800gt or 9600gt. Yes, I would really appreciate it if you would test it and shut BFG up once and for all.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Originally posted by: taltamir

It also makes no sense for anything to get damaged, I have never heard of such thing damaging a card, you are using software to turn off parts of the card, as soon as you cut the power and turn it back on the card is back to normal. (it is after all, a self contained computer, with its own software, ram, bios, cpu, etc)

If we were talking about a purely analog, mechanical system I'd be more inclined to agree. We're talking about a giant wad of software sitting on top of this card, with logic to do voltage and power management.

I'd be worried that running it that far out of spec is going to cause some block of code to go 'OMG we need ((unsigned long) -1) millivolts to that chip over --> there to make it work right.'

Then again, I've only written drivers for ICU hardware, mission planning software, financial software for brokerages and regulated utilities. My expectation for software quality may be far lower than what people writing video drivers are delivering.

Besides, cost of the card is only part of it. You have to deal with shopping for a new card, waiting for it to arrive, etc etc.
 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
Originally posted by: Azn
Originally posted by: SniperDaws
Originally posted by: Azn
Originally posted by: lopri
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)

Oh really? Where? I believe that future games would use more shader so 8800gt would be faster but currently the difference is very little.

You can see this in 9600gt vs 8800gt.



if that does by the time it happens the 8800GT will be struggling to play games at fluid rates anyway so even if your theory is correct its meaningless.

infact im going to keep my eye on newer games and keep comparing the 8800GT with the 9600GT and im pretty confident that the 8800GT wont start getting faster and faster compared to the 9600.

Sniper again. I'm happy about your card.

But your confidence will be broken when 8800gt does get faster as more shader are used in games.



Are we all agreed that Crysis is the most taxing and shader-intensive game out there so far?

http://www.firingsquad.com/har...performance/page13.asp

OK, then please just look at the results for the non-overclocked 9600GT versus the 8800GT.

The 9600GT is 3 fps slower at the lower res and 2 fps slower at the higher resolution.



 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Erm. 2 FPS out of 22 is roughly 10%...

We also know Crysis is a horrific CPU pig. I'd like the same tests done on a more macho CPU than an E6750. The E6750 looks like the frame rate problem, not the cards. Crysis is one of the few examples today where a 2.66GHz CPU will limit GPU performance, especially at 1280x1024! I know for a fact people with 8800GTs are able to get to the mid 30s on that benchmark with 3+GHz CPUs, even at a higher res than 1024x768.

One of those graphs (16x12) implies the 3850 serves up identical performance to an overclocked 9600GT, and at 19x12 the 3850 and 3870 just utterly demolish everything NVIDIA (by a convincing 20% margin). The next closest card is a 9600GT, outperforming an 8800GT. Sooo... I'm not sure you can draw any useful conclusions from that test.
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Originally posted by: SniperDaws
Originally posted by: Azn
Originally posted by: SniperDaws
Originally posted by: Azn
Originally posted by: lopri
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)

Oh really? Where? I believe that future games would use more shader so 8800gt would be faster but currently the difference is very little.

You can see this in 9600gt vs 8800gt.



if that does by the time it happens the 8800GT will be struggling to play games at fluid rates anyway so even if your theory is correct its meaningless.

infact im going to keep my eye on newer games and keep comparing the 8800GT with the 9600GT and im pretty confident that the 8800GT wont start getting faster and faster compared to the 9600.

Sniper again. I'm happy about your card.

But your confidence will be broken when 8800gt does get faster as more shader are used in games.



Are we all agreed that Crysis is the most taxing and shader intensive game out there so far ?

http://www.firingsquad.com/har...performance/page13.asp

ok then please just look at the results between the non overclocked 9600GT and the 8800GT.

the 9600GT is 3 fps slower at lower res and 2 fps at the higher resoution.

Keep in mind it's easy to get 13% to 16% more performance out of a stock 8800gt via an easy session in RivaTuner.

That creates a larger performance margin than the vanilla 9600gt --> 9600gt super OC edition gap shown in that link.

 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
Yeah, so far I've had mine at 750/2000, so we can all overclock; that's why I pointed out the stock-clocked versions of both the 8800 and 9600.

OMG, are you lot women? I've never chatted with such a bunch of stubborn twats in all my life.

The facts are in the benchmarks: even in Crysis the 8800 doesn't pull away from the 9600GT, it's as simple as that. Even if Crysis is a CPU hog, what difference is it going to make? A faster CPU will make both cards perform better, not just the 8800GT.

So when 2 fps doesn't sound like enough, we use percentages to make it sound better and faster.

Please carry on and knock yourselves out!


 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Dude, your card is good, just not as good as an 8800gt. Your card is more desirable because of the 169.00 price tag. The end.
 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
FFS, I'm not trying to justify buying a 9600GT. Before I bought it I slagged it off as nothing but a cut-down version of the 8800GT, and I was going to buy an 8800GT, but after reading benchmarks and reviews I realised the 8800GT wasn't worth the extra money, as it performs the same, and better when using AA and AF.

Anyway, I'm done.

If I ever need any help with shopping or knitting, or just feel like slagging off men, at least I know which people to chat to.......lmao.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: SniperDaws
Originally posted by: Azn
Originally posted by: SniperDaws
Originally posted by: Azn
Originally posted by: lopri
I am not sure what you two are arguing at this point. I have noticed Azn self-contradicting (when comparing posts from other threads) as well. I would like to urge both of you to drop a little bit of ego and be more productive. Folks would be more interested in acquiring accurate information than who's winning the argument between two of you. Oh and to BFG: I don't think an engine's age has much relevance to a heavily modified game. Even if the engine is old, a final product can be very different in terms of resource usage. Adding a layer of specific post-processing can tax hardware's shading ability like no other engine can do. (I think?)

Oh really? Where? I believe that future games would use more shader so 8800gt would be faster but currently the difference is very little.

You can see this in 9600gt vs 8800gt.



if that does by the time it happens the 8800GT will be struggling to play games at fluid rates anyway so even if your theory is correct its meaningless.

infact im going to keep my eye on newer games and keep comparing the 8800GT with the 9600GT and im pretty confident that the 8800GT wont start getting faster and faster compared to the 9600.

Sniper again. I'm happy about your card.

But your confidence will be broken when 8800gt does get faster as more shader are used in games.



Are we all agreed that Crysis is the most taxing and shader intensive game out there so far ?

http://www.firingsquad.com/har...performance/page13.asp

ok then please just look at the results between the non overclocked 9600GT and the 8800GT.

the 9600GT is 3 fps slower at lower res and 2 fps at the higher resoution.

Crysis isn't just shader intensive; it is video card intensive. The 8800gt is too bandwidth limited at this point to make a huge impact over the 9600gt.

There are, however, more shader-intensive games out there where the 8800gt takes a bigger lead, but currently the 9600gt is able to handle them with ease. As for the future, we don't know how coders are going to code their games.

The upcoming Fallout, from the makers of Oblivion, is supposed to be very shader intensive. Only time will tell.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: v8envy
Erm. 2 FPS out of 22 is roughly 10%...

We also know Crysis is a horrific CPU pig. I'd like the same tests done on a more macho CPU than a E6750. The 6750 looks like the frame rate problem, not the cards. Crysis is one of the few examples today where a 2.66 ghz CPU will limit GPU performance. Especialy at 1280x1024! I know for a fact people with 8800GTs are able to get to mid 30s on that benchmark with 3+ ghz CPUs, even at a higher res than 1024x768.

One of those graphs (16x12) implies 3850 serves up identical performance to an overclocked 9600GT, and at 19x12 the 3850 and 3870 just utterly demolish everything nvidia (by a convincing 20% margin). The next closest card is a 9600GT, outperfoming an 8800GT. Sooo... I'm not sure you can draw any useful conclusions from that test.

Maybe at medium detail, but at high detail it is video card limited.

http://www.gamespot.com/features/6182806/p-6.html
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Originally posted by: apoppin
You are late to the party Dredd ... Keys and BFG and a few others are already *doing* just that

Originally posted by: Azn
#1 has been tested already if you read few pages back and already been proven that you don't need a 112SP to get near 8800gt performance.

9600gt is proof anyways for #1.

#2 I can't test because I don't have a 8800gt or 9600gt. Yes I would really appreciate if you would test it and shut BFG up once and for all.

'Late to the party', 'already been proven', hardly. All respect to Keys and BFG--as I think it was wonderful for them to do the tests to begin with--I don't believe their tests to be conclusive in any way, which is indicated by the fact that both plan further tests.

However, and this is the point of my posting, I think testing two G80 GPUs is the wrong choice. I'm interested in their results, but their applicability to G92/G94 cards, with their different texture and memory setups, remains tentative.

The issue at hand is how a *G94* card with 64 SPs performs close to a *G92* card with 112 SPs. The answer to that question isn't going to lie in running tests on two G80 cards.

So I suggest coordinating a test between G92/G94 cards, while *controlling* for variables (such as number of shaders and texture units).

I will try to do what I can on my own, as a G92 owner, but someone with a G94 and a computer with similar specs would really need to be in the equation to *prove* anything (I'm on an e2140 @2.4, IP35-E, 3GB HP Value RAM DDR2-667 @600, BFG 8800GT OC @625, monitor up to 1920x1200).

If anyone knows of any downloadable time-demos that seem reasonable to test, I'd be more than happy to give it a go. I don't have many of the latest FPS-style games, I got out of the habit of playing them, but I'm willing to do what I can within reason (i.e. I don't have the money to go out and buy five games just to run benchmarks).

Cheers.
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Specs don't need to be similar if you calculate percentages and form a ratio. You're right that they should be identical, but where are you going to find that, considering factors like driver revisions, different personalized operating systems, performance/quality settings, and many other things?
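For what it's worth, the ratio idea would look something like this in practice (the machine names and fps numbers are placeholders, not real results):

```python
# Sketch of the ratio approach: bench each machine against its own baseline so
# absolute fps (CPU, drivers, OS) cancels out. All numbers below are made up.

results = {
    "machine_A": {"baseline_fps": 60.0, "test_config_fps": 45.0},
    "machine_B": {"baseline_fps": 48.0, "test_config_fps": 36.0},
}

for machine, r in results.items():
    ratio = r["test_config_fps"] / r["baseline_fps"]
    print(f"{machine}: test config runs at {ratio:.0%} of its own baseline")

# If both machines land at ~75%, the scaling comes from the card change itself,
# not from the differing systems.
```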

 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
jared - I think that's true if you want to come up with a reasonably-founded guess about performance, but why stop there? Why not just use RivaTuner to scale back SPs on an 8800GT and see what we come up with? The truth is, I've no idea how different the G80 architecture is from G92 at the low level of how the SPs operate.

I kind of want to get to the bottom of whether the G94's SPs are much improved, or whether shaders just don't matter as much as I thought. I know specs won't be identical outside of a testing lab, but I feel we could do a lot better than what we're doing right now, 'tis all.

I'll look for the Crysis demo. Does it have a built-in time demo?


[edited for spelling...or lack thereof]
 