Doom 3 Benchmarks at [H]

Page 7

VirtualLarry

No Lifer
Aug 25, 2001
56,552
10,171
126
Originally posted by: Cat
1. The whole point of Doom's engine is to produce nearly identical results across all the rendering paths. The dynamic lighting and stencil shadows will be used in all 4 paths.

Ideally, yes.

But do you realize, that on older cards, it takes an additional stencil pass *per light* on the scene? That would mean, that with a few additional lights, rendering could take as much as 4x the time, resulting in 1/4 the framerate, on older cards.

Unless, of course, dynamic lights/stencil shadows aren't used on those low-end cards. We'll have to see, I guess. Not too many benchmarks or IQ comparisons with D3 on older hardware yet.
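The scaling being described can be sketched with a toy cost model (purely illustrative; the function and numbers are hypothetical, not id's actual renderer):

```python
# Toy cost model (hypothetical): each light adds roughly one extra
# shadow-volume/stencil pass plus a lighting pass over the scene.
def frame_time_ms(base_ms, per_light_ms, num_lights):
    """Estimated frame time when every light costs an additional pass."""
    return base_ms + per_light_ms * num_lights

# With equal pass costs, going from 1 light to 4 lights:
t1 = frame_time_ms(4.0, 4.0, 1)   # 8.0 ms
t4 = frame_time_ms(4.0, 4.0, 4)   # 20.0 ms, i.e. ~2.5x the frame time
```

Whether the real ratio approaches 4x depends on how much of the base cost is shared between lights, which is exactly the open question here.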

Originally posted by: Cat
2. The physics sim runs at 60Hz, no more, no less, regardless of CPU. This has been explicitly stated.

Please show me where it has been explicitly stated that Doom3's physics engine will run at exactly 60Hz, no less, even on slower CPUs. I somehow doubt this. I think that perhaps you missed my mention of "interpolation".

Originally posted by: Cat
3. The timedemo is described in the article as being an unusually intensive example of the game.

But still not utilizing the physics/AI. Therefore CPU load will be higher still with those enabled. Unless Carmack somehow invented software code that actually takes "negative CPU cycles", and makes "more code, run faster". Being an x86 assembly-language programmer myself, I must state that such a thing is not possible, at least not until the physicists manage to create time-travelling CPUs.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: Gamingphreek
Well the point that you are referring to when the 9800XT is tied or winning by a bit is when the game is CPU bound. Therefore it is expected for one to lead the other within a couple percentage points because it is within the margin of error. However when everything becomes GPU bound you see the 5950U pull away.
When the 6800 scores twice as high as the 5950/9800 on the same system with the same test, it is safe to conclude that the benchmark is not CPU-bound.

Also ATI does too optimize for games. How in the hell do you think they do so well in HL2.
....
 

Illissius

Senior member
May 8, 2004
246
0
0
Originally posted by: Rollo
I'm pretty curious to see how my 6800NU will fare.
On one hand it's pretty clear D3 loves the nV40. On the other, 12 pipes/128MBs.

If it could split the difference between the X800Pro and 6800GT AT 10x7 4x8x (42>60) AND COME IN AT 50FPS I would be a pretty happy camper.

Oh well. these benches confirmed my hunch nVidia would be the company to go with, and disproved my hunch the 5900s would be way above the 9800s. (although 32 vs 26 is a substantial percent, no one will be playing like this, and I'm not a "what would you rather ride?!?! A cow or a goat?!?!?" kind of guy)

;-)

I'll make the prediction that the 6800 will lead the X800 Pro by the same amount as the 6800GT leads the X800XTPE, at settings where the 128MB memory isn't a bottleneck. Looking at raw fillrates, the 6800 is roughly 30% slower than the GT, and the same ratio holds true for the X800Pro and XTPE.
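The raw-fillrate comparison behind that prediction works out roughly like this (pipe counts and core clocks are the commonly cited stock figures; treat them as assumptions):

```python
# Back-of-the-envelope fill-rate check. Stock pipes/clocks assumed.
def fillrate_mpix(pipes, core_mhz):
    """Theoretical peak pixel fill rate in Mpixels/s."""
    return pipes * core_mhz

gt   = fillrate_mpix(16, 350)   # 6800 GT
nu   = fillrate_mpix(12, 325)   # 6800 (NU)
xtpe = fillrate_mpix(16, 520)   # X800 XT PE
pro  = fillrate_mpix(12, 475)   # X800 Pro

nu_deficit  = 1 - nu / gt       # ~0.30 -> 6800NU ~30% behind the GT
pro_deficit = 1 - pro / xtpe    # ~0.31 -> X800Pro ~31% behind the XTPE
```

Both gaps land near 30%, which is what makes the "same relative spread" prediction plausible, memory bandwidth and the 128MB framebuffer aside.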
 

Cat

Golden Member
Oct 10, 1999
1,059
0
0
Originally posted by: VirtualLarry
Originally posted by: Cat
1. The whole point of Doom's engine is to produce nearly identical results across all the rendering paths. The dynamic lighting and stencil shadows will be used in all 4 paths.

Ideally, yes.

But do you realize, that on older cards, it takes an additional stencil pass *per light* on the scene? That would mean, that with a few additional lights, rendering could take as much as 4x the time, resulting in 1/4 the framerate, on older cards.

Unless, of course, dynamic lights/stencil shadows aren't used on those low-end cards. We'll have to see, I guess. Not too many benchmarks or IQ comparisons with D3 on older hardware yet.

Originally posted by: Cat
2. The physics sim runs at 60Hz, no more, no less, regardless of CPU. This has been explicitly stated.

Please show me where it has been explicitly stated that Doom3's physics engine will run at exactly 60Hz, no less, even on slower CPUs. I somehow doubt this. I think that perhaps you missed my mention of "interpolation".

Originally posted by: Cat
3. The timedemo is described in the article as being an unusually intensive example of the game.

But still not utilizing the physics/AI. Therefore CPU load will be higher still with those enabled. Unless Carmack somehow invented software code that actually takes "negative CPU cycles", and makes "more code, run faster". Being an x86 assembly-language programmer myself, I must state that such a thing is not possible, at least not until the physicists manage to create time-travelling CPUs.



1. Yes, I know that more passes are required on the NV10 and NV20 paths, and more fillrate is burned. They also don't support the ARB2 path, and the register combiner code is very quick. I stand by my claim that you won't lose much in the feature department aside from the dependent texture read effects, you'll just run at lower resolution. You'll also miss out on true normalization, and have to use cube-maps instead, but those are sampled just as quickly as regular 2D textures on NVidia hardware.

EDIT: I reread your post: dynamic lights and stencil shadows will, with one hundred percent certainty, be used in all paths. Hell, they work fine in the 'alpha.' There's no other way to get the game to look remotely similar.

2. The physics engine is basically an integral performed by approximation. You can't skip ticks and expect the same result, because error accumulates, and Carmack has explicitly said that 'same inputs produce the same outputs' and that the physics sim is locked at 60Hz. Please explain to me how allowing a lower tick rate will produce the same results.
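The scheme being described is the standard fixed-timestep loop: physics always advances in exact 60Hz ticks, and the renderer interpolates with the leftover fraction. A minimal sketch (names are hypothetical, not id's code):

```python
TICK = 1.0 / 60.0  # physics always steps at exactly 60 Hz

def step_tick(state):
    """Stand-in deterministic integrator: same inputs -> same outputs."""
    return state + 1

def advance(state, accumulator, frame_dt):
    """Run whole 60 Hz ticks; leftover time becomes the render blend factor."""
    accumulator += frame_dt
    while accumulator >= TICK:
        state = step_tick(state)
        accumulator -= TICK
    alpha = accumulator / TICK  # renderer blends previous/current states by alpha
    return state, accumulator, alpha
```

A slow CPU in this scheme drops rendered frames, not physics ticks; skipping ticks would change the trajectory of `state`, which is why determinism requires the fixed rate.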

3. It all depends on what the bottleneck is. Obviously this game is both CPU and GPU bound in different situations. Dropping the more GPU-intensive parts (the benchmark was unusually GPU-heavy) while increasing the CPU load could equal something close to the original benchmark.

I write (crappy) OpenGL applications at work. One of them even uses multipass ARB_fragment_program lighting.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Cat
Originally posted by: VirtualLarry
Originally posted by: Cat
1. The whole point of Doom's engine is to produce nearly identical results across all the rendering paths. The dynamic lighting and stencil shadows will be used in all 4 paths.

Ideally, yes.

But do you realize, that on older cards, it takes an additional stencil pass *per light* on the scene? That would mean, that with a few additional lights, rendering could take as much as 4x the time, resulting in 1/4 the framerate, on older cards.

Unless, of course, dynamic lights/stencil shadows aren't used on those low-end cards. We'll have to see, I guess. Not too many benchmarks or IQ comparisons with D3 on older hardware yet.

Originally posted by: Cat
2. The physics sim runs at 60Hz, no more, no less, regardless of CPU. This has been explicitly stated.

Please show me where it has been explicitly stated that Doom3's physics engine will run at exactly 60Hz, no less, even on slower CPUs. I somehow doubt this. I think that perhaps you missed my mention of "interpolation".

Originally posted by: Cat
3. The timedemo is described in the article as being an unusually intensive example of the game.

But still not utilizing the physics/AI. Therefore CPU load will be higher still with those enabled. Unless Carmack somehow invented software code that actually takes "negative CPU cycles", and makes "more code, run faster". Being an x86 assembly-language programmer myself, I must state that such a thing is not possible, at least not until the physicists manage to create time-travelling CPUs.



1. Yes, I know that more passes are required on the NV10 and NV20 paths, and more fillrate is burned. They also don't support the ARB2 path, and the register combiner code is very quick. I stand by my claim that you won't lose much in the feature department aside from the dependent texture read effects, you'll just run at lower resolution. You'll also miss out on true normalization, and have to use cube-maps instead, but those are sampled just as quickly as regular 2D textures on NVidia hardware.

2. The physics engine is basically an integral performed by approximation. You can't skip ticks and expect the same result, because error accumulates, and Carmack has explicitly said that 'same inputs produce the same outputs' and that the physics sim is locked at 60Hz. Please explain to me how allowing a lower tick rate will produce the same results.

3. It all depends on what the bottleneck is. Obviously this game is both CPU and GPU bound in different situations. Dropping the more GPU-intensive parts (the benchmark was unusually GPU-heavy) while increasing the CPU load could equal something close to the original benchmark.

I write (crappy) OpenGL applications at work. One of them even uses multipass ARB_fragment_program lighting.

LOL, everyone has suddenly become programmers I see.
 

Cat

Golden Member
Oct 10, 1999
1,059
0
0
I'm no John Carmack, but since I do this stuff at work, I think I'm a bit more qualified to comment.
 

CrazyHelloDeli

Platinum Member
Jun 24, 2001
2,854
0
0
I consider myself "fairly" technical when it comes to computer hardware; however, you guys have lost me.
 

Blastman

Golden Member
Oct 21, 1999
1,758
0
76
Originally posted by: Genx87

btw the custom shaders ect have been dropped for the 5900 as Nvidias driver team has got the drivers upto the point where the card can run his ARB2 path as good or better than the 9800.
You're correct, the NV3x is running on the "standard" ARB2 path -- but they're still replacing shaders.

Carmack was able to use the standard path for the NV3x because NV showed them they can detect the game and replace the shaders, and Carmack wrote special optimizations into the "standard" ARB2 path for the NV3x.

Originally posted by: Gamingphreek

Also ATI does too optimize for games. How in the hell do you think they do so well in HL2.

Of course ATI optimizes their drivers (in general) for games. But they don't detect games like HL2 and say "oh, here's this game -- let's use this specific optimization." Their optimizations are generic -- for all games. This is quite a bit different from saying that ATI worked with Valve to have the code in HL2 run well on their cards.

As for the game being pegged at 60fps, I believe it's internal, and if the graphics card runs faster it just renders some frames twice.
 

Cat

Golden Member
Oct 10, 1999
1,059
0
0
I don't think Carmack has said that he's put in special optimizations for the NV30, other than OPTION ARB_precision_hint_fastest. It's a one line thing.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Illissius
Originally posted by: Rollo
I'm pretty curious to see how my 6800NU will fare.
On one hand it's pretty clear D3 loves the nV40. On the other, 12 pipes/128MBs.

If it could split the difference between the X800Pro and 6800GT AT 10x7 4x8x (42>60) AND COME IN AT 50FPS I would be a pretty happy camper.

Oh well. these benches confirmed my hunch nVidia would be the company to go with, and disproved my hunch the 5900s would be way above the 9800s. (although 32 vs 26 is a substantial percent, no one will be playing like this, and I'm not a "what would you rather ride?!?! A cow or a goat?!?!?" kind of guy)

;-)

I'll make the prediction that the 6800 will lead the X800 Pro by the same amount as the 6800GT leads the X800XTPE, at settings where the 128MB memory isn't a bottleneck. Looking at raw fillrates, the 6800 is roughly 30% slower than the GT, and the same ratio holds true for the X800Pro and XTPE.

Seems like good logic to me, I've already decided my game experience with this game will not be held back by cash.

What it takes to run D3 at 10X7 4X8X at 50fps, I will buy. If the 6800NU can do it, I will bench and post.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
You deserved it for that irritating, vague, condescending and uninformative reply...like this one again..
Irritating? Perhaps if a fantasy has been burst.
Vague? There's nothing vague about the benchmark results.
Condescending? Telling the truth is now condescending, is it?
Uninformative? Again, the benchmarks are clearly visible.
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: BFG10K
You deserved it for that irritating, vague, condescending and uninformative reply...like this one again..
Irritating? Perhaps if a fantasy has been burst.
Vague? There's nothing vague about the benchmark results.
Condescending? Telling the truth is now condescending, is it?
Uninformative? Again, the benchmarks are clearly visible.

Explain yourself?

I don't see how a 9700 will have a problem at decent res (say 10x7) when the 9800XT is getting 50+ FPS at that res... How much slower can a 9700 Pro be? 20%? OK, then 40FPS. Not "unplayable" by any means.


kyle says "we again end up with settings that are extremely playable." right there
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
40 FPS average in one demo. One demo does not equal the whole game and likewise 40 FPS average doesn't mean a constant 40 FPS.

How much slower can a 9700 Pro be?
I'd expect a 9800 XT to be around 30% faster than a 9700 Pro.

kyle says "we again end up with settings that are extremely playable." right there
Kyle's comments aren't backed by his results.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: BFG10K
40 FPS average in one demo. One demo does not equal the whole game and likewise 40 FPS average doesn't mean a constant 40 FPS.

How much slower can a 9700 Pro be?
I'd expect a 9800 XT to be around 30% faster than a 9700 Pro.

kyle says "we again end up with settings that are extremely playable." right there
Kyle's comments aren't backed by his results.

in 1 demo that is extremely demanding. more so than the game. mr c clearly stated the game will play better than the demo runs.

ok, i'd say 25% and prolly never dips below 30fps.

(my pred)

btw, i still have my 8500 to compare with my 9800xt.

:roll:
 

VirtualLarry

No Lifer
Aug 25, 2001
56,552
10,171
126
Originally posted by: Cat
1. Yes, I know that more passes are required on the NV10 and NV20 paths, and more fillrate is burned. They also don't support the ARB2 path, and the register combiner code is very quick. I stand by my claim that you won't lose much in the feature department aside from the dependent texture read effects, you'll just run at lower resolution. You'll also miss out on true normalization, and have to use cube-maps instead, but those are sampled just as quickly as regular 2D textures on NVidia hardware.

EDIT: I reread your post: dynamic lights and stencil shadows will, with one hundred percent certainty, be used in all paths. Hell, they work fine in the 'alpha.' There's no other way to get the game to look remotely similar.

Well, that's a good point, there probably isn't an easy way to get the game to have that "look", sans dynamic lights and stencils, but I could have sworn that I saw a reference somewhere to some alternative implementation that was going to be used on slower machines. Maybe I imagined it?

Originally posted by: Cat
2. The physics engine is basically an integral performed by approximation. You can't skip ticks and expect the same result, because error accumulates, and Carmack has explicitly said that 'same inputs produce the same outputs' and that the physics sim is locked at 60Hz. Please explain to me how allowing a lower tick rate will produce the same results.

I thought that it was capped at 60Hz. As for the error terms accumulating, there are ways around that, but I can't say more about that, I think I'm technically still under NDA. But when you add it all up, it just seems that on a lower-powered CPU, *something* has to give. I'm still not so sure that they couldn't just scale the number of calculations/sec done in the physics engine, and then interpolate. Apparently FarCry *does* scale the level of interactions of their game physics engine along with the detail level - let me see if I can dig up the thread on that here on AT. (Edit: Found the thread here.)

Originally posted by: Cat
3. It all depends on what the bottleneck is. Obviously this game is both CPU and GPU bound in different situations. Dropping the more GPU-intensive parts (the benchmark was unusually GPU-heavy) while increasing the CPU load could equal something close to the original benchmark.

I guess both of us will just have to see the actual game in play when it comes out to know for sure.

Originally posted by: Cat
I write (crappy) OpenGL applications at work. One of them even uses multipass ARB_fragment_program lighting.

Cool. I worked some years back for a game startup working on one of the first MMORPGs. But I was the DOS/asm guy, I didn't directly work on the physics engine or design the 3D pipeline. We had some guys from WPI and Brown, respectively, doing that work. I've not really touched OpenGL at all, either, but I'm somewhat familiar with the API.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,552
10,171
126
Originally posted by: Cat
I don't think Carmack has said that he's put in special optimizations for the NV30, other than OPTION ARB_precision_hint_fastest. It's a one line thing.

He did mention something about making innocuous changes to the shaders, causing them to "drop off the fast path", meaning that NV detects particular shader code sequences used by Doom3 in their drivers, and then dynamically-replaces them with (hand-optimized?) shader code fragments specific for those cards (low-end NV3x, I guess). Apparently the higher-end NV3x cards can handle the ARB2 path O.K.

So while Carmack is no longer special-casing them in the application, NV *is* special-casing them in the driver, just before they hit the hardware. At least that's how I currently understand the state of things.

This certainly isn't the first time that NV has done this, they did it for 3DMark03 too, claiming that the shaders used in the benchmark for some of the scenes were in fact highly un-optimized, and substituted their own, with theoretically similar/equivalent IQ. Others claimed that NV was "cheating" by specifically detecting the benchmark software and optimizing for it. Which is probably true, and also probably true in Doom3's case.

Ever wonder why NV's video driver binaries are so d*mn huge? I'm guessing that things like this are why. They probably have entire databases of code/shaders/etc. to detect and replace, hidden/encrypted in there. (Well, along with the insane amount of internationalization; I never understood why they need to install any alternative-language files beyond the default/specified locale/language in Windows.)
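The mechanism being speculated about amounts to a lookup table keyed on the exact shader source, something like this toy sketch (purely illustrative; the names, hashes, and shader strings are made up, not NVIDIA's actual implementation):

```python
import hashlib

# Hypothetical detect-and-replace table: exact shader source -> tuned variant.
REPLACEMENTS = {
    hashlib.sha1(b"!!ARBfp1.0  # original interaction shader").hexdigest():
        "!!ARBfp1.0  # hand-tuned variant",
}

def compile_fragment_program(source: bytes) -> str:
    """Swap in a replacement only when the source matches byte-for-byte."""
    key = hashlib.sha1(source).hexdigest()
    # Any "innocuous change" to the source alters the hash, so the app
    # silently drops off the fast path -- consistent with what Carmack saw.
    return REPLACEMENTS.get(key, source.decode())
```

A scheme like this would explain both the speedup on unmodified shaders and the fragility Carmack described when editing them.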
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: BFG10K
You deserved it for that irritating, vague, condescending and uninformative reply...like this one again..
Irritating? Perhaps if a fantasy has been burst.
Vague? There's nothing vague about the benchmark results.
Condescending? Telling the truth is now condescending, is it?
Uninformative? Again, the benchmarks are clearly visible.




Ah, BFG. Making more friends......
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
Originally posted by: Illissius
Originally posted by: Rollo
I'm pretty curious to see how my 6800NU will fare.
On one hand it's pretty clear D3 loves the nV40. On the other, 12 pipes/128MBs.

If it could split the difference between the X800Pro and 6800GT AT 10x7 4x8x (42>60) AND COME IN AT 50FPS I would be a pretty happy camper.

Oh well. these benches confirmed my hunch nVidia would be the company to go with, and disproved my hunch the 5900s would be way above the 9800s. (although 32 vs 26 is a substantial percent, no one will be playing like this, and I'm not a "what would you rather ride?!?! A cow or a goat?!?!?" kind of guy)

;-)

I'll make the prediction that the 6800 will lead the X800 Pro by the same amount as the 6800GT leads the X800XTPE, at settings where the 128MB memory isn't a bottleneck. Looking at raw fillrates, the 6800 is roughly 30% slower than the GT, and the same ratio holds true for the X800Pro and XTPE.
That would be a first. In all the benches I've seen in other games, 6800 is above a 9800XT and below a X800P by quite a bit.
 

JBT

Lifer
Nov 28, 2001
12,094
1
81
Not sure if anyone cares, but I messed around with the HL2 beta last night with my card. I ran it at 1600x1200 on the DX9 code path with 8x/6x AA (whatever that is in Nvidia's settings) and 8x AF. I run ReForce, so I'm not entirely sure how high the FPS would have been, but it was maxing out my monitor's 72Hz refresh rate at that res quite a bit. Yes, there were a lot of times it was hovering in the 30s, but obviously I don't think any of the drivers are really optimized for a leaked beta that was released a year ago, lol. Take it for what you will; I have no real idea if the AA and AF settings were working, but I'm pretty sure AA at least was.
 

Cat

Golden Member
Oct 10, 1999
1,059
0
0
Originally posted by: VirtualLarry
Originally posted by: Cat
I don't think Carmack has said that he's put in special optimizations for the NV30, other than OPTION ARB_precision_hint_fastest. It's a one line thing.

He did mention something about making innocuous changes to the shaders, causing them to "drop off the fast path", meaning that NV detects particular shader code sequences used by Doom3 in their drivers, and then dynamically-replaces them with (hand-optimized?) shader code fragments specific for those cards (low-end NV3x, I guess). Apparently the higher-end NV3x cards can handle the ARB2 path O.K.

So while Carmack is no longer special-casing them in the application, NV *is* special-casing them in the driver, just before they hit the hardware. At least that's how I currently understand the state of things.

This certainly isn't the first time that NV has done this, they did it for 3DMark03 too, claiming that the shaders used in the benchmark for some of the scenes were in fact highly un-optimized, and substituted their own, with theoretically similar/equivalent IQ. Others claimed that NV was "cheating" by specifically detecting the benchmark software and optimizing for it. Which is probably true, and also probably true in Doom3's case.

Ever wonder why NV's video driver binaries are so d*mn huge? I'm guessing that things like this are why. They probably have entire databases of code/shaders/etc. to detect and replace, hidden/encrypted in there. (Well, along with the insane amount of internationalization; I never understood why they need to install any alternative-language files beyond the default/specified locale/language in Windows.)


I'm not denying that the driver is doing something; I'm just correcting what you posted. I'm more concerned about the spread of misinformation than who looks good or bad. The Catalyst drivers are bigger than the Forceware drivers; make what you will of that.

Forceware 61.76: 12 MB
EDIT: International version is 20 or so MB.
Catalyst 4.8: 28 MB

Regarding the physics again, you're most definitely right about tweakable options in Far Cry.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Zebo
Originally posted by: BFG10K
You deserved it for that irritating, vague, condescending and uninformative reply...like this one again..
Irritating? Perhaps if a fantasy has been burst.
Vague? There's nothing vague about the benchmark results.
Condescending? Telling the truth is now condescending, is it?
Uninformative? Again, the benchmarks are clearly visible.

Explain yourself?

I don't see how a 9700 will have a problem at decent res (say 10x7) When the 9800xt is getting 50+ FPS at that res/... How much slower can a 9700pro be> 20% Ok then 40FPS Not "unplayable" by any means.


kyle says "we again end up with settings that are extremely playable." right there

There is no question that the 9700pro has been an incredible video card for the past two years and still plays most games easily. But with these new "very hungry" games coming out, like Far Cry, DoomIII, Stalker, HL2, the 9700 is coming to the end of its glory. I would say that bare minimum gameplay these days requires a 9700 or 5900XT with some of the candy turned off. A 9800XT or 5950 would be better, but still not really providing the horsepower needed for high framerates in newer games with AA/AF.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: oldfart
Originally posted by: Illissius
Originally posted by: Rollo
I'm pretty curious to see how my 6800NU will fare.
On one hand it's pretty clear D3 loves the nV40. On the other, 12 pipes/128MBs.

If it could split the difference between the X800Pro and 6800GT AT 10x7 4x8x (42>60) AND COME IN AT 50FPS I would be a pretty happy camper.

Oh well. these benches confirmed my hunch nVidia would be the company to go with, and disproved my hunch the 5900s would be way above the 9800s. (although 32 vs 26 is a substantial percent, no one will be playing like this, and I'm not a "what would you rather ride?!?! A cow or a goat?!?!?" kind of guy)

;-)

I'll make the prediction that the 6800 will lead the X800 Pro by the same amount as the 6800GT leads the X800XTPE, at settings where the 128MB memory isn't a bottleneck. Looking at raw fillrates, the 6800 is roughly 30% slower than the GT, and the same ratio holds true for the X800Pro and XTPE.
That would be a first. In all the benches I've seen in other games, 6800 is above a 9800XT and below a X800P by quite a bit.

Yeah, but it's reasonable because the X800XT doesn't get its arse handed to it by the 6800GT much in benches, either. It is reasonable to assume that the gap between the X800 Pro and the 6800nu will be something similar to that between the 6800U and the X800 XT; maybe +/- 10%.

I would venture a guess that the 6800nu's 12 pixel pipes and slower core/memory will hurt it a bit more, and I would predict that the gap between the 6800nu and the X800 Pro will be a bit narrower in D3 than between the X800XT and the 6800U, but again it's just speculation. We shall see soon, at any rate.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
Originally posted by: jiffylube1024

Yeah, but it's reasonable because the X800XT doesn't get its arse handed to it by the 6800GT much in benches, either. It is reasonable to assume that the gap between the X800 Pro and the 6800nu will be something similar to that between the 6800U and the X800 XT; maybe +/- 10%.

I would venture a guess that the 6800nu's 12 pixel pipes and slower core/memory will hurt it a bit more, and I would predict that the gap between the 6800nu and the X800 Pro will be a bit narrower in D3 than between the X800XT and the 6800U, but again it's just speculation. We shall see soon, at any rate.
It's kind of hard to say until we see benches. How important is 128 vs 256MB of memory?
 

gururu

Platinum Member
Jul 16, 2002
2,402
0
0
Originally posted by: keysplayr2003

There is no question that the 9700pro has been an incredible video card for the past two years and still plays most games easily. But with these new "very hungry" games coming out, like Far Cry, DoomIII, Stalker, HL2, the 9700 is coming to the end of its glory. I would say that bare minimum gameplay these days requires a 9700 or 5900XT with some of the candy turned off. A 9800XT or 5950 would be better, but still not really providing the horsepower needed for high framerates in newer games with AA/AF.

I've been zooming through Far Cry these last few weeks on my 9700 Pro. Benches with FRAPS put me well above 30 fps just about anywhere. That's with the highest in-game settings available, hardware 16x quality AF, and a res of 1024. Benches done in this game with this card (e.g. Anand's Far Cry 1.2 tests) haven't reflected the 'true' playability I've experienced.

Hardocp's Kyle said, "Again I caution you as John did to keep in mind that these are just benchmarks as the true DOOM 3 gaming experience is looking to be very flexible and forgiving in terms of hardware needs."

As in Far Cry, I doubt the benches are indicative of real-world playability on older-generation cards. I'm planning on at least 30fps in D3 at 1024 with the highest quality settings, no AA or AF. Benches with the 9800XT put it at around 46fps WITH 8xAF. It should be well in the range of 50-55 fps without AF. If the 9700 Pro is 20-30% slower than a 9800XT, then that would put it at about 35-44 fps at these settings without AF.
 

rbV5

Lifer
Dec 10, 2000
12,632
0
0
I'm not denying that the driver is doing something; I'm just correcting what you posted. I'm more concerned about the spread of misinformation than who looks good or bad. The Catalyst drivers are far bigger than the Forceware drivers; make what you will of that.

Forceware 61.76: 12 MB
Catalyst 4.8: 28 MB

The difference isn't exactly that large. The Cats are bigger, but only ~4MB apples to apples, not 16MB:
Forceware Version: 61.76 International: 20.6MB
Catalyst 4.7 (4.8 hasn't been released): 28.8MB minus 4.29MB WDM capture drivers = 24.51MB
 