x800 pro v geforce 6800 gt (poll)


CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Originally posted by: BFG10K
Based on the specs and the latest results I'd have to say the GT is clearly superior to the X800 Pro.

then you obviously cannot comprehend what you read.

Since current architectures are single TMU configurations with a direct tie-in of pipelines to shader units, then yes, pipes do mean an awful lot.

again, not without considering the rest of the architecture. vertex/shader processing within the pipelines, core and memory clocks, etc. all play an integral part. any single part doesn't mean anything by itself.

Did you see the FiringSquad review? When the Pro wins it's usually by a tiny margin, but when the GT wins the gap is often quite large. Also the GT is potentially always stronger in shader situations.

did you?

as i said, the ogl tests were won by nvidia, as expected.. hell the nv30/35 beat your precious 9700p in ogl.. what's new? funny it meant nothing when nv beat your card in ogl last round, but this round it's more meaningful?

where, exactly, with 4xaa/8xaf did the gt win with a "gap that is quite large"?

i posted specific examples, which in your pomposity you refute offhand w/o providing anything of substance.

admittedly i didn't include traod, but i never considered it in favor of ati before; it would be hypocritical of me to consider it in favor of nv now. the rest are close regardless of who wins.

(fyi the cat 4.7 'beta' includes a significant ogl improvement, at least in CoD),
I'll believe that when I see the results. For now those drivers don't exist so any comments about them are simply speculation.

geez BFG.. if you're going to be a pompous know-it-all, at least be more convincing. they're out, they're available, and they've been tested quite a bit. several threads on it @ r3d; you don't even need to search....

it's amazing how one-sided and opinionated you always seem to be.. nothing is ever equitable.. whichever side you happen to favor always seems to trounce the other... but the truth is i've yet to see anything which shows these cards being anything but pretty comparable.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Originally posted by: Marsumane
That's all true, but u still have to consider the longevity of the card if ps3.0 actually does help iq or performance to any degree. It would make the card that much more valuable in the future after its lifetime.

the problem is ps3 won't help iq. and while i agree features can certainly extend the longevity of a part, the degree to which it will extend its life is highly subjective. many cards "support" dx9, but a helluva lot of em are being upgraded right now to nv40/r420 - just as DX9 is just now starting to be used..

since sm3 (ps3/vs3) is not much more than an 'extension' of sm2, it would make sense to think it will be adopted quicker; however, rendering effects which can be done in sm3 can be done in sm2 - the only question is the 'cost' of doing them (both from a performance viewpoint as well as a programming standpoint). this is the part we really have no 'proof' of, but from the many discussions i've followed from both sides as well as developer comments, nothing has been said to make me think it will make much, if any, difference.

at any rate, while a 9700p is still certainly usable for running farcry @ 1024 no aa/af, it is 'obsolete' when compared to a card that can run it @ 1600 4xaa/8xaf - with higher framerates. sure, the 9700p is still an 'upgrade' for many, however it simply doesn't compare with the 6800gt/x800pro and higher cards. even the vanilla 6800 will likely be faster...
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
then you obviously cannot comprehend what you read.
:roll:

again, not without considering the rest of the architecture. vertex/shader processing within the pipelines, core and memory clocks, etc. all play an integral part. any single part doesn't mean anything by itself.
Who said I didn't consider them? Apparently you didn't, though, judging by your first comment.

hell the nv30/35 beat your precious 9700p in ogl..
Actually no, not always.

funny it meant nothing when nv beat your card in ogl last round, but this round it's more meaningful?
Did you see the COD results last round?

where, exactly, with 4xaa/8xaf did the gt win with a "gap that is quite large"?
Where exactly did I single out that setting?

i posted specific examples, which in your pomposity you refute offhand w/o providing anything of substance.
Why don't you calculate the performance delta between each card and see which one tends to have the largest delta whenever it wins.

admittedly i didn't include traod, but i never considered it in favor of ati before; it would be hypocritical of me to consider it in favor of nv now. the rest are close regardless of who wins.
Selectively ignoring results is your problem, not mine.

they're out, they're available, and they've been tested quite a bit. several threads on it @ r3d; you don't even need to search...
I didn't see them tested in that review, did you?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: GeneralGrievous
I don't see the point in this 6800 Ultra Extreme when it's still slower than the XT in most benchmarks. If the GT can be considered faster than the Pro, the XT should be considered that much faster than the Ultra by a similar margin.

most being 1 more than XT-PE :roll: with horrible beta drivers.
 
Jun 18, 2004
105
0
0
quote:

4. Possible optimizations with nForce3 chipset.

not one shred of evidence of this has been shown.

There was an article somewhere showing nVidia cards gaining performance when used in nForce3 250Gb chipset motherboards.

Can't remember the site now but hopefully someone can.

This is one of my reasons for ordering a 6800, as hopefully it will gain something when the A64 system I ordered arrives.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Who said I didn't consider them? Apparently you didn't though because of your first comment.

read again. you attempted (poorly, i might add) to refute my original statement: "pipes mean nothing without considering the rest of the architecture".. twisting things around doesn't change the accuracy of my original statement.

Actually no, not always.

often enough

Did you see the COD results last round?

yea, about on par or a bit slower than a vanilla 5900 @ 1024x768 4xaa/8xaf, depending on which review you read. still the exception and not the rule, as in general it falls behind in ogl.

Where exactly did I single out that setting?

so now non aa & non af settings are the important ones? funny coming from the guy who always stressed the importance of aa/af when it benefited your point of view.. hmmm...

Why don't you calculate the performance delta between each card and see which one tends to have the largest delta whenever it wins.

and the relevance of that is.. ? oh wait.. it's not the non aa/af settings that are important, right? :roll:

again, with the exception of the ogl titles which i've already pointed out, they're within a few %... and once again, ogl performance (which has always been an achilles heel for ati in nvidia's favor) has improved in the new beta, and is claimed to improve quite a bit with ati's ogl re-write (tho i'm not holding my breath - i'll believe the "re-write" when it happens).

Selectively ignoring results is your problem, not mine.

heh.. actually reading the posts shows exactly the opposite - you pick and choose your replies, not me. also, your whole change in attitude regarding "non aa/af" quality settings is quite a contradiction from when you were slamming nv and praising ati's last couple of generations.

you told rollo those settings were unimportant when you were pimping your 9700p vs his 5800u (among others.. i recall as far back as the original ut/q3 days you stressed the importance of aa/af), yet now you're telling me those settings are important as nv actually bests the x800 in several games at the "non quality" settings...

while i can certainly agree there are reasons for choosing the gt over the pro, i can also understand the opposite decision - the cards are in fact that close at this point. while i will have a gt soon, after over a month i have little to complain about regarding the performance of the x800pro.

edit: oops.. forgot one:

I didn't see them tested in that review, did you?

irrelevant. you stated they "didn't exist".. and you were wrong. if you want to "qualify" your statements, do so when you make them, not when you are trying to cover your ass at a later time.

dunno why everything has to be an argument even when you don't have a "leg to stand on".
 
Mar 18, 2004
339
0
0
Originally posted by: nitromullet
So why don't you help me "get it".
You're telling me that the X800XT and PE are the exact same cards - core speeds, mem speeds, down to the last transistor? Why call it a PE then?
I think only the built-by-ATi cards are called Platinum Edition, while the ones made by third parties are just XT's. I am basing this on the fact that ATi doesn't sell just a regular XT. As far as I can tell, the XT PE built-by-ATi and the XT's made by others are the same card.

I'm pretty sure the X800XT is just the card. If you get an X800XT PE you get a cool looking ATI t-shirt, a demo CD, and other useless junk. But my vote goes to X800pro because I'd mod it into an X800XT.
 
Apr 14, 2004
1,599
0
0
most being 1 more than XT-PE with horrible beta drivers.
Depends on where you look. Anandtech had very conservative XTPE benchmarks compared to other sites. And while Nvidia may have released better drivers since then, so has ATI.
 

Blastman

Golden Member
Oct 21, 1999
1,758
0
76
I'd take the X800pro.

X800 has more pure fillrate...

X800 - 475 x 12 = 5700 Mpixels/s
6800GT - 350 x 16 = 5600 Mpixels/s
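(for anyone checking the math, here's the arithmetic behind those figures as a minimal sketch - and since both parts are single-TMU designs, as noted earlier in the thread, the texel numbers work out the same:)

# theoretical fillrate = core clock (MHz) x pixel pipelines, in Mpixels/s
cards = {
    "X800 Pro": (475, 12),
    "6800 GT": (350, 16),
}
for name, (clock_mhz, pipes) in cards.items():
    print(f"{name}: {clock_mhz} x {pipes} = {clock_mhz * pipes} Mpixels/s")
# X800 Pro: 475 x 12 = 5700 Mpixels/s
# 6800 GT: 350 x 16 = 5600 Mpixels/s -> a lead of 100/5600, about 1.8%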

Sites like FS shouldn't be running the 6800's with the hacked down Brilinear. Or, one can simply say the X800 has better IQ. The only place the GT has an advantage over the X800pro is OGL. Problem is, 90%+ of games are DX.

The X800pro is faster where it really counts -- DX9, AA/AF, and shader intensive games. As new games come out (where NV hasn't had a chance to "hack/cheat" reduce IQ somehow) the X800pro is looking like a good match for the 6800U (so it beats the 6800GT pretty easily). Both the HardOCP and hardware.fr reviews showed this. Although the cards are generally fairly close.

I'm also betting that the X800pro could prove to be substantially faster in some of the new shader intensive games than the 6800GT (early indications). So the better shader performance of the X800pro will probably win out over the SM3.0 of the GT.

Edit: Add, the X800 also has better AA -- a usable 6xAA.
 

Illissius

Senior member
May 8, 2004
246
0
0
Eh. Dislike dissecting people's posts sentence by sentence, but I couldn't resist, so here goes:

X800 has more pure fillrate...

X800 - 475 x 12 = 5700 Mpixels/s
6800GT - 350 x 16 = 5600 Mpixels/s
By about 1.8%. It's a factor, but only one you should take into account if the cards come out equal in every other respect, and even then the color of the packaging might have more significance. A slight difference in architecture or driver efficiency, or a slightly better overclock by a few MHz from either card, will make it entirely irrelevant.

Sites like FS shouldn?t be running the 6800?s with the hacked down Brilinear.
Then sites shouldn't be running the X800's with their hacked down brilinear either (which is tough, seeing as ATi doesn't provide a means to turn it off). News flash: both ATi and nVidia have optimizations for both trilinear and anisotropic, and neither is noticeable to the naked eye without artificially coloring mipmaps or high magnification. Nothing you'd ever notice during a game, certainly less than the gain in FPS they provide. (Go here. You can open up a lossless PNG image by clicking on the JPEGs. Open them in separate windows/tabs with the image in the exact same position on each, then switch between them if you need convincing. There. Is. No. Difference.)

The only place the GT has an advantage over the X800pro is OGL. Problem is, 90%+ of games are DX.
It's more like this: at OGL, the GeForces demolish the Radeons pretty much without exception. At DX they're even, with the win going either way depending on the game. (Assuming their fillrate is close, as with the 6800GT vs. X800 Pro; with its 30% higher fillrate the X800XT will beat the 6800U pretty much always.) The Radeons' weakness at OGL seems to be a driver issue, though, so it's possible that they'll catch up if ATi improves them, but I wouldn't count on it.

The X800pro is faster where it really counts -- DX9, AA/AF, and shader intensive games.
One out of four is valid half of the time: The GeForces suffer a disproportionately large performance loss at AF when they have their optimizations turned off and the Radeons don't. To be fair, they probably still lose a bit more than the Radeons, but it's not a huge difference. (If you'd care to back up the DX9, AA, and "shader intensive games" parts of that with any examples not from HardOCP, I'm willing to listen.)

where NV hasn't had a chance to "hack/cheat" reduce IQ somehow
:/
Both parties are guilty of this, whether intentionally or not (most likely not, I'd say they're just driver bugs). I won't cite examples from nVidia since you no doubt know them by heart. From ATi I will mention the disappearing shadows in Splinter Cell and Far Cry.
Since what you probably meant to imply was optimizations, I'll address that as well. It's funny, you see, because it's precisely the other way around: current games are not optimized for the GeForce 6 series, since the games were developed long before the cards were released. I understand they have quite a bit of latent potential by optimizing to execute multiple instructions per clock (per pipe, per whatever makes up parts of the pipe, etc.), so it'll be interesting to see how much they gain once games *do* start optimizing for them.

I'm also betting that the X800pro could prove to be substantially faster in some of the new shader intensive games than the 6800GT (early indications). So the better shader performance of the X800pro will probably win out
Link?

the X800 also has better AA -- a usable 6xAA.
I'll give you that. Although I've heard that nVidia is preparing a 4xMS 2xSS form of 8x AA for a future driver which should considerably increase performance for that mode.




@ GeneralGrievous:
Actually a usable 12x AA effective with temporal AA coming up.
Hmm. Is that possible? IIRC, temporal AA works by choosing reference points (or sample points? dunno what they're called) at random out of those available. Thus, it can simulate an already available mode of AA faster, such as 4xAA with 2xAA performance (with various caveats), but can't make up entirely new reference points to be able to do 12x AA like you mention. I don't know the specifics of this, though, so correct me if I'm wrong.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Hmm. Is that possible? IIRC, temporal AA works by choosing reference points (or sample points? dunno what they're called) at random out of those available. Thus, it can simulate an already available mode of AA faster, such as 4xAA with 2xAA performance (with various caveats), but can't make up entirely new reference points to be able to do 12x AA like you mention. I don't know the specifics of this, though, so correct me if I'm wrong.

ok, i'll correct you

a 12x "sample" is not required; it uses 2 "different" 6xaa samples, and "tricks" the eyes into seeing a 12x sampled image.
 

Blastman

Golden Member
Oct 21, 1999
1,758
0
76
Then sites shouldn't be running the X800's with their hacked down brilinear either (which is tough, seeing as ATi doesn't provide a means to turn it off). News flash: both ATi and nVidia have optimizations for both trilinear and anisotropic, and neither is noticeable to the naked eye without artificially coloring mipmaps or high magnification. Nothing you'd ever notice during a game,
ATI's adaptive Tri = Full Trilinear and is better than NV's Brilinear -- period. The Brilinear on the 60.72 was supposedly better than in the past -- but I'm willing to bet NV is hacking it back down to previous levels on the newer drivers.


extremetech mipmaps on nv Bri

In this close-up of the floor at 4X magnification, we can see what the filtering looks like on the GeForce 6800 Ultra (with and without trilinear optimizations) and on the Radeon X800 XT. ... ATI's texture filtering is noticeably better.

.. At DX they're even, with the win going either way depending on the game.
If the 6800GT is so much faster in OGL, and even in DX, how on earth did the X800pro beat the 6800Ultra in both the HardOCP review and the 8-game hardware.fr review?

In shaders. NV is already running lower shader paths in Halo and Farcry, and it looks like they've hacked things back to 16bit in Farcry (graphics anomalies) to try and keep up.

In the High-Dynamic-Range lighting shader test... (NV needs to run 32bit precision, otherwise you get anomalies)...

X800 XT - 131.29
6800U - 61.26

The X800 is twice as fast as the 6800. I imagine the margin between the Pro and the GT is about the same. Will be interesting to see if NV can improve this.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: Blastman
I'd take the X800pro.

X800 has more pure fillrate...

X800 - 475 x 12 = 5700 Mpixels/s
6800GT - 350 x 16 = 5600 Mpixels/s

LOL Woot! a 1.75% fill rate advantage! :roll:

Sites like FS shouldn't be running the 6800's with the hacked down Brilinear.
Why not? They use ATI's hacked down brilinear.

Or, one can simply say the X800 has better IQ.
Even when you disable nVidia's brilinear in the drivers, Chief?

The only place the GT has an advantage over the X800pro is OGL. Problem is, 90%+ of games are DX.
I play shooters; lots of them are based on Carmack's OGL engines.

The X800pro is faster where it really counts -- DX9
LOL talk about caring about selective games!

AA/AF, and shader intensive games.
LOL- sure thing Big Chief

As new games come out (where NV hasn't had a chance to "hack/cheat" reduce IQ somehow) the X800pro is looking like a good match for the 6800U (so it beats the 6800GT pretty easily). Both the HardOCP and hardware.fr reviews showed this.
LOL- no bias there. And good methodology everyone approves of! :roll:

Although the cards are generally fairly close.
You don't seem to think so in the preceding fanboy rant....

I'm also betting that the X800pro could prove to be substantially faster in some of the new shader intensive games than the 6800GT (early indications). So the better shader performance of the X800pro will probably win out over the SM3.0 of the GT.
Time will tell.

Holy crap. I pretty much echoed Illissius without ever having read his/her post. Illissius, you don't look like Denise Richards and like strong beer, duck hunting, and bass fishing by any chance, do you? LOL
 

Illissius

Senior member
May 8, 2004
246
0
0
a 12x "sample" is not required; it uses 2 "different" 6xaa samples, and "tricks" the eyes into seeing a 12x sampled image.
I understand the basics of how it works, but aren't the possible sample points hardcoded into the chip? I would think so; otherwise, for example, nVidia could/would have enabled rotated grid AA on the nv3x series with a driver update. And if so, since 6xAA is the highest the Radeons support, I'd assume there aren't any further possible sample points for it to "trick" me into seeing.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Originally posted by: Illissius
a 12x "sample" is not required; it uses 2 "different" 6xaa samples, and "tricks" the eyes into seeing a 12x sampled image.
I understand the basics of how it works, but aren't the possible sample points hardcoded into the chip? I would think so; otherwise, for example, nVidia could/would have enabled rotated grid AA on the nv3x series with a driver update. And if so, since 6xAA is the highest the Radeons support, I'd assume there aren't any further possible sample points for it to "trick" me into seeing.

heh.. apparently you don't understand the basics of it. check it out here: http://www.hardocp.com/article.html?art=NjExLDM=

when i stated 2 "different" samples, i meant different as in it doesn't take 2 of the same samples - if it did that, you would simply have regular aa

also, ati's aa is programmable (edit: or perhaps more appropriately, r3xx/r420 features 'programmable AA patterns'?).. nvidia's is 'hardcoded'. this is why you won't see 'temporal' aa on nvidia, at least in this generation.

admittedly this is only beneficial under certain conditions, and hardly something i would solely base a purchasing decision on, but it's still an interesting feature.
 

Illissius

Senior member
May 8, 2004
246
0
0
I understand it, but apparently I was incorrect in the assumption that it's limited to the sample points already defined by the existing AA patterns. Interesting.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Originally posted by: Illissius
I understand it, but apparently I was incorrect in the assumption that it's limited to the sample points already defined by the existing AA patterns. Interesting.

well the article i referenced isn't the best (just the easiest one i could find), but you can see by the pics in their examples how it 'alternates' (that's the term i was looking for!) the sample patterns.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
read again. you attempted (poorly, i might add) to refute my original statement: "pipes mean nothing without considering the rest of the architecture".
The only problem here is that you're making it sound like I only considered pipes when in reality I didn't.

often enough
When nVidia won OpenGL back then, the difference wasn't that large, and usually at higher settings (which are the most important) ATi basically matched them. But when ATi pulled ahead in shader games the difference was often as high as 100%.

Using that evidence one can conclude the R3xx was overall the superior card and I did exactly that.

and not the rule, as in general it falls behind in ogl.
Yeah but not by much. And when nVidia fell behind in DirectX 9 they got smashed. That's the point.

A lead isn't just a lead; it needs to be considered in the grand context of things.

so now non aa & non af settings are the important ones?
Not at all - see above.

and the relevance of that is.. ? oh wait.. it's not the non aa/af settings that are important, right
The relevance is that if card 1 wins by a small factor in one place and card 2 wins by a large factor in another situation, you can't claim the cards are equal.

But because you're too lazy to back your own claims I'll calculate the figures using AA & AF (or just AA if it's not possible) and work out the highest performance delta between each card for any given game.

GT: 43% faster in COD, 12% faster in IL-2, 11% faster in LOMAC
Pro: 12% faster in Far Cry.
Equal: Tomb Raider & UT2004 (since both cards shuffle around when it comes to leading).

If that's your definition of equality I suggest you consult a dictionary.

And nowhere did a 5950 smash my 9700 Pro by 43% in COD, especially not at 8xAF + 4xAA. So again instead of just saying "this card is faster" think about the bigger picture before making a flawed claim and/or comparison.
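(to make the arithmetic explicit: the delta here is just (a - b) / b expressed as a percentage - a quick sketch, with the figures below purely illustrative:)

def delta_percent(fps_a, fps_b):
    # percentage by which card A leads card B
    return (fps_a - fps_b) / fps_b * 100

print(round(delta_percent(143, 100), 1))  # 43.0 -> what a "43% faster" result means
print(round(delta_percent(156, 140), 1))  # 11.4 -> a 156 vs 140 fps result, by contrast, is only an ~11% lead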

also, your whole change in attitude regarding "non aa/af" quality settings is quite a contradiction from when you were slamming nv and praising ati's last couple of generations.
My settings have never changed (as is proven above by my calculations) but rather the settings that you were using as evidence to back your claims have never been established. In light of my calculations we can at least conclude that you chose to ignore the AA/AF results.

Furthermore your indirect claim that I only back the card that I own is quite frankly ludicrous given that I own neither of the two cards in question.

irrelevant. you stated they "didn't exist".. and you were wrong.
They don't exist in that review.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
The only problem here is that you're making it sound like I only considered pipes when in reality I didn't.

the OP stated one card is better than the other because it had more pipes. i countered that # of pipes is really irrelevant without considering the rest of the architecture.. then you jumped in to refute my statement. you failed to do so, as your argument had no merit in the context of the flow of the thread.

# of pipes is irrelevant without taking into account the entire architecture. agree, or qualify (expand) it.. but disputing it is ridiculous, as my statement is entirely accurate. that you're attempting to distract from, or change, the original argument in and of itself makes you inaccurate in the context of this statement - i never said 'pipes' had no influence whatsoever on performance.. but neither was that the point.

there's an all too common perception in these forums that more pipes equates to greater performance, and that's untrue. obviously it does if the rest of the architecture remains the same (ie a 12 pipe nv40 compared to a similarly clocked 16 pipe nv40), however a 12 pipe r420 is certainly capable of being faster than a 16 pipe nv40, as the rest of the architecture is quite different. people focus far too much on the # of pipes without considering the actual power of the 'pipe' itself, or the accompanying architecture. hell, i've seen posts where people assume the 'pipe' is what carries data to memory; it's a very misunderstood term.

another 'subsystem', the memory architecture, receives far less credit regarding performance than it should... it's not all clock speeds and pipes...

as for the "huge" performance gap.. i suppose you can apply your own logic to try and make your conclusions come out the way you want them to, but the bottom line is that in the settings you previously stated (adamantly as a matter of fact): aa/af with the excpetion of a single app (CoD - i'm surprised you're not claiming "cheat" as you so often liked to do any time nv had a performance increase), both cards perform similarly. even in CoD, with aa/af the x800pro is 140fps vs 156fps for the 6800GT - not exaclty a noticeable (or even significant) difference. but whatever... i guess if you call that 'huge', i can simply attribute it to your using a different definition than most.

My settings have never changed (as is proven above by my calculations) but rather the settings that you were using as evidence to back your claims have never been established. In light of my calculations we can at least conclude that you chose to ignore the AA/AF results.

again, i'll stick with my observation that your criteria has changed.. your 'calculations' are misguided at best, as stated above - 16-17 fps @ 150fps is not a 'huge' difference. while it's certainly an advantage, it's also certainly not noticeable. even @ 2048 res it's only a 13fps difference (55 vs 68).

as for my choice to "ignore the aa/af results", again you are incorrect. my choice was to 'ignore' the non aa/af results, as i always use aa/af. frankly (and you've probably used this logic previously), what's the point in owning a $4-500 video card if you don't use aa/af?

They don't exist in that review.

and again, qualify that in the beginning - not afterwards when it's convenient to support your point of view. you said they didn't exist; you didn't say they "didn't exist in the review", so your statement was still incorrect despite your adding a qualifier later in the argument.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
# of pipes is irrelevant without taking into account the entire architecture. agree, or qualify (expand) it..
I did expand it by discussing other aspects of the architecture:
Since current architectures are single TMU configurations with a direct tie-in of pipelines to shader units, then yes, pipes do mean an awful lot.

Nowhere did I say "you are wrong" or words to that extent.

as for the "huge" performance gap.. i suppose you can apply your own logic to try and make your conclusions come out the way you want them to,
My own logic? You do understand the concept of a performance delta, right?

(CoD - i'm surprised you're not claiming "cheat" as you so often liked to do any time nv had a performance increase)
Without evidence to say nVidia is cheating, it's simply a strawman on your part to do so in this case.

both cards perform similarly.
Heh, similarly being 43% faster. And you were questioning my definitions and logic?

again, i'll stick with my observation that your criteria has changed..
It has done no such thing.

your 'calculations' are misguided at best, as stated above
Again, you do understand the concept of a performance delta and using percentages to calculate differences? Because from the way you talk it doesn't appear that you do. If you did, you wouldn't continue these nonsensical "equality" arguments you keep coming up with.

as i always use aa/af. frankly (and you've probably used this logic previously), what's the point in owning a $4-500 video card if you don't use aa/af?
If those are the settings you chose then your original equality claim is quite simply totally incorrect.

and again, qualify that in the beginning
Why? Just because something exists doesn't mean it's relevant. Maybe an alpha Cat 4.8 is sitting on an ATi dev's HD and it's promised twice the OpenGL performance. Well maybe, but so what? It's simply speculation at this point, much like the 4.7 drivers are. They could be faster but until they're actually tested in a proper review they can't be used to prove anything concrete.
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
i'll pull a "BFG" and only answer questions as i feel like, ignoring much of the rest..

Heh, similarly being 43% faster. And you were questioning my definitions and logic?

like i said.. you pick and choose to back up your thinking:

as for the "huge" performance gap.. i suppose you can apply your own logic to try and make your conclusions come out the way you want them to, but the bottom line is that in the settings you previously stated (adamantly as a matter of fact): aa/af with the excpetion of a single app (CoD - i'm surprised you're not claiming "cheat" as you so often liked to do any time nv had a performance increase), both cards perform similarly. even in CoD, with aa/af the x800pro is 140fps vs 156fps for the 6800GT - not exaclty a noticeable (or even significant) difference. but whatever... i guess if you call that 'huge', i can simply attribute it to your using a different definition than most.

so 140fps vs 156fps.. and you claim 43% faster.. umm.. yea, i question your definition AND logic :roll:

Why? Just because something exists doesn't mean it's relevant. Maybe an alpha Cat 4.8 is sitting on an ATi dev's HD and it's promised twice the OpenGL performance. Well maybe, but so what? It's simply speculation at this point, much like the 4.7 drivers are. They could be faster but until they're actually tested in a proper review they can't be used to prove anything concrete.

another of your idiotic analogies.. first, i never claimed an "alpha Cat 4.8" existed.. secondly, not only was i running the beta i spoke of, there are many posts by others reporting faster CoD performance.. your comparison is at best idiotic; at worst another attempt at spreading FUD...

now to finish watching LOTR on the 50" lcd my wife got me for father's day
 