Originally posted by: chizow
Yep, and I was right there as well. G80 to G80 GTS, G80 to G92, G92 to G94, G92 to GT200, GT200 to GT200b. They ALL show TMU and SP are less significant than ROP when it comes to performance.
Originally posted by: Azn
No need to reply to marketing jargon from a guy who thinks ROP is the biggest factor when it comes to performance. :laugh: You even had a thread about it. ROFL...
Based on AT's review that's right, however that wasn't the case with the original 9800GTX and GTS 512MB. Still doesn't change the fact the GTX 260 always outperforms the 9800GTX+.
Ya, it could be anything but at the end of the day GTX 280 SLI distances itself from both and GTX 295 ends up looking like GTX 260 SLI. That's the point.
Yep, and raw performance would be just as important at 1680 as at 2560, so if bandwidth isn't an issue as you've repeatedly claimed, you'd certainly have to acknowledge the results at 1680 with 4xAA are completely relevant in proving the point that ROPs are more significant than SPs and TMUs, given the performance differences between GTX 280 SLI, 260 SLI and 295.
Because lower resolutions have fewer pixels to draw per frame, regardless of AA.
So is bandwidth an issue at lower resolutions or not? You keep arguing both, either bandwidth is important or only "RAW performance", which is it?
I'd love for you to run some benchmarks at 1680 and see which yields a bigger increase: core, shader or memory. I already know the difference, but now that you have a somewhat relevant part you might find out more on your own, so you can stop posting nonsense about SPs, TMUs, and bandwidth.
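The clock-scaling test being proposed can be sketched in a few lines. All the fps numbers below are made-up placeholders, not real benchmark results; the point is just the arithmetic for comparing which clock domain scales best.

```python
# Sketch of the proposed test: bump core, shader, and memory clocks
# by 10% one at a time and see which change moves the frame rate most.
# The fps values are hypothetical placeholders.

baseline_fps = 50.0

# fps measured after a +10% bump to each clock domain, one at a time
runs = {"core": 54.0, "shader": 51.0, "memory": 52.0}

for domain, fps in runs.items():
    gain_pct = (fps - baseline_fps) / baseline_fps * 100
    # scaling efficiency: % fps gained per % clock added
    print(f"{domain}: +{gain_pct:.1f}% fps, {gain_pct / 10:.2f}x scaling")
```

A scaling factor near 1.0x for a domain would suggest that domain is the bottleneck; near 0x, that it isn't.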
Originally posted by: chizow
LOL, shame really, you might actually learn something. How'd that 4850CF set-up treat you btw?
Originally posted by: MegaWorks
Don't worry BFG10K, I really don't give a shit about his comments.
Only after a ~25% increase to core clock to make up for its reduction in ROPs. Yet it still performs similarly to G80 despite nearly 50% increases to TMU and SP performance. Based on your theoreticals G92 should be beating G80 by 50%, but it clearly does not. Just as all of the other examples show, ROP has a greater impact on performance than SP and TMU, contrary to what you've claimed.
Originally posted by: Azn
That's quite funny considering G80 gets hammered by G92 when it comes to raw performance. :brokenheart:
GTX 260 is always 5-10% faster despite similar or lower theoreticals (SP and texture fillrate) than the 9800GTX+, and often much more than that once AA is enabled. I guess it's probably not a coincidence that GTX 260 also has ~40% more ROPs and pixel fillrate....
Wrong! 9800gtx is 8fps faster in that same benchmark against the 8800gtx. The fact is the GTX 260 also has a whole lot more bandwidth, more VRAM, and more ROPs, just to beat the 9800gtx by 5-10% in raw performance.
Yep, leading us to the conclusion ROPs have a greater impact on performance than TMUs and SP on Nvidia parts.
Anything is right. :laugh:
I'm not talking about clearly CPU limited situations, I'm comparing lower resolutions that would not be bandwidth limited, even with AA, that show differences in performance between the parts. Bandwidth is *NOT* the only limiting factor when it comes to AA performance.
How so, when it's not CPU bottlenecked? When it is CPU bottlenecked the FPS is the same across the board, whether it be GTX 280 SLI or GTX 295, even with AA. At 1680x1050 4xAA it is still being bottlenecked by bandwidth. AA uses bandwidth, VRAM and some pixel fill. If a card has 15% more bandwidth, expect 15% better fps with AA, give or take. Factor in the pixel and texture fillrate to determine the final outcome of FPS.
Up to the point where you become CPU limited, yes, but in more GPU intensive games or with AA enabled, the bottleneck isn't going to be RAW fillrate or CPU speed, it's going to be compute/processing or ROP/AA performance, and that's before bandwidth or VRAM become an issue.
Although that statement is right, that's not exactly how it works. So if you have more than enough fillrate you would still be getting the same fps? Of course not. If you have 2x the fillrate you need, you would be drawing 2x as fast or waiting for the CPU to feed the information. In the end, the more fillrate you have, the faster it gets.
I doubt bandwidth is an issue at 1680 with 4xAA considering much slower GPUs with less bandwidth handled those resolutions just fine, and there are no adverse effects from bandwidth all the way up to 2560.
With AA it is, in a big way. With raw performance, not as much if at all. It depends on the game and card as well.
Again, let's see some non-3DMark benches. Increase each individually by 10% and see what yields the most performance gain. My bet is on the core.
Considering G92 is bottlenecked by bandwidth, memory speed is as big a factor as core speed. It also depends on the game and what settings I'm trying to test.
With AA, mostly bandwidth. With no AA I would emphasize core and shader, even though it's bottlenecked by bandwidth. Crysis is one of those games that needs both.
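Azn's rule of thumb above ("15% more bandwidth, expect 15% better fps with AA, give or take") amounts to linear scaling with the bandwidth ratio. Here is a toy version using the published GTX 280 and GTX 260 memory bandwidth figures (141.7 and 111.9 GB/s); whether AA performance actually scales linearly like this is exactly what's in dispute in the thread.

```python
# Linear bandwidth scaling as a toy estimator for AA-limited fps.
# Only meaningful if memory bandwidth really is the bottleneck.

def estimated_aa_fps(base_fps, bw_ratio):
    """Scale fps by the memory-bandwidth ratio (the disputed assumption)."""
    return base_fps * bw_ratio

# GTX 280 (141.7 GB/s) vs GTX 260 (111.9 GB/s): ~1.27x the bandwidth
print(estimated_aa_fps(40.0, 141.7 / 111.9))  # ~50.7 fps if purely bandwidth-bound
```

If the measured gap is much smaller than the bandwidth ratio predicts, something other than bandwidth (ROPs, shaders, CPU) is the limiter.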
Originally posted by: chizow
Funny, I remember similar comments from you when I listed potential multi-GPU pitfalls. Only to see this from you some months later:
Originally posted by: MegaWorks
Learn something! You think I would waste precious time acting like a fanboy when there are other more interesting hobbies? Please don't tell me you think you're an expert now!
I fixed it, I just changed the BIOS of each card to the latest one from HIS, so no more blue screens. My brother is using the 4850CF setup and he loves it.
Originally posted by: MegaWorks
I had 2 4850s in CF for 4 months and they gave me nothing but BSOD problems. This thread should explain it. I was waiting for AMD to fix and acknowledge the problem but nooo! They say it's our computer parts' problem! I gave ATI the finger and went with nVidia and got myself a nice super overclocked GTX260 core 216. It's sad, for the last 7 years I bought nothing but ATI, but when someone screws you like ATI did, I'm sorry but I'm jumping teams. BTW I had 2 3870s for 6 months before this setup and I loved them.
But hey, at least you tried it yourself, even if you learned the hard way :laugh:
Originally posted by: chizow
Only after a ~25% increase to core clock to make up for its reduction in ROPs. Yet it still performs similarly to G80 despite nearly 50% increases to TMU and SP performance. Based on your theoreticals G92 should be beating G80 by 50%, but it clearly does not. Just as all of the other examples show, ROP has a greater impact on performance than SP and TMU, contrary to what you've claimed.
GTX 260 is always 5-10% faster despite similar or lower theoreticals (SP and texture fillrate) than the 9800GTX+, and often much more than that once AA is enabled. I guess it's probably not a coincidence that GTX 260 also has ~40% more ROPs and pixel fillrate....
Yep, leading us to the conclusion ROPs have a greater impact on performance than TMUs and SP on Nvidia parts.
I'm not talking about clearly CPU limited situations, I'm comparing lower resolutions that would not be bandwidth limited, even with AA, that show differences in performance between the parts. Bandwidth is *NOT* the only limiting factor when it comes to AA performance.
Again, ROPs, particularly at lower resolutions where VRAM and bandwidth are not an issue, are much more important with regard to performance, as they handle all the Z/stencil ops, blending, and MSAA resolve. Bandwidth is important as data is constantly written to and read from the Z-buffer and color cache, but again, performance and necessary bandwidth are going to be limited by ROP performance.
Up to the point where you become CPU limited, yes, but in more GPU intensive games or with AA enabled, the bottleneck isn't going to be RAW fillrate or CPU speed, it's going to be compute/processing or ROP/AA performance, and that's before bandwidth or VRAM become an issue.
I doubt bandwidth is an issue at 1680 with 4xAA considering much slower GPUs with less bandwidth handled those resolutions just fine, and there are no adverse effects from bandwidth all the way up to 2560.
Again, let's see some non-3DMark benches. Increase each individually by 10% and see what yields the most performance gain. My bet is on the core.
I don't care what he claimed. I was quoting you, not him. It's another attempt at deflecting the issue on your part.
Originally posted by: chizow
He claimed the Computerbase selections were objective based on game popularity, when in reality there's always going to be subjective influence with a limited testing suite.
How would they change the results in Call of Juarez, Jericho or Vegas, the very games you singled out? Stop changing the subject with your irrelevant rhetoric.
Like I said, inserting say, CoH and Dead Space, two popular titles that clearly favor Nvidia parts, would significantly change the results.
I'm not saying it does. What I'm saying is that the benchmarks you linked to could also be flawed, but you don't consider that possibility because they show nVidia in a good light, especially since they don't include the range of titles Computerbase tested.
Of course I'm going to point out inconsistencies and problems with Derek's reviews; that doesn't invalidate all of his testing, research, insights and opinions.
Utter rubbish. They tested Far Cry 2 and GRID, and both are on nVidia's list. I suggest you do some research before mouthing off about things you clearly have no idea about.
His Big Bang results were clearly an outlier and I pointed that out, especially given Nvidia did not list improvements in the titles AT tested.
Fine, then why not consider the possibility of the AT review you linked being flawed too?
I would absolutely point out the same if he said or did similarly with Nvidia.
Accurate analysis? Heh. Sure, if by "accurate" you mean "so comically wrong that it's not possible to be any more wrong".
LOL, looks like someone's still salty about my accurate analysis of your buying habits.
Again, another attempted back-pedal on your part. Who gives a shit about the averages? They're just that: averages. Forget about them if you can't understand that.
Where am I dismissing the scores? I'm pointing out their performance rating aggregates aren't an accurate gauge of actual performance. Like I said earlier, I certainly enjoy seeing results from a variety of games, but to say these results are completely objective or vendor agnostic is laughable.
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:
Originally posted by: MegaWorks
Please don't tell me you think you're an expert now!
Originally posted by: nosfe
charlie is up to no good again
http://www.theinquirer.net/inq...idia-delays-40nm-parts
What has been his hit ratio with nv rumors in the last year anyway? I seem to recall a lot of "I told you so" articles from him lately.
Originally posted by: keysplayr2003
Originally posted by: nosfe
charlie is up to no good again
http://www.theinquirer.net/inq...idia-delays-40nm-parts
What has been his hit ratio with nv rumors in the last year anyway? I seem to recall a lot of "I told you so" articles from him lately.
HEY!!! How dare you get this thread back on track!!! :frown:
Yes, Charlie is getting a smackdown if you look at the members' posts beneath the "rant".
Originally posted by: chizow
4870X2 wins 4 games at 1280/1680 8xAA
GTX 295 wins 3 games at 1280/1680 8xAA
GTX 295 also wins in 4 other titles with 4xAA, the highest allowed/tested. Results are similar with 2560, as already discussed.
Why would you count LP and FC2 as splits when the GTX 295 wins the majority? I guess you'd also have to count Jericho as a split as well?
Ah yep, I see it now. Again it proves my point the Performance Rating can be misleading and meaningless.
Sure there is, you show difference in FPS and % difference on a per title and resolution basis. Not only is it easier to read, its actually meaningful as well. For example, I can look at a 65% difference and 5.2 FPS at 2560 and dismiss the result as meaningless.
If by hand-picked you mean Top 10 titles for the last 2-3 months at any given time, I'd be glad to have hand-picked titles for every review. Certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.
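The complaint about aggregate "Performance Ratings" can be made concrete: a single near-unplayable title with a large percentage gap drags a plain average upward, even though that result means nothing in practice. The numbers below are invented for illustration (echoing the "65% difference at 5.2 FPS" example above).

```python
# How one outlier title skews an averaged "performance rating".
# (card_a_fps, card_b_fps) per title -- hypothetical numbers.
results = {
    "Title 1": (60.0, 55.0),   # ~9% gap, both playable
    "Title 2": (80.0, 75.0),   # ~7% gap, both playable
    "Title 3": (8.5, 5.2),     # ~63% gap, both unplayable slideshows
}

ratios = [a / b for a, b in results.values()]
rating = sum(ratios) / len(ratios) * 100

print(f"Card A rating: {rating:.0f}%")  # the slideshow title dominates the average
```

Two titles show single-digit gaps, yet the averaged rating lands well above 120% purely on the strength of a result nobody would play at.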
Originally posted by: qbfx
Originally posted by: chizow
4870X2 wins 4 games at 1280/1680 8xAA
GTX 295 wins 3 games at 1280/1680 8xAA
GTX 295 also wins in 4 other titles with 4xAA, the highest allowed/tested. Results are similar with 2560, as already discussed.
OK, if there wasn't the possibility that you don't see well, I'd think you're lying. Before typing anything about the tests, check whether these are right or wrong, because I've no intention of listing the results again:
1280x1024, 8xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies
summary: HD4870X2 wins in 5 titles, GTX295 wins in 3
1680x1050, 8xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies
summary: HD4870X2 wins in 5 titles, GTX295 wins in 3
2560x1600, 4xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, Ass. Creed, CoJ, Crysis, Stalker
GTX295: CoD5, Bioshock, FC2, LP: Colonies, WiC
summary: HD4870X2 wins in 7 titles, GTX295 wins in 5
2560x1600, 8xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis, LP: Colonies
GTX295: CoD5, FC2
summary: HD4870X2 wins in 6 titles, GTX295 wins in 2
Now, from the trend above, we can derive the results for 1920x1200 XxAA/XxAF which is way more important than 1280x1024 no AA/AF anyway :laugh:, and they would be similar to 2560x1600 XxAA/XxAF.
Why would you count LP and FC2 as splits when the GTX 295 wins the majority? I guess you'd also have to count Jericho as a split as well?
Look at the numbers above, they tell stories.
Ah yep, I see it now. Again it proves my point the Performance Rating can be misleading and meaningless.
Maybe it proves it when it's about AMD/ATi winning
Sure there is, you show difference in FPS and % difference on a per title and resolution basis. Not only is it easier to read, its actually meaningful as well. For example, I can look at a 65% difference and 5.2 FPS at 2560 and dismiss the result as meaningless.
You can look at every AA/AF mode for every res. as well, that can't summarize the results from 12 titles, can it?
If by hand-picked you mean Top 10 titles for the last 2-3 months at any given time, I'd be glad to have hand-picked titles for every review. Certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.
No, by handpicked I mean 5 titles that favor GF hardware and that were the only benches nVidia would let reviewers use during these tests.
Anyway, according to these results it's clear that the HD4870X2 wipes the floor with the 20% more expensive GTX295 at the resolutions that actually matter. Stop pointing out that the GTX295 wins the "majority" of the tests only because at 1280x1024 noAA/AF the GTX295 actually kicks arse (by 10% :laugh:), and don't forget 1920x1200, which is the sweet spot right now.
Originally posted by: keysplayr2003
Originally posted by: qbfx
Originally posted by: chizow
4870X2 wins 4 games at 1280/1680 8xAA
GTX 295 wins 3 games at 1280/1680 8xAA
GTX 295 also wins in 4 other titles with 4xAA, the highest allowed/tested. Results are similar with 2560, as already discussed.
OK, if there wasn't the possibility that you don't see well, I'd think you're lying. Before typing anything about the tests, check whether these are right or wrong, because I've no intention of listing the results again:
1280x1024, 8xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies
summary: HD4870X2 wins in 5 titles, GTX295 wins in 3
1680x1050, 8xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies
summary: HD4870X2 wins in 5 titles, GTX295 wins in 3
2560x1600, 4xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, Ass. Creed, CoJ, Crysis, Stalker
GTX295: CoD5, Bioshock, FC2, LP: Colonies, WiC
summary: HD4870X2 wins in 7 titles, GTX295 wins in 5
2560x1600, 8xAA/16xAF
HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis, LP: Colonies
GTX295: CoD5, FC2
summary: HD4870X2 wins in 6 titles, GTX295 wins in 2
Now, from the trend above, we can derive the results for 1920x1200 XxAA/XxAF which is way more important than 1280x1024 no AA/AF anyway :laugh:, and they would be similar to 2560x1600 XxAA/XxAF.
Why would you count LP and FC2 as splits when the GTX 295 wins the majority? I guess you'd also have to count Jericho as a split as well?
Look at the numbers above, they tell stories.
Ah yep, I see it now. Again it proves my point the Performance Rating can be misleading and meaningless.
Maybe it proves it when it's about AMD/ATi winning
Sure there is, you show difference in FPS and % difference on a per title and resolution basis. Not only is it easier to read, its actually meaningful as well. For example, I can look at a 65% difference and 5.2 FPS at 2560 and dismiss the result as meaningless.
You can look at every AA/AF mode for every res. as well, that can't summarize the results from 12 titles, can it?
If by hand-picked you mean Top 10 titles for the last 2-3 months at any given time, I'd be glad to have hand-picked titles for every review. Certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.
No, by handpicked I mean 5 titles that favor GF hardware and that were the only benches nVidia would let reviewers use during these tests.
Anyway, according to these results it's clear that the HD4870X2 wipes the floor with the 20% more expensive GTX295 at the resolutions that actually matter. Stop pointing out that the GTX295 wins the "majority" of the tests only because at 1280x1024 noAA/AF the GTX295 actually kicks arse (by 10% :laugh:), and don't forget 1920x1200, which is the sweet spot right now.
Well, since 19x12 is the "sweet spot", why don't you list those results.
Originally posted by: BFG10K
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:
Originally posted by: MegaWorks
Please don't tell me you think you're an expert now!
Then he dismisses my 4850 vs GTX260+ comparison, despite me having months of gaming experience with both cards in a range of titles and drivers, because according to him "I swapped the cards too early, so it doesn't count".
Yep, that little champ has it all "figured" out. :roll:
Already gone over all of this before: you claimed SP and TMUs had the greatest impact on performance with NV parts, and while G92 clearly benefits from the ~50% increases to both over G80, it performs nowhere close to that much faster. Further, G80 saw a very linear increase in performance by simply increasing core clocks and nothing else. This would lead one to believe that the main reductions from G80 to G92, in ROPs, bandwidth and VRAM, are what's holding it back. All areas that were addressed significantly with GT200. What areas weren't addressed as much compared to G92b? SP performance and TMUs. Yet GT200 always outperforms G92b, and often significantly.
Originally posted by: Azn
In raw performance the 9800gtx beats the 8800gtx pretty badly. Even with a 25% better clock it still doesn't quite catch up to the 8800gtx's pixel fillrate. Actually the 8800gtx has 28% more pixel fillrate and yet it gets outperformed by the 9800gtx when it comes to raw performance.
8800gtx 13800 Mpixels/s
9800gtx 10800 Mpixels/s
The 9800gtx has 50% more bilinear texel fill but only 17% more FP16 fill, not to mention bandwidth limitations are holding the fillrate back. 50% more fillrate does not equate to 50% better performance. That's quite simple-minded if you actually think that.
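The pixel-fillrate figures quoted above fall straight out of ROP count times core clock, using the standard published specs (24 ROPs at 575 MHz for the 8800 GTX, 16 ROPs at 675 MHz for the 9800 GTX):

```python
# Pixel fillrate = ROPs x core clock (MHz) -> Mpixels/s.

def pixel_fill_mpix(rops, core_mhz):
    return rops * core_mhz

g80_8800gtx = pixel_fill_mpix(24, 575)  # 13800 Mpixels/s
g92_9800gtx = pixel_fill_mpix(16, 675)  # 10800 Mpixels/s

advantage = (g80_8800gtx / g92_9800gtx - 1) * 100
print(f"8800 GTX pixel-fill advantage: {advantage:.0f}%")  # ~28%, as quoted
```

This is why the ~25% core-clock advantage of G92 doesn't make up for dropping from 24 to 16 ROPs.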
It's possible G92, with around half the bandwidth, is bandwidth limited, but the question is whether it benefits more from memory bandwidth or core frequency. Comparing the various G92 and G92b variants, all grounded with a 256-bit bus and ~1000MHz memory frequency, and from my experience with an 8800GT, I'd still come to the conclusion that core clocks (ROP, set-up, TMU etc.) have a greater impact on performance than shader performance or texture fillrate.
GTX 260 is always faster, you're right. Not denying it. Not saying those extra ROPs don't add performance either when it comes to AA or uber high resolutions. Again, the 9800gtx+ has more fillrate, but is that fillrate held back by bandwidth so it can't close the gap on the GTX260? I've done simple tests where my G92 card's performance improves as I raise only memory clocks, even without AA. So I wonder, if G92 had the bandwidth, could it beat the GTX260 in performance? Maybe even be neck and neck with the GTX 260 216SP core when it has enough bandwidth.
First, counting the number of transistors and die size for any given core function is a poor metric. For example, L2 cache accounts for up to 80% of some Core 2 dies, yet a chip with 1/2 or even 1/4 the L2 typically performs within 80%. Secondly, there are always going to be design decisions with regard to functional units, and the goal is balance. Third, it's possible Nvidia missed their target clocks or had other motives for certain design decisions (Tesla, Quadro, GPGPU etc.).
Here's a picture of the GTX 280 highlighted with ROP, TMU, SP, frame buffer... ROP is the smallest part of the chip while SP and texture units cover nearly 70% of the chip.
http://techreport.com/r.x/gefo...0/die-shot-colored.jpg
So I ask you: does that small section of the chip make the most difference in performance? Why doesn't Nvidia add more ROPs and kill off some texture or SP units if ROP had the biggest impact on performance? Is Nvidia dumb?
Yes, that's the point: GTX 280 SLI is always faster, even in situations that aren't bandwidth limited, discounting situations where all high-end solutions are CPU bottlenecked. There's more than just the AT and Bit-Tech benches; every other review site that tested the GTX 295 came to the conclusion it performs closer to GTX 260 SLI than to GTX 280 SLI. Certainly bandwidth and VRAM come into play at higher resolutions with AA, but even at lower resolutions, with or without AA, the GTX 280 SLI is significantly faster when CPU bottlenecking isn't an issue.
At lower resolutions with AA, GTX 280 SLI is faster compared to GTX 295 than with no AA. Without AA that performance lead shrinks, as shown in Anandtech's review. Bit-tech's numbers are all over the place as far as I'm concerned. GTX 295 beats GTX 280 SLI in GRID even with AA, while Anandtech's review shows otherwise. Not to mention 216-core SLI beating the GTX 295 in many of their benches, when theoretically that's not possible unless Nvidia has neutered drivers.
I actually did exactly that with WiC, CoH, Crysis and FC2 last night at 1680 with 4xAA. I'll put actual numbers up in a bit, but from preliminary results, cutting bandwidth 26% at 602/1296/868 resulted in less than a 5% difference in WiC, CoH and FC2. Crysis showed more of a difference, ~8-10%. I can guarantee you a 26% increase to core clock would result in more than a 5-10% performance gain; unfortunately Precision and RT don't allow me to change the core/shader ratio that much individually.
But again, bandwidth is the issue even at lower resolutions as long as you add AA to the mix. You can easily figure this out by downclocking your GTX 280 to GTX 260 memory bandwidth, benchmarking at 1680x1050 4xAA, and comparing before and after performance differences. You would have to downclock your GTX 280 memory to 872mhz to match the GTX 260/295 memory clocks of 999mhz. Try benchmarking a few games to get a clear indication and post the results here.
How am I contradicting myself? Yes, the number of pixels drawn to the frame buffer is the same for any given frame, but they can only be drawn as fast as they're fed to the ROPs, and in cases with AA, post-processing or heavy shading that rate is going to be reduced.
Now you are contradicting yourself. This is what you said: Because lower resolutions have fewer pixels to draw per frame, regardless of AA.
It's an issue at higher resolutions with AA, but not nearly as much at 1680 with 4xAA.
Sure it's an issue. 9800gtx+ vs GTX260. 9800gtx+ vs 8800 Ultra. Should I go on?
If you're comparing your GSO then you also have 25% less bandwidth than a full G92. This is similar to past comparisons with parts crippled by 64-bit or 128-bit memory buses. And yes, Crysis has always been bandwidth and VRAM sensitive even at lower resolutions, so it's not much of a surprise that reducing bandwidth on a part that was already heavily bandwidth limited would have an adverse effect. It's also one of the few titles that is heavily shader intensive. Curious how you were able to unlink core and shader with that much difference though; I don't recall being able to change the ratio that much with my G80s or G92, and certainly not with my 280.
I don't have time to do benches right now. I'm about to get rid of the 8800gts I just bought, waiting for a GTX 280 from buy.com. Hopefully my order goes through. What I can show you are the 8800gs benches I've done in the past.
I lowered my core clocks by 24%, which reduces both my pixel and texel fillrate. Separately, I lowered my memory clocks by 24% for comparison in this test...
Tested Crysis 1.2 1440x900 no AA dx9 high settings
STOCK OC CLOCKS 729/1728/1040
37.55 fps
CORE REDUCTION 561/1728/1040
34.87 fps -7.2% difference
BANDWIDTH REDUCTION 729/1728/800
33.70 fps -10.1% difference
Memory clocks had the biggest drop in performance. This is an exact G92 chip, the same as my 8800gts but with 1/4 of the clusters disabled. A full G92 would show exactly the same results.
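For what it's worth, the percentage drops in the benchmark above can be recomputed from the raw numbers posted; they come out close to the figures quoted (the small differences are just rounding):

```python
# Recomputing fps drop vs clock cut from the quoted Crysis runs
# (stock 729/1728/1040 at 37.55 fps).

stock_fps = 37.55

runs = {
    "core 729 -> 561":    ((729 - 561) / 729, 34.87),
    "memory 1040 -> 800": ((1040 - 800) / 1040, 33.70),
}

for label, (clock_cut, fps) in runs.items():
    fps_drop = (stock_fps - fps) / stock_fps
    print(f"{label}: -{clock_cut:.0%} clock -> -{fps_drop:.1%} fps")
```

On these numbers a ~23% memory cut costs more fps than a ~23% core cut, which is the basis of the bandwidth-bottleneck claim; a single game at a single setting is a thin sample, though.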
Well, should I believe Charlie saying that GT212 is dead, or should I believe CJ, who is telling us specifics of GT212?
Originally posted by: SlowSpyder
He says the GT212 is dead? Isn't that supposed to be Nvidia's next cash cow? Of course this is the Inq.
*edit - Reading through that article I'd have to say he's now thrown out any attempt to be a journalist when it comes to Nvidia. Guess they really don't care for each other. But, I guess what matters is how much truth is in the article which remains to be seen.
Oh noes, you don't care what he claimed? Problem is, what you're quoting is a direct reply to his claim. So really, who gives a shit what you think I claimed when it was a reply to something he claimed? He claimed this review was somehow objective, and not just "12 games on Wolfgang's hard drive", saying they were less biased than hand-picked titles in reviews that favor Nvidia, especially when Nvidia's "hand-picked" criteria is clearly more objective to begin with.
Originally posted by: BFG10K
I don't care what he claimed. I was quoting you, not him. It's another attempt at deflecting the issue on your part.
You stated "certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol", as if to somehow imply the article has an ATi bias because it includes some titles that ATi are traditionally strong in.
I never claimed they were irrelevant, but they're surely less relevant than recent, popular titles that more people are buying and playing right now.
You were also implying that slightly older titles are somehow irrelevant because they don't win any popularity contests. But again, those that play said games are very interested in the scores and don't give a shit about your comments on the issue.
Certainly, what would make you think IHVs aren't supporting them?
And again remember, these are 2007 titles, so even with your laughable backwards compatibility "standards" you have to admit IHVs should still be supporting them and providing performance gains.
It's not irrelevant when it clearly proves my point that this and all reviews are far from objective, as he claimed. Again, if anything Nvidia's criteria of using "Top 10 recent titles" is more objective, yet both of you criticize it for being "hand-picked for marketing purposes". If marketing means better performance in recent, popular titles, I'd have no problem with that criteria, ever.
How would they change the results in Call of Juarez, Jericho or Vegas, the very games you singled out? Stop changing the subject with your irrelevant rhetoric.
So what are you saying? That AT's game selections are subjective? Hmmm. :roll: I've read over the review, nothing really stands out. Nvidia wins titles the majority of reviews show Nvidia winning, ATI wins titles the majority of reviews show them winning.
I'm not saying it does. What I'm saying is that the benchmarks you linked to could also be flawed, but you don't consider that possibility because they show nVidia in a good light, especially since they don't include the range of titles Computerbase tested.
GRID showed some improvement, but considering only 1 resolution was tested and the drivers claim results may only be evident with certain hardware configs or settings, that's not much to go by at all. After retesting with the drivers, Derek posted an update saying they did see larger improvements at higher resolutions or with AA, saying the drivers "handled high memory usage situations better".
Utter rubbish. They tested Far Cry 2 and GRID, and both are on nVidia's list. I suggest you do some research before mouthing off about things you clearly have no idea about.
It is possible, but as stated above, there's nothing in there that would indicate such problems based on the results. The main issue with AT reviews is archived results, but we have assurance the GTX 295 results are recent because it was just released. It's possible the 4870X2 could be using older results, but the 8.12 hotfix drivers are specifically mentioned, which reduces the risk.
Fine, then why not consider the possibility of the AT review you linked being flawed too?
Heheh ya, what'd you ultimately claim was the reason again? Oh ya, more robust drivers and features from Nvidia. Shame that directly contradicts claims you've made in the past about ATI drivers being more robust.
Accurate analysis? Heh. Sure, if by "accurate" you mean "so comically wrong that it's not possible to be any more wrong".
But whatever you say, champ. :roll:
I have nothing against driver updates that improve performance for free, and I've acknowledged the huge gains in 2 of the 12 titles as a result. My point is that the PR 160% or 120% aggregates clearly do not reflect actual performance even at their individual settings where things are, at worst, even in high bandwidth situations.
Again, another attempted back-pedal on your part. Who gives a shit about the averages? They're just that: averages. Forget about them if you can't understand that.
The point is some games are seeing gains of 40% or more, which is massive, more than enough to beat the GTX295 where the 4870X2 was previously losing. This is equivalent to a GPU upgrade and is bigger than a lot of the gains observed when moving from a G80/G92 to a GT200. All this from a free driver upgrade.
I dislike Crossfire but damn, even I have to admit that is impressive considering the card wasn't a slouch in many of those games to begin with.
Originally posted by: chizow
Rofl no, I haven't read your 4850 vs GTX 260+ comparison. I'm sure it's fine and all, and will ultimately come down to ugly textures in Thief 2 and Red Faction. But of course that has nothing whatsoever to do with your idiotic claims made over a year ago, where you claimed ATI drivers were better than Nvidia's based on your experience, when you didn't have any relevant experience with an ATI part in over 3 years.
Originally posted by: BFG10K
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:
Originally posted by: MegaWorks
Please don't tell me you think you're an expert now!
Then he dismisses my 4850 vs GTX260+ comparison, despite me having months of gaming experience with both cards in a range of titles and drivers, because according to him "I swapped the cards too early, so it doesn't count".
Yep, that little champ has it all "figured" out. :roll:
As for ATI driver problems, are you claiming the FC2 issues didn't exist and still don't exist even to this day, despite numerous hot fixes specifically addressing it? Are you claiming the CF/Vista problems didn't exist and still don't exist, despite numerous hot fixes specifically addressing it? Once again, referencing multiple sources, particularly those with concurrent experience with hardware from both camps is certainly compelling evidence. The difference is, I'm not going to make idiotic claims that I'm basing anything on my experience :laugh: