Originally posted by: BFG10K
And in titles where it isn't showing high performance but CF/SLI is significantly higher? What then? Are you claiming a CPU bottleneck is responsible for the GTX280 which magically doesn't affect multi-GPU?
First of all, I never said there were CPU bottlenecks or frame caps in every instance and every game, so get that ridiculous notion and line of questioning out of your head. In games where single-GPU is frame capped/sync'd/smoothed I think it is possible for SLI/CF to override that function and exceed the cap whether by drivers or the game itself.
One example would be Assassin's Creed, which we know for sure is capped normally, yet SteelSix posted screenshots of frame rates that far exceeded the cap. AT's review also indicated capped frame rates for AC, which isn't surprising since different review sites often get different results.
But again you can't claim that if CF/SLI are faster. You also can't claim that if the graphs aren't flat-lining, of which there are numerous examples.
Again, in cases where CF/SLI are only a few FPS faster, it's still valid. Same for single GPU with frame capping or CPU bottlenecking. For instance, if a game is limiting/capping FPS to 60 and you see a spread of only a few FPS between all the parts, it's obvious that there is capping going on, and the difference in FPS is just the slower parts spending more time below 60 FPS than the faster parts. But that's not necessarily indicative of gameplay, and it's definitely not a good gauge of potential performance, since the faster parts (and SLI/CF) are capped, limiting your maximums. This is very different from uncapped averages, where you might see swings between 90 and 30 FPS that come out to an average close to 60. Unfortunately there's no way to tell from a review unless they graph the frame dumps or you see it first-hand. Some sites do disclose any capping or smoothing, but it's pretty easy to gloss over, like in the AT review.
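To put made-up numbers on that, here's a minimal sketch (none of these FPS samples come from any actual review; the cap, the dip values and their durations are all invented for illustration):

```python
def mean(samples):
    return sum(samples) / len(samples)

# Hypothetical capped parts: pinned at 60 FPS, with slower cards
# simply dipping below the cap more often than faster ones.
fast_capped = [60] * 54 + [48] * 6    # fast single GPU, rare dips
slow_capped = [60] * 40 + [48] * 20   # slower single GPU, frequent dips
cf_capped   = [60] * 58 + [48] * 2    # CF/SLI, pinned at the cap almost always

# Hypothetical uncapped part: swinging between 90 and 30 with no ceiling.
uncapped    = [90] * 30 + [30] * 30

for name, run in [("fast capped", fast_capped), ("slow capped", slow_capped),
                  ("CF capped", cf_capped), ("uncapped", uncapped)]:
    print(f"{name:12s} {mean(run):.1f} FPS average")
# fast capped  58.8 / slow capped 56.0 / CF capped 59.6 / uncapped 60.0
```

All the capped parts land within a few FPS of each other even though their real headroom differs wildly, while the uncapped run averages the same ~60 despite behaving completely differently.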
I'm not sure what examples you were looking at. While there were plenty that agreed with you, plenty did not:
http://www.firingsquad.com/har...performance/page12.asp
I'm not sure how anyone can claim CPU limitations, frame smoothing or a frame cap is responsible for those figures. In Bioshock the 4870 is faster than the GTX280 without AA, no two ways about it, and you can see CF is significantly faster, with the gap widening as the resolution increases. This is GPU bottlenecking 101.
Furthermore there is no flat-lining at or near 62 FPS or the refresh rate like you claim.
And once again, I'm not claiming CPU bottlenecking or frame rate capping in every game, and certainly not at every resolution. As I said originally, what I found interesting was that there was clearly more CPU bottlenecking occurring at higher resolutions, even 16x10 or 19x12, when in the past those were considered high resolutions.
Perhaps, but we aren't talking about those situations. We're talking about 159.8 vs 78.9 which is a vast change.
No, you are talking about those situations, and I'm not disagreeing in those cases anyway. But what if you have a 4870 posting 90 FPS, a 4850 posting 75 FPS, 4850 CF posting 130 FPS and 4870 CF posting 131 FPS, along with a 9800GX2 posting 121 FPS, a GTX 280 posting 111 FPS, etc.? It's pretty clear there is CPU bottlenecking going on, with the majority of the differences attributable to longer durations spent at lower FPS by the slower parts. Sure there is some difference, but is it indicative of how fast the parts really are?
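A rough way to model that (purely hypothetical numbers, nothing measured): treat each frame's time as whichever is longer, the CPU's work or the GPU's work. Once the CPU sets the ceiling, even doubling GPU throughput buys very little average FPS:

```python
import random

random.seed(0)  # reproducible invented workload

def avg_fps(cpu_ms, gpu_speedup, frames=10000):
    """Each frame waits on the slower of the CPU and GPU work."""
    total_ms = 0.0
    for _ in range(frames):
        gpu_ms = random.uniform(4.0, 12.0) / gpu_speedup  # scene-varying GPU load
        total_ms += max(cpu_ms, gpu_ms)                   # CPU time is the floor
    return 1000.0 * frames / total_ms

# Assumed CPU cost of ~7.5 ms per frame, i.e. a ~133 FPS ceiling.
print(f"baseline GPU: {avg_fps(7.5, 1.0):.0f} FPS")  # ~114 FPS
print(f"2x the GPU:   {avg_fps(7.5, 2.0):.0f} FPS")  # ~133 FPS, nowhere near 2x
```

The twice-as-fast GPU only wins on the minority of frames where the GPU still happens to be the slower component, which is exactly the kind of compressed spread described above.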
You're basically saying "well, the 4870 isn't faster than the GTX280 because in the situations it is, it's because of CPU limitations or [insert reason X]. Likewise multi-GPU isn't faster, it's micro-stutter".
No, in the situations where the 4870 is faster, it'd be because there are no CPU limitations. Likewise in the situations where the GTX 280 is faster than the 4870, it's because there clearly are no CPU limitations. I'm referring to situations where the 4870, GTX 280 and every other single-GPU solution is within 5 FPS or 10%, like AC, Witcher, and Crysis in AT's review.
That argument is nothing more than green propaganda.
Tell me, when the GTX280 is faster than the 4870 do you also chalk that up to CPU limitations or other nonsensical reasons? Or how about when the GTX280 is faster than the 8800 Ultra? Is that also not really faster using your reasoning?
The only nonsensical reasons I see are the ones you're inventing to prove points I never claimed.
I was heavily involved in that thread and I produced numerous graphs. But I can tell you that the framerate increase here can't be explained by micro-stutter. In fact micro-stutter is totally irrelevant to this argument since multi-GPU cannot provide a performance gain to begin with if there's a bottleneck elsewhere.
Did you even read annihilat0r's post and methodology?
Micro-Stutter thread, 3rd to last post.
annihilat0r:
It's obvious that every third frame sees a jump from around 50 FPS to around 150 FPS. I don't think I need to tell you that in this situation your eye will see the fluidity of a 50 FPS system with some stuttering (caused by the super-fast third frames). However, the reported frame rate will be 1000*(63-48)/(981-753) = 66 FPS.
So all hardware sites will take this result and, I'm sorry but, stupidly compare it to non-AFR single-GPU frames and say "wow, our FPS increased from a single 8800GT's 40 to 66 when we plugged in another 8800GT!!" Which is obviously nonsense. I can't believe that after all the awareness raised by threads like this in hardware forums, nearly no hardware site mentions it in their reviews, including Anandtech, which I wouldn't normally expect such a thing from.
Again, this is classic micro-stuttering, and it also shows how multi-GPU can inflate FPS by rendering frames at irregular intervals.
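The arithmetic is easy to reproduce. Here's a quick sketch using a made-up frame-time pattern of the same shape annihilat0r describes (two ~20 ms frames followed by one ~7 ms frame; the exact values are invented):

```python
# AFR pattern of the shape described above: every third frame
# arrives almost immediately after the one before it.
frame_times_ms = [20.0, 20.0, 7.0] * 20  # 60 frames, invented values

# Review-style average, same idea as the quoted formula:
# FPS = 1000 * (frames elapsed) / (milliseconds elapsed)
reported = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
print(f"reported average:  {reported:.0f} FPS")   # ~64 FPS

# Perceived fluidity tracks the longest gaps between frames instead:
perceived = 1000.0 / max(frame_times_ms)
print(f"slow-frame pacing: {perceived:.0f} FPS")  # 50 FPS
```

The average looks like a healthy gain over a single card, but the cadence the eye actually sees is set by the 20 ms frames.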
I never said you were lying, I asked you to provide recent benchmarks of it in action, otherwise it's irrelevant.
No, you dismissed it as if it was a problem that didn't exist, when it clearly does. It's not irrelevant because it shows devs are implementing methods of frame capping to normalize performance that you may or may not know about, and even if you did, may or may not be able to turn on or off. This might also explain how some sites can get such drastically different results from others. I asked Derek about some of his results in the 4870 feedback and he said they no longer force Vsync off (due to his findings with Crysis, I'm sure). Maybe some reviewers are forcing Vsync off and getting better results in some games as a result. It's clear there is considerable frame capping and/or CPU bottlenecking in Assassin's Creed, Witcher and Crysis up to 1920x1200 in AT's review. I've seen a few others as well from different sites (once again, I'm not claiming CPU limitations in every title and resolution, and certainly not in Bioshock, which has always run better on ATI hardware).
Sure, but I've provided Bioshock examples that demonstrate no such cap is in effect. Again you need to provide real examples or stop dismissing benchmarks on the basis of fictional hypothetical situations.
I never dismissed any benchmark; you're coming up with examples you came across that I never mentioned, while I'm using the examples I came across to make my points. At no time did I ever say or imply that I found CPU limitations or frame capping in every instance, but you're clearly assuming that in your arguments, which is ridiculous.