Originally posted by: BFG10K
No it isn't. If the game is limiting frames then the driver can't make them go faster by adding multiple GPUs into the equation.
I asked you for such examples and you've failed to deliver. Furthermore, I provided two UT3 engine games that showed no such thing.
The driver could easily be disabling frame capping in SLI/CF if there was any internal capping/Vsync enabled by default.
And about the UE3 engine games: you never responded as to whether UT3 had frame capping removed by default via a patch or whether it requires an .ini change. I'm pretty sure Bioshock does not have frame rate smoothing and never said otherwise. But it's also not the only game with some kind of capping going on, as both Assassin's Creed and the Witcher exhibit similar behavior (in AT's review, for sure).
http://www.anandtech.com/video/showdoc.aspx?i=3341&p=16
We can see AT's results are capped at 60 FPS or thereabouts and neither CF/SLI is removing that cap (how on earth could it?).
If you have evidence that demonstrates multi-GPU/SLI is removing the cap - say all single cards are limited to ~60 FPS while multi-GPU is far higher - you need to post it up or your claims are yet again unfounded.
Witcher @ 1920 and 1680
It's obvious there is some kind of frame capping going on, yet the 4870 CF manages to average 72 FPS. Derek also posted a comment that verified this, saying:
it looks like the witcher hits an artificial 72fps barrier ... not sure why as we are running 60hz displays, but that's our best guess. vsync is disabled, so it is likely a software issue.
The results at 1920 are convincing, but the results at 1680 confirm it, with all the SLI configurations managing to average higher than 60 FPS, which shouldn't be possible if there were a 60 FPS software cap or hidden Vsync function. Again, I think it simply comes down to timing differences/sync issues with FRAPS, or a driver override allowing slightly higher, but superficial, frame rates.
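The timing-differences explanation is easy to demonstrate in isolation. Here's a toy sketch (all numbers made up, not FRAPS internals): frames are emitted at a hard 60 FPS cap but delivered with a couple of milliseconds of jitter, and a naive per-second counter can read above or below 60 in individual windows even though the long-run average never exceeds the cap.

```python
# Sketch: per-second frame counting under a hard 60 FPS cap.
# Frames are emitted exactly at the cap rate, but each arrives with a
# little delivery jitter; individual 1-second windows can then count
# more or fewer than 60 frames, while the overall average stays capped.
import random

random.seed(1)

CAP_INTERVAL = 1.0 / 60.0   # a 60 FPS cap -> ~16.67 ms per frame
JITTER = 0.002              # +/- 2 ms of delivery jitter (made-up figure)

# 10 seconds of frames emitted at the cap rate, each delivered with jitter.
timestamps = [i * CAP_INTERVAL + random.uniform(-JITTER, JITTER)
              for i in range(600)]

# Count frames per 1-second window, the way a naive per-second counter would.
per_second = [0] * 10
for ts in timestamps:
    if 0.0 <= ts < 10.0:
        per_second[int(ts)] += 1

avg = len(timestamps) / 10.0    # long-run average stays at the cap
print("per-second readings:", per_second)
print("long-run average: %.1f FPS" % avg)
```

That's only sampling noise, of course; it can explain 61 FPS readings, not a sustained 72 FPS average, which is why I lean toward a driver override for the Witcher numbers.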
Right, right, but nobody is discussing those situations. We all accept and understand what external limitations look like and their ramifications.
It's still valid when someone points to a review and says, "Oh look, the 4870 ties the GTX 280, and in some cases even outperforms it," and the results are, say, 58.9 FPS for the 280 and 60.1 FPS for the HD 4870. Again, this is not to say the 4870 is a bad part; it's not, it's a fantastic part, but it's clearly not a good indication of relative performance if both parts are hitting a CPU/frame cap.
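To put numbers on why a cap hides real differences, here's a minimal sketch with hypothetical throughput figures (the 95/70 FPS values are invented for illustration, not measurements of either card):

```python
# Sketch with made-up numbers: a frame cap compresses real performance
# differences. Every frame is held to at least the cap interval, so two
# cards with very different raw throughput report nearly identical FPS.

def capped_fps(raw_fps, cap_fps=60.0):
    """Observed FPS when each frame takes at least 1/cap_fps seconds."""
    frame_time = max(1.0 / raw_fps, 1.0 / cap_fps)
    return 1.0 / frame_time

# Hypothetical uncapped throughputs for two cards:
card_a = capped_fps(raw_fps=95.0)   # e.g. a GTX 280-class result
card_b = capped_fps(raw_fps=70.0)   # e.g. a 4870-class result

print("card A: %.1f FPS, card B: %.1f FPS" % (card_a, card_b))
# Both print 60.0 -- a ~35% raw gap disappears behind the cap.
```

Only below the cap (higher resolution, heavier settings) does the raw gap between the parts show through, which is exactly the pattern in the Hothardware numbers below.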
No, it's not clear at all. The only thing clear would be the 4870 CF being bottlenecked because it's not much faster than the 4850 CF.
Hothardware, multiple GPU, single GPU 1920 and 2560
This should be easier to visualize, then. This review does an excellent job because it compares just about every relevant single-card and multi-GPU configuration, and also puts the 1920 and 2560 results side by side. Now, it's a lot to digest, and I'm not going to go through everything to explain how I reach my conclusion, but it's pretty clear there is CPU bottlenecking occurring even at 2560 once you move to the multi-GPU parts. Some of it may be due to poor scaling, but you'll see that many of the faster single-card solutions (GTX 280 and 4870) are near the cap at 1920 and don't scale in CF/SLI. At 2560 they drop below the cap, but multi-GPU scaling brings them back up to a CPU-capped frame rate. If you look at the other games, they are similar and often more pronounced.
That I agree with, but again that was never under contention.
Well, again, if you look at something like AT's review and it shows a <10% difference between the GTX 280, the 4870, and just about every other recent card, then I think it is relevant. It's also extremely relevant for anyone considering 4870 CF or a 4870X2.
I don't think you understand what micro-stutter is, especially since you're claiming it can somehow remove game framerate caps. Micro-stutter is simply the uneven distribution of frames; it doesn't impact a game's frame cap.
I never said micro-stutter removed game framerate caps; micro-stutter is the result, not the cause. I already showed one example where CF/SLI is breaking 60 FPS while all other cards are capped, with the Witcher, Oblivion, and Assassin's Creed in AT's review. I don't know the cause, but annihilat0r's post provides insight on the timing that would result in a higher average FPS with multi-GPU.
In 10 ms a single card might get 3 frames while a multi-card might get 5 frames. The difference - and this is what causes the micro-stutter - is the single card's frames are distributed more evenly than the multi-card's.
But this in no way invalidates the fact that it's objectively provable the multi-GPU has a higher framerate because it's rendered more frames in the same amount of time, something that can't happen if a framerate or CPU limitation is in place.
Sure it can: you can increase the number of pre-rendered frames, which means more input lag but higher frame rates for the multi-GPU solution. Again, this can easily be regulated by the driver or a profile, even in frame-capped or CPU-limited situations.
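To make the frame-distribution point concrete, here's a toy comparison (timestamps invented for illustration): evenly paced single-GPU frames versus bursty AFR delivery. Both deliver the same frame count over the window, so a benchmark reports the same average FPS, but the frame-to-frame gaps differ wildly, and that uneven spacing is what's perceived as micro-stutter.

```python
# Toy numbers: both "cards" deliver 6 frames over the same 100 ms window,
# so a benchmark reports a 60 FPS average for each. The difference is in
# how evenly those frames are spaced in time.
single_gpu = [0.0, 16.7, 33.3, 50.0, 66.7, 83.3]   # evenly paced frames (ms)
multi_gpu  = [0.0,  5.0, 33.3, 38.3, 66.7, 71.7]   # bursty AFR pairs (ms)

def intervals(timestamps):
    """Frame-to-frame gaps in milliseconds."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

for name, ts in (("single GPU", single_gpu), ("multi GPU", multi_gpu)):
    gaps = intervals(ts)
    avg_fps = len(ts) / 0.1          # 6 frames in 0.1 s -> 60 FPS average
    print("%s: avg %.0f FPS, gaps (ms): %s"
          % (name, avg_fps, ["%.1f" % g for g in gaps]))
```

Same average, very different experience: the multi-GPU gaps alternate between ~5 ms and ~28 ms, so the perceived smoothness is closer to the slow gaps than the average suggests.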
Well, that's definitely a serious oversight on AT's part not to mention this. Still, they can't possibly be doing it in all games:
http://www.anandtech.com/video/showdoc.aspx?i=3341&p=14
Even an LCD set to 75 Hz would top out at 75 FPS, something that clearly isn't shown on that graph.
And once again, I never said it was happening in all games, but when there's serious frame capping and CPU bottlenecking in 4 of the 7 games (AC, Oblivion, Crysis, the Witcher) in your test suite, and people are drawing conclusions from that, I think it's pretty clear we're not seeing accurate performance differences between the parts as a whole.
Pardon? Let's quote what you said about UT3-based games:
I then produced UT3 and Bioshock benchmarks that debunked your claims. If you're now saying you don't think the framerate cap is in effect for those games you need to retract that claim, but don't go around pretending you never mentioned such examples.
I mentioned UT3 as proof there was frame capping occurring in games, and you reacted as if it was a problem that didn't exist. I provided evidence to the contrary and asked whether it was patched out in UT3 or whether you still need to disable it manually. You still haven't replied. If it needs to be manually disabled, then I'm absolutely correct in saying you have to assume there may be similar instances in other games that are unknown or can't be disabled, especially when I've already shown two others in AT's review that seem to have similar FPS caps (the Witcher and AC). It's also a verifiable fact that frame rate smoothing is enabled in GoW and Mass Effect by default. I never said Bioshock was frame capped; I specifically said I was pretty sure it wasn't.
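For reference, UE3's frame rate smoothing is an engine-side .ini setting. From memory (so treat the section name and exact values as approximate and check your own install), UT3's version lives in UTEngine.ini as something like:

```ini
[Engine.GameEngine]
bSmoothFrameRate=TRUE
MinSmoothedFrameRate=22
MaxSmoothedFrameRate=62
```

Setting bSmoothFrameRate=FALSE is the usual way to disable it manually, which is exactly why it's plausible that other UE3 titles ship with it enabled and nobody notices.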
After all this, what are you trying to say exactly? That graphs that are close together demonstrate an external limitation? Well, no sh!t, given everyone here agrees with that.
So you think the 4870 is as fast as a GTX 280 even in cases of frame capping/CPU limitations?
People are taking issue with your original comment:
No, I think people are taking issue because they don't fully understand the issue and see it as an attack on the 4870, but it's certainly relevant if they were considering 4870 CF or a 4870X2, even more so than just a single GTX 280.
This, posted in the context of a 4xxx thread, makes it sound like you're claiming that when the 4870 is competitive it's somehow irrelevant because of some external limitation, most of which are fictional.
No, I never maligned the 4870's performance. I'm simply pointing out that direct comparisons between a GTX 280 and a 4870 may very well be skewed by CPU limitations or frame caps.
Between that and your misquoting of Anandtech above, along with your comments in other threads - "the 4850 has been reduced to mediocrity because of the 9800 GTX+" and "when CF is faster than the GTX 280 both cards are fast enough so it doesn't matter" - it really speaks volumes.
Misquoting Anandtech? Really? Where? Oh, you took the bait of some guy highlighting the first 3 words instead of the next 3 words and calling it a misquote. Good job.
I never said the 4850 was mediocre because of the 9800GTX+; I said it was mediocre compared to 7-8-month-old parts that brought far more in terms of performance and cut far deeper into existing prices relative to performance. Are you going to argue that's not the case? Do you think the 4850 was a more impressive part in terms of price/performance than the 8800GT? Honestly, people who even attempt to make that comparison have no sense of perspective.
My reference to CF vs. the GTX 280 was to show there may not be much point in upgrading to CF at all, or in expecting vast performance gains with the 4870X2, due to bottlenecking.