Originally posted by: BFG10K
Sorry, no, it can't. You keep repeating this but repeating it doesn't make it right. The driver can't remove a game framerate cap. If the game is limiting frames every 5 ms the driver can't go in there and adjust the game's tick loop.
About the only thing the driver could do is ignore vsync in which case the driver is broken; but then reviewers shouldn't be running with vsync in the first place as it makes their tests broken.
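To illustrate: a game-side cap typically looks something like the minimal C++ sketch below (all engine names are hypothetical stand-ins). Nothing the driver does can make this loop present frames any faster, because the wait lives in the game's own code.

    #include <chrono>
    #include <thread>

    // Hypothetical stand-ins for the engine's real work.
    void UpdateSimulation() {}  // game tick: input, AI, physics
    void RenderFrame() {}       // submit draw calls, then Present()

    int main()
    {
        using namespace std::chrono;
        const auto kMinFrameTime = milliseconds(16);  // ~60 fps cap

        for (int frame = 0; frame < 600; ++frame)
        {
            const auto start = steady_clock::now();
            UpdateSimulation();
            RenderFrame();

            // Sleep off the rest of the frame budget. This happens
            // entirely inside the game, so no driver setting can
            // make frames arrive any faster than the cap allows.
            const auto elapsed = steady_clock::now() - start;
            if (elapsed < kMinFrameTime)
                std::this_thread::sleep_for(kMinFrameTime - elapsed);
        }
    }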
Broken vsync doesn't explain away FPS capped at 60 for single-GPU but only slightly higher for multi-GPU. Poor CF/SLI scaling, maybe, but that's highly unlikely given the differences (or lack thereof) between 1920 and 1680 for both single and multi-GPU.
I provided benchmarks that demonstrated it didn't affect any of the current round of benchmarks and I'm still waiting for your evidence otherwise. Until you provide such evidence please do not bring up this issue again or I will assume you are trolling.
Benchmarks that were never in question to begin with, yet you still haven't answered whether the option needs to be turned off in the .ini or whether it was patched out. Clearly you're being evasive, so I'm going to assume it's still on by default and that you were purposely being deceptive in trying to claim the feature didn't exist or wasn't a problem. Luckily users CAN turn the feature off in UE3.0 games, but other games might have similar frame caps that are less obvious or can't be turned off.
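For reference, here's roughly what the relevant UE3.0 .ini section looks like (keys as they appear in UT3-era engine inis; the exact file name and default values vary per title, so treat the numbers as illustrative):

    [Engine.GameEngine]
    bSmoothFrameRate=FALSE
    MinSmoothedFrameRate=22
    MaxSmoothedFrameRate=62

With bSmoothFrameRate=TRUE, the engine clamps the frame rate into that min/max band regardless of what the GPU could actually deliver.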
My first mention of Bioshock had a disclaimer saying I was pretty sure there was no frame smoothing. I clearly identified the difference with the two other UE3.0 games I own, GoW and Mass Effect, which do have it enabled by default.
So what if it averages 72 FPS? That's the game's cap according to Derek.
60 FPS cap? What the hell are you talking about? You just quoted Derek saying there is a 72 FPS cap!
You're saying what exactly? That CF is breaking your fictional 60 FPS cap by getting 72 FPS (which just happens to be the actual cap Derek described), thereby proving multi-GPU can "work around" game caps?
LMFAO.
72 FPS isn't a universal frame cap, though; I average much higher on a similarly clocked quad core and a GTX 280 with 2xAA at 1920. Yet no single GPU was able to average over 60 FPS at 1920x1200, even after dropping down to 1680. Don't you think it's obvious there's a frame cap for single GPU that multi-GPU can exceed? There is no vsync option in Witcher, but the game does dynamically change options based on system specs (like the available AA level).
But there are plenty of examples where they aren't hitting a CPU cap but you're simply making sweeping generalizations. Take the CoD 4 results: the 4870 provides 90% of the GTX 280's performance at less than half the cost and half the VRAM. I don't know how anyone can claim that is a CPU limitation.
How am I making sweeping generalizations? I'm pointing to specific instances and benchmarks that clearly show flat scaling between resolutions, frame caps across different configs, and no scaling with multi-GPU. You're the only one pointing to strawman instances where there clearly is no frame capping or CPU bottlenecking occurring. Have I mentioned COD4 once as a bottlenecked situation? No. Have I mentioned 2560 with 8xAA? LMAO. No.
I agree, but that was never under contention.
But even in that example, you can see that the fastest single GPUs like the GTX 280 and 4870 are very close to the multi-GPU solutions that don't scale well. Again, the point is that these are averages: the single GPUs are being bottlenecked too, with their maximum capped, and they only average lower FPS because they spend more time at lower frame rates rendering more intensive frames than the multi-GPU solutions do.
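To put numbers on it (made up purely for illustration): a capped single GPU that spends 70% of a run pinned at 60 FPS and 30% at 40 FPS in heavy scenes averages 0.7 x 60 + 0.3 x 40 = 54 FPS, even though its ceiling never moved. A multi-GPU setup that holds 60+ through those same heavy scenes reports a higher average without being any faster in the capped stretches.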
Where? Your link http://www.anandtech.com/video/showdoc.aspx?i=3341&p=21 shows no such thing. Furthermore, since when is Oblivion capped at 60 FPS?
You clearly need to look harder. In Witcher up to 1920, no single GPU averages more than 60 FPS except the GTX 280 at 1680x1050 at 60.6 (again, FRAPS sync issues no doubt). Without knowing exactly what area Derek and Anand tested, I can say it's most likely a frame rate lock at 60 FPS, which would imply vsync, except Witcher has no vsync option. All the other cards at that resolution drop below 60 FPS for various durations, bringing their averages below 60 FPS. Yet the multi-GPU solutions do not exhibit this behavior, with every single one exceeding a 60 FPS average except the 3870X2 at 1680. The same is true for Assassin's Creed, although the difference is less pronounced. It can't be explained simply as multi-GPU ignoring vsync in either case, since the results are far too close to the 60 FPS cap and the single-GPU figures, yet only multi-GPU is able to exceed that cap.
And I don't know when Oblivion became frame capped as I don't own it, but it clearly is capped in AT's review.
Based on your response to what you quoted I don't think you understand what you quoted. I also don't think you understand what micro-stutter is or even how multi-GPU scales.
Pre-rendered frames are purely a function of the driver. If the game tick is limiting frames to begin with, pre-rendering won't cause the tick to be lowered.
If the game is limiting frames without a true CPU bottleneck then I don't see how the driver couldn't queue more pre-rendered frames.
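To sketch the difference (minimal C++, all names hypothetical): if presentation is locked to the game tick, then yes, a deeper pre-render queue buys nothing; but if only the tick is fixed and rendering runs free, as below, nothing stops the driver from queuing more frames.

    #include <chrono>

    // Hypothetical stand-ins for the engine's real work.
    void SimulateTick() {}         // fixed-rate game logic
    void RenderInterpolated() {}   // draws between ticks, then Present()

    int main()
    {
        using namespace std::chrono;
        const auto kTick = milliseconds(5);  // the "every 5 ms" game tick
        auto nextTick = steady_clock::now();

        for (int frame = 0; frame < 1000; ++frame)
        {
            // Only the simulation is pinned to the 5 ms tick.
            while (steady_clock::now() >= nextTick)
            {
                SimulateTick();
                nextTick += kTick;
            }

            // Rendering is NOT gated on the tick here, so the driver
            // is free to queue pre-rendered frames and the frame rate
            // can exceed the tick rate. If this call were made once
            // per tick instead, no driver setting could lift the cap.
            RenderInterpolated();
        }
    }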
You've made one sweeping generalization and each time we have this discussion you include more and more games. CPU bottlenecking? Frame capping? In Oblivion at 2560x1600? Please tell me you're joking.
When did I say anything about 2560x1600? Are you denying that there is frame capping and/or CPU bottlenecking in Witcher, Oblivion, Crysis and Assassin's Creed at resolutions up to 1920, based on the data in Anandtech's review? That is 4 out of 7 games tested at 2 out of 3 resolutions; you can either acknowledge it skews actual performance, or you can focus on the other 3 games and the resolutions that clearly aren't CPU bottlenecked.
Also, I thought you just told us Oblivion has a 60 FPS cap, but now it's CPU bottlenecking? And in the next post no doubt you'll deny you ever mentioned Oblivion and claim I'm making up scenarios. :roll:
I don't own the game and I wasn't there when it was tested, so I can't say for sure, but it's obvious it's one or the other. You can either acknowledge it or you can focus on semantics. I'm sure you'll focus on the latter. That's what you do in the absence of substance.
I provided several benchmarks that debunked your claim, and until you provide relevant benchmarks to the contrary, do not bring up this topic again.
Debunked my claim of what? I never explicitly cited UT3 as an example of frame capping or CPU bottlenecking; I merely used it as an example that games employ frame-capping methods the end user may or may not know about. I asked if it was disabled by default; you still have not answered, so I'm going to assume you must still disable it in the .INI. I also provided two other popular titles that use some type of frame capping or performance smoothing based on CPU speed: Witcher and Assassin's Creed.
No, I think your appraisal of the situation is overblown. I also think you constantly shift the goalposts from vsync to capping to CPU limitations to multi-GPU whenever it suits your agenda, without ever actually providing any evidence to back your claims.
So do you think AT's review of the GTX 280, 4870 and other multi-GPU configurations is an accurate portrayal of performance or not? It's plainly obvious that 4 of the 7 titles are bottlenecked or capped at 2 out of the 3 resolutions tested. The cause, be it vsync, CPU, GPU, obscure settings, etc., is irrelevant. Is it accurate based on the data?
It might be, but then the situations you describe aren't the ones people are drawing inferences from.
Judging from the 200+ responses from the AT 4870 article, I would say they are.....
Yep, absolutely. This is what you said:
Well, I'd say it's a bit premature to say GT200 is a flop; if you look at this latest round of reviews I think you'll see that there's quite a bit of CPU bottlenecking and frame capping going on, even at higher resolutions like 16x12 and 19x12. That's not to say the 4870 isn't a great part, it is, but clearly a large part of the reason it's so close to the GTX 280 is CPU bottlenecking.
For example, quoted from the AT article:
See the highlight, your claim of CPU bottlenecking? Now let's see what you quoted from Anandtech to "back" that claim:
Performance of the Radeon HD 4870 continues to be strong, but because of the frame rate cap we're not able to see if the GTX 280 could stretch its legs further and eventually outperform the 4870. In actual gameplay, the 4870 and GTX 280 appear to be equals.
Nowhere in that quote does it mention CPU bottlenecking.
Rofl right, I forgot to mention frame rate capping in the 2nd instance. I guess next time I should properly reference everything and add my sources in an appendix as well, assuming someone focused more on semantics than substance, such as yourself, will even read it? LMAO.
Now let's look at the missing section of the quote, the one you conveniently left off when using it as "evidence" for your claims of CPU bottlenecking:
Assassin's Creed is capped at near 60 fps, which is why we see most cards reaching but not significantly exceeding that marker.
This is why people don't take you seriously. You chop and change whenever it suits your agenda, misquote, and then claim you never made such claims when called out.
Rofl, and you think you come off any better when you focus on semantics instead of substance? I used the terms interchangeably because it's unclear which limit is at play, but there's obviously something going on which you still fail to acknowledge at all. Jarred also thought there was CPU bottlenecking occurring, so I guess he's misquoting AT as well?
Here's the quote from you: "Case in point is the GTX+ that needed a clock speed boost to push 4850 back into mediocrity."
You're at the stage now of denying things that you said in the past.
Nice out-of-context quote. Its mediocrity wasn't in comparison to the paper-launched GTX+; it was to the 7-8 month old G92 parts. Mentioning the GTX+ was to illustrate that a simple clock speed bump on old parts was all it took to show how mediocre the 4850 actually was.
Actually it looks like half of those quotes refute what you've been claiming. You were claiming game caps in Assassin's Creed when Jarred points out CPU limitations. You were claiming Anandtech don't force off vsync when in actual fact they don't force it off in the driver (but do so in-game; again, selective quoting on your part). I could go on, but honestly it's a waste of time with you.
What? Jarred wrote that original AC article and, on further inspection, he runs into frame caps/CPU limits as well. The FPS averages higher than 60 are at lower settings. But yes, Jarred's comments are proof that you will focus on semantics when your arguments lack substance.
And when did I say AT doesn't turn off vsync in-game? ROFL. I pointed out why they don't force it off in the driver, going back to the Crysis Tri-SLI review where Derek found that forcing it off in the driver led to worse performance. Are you saying AT is so incompetent that a distinction would need to be made between in-game and driver vsync? LMAO. Sorry, if I had to choose whom to place confidence in, it'd be Anand and Derek over you.