A 5 fps gap at a 50 fps average is a huge difference. It doesn't matter at 20 fps because they're both slideshows, and it doesn't matter at 120 fps because the display wouldn't show a difference, but at 50 fps even a casual gamer would immediately see it.
Below 50 fps, both would feel like garbage because the minimums drop to around 10 at times. Not fun.
SLI, yes. Nvidia features PhysX, which I think will be replaced by DirectCompute; 3D Vision, yes (but people hardly use that); adaptive vsync, which I can't really tell apart from a frame rate limiter and need clarification on how the technologies are different, since the hwcanucks article didn't differentiate the two; and then we have TXAA, which has only been implemented in one game.
Times have changed. The 680 retails for $30 more and performs worse in the majority of games at max settings (info gathered in other forums from people who had both cards). Not to mention the GHz Edition cards are all non-reference and don't have the heat issues the reference design did.
DirectCompute =/= PhysX.
A framerate limiter keeps your cards running just fast enough to hold the FPS you select. For example, Skyrim has issues when you run at 200 fps, so I lock it to 80 fps and all the flashing textures and AI weirdness go away. The bonus is your cards run cooler, use less power, and are much quieter. When I limit the FPS to 80 I still keep vsync off to eliminate input lag; the cards work just hard enough to hold that 80 fps and don't waste power when it's not needed.
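If it helps to picture why that saves power, here's a rough sketch in Python of how a sleep-based limiter works. This is just my own illustration (a real limiter lives in the driver or game engine, and render_frame here is a hypothetical stand-in for the actual render work):

```python
import time

TARGET_FPS = 80                      # the cap used for Skyrim in the example above
FRAME_BUDGET = 1.0 / TARGET_FPS      # 12.5 ms per frame at 80 fps

def run_frame_limited(render_frame, frames=1000):
    """Toy sleep-based limiter: render, then sleep off any leftover frame budget."""
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                                   # stand-in for the real game/render work
        leftover = FRAME_BUDGET - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)                         # GPU/CPU sit idle here: less heat, power, noise
```

The sleep at the end is the whole trick: any time the hardware would have spent racing to 200 fps is spent idle instead.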
Adaptive vsync is not the same, and I cannot believe you couldn't figure out the difference yourself. It's like regular vsync except without the big FPS drops. Vsync without triple buffering drops to whole divisors of the refresh rate when the fps goes down (assuming a 60 Hz refresh rate), because each frame has to wait for a whole number of refresh intervals. You don't go from 60 fps to 45 fps. Instead you go to 30 fps, then 20, then 15. That is why games sometimes stutter if you cannot keep a constant 60 fps and you get a drop: it drops by a huge amount. Adaptive vsync dynamically turns vsync off when you get the drops, so you don't get the stuttering effect and no 30 fps hit, and turns vsync on again when you are at 60 fps (or 120 fps if you have a 120 Hz monitor, or even 85 Hz on a CRT). It never eliminates screen tearing completely, but it does eliminate the drastic FPS drop due to lack of triple buffering.
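To make those numbers concrete, here's a toy model in Python of both behaviors at a 60 Hz refresh. This is my own illustration of the math, not driver code: plain double-buffered vsync rounds each frame up to a whole number of refresh intervals, so FPS snaps to 60/1, 60/2, 60/3..., while adaptive vsync just runs unsynced below 60.

```python
import math

REFRESH_HZ = 60

def plain_vsync_fps(raw_fps):
    # Each frame waits for a whole number of refresh intervals,
    # so effective FPS snaps to 60, 30, 20, 15, ... (REFRESH_HZ / n).
    intervals = math.ceil(REFRESH_HZ / raw_fps)
    return REFRESH_HZ / intervals

def adaptive_vsync_fps(raw_fps):
    # Sync only when the GPU can hold the full refresh rate;
    # below that, run unsynced (tearing possible, but no snap to 30).
    return REFRESH_HZ if raw_fps >= REFRESH_HZ else raw_fps

for fps in (75, 59, 45, 25):
    print(fps, "->", plain_vsync_fps(fps), "vs", adaptive_vsync_fps(fps))
```

Run it and you see the stutter in the numbers: a GPU that can manage 59 fps raw gets hammered down to 30 under plain vsync, while adaptive vsync keeps it at 59.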
Oh boy, not this again. I'm sure both of you have had quad-SLI experience and zero hard lock/CTD issues. There's a reason people like myself and others (l88bastard, levesque, tsm106, vega, etc.) use 7970s for multimonitor setups instead of our 680/690s, and microstuttering is obviously not one of them. There's barely any microstutter from either vendor unless I max out the AA/AF at 5760x1200.
Anyways, I know I won't sway anyone's beliefs here so I'll just show myself out of this thread.
Who was talking about multimonitor? Who mentioned quad SLI? Nobody...so you're replying to a ghost or something.
I'm sensitive to MS and tearing, and just a bit of a perfectionist about these things. I notice MS in xfire at anything <60 fps with vsync on. Makes for fun times catering to a 60 fps minimum, but so far so good.
With a single card I also don't like the dip from 60 to 40/30, though.
I'll say I prefer 1 card at 40 fps over xxx fps with MS. But for me it's 60 fps locked or bust these days.
Vsync is a must for MS, though this was the same with my 460 SLI setup. Some games are worse than others; standouts are Skyrim, and Heaven also shows MS well.
I've grown accustomed to turning vsync off no matter what; that way I get max performance with no input lag. To me input lag is far worse than any tearing (if I'm above 70 fps I don't see much tearing anyhow). Microstutter is something I ignore if I see it. It's never bad enough to affect me.
The thing is, their statement is that the 680 is no longer the more desirable card, because the 7970 GHz is only available in non-reference flavors that eliminate the heat and noise. So you're left with a stalemate between a $500 card and a $470 card.
Power consumption too...can't forget that. Plus, did you notice? The way AMD chose to compete with Nvidia was to market Boost clocks! That's right...take the same idea Nvidia had and use it on their own cards. Only wait, they upped the voltage too much and shot the power consumption up unnecessarily as a result. Nvidia still does Boost better than AMD.