hokies83
Senior member
- Oct 3, 2010
$330 AR, an OC'd GTX 670 on a GTX 680 PCB, plus Borderlands 2, makes everything else look stupid right now: http://www.newegg.com/Product/Produc...82E16814162107
Fixed.
Fixed Again
RS, what about FP16 performance? Wasn't Kepler supposed to have an advantage there? That's one area I have little information on. This is a real question--I'm curious.
Not sure if serious. Addressed this at least 3x already in different threads.
You are still stuck on Kepler vs. GCN and trying to nitpick apart what has been shown already over and over? Both architectures have their positives, as already outlined. Take a game with GPU cloth, tessellation, and extreme particles -- watch Crysis 3, where the GTX 680 beats up the HD 7970. I don't design the games. Put in things that Kepler is good at and GCN is going to be slower. Put in things GCN is good at and Kepler is toast. Wait for Borderlands 2 and Crysis 3 before writing off the Kepler architecture.
How does that explain an overclocked 7970 GHz Edition being slower than a stock card, as illustrated by RussianSensation?
Granted, the average and max went up, but the minimum dropped way off.
TWIMTBP will make sure of that, at least at game launch. But we've seen time and again that when AMD is given the opportunity to tune its drivers, the tables turn. Plus, the 8000 series will be what Kepler is going up against; I'm unsure if Nvidia will have a refresh by then.
I think GTX680 > 7970 in Crysis 3.
RussianSensation,
can you stop your marketing posting sessions, pls?
Repeating this "DirectCompute" stuff is ridiculous. In none of the three games does GK104 show a "DirectCompute" problem.
AMD's problem is some kind of Amdahl's law: a too-narrow front-end for too many processing units. That's why you're seeing stuff like downsampling (OGSSAA) and Forward+ (a brute-force rendering technique for more light sources) in more and more Gaming Evolved titles.
The only problem GK104 has is that the chip doesn't have enough compute performance.
TWIMTBP will make sure of that, at least at game launch. But we've seen time and again that when AMD is given the opportunity to tune its drivers, the tables turn. Plus, the 8000 series will be what Kepler is going up against; I'm unsure if Nvidia will have a refresh by then.
The only reason AMD is doing so well is Nvidia's lack of drivers for the 600 series. When they're released, the tables will turn.
Meh, I'm down a GPU, and I have a G1 Sniper 3, so I'm gimped at the moment. But when I was running SLI, I had the 20th-highest SLI/Crossfire 3DMark score on OCN. I expect to move up to the top 10 when I get my card back and do some tweaks.
Just like AMD was doing so badly and was "the worst company" because they didn't have drivers for the 7000 series until the 12.7 beta.
I'm kind of interested to see if AMD's new driver and developer-relations campaign is going to turn the tables in the GPU landscape. I'm really loving the things they are doing lately, and the changes in leadership at AMD seem to be the "catalyst" (pun intended) moving things forward for AMD's graphics division.
I truly hope they can keep it up and win some mindshare as well as marketshare from gamers out there. If they compete on more levels with nvidia we all reap the benefits.
Source? GK110 will be out.
RS is a straight shooter. I've been reading his posts for years; he certainly calls it like he sees it, and I've seen him praise Nvidia in the past too. I don't think any other poster here puts up as many informative, benchmark-backed posts as RS.
Thanks for the support SlowSpyder! I try to stay neutral for both brands. I am not always right, but I try to contribute where I can. For example, when AMD cards had driver issues in Guild Wars 2 Beta, I put up some info on that for better or worse; and then other posters later provided updates that the performance was fixed. When Fermi was much faster in tessellated scenarios last generation, such as in Crysis 2, I made sure to note that for prospective buyers. It's just sharing of information, with no monetary involvement if AMD or NV sell more cards based on what I say. People have also mentioned that I promote bitcoin mining to sell more AMD cards, but what I am promoting with that functionality is an option for gamers to upgrade to a faster modern GPU and maybe try and save some $ to ease the cost of gaming with newer hardware. There is no conspiracy theory.
I can understand the appeal of having a top score in a popular benchmark that is shared by enthusiasts on a forum. However, 3DMark11 is not a game. Thus, unless its performance reflects how videocards perform in real-world games, it's a synthetic benchmark and at best a showcase of some graphical features that may or may not be implemented in future titles. For example, at Xbitlabs, the MSI GTX 680 Lightning, GPU-boosted to 1324MHz, scores 4094 in 3dMark11 -- a commendable 16% lead over the 1250MHz HD 7970 -- and yet the Lightning loses overall in games at high resolutions in the same review.
At TechPowerup, at 2560x1600, the HD 7970 Toxic and GTX 680 are tied at 8.2 fps in 3dMark11, with the Toxic at its 1200MHz Lethal Boost having only a 6% advantage (8.7 vs. 8.2 fps). In 18 games, the HD 7970 Toxic is actually 12% faster, while at Lethal Boost it is 19% faster than a GTX 680 at 2560x1600.
GTX680 vs. GTX580 in 3dMark11 at 2560x1600 = + 58% (8.2 vs. 5.2 fps)
GTX680 vs. GTX580 in 18 games at 2560x1600 = + 31% (89% vs. 68%)
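For anyone checking the math, the percentage leads quoted above are just relative deltas between the two scores. A quick sketch, using the numbers from the post itself (the `percent_lead` helper is just for illustration):

```python
def percent_lead(score_a: float, score_b: float) -> float:
    """Relative advantage of score_a over score_b, in percent."""
    return (score_a / score_b - 1.0) * 100.0

# 3dMark11 GPU test at 2560x1600: GTX 680 = 8.2 fps, GTX 580 = 5.2 fps
print(round(percent_lead(8.2, 5.2)))  # 58 (% lead in 3dMark11)

# 18-game relative performance at 2560x1600: 89% vs. 68%
print(round(percent_lead(89, 68)))    # 31 (% lead in actual games)
```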
What do all of these examples tell me about 3dMark11? It's not a very accurate predictor of how videocards will actually perform in modern video games. All it tells me is how videocards perform specifically when running 3dMark11's test suites.

Furthermore, while you say that NV can simply improve drivers -- which I do not deny -- it could also be that NV optimized heavily for 3dMark11 specifically, and that the performance delta between the 680 and 7970 in 3dMark11 is a function of NV's better driver optimization for that one benchmark. Or perhaps one of the test suites in 3dMark11 uses some graphical feature that runs much faster on the GTX 680 (I haven't researched the test suites in detail to confirm whether, say, they are very heavy on tessellation). It could actually be the case that 3dMark11 is the outlier, not the real-world gaming performance. I am much more inclined to believe that 3dMark11 is not representative of the GTX 680's performance, but rather a best case for the 680.
I do not think the 680 is any better for 3DMark than a 7970; the highest 7970 on OCN is #12 on the scoreboard.
But in Quad-fire vs. Quad SLI, Quad-fire is on top, followed by 2x 690s.
I think you missed the point of what I am saying. Someone might have scored high in 3dMark11 on a water-cooled 7970 that reached 1350-1400MHz. 3dMark11 is so far off the mark for actually comparing GPUs in games that it's basically worthless. The GTX 680 has a huge lead in 3dMark11 over a stock 580 and beats a 7970 easily, but that % lead does not translate to real games. In other words, 3dMark11 tells us nothing about how the GTX 580/680/7970 stack up against each other in games. In 3dMark11 a 680 is 58% faster than a 580 -- so does that mean the 680 is 58% faster on average in DX11 games than a 580? Of course not!
At TechSpot, it took a 1200MHz 7970 to barely beat a reference 680. In real-world games, a 1200MHz 7970 (the Sapphire Toxic) was 19% faster than a 680.
This is why I still don't understand why people even look at 3dMark11 scores. They are meaningless for comparing videocards. The only purposes they serve are to stress-test your GPU, to show off possible cool effects that haven't yet been used in games, and e-peen. The current 3dMark11 version is probably the worst in recent years, since it's really inaccurate for trying to extrapolate AMD vs. NV performance in actual games.
When the 7900 cards launched, the drivers were pretty unpolished. I know; I was an early adopter. The normally fantastic (for me) AMD drivers were not nearly as good as the drivers for my 5870s. You could see there were a lot of areas that needed improvement from a functionality standpoint (disappearing cursor, won't wake from sleep, etc.), so it was reasonable to expect that the performance of the cards was just as crippled by the early drivers. I think Nvidia's cards launched with much better drivers than the 79xx cards, so who knows how much more can be tweaked out.
Pretty much. 3dmark11 does not match real world performance, and nvidia has a heavy lead in 3dmark11 generally speaking - while AMD does better in Heaven.
There are people obsessed with benchmarking synthetics, and that's a subculture I'll never understand. I don't really see the point of overclocking your system to insanity to get 200 more points in 3dMark11; it's ridiculous. And it doesn't help real-game performance.
If they compete on more levels with nvidia we all reap the benefits.
Of course. TWIMTBP
I still think that AMD drivers are unpolished...
Heck, we have yet to see VCE in action from the GPUs (as opposed to the CPU-based encoding on Trinity).