Originally posted by: Azn
To whom? Maybe to yourself, and that's about it. :laugh:
Oh right, I should've said it was obvious to those who weren't incompetent.
Yeah right. You were agreeing with everyone else in that thread about a year and some months ago, back when we had the long thread before it got locked. It was me who got Keys pissed off enough to disable or downclock the SPs in the 9600GT thread. You can ask Keys yourself. Things have changed though. We have quite a few games now that are dictated by shader performance.
Uh, no I didn't. I stated early on that SPs were overrated in the games that were current at the time, as was obvious to anyone who actually used the hardware, and you even took exception to it in that G94 HardOCP comparison thread where you couldn't make sense of simple apples-to-apples comparisons.
Shader clocks were new, so obviously messing with the clocks and ratios was interesting. Anyone curious enough to test clearly saw shader clock increases did not have nearly the impact of raising the core clock. With a G80 GTS at the time, with its stock 513MHz core clock, it was obvious that increasing the core clock 20% yielded a much larger gain than increases to shader or memory; same with G92, G94, GT200, etc.
Umh... That's not even remotely close to what I said. That discussion was part of your ongoing "pixel performance matters most" argument you've been trying to spread on the internet. To prove that pixel performance doesn't matter as much, I compared the 2900XT and 3870: the more bandwidth a card has, the more pixel fillrate it has, and even though they're clocked the same the 2900XT doesn't perform any better. That was the argument. Obviously you didn't get it, but then again you don't get much of anything.
Uh no, I said bandwidth was wasted if it was unused and pointed to these nearly identical parts with different bandwidth as definitive proof. You kept trying to point to insignificant differences between the parts when they performed identically, despite the superior bandwidth of the 2900XT. Glad you finally understand the error you made then; perhaps you can take a step back and apply that knowledge here. :laugh:
So you agree that GTX 280 bandwidth is wasted, but for a G92 card that has nearly the same texture fillrate as a GTX 280 you're saying bandwidth isn't the biggest factor? ROFL!!! Go ahead, downclock your memory to 550MHz and watch that performance shrink to 9800GTX+ levels. :laugh:
Yep, I've shown bandwidth has less impact on performance by underclocking it 27% and seeing only a 3-8% decrease in performance. I can say with absolute certainty performance would drop by more than 3-8% from dropping core clocks. Likewise for a 9800GTX+, I can say gains from core/shader clocks are going to be greater than gains from memory bandwidth increases. And that's something I don't need to prove, as it's been proven time and time again when comparing 8800GT to 8800GTS to 9800GTX to 9800GTX+, where memory bandwidth was essentially fixed due to GDDR3 speed limitations, yet performance always increased between parts based on clock speed. Pretty impressive gains for a part that's supposedly bandwidth limited.
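Here's the rough bandwidth arithmetic behind those numbers, for reference; the memory clocks and bus widths are quoted from memory, so treat the exact figures as ballpark. GDDR3 transfers twice per clock, so GB/s is just memory clock × 2 × bus width in bytes; a 550MHz downclock on the GTX 280's 512-bit bus lands right at 9800GTX+ bandwidth, and the GDDR3 G92 parts all sit in the same ~58-70GB/s band while core clocks climb between models:

```python
# Rough bandwidth arithmetic (reference clocks quoted from memory, so approximate).
# GDDR3 is double data rate: GB/s = mem_clock_MHz * 2 * (bus_width_bits / 8) / 1000
def bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    return mem_clock_mhz * 2 * (bus_width_bits / 8) / 1000

cards = {
    "GTX 280 (stock)":        (1107, 512),
    "GTX 280 (mem @ 550MHz)": (550,  512),
    "9800GTX+":               (1100, 256),
    "8800GTS 512":            (970,  256),
    "8800GT":                 (900,  256),
}
for name, (clk, bus) in cards.items():
    print(f"{name:24s} ~{bandwidth_gbps(clk, bus):5.1f} GB/s")
```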
But then how many times do I have to repeat myself to you that G92 is bandwidth starved? If it had the bandwidth it could easily outclass a GTX 260. You can easily figure this out by downclocking your memory to 550MHz and comparing your results with a 9800GTX+.
Heh, sure it could, except the 9800GTX+ is never faster than a GTX 260, even at lower resolutions that aren't bandwidth intensive. I don't see how you can't get this through your head: despite theoreticals that exceed the GTX 260, the 9800GTX+ is never faster. You could drop the resolution to 1280x1024 and, as long as you weren't completely CPU limited, the GTX 260 would still be faster. Is it still bandwidth limited at 1280?
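For context, here's what those theoreticals look like side by side; the unit counts and clocks below are reference figures quoted from memory, so treat them as ballpark. The 9800GTX+ leads on texture fillrate and is roughly even on shader throughput, but it trails badly on pixel fillrate and bandwidth:

```python
# Quick theoretical-rate comparison (unit counts and clocks quoted from memory).
def theoreticals(core_mhz, shader_mhz, sps, tmus, rops, mem_mhz, bus_bits):
    return {
        "texture fill (GT/s)": core_mhz * tmus / 1000,
        "pixel fill (Gpix/s)": core_mhz * rops / 1000,
        "shader (GFLOPS)":     shader_mhz * sps * 3 / 1000,  # MADD+MUL per SP per clock
        "bandwidth (GB/s)":    mem_mhz * 2 * bus_bits / 8 / 1000,
    }

gtx_plus = theoreticals(738, 1836, 128, 64, 16, 1100, 256)   # 9800GTX+
gtx_260  = theoreticals(576, 1242, 192, 64, 28,  999, 448)   # GTX 260 (192 SP)
for key in gtx_plus:
    print(f"{key:20s} 9800GTX+ {gtx_plus[key]:6.1f}   GTX 260 {gtx_260[key]:6.1f}")
```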
Only when AA is applied. Again, bandwidth limitations, as shown even in your own benchmarks when you downclocked your memory. Once you remove most of the bandwidth limitations by not using AA, it becomes obvious. Then again, you can't figure out a simple thing like bandwidth limitations and restrictions when it comes to AA.
BS, bandwidth is the same with G94 and G92; you're wrong here, you never could explain away G94 and you clearly still can't. G94 clearly shows TMUs and SPs have less impact on performance than ROPs, given it has the same bandwidth as G92.
I'm really dodging anything you muster. :laugh:
Of course you are, that's what you do when you're clearly wrong and just trolling.
Because it has bandwidth which the 9800GTX+ does not. It can also do 1200 GFLOPS and the 9800GTX+ cannot. Isn't it obvious where the raw performance difference comes from? If you'd paid attention to what I was saying you wouldn't even be asking me stupid questions like this. But you thought it was a bright idea to ask me anyway.
What does bandwidth matter when it has less Texture Fillrate to begin with? According to your claims it shouldn't need the additional bandwidth, yet the 4870 runs circles around the 9800GTX+. Are you saying the 9800GTX+ is completely bandwidth limited to the point a further increase in core clocks (and as a result, higher texture fillrate) would yield no improvement? And why are you bringing up SP performance, are you finally going to acknowledge other factors impact performance before fillrate becomes an issue?
How would a 4870 supersede a GTX 280 as far as render back-end performance goes, when the 4870 can only do 16 pix/clk with no MSAA while a GTX 260 can do 28 pix/clk? The only time a 4870 should be 12% better than a GTX 260 is with AA, and it should have absolutely no advantage over a GTX 280.
Hilarious that you claimed you knew what you were talking about. Here's a hint: RV770's ROPs were reworked so that they can write/blend 2 pixels per clock, meaning they behave like an effective 32 ROPs that each write 1 pixel per clock.
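Back-of-envelope pixel fill rates, if you want numbers on it; the clocks are reference figures from memory, and the "32 px/clk" row simply takes the 2-pixels-per-clock claim above at face value:

```python
# Pixel fill in Gpix/s = core clock (MHz) * pixels per clock / 1000.
parts = {
    "HD 4870 (16 ROPs nominal)":   (750, 16),
    "HD 4870 (32 px/clk claimed)": (750, 32),
    "GTX 260 (28 ROPs)":           (576, 28),
    "GTX 280 (32 ROPs)":           (602, 32),
}
for name, (core_mhz, px_per_clk) in parts.items():
    print(f"{name:28s} ~{core_mhz * px_per_clk / 1000:4.1f} Gpix/s")
```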
You were trying to compare CPU L2 cache performance in games to GPU core size. In a server environment L2 cache does a whole lot more than CPU MHz, which crumbles your feeble argument.
So were you referring to server performance with GT200's ROP size? You're just grasping at straws here now. You claimed the size of a logical unit should somehow be proportional to its impact on performance, when that's clearly a flawed assessment.
Why the "F" not? Crysis is the most GPU intensive game there is to PC gaming. Cut you bandwidth to 50% to match 9800gtx+ which would be 550mhz. Considering GTX 280 is only 30% faster than 9800gtx+ in raw performance you should get a pretty good idea 9800gtx+ is bandwidth starved. Test AA performance and then comeback. Perhaps you don't want to downclock your memory and benchmark because it would literally make your arguments obsolete.
I already know why, it's because Crysis is one of the few titles that actually scales with all facets of the GPU: core, memory, and shader. But of course the exception doesn't make the rule, which is why you're reluctant to test any games except Crysis.
You proved me wrong? Like when you said bandwidth wouldn't matter at lower resolutions with AA? :laugh: How about when I said a full G92 would show the same results as my 8800GS? :brokenheart:
I said bandwidth was less relevant at lower resolutions with AA; when you take into account that GT200 doesn't need all of its bandwidth, and the small performance difference with less bandwidth, I have proven you wrong. Bandwidth isn't the issue you claim it is with the GTX 280, 295 and 260, plain and simple. As for G92 and the GS, I never made any such claim; I asked which results you were showing and said it was obvious a bandwidth-crippled card would show adverse effects at higher resolutions.
Maybe you need to go back to your analysis and do some more testing, because you've been proven wrong multiple times in this thread alone.
Making a sound upgrade? Like a $500 GTX 280 which you overpaid for, and which performs worse than a 9800GX2 or a 4870X2? :laugh:
It's hilarious, really, how Nvidia just happens to release parts so perfect for proving you wrong. G94 was the perfect part that really blows away all your arguments; the fact you continue to ignore these clear indicators just reinforces my belief you're too incompetent to understand such fundamental and elementary concepts. GTX 295 was another clear indicator, and you still don't get it.
As for my buying habits, don't worry about me, I got my GTX 280 for $314 ($65 after selling my 8800GTX) on launch day, like many others on these forums. It's been worth every penny and is easily one of the best pieces of hardware I've owned. Awesome part without a doubt; anyone who has owned one would concur, and would certainly avoid making ignorant comments like "the GTX 280 isn't much better than the 9800GTX+". Maybe if you'd held off on all the low-end parts you've bought in the same period of time, you might've been able to pick one up yourself.
Without AA it's 10-12% better, if that. That's really awesome considering it's also clocked 4% higher on the core, 4% higher on the shaders, and has 27% more bandwidth. :roll:
Did you know that you've been repeating that same line about three times in every single one of your posts without proving anything? "ROPs have the greater impact on performance." How sad that you have to keep telling yourself that just to feel better.
And that 10% turns into 15-25% with AA. It's OK though, really; believe what you like, it'll just be different from what knowledgeable observers conclude, including the reviewers I've already linked to.
It's useless when you just keep telling me the same crap over and over again without any proof. I said bandwidth would be a factor at low resolutions with AA, which it was. That's been proven at this point. You need to prove yourself wrong again by downclocking your GTX 280 to 576/1242/872 and then coming back with the results. :thumbsup:
LMAO, no proof. That clearly shows the difference between GTX 280 SLI and the GTX 295 is far greater than the difference between the GTX 295 and GTX 260 SLI. Again, how would downclocking the GTX 280 by 4% show anything other than a <=4% decrease in performance? I've already shown a 27% cut in bandwidth yields a 3-8% difference; combined with a 4% linear decrease at most from core/shader, it still can't account for the remaining difference between GTX 295 and GTX 280 SLI. How hard is that to understand?
I already proved it with Crysis benches on my 8800GTS, but hey, if you want to be ignorant that's on you.
No you didn't. You had already increased the performance of the shader/core so that they needed more bandwidth, and then reduced bandwidth further. That is very different from claiming G92 at a certain performance level is bandwidth starved, in which case you would increase the memory clockspeed to demonstrate a tangible gain from the increase. If you're going to claim the 9800GTX+ would beat the GTX 260 if it had the same bandwidth, you don't drop its bandwidth down to that of an 8800GS, heh.
ROFL... In your own benchmarks bandwidth made the difference. Look at those minimum frames. Same pattern as GTX 295 vs GTX 280 SLI. I also said you need to downclock your core to GTX 260 levels, which you haven't even done, to prove yourself wrong again. I'm guessing you're too scared to prove yourself wrong at this point.
Again, a 27% drop in bandwidth for a 3-8% difference in actual performance. Minimum FPS numbers were never in question; I'm sure they would be similar with the GTX 295 at 111GB/s, except we weren't comparing minimums, we were comparing averages. I said bandwidth at lower resolutions with AA was a less significant issue, and it clearly is. Throw in the 4% difference from core/shader and again you'll see the differences cannot be reconciled by bandwidth and clockspeed alone, which leaves ROPs.
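The same argument in arithmetic form, to be explicit about it; the core clock delta assumes 602 vs 576MHz reference speeds (the shader delta is similar), and the bandwidth contribution is the 3-8% figure from the underclock test above:

```python
# How much of the GTX 280 SLI vs GTX 295 gap can clocks and bandwidth explain?
core_delta     = 602 / 576 - 1        # ~4.5%, assume linear scaling at best
bandwidth_gain = (0.03, 0.08)         # measured effect of the extra 27% bandwidth
explained_low  = core_delta + bandwidth_gain[0]
explained_high = core_delta + bandwidth_gain[1]
rop_delta      = 32 / 28 - 1          # ~14% more write/blend capacity per GPU
print(f"clocks + bandwidth explain roughly {explained_low:.0%}-{explained_high:.0%}")
print(f"remaining per-GPU difference: ROPs, {rop_delta:.0%} more throughput")
```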
Just do it and come back. Make sure your shader is also clocked to GTX 295 levels, which is 1242MHz. If I had a GTX 280 I would prove this to you, but then again I'm just not gullible enough to pay $500 for a video card that performs 30% faster in raw frame rates than a $100 card.
Prove what? That a 4% drop in core/shader will result in a linear decrease, at best? Anyone who has owned a G80 or higher part has seen this is the case with most parts. Until then it looks like you're willing to make ignorant comments about $100 and $500 video cards when you clearly don't know what you're talking about. G92, G80 GTX, whatever other low-end part... been there, done that, and I can say with 100% certainty I'm not in any hurry to go back to that level of performance.
What point? Because it has nothing to draw in the first place.
You've been going back and forth, changing it up to the CPU and now to loading screens in a game that has no relevance. Then you go on to say "this proves my point". You've proven nothing. :laugh:
Rofl, nothing to draw? A blank screen still has pixels being drawn. Why don't you run some of your favorite 3DMark benches and note how many FPS you get in theoretical fillrate benches. Are frame rates so high because there's nothing to draw there as well?
It sure was an issue if it dropped 3-8% on average depending on the game and made minimum frame rates drop by 21%.
So are you saying a 27% drop in core clock will result in less than 3-8% difference in performance? Like I said, bandwidth is clearly less of an issue, as I've stated numerous times.
VRAM, so you don't texture thrash. :roll: In a non-VRAM- or non-bandwidth-limited situation it's not much faster.
So why bring it up when it once again, is less of an issue at lower resolutions?
Crying? :laugh: Is that what you've been doing in front of your computer when you benchmarked your GTX 280 and found out I was right all along?
Then again, my 8800GTS showed a much bigger performance drop without AA in Crysis when I lowered the memory clocks than when I lowered the core clocks, which proves that G92 is bandwidth starved.
My results were as expected: much less impact on performance relative to the decrease in bandwidth, which also reinforces my claim that ROPs have the biggest impact on performance, as that's the only remaining difference between the GTX 295's and GTX 280's core architecture.
Again, what do your results show other than that a G92 is bandwidth limited at whatever resolution you're running? You're claiming a G92 is bandwidth limited at a certain performance level, then decreasing memory bandwidth, when you should be increasing it to show a tangible gain.
260 outperforming a 9800gx2? :laugh: You are quite funny. GX2 outperforms GTX 280.
All G92 cards are bandwidth starved.
Actually, that should read 9800GTX+ compared to the G92 GTS, and 9800GX2 compared to GTX+ SLI. You're claiming all G92s are bandwidth starved, yet the 9800GTX+ manages to outperform all slower G92 parts significantly despite the fact memory clocks are limited to available GDDR3 speeds in the 1000-1100MHz range.
Considering the GTX 260 only performs 10% better than the 9800GTX+ in raw numbers, and my 8800GTS dropped 15.8% when I reduced its bandwidth by 28%: when you consider the GTX 260 has 60% more bandwidth than the 9800GTX+, this pretty much proves my theory.
Again, your GTS numbers already show a far-less-than-linear performance decrease from memory bandwidth alone, but you still ignore the fact that the test is flawed, as you'd already overclocked the core/shader, increasing bandwidth requirements further. If you're going to claim something is limiting, you need to demonstrate a gain from increasing that factor, not from decreasing it. Increase memory 10% and see if you get anything close to a 10% gain; I doubt you will.
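If you want a concrete way to quantify that, here's a minimal sketch of the test being described; the function name and the fps numbers are hypothetical placeholders, not anyone's actual results:

```python
def scaling_efficiency(fps_base, fps_oc, clock_base, clock_oc):
    """Ratio of % fps gained to % clock gained; ~1.0 means fully limited by that clock."""
    return (fps_oc / fps_base - 1) / (clock_oc / clock_base - 1)

# Hypothetical example: +10% memory clock yields +2% fps -> efficiency ~0.2,
# i.e. nowhere near bandwidth limited at those settings.
print(scaling_efficiency(fps_base=50.0, fps_oc=51.0, clock_base=1000, clock_oc=1100))
```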
Exactly, it's not static, and no, it's not going to perform identically. When you have a card that's already bandwidth limited, it's going to be even more bandwidth limited when you raise everything by 25%. More bandwidth limited than a card with 75% of the chip's power, in this case my 8800GS. That's why my frames dropped more on my 8800GTS than they did on my 8800GS when I lowered the memory clocks.
Huh? No. If an 8800GS performs within 25% of an 8800GTS at 1280 due to core differences and memory differences, it just shows everything is scaling as expected and bandwidth isn't an issue. If an 8800GS performs 50% slower than an 8800GTS at 1920 or with AA, then it becomes apparent bandwidth isn't sufficient and is badly crippling the part.
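A quick sketch of that resolution check, with made-up placeholder fps numbers just to show the pattern being described:

```python
# If the gap between two cards grows sharply once resolution/AA (and therefore
# bandwidth demand) goes up, the slower card's narrower bus is the bottleneck.
def gap(fast_fps, slow_fps):
    return fast_fps / slow_fps - 1

gap_1280      = gap(fast_fps=60.0, slow_fps=48.0)   # ~25%: matches the unit/clock deficit
gap_1920_4xaa = gap(fast_fps=40.0, slow_fps=26.5)   # ~51%: the extra loss points at the bus
print(f"gap at 1280: {gap_1280:.0%}, gap at 1920 4xAA: {gap_1920_4xaa:.0%}")
```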
And where are the benchmarks to show that it performs identically to the 256-bit bus at lower resolutions? :roll:
That card has 96 SPs. It's not even an 8800GT. It's more like an 8800GS on a 128-bit bus.
Who cares, do your own research. The point is it won't even perform like an 8800GT or 8800GS at higher resolutions because the 128-bit bus is so crippling. You had an 8600GT, you should know exactly what I'm talking about.
I'm sorry, I'm just not as cool as you, dropping $500 on a card just to have a larger e-penis when the GTX 280 is only 30% faster in raw performance than my 8800GTS.
Uh yeah, keep telling yourself that while I'm enjoying all my games at resolutions and settings that my 8800GTX (faster than your 8800GTS) couldn't manage.
You denied it, then you acknowledged it only when you'd been caught red-handed.
Uh, no I didn't, I said very early on that I knew Crysis was one of a few titles that benefitted from core/shader/memory bandwidth increases.