Originally posted by: ShadowOfMyself
That's what we are speculating, but I very much doubt it... Since it was said by AMD, we need to lower the % a bit, so let's assume 70%:
GTX 280 = 100fps
HD 4870 = 70 fps
HD 4870X2 = 120-140 fps (depending on how the game scales)
It still comes out with a healthy margin for a much lower price
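The arithmetic above can be sketched quickly. Note the 100 fps baseline, the 70% single-card ratio, and the scaling range are this thread's speculative assumptions, not measured numbers:

```python
GTX280_FPS = 100.0   # reference point assumed in the post above
SINGLE_RATIO = 0.70  # HD 4870 speculated at ~70% of a GTX 280

def x2_fps(scaling: float) -> float:
    """Dual-GPU estimate: single-card fps times a CrossFire scaling
    factor (1.0 = no benefit from the second GPU, 2.0 = perfect doubling)."""
    return GTX280_FPS * SINGLE_RATIO * scaling

for s in (1.0, 1.7, 2.0):
    print(f"scaling {s:.1f}x -> {x2_fps(s):.0f} fps")
```

Scaling of roughly 1.7x-2.0x reproduces the 120-140 fps range quoted above; with no working profile (1.0x), the X2 falls back to single-card speed.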
Originally posted by: chizow
I went ahead and compiled a comparison of different benchmarks between the 8800 GTX, GTX 280, GTX 260 and 9800 GX2 at 1920x1200 using the 8800 GTX as reference as that is the card and resolution I game at.
1920x1200 Review Compilation
As you can see, the difference between a single 8800GTX and GTX 280 (Column L vs P = Q) is significant and often the difference between playable and non-playable. This can be largely subjective, but I can tell you for sure the games I own will without a doubt play better, jumping from a 30-45 FPS average (sub-60) to a consistent 60+.
The difference between a GTX 280 and 9800GX2 isn't nearly as significant, and even though the GX2 often beats the GTX 280, it's already at a performance level higher than necessary to achieve solid playable framerates (60+ for me). Keep in mind this difference will be even more pronounced when you go to higher resolutions, as bandwidth and framebuffer become more of an issue in the GTX 280's favor.
Now, sure you can daisy-chain a whole bunch of cheaper gimpy cards to achieve similar performance, but then you introduce a bunch of other problems including but not limited to:
- Profiles/Scaling - SLI/CF rely on driver profiles for their performance, and in the case of ATI, you can't change these yourself. So if your particular game doesn't have a pre-defined profile, you may see no benefit or even *worse* performance than with a single card. If you're relying on two cards that are each slower than your single card, you may actually be paying more for *worse* performance, which is unacceptable to me.
- Micro-stuttering - There's been pretty heated debate about the significance of this problem on this board and others, although it pops up infrequently. Basically, the timing of each frame from the different GPUs in AFR can be erratic, leading to this effect. Apparently some people are very sensitive to it and some aren't. I don't know, as I have never used SLI, but I certainly wouldn't be happy if I spent $400-600 on SLI/CF only to find I couldn't stand micro-stutter.
- Heat/Power/Space - Typically not an issue for most enthusiasts, but it can become a problem when you have 2x or even 3x the power draw and heat of high-end cards. The PSU issue can be a total wattage issue, but also a power connector issue, with so many high-end parts needing 6- or even 8-pin PCI-E connections. Many cases can also have problems accommodating one 9"+ card, much less two or more.
- Multi-Monitor (NV only) - NV multi-GPU solutions do not support multiple monitors. I don't know if this is a superficial driver limitation to prevent desktop cards from being used in professional workstations or a truly technical issue, but I'm leaning towards driver limitation, as I'm assuming the Quadro GX2 would support more than one monitor. Multi-monitor support is important to me, as I play full screen on my 1920 and use my 2nd monitor for various monitoring tools, surfing the web, etc.
- Bandwidth/Frame Buffer - Not as big a deal at 1920, but one of the major reasons to upgrade to the fastest GPU is for ultra-high resolutions with AA. With a GX2 or SLI/CF solution, you're still limited to the same bus width and frame buffer as the individual cards, even if you have more rendering horsepower. This limitation is apparent at higher resolutions with AA when comparing a GTX 280, with a true 512-bit bus and 1GB frame buffer, to the X2/SLI solutions with a 256-bit bus and 512MB buffer.
- Chipset-specific limitations - ATI CF requires an Intel/AMD chipset and NV SLI requires an NV chipset. This unnecessarily ties your platform to your GPU between generations and, in the case of SLI, to NV's flaky chipsets.
- Overclocking ability? - NV used to have problems overclocking in SLI under Vista, but I think it's been fixed. Not sure if ATI has similar problems, although I know many of their parts are clock-locked via BIOS.
Sorry, but simple math and "logical" multi-GPU solutions are not going to make those problems go away. The only solution that is going to provide a significant increase in performance from where I am, without multi-GPU problems, is a GTX 280, which is why it's worth it (to me).
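The micro-stuttering point above is easiest to see with numbers. A minimal sketch, with frame times invented purely for illustration: in AFR the two GPUs can deliver frames unevenly spaced, so the average fps looks fine while the longest gaps between frames are what you actually perceive.

```python
def frame_times(pairs: int, gap_ms: float, period_ms: float):
    """Per-frame delivery intervals for AFR: each period_ms window
    delivers two frames, one after gap_ms and one at the end."""
    times = []
    for _ in range(pairs):
        times.extend([gap_ms, period_ms - gap_ms])
    return times

even = frame_times(30, 16.65, 33.3)   # well paced: ~16.65 ms between frames
uneven = frame_times(30, 5.0, 33.3)   # micro-stutter: 5 ms, then 28.3 ms

avg_fps = 1000 * len(uneven) / sum(uneven)
print(f"average fps (either pacing): {avg_fps:.0f}")
print(f"longest frame gap, even pacing:   {max(even):.1f} ms")
print(f"longest frame gap, uneven pacing: {max(uneven):.1f} ms")
```

Both runs report ~60 fps average, but the uneven run spends most of its time at a ~28 ms gap, which feels closer to 35 fps.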
Originally posted by: sourthings
That sure is a lot of effort put into trying to shore up the quality of the 280. All it proved to me is that the older, cheaper NV card is better. These are video cards, not elegant paintings or nice furniture. In the end it is how they perform in comparison to one another that matters, much like arguments over how much power is consumed are silly. This card is a lame duck. Bring on the 4870X2 imo to stomp the 280 into the ground.
Originally posted by: Foxery
Originally posted by: keysplayr2003
We put together some benchmarks as well. There'll definitely be some follow-ups when newer drivers arrive.
GT200
Got questions? Let 'er rip..
Eek, your page is wider than my 720p monitor. Nice mini-review, though. You're going to switch that Folding@Home team to AnandTech soon, right?
I do have a silly question, or at least clarification I need:
We keep hearing about CUDA PhysX. Is there any (non-NDA'ed) information about how running PhysX affects the GPU's video-rendering speed? I'm blindly assuming here that the same transistors will be split up and used for both, as opposed to the cards currently hiding a second set of secret circuits from us.
Originally posted by: taltamir
What I wanna know is... why is the 9800GTX still priced at $250-300? It is totally obsolete.
There's still a production cost involved that both nVidia and the retailer need to recoup as best they can. And it's only a few months old, not instantly useless because of one new product.
Originally posted by: chizow
Originally posted by: sourthings
That sure is a lot of effort put into trying to shore up the quality of the 280. All it proved to me is that the older, cheaper NV card is better. These are video cards, not elegant paintings or nice furniture. In the end it is how they perform in comparison to one another that matters, much like arguments over how much power is consumed are silly. This card is a lame duck. Bring on the 4870X2 imo to stomp the 280 into the ground.
A lot of effort? It took me an hour, on and off, while watching TV to plug in those numbers (a nice benefit of not using SLI and having multiple monitors). I actually wanted to quantify the difference between a 260 and 280 but threw the 9800GX2 numbers in there, even though I was comfortable with the differences versus the GTX 280 just eyeballing it. For someone else considering a 260 vs 280 the difference might be worth it, but for me it isn't worth it coming from an 8800GTX at 1920.
Saying the "older card is better" or the "4870x2 is going to stomp it into the ground" isn't going to explain away the problems with multi-GPU that I brought up. Compiling that list took very little effort as those are considerations I've weighed on and off over the last 7-8 months every time a SLI/CF/multi-GPU frankencard came along promising greater performance for less than the original price of a G80 GTX.
Originally posted by: BFG10K
I agree that CPU limitations are at play but the answer is much simpler than that. The answer is for reviewers to use higher levels of AA instead of clinging to 4x like it's some kind of holy grail. nVidia's 8xQ is equivalent to AT's 8x box filter and both have been around for almost two years so there's absolutely no excuse for reviewers not to use them.
In games like QW: ET and even COD4, both with AA, there's very little difference at the 16/19 resolutions. I don't think a faster CPU is really the answer; I think devs need to really start taking advantage of more cores in the way they write their games.
Azn, did you see this quote from Anandtech's article?
Fillrate is king and GT200 isn't so much better than G92 where it was starved for bandwidth. Although ROPs help you at high resolutions and with AA, they don't help you when there are multiple textures.
For a 87.5% increase in compute, there's a mere 25% increase in texture processing power. This ratio echoes what NVIDIA has been preaching for years: that games are running more complex shaders and are not as bound by texture processing as they were in years prior. If this wasn't true then we'd see a closer to 25% increase in performance of GT200 over G80 at the same clock rather than something much greater.
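The quoted 87.5%/25% figures fall straight out of the unit counts, assumed here from the commonly reported specs (240 vs. 128 stream processors, 80 vs. 64 texture units) at equal clocks:

```python
G80_SP, GT200_SP = 128, 240    # stream processors (assumed spec counts)
G92_TMU, GT200_TMU = 64, 80    # texture units (assumed spec counts)

# Percentage gains at equal clocks: more units = proportionally more throughput
compute_gain = (GT200_SP / G80_SP - 1) * 100
texture_gain = (GT200_TMU / G92_TMU - 1) * 100

print(f"compute: +{compute_gain:.1f}%")   # +87.5%
print(f"texture: +{texture_gain:.1f}%")   # +25.0%
```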
Originally posted by: ViRGE
I think reviewers stick to 4x AA because the differences between it and higher levels are extremely minute. Going from 0 to 2 to 4 comes with big improvements, but past that it's a sharp dropoff in perceived improvement, especially on high-DPI monitors like 20"/24"/30". I think most members in this forum would be challenged to play a game at 4x and 8x and be able to point out the differences.
Originally posted by: BFG10K
I agree that CPU limitations are at play but the answer is much simpler than that. The answer is for reviewers to use higher levels of AA instead of clinging to 4x like it's some kind of holy grail. nVidia's 8xQ is equivalent to AT's 8x box filter and both have been around for almost two years so there's absolutely no excuse for reviewers not to use them.
Originally posted by: ViRGE
I think reviewers stick to 4x AA because the differences between it and higher levels are extremely minute. Going from 0 to 2 to 4 comes with big improvements, but past that it's a sharp dropoff in perceived improvement, especially on high-DPI monitors like 20"/24"/30". I think most members in this forum would be challenged to play a game at 4x and 8x and be able to point out the differences.
Originally posted by: BFG10K
I agree that CPU limitations are at play but the answer is much simpler than that. The answer is for reviewers to use higher levels of AA instead of clinging to 4x like it's some kind of holy grail. nVidia's 8xQ is equivalent to AT's 8x box filter and both have been around for almost two years so there's absolutely no excuse for reviewers not to use them.
Originally posted by: chizow
I went ahead and compiled a comparison of different benchmarks between the 8800 GTX, GTX 280, GTX 260 and 9800 GX2 at 1920x1200 using the 8800 GTX as reference as that is the card and resolution I game at.
1920x1200 Review Compilation
As you can see, the difference between a single 8800GTX and GTX 280 (Column L vs P = Q) is significant and often the difference between playable and non-playable. This can be largely subjective, but I can tell you for sure the games I own will without a doubt play better, jumping from a 30-45 FPS average (sub-60) to a consistent 60+.
The difference between a GTX 280 and 9800GX2 isn't nearly as significant, and even though the GX2 often beats the GTX 280, it's already at a performance level higher than necessary to achieve solid playable framerates (60+ for me). Keep in mind this difference will be even more pronounced when you go to higher resolutions, as bandwidth and framebuffer become more of an issue in the GTX 280's favor.
Now, sure you can daisy-chain a whole bunch of cheaper gimpy cards to achieve similar performance, but then you introduce a bunch of other problems including but not limited to:
- Profiles/Scaling - SLI/CF rely on driver profiles for their performance, and in the case of ATI, you can't change these yourself. So if your particular game doesn't have a pre-defined profile, you may see no benefit or even *worse* performance than with a single card. If you're relying on two cards that are each slower than your single card, you may actually be paying more for *worse* performance, which is unacceptable to me.
- Micro-stuttering - There's been pretty heated debate about the significance of this problem on this board and others, although it pops up infrequently. Basically, the timing of each frame from the different GPUs in AFR can be erratic, leading to this effect. Apparently some people are very sensitive to it and some aren't. I don't know, as I have never used SLI, but I certainly wouldn't be happy if I spent $400-600 on SLI/CF only to find I couldn't stand micro-stutter.
- Bandwidth/Frame Buffer - Not as big a deal at 1920, but one of the major reasons to upgrade to the fastest GPU is for ultra-high resolutions with AA. With a GX2 or SLI/CF solution, you're still limited to the same bus width and frame buffer as the individual cards, even if you have more rendering horsepower. This limitation is apparent at higher resolutions with AA when comparing a GTX 280, with a true 512-bit bus and 1GB frame buffer, to the X2/SLI solutions with a 256-bit bus and 512MB buffer.
- Chipset-specific limitations - ATI CF requires an Intel/AMD chipset and NV SLI requires an NV chipset. This unnecessarily ties your platform to your GPU between generations and, in the case of SLI, to NV's flaky chipsets.
Sorry, but simple math and "logical" multi-GPU solutions are not going to make those problems go away. The only solution that is going to provide a significant increase in performance from where I am, without multi-GPU problems, is a GTX 280, which is why it's worth it (to me).
Really? And exactly what are you basing this on? Reviews? Reviews of the most popular, most played, most scrutinized, most reviewed PC games out there? LMAO. Yes, of course games that are popular and tend to be reviewed will eventually get straightened out with proper profiles and tweaks, but I can guarantee you that wasn't the case when those multi-GPU cards or games were released, judging from prior reviews, and I can also guarantee you support for SLI/CF profiles will be one of the first things to miss the check-list in future driver releases. You are completely at the mercy of continued profile support with SLI/CF with every new title; that, to me, is not a desirable prospect.
Originally posted by: ddarko
This issue of profiles and scaling is a red herring. The fact is, in most games now, the profiles and scaling works.
You lost me at the part of LCD "refresh rate" giving people headaches.
Another red herring. The very fact that a heated debate has occurred tells us that many, if not most people don't even know this issue exists. To warn people away from a solution because of a "problem" that most may not even detect, and even if they can detect it, detect it only infrequently, is to completely blow this out of proportion. It's also true that some people are bothered by the low 60Hz refresh rate of LCD monitors; does that mean we should be cautioning people about the headache-inducing dangers of LCD monitors? Don't misunderstand me: if you detect micro-stuttering, then of course a dual-GPU solution isn't for you. But to list it as a major bullet-point caution for everyone to worry about is more akin to scare-mongering, transforming a remote possibility into a likelihood.
Actually many of the 2560 resolutions with AA do show the limitations of bandwidth and frame buffer; the X2 256-bit bus parts with 512MB x 2 buffers did even when compared to the G80 Ultra.
Where is this theoretical problem manifest in the real world? The benchmarks on this site seem to show that even at the highest resolution used by 30" monitors, the 9800GX2 still outperforms the 280. If we all bought a product because its theoretical numbers are better than real-world performance, then we should all be ATI owners.
And a single-card multi-GPU solution negates much of the cost benefit of going multi-GPU in the first place. 3870X2 retailed at ~$500 and the 9800GX2 for $600, so in reality you don't save anything but inherit many of the problems associated with multi-GPU.
Um, another red herring, since single dual-GPU cards exist that do not need Crossfire or SLI. And why is "ties your platform to your GPU" supposed to be such an issue? First of all, it only ties you down if you're looking for a dual-card solution, which dual-GPU cards obviate the need for. Secondly, even if you're limited, so what? I don't exactly hear the masses complaining about how their motherboard limits them to only AMD or Intel processors, even though every single motherboard out there forces you to "tie" yourself down to a processor line. So why should being "limited" be such an issue for GPUs? I have yet to read a motherboard review or forum user complaining that a motherboard "only works with Intel/AMD processors." Until I do, I'm going to dismiss the "problem" of being restricted to Crossfire or SLI. It's a non-issue for CPUs; it's a non-issue for GPUs.
Without the hassle and problems of SLI. Very logical indeed.
So you've bought a solution that offers slower performance at a higher cost in most games. Very logical indeed.
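On the bandwidth/frame-buffer point both sides argue over, a back-of-envelope check is easy. This sketch counts only raw color + depth storage for an MSAA render target, assumes RGBA8/D24S8 formats, and ignores compression and driver overhead, so treat it as an illustration, not a measurement. Also remember that in AFR each GPU must hold its own full copy of everything, so 2x512MB behaves like 512MB:

```python
W, H = 2560, 1600   # 30" monitor resolution discussed above
BYTES_COLOR = 4     # RGBA8 (assumed format)
BYTES_DEPTH = 4     # D24S8 (assumed format)

def msaa_buffers_mb(samples: int) -> float:
    """Raw color + depth storage for one MSAA render target, in MB."""
    per_pixel = samples * (BYTES_COLOR + BYTES_DEPTH)
    return W * H * per_pixel / 2**20

for s in (1, 4, 8):
    print(f"{s}x samples: {msaa_buffers_mb(s):6.2f} MB")
```

At 8x that is 250 MB for a single render target, before textures, shadow maps, and double buffering, which is why a 512 MB-per-GPU card gets tight at 2560x1600 with AA while a 1 GB GTX 280 has more headroom.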
Originally posted by: chizow
Sorry, but simple math and "logical" multi-GPU solutions are not going to make those problems go away. The only solution that is going to provide a significant increase in performance from where I am, without multi-GPU problems, is a GTX 280, which is why it's worth it (to me).
I think reviewers stick to 4x AA because the differences between it and higher levels are extremely minute. Going from 0 to 2 to 4 comes with big improvements, but past that it's a sharp dropoff in perceived improvement, especially on high-DPI monitors like 20"/24"/30". I think most members in this forum would be challenged to play a game at 4x and 8x and be able to point out the differences.
Originally posted by: bryanW1995
Originally posted by: ViRGE
I think reviewers stick to 4x AA because the differences between it and higher levels are extremely minute. Going from 0 to 2 to 4 comes with big improvements, but past that it's a sharp dropoff in perceived improvement, especially on high-DPI monitors like 20"/24"/30". I think most members in this forum would be challenged to play a game at 4x and 8x and be able to point out the differences.
Originally posted by: BFG10K
I agree that CPU limitations are at play but the answer is much simpler than that. The answer is for reviewers to use higher levels of AA instead of clinging to 4x like it's some kind of holy grail. nVidia's 8xQ is equivalent to AT's 8x box filter and both have been around for almost two years so there's absolutely no excuse for reviewers not to use them.
I know a certain video mod who can most certainly tell the difference.
@bfg: texture hasn't been a g80/g92 bottleneck, but for rv670 it definitely has been one.
BTW, does anyone know if the new-gen cards are going to remove the limitation restricting 4x4 SSAA to resolutions under 1024x768?
Originally posted by: chizow
I can name at least 3 popular titles that, from user feedback, do not have properly working profiles or require workarounds. LOTRO still does not properly scale with SLI/CF. AOC had issues with SLI/CF as of the final beta client. Mass Effect does not benefit at all from CF unless you rename the .exe as a workaround. These are only a few off the top of my head, but of course you're going to assume SLI/CF works without problems because the top 10 PC games of 2007-08 scale properly... LMAO.
You lost me at the part of LCD "refresh rate" giving people headaches.
Actually many of the 2560 resolutions with AA do show the limitations of bandwidth and frame buffer; the X2 256-bit bus parts with 512MB x 2 buffers did even when compared to the G80 Ultra.
And a single-card multi-GPU solution negates much of the cost benefit of going multi-GPU in the first place. 3870X2 retailed at ~$500 and the 9800GX2 for $600, so in reality you don't save anything but inherit many of the problems associated with multi-GPU.
Originally posted by: CP5670
Originally posted by: chizow
Sorry, but simple math and "logical" multi-GPU solutions are not going to make those problems go away. The only solution that is going to provide a significant increase in performance from where I am, without multi-GPU problems, is a GTX 280, which is why it's worth it (to me).
Aside from the microstutter problem, the main turn-off for me is the inconsistent vsync and triple buffering support on multi-GPU setups. I consider these to be essential features and am ready to pay more for a slightly slower single-GPU card in order to avoid wasting an hour with each of my games, messing with various driver settings and third-party programs to get both working properly without any performance hit (which is what I ended up doing when I used SLI a while ago).
The 9800GX2 in particular also has low minimum framerates in a number of games, more so than other multi-GPU setups, according to this Xbit review. The performance of this card is deceptive in many benchmarks because of this, as hardly any sites show the minimum numbers these days.
As far as I am concerned though, it's a good thing that most people don't care about these issues. It means that the GTX 280 price will come down, and fast.
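The point above about deceptive averages is easy to demonstrate with invented numbers: two runs with identical averages but very different minimums, which is exactly what average-only benchmarks hide.

```python
run_a = [55, 60, 58, 62, 60, 65]    # steady card: avg 60, min 55
run_b = [20, 85, 25, 90, 30, 110]   # spiky card: same avg 60, min 20

# Same headline number, very different worst-case experience
for name, run in (("steady", run_a), ("spiky", run_b)):
    print(f"{name}: avg {sum(run) / len(run):.0f} fps, min {min(run)} fps")
```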
Except for that pesky fact that you get faster performance.
And none of that discounts the fact that a multi-GPU solution without proper profile support may not have *any* benefit at all over a single GPU. There are so many pitfalls in the PC business, why bother with another, right?
Originally posted by: ddarko
And I can name a couple games that are filled with bugs and need continuous patches and updates. Like, every single PC ever released. You understand my point? Constant driver support and being at the mercy of continued driver support is a fact of life with PC games. Why you're fixated with the driver support with dual GPU solutions when driver issues and patches are endemic with PC gaming, including with single GPU cards, is mystifying. You don't want to be at the mercy of patches to make something run properly on your PC? Then throw out your PC!
No, read again, or better yet, look at the spreadsheet I linked. I said there was a significant difference upgrading from an 8800 GTX to a GTX 280, with only a negligible difference from a GX2, as both begin to exceed 60FPS avg. These are quantifiable and easily observable thresholds when gaming on an LCD with Vsync. Being locked at 60 FPS with Vsync vs. 15-20-25-30-60 increments is a HUGE difference in gaming experience even if the "average" FPS is only 25-30 different.
And to use your own argument - so what if SLI/Crossfire don't work perfectly in those 3 games? Don't they offer performance that's "good enough"? After all, that was your excuse for accepting lower performance from a 280; it was good enough. Something tells me Lord of the Rings Online isn't exactly unplayable.
:looks around for LCD forums:
Proving my point. This is an issue for some people with LCD monitors (go to the LCD forums and look it up). The fact that you're completely unaware of it demonstrates how small the number of people it affects is. Micro-stuttering is the same. If it affects you, then it's a deal-breaker. But you can't even say that micro-stuttering DOES affect you. Instead, you've read about it and magnified the remote possibility that it MAY affect you into a certainty that it WILL affect you in your mind. Completely irrational.
Oblivion 2560
Show me a single benchmark at 2560x1600 from Anandtech that demonstrates the bandwidth limiting the 9800GX2's performance to below the 280's. Oh right. There isn't one.
9800GX2 for $600?! LOL. Look, you can make up numbers; the rest of us can look at Newegg: there's a reference-clocked 9800GX2 selling right now for $470 before rebate, $430 after mail-in rebate.
http://www.newegg.com/Product/...x?Item=N82E16814127337
And you don't save anything? Except for that pesky fact that you get faster performance. It's interesting that you can't acknowledge or answer this pesky, obviously irritating fact: better performance most of the time for less money. But bah, who needs more for less? I'd rather get less for more! You, sir, are the ideal Nvidia customer! Congratulations!
"The race is pretty close between the GTX 280 and 9800 GX2 in Oblivion with AA enabled. It's worth noting that at all other resolutions, the GX2 is faster than the GTX 280 but at 2560 x 1600 the memory bandwidth requirements are simply too much for the GX2. Remember that despite having two GPUs, the GX2 doesn't have a real world doubling of memory bandwidth."
Originally posted by: ddarko
Originally posted by: chizow
I can name at least 3 popular titles that, from user feedback, do not have properly working profiles or require workarounds. LOTRO still does not properly scale with SLI/CF. AOC had issues with SLI/CF as of the final beta client. Mass Effect does not benefit at all from CF unless you rename the .exe as a workaround. These are only a few off the top of my head, but of course you're going to assume SLI/CF works without problems because the top 10 PC games of 2007-08 scale properly... LMAO.
And I can name a couple games that are filled with bugs and need continuous patches and updates. Like, every single PC ever released. You understand my point? Constant driver support and being at the mercy of continued driver support is a fact of life with PC games. Why you're fixated with the driver support with dual GPU solutions when driver issues and patches are endemic with PC gaming, including with single GPU cards, is mystifying. You don't want to be at the mercy of patches to make something run properly on your PC? Then throw out your PC!
And to use your own argument - so what if SLI/Crossfire don't work perfectly in those 3 games? Don't they offer performance that's "good enough"? After all, that was your excuse for accepting lower performance from a 280; it was good enough. Something tells me Lord of the Rings Online isn't exactly unplayable.
You lost me at the part of LCD "refresh rate" giving people headaches.
Proving my point. This is an issue for some people with LCD monitors (go to the LCD forums and look it up). The fact that you're completely unaware of it demonstrates how small the number of people it affects is. Micro-stuttering is the same. If it affects you, then it's a deal-breaker. But you can't even say that micro-stuttering DOES affect you. Instead, you've read about it and magnified the remote possibility that it MAY affect you into a certainty that it WILL affect you in your mind. Completely irrational.
Actually many of the 2560 resolutions with AA do show the limitations of bandwidth and frame buffer; the X2 256-bit bus parts with 512MB x 2 buffers did even when compared to the G80 Ultra.
Show me a single benchmark at 2560x1600 from Anandtech that demonstrates the bandwidth limiting the 9800GX2's performance to below the 280's. Oh right. There isn't one.
And a single-card multi-GPU solution negates much of the cost benefit of going multi-GPU in the first place. 3870X2 retailed at ~$500 and the 9800GX2 for $600, so in reality you don't save anything but inherit many of the problems associated with multi-GPU.
9800GX2 for $600?! LOL. Look, you can make up numbers; the rest of us can look at Newegg: there's a reference-clocked 9800GX2 selling right now for $470 before rebate, $430 after mail-in rebate.
http://www.newegg.com/Product/...x?Item=N82E16814127337
And you don't save anything? Except for that pesky fact that you get faster performance. It's interesting that you can't acknowledge or answer this pesky, obviously irritating fact: better performance most of the time for less money. But bah, who needs more for less? I'd rather get less for more! You, sir, are the ideal Nvidia customer! Congratulations!
Originally posted by: CP5670
Originally posted by: chizow
Sorry, but simple math and "logical" multi-GPU solutions are not going to make those problems go away. The only solution that is going to provide a significant increase in performance from where I am, without multi-GPU problems, is a GTX 280, which is why it's worth it (to me).
Aside from the microstutter problem, the main turn-off for me is the inconsistent vsync and triple buffering support on multi-GPU setups. I consider these to be essential features and am ready to pay more for a slightly slower single-GPU card in order to avoid wasting an hour with each of my games, messing with various driver settings and third-party programs to get both working properly without any performance hit (which is what I ended up doing when I used SLI a while ago).
The 9800GX2 in particular also has low minimum framerates in a number of games, more so than other multi-GPU setups, according to this Xbit review. The performance of this card is deceptive in many benchmarks because of this, as hardly any sites show the minimum numbers these days.
As far as I am concerned though, it's a good thing that most people don't care about these issues. It means that the GTX 280 price will come down, and fast.
I think reviewers stick to 4x AA because the differences between it and higher levels are extremely minute. Going from 0 to 2 to 4 comes with big improvements, but past that it's a sharp dropoff in perceived improvement, especially on high-DPI monitors like 20"/24"/30". I think most members in this forum would be challenged to play a game at 4x and 8x and be able to point out the differences.
I can see the difference pretty easily in most games, certainly between 4x and 16x at least. It's a lot more apparent in motion than it looks in screenshots. At the same time, though, I don't think any of these cards is generally pulling high enough framerates that it makes sense to increase the level of AA.
Originally posted by: bryanW1995
Originally posted by: Foxery
Originally posted by: keysplayr2003
We put together some benchmarks as well. There'll definitely be some follow-ups when newer drivers arrive.
GT200
Got questions? Let 'er rip..
Eek, your page is wider than my 720p monitor. Nice mini-review, though. You're going to switch that Folding@Home team to AnandTech soon, right?
I do have a silly question, or at least clarification I need:
We keep hearing about CUDA PhysX. Is there any (non-NDA'ed) information about how running PhysX affects the GPU's video-rendering speed? I'm blindly assuming here that the same transistors will be split up and used for both, as opposed to the cards currently hiding a second set of secret circuits from us.
Originally posted by: taltamir
What I wanna know is... why is the 9800GTX still priced at $250-300? It is totally obsolete.
There's still a production cost involved that both nVidia and the retailer need to recoup as best they can. And it's only a few months old, not instantly useless because of one new product.
Actually, the 9800GTX was useless upon release unless you wanted it for tri-SLI, because you could get an 8800GTS 512 for ~60% of the cost with 95% of the performance. It is going to be pushed down in price by the 48xx series in the next few weeks, however.
Originally posted by: kreacher
For $400 it would be a pretty good card, but certainly not for $650. Unless SLI scaling goes over 90%, I don't consider it a viable option (plus it limits your motherboard choices).
Originally posted by: JACKDRUID
to Extelleron: you are such an ATI fanTard..
just have to say that... or are you getting paid by AMD?
In every post, EVERY post, you make the assumption that the 4850/70 is going to be faster than the 280 without any proof whatsoever... it just makes me sick..
to Ocguy31:
The G280 sucks hard on price/performance, heat, and environmental friendliness. The G260 is definitely a good value, considering that it is a single card without the limitations of SLI/Crossfire.