nVidia GT200 Series Thread


nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: tuteja1986

http://www.newegg.com/Product/...x?Item=N82E16814150304

http://www.newegg.com/Product/...x?Item=N82E16814130339
2x 9800 GTX at $269 each ends up as fast as a single GTX 280 and has a better SLI profile than the 9800 GX2.

http://www.newegg.com/Product/...x?Item=N82E16814133227
A 9800 GX2 costs $429 and ends up being 30% slower than a GTX 280.

This is all because of competition. As soon as the competitor gets into trouble, we see innovation go down. A person who bought an 8800 GTX in 2006 has no real reason to upgrade unless he wants to play Crysis at a better frame rate.

You know a lot about the performance of cards that aren't out yet... I imagine that your guesses may be right at some resolutions/settings, but very wrong at others. Plus, many of your examples require an SLI motherboard, which many people do not have and/or want.

I'm thinking the best performing/most stable/cost effective high performance setup for the second half of 2008 is going to be C2D 8xxx/P45/GTX 260 or HD4870, and not any dual card setup.

...I'm hanging on to this 780i board with the thought that I just might go with dual GTX 260s. If they launch for $399/ea and perform just a tad bit slower than the 280s, they might just be the way to go. Needless to say, I don't think this would be the most cost effective setup for most, but given my existing components it might be the best I can do. I'd also be surprised if anything dual from ATI would beat dual GTX 260s.
 

Sureshot324

Diamond Member
Feb 4, 2003
3,370
0
71
Originally posted by: HigherGround
Inq Benches. enjoy

BTW, we should get/make a dedicated review thread.

Impressive, but it's being compared to a card that is half its price.

As a side note, wow I had no idea CoH ran so much slower in DX10! You can literally more than double your frame rates by switching to DX9.
 

Nightmare225

Golden Member
May 20, 2006
1,661
0
0
Originally posted by: Sureshot324
Originally posted by: HigherGround
Inq Benches. enjoy

BTW, we should get/make a dedicated review thread.

Impressive, but it's being compared to a card that is half its price.

As a side note, wow I had no idea CoH ran so much slower in DX10! You can literally more than double your frame rates by switching to DX9.

That's what happens when you don't build your game for DX10 from the ground up. The same thing happened with DX9; it was much slower for games that were originally built on DX8.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: ViRGE
What the heck's up with everyone only using 2GB of RAM? A $600 video card, and you can only be bothered to couple it with $50 in RAM?

Many still insist that "you only need 2GB of RAM"... despite many individual games taking more than that (forget entire systems while running the game; some games actually go over 2GB for the game process itself).

2GB of RAM lets them run faster overclocked RAM... so they do, and they think the 1% speedup from OCing the RAM is worth the massive slowdown from paging (well, they don't; they just insist on believing that there is no paging instead of actually testing it).

If I remember correctly, I saw 2.1GB used by Supreme Commander once (with another 1.3GB used by Windows Vista). I haven't been too keen on testing it further because once I saw that 2GB was not enough for CoH (its RAM use has gone down since with patches), I simply upgraded to 4GB and never looked back.
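
For anyone who would rather test it than just believe one way or the other, something along these lines will log a game's memory use while it runs. This is only a rough sketch: it assumes Python 3 with the third-party psutil package installed, and the process name is just a placeholder.

# mem_watch.py - poll a running game process and log its memory use every few seconds.
import time
import psutil  # third-party: pip install psutil

GAME_EXE = "SupremeCommander.exe"  # placeholder: change to the game you are testing

def find_game():
    # Return the first process whose executable name matches GAME_EXE.
    for p in psutil.process_iter(["name"]):
        if p.info["name"] and p.info["name"].lower() == GAME_EXE.lower():
            return p
    return None

proc = find_game()
if proc is None:
    raise SystemExit(GAME_EXE + " is not running")

while proc.is_running():
    mem = proc.memory_info()
    # rss = resident (physical) memory, vms = committed virtual memory.
    print("resident: %6.0f MB   committed: %6.0f MB" % (mem.rss / 2**20, mem.vms / 2**20))
    time.sleep(5)

Roughly speaking, if the committed figure keeps climbing well past what stays resident, the pagefile is picking up the slack.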

Anyone else notice how long those cards are? Are they longer than the 9800 GTX, perhaps? Because those are already too big to fit in my case.

EDIT:

Look at those reviews: excellent SLI scaling, playable max AA/AF at 2560x1600... and go through the one at the Inq: in UT3 at max AA/AF at 2560x1600, on two maps you get a 2600% and a 3000% increase... although that would be entirely due to RAM amounts, not any of the other specs. But yeah, 1GB should have been around sooner.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: chizow
A few things really popped out at me over some of the leaked info to-date, like the huge performance gain from 177.34 vs. 177.26 drivers. 43FPS vs 30FPS at Very High at 1920x1200 with just an updated driver is fantastic.

Where's the surprise? Nvidia does this every launch. They hold back on performance to lower expectations and on launch day, voila! It means little in terms of future driver gains, which may or may not be big.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
anyone else notice the inq review:
1. Looks like they stole it from someone else.
2. Looks like they were comparing two different driver versions.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: ronnn
Originally posted by: chizow
A few things really popped out at me over some of the leaked info to-date, like the huge performance gain from 177.34 vs. 177.26 drivers. 43FPS vs 30FPS at Very High at 1920x1200 with just an updated driver is fantastic.

Where's the surprise? Nvidia does this every launch. They hold back on performance to lower expectations and on launch day, voila! It means little in terms of future driver gains, which may or may not be big.

I don't think it's quite that simple to say NV is doing this purposefully to make their product look better, when in reality a late driver release will ultimately lead to discrepancies and the potential for conflicting/poor impressions. No company in their right mind would risk a poor first impression in an effort to dampen expectations prior to launch. As they say, first impressions last, and with hardware reviews that are rarely updated, that snapshot in time is what people ultimately remember.

I guarantee come tomorrow when the NDA is lifted there will be sites that did not update their results based on the new 177.34 drivers and will show much smaller gains than sites that did. Running comprehensive benchmarks, particularly across multiple GPUs and platforms, and now with the added complexity of multi-GPU solutions, is incredibly time intensive. I believe Anand and Derek have written in the past that the fastest they've turned around a product review was over an entire weekend with very little sleep.

In any case, my point was that the GTX 280, even with the leaked benches on the older 177.26 drivers, was good enough to warrant an upgrade (for me). The fact that an updated driver resulted in almost a 50% performance increase over an already solid mark is just gravy. The same would hold true for the other leaked benchmarks so far.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: taltamir
Originally posted by: ViRGE
What the heck's up with everyone only using 2GB of RAM? A $600 video card, and you can only be bothered to couple it with $50 in RAM?

Many still insist that "you only need 2GB of RAM"... despite many individual games taking more than that (forget entire systems while running the game; some games actually go over 2GB for the game process itself).

2GB of RAM lets them run faster overclocked RAM... so they do, and they think the 1% speedup from OCing the RAM is worth the massive slowdown from paging (well, they don't; they just insist on believing that there is no paging instead of actually testing it).

If I remember correctly, I saw 2.1GB used by Supreme Commander once (with another 1.3GB used by Windows Vista). I haven't been too keen on testing it further because once I saw that 2GB was not enough for CoH (its RAM use has gone down since with patches), I simply upgraded to 4GB and never looked back.

I think a lot of the limiting to 2GB actually has to do with keeping their lofty FSB speeds intact on their flaky NV chipset boards. While it'd be nice to see some 4GB+ results on x64 SP1, I wouldn't go as far as to say it's pointless or a poor tradeoff vs. being able to OC to 4GHz, as CPU bottlenecking does look to be a significant issue in this tier of GPUs and multi-GPU solutions.

Some games/engines also handle memory better than others, although more people having more RAM and/or a 64-bit OS would allow devs to raise the bar on RAM use. Here's an interesting quick interview with Borderlands dev Corrinne Yu about upcoming features and what kind of hardware you'll need, specifically on the importance of system RAM going forward:

PCGH: Finally: Can you tell our readers what hardware will be recommended (not required) to play the game with all detail in 1.280x1.024 (no FSAA/AF) and 1.600x1.200 (4x FSAA/8:1 AF)?

Corrinne Yu: I recommend a DX 10 class card coupled with a PC with a lot of CPU memory, not just GPU VRAM. Sufficient CPU RAM is crucial to the proper performance of virtual texture management.


 

n7

Elite Member
Jan 4, 2004
21,303
4
81
Originally posted by: HigherGround
Inq Benches. enjoy

BTW, we should get/make a dedicated review thread.

I agree... we'll see about getting a dedicated review thread set up.

As for the benches, the GTX 280 is certainly made for my 2560x1600 display.

The difference is huge at that resolution for some games, particularly ones I play like UT3, where it looks like I could go from no AA (since AA = unplayable) to 4x AA and still get good fps.

 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
From the preliminary benchmarks we're getting I have to say "lol" to all those fanboys who automatically assumed that GTX 280 had 50% more efficient shaders!

50% more shading power in total, sure. But if the GTX had 87.5% more shaders that were also 50% more efficient we'd be seeing vastly different results. Come on guys, it wouldn't make sense for NVIDIA to do that much work on the shaders and still leave out DX10.1
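
To spell out the arithmetic behind that, here is a rough sketch using the commonly quoted unit counts (it ignores clock speeds, just as the claim itself does):

# What "87.5% more shaders AND 50% more efficient shaders" would have implied.
g92_shaders = 128      # 9800 GTX
gt200_shaders = 240    # GTX 280

unit_ratio = gt200_shaders / g92_shaders   # 1.875, i.e. 87.5% more shader units
claimed_per_shader_gain = 1.5              # the "50% more efficient" reading of the slide

print("combined shading ratio: %.2fx" % (unit_ratio * claimed_per_shader_gain))  # ~2.81x
# If both were true you would expect close to 3x the shading throughput of a 9800 GTX,
# not the roughly 1.5x total that the early numbers point to.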
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: chizow
Originally posted by: taltamir
Originally posted by: ViRGE
What the heck's up with everyone only using 2GB of RAM? A $600 video card, and you can only be bothered to couple it with $50 in RAM?

Many still insist that "you only need 2GB of RAM"... despite many individual games taking more than that (forget entire systems while running the game; some games actually go over 2GB for the game process itself).

2GB of RAM lets them run faster overclocked RAM... so they do, and they think the 1% speedup from OCing the RAM is worth the massive slowdown from paging (well, they don't; they just insist on believing that there is no paging instead of actually testing it).

If I remember correctly, I saw 2.1GB used by Supreme Commander once (with another 1.3GB used by Windows Vista). I haven't been too keen on testing it further because once I saw that 2GB was not enough for CoH (its RAM use has gone down since with patches), I simply upgraded to 4GB and never looked back.

I think a lot of the limiting to 2GB actually has to do with keeping their lofty FSB speeds intact on their flaky NV chipset boards. While it'd be nice to see some 4GB+ results on x64 SP1, I wouldn't go as far as to say it's pointless or a poor tradeoff vs. being able to OC to 4GHz, as CPU bottlenecking does look to be a significant issue in this tier of GPUs and multi-GPU solutions.

Some games/engines also handle memory better than others, although more people having more RAM and/or a 64-bit OS would allow devs to raise the bar on RAM use. Here's an interesting quick interview with Borderlands dev Corrinne Yu about upcoming features and what kind of hardware you'll need, specifically on the importance of system RAM going forward:

PCGH: Finally: Can you tell our readers what hardware will be recommended (not required) to play the game with all detail in 1.280x1.024 (no FSAA/AF) and 1.600x1.200 (4x FSAA/8:1 AF)?

Corrinne Yu: I recommend a DX 10 class card coupled with a PC with a lot of CPU memory, not just GPU VRAM. Sufficient CPU RAM is crucial to the proper performance of virtual texture management.

It is good to see that she agrees with me... that means her game will probably use RAM properly. Too many games simply do not utilize the RAM. Neverwinter Nights 2, for example, has atrocious load times: at max settings it uses about 600MB, and every few minutes (when you move to a different "map", and the "maps" are tiny) it dumps 400MB of that and then starts reloading it with a single-threaded, unoptimized decompression algorithm reading from one of their huge data files...
It is terrible, and all they had to do was keep stuff in RAM and make use of at least some of the 4GB I have available.
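
What is being described is basically a cache of decompressed assets, so that a zone transition reuses what is already sitting in RAM instead of re-reading and re-inflating it from the archive every time. A minimal sketch of the idea, assuming zlib-compressed entries and made-up paths/offsets:

import zlib
from functools import lru_cache

@lru_cache(maxsize=256)            # keep recently used assets decompressed in RAM
def load_asset(archive_path, offset, size):
    """Read one compressed entry out of a big data file and decompress it.
    Because the result is cached, revisiting the same "map" is a dictionary lookup
    instead of another disk read plus a single-threaded decompress."""
    with open(archive_path, "rb") as f:
        f.seek(offset)
        return zlib.decompress(f.read(size))

# First visit: disk read + decompress.  Returning to the same map: served from RAM.
# tile = load_asset("data/zones.pak", 1048576, 65536)   # hypothetical archive entry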

Originally posted by: HurleyBird
From the preliminary benchmarks we're getting I have to say "lol" to all those fanboys who automatically assumed that GTX 280 had 50% more efficient shaders!

No one automatically assumed it... nVidia said so!
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: HurleyBird
From the preliminary benchmarks we're getting I have to say "lol" to all those fanboys who automatically assumed that GTX 280 had 50% more efficient shaders!

50% more shading power in total, sure. But if the GTX had 87.5% more shaders that were also 50% more efficient we'd be seeing vastly different results. Come on guys, it wouldn't make sense for NVIDIA to do that much work on the shaders and still leave out DX10.1

So does this mean you'll "lol" and then disappear for another 6-8 months until the next big ATi/NV face off?

You're assuming people actually bought into the "50% more shaders" propaganda when those who actually use the parts know all those extra shaders are twiddling their thumbs waiting for CUDA PhysX and Folding@Home projects. Good thing GTX280 doubled everything else on G92 as well.....

Also:

1st German Review. Not a great review (only a few games and at some low resolutions), but very AA-intensive, which is good.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,726
1,342
136
Originally posted by: taltamir
No one automatically assumed it... nVidia said so!

It doesn't take a genius to realize that a marketing slide claiming that "2nd gen unified shaders perform 50% better" could go either way. It also doesn't take a genius to calculate that 240 G92-type shaders clocked at 1300MHz would offer almost exactly 50% more shading power than current GeForce cards. It definitely doesn't take a genius to realize something is off about NVIDIA doing an extensive redesign of their shaders while skipping DX10.1.

When I first mentioned that it might be possible that the marketing slides were talking only about shading power as a whole and were merely throwing in buzzwords like "2nd gen", the NVIDIA fanboys ridiculed me. As such, I think I have the right to say "I told you so."
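
For what it's worth, the raw numbers do land in that ballpark. A quick sketch using the commonly quoted shader clocks, counting the MAD as 2 FLOPs per shader per clock:

# MAD-only theoretical shader throughput in GFLOPs.
cards = {
    "8800 GTX (G80)":  (128, 1.350e9),
    "9800 GTX (G92)":  (128, 1.688e9),
    "GTX 280 (GT200)": (240, 1.296e9),
}
for name, (shaders, clock_hz) in cards.items():
    print("%-17s %4.0f GFLOPs" % (name, shaders * clock_hz * 2 / 1e9))
# The GTX 280 works out to roughly 44% above the 9800 GTX, and about 80% above the
# 8800 GTX, on raw MAD throughput alone: "almost 50% more shading power" without any
# per-shader efficiency change needed.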
 

zod96

Platinum Member
May 28, 2007
2,861
67
91
I'll be going ATI this time around. The 260 and 280 are way too rich for my blood. The ATI 4850 is what I'll be getting. It's $179, it's as fast or faster than a 9800 GTX, and its single-slot cooler uses less power and puts out less heat. The 260 and 280 are better cards for sure, but I just don't think they're worth $200-300 more.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
We are getting to a point where CPU is the bottleneck with a card like GT200.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: taltamir
The big deal with nVidia's next release is that the CUDA PhysX drivers should come out with the GTX 280... those will work for every DX10 card from nVidia... so all of us G92 owners will now have a PhysX card as well... of course, PhysX runs on the SPs, so that means we will need more of them than ever before.

But with that there will be over 90 million PhysX-capable machines out there... and as a result, 180 new games are currently in the works to support PhysX (before, there were just not enough cards in the market for it to be worthwhile)...

I have been waiting for physics effects for 5 whole years... (and no, extra particle effects aren't physics; shooting a barrel and having a realistic explosion with shrapnel causing terrain damage and character injury is physics, which can be done, but isn't supported by most games that use PhysX).

Anyways... how long do you think it will take for a GTX 290 GDDR5 + 512bit bus?

I don't think they'll offer a GTX 290 with GDDR5 + a 512-bit bus. However, they WILL probably offer a midrange refresh in a year or so with a 256-bit bus and GDDR5 that will be competitive with the GTX 280.
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,230
2
0
Let's just hope AMD takes the chance and bites them in the ass; I hate when companies sit on their tree milking ancient products.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Originally posted by: Azn
We are getting to a point where CPU is the bottleneck with a card like GT200.

Yes, you are technically correct that the CPU is the bottleneck, but when you are getting 80-100-140 frames in a game, who cares about CPU bottlenecking?

This gets brought up every single GPU generation, and every single GPU generation it gets proven time and time again that any modern CPU is sufficient for current GPUs. The problem isn't CPU bottlenecking but that too few games besides Crysis and WiC are worthy of the GT200 series (and too few people play at 2560x1600). As more intensive games get released, I would be willing to bet that a C2Q at 3.4-3.6GHz will be sufficient even for GF11, outliving the GTX 280 with ease.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: HurleyBird
It doesn't take a genius to realize that a marketing slide claiming that "2nd gen unified shaders perform 50% better" could go either way. It also doesn't take a genius to calculate that 240 G92-type shaders clocked at 1300MHz would offer almost exactly 50% more shading power than current GeForce cards. It definitely doesn't take a genius to realize something is off about NVIDIA doing an extensive redesign of their shaders while skipping DX10.1.

When I first mentioned that it might be possible that the marketing slides were talking only about shading power as a whole and were merely throwing in buzzwords like "2nd gen", the NVIDIA fanboys ridiculed me. As such, I think I have the right to say "I told you so."

Actually, it's probably true that the shaders on G200 could theoretically perform up to 50% faster clock for clock than a G92-based one. How? Well, nVIDIA finally got its missing MUL working (and this would probably depend on the game, i.e. whether the workload can actually make use of that MUL or not). This is one of the reasons why G200 reaches ~930 GFLOPs whereas G92/G8x don't, since they can't utilize the missing MUL. I wouldn't call this an "extensive redesign of their shaders" but rather taking advantage of what was already in there.
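
The ~930 GFLOPs figure is what you get when that extra MUL counts, i.e. 3 FLOPs per shader per clock instead of 2. A rough sketch of where the numbers land, using the commonly quoted shader clocks:

# GTX 280 shader throughput with and without the co-issued MUL, vs. what G92 can actually use.
gt200_sps, gt200_clk = 240, 1.296e9    # GTX 280
g92_sps, g92_clk = 128, 1.688e9        # 9800 GTX

mad_only     = gt200_sps * gt200_clk * 2 / 1e9   # ~622 GFLOPs (MAD = 2 FLOPs/clock)
mad_plus_mul = gt200_sps * gt200_clk * 3 / 1e9   # ~933 GFLOPs (MAD + MUL = 3 FLOPs/clock)
g92_usable   = g92_sps * g92_clk * 2 / 1e9       # ~432 GFLOPs (G92 cannot use its MUL)

print(round(mad_only), round(mad_plus_mul), round(g92_usable))
# 933 vs. 622 is the "up to 50% faster per clock" claim, and it only materializes in
# shader code that can actually keep the extra MUL fed, which is why the gain is so
# game-dependent.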

It doesn't take a genius either to figure out that early in the design process nVIDIA chose not to go the DX10.1 route. I'm guessing it's because it would have required a lot of reconfiguration of the current architecture (TMU configuration/shader AA and other stuff) and the performance return probably didn't offset the cost involved in making such a marginal change. With DX11 already in the talks, and negligible IQ differences between DX10.1 and DX10, you can tell why nVIDIA chose this way.

Plus, there's more to a GPU than ROPs, shaders and the usual specs, so I'd not talk if one doesn't understand those underlying aspects of a GPU. For one, nVIDIA's architecture now has double precision (FP64), and its sustained throughput (in FLOPs) has been greatly increased.

Note - the fact is, you can't sit on your ass doing nothing as a big company. G200 was in the works long before G80 was released. It's impossible to suddenly pull something out of the rear end, seeing as it takes months and months just to get the manufacturing process right. nVIDIA is probably already working on their refresh of G200 and the next-generation architecture (the DX11 part that will go up against Larrabee).

 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: taltamir
anyone else notice the inq review:
1. Looks like they stole it from someone else.
2. Looks like they were comparing two different driver versions.

Not only that, but did you check out the OC on that 9800 GTX they used for comparison?
 

manko

Golden Member
May 27, 2001
1,846
1
0
Originally posted by: Cookie Monster
nVIDIA is probably already working on their refresh of G200 and the next-generation architecture (the DX11 part that will go up against Larrabee).
I thought I saw a report that the 55nm GT200b had already taped out and is due for a September release.

 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
Originally posted by: manko
I thought I saw a report that the 55nm GT200b had already taped out and is due for a September release.

It wasn't a "report" so much as rumor-mill from The Inqurier.

re: The tangent about system RAM
Originally posted by: taltamir
Too many games simply do not utilize the RAM. Neverwinter Nights 2, for example, has atrocious load times: at max settings it uses about 600MB, and every few minutes (when you move to a different "map", and the "maps" are tiny) it dumps 400MB of that and then starts reloading it with a single-threaded, unoptimized decompression algorithm reading from one of their huge data files...
Developers have to write code to work within the constraints of the minimum specs. Rather than write a second path to check for better hardware, they just treat newer/faster systems as brute-force improvements. If the code works reasonably well with the official minimum and recommended specs, then rather than spend time and money adding routines for the three people who bought more, they call it a day.

NWN2 was released in 2006 (which means the code was written in 2005!). There were hardly any multi-core machines or people with large amounts of RAM to worry about.

Some programs/companies may not even care about scaling. Being flashy and bleeding-edge tech isn't always a directive for every product.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
Originally posted by: Extelleron
Originally posted by: bryanW1995
Originally posted by: SickBeast
Originally posted by: BFG10K
I'll finish up with this: if the 4870 X2 turns out to be faster than the GTX280 it will be a hollow victory as far as I'm concerned.
I tend to agree with you, however I'm going to reserve judgment until they actually release the card. Two of the rumored new features sound like they will make a huge improvement. The GPU-to-GPU communication will be done in memory, plus the entire memory on the card will be fully utilized (as opposed to 'half wasted').

If you look at how seamlessly multiple CPUs can chug through an SMP-enabled application such as video encoding, you've gotta keep the faith that someday dual GPUs will be capable of 100% gains over a single one.

If the X2 can beat the GTX 280 without any major driver issues, AMD will have pulled a major rabbit out of their hat IMO. I personally see the lack of texturing units as the Achilles' heel of the 4870, but time will tell.

It's not going to be faster than the GTX 280 if it's $150 cheaper... that's just a hunch...

http://www.xtremesystems.org/f...3047486&postcount=1390

According to OBR (who definitely has a GTX 280), HD 4850s in CF are faster than a single GTX 280... in whatever benchmark he tested (probably Crysis). And he's no AMD fanboy either.

If HD 4850 CF is faster.... HD 4870 CF should definitely be faster as well.

Wow $179x2. Wow.
 