G92 9800GTX and GTS


ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
rare doubling of their previous top-end

Rare as in, every generation? Because in case you haven't noticed:

6800 2x ++ 5900
7800 2x 6800
8800 2x ++ 7800

By the time it launches it'll have been a full year since the 8800 launched, so 2x is not surprising
 

Ackmed

Diamond Member
Oct 1, 2003
8,483
528
126
^ I agree.

But this rumor is hardly enough to make me a believer. More facts are needed, which, as always, will trickle down the pipe with the crap. And sifting through it all is a pain. I'll just wait for real reviews before laying claim to anything.
 

LightningRider

Senior member
Feb 16, 2007
558
0
0
Man! I was gonna post this earlier today but I thought it was old news. Oh well, exciting though if it's all true.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: chewietobbacca
Except Nvidia has never skipped to a new nm process on the high end without testing it first on the mid-range and low-end

Also, the entire memory bandwidth and memory problem is questionable considering their current G80 architecture is what they are going to be building on. It would be weird to suddenly revise it to 512-bit / 1 gig when 384/768 and its derivatives are working.

Aren't the 8600 and 8-series mobile parts all on 65nm? Pretty sure the latest rumors had NV scrapping the 80nm refresh (8900s) and instead choosing to go forward with a 65nm high-end part and new line. The 2900XT's performance (or lack thereof) relative to the 8800GTX probably had a lot to do with that decision.

Also the current G80 memory structure is actually an odd one, with something like 5 memory controllers and 6 shader clusters making it very inflexible in terms of scaling the GPU. I forget the exact figures, but going back to a 512/1GB would be much easier for them to scale for lower-end parts.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: ShadowOfMyself
rare doubling of their previous top-end

Rare as in, every generation? Because in case you haven't noticed:

6800 2x ++ 5900
7800 2x 6800
8800 2x ++ 7800

By the time it launches it'll have been a full year since the 8800 launched, so 2x is not surprising

5950 Ultra to 6800 Ultra is definitely 2x with even 3x achieved in a few cases.

6800 Ultra to 7800 GTX achieved 1.6x and was just a bit shy of 6800 Ultra SLI which itself is about 1.7-1.8x 6800 Ultra.

7900 GTX to 8800 GTX achieved a good 1.6-2.0x, and that's only if you discount the 7950GX2.

 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Extelleron
Originally posted by: Cheex
Originally posted by: Extelleron

The 8800GTS 320MB is midrange

I'm insulted...:|
Take it back!!

If you look at the price range for GPUs... from $60 to $600, ~$270 is roughly in the middle, so technically the 8800GTS 320 is midrange

In my opinion:

Below $100: Ultra-low end
$100-$150: Low-end
$150-$200: Lower Mid-range
$250-$300: Upper Mid-range
$350-$450: High-End
$500-$600: Ultra High-End

Overall, the 8800GTS 320MB is kind of a weird card. It's technically midrange, but really it's the same GPU as the "high-end" 8800GTS 640MB (which by extension is a hacked-down version of the "ultra high-end" 8800GTX). I never understood why nVidia didn't release an 80nm, 64-shader part for the ~$250 price range, rather than an extremely expensive, 90nm high-end GPU coupled with less memory than the card really needs. Memory doesn't cost much, so nVidia isn't saving much with the 320MB version. With a 480mm^2 GPU, nVidia can't be making much money selling an 8800GTS 320MB for ~$270.

As for those who say 2-3x the performance of G80 is not going to happen, I believe you are wrong here. My theory is that G90 will be around 2x the performance of G80 in DirectX 9, perhaps a bit less, but closer to 3x faster in DX10. G80 is a first-generation DX10 part, and I'm sure nVidia has learned more about DX10, and how developers are implementing it in games, since last year. With much more raw power AND optimizations, 3x the performance of G80 in DX10 is not that hard to imagine.

The real question is whether G90 will be a single die or multiple dies. R700 is rumored to be multiple, and some have said G90 is as well. In my opinion, if nVidia is going for a card with approximately 2-3x the raw power of G80 (if the 1 TFLOP figure is true, G90 is actually around 3x more powerful than G80 in raw horsepower), then they need to have multiple dies. For 3x the raw power, nVidia would need at LEAST 256 shaders, and they would still need to clock the shaders at well over 2000MHz. And they would be stuck with the same old huge, expensive 480mm^2 chip they have now (with double the execution units on a half-size process, G90 would be approximately the same size as G80). I can't see nVidia doing that.

However, if you have 2 dies with, say, 160 shaders on each, then suddenly it becomes a lot more realistic. There are two dies, of course, but they are smaller and the chips will have much better yields.

I still consider the 8800 GTS 320 a high-end card, since it's based on the high-end core despite its price.

$39-99 Budget
$100-199 Mainstream
$200-$299 Performance-Mainstream
$300-$449 Performance
$450+ Enthusiast

And actually, 2x the shader units would not equal double the die size, as you're only doubling functional units, not redundant stuff like the DX10 features themselves or PureVideo. If the next high end is indeed 65nm, then expect Nvidia to be able to shrink the die somewhat, but not a whole lot. Personally I believe 1 teraflop could be achieved by making each individual unit more powerful, as well as using the MHz stick, while increasing the number of functional units some.
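
For what it's worth, a rough back-of-the-envelope sketch of where the "1 teraflop is ~3x G80" math comes from. The 2 FLOPs/clock per scalar unit (one MADD) is an assumption; counting G80's co-issued MUL would raise the baseline to ~518 GFLOPS, and the 160-unit configuration at the end is purely hypothetical:

```python
# Theoretical shader throughput: units * clock * ops per clock.
# Assumes 2 FLOPs/clock per scalar unit (one MADD); counting the
# co-issued MUL would give G80 a ~518 GFLOPS baseline instead.

def gflops(shader_units, shader_clock_mhz, flops_per_clock=2):
    return shader_units * shader_clock_mhz * flops_per_clock / 1000.0

g80_gtx = gflops(128, 1350)  # 8800 GTX: 128 SPs @ 1.35 GHz
print(f"G80 (8800 GTX): ~{g80_gtx:.0f} GFLOPS")    # ~346 GFLOPS
print(f"1 TFLOP is ~{1000.0 / g80_gtx:.1f}x G80")  # ~2.9x

# Hitting ~1 TFLOP without 256 shaders: fewer but beefier units,
# e.g. a hypothetical 160 units at 2.0 GHz doing 3 FLOPs/clock:
print(f"160 units @ 2.0 GHz, 3 ops: ~{gflops(160, 2000, 3):.0f} GFLOPS")
```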
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: chizow
Originally posted by: chewietobbacca
Except Nvidia has never skipped to a new nm process on the high end without testing it first on the mid-range and low-end

Also, the entire memory bandwidth and memory problem is questionable considering their current G80 architecture is what they are going to be building on. It would be weird to suddenly revise it to 512-bit / 1 gig when 384/768 and its derivatives are working.

Aren't the 8600 and 8-series mobile parts all on 65nm? Pretty sure the latest rumors had NV scrapping the 80nm refresh (8900s) and instead choosing to go forward with a 65nm high-end part and new line. The 2900XT's performance (or lack thereof) relative to the 8800GTX probably had a lot to do with that decision.

Also the current G80 memory structure is actually an odd one, with something like 5 memory controllers and 6 shader clusters making it very inflexible in terms of scaling the GPU. I forget the exact figures, but going back to a 512/1GB would be much easier for them to scale for lower-end parts.

G80 is a 6x64-bit configuration when fully enabled, and shader clusters should come in blocks of either 16 or 32, as the 8800 GTS to GTX shows: either 6/8 clusters (blocks of 16) or 3/4 (blocks of 32).

And I dunno, shader blocks and bus width look pretty independent to me, as it's all on the G80 die; you're just disabling portions as you go along.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
Going by past trends I don't think 2x performance is an unreasonable claim, especially since it's claimed it will have eDRAM.
 

LittleNemoNES

Diamond Member
Oct 7, 2005
4,142
0
0
Originally posted by: BFG10K
Going by past trends I don't think 2x performance is an unreasonable claim, especially since it's claimed it will have eDRAM.

Didn't ATI technically employ this first with the Xbox 360? Ironic that nVidia might beat them to the punch on PC.
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: gersson
Originally posted by: BFG10K
Going by past trends I don't think 2x performance is an unreasonable claim, especially since it's claimed it will have eDRAM.

Didn't ATI technically employ this first with the Xbox 360? Ironic that nVidia might beat them to the punch on PC.

Like Xenos' unified shader architecture.

I think I'm going to ride this 2900XT out until Jan '08 and then buy myself a real high end part.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
The thing is that ever since the whole "free 4xAA" and "eDRAM" thing took off with Xenos, almost ALL rumours on next-gen high-end parts seem to suggest the card will have those features.

One thing is for sure: there is NO such thing as "free" 4xAA, because there will ALWAYS be a performance hit. Even a 1% hit means it's not "free". Plus eDRAM would be far too expensive/difficult to implement (not sure if the benefits of having eDRAM are worth it; probably not). It makes sense for the Xbox, but not for discrete high-end GPUs.
 

Polish3d

Diamond Member
Jul 6, 2005
5,500
0
0
Originally posted by: coldpower27
Originally posted by: ShadowOfMyself
rare doubling of their previous top-end

Rare as in, every generation? Because in case you haven't noticed:

6800 2x ++ 5900
7800 2x 6800
8800 2x ++ 7800

By the time it launches it'll have been a full year since the 8800 launched, so 2x is not surprising


7900 GTX to 8800 GTX achieved a good 1.6-2.0x, and that's only if you discount the 7950GX2.




Go look at the Anand benchmarks from the original 8800 release date. Those were immature drivers (there has since been a very solid driver-optimized performance increase as seen in later tests)

The 8800GTX is ALWAYS twice as fast as the 7900GTX, and in games like Oblivion, it has 3x the framerates.


The X1900XTX is faster than the 7900GTX. I just upgraded from an X1900XTX, and the 8800 is EASILY twice as fast, especially with the right CPU. It's quite incredible.

It's more like 2.0-3.0x the 7900gtx
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
Originally posted by: Frackal
Originally posted by: coldpower27
Originally posted by: ShadowOfMyself
rare doubling of their previous top-end

Rare as in, every generation? Because in case you haven't noticed:

6800 2x ++ 5900
7800 2x 6800
8800 2x ++ 7800

By the time it launches it'll have been a full year since the 8800 launched, so 2x is not surprising


7900 GTX to 8800 GTX achieved a good 1.6-2.0x, and that's only if you discount the 7950GX2.




Go look at the Anand benchmarks from the original 8800 release date. Those were immature drivers (there has since been a very solid driver-optimized performance increase as seen in later tests)

The 8800GTX is ALWAYS twice as fast as the 7900GTX, and in games like Oblivion, it has 3x the framerates.


The X1900XTX is faster than the 7900GTX. I just upgraded from an X1900XTX, and the 8800 is EASILY twice as fast, especially with the right CPU. It's quite incredible.

It's more like 2.0-3.0x the 7900gtx

Not only that but the 7900 was a simple refresh, out less than 1 year from the 7800
All they did was double the memory and increase the clocks, no architectural changes (except for the shrink to 90nm)

What I mean is, we shouldn't be comparing small refreshes when talking about performance increases, because obviously between those there won't be a huge jump

9700 -> 9800
X1800 -> X1900

7800 -> 7900

and so on
 

Sylvanas

Diamond Member
Jan 20, 2004
3,752
0
0
Even if these specs are reliable, I'm afraid I won't be jumping on the bandwagon. With the issues I'm having on the driver front with my 8800, it could be 10 million times faster, but if it isn't reliable it's nothing... just my 2c.
 

biostud

Lifer
Feb 27, 2003
18,592
5,294
136
The specs are pretty much what anyone could come up with as a qualified guess.
 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Originally posted by: ShadowOfMyself
Not only that but the 7900 was a simple refresh, out less than 1 year from the 7800
All they did was double the memory and increase the clocks, no architectural changes (except for the shrink to 90nm)

What I mean is, we shouldn't be comparing small refreshes when talking about performance increases, because obviously between those there won't be a huge jump

9700 -> 9800
X1800 -> X1900

7800 -> 7900

and so on

Agree with your point, but I wouldn't include X1800 -> X1900 as an example, because there clearly was an architectural change (in the shader layout).
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: coldpower27
Originally posted by: chizow
Originally posted by: chewietobbacca
Except Nvidia has never skipped to a new nm process on the high end without testing it first on the mid-range and low-end

Also, the entire memory bandwidth and memory problem is questionable considering their current G80 architecture is what they are going to be building on. It would be weird to suddenly revise it to 512-bit / 1 gig when 384/768 and its derivatives are working.

Aren't the 8600 and 8-series mobile parts all on 65nm? Pretty sure the latest rumors had NV scrapping the 80nm refresh (8900s) and instead choosing to go forward with a 65nm high-end part and new line. The 2900XT's performance (or lack thereof) relative to the 8800GTX probably had a lot to do with that decision.

Also the current G80 memory structure is actually an odd one, with something like 5 memory controllers and 6 shader clusters making it very inflexible in terms of scaling the GPU. I forget the exact figures, but going back to a 512/1GB would be much easier for them to scale for lower-end parts.

G80 is a 6x64-bit configuration when fully enabled, and shader clusters should come in blocks of either 16 or 32, as the 8800 GTS to GTX shows: either 6/8 clusters (blocks of 16) or 3/4 (blocks of 32).

And I dunno, shader blocks and bus width look pretty independent to me, as it's all on the G80 die; you're just disabling portions as you go along.

Right, but I remember there was some pretty lengthy discussion about the memory configuration of the G80 when the 320MB part launched, and that the traces for the memory interface showed an imbalance, with some shader quads being serviced by a disproportionate number of memory interfaces/modules. I think AT even mentioned this as a roadblock explaining why they couldn't put more memory on the 320MB model.

Ideally you'd want everything to be balanced and proportionate, which isn't the case with the GTS and GTX. If you went to an 8x64-bit configuration, you'd have a 2:1 memory/shader ratio for the GTX, or a 1:1 ratio if you doubled the shaders on a G9X part. If you wanted to trim the part down for a mid-range model, you could simply cut the shaders and memory interfaces in half and still maintain a 1:1 ratio.
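
To put some numbers on that, here's an illustrative sketch; the 32-bit channel and 16-SP cluster sizes follow the discussion above, while the 8x64, doubled-shader, and half-cut configurations are hypothetical:

```python
# Illustrative only: count 32-bit memory channels against 16-SP
# shader clusters to see which configurations divide down cleanly.

def ratio(name, bus_width_bits, shader_units):
    channels = bus_width_bits // 32   # 32-bit memory channels
    clusters = shader_units // 16     # 16-SP shader clusters
    print(f"{name}: {bus_width_bits}-bit, {shader_units} SPs "
          f"-> {channels}:{clusters} channels to clusters")

ratio("8800 GTX (6x64)",          384, 128)  # 12:8  -> lopsided 3:2
ratio("8800 GTS (5x64)",          320,  96)  # 10:6  -> lopsided 5:3
ratio("Hypothetical 8x64 part",   512, 128)  # 16:8  -> clean 2:1
ratio("...with doubled shaders",  512, 256)  # 16:16 -> 1:1
ratio("Half-cut midrange",        256, 128)  # 8:8   -> still 1:1
```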
 

Nanobaud

Member
Dec 9, 2004
144
0
0
Originally posted by: manimal
You would think they would skip the 9800 number as well, since it was, well, an ATI product name. It would be like starting a new car company and naming the car the Corvair-G92,

... or running for president with a name like "Bush" or "Clinton".
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Frackal
Originally posted by: coldpower27
Originally posted by: ShadowOfMyself
rare doubling of their previous top-end

Rare as in, every generation? Because in case you haven't noticed:

6800 2x ++ 5900
7800 2x 6800
8800 2x ++ 7800

By the time it launches it'll have been a full year since the 8800 launched, so 2x is not surprising


7900 GTX to 8800 GTX achieved a good 1.6-2.0x, and that's only if you discount the 7950GX2.




Go look at the Anand benchmarks from the original 8800 release date. Those were immature drivers (there has since been a very solid driver-optimized performance increase as seen in later tests)

The 8800GTX is ALWAYS twice as fast as the 7900GTX, and in games like Oblivion, it has 3x the framerates.


The X1900XTX is faster than the 7900GTX. I just upgraded from an X1900XTX, and the 8800 is EASILY twice as fast, especially with the right CPU. It's quite incredible.

It's more like 2.0-3.0x the 7900gtx


No it isn't; that's only if you're looking at 2560x1600 and nothing else. If you look at the review as a whole, you see that there are times where the performance increase is only 1.6x. Simply looking at 25x16 is not representative of the whole.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: coldpower27
Originally posted by: Frackal
Originally posted by: coldpower27
Originally posted by: ShadowOfMyself
rare doubling of their previous top-end

Rare as in, every generation? Because in case you haven't noticed:

6800 2x ++ 5900
7800 2x 6800
8800 2x ++ 7800

By the time it launches it'll have been a full year since the 8800 launched, so 2x is not surprising


7900 GTX to 8800 GTX achieved a good 1.6-2.0x, and that's only if you discount the 7950GX2.



Go look at the Anand benchmarks from the original 8800 release date. Those were immature drivers (there has since been a very solid driver-optimized performance increase as seen in later tests)

The 8800GTX is ALWAYS twice as fast as the 7900GTX, and in games like Oblivion, it has 3x the framerates.


The X1900XTX is faster than the 7900GTX. I just upgraded from an X1900XTX, and the 8800 is EASILY twice as fast, especially with the right CPU. It's quite incredible.

It's more like 2.0-3.0x the 7900gtx


No it isn't; that's only if you're looking at 2560x1600 and nothing else. If you look at the review as a whole, you see that there are times where the performance increase is only 1.6x. Simply looking at 25x16 is not representative of the whole.

 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Sorry Coldpower, I hit the edit button instead of the quote button. Your post was not altered.

What I wanted to say was: isn't that just a CPU limitation, or no?
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: chizow
Originally posted by: coldpower27
Originally posted by: chizow
Originally posted by: chewietobbacca
Except Nvidia has never skipped to a new nm process on the high end without testing it first on the mid-range and low-end

Also, the entire memory bandwidth and memory problem is questionable considering their current G80 architecture is what they are going to be building on. It would be weird to suddenly revise it to 512-bit / 1 gig when 384/768 and its derivatives are working.

Aren't the 8600 and 8-series mobile parts all on 65nm? Pretty sure the latest rumors had NV scrapping the 80nm refresh (8900s) and instead choosing to go forward with a 65nm high-end part and new line. The 2900XT's performance (or lack thereof) relative to the 8800GTX probably had a lot to do with that decision.

Also the current G80 memory structure is actually an odd one, with something like 5 memory controllers and 6 shader clusters making it very inflexible in terms of scaling the GPU. I forget the exact figures, but going back to a 512/1GB would be much easier for them to scale for lower-end parts.

G80 is a 6x64-bit configuration when fully enabled, and shader clusters should come in blocks of either 16 or 32, as the 8800 GTS to GTX shows: either 6/8 clusters (blocks of 16) or 3/4 (blocks of 32).

And I dunno, shader blocks and bus width look pretty independent to me, as it's all on the G80 die; you're just disabling portions as you go along.

Right, but I remember there was some pretty lengthy discussion about the memory configuration of the G80 when the 320MB part launched, and that the traces for the memory interface showed an imbalance, with some shader quads being serviced by a disproportionate number of memory interfaces/modules. I think AT even mentioned this as a roadblock explaining why they couldn't put more memory on the 320MB model.

Ideally you'd want everything to be balanced and proportionate, which isn't the case with the GTS and GTX. If you went to an 8x64-bit configuration, you'd have a 2:1 memory/shader ratio for the GTX, or a 1:1 ratio if you doubled the shaders on a G9X part. If you wanted to trim the part down for a mid-range model, you could simply cut the shaders and memory interfaces in half and still maintain a 1:1 ratio.


I'm not disagreeing with you, but I believe Nvidia didn't have much choice with regard to the memory amounts available on a 320-bit memory interface. The memory chips are each connected via 32-bit pathways on the G80/R600 line, and each chip's density is a power of two, so you have to work with either 32MB or 64MB chips. The only available options for a 320-bit interface card would indeed be 320MB or 640MB of memory.
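
A quick sketch of that arithmetic: one 32-bit chip per 32 bits of bus, with power-of-two chip densities (the 384-bit and 512-bit rows are just for comparison; the 512-bit/1GB configuration is the rumour this thread is about, not a confirmed part):

```python
# Capacity options for a given bus width, assuming one 32-bit memory
# chip per 32 bits of bus and only 32MB or 64MB chip densities.

def capacity_options(bus_width_bits, chip_sizes_mb=(32, 64)):
    chips = bus_width_bits // 32   # e.g. 320-bit -> 10 chips
    return [chips * size for size in chip_sizes_mb]

for bus in (320, 384, 512):
    small, large = capacity_options(bus)
    print(f"{bus}-bit bus ({bus // 32} chips): {small}MB or {large}MB")

# 320-bit (10 chips): 320MB or 640MB   <- the GTS's only choices
# 384-bit (12 chips): 384MB or 768MB   <- GTX
# 512-bit (16 chips): 512MB or 1024MB  <- the rumoured G92 config
```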

We've already seen evidence of both Nvidia and ATI moving away from only using whole power-of-two ratios. We saw this with the R580, where they used not 2 or 4 but 3 pixel shaders per texture unit/ROP, and with the 24 texture units on G70/G71, which is a 6x4 quad configuration.

It has been shown to work, and work quite well. If you stick to balanced and absolutely proportionate designs, you start driving up costs and reducing your flexibility.
 