G80 De-Mystified; GeForce 8800GTX/GT


zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
These specs make no sense. 700M transistors? Keep dreaming. 3x the clock speed? HA HA! 3x the clock speed but only 2x the pixel shader power? Something's wrong with that math.

Also, these rumored specs claim "Much more efficient than traditional architecture", even though this architecture delivers only 2x the pixel shader power with 2.5x the number of transistors and 3x the clock speed? 2.5 x 3 = 7.5x the raw processing resources, yet only a 2x increase in pixel shader power? Laughable.
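
For reference, here's that back-of-envelope math as a quick Python sketch; the G71 baselines (~278M transistors, a 650MHz 7900GTX core) are assumptions, not figures from the leak:

```python
# Back-of-envelope check of the rumored multipliers over G71.
# Assumed baselines: G71 ~278M transistors, 7900GTX core at 650MHz.
g71_transistors, g80_transistors = 278e6, 700e6
g71_clock_mhz, g80_clock_mhz = 650, 1500

transistor_ratio = g80_transistors / g71_transistors  # ~2.5x
clock_ratio = g80_clock_mhz / g71_clock_mhz           # ~2.3x (the post rounds this to 3x)

raw_ratio = transistor_ratio * clock_ratio            # ~5.8x raw resources
print(f"~{raw_ratio:.1f}x raw resources vs. the claimed 2x pixel shader gain")
```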

I'm wondering if nVidia is leaking phoney specs just to generate some hype and attention for their product.
 

GundamSonicZeroX

Platinum Member
Oct 6, 2005
2,100
0
0
So much for me stepping up. It's out of my price range (can't afford another $200-300 atm), and it's just outside my 90 days.
 

Damn Dirty Ape

Diamond Member
Nov 1, 1999
3,310
0
76
Originally posted by: Trey22
The GT at $449 is still out of the range I'll pay for a vidcard.

Was thinking the same thing. I could foresee some big price cuts on ATI's part again to swoop in for the DX9 folks for one last hurrah.
 

Nightmare225

Golden Member
May 20, 2006
1,661
0
0
Originally posted by: zephyrprime
<snip>
I'm wondering if nVidia is leaking phoney specs just to generate some hype and attention for their product.

Then everyone would be disappointed and nobody would buy from them. Why are you so biased against the green team? :disgust: Using that logic, R600 will use more than 250W...
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I think G80 is a "unified" shader architecture, but it's a bit different from your normal definition of a USA (unified shader architecture).

Now, the rumours from various sources point to G80 being split into two sections: dedicated pixel shaders and unified VS/GS.

For DX10, we know that a shader can be addressed as a PS/VS/GS, hence giving birth to the USA for future DX10 GPUs.

Now, VR-Zone said 2x pixel shader performance and 12x vertex shader performance.
48 dedicated pixel shaders, hence 2 x 24 = 48. Since the VS/GS array is unified with a total of 96 shaders, the card could run as 48 VS / 48 GS if configured to split in two; and since it's unified, it can also run all 96 as VS, hence 12 x 8 = 96.

So basically, G80 might have 48 PS plus 96 unified VS/GS (48 VS and 48 GS when split evenly), for a total of 144 shaders: 48 dedicated and 96 unified as VS/GS.
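
Sketching that unit math out (G71's 24 PS / 8 VS counts are the assumed baseline), the breakdown notably doesn't reconcile with VR-Zone's 128 stream processors:

```python
# Hypothesized G80 shader layout from VR-Zone's multipliers and
# G71's known unit counts (assumed: 24 pixel shaders, 8 vertex shaders).
G71_PS, G71_VS = 24, 8

dedicated_ps = 2 * G71_PS     # "2x pixel"   -> 48 dedicated pixel shaders
unified_vs_gs = 12 * G71_VS   # "12x vertex" -> 96 unified VS/GS units

total = dedicated_ps + unified_vs_gs
print(f"{dedicated_ps} PS + {unified_vs_gs} VS/GS = {total} units")
# 144 units total, not the 128 stream processors in the VR-Zone list,
# so this hypothesis doesn't quite match the leak.
```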

For the 384-bit bus, I think this might help some people understand the reasoning behind it.
From B3D, quoting Jawed:

My earlier idea that the 256-bit bus is for normal work and the "odd" bus is for Constant Buffers, Streamout Buffers and post-GS cache:

http://www.beyond3d.com/forum/showpo...&postcount=693
Maybe it's a 256-bit bus to conventional local RAM plus a 128-bit bus to a pool of memory dedicated to:
constant buffers
post geometry shader cache
stream-out


is making more sense now... In the GTX the odd bus is 128 bits wide, for 768MB total. In the GT the odd bus is 64 bits wide (with half the odd-pool memory, 128MB), for 640MB total. It also makes the board a bit cheaper to make.

In many ways, it's arguable that the access patterns to these buffers are quite unlike the normal access patterns for local RAM. Well, that's certainly the case for streamout and the post-GS cache (which are tiled-write, serial-read). Constant buffers (and the associated texture buffers?) are more like textures in one sense, so maybe they'll live in regular memory (which is optimised for tiled-write, tiled-read).

If the two-chip rumour has any foundation then I could see something like:

VS/GS die <-> VB, CB, TB, SO, PGSC odd-RAM -> PS die <-> VRAM

Note that the VS/GS pipes don't generally need access to textures; TBs and CBs serve those functions (as well as VBs, vertex buffers for input data). The CPU would send vertices directly to the odd RAM as a serial stream, and it would directly update the CBs in odd RAM too.

The PS die then has read-only access to the odd RAM (need a better name!!!) and all framebuffer operations work against VRAM. Obviously the PS die can access any TBs and CBs in the odd RAM, whilst fetching normal textures from VRAM.
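
Worked through, the rumored split-bus arithmetic looks like this; note the GT's odd pool has to be 128MB for the 640MB total to add up:

```python
# Rumored split-bus configurations: a 256-bit bus to normal VRAM plus
# a narrower "odd" bus to the dedicated buffer pool.
def board(name, main_bits, odd_bits, main_mb, odd_mb):
    print(f"{name}: {main_bits + odd_bits}-bit ({main_bits}+{odd_bits}), "
          f"{main_mb + odd_mb}MB ({main_mb}+{odd_mb})")

board("8800GTX", 256, 128, 512, 256)  # 384-bit, 768MB
board("8800GT",  256,  64, 512, 128)  # 320-bit, 640MB
```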

I'm also led to believe that G80 will have quite impressive DX10 performance, not to mention DX9 performance.

Edit - Finally, 700M is a possibility. That's because Nvidia themselves said their next-gen part was "...over half a billion transistors...", which means >500M. So it's all possible.
 

TheRyuu

Diamond Member
Dec 3, 2005
5,479
14
81
Originally posted by: Cookie Monster
<snip>

Also, was a unified shader required for DX10?
The only benefit I can see to going unified is that it can change depending on the type of load. With fixed shaders, they're just fixed.

And what's VCAA?
And hurray, about damn time for HDR+AA.
And (again), I think we should take all this with a grain of salt.
Is this source any more reliable than the last?
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,425
8,388
126
Originally posted by: wizboy11
Also, was a unified shader required for DX10?
no.
And (again), I think we should take all this with a grain of salt.
most definitely. 700 million transistors worth of salt. that isn't even believable. only a 65 nm fab could realistically do it, and i don't think anyone other than intel has those yet. the X1900 is 384 million transistors. core2duo is 291 million transistors and 145 sq mm at 65 nm. 145 sq mm is a big part, and gobs of that is cache (4 MB of it), and intel is an expert at fitting cache into the smallest possible places. 700 million transistors on a 90 nm or even 80 nm part is going to be absolutely gigantic. to give you an idea, an athlon x2 with 1 MB total cache is 154 million transistors and 183 sq mm. 700 million transistors would be over 4 times that size.
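
The density scaling behind that size estimate, as a rough sketch; linear transistor-to-area scaling is a crude assumption, since cache packs much denser than logic:

```python
# Crude die-area estimate: assume the same transistor density as a known
# 90nm part and scale linearly with transistor count.
ATHLON_X2_MTRANS = 154.0  # millions of transistors (1MB total cache model)
ATHLON_X2_MM2 = 183.0     # die area in mm^2

density = ATHLON_X2_MTRANS / ATHLON_X2_MM2  # ~0.84 M transistors per mm^2
print(f"700M at Athlon X2 density: ~{700 / density:.0f} mm^2")  # ~832 mm^2
```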

it would have to be two chips. but 2 chips in one package is ridiculously expensive (as intel found out with the pentium pro). unless G80 is going the same route as VSA-100 and they're quoting numbers for the whole package, it ain't gonna happen. if these specs are anything near accurate, that HAS to be what they're doing.
Is this source any more reliable than the last?
most likely not.

 

Nightmare225

Golden Member
May 20, 2006
1,661
0
0
Originally posted by: ElFenix
<snip>
it would have to be two chips. but 2 chips in one package is ridiculously expensive (as intel found out with the pentium pro). unless G80 is going the same route as VSA-100 and they're quoting numbers for the whole package, it ain't gonna happen. if these specs are anything near accurate, that HAS to be what they're doing.

Or are we talking a successor to the 7950GX2? Nvidia could pull it off; they have the experience.
 
TheLiberalTruth

Jul 26, 2005
41
0
0
Originally posted by: Genx87
Originally posted by: Ulfhednar
Originally posted by: Genx87
Boo effing hoo pay up or quit complaining.
I think we already established how I have more sense than that. It's no skin off my nose; Nvidia lose a customer and I save money until prices become more sensible and the features I want become available on their graphics hardware, it's that simple.

Like I said, market will dictate prices. Apparently however you are in the minority and thus Nvidia prices stay higher than you like.

So, why do you think that? You're probably some 17 year old nub who has taken one economics class in high school and now thinks he understands all aspects of the economy.
Haven't you ever noticed how other countries don't always have the same prices for products as we do? Not to mention the fact that the market doesn't always have a whole heck of a lot to do with prices anyway. Ever heard of price fixing? Seems you're not quite the omnipotent being you purport yourself to be.
It's really too bad that the people with the lowest IQs and least significant things to say are so often the loudest.
 

Ulfhednar

Golden Member
Jun 24, 2006
1,031
0
0
Originally posted by: TheLiberalTruth
Originally posted by: Genx87
Like I said, market will dictate prices. Apparently however you are in the minority and thus Nvidia prices stay higher than you like.
So, why do you think that? You're probably some 17 year old nub who has taken one economics class in high school and now thinks he understands all aspects of the economy.
Haven't you ever noticed how other countries don't always have the same prices for products as we do? Not to mention the fact that the market doesn't always have a whole heck of a lot to do with prices anyway. Ever heard of price fixing? Seems you're not quite the omnipotent being you purport yourself to be.
It's really too bad that the people with the lowest IQs and least significant things to say are so often the loudest.
:shocked:
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: tanishalfelven
Originally posted by: Capt Caveman
http://www.vr-zone.com/?i=4007

* Unified Shader Architecture
* Support FP16 HDR+MSAA
* Support GDDR4 memories
* Close to 700M transistors (G71 - 278M / G70 - 302M)
* New AA mode : VCAA
* Core clock scalable up to 1.5GHz
* Shader Performance : 2x Pixel / 12x Vertex over G71
* 8 TCPs & 128 stream processors
* Much more efficient than traditional architecture
* 384-bit memory interface (256-bit+128-bit)
* 768MB memory size (512MB+256MB)
* Two models at launch : GeForce 8800GTX and GeForce 8800GT
* GeForce 8800GTX : 7 TCPs chip, 384-bit memory interface, hybrid water/fan cooler, water cooling for overclocking. US$649
* GeForce 8800GT : 6 TCPs chip, 320-bit memory interface, fan cooler. US$449-499

Our friends have gotten some information on the upcoming NVIDIA G80, and it appears that G80 is gonna be unified shader after all. It should be pretty close to the truth, and it explains the mystery of the strange DRAM capacities that the INQ reported previously. Expect G80 to be out somewhere in mid-November along with Kentsfield.


3x core clock increases only 2x pixel shading power?
Does anyone else see something wrong with that?

It could mean per clock also.

 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
Can you please change that to a $100 donation to Palestine or something that could use it? Screw AT, they have ads.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Ulfhednar
Originally posted by: Genx87
Boo effing hoo pay up or quit complaining.
I think we already established how I have more sense than that. It's no skin off my nose; Nvidia lose a customer and I save money until prices become more sensible and the features I want become available on their graphics hardware, it's that simple.

What can we say, Ulfhednar? You're just perfect in every way. Just not here.

 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
VR-Zone has been pretty accurate before (as I remember it) in terms of Nvidia hardware. Their ATI speculation is always way too optimistic (32 pipes). So it's very encouraging that Nvidia looks to have a killer card on the horizon. The other interesting fact is that these stories are coming out this far before launch; looks like Nvidia wants to kill sales of current-gen higher-end cards. Guess ATI must be doing OK here. Anyways, looking forward to an external power brick and a nice improvement in graphics. Hope I can afford all this.
 

latino666

Golden Member
Sep 27, 2005
1,103
0
0
Originally posted by: lopri
250W for a single card is suicide. It WON'T happen. That's against the PCI-E specs and Intel's PSU guidelines. An external power brick could be a solution, sure, but again, that won't happen at this point in the game. I will donate $100 to AT if either an R600 or a G80 consumes that much power. Now, 150W is a more realistic number, since we know the motherboard PCI-E slot supplies ~75W and a 6-pin power connector can supply the rest.

Just making sure you keep your word if it does turn out to use that much power.
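
For reference, lopri's power budget itemized; the 75W slot and connector limits are from the PCI-E spec of the time:

```python
# What a single card can draw within the PCI-E spec, no external brick.
PCIE_SLOT_W = 75  # PCI Express x16 slot limit
SIX_PIN_W = 75    # one 6-pin PCI-E power connector

print(f"Slot + one 6-pin: {PCIE_SLOT_W + SIX_PIN_W} W")      # 150 W
print(f"Slot + two 6-pin: {PCIE_SLOT_W + 2 * SIX_PIN_W} W")  # 225 W, still under 250
```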
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
Originally posted by: TheLiberalTruth
<snip>
So, why do you think that? You're probably some 17 year old nub who has taken one economics class in high school and now thinks he understands all aspects of the economy.
Haven't you ever noticed how other countries don't always have the same prices for products as we do? Not to mention the fact that the market doesn't always have a whole heck of a lot to do with prices anyway. Ever heard of price fixing? Seems you're not quite the omnipotent being you purport yourself to be.
It's really too bad that the people with the lowest IQs and least significant things to say are so often the loudest.

If you have proof of price fixing, which is illegal, please come forward and give it to the proper authorities.

Other countries don't have the same prices, and theirs are often higher, because they have more inefficiencies within their economies which prevent the same prices you will find within the United States. This can be from oppressive taxation on the product, poor supply distribution systems, or the fact that it isn't built within the country it is being sold in.

But nonetheless, for that region the market will set the price even with the above factored in.

The only time the market doesn't have a lot to do with prices is when the govt gets involved, or there is price fixing and/or collusion on the part of the manufacturers.
 

coldpower27

Golden Member
Jul 18, 2004
1,677
0
76
Hmmm, just hmmm. 700 million transistors, eh...

And 143mm2 is not a big die, lol; Core 2 Duo is at a fairly efficient die size for mass production.

A ~500mm2 die would be needed on the 90nm node, using G71's transistor density, for a 700 million transistor core. Given that high-end GPUs aren't produced en masse: wow, just wow if true. Intel themselves have created a product larger than this on the 90nm node, Montecito, with a die size of 596mm2, the bulk of it being cache.
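
That ~500mm2 figure can be reproduced from G71's density; the ~196mm2 G71 die size used here is an assumption not stated in the thread:

```python
# 700M transistors at G71's transistor density, scaled linearly.
G71_MTRANS = 278.0  # millions of transistors
G71_MM2 = 196.0     # die area in mm^2 (assumed)

print(f"~{700 * G71_MM2 / G71_MTRANS:.0f} mm^2")  # ~494 mm^2 on 90nm
```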

We will see, won't we?
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
This almost sounds like a "dual-core" type solution, with this high of a transistor count. And I'm not so sure they'll get anywhere near 1.5GHz clocks. This GPU sounds like it should be something about 2 or 3 generations from now, at least.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Originally posted by: Genx87

The only time the market doesn't have a lot to do with prices is when the govt gets involved, or there is price fixing and/or collusion on the part of the manufacturers.

The market is a very complex thing and involves much more than just the manufacturers. Adam Smith really died with the industrial revolution.

 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
I think 700M transistors is very possible, as rumours were saying it has a massive die. But that runs right up against the 1.5GHz core clock: large dies create a lot of heat, and I don't think 1.5GHz would be possible.

The only thing I can see Nvidia doing is making a dual GPU in one die: a DX9 GPU and a DX10 GPU (which would explain the split bus and the weird memory count). In that case they could both be clocked at around 750MHz, giving 1.5GHz of "marketable" frequency... that would still make a lot of freaking heat, though.
 

schneiderguy

Lifer
Jun 26, 2006
10,769
52
91
Originally posted by: keysplayr2003
This almost sounds like a "dual-core" type solution, with this high of a transistor count. And I'm not so sure they'll get anywhere near 1.5GHz clocks. This GPU sounds like it should be something about 2 or 3 generations from now, at least.

yeah, two ~350 million transistor cores sounds a lot more manageable than one 700 million transistor core
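
A toy Poisson yield model illustrates why; the defect density and die areas below are illustrative assumptions, not figures from the thread:

```python
import math

# Poisson yield model: yield = exp(-D * A). Two half-size dies waste far
# less silicon per working part than one big die.
D = 0.005                  # defects per mm^2 (illustrative)
BIG, SMALL = 830.0, 415.0  # die areas in mm^2 (one big die vs. half-size)

def cost_per_good_die(area_mm2):
    # Silicon cost of one working die scales as area / yield.
    return area_mm2 / math.exp(-D * area_mm2)

ratio = cost_per_good_die(BIG) / (2 * cost_per_good_die(SMALL))
print(f"One big die costs ~{ratio:.0f}x as much good silicon as two halves")
```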
 