SpeedZealot369
Platinum Member
256-bit is very unlikely. The die seems 20% larger on a smaller process.
Originally posted by: Cookie Monster
Leaked R600 die shots
R520 and R600 compared
It looks like R600 isn't too far off from its launch, as leaks are spreading. However, notice that the R600 is the same size in terms of the package. Therefore many B3D members think that there won't be enough pins for a 512-bit memory bus interface, and that it's going to be around 534 million transistors. (Note that the R600 is 80nm.)
This leads me to believe that R600 is going to be 256-bit using faster GDDR4 memory. You have to keep in mind that GDDR4 at 1.4GHz (256-bit) offers more bandwidth than what the G80 offers even with its 384-bit bus. There's also a possibility of the R600 using 384-bit.
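The bandwidth comparison above is quick arithmetic to check. A minimal sketch, assuming the G80 GTX's stock 900MHz (1.8GT/s effective) GDDR3, double-data-rate signalling, and treating the 1.4GHz GDDR4 figure as 2.8GT/s effective; the R600 configurations are the thread's hypotheticals, not confirmed specs:

```python
# Peak memory bandwidth = bus width in bytes * effective transfer rate.
# DDR memories move two transfers per clock, so 1.4 GHz -> 2.8 GT/s.
def bandwidth_gb_s(bus_width_bits: int, mem_clock_ghz: float,
                   transfers_per_clock: int = 2) -> float:
    return bus_width_bits / 8 * mem_clock_ghz * transfers_per_clock

g80_gtx  = bandwidth_gb_s(384, 0.9)   # 384-bit, 900 MHz GDDR3 -> 86.4 GB/s
r600_256 = bandwidth_gb_s(256, 1.4)   # 256-bit, 1.4 GHz GDDR4 -> 89.6 GB/s
r600_512 = bandwidth_gb_s(512, 1.4)   # 512-bit, 1.4 GHz GDDR4 -> 179.2 GB/s

print(g80_gtx, r600_256, r600_512)
```

So a 256-bit card with 1.4GHz GDDR4 would indeed edge out the 8800 GTX's 86.4 GB/s, which is what makes the 256-bit scenario plausible on paper.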
Originally posted by: Cooler
256-bit is very unlikely; the die seems 20% larger on a smaller process.
Originally posted by: Cookie Monster
Originally posted by: Cooler
256-bit is very unlikely; the die seems 20% larger on a smaller process.
The reason I came to that conclusion was that no matter how big the GPU die is, you have to remember the number of pins it can have. At first I thought the R600 package was about the same size as the R5x0 package (not the GPU itself). Since you can't really change the pin spacing, I thought it was impossible for R600 to have 512-bit, because there would be no room for those extra pins.
However, those die shots actually surprised me. We are talking about 2000-ish pins!!! That is a $HIT load of pins.
edit - 256-bit is a possibility because GDDR4 at 1.4GHz would provide more bandwidth, a smaller PCB, cheaper production, and lower power figures. But it's really leaning toward the 512-bit rumour now, thanks to the leaked shots.
Originally posted by: josh6079
Alright so I'll ask it: Why'd they turn it?
But it's not like the whole substrate itself is tilted, just the front-side portion. Your pictures show the back side displaying a right-side-up, square die: Click
The same question I'm wondering about. But I do think it's something to do with the memory chip layout/512-bit and the PCB length.
Originally posted by: josh6079
But it's not like the whole substrate itself is tilted, just the front-side portion. Your pictures show the back side displaying a right-side-up, square die: Click
The same question I'm wondering about. But I do think it's something to do with the memory chip layout/512-bit and the PCB length.
If the reason is related to PCB design or memory module layout, wouldn't the front and back sides of the die be positioned the same?
Originally posted by: Cookie Monster
Originally posted by: josh6079
Alright so I'll ask it: Why'd they turn it?
The same question I'm wondering about. But I do think it's something to do with the memory chip layout/512-bit and the PCB length.
edit - it won't be another NV30 cooler. GDDR4 is very different from GDDR2. NV30 "failed" not because of GDDR2 memory but because of the NV30 taking unexpected twists and turns amid the relationship between MS and nVIDIA over DX9, and a whole lot of other things. It's a really complicated story that only the people who were part of the FX fiasco truly understand.
R600 doesn't look like an NV30, unless something goes terribly wrong.
Originally posted by: tanishalfelven
Originally posted by: Cookie Monster
It's a really complicated story that only the people who were part of the FX fiasco truly understand.
Or someone like me who spent 10 minutes reading the Wikipedia article on it.
Originally posted by: Cooler
The R600 pin count is roughly 500 more than the G80's. At the very least it will have a gig of GDDR4.
I don't care if they release better-performing drivers, as long as they release some that allow SSAA on a single G80. It's not like the G80 needs a performance boost right now, and by the time true DX10 games become prominent these cards are going to pale in comparison to their inevitable refreshes.
With all these leaks, I'm beginning to wonder if nVIDIA is delaying their performance drivers for the G80, just like the GF3 and 8500. When the R600 shows up, there's no need for high-clocked G80s or refreshes, just a performance driver to steal some of ATi's thunder. This might indicate that the R600 is still alive and kicking and we might see it very soon.
Originally posted by: ArchAngel777
Originally posted by: Cooler
The R600 pin count is roughly 500 more than the G80's. At the very least it will have a gig of GDDR4.
If they are throwing on 1GB of VRAM, I would find it very hard to believe that it is a 256-bit bus. Then again, I'm not sure where you came up with the known fact that it will be 1GB... Sounds too good to be true, but I won't complain. GDDR4 + a 512-bit bus would probably be on par with an nVidia 8800GTS SLI rig when turning on full AA and AF. Can't wait to see if this is a rabbit waiting to be pulled out or some dookie...
Originally posted by: jim1976
Originally posted by: ArchAngel777
If they are throwing on 1GB of VRAM, I would find it very hard to believe that it is a 256-bit bus. Then again, I'm not sure where you came up with the known fact that it will be 1GB... Sounds too good to be true, but I won't complain. GDDR4 + a 512-bit bus would probably be on par with an nVidia 8800GTS SLI rig when turning on full AA and AF. Can't wait to see if this is a rabbit waiting to be pulled out or some dookie...
It doesn't work that way. Having more bandwidth than necessary gives diminishing returns in performance. And most importantly, 1GB of GDDR4 will be much faster than GDDR3 and provide a lot more bandwidth, if needed, for higher resolutions and filtering by itself, even with a 256/384-bit bus.
I can't see why ATI would need this bandwidth right now. It would be a costly and unnecessary investment, without being able to utilise it, from an architectural point of view AFAIK. Unless they want it there for future plans.
Originally posted by: ArchAngel777
Originally posted by: jim1976
It doesn't work that way. Having more bandwidth than necessary gives diminishing returns in performance. And most importantly, 1GB of GDDR4 will be much faster than GDDR3 and provide a lot more bandwidth, if needed, for higher resolutions and filtering by itself, even with a 256/384-bit bus.
I can't see why ATI would need this bandwidth right now. It would be a costly and unnecessary investment, without being able to utilise it, from an architectural point of view AFAIK. Unless they want it there for future plans.
It depends on what you or I deem necessary, which is totally subjective. When you start hitting 16X AA at some decently high resolutions, the memory bandwidth requirements go through the roof, and yes, it does work that way.
Who's to say it won't need it? The GPU isn't even out. We have absolutely no idea what kind of AA the R600 will push, nor what other features it may bring. For instance, what if it supported 16xSSAA? That would need a huge amount of bandwidth when used at resolutions of 1600x1200 or higher.
And most importantly 1GB of GDDR4 will be much faster than GDDR3 and provide a lot more bandwidth if needed for higher resolution and filtering by itself even with a 256/384-bit bus.
I can't see why ATI will need this bandwidth right now.
Originally posted by: josh6079
I'm not saying it will use that kind of AA, but I just think it's too early to judge its needs for high bandwidth before we know what features it supports.
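For a sense of why supersampling is the bandwidth-hungry case being described here: under NxSSAA every pixel carries N colour and depth samples, so framebuffer traffic scales roughly linearly with the sample count. A rough sketch, where the 60fps, 4-byte colour and 4-byte depth figures are illustrative assumptions, and which ignores texturing, overdraw and blending (real traffic is far higher):

```python
# Rough framebuffer traffic under supersampling (SSAA):
# each pixel is shaded and stored at `samples` positions.
def framebuffer_gb_per_s(width: int, height: int, samples: int,
                         fps: int = 60, bytes_per_sample: int = 8) -> float:
    # 8 bytes per sample = 4-byte colour + 4-byte depth (illustrative)
    return width * height * samples * bytes_per_sample * fps / 1e9

no_aa  = framebuffer_gb_per_s(1600, 1200, 1)    # ~0.92 GB/s
ssaa16 = framebuffer_gb_per_s(1600, 1200, 16)   # ~14.75 GB/s

print(no_aa, ssaa16, ssaa16 / no_aa)  # traffic scales ~16x with sample count
```

The absolute numbers are toy figures, but the ~16x multiplier is the point: a mode like 16xSSAA eats bandwidth headroom that looks excessive at lower AA settings.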
Originally posted by: jim1976
Ok, then I'd love to see your explanation of how this excess bandwidth will be utilised.
With a 256/384-bit bus and GDDR4 at today's standards you can get to 120GB/sec. That's already a lot more than G80's bandwidth.
Now if you go with a 512-bit bus, show me where you are going to need that. And most importantly, where are you going to find the math/processing power and fillrate to push that huge memory bandwidth beyond this limit at insane resolutions? Unless of course you expect R600 to be a card that is out of this world, or that memory can take care of everything at these insane settings.
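The 120GB/sec ceiling quoted above is consistent with a 384-bit bus if you assume GDDR4 of that era running at 2.5GT/s effective; the exact data rate is an assumption chosen to reproduce the figure, not a quoted spec:

```python
# Reproducing the ~120 GB/s figure for a 384-bit GDDR4 configuration.
bus_width_bits = 384
effective_rate_gt_s = 2.5   # assumed GDDR4 effective data rate (2.5 GT/s)

bandwidth_gb_s = bus_width_bits / 8 * effective_rate_gt_s
print(bandwidth_gb_s)  # 120.0 GB/s, well above the G80 GTX's 86.4 GB/s
```

Whether the shader core and ROPs could actually consume more than that at playable settings is exactly the question being raised.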