Well, looking at the 8800GTS 320MB vs 640MB, the performance difference is largely due to frame buffer size and NOT affected by bandwidth. (Isn't the memory clock the same between the two cards?)
People seem to misunderstand what memory bandwidth is and how much performance you actually gain from it. One clear example is the X1950XTX vs the X1900XTX: the GDDR4 gave the X1950XTX around 29% more bandwidth than the X1900XTX, yet it ended up only ~5% faster. You DON'T need a 256-bit memory interface IF the memory can be clocked high enough.
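To put rough numbers on that, here's the back-of-the-envelope bandwidth math (the specs are the commonly quoted reference numbers, so treat them as approximate; the helper function is just mine for illustration):

```python
# Peak theoretical memory bandwidth: (bus width in bits / 8) * effective memory clock.
# Specs below are the commonly quoted reference numbers, so treat them as approximate.

def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak theoretical memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_clock_mhz / 1000

x1950xtx = bandwidth_gb_s(256, 2000)  # GDDR4 at ~2.0 GHz effective -> ~64.0 GB/s
x1900xtx = bandwidth_gb_s(256, 1550)  # GDDR3 at ~1.55 GHz effective -> ~49.6 GB/s

print(f"X1950XTX: {x1950xtx:.1f} GB/s")
print(f"X1900XTX: {x1900xtx:.1f} GB/s")
print(f"Bandwidth advantage: {100 * (x1950xtx / x1900xtx - 1):.0f}%")  # ~29% more, yet only ~5% faster in games
```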
Most importantly though, the architecture itself matters more than any of these specs, as chizow pointed out.
Compare a 7600GT to a 6800 Ultra: on paper the 6800 Ultra crushes the 7600GT in terms of who has the "bigger numbers". It has far more bandwidth (256-bit bus), more pipelines, more vertex shaders, more texture units, etc. Yet the 7600GT is around 20% faster than the 6800 Ultra, and the gap only widens in newer games. Why is that? How can a 128-bit card beat a 256-bit card with AA/AF enabled at high resolution? Think about it for a moment. The same goes for the X1650XT (128-bit) vs the X850XT PE (256-bit).
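Running the same rough math on those two cards (again, approximate reference specs) shows the 6800 Ultra holding a big raw bandwidth lead and still losing:

```python
# Same back-of-the-envelope bandwidth formula, applied to the 7600GT vs 6800 Ultra.
geforce_7600gt = (128 / 8) * 1400 / 1000  # 128-bit, ~1.4 GHz effective -> ~22.4 GB/s
geforce_6800u  = (256 / 8) * 1100 / 1000  # 256-bit, ~1.1 GHz effective -> ~35.2 GB/s
print(f"7600GT: {geforce_7600gt:.1f} GB/s, 6800 Ultra: {geforce_6800u:.1f} GB/s")
# The 6800 Ultra has ~57% more raw bandwidth on paper, yet the 7600GT's newer architecture wins in games.
```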
An 8800GTS 320MB is clearly faster than the 7900GTX, especially in shader-intensive and newer games (the 640MB is in 7950GX2 territory). Just like the 7 series, I expect the 8600GTS to be quite a bit faster than the 8600GT, a la 7600GT vs 7600GS.
IMO this card will land somewhere around the 7950GT/X1900XT ballpark, and beat them in shader-intensive games and newer titles (Oblivion/CoH come to mind).