Originally posted by: Cheex
Originally posted by: Extelleron
The 8800GTS 320MB is midrange
I'm insulted...:|
Take it back!!
If you look at the price range for GPUs, from $60 to $600, $270~ falls roughly in the middle, so technically the 8800GTS 320 is midrange
In my opinion:
Below $100: Ultra-low end
$100-$150: Low-end
$150-$200: Lower Mid-range
$250-$300: Upper Mid-range
$350-$450: High-End
$500-$600: Ultra High-End
Overall, the 8800GTS 320MB is kind of a weird card. It's technically midrange, but it's really the same GPU as the "high-end" 8800GTS 640MB (which is itself a cut-down version of the "ultra high-end" 8800GTX). I never understood why nVidia didn't release an 80nm, 64-shader part for the $250~ price range, rather than an extremely expensive 90nm high-end GPU paired with less memory than the card really needs. Memory doesn't cost much, so nVidia isn't saving much with the 320MB version. With a 480mm^2 GPU, nVidia can't be making much money selling an 8800GTS 320MB for $270~.
As for those who say 2-3x the performance of G80 is not going to happen, I believe you're wrong. My theory is that G90 will be around 2x the performance of G80 in DirectX 9, perhaps a bit less, but closer to 3x faster in DX10. G80 is a first-generation DX10 part, and I'm sure nVidia has learned more about DX10, and about how developers are implementing it in games, since last year. With much more raw power AND DX10-specific optimizations, 3x the performance of G80 in DX10 is not hard to imagine.
The real question is whether G90 will be a single die or multiple dies. R700 is rumored to be multi-die, and some have said G90 is as well. In my opinion, if nVidia is going for a card with approximately 2-3x the raw power of G80 (if the 1 TFLOP figure is true, G90 is actually around 3x more powerful than G80 in raw horsepower), then they need multiple dies. For 3x the raw power, nVidia would need at LEAST 256 shaders, and they would still need to clock the shaders at well over 2000MHz. And they would be stuck with the same old huge, expensive 480mm^2 chip they have now (with double the execution units on a half-area process, G90 would end up approximately the same size as G80 is today). I can't see nVidia doing that.
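The "around 3x" and "well over 2000MHz" figures are easy to sanity-check with back-of-the-envelope math, assuming the commonly quoted G80 shader specs (128 scalar shaders at 1.35GHz, 2 flops per clock from the MADD unit, ~346 GFLOPs):

```python
# Rough shader FLOPs math. Assumed specs: G80 = 128 scalar shaders
# at 1.35 GHz, counting 2 flops/clock (MADD) per shader.
def gflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * clock_ghz * flops_per_clock

g80 = gflops(128, 1.35)
print(f"G80: {g80:.1f} GFLOPs")               # ~345.6 GFLOPs
print(f"1 TFLOP / G80 = {1000 / g80:.2f}x")   # ~2.9x

# Shader clock a hypothetical 256-shader part would need for 1 TFLOP:
clock_needed_ghz = 1000 / (256 * 2)
print(f"256 shaders need ~{clock_needed_ghz * 1000:.0f} MHz")  # ~1953 MHz
```

So 1 TFLOP really is about 2.9x G80's raw shader throughput, and a 256-shader single-die part would need a shader clock near 2GHz to get there, which matches the argument above.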
However, if you have two dies with, say, 160 shaders on each, then suddenly it becomes a lot more realistic. You're dealing with two dies, of course, but each one is smaller and will yield much better.
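The yield argument can be sketched with the standard Poisson defect model (yield ≈ e^(-D0·A)). The defect density below is an assumed, illustrative number, and the 240mm^2 die is hypothetical, but the shape of the result holds for any D0:

```python
import math

# Simple Poisson defect-yield model: yield = exp(-D0 * area).
# D0 (defects per cm^2) is an assumed value for illustration only;
# real foundry defect densities were not public.
def poisson_yield(area_mm2, d0_per_cm2=0.5):
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

big   = poisson_yield(480)  # one 480 mm^2 die, G80-sized
small = poisson_yield(240)  # one of two hypothetical 240 mm^2 dies

# Effective silicon spent per GOOD die (area / yield), in mm^2.
# A two-die card needs two good small dies, hence the factor of 2.
cost_big   = 480 / big
cost_small = 2 * (240 / small)

print(f"480 mm^2 die yield: {big:.1%}")
print(f"240 mm^2 die yield: {small:.1%}")
print(f"silicon per good single-die card: {cost_big:.0f} mm^2")
print(f"silicon per good two-die card:    {cost_small:.0f} mm^2")
```

The point is that a defect on a big die scraps all 480mm^2, while on a split design it only scraps half that, so the silicon cost per working card drops sharply even though you need two good dies.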