apoppin - I sincerely doubt that we have many 'bitter AMD fans' or 'bitter' nVidia fans on this forum. What we do have are a lot of 'bitter' people who feel that they have been treated with little or no respect when posting on this board. I can't count how many times I've seen arguments get personal here, and they shouldn't. We are discussing ideas, not our children. I don't immediately become worthless if I get an idea wrong. Nor do I immediately become 'king of the hill' if I get one right.
Let's not mistake bitterness over rude behavior for bitterness over how a particular company performs.
Similarly, I'd be hard pressed to name many posters that I thought were 'fanboys' in the sense that everyone seems to use the word: mindless followers of a particular company at the expense of all reason and reliable thought. There just aren't many on AT who are actually like that. Most folks seem to lean towards one company or another, probably because of past business dealings, but very few folks fall into the 'fanboy' category.
The real purpose for using the term 'fanboy' is to be able to utterly disregard any and all points made by someone with whom you disagree. After all, they are simply 'mindless followers of a particular company at the expense of all reason and reliable thought'. You can't trust what *those* people say.
You know, I wasn't around when it was revealed that 'viral marketing' took place on these boards. Between that revelation and the already existing belief in die-hard 'fanboyism', there isn't much chance that relaxed but insightful discussions about the GPU industry will be common on these boards, and that's too bad.
In all probability, there are only a few (a very few) posters who honestly fit the mold of a 'fanboy' or who consciously are espousing FUD. The rest are just giving their opinion about something. If that opinion doesn't make sense to someone, by all means offer a different one, but there is no need to deride anyone in the process. Calling someone a 'fanboy' and responding to them in a rude fashion is what creates the bitterness.
On topic - if this is an OEM XT that would explain some, not all, failings of the card. If the architecture of the XTX is precisely the same, then, given that they pushed the XT's clocks and got mediocre results, I'd be surprised if we see much improvement later.
Here are my thoughts (thoughts, not baby children) on the status of things right now.
The current released HD2900XT architecture is what was expected, with the XTX to include more (and higher-rated) memory as well as a bump in clockspeed. I looked at the 2900XT overclocking in detail. The card does seem to need a balance between memory and core overclocks to perform efficiently. For whatever reason, in the released benchmarks, the 2900XT performed best when the ratio of core to memory overclock was a little more than 1:2 (almost twice as large a bump, in percentage terms, for memory as for core). For a part that on paper shouldn't be limited by its memory subsystem, this seems awfully odd, but it does suggest that some benefit may be possible in moving to GDDR4. The question is how much. I buy the argument that, with the 64 Vec5 shader system, R600's worst-case scenario is still worse. So while bumps in core and memory clocks may get us a little performance boost, I doubt we'll see that much. DX10 is still the wildcard; if it makes better use of the Vec5 system, then you might actually get much, much better scaling out of the R600 architecture.
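As a rough sketch of what that 1:2 ratio means in practice: the stock clocks and overclock targets below are illustrative assumptions (742/825 MHz are the commonly cited HD2900XT reference clocks, not figures from this thread).

```python
# Sketch of the core-vs-memory overclock ratio described above.
# Stock and overclocked figures are assumptions for illustration only.
def oc_percent(stock_mhz, oc_mhz):
    """Return the overclock as a percentage of the stock clock."""
    return 100.0 * (oc_mhz - stock_mhz) / stock_mhz

core_pct = oc_percent(742, 800)   # hypothetical core overclock
mem_pct = oc_percent(825, 950)    # hypothetical memory overclock
ratio = mem_pct / core_pct        # ~2 means memory was pushed ~twice as hard

print(f"core +{core_pct:.1f}%, memory +{mem_pct:.1f}%, ratio 1:{ratio:.2f}")
# prints: core +7.8%, memory +15.2%, ratio 1:1.94
```

With those made-up numbers, the memory bump is roughly double the core bump in percentage terms, which is the ballpark the benchmarks seemed to favor.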
Second, my guess is that these current parts would have spawned the XTX (at naturally higher core and memory frequencies) had the performance been there. Indeed, that's what everyone has been assuming: both an XT and an XTX part on this process node and with this architecture. But the performance (thus far) just isn't there. It isn't. I'm beginning to wonder if the 'surprise' is something that has already been discussed ad nauseam on these boards: AMD going to 65nm. Perhaps AMD is frantically working on the refresh at 65nm and plans to dub that part the 'XTX'. Still, they've gone down the Vec5 path; ditching it in their next part, only to have it turn out better for DX10, wouldn't make much sense. Doubling the Vec5 shaders to 128 also seems like it would cost too much money. So I'm a little cautious about what kind of 'surprise' AMD could have. Still, nVidia went from the 7900GTX to the 8800GTX in one generation, and ATI had the 9700Pro. So it's possible; it just doesn't seem likely.
At any rate, I don't think any of this is good for AMD at the moment. They seem to have made a bad judgment in R600's design. Best case, it was a bad judgment relative to the anticipated release of DX10 games. Worst case, it was just a bad judgment, period. Middle-of-the-road case (as some have mentioned): AMD did this deliberately to be able to leverage the Vec5 system in markets other than gaming (drop-in lab supercomputing).
[shameless bad joke]
I'm sure I overheard Colonel Kilgore the other day saying, "If I say it's safe to make a Vec5 GPU, then it's safe to make a Vec5 GPU!"
[/shameless bad joke]
AMD is a little like the surfer on the beach in Apocalypse Now.
[edited for Vec5 mistake]