Originally posted by: taltamir
1. No, any PhysX game out today, and any in development, will work with PhysX on CUDA as if a dedicated PhysX card were present, with no changes needed to the game. And the drivers enabling it are supposed to be released the same day as the G200.
2. Much less a conspiracy and more of an oversight... In Crysis the SPs DO matter a whole lot, and it was simply assumed that more and more games would be extremely shader-heavy. But most aren't.
3. Yeah, that is the point. It is neutered in such a way that it comes very close in low-end tasks but falls behind in high-end tasks, making it a much more desirable part while maintaining the 280's supremacy (most high-end users would get a GTX instead of an Ultra; I doubt any high-end user will get a 260).
1. Again, emphasizing CUDA PhysX support as a significant reason for more SPs is about as relevant to gamers as news of an NV Folding@Home client. We'll see about the driver support, not that it really matters, as it'll probably be limited to GT200 parts and the 5 games that support PhysX.
2. Well, I disagree here. I think it is a corporate initiative of Nvidia's to pressure reviewers not to disclose and publish information beyond the specs and default settings of a given part. I've read enough reviews that show how high a part overclocks, or give a brief comparison of stock vs. overclocked performance, but rarely will you find an in-depth comparison between parts with detailed clock speed comparisons. There have also been rumors of NV pressuring AIB partners to cease production of OC parts (which also typically launch later to avoid direct comparison to stock parts), most notably lately with the GTS 512MB. Not surprisingly, NV launched their own OC'd GTS 512MB a few months later but named it the 9800GTX.
I don't fault NV for doing it; they're obviously in the business of selling video cards, so the more perceived differences the better for their bottom line. I just don't like how reviewers and hardware sites that used to revel in making such comparisons suddenly seem indifferent to, or even intimidated out of, doing them. Just as it's bad for NV or ATI to miss a product cycle, it'd be equally damaging for a review site to miss a product launch because they got cut off for breaking NDA or corporate guidance.
3. Again, I disagree here. GTX 260 sees much smaller reductions in clock speed and core components than G80 GTS did. Without getting into all the details, G80 GTS saw closer to a 20% reduction across the core, with ~1/5th fewer ROPs, bandwidth, TMUs and shaders along with a ~12% difference in clock speed. GTX 260 is closer to 1/8th fewer ROPs, bandwidth, TMUs and shaders but only a ~4% difference in clock speed. Also, I'm pretty sure GTX 260 has a lower shader clock than GTX 280, which again hints that GT200 will be limited by other factors before SPs become the main bottleneck.
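For concreteness, the relative cut-downs can be sketched in a few lines of Python. The spec figures below are the commonly cited launch numbers (ROP count, memory bus width, core clock) and should be treated as assumptions worth double-checking, not official data:

```python
# Rough sketch: how much each cut-down part gives up versus its full sibling.
# Spec numbers are assumed from commonly cited launch figures.

def cut(full, reduced):
    """Fractional reduction of `reduced` relative to `full`."""
    return (full - reduced) / full

# {pair label: {component: (full-part value, cut-down-part value)}}
pairs = {
    "8800 GTX vs 8800 GTS (G80)": {
        "ROPs":           (24, 20),
        "bus width (bit)": (384, 320),
        "core clock (MHz)": (575, 500),
    },
    "GTX 280 vs GTX 260 (GT200)": {
        "ROPs":           (32, 28),
        "bus width (bit)": (512, 448),
        "core clock (MHz)": (602, 576),
    },
}

for label, specs in pairs.items():
    print(label)
    for component, (full, reduced) in specs.items():
        print(f"  {component}: {cut(full, reduced):.1%} reduction")
```

Running this shows the pattern the post describes: the G80 GTS loses a roughly similar fraction of units and clock speed, while the GTX 260 loses 1/8th of its ROPs and bus width but only ~4% of its core clock.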