Originally posted by: dreddfunk
Guys, I think this is all a mountain out of a molehill. I simply think what Marc is trying to say is that he feels:
1) G80 wasn't initially designed with GPGPU in mind.
2) That, despite how soon CUDA launched after G80, the G80 development cycle was long enough that CUDA could plausibly have been an after-the-fact decision on NV's part, made once the hardware design was already done
3) That any GPU architecture has the potential to make a good GPGPU, if the company throws enough weight behind developing the API, etc., for such applications.
4) That NV decided to throw enough weight behind their GPU's GPGPU potential to create CUDA, and to make a stronger move than AMD after the GPGPU market (which looks like a good marketing move now, so he's complimenting NV here).
I just don't see this as a slight to NV at all--nor does it make ATI look like a savant. He's not saying that CUDA is an 'accident' or 'lucky'. I think he's merely trying to point out that NV may not have had to do very much (if anything) on the hardware side to make G80/GT200 good GPGPUs.
I can't evaluate that statement (or Marc's position), as I'm no engineer. If there's a credible GPU engineer here, perhaps they could explain just what hardware differences GPGPU requires, or whether it's merely a matter of building the proper software to access the hardware. I admit to being confused. Some seem to be saying that NV had to consider GPGPU a lot when designing G80/GT200, and that it impacted the transistor count and die size in some way, but we're short on verifiable information--or even truly knowledgeable speculation.
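For what it's worth, the "software side" here is fairly concrete. A minimal CUDA sketch (mine, not from Marc--names are illustrative) shows what the API actually exposes: the programmer writes a C-style function, and the same shader ALUs that would run pixel work execute it across many threads. Whether the hardware needed extra transistors to support this well is exactly the open question.

```cuda
// Illustrative CUDA kernel: y = a*x + y across n elements.
// Each GPU thread handles one element; the GPU schedules
// thousands of these threads onto its existing shader cores.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Host-side launch (illustrative): 256 threads per block,
// enough blocks to cover all n elements.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```

Note there's nothing graphics-specific in the source--no textures, no pixels--which is presumably why the "it's mostly software" reading of Marc's point seems plausible to me.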
Honestly, however, I think people are looking for things to interpret as slights in Marc's comments.
IDC - it's really not like your comparisons of memory or CPUs. What he's saying is more like this: if we design a truck to haul lumber, with a large, flat bed behind the cab, it would be no surprise that it's also good at hauling bricks. After all, it's good at hauling anything that fits into a large, flat bed.