Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: Cookie Monster
Then there is the problem of how many chips you can produce per wafer, and these don't come cheap either. In this case, nVIDIA still requires ~2 wafers to match what AMD produces per 1 wafer, assuming yields are equal.
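For reference, the "~2 wafers per 1" figure roughly checks out with the usual back-of-the-envelope dies-per-wafer approximation. A minimal sketch (C++, also valid as CUDA host code), assuming ~490mm2 for GT200b and ~256mm2 for RV770 (approximate figures, not from the quote) and equal yields, as stated above:

#include <cmath>
#include <cstdio>

// Back-of-the-envelope dies-per-wafer estimate:
//   dies ~= (pi * r^2) / A  -  (pi * d) / sqrt(2 * A)
// where d is the wafer diameter and A the die area; the second term
// discounts partial dies lost around the wafer edge.
static double diesPerWafer(double waferDiameterMm, double dieAreaMm2) {
    const double kPi = 3.14159265358979;
    double r = waferDiameterMm / 2.0;
    return (kPi * r * r) / dieAreaMm2
         - (kPi * waferDiameterMm) / std::sqrt(2.0 * dieAreaMm2);
}

int main() {
    const double wafer  = 300.0;  // 300 mm wafer
    const double gt200b = 490.0;  // ~490 mm2, the GT200b figure used later in this thread
    const double rv770  = 256.0;  // ~256 mm2, AMD's RV770 (assumed here for comparison)

    double nv  = diesPerWafer(wafer, gt200b);   // ~114 candidates
    double amd = diesPerWafer(wafer, rv770);    // ~234 candidates
    printf("GT200b dies per wafer: ~%.0f\n", nv);
    printf("RV770 dies per wafer:  ~%.0f\n", amd);
    printf("Ratio: ~%.1fx\n", amd / nv);        // ~2x, assuming equal yields
    return 0;
}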
Originally posted by: dreddfunk
JL - As keys says, that's only looking at it one way: GPU performance. Would the GT200 die size be smaller without GPGPU stuff? Probably. The question is: are they getting enough extra GPGPU sales to justify the larger die size? We can't really know.
Probably not, but the GPGPU capabilities on GT200 may well be an experiment by nVIDIA (Tesla comes to mind as well): a calculated move so that they can truly optimize their next-generation architecture for GPGPU apps. The transistors spent on GPGPU probably aren't the only factor in the resulting large die size of GT200, so it's hard to point at a single part of the GPU and say this is what resulted in the large die size.
Agreed. There probably isn't an "area" of the GPU dedicated to GPGPU ops. Instead, it's the entire complex shader design that contributes to the overall die size.
I wonder why IDC isn't shooting holes in this argument with his peewee comment.
I do not think Nvidia 'meant' G80 (and GT200 for that matter, as they are mighty similar) to be 'good' at GPGPU ops. It just is, because GPUs simply are massively parallel processing units. Besides gaming performance, Nvidia probably realised its GPUs had untapped potential, and they unlocked that potential by writing CUDA for it (see the sketch after this post). And it surely is no experiment; you don't experiment by producing millions of a GPU.
They design it, and KNOW what it can do in terms of GPGPU ops long before it is released onto the market.
Look at the die size of a G92 and look at how many shaders it has. Now look at GT200(b) and how many shaders it has. Its die size correlates directly with the number of shaders (and the TMUs/ROPs to go with them). Knowing that G80, and thus G92, are also very good at GPGPU ops but simply have fewer shaders, means Nvidia meant to build a massive GPU, good at GPGPU ops, more than 3 years ago, before it even acquired Ageia. That's what you are saying.
Now, I dare say that if AMD were to invest as much R&D into ATI Stream, its GPUs would deliver similar performance to Nvidia's. Not because they are such great GPUs, but because it's inherent to GPUs that can run games through the DirectX API, like Nvidia's and ATI's do (which requires massive parallel processing power).
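On the "unlocked that potential by writing CUDA for it" point: here is a minimal, illustrative sketch of what that looks like from the programmer's side. The kernel name and sizes are made up for illustration, but the pattern (thousands of threads spread across the GPU's shader ALUs) is the point:

#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA kernel: each thread scales one array element.
// The same shader ALUs that draw game frames execute these threads.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                       // ~1 million elements
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;   // enough blocks to cover all n elements
    scale<<<blocks, threads>>>(d, 2.0f, n);      // thousands of threads in flight at once
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}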
Those two comments directly contradict one another. So which is it? They didn't mean for it to be good at GPGPU and it just ended up that way? Or did they KNOW what they had long before it was released onto the market?
Die size? G92 is 230mm2 at 55nm. GT200 is 490mm2 at 55nm. Even if you doubled everything the G92 has (128 shaders to 256, the 256-bit memory interface to 512-bit, 16 ROPs to 32), you'd still only end up with 460mm2, and that includes redundancy transistors that probably wouldn't be needed if you were just adding shaders, memory controllers, and ROPs. The die size does NOT correlate directly. AND you forget the external NVIO chip present on GT200 cards, display logic that G92 had brought ON DIE when it moved on from G80.
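To make that arithmetic explicit (using the post's own figures, which are approximate, and remembering GT200's ~490mm2 excludes the external NVIO display logic that G92's ~230mm2 includes on-die):

#include <cstdio>

int main() {
    const double g92b   = 230.0;    // ~230 mm2, G92(b) at 55nm (figure from the post above)
    const double gt200b = 490.0;    // ~490 mm2, GT200(b) at 55nm (figure from the post above)

    double doubled = 2.0 * g92b;    // naive "double everything in G92" estimate
    double extra   = 100.0 * (gt200b - doubled) / doubled;
    printf("Doubled G92 estimate: %.0f mm2\n", doubled);                  // 460 mm2
    printf("Actual GT200: %.0f mm2 (~%.0f%% larger)\n", gt200b, extra);   // ~7% larger
    return 0;
}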
Stream: if ATI could have done it, they would have. Heck, they are working on it now, with little to no success if you compare it to what CUDA is now.
ATI's architecture is excellent for gaming, as they have shown. But that is where the excellence ends. G80 through GT200 are just all-around technologically more advanced, as is evident from what they're capable of. To deny this would simply be a farce.
As for whether ATI's architecture is truly more advanced, well, we'll never know, because nobody wants to code for it. You have to provide the tools as well as the hardware.
You're not thinking things through, Marc.
It only contradicts in the way you read it, Keys. They meant it to be good at gaming, and as a bonus, it's also good at GPGPU ops. But that's because GPUs in general are good at some things CPUs aren't very good at, because of their parallel structure. CUDA unlocks that potential.
CUDA takes a LOT of time and money to produce, though. That's something Nvidia chose to do. AMD/ATI chose not to, maybe because they can't (I doubt it; some apps can do pretty impressive things with ATI GPUs in terms of GPGPU computing), or maybe because they don't have the money for it, or don't think it's important enough right now.
That's speculation, but it's just as plausible as your claim that this is what Nvidia wants, and that it is why GT200 ended up so big (which it didn't, if you compare it to G92 and the amount of raw fps it can spit out).
And 460mm2 vs 487mm2 isn't such a big difference, lol. At 460mm2, GT200 still would have been considered big. Adding redundancy is also a bad argument: G92 had the same thing; some GPUs ended up in an 8800 GTS 512, some in the 9800 GT. Some GT200s ended up in the GTX 285, some in the GTX 260.
Really, you keep reiterating that GT200 is so big because of its IMMENSE GPGPU capabilities. But you don't know. Neither do I. Yet you keep using it as an argument, which imo isn't a very strong one.