evolucion8
Platinum Member
- Jun 17, 2005
Originally posted by: MarcVenice
The point being that GT200 ended up so big because, by your reasoning, its GPGPU capabilities are taking up extra die space. And I'm saying that's incorrect. You forget easily, it seems.
Don't stress yourself, he's like Wreckage: nVidia = the Gods of videocards, ATi = teh suck. No reasoning, no understanding, no impartiality; I bet their neurons have the nVidia logo inside.
Since CUDA and OpenCL are tailored to the GPU's stream processors, the only things really needed at the GPU level, besides flexibility in register management, are caches to keep the data flowing through the stream processors, especially when the code isn't very parallel (branchy code with jumps, hops and subroutines, or general-purpose code), and those caches take up very little die space (256 KB in total). ATi did the same: a GPU's stream processors are massively parallel, and sacrificing that for general-purpose GPGPU performance would also hurt performance in games, so leave that kind of work to the CPUs. So, like you stated, I find it doubtful that nVidia's huge die size comes from the GPGPU capabilities; look at the GT200 diagram and you'll see that its stream processors are identical in complexity and size to those in the G92 GPU.
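To see why branchy code is a bad fit for stream processors, here's a toy Python model of SIMT execution (names like `warp_cycles` are my own, and the fixed costs are illustrative, not real hardware timings; the 32-lane warp width matches nVidia's actual hardware). When the lanes of a warp take different sides of a branch, the hardware runs both paths back to back with inactive lanes masked off, so the costs add instead of overlapping:

```python
# Toy model of SIMT branch divergence on a 32-lane warp.
# Assumption: each branch path costs a fixed number of "cycles";
# real hardware timing is far more complicated.

WARP_SIZE = 32

def warp_cycles(predicates, then_cost=1, else_cost=1):
    """Cycles for one warp executing `if (p) { A } else { B }`.

    If any lane takes the `then` path, the whole warp executes A
    (non-taking lanes masked off); likewise for the `else` path.
    Divergent warps therefore pay for BOTH paths.
    """
    assert len(predicates) == WARP_SIZE
    any_then = any(predicates)              # at least one lane takes A
    any_else = not all(predicates)          # at least one lane takes B
    return (then_cost if any_then else 0) + (else_cost if any_else else 0)

uniform = [True] * WARP_SIZE                     # every lane agrees
divergent = [i % 2 == 0 for i in range(WARP_SIZE)]  # lanes split 50/50

print(warp_cycles(uniform))    # only the taken path runs
print(warp_cycles(divergent))  # both paths run, serialized
```

The uniform warp pays for one path, the divergent warp for two, which is exactly why caches and registers that keep divergent, general-purpose code fed matter more than beefing up the stream processors themselves.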
Both companies are necessary to avoid a monopoly and to get better pricing, more innovative technologies and more choices. The 8800/9800 series of cards is what made ATi wake up and release the HD 4800 series, which is why the GTX series dropped to almost competitive prices; heck, now you can get a nice GTX 260+ for less than $190 when it was released at more than $400. Now, with the next generation of cards like the GT300, we'll just have to wait and see how this turns out.