This little question has been tugging at my mind for a while. From what I've read, the GeForce's GPU is supposed to be programmable like an Intel or AMD CPU. But what does that mean from a programmer's point of view? Intel and AMD CPUs share the x86 instruction set, which makes them both compatible with most software. So how does it work when you want to "program" the GPU of a video card? Does it have its own instruction set that each game has to be compiled for?

The way I thought it worked traditionally is this: the game wants to draw a line, so it calls the DirectX function to draw a line; DirectX tells the video driver to draw a line, and the video driver sends it along to the right place in hardware. But say you want each pixel in that line to sing and dance like NVIDIA says it can. What happens then? I'm guessing you write the code that changes each pixel using DirectX 8, DirectX tells the driver to do it, and the driver either says "No, I can't do this, send it to the CPU" or "Yes, I'm a GeForce3 or Radeon2, I can do it." The driver would then be responsible for taking the DirectX 8 instructions and translating them into its own GPU instruction set. At least, that's how I hope it works. Or are game developers going to have to choose between supporting GeForce3, supporting Radeon2, or taking on the increased cost and production time of supporting both?
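To make the flow I'm imagining concrete, here is a rough sketch in Python. None of these class or method names are real DirectX or driver APIs; they just model the layering described above, where the driver either translates a standard DirectX 8 shader into the card's own instruction set or reports that it can't, so the work falls back to the CPU:

```python
# Illustrative model only: a "driver" that either accepts a standard
# DirectX 8 pixel shader and translates it to its own native format,
# or declines so the work falls back to the CPU.

class Driver:
    def __init__(self, gpu_name, supports_pixel_shaders):
        self.gpu_name = gpu_name
        self.supports_pixel_shaders = supports_pixel_shaders

    def run_pixel_shader(self, dx8_shader):
        if not self.supports_pixel_shaders:
            # Driver says "No, I can't do this" -- the runtime (or the
            # game) has to fall back to the CPU or a simpler path.
            return ("CPU fallback", dx8_shader)
        # Driver says "Yes, I can do it" and translates the standard
        # DirectX 8 instructions into its own GPU instruction set
        # (here just tagged with the GPU's name for illustration).
        native = [f"{self.gpu_name}:{op}" for op in dx8_shader]
        return ("GPU", native)

geforce3 = Driver("GeForce3", True)
old_card = Driver("TNT2", False)

shader = ["mov r0, v0", "mul r0, r0, c0"]
print(geforce3.run_pixel_shader(shader)[0])  # GPU
print(old_card.run_pixel_shader(shader)[0])  # CPU fallback
```

The point of the sketch: the game only ever speaks DirectX 8, and the per-card translation lives entirely inside each vendor's driver, so one shader would work on both a GeForce3 and a Radeon2 without the developer writing card-specific code.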
Thanks for any insight any of you have.
Kishkumen