Originally posted by: josh6079
Yes, indeed, it is. But I was talking about CUDA and OpenCL.

And I was talking about PhysX and Havok, based on Godfrey Cheng's statements:

- AMD says PhysX will die
"There is no plan for closed and proprietary standards like PhysX," said Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."

And his comment from a few days ago doesn't absolve AMD of statements made over the last 9 months that were clearly deceptive and contradictory; it just offers clarity going forward.

Originally posted by: josh6079
I'm confused. Dave Baumann clarified that they'd be happy to use either Havok or PhysX. So long as it's not through CUDA, they're fine. The OP showed Havok running on Radeons through OpenCL. Had PhysX been available through OpenCL, why wouldn't they have used it?

I think the problem is that Godfrey wasn't keeping tabs on what he was saying to the press, given he turns right around and claims proprietary standards like Havok and DirectX 11 are somehow superior.

Originally posted by: josh6079
Personally, I'm not sure. Cheng, who has more expertise and a knowledge of PC history than I, commented on whether or not it was a better alternative:
To summarize, it seems some believe that "proprietary interfaces", like CUDA, will be eclipsed by "collaborative industry interfaces" like OpenCL.
Seeing as there is a historical basis for such an assumption (e.g., his comments concerning S3 MeTaL, 3dfx Glide, and Cg), it seems valid to say that resources would be better spent on "collaborative industry interfaces" like OpenCL than on CUDA in the short term.
To what degree this has really "hurt" GPU-accelerated physics advancements is controversial.
Originally posted by: josh6079
If nVidia had released PhysX, and PhysX alone, to ATi, could ATi have used their Stream instead?

Most likely not, unless they wanted to purchase the source code and recompile it for their own Stream runtime as well as write their own low-level driver API. The much easier route would've been to use CUDA's low-level driver API, which is essentially what they ended up doing anyway by writing an OpenCL driver API.

Originally posted by: josh6079
So nVidia didn't force CUDA on ATi? They left PhysX and CUDA mutually exclusive and free to take one or the other?

Of course they're mutually exclusive... again, how do you think PhysX was running on x86, PS3, Xbox 360, Wii, and everything else even before CUDA existed? PhysX is not tied to CUDA in any way, shape, or form. If AMD wanted to use PhysX and not CUDA, they could've paid $50k or whatever it is for the source code and recompiled it to run with their own Stream/Brook+ driver API, or whatever they were using at the time.
Originally posted by: josh6079
OpenCL is not a "copy" of CUDA, though. From a developer's standpoint they are different, and it depends on what they want.

Actually, that slide shows pretty clearly how similar they are. OpenCL essentially takes the place of the low-level CUDA Driver API, with C for CUDA being the high-level runtime API. It also looks as if the low-level CUDA/OpenCL Driver API will only use a different compiler, with most of the underlying C code remaining the same, probably with different headers (the ones listed on the OpenCL site).
CUDA Hierarchy Diagram
Tim Murray @ Nvidia Developer Forums
OpenCL API Registry (Spec .pdf and Header file)
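The near-identity of the kernel-side code described above can be illustrated with a sketch. This vector-add example is mine, not from the thread, and the kernel names are made up; the body of the computation is the same C in both dialects, and only the qualifiers, the index lookup, and the compiler differ.

```cuda
// C for CUDA: compiled offline by nvcc, launched through the CUDA
// runtime or driver API.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

/* The equivalent OpenCL C kernel, compiled at runtime by each vendor's
   OpenCL driver:

   __kernel void vec_add(__global const float *a,
                         __global const float *b,
                         __global float *c,
                         int n)
   {
       int i = get_global_id(0);   // same role as the blockIdx/threadIdx math
       if (i < n)
           c[i] = a[i] + b[i];
   }
*/
```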
Originally posted by: josh6079
I'm not really saying it's anyone's fault, because I don't know what exactly was hurt. Processing physics on processors other than the CPU has been around since Ageia's PPU, but physics themselves haven't really done anything spectacular in light of that.

Again, look at the installed user base and it should become obvious why they didn't do anything spectacular in that time frame. Ageia's PPU sold ~100-200k units total, all-time. Nvidia increased that number exponentially overnight to 70 million when they released their CUDA PhysX driver, and it has since grown to 100+ million.
Originally posted by: josh6079
This addresses some of the questions I've personally had regarding physics in general. Do all physics really need a GPU to calculate them? One as powerful as my 9800 GTX+ or an 8800 GT? Why haven't quad-core processors become almost a necessity in PC gaming yet? Why have there been so few improvements to physics, regardless of what is processing them, over the past 5 years or so?

It simply comes down to hardware limitations. GPUs excel at highly parallel instructions and floating-point math, areas that have historically been weaknesses on the CPU. As a comparison, the world's fastest desktop CPU, the Core i7 965, is capable of 70 Gflops. GT200 is capable of 933 Gflops. RV770 is capable of 1.2 Tflops. These aren't just paper specs either; these performance gains have been realized in just about every application suited to highly parallel computing (F@H, video transcoding, physics, etc.).

Originally posted by: josh6079
The GDC showed some pretty cool stuff. I really liked the cloth demo Ben linked earlier. But why has it taken so long to even get here?

As answered above, CPUs simply aren't capable of adequately handling the required calculations.
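To make the "highly parallel" point concrete, here is a toy sketch of my own (not from the thread, names hypothetical) of the kind of per-body update physics middleware hands to a GPU. Every particle's step is independent of the others, so the work maps onto thousands of concurrent threads.

```cuda
// Hypothetical toy integrator: one GPU thread per particle.
__global__ void integrate(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                 // guard threads past the last particle
    vel[i].y -= 9.81f * dt;             // apply gravity
    pos[i].x += vel[i].x * dt;          // explicit Euler step,
    pos[i].y += vel[i].y * dt;          // independent for each body
    pos[i].z += vel[i].z * dt;
}
```

A quad-core CPU works through these n updates a few at a time across its cores and SIMD lanes; a GT200- or RV770-class GPU keeps tens of thousands of such threads in flight, which is where the Gflops gap comes from.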
Originally posted by: josh6079
What is processing them isn't the answer, because that's varied among CPUs, PPUs, and GPUs alike recently. Couple that with the fact that the G80 and its derivatives dominated the PC gaming market for so long, and I don't see any reason why physics are still where they are.

I find all of what you've written here extremely ironic, given you don't see anything wrong with AMD's actions and press releases over the last 8-9 months. In any case, the answer to why we haven't seen it sooner is obvious: the middleware and APIs didn't support GPUs until 9 months ago, when Nvidia changed the landscape of physics acceleration overnight. And if they had waited another 6 months for OpenCL instead of using CUDA, we'd see even fewer results than we have now.
Originally posted by: josh6079
Oh, I know it's similar. I just know that it's not identical, and where there's differences there can be reasons for choices. ATi made theirs for reasons that lie within their differences.

Actually, it does look like they're nearly identical, meaning the underlying C code is the same; they just use a different compiler and header file (again, see previous links).