ATI Havok GPU physics apparently not as dead as we thought

Page 5

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
145
106
www.neftastic.com
Originally posted by: DaveBaumann
Originally posted by: chizow
CUDA is simply the C-based programming language Nvidia created in the absence of a suitable alternative.
No it's not. "C for CUDA" is that. CUDA is a software stack that allows access to their hardware. The CUDA Driver model is more low level and more commonly used - this is specific to their hardware and likely the way PhysX gets to NVIDIA's hardware.

Originally posted by: chizow
Sweet, Dave Baumann folks, product manager for AMD's GPG division, maybe we'll get some straight answers now.

Oh sweet, chizow's vast knowledge and nvidiot righteousness getting owned in one simple statement. /lawnchair & /popcorn

Originally posted by: chizow
Not to mention they've made all required tools for AMD to make their parts compatible with PhysX long before OpenCL was ratified.

Let me help: Those are hardly the tools required to port CUDA/PhysX to another platform. There are also certain things called licenses involved which complicate the matter even more. If it were freely implementable, why would NVIDIA just have made the grand announcement that they've licensed PhysX out for the PS3? Yeah, there's a lot more than you think going on behind the scenes. Or you do know, but you're too busy trying to market NVIDIA as the next messiah or something to be bothered with that point.

Originally posted by: chizow
Now for what-ifs. Now that ATI has demonstrated a GPU accelerated OpenCL Havok client based on an industry standard, when will we see it in production? What will ATI's response be if Nvidia also supports hardware accelerated Havok? If Nvidia supports Havok on their GPUs, will ATI support PhysX on their GPUs? Any answers would be appreciated thanks.

Let me help you out with those questions... what purpose would it serve to tell you? Other than to provide that information to a competitor and lose any sort of market advantage it would provide?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: SunnyD
Oh sweet, chizow's vast knowledge and nvidiot righteousness getting owned in one simple statement. /lawnchair & /popcorn
LOL right, given it comes down to semantics about a brand description he's not even sure about. But ya, CUDA is Nvidia's complete top-to-bottom compute architecture solution and includes C for CUDA, high- and low-level APIs, and the hardware interface driver (sketched below), all included within the SDK. So CUDA is in fact the programming language and more.

What is CUDA
  • Standard C language for parallel application development on the GPU
  • Standard numerical libraries for FFT (Fast Fourier Transform) and BLAS (Basic Linear Algebra Subroutines)
  • Dedicated CUDA driver for computing with fast data transfer path between GPU and CPU
  • CUDA driver interoperates with OpenGL and DirectX graphics drivers
  • Support for Linux 32/64-bit and Windows XP 32/64-bit operating systems
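
For reference, here's a minimal sketch (assuming only the stock cuda.h driver API that ships with the CUDA toolkit, nothing PhysX-specific) of what that low-level driver interface looks like in practice:

/* Minimal sketch of the low-level CUDA Driver API layer.
   Assumes only the stock cuda.h and driver library from the CUDA toolkit;
   purely illustrative, nothing here is PhysX-specific. */
#include <cuda.h>
#include <stdio.h>

int main(void)
{
    int count = 0;

    if (cuInit(0) != CUDA_SUCCESS) {       /* initialize the driver API */
        printf("No CUDA-capable driver found\n");
        return 1;
    }
    cuDeviceGetCount(&count);

    for (int i = 0; i < count; i++) {
        CUdevice dev;
        char name[256];

        cuDeviceGet(&dev, i);              /* device handles are NVIDIA-only */
        cuDeviceGetName(name, sizeof(name), dev);
        printf("CUDA device %d: %s\n", i, name);
    }
    return 0;
}

C for CUDA sits a level above this: kernels written in that C dialect are compiled by nvcc and launched through either the runtime API or this driver layer.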
Originally posted by: SunnyD
Let me help: Those are hardly the tools required to port CUDA/PhysX to another platform. There are also certain things called licenses involved which complicate the matter even more. If it were freely implementable, why would NVIDIA just have made the grand announcement that they've licensed PhysX out for the PS3? Yeah, there's a lot more than you think going on behind the scenes. Or you do know, but you're too busy trying to market NVIDIA as the next messiah or something to be bothered with that point.
Really? And you know this how? Did you try? Do you know if AMD tried? Did Eran Badit at NGOHQ have some tools we're not aware of?

Licensing is surely a consideration, but I doubt it prevented AMD from supporting PhysX, given AMD managed to squeeze a Havok license out of Intel. And we all know Intel doesn't give away those licenses freely. Coupled with the reports that Nvidia was more than willing to reciprocate PhysX licensing and support, I highly doubt licensing was the problem here. But neither you nor I know, which is why I asked Dave.

Originally posted by: SunnyD
Let me help you out with those questions... what purpose would it serve to tell you? Other than to provide that information to a competitor and lose any sort of market advantage it would provide?
Simple, answering those questions would allow all the AMD fans out there, like yourself, to know what to expect from their product in the future instead of having to resort to constantly bashing PhysX while promoting vapourware in the same breath. Giving a production date and realistic expectations would certainly remove that vapourware status. Given it's based on OpenCL, it's also exciting for Nvidia owners, as we can expect hardware physics support from both PhysX and Havok. :thumbsup:

Oh and responses from you are a poor substitute for a response from someone who actually has answers, like Dave.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: chizow

Certainly more available than Havok, which won't even allow you to demo without authorization, or OpenCL, which again was only ratified a few months ago and only available to registered members.


Anyone is free to go to the Havok site and download it.
http://software.intel.com/sites/havok/

There is no authorization; they are just taking your name and email so they can send you updates and news. Enter junk if you like.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: chizow

I don't think that's possible, as Havok isn't scalable beyond what's already coded into the game and limited by whatever physics/fidelity settings its shipped with. Meaning, just because I upgrade my CPU (or to a GPU-accelerated Havok library in the future), that doesn't mean Havok effects are going to scale dynamically to take advantage of that extra processing power. If anything, the existing effects will run faster or allow for additional effects up to the setting limits it shipped with. Retrofitting those effects would require additional content creation or at the very least a patch to increase its existing physics parameters. Manual editing of .ini and .dll files might work too, but I'm not overly optimistic.

Unless you are a software engineer, all that you're saying is pure speculation. The same thing could be said of DirectSound and EAX, which don't work in Vista because of the vast difference in Vista's driver model, and yet they now work by translating the calls. So if there are DLLs to mess around with, there's a possibility of using the same concept to translate the calls and accelerate them in hardware, like Creative did with Alchemy, which translated DirectSound calls into OpenAL and got full hardware acceleration with all the same constraints and fidelity as originally made.
 

dreddfunk

Senior member
Jun 30, 2005
358
0
0
Personally, I don't like the idea of the physics industry standard being in the hands of any major hardware player.

Yet, it sounds to my un-technical ears that some want AMD to make a Faustian choice: either admit that they're refusing to implement PhysX out of spite (and lose hardware sales as a result), or implement CUDA/PhysX and make two critical pieces of their competitor's IP into industry standards.

I'll reverse the question posed: if AMD were to implement GPU-accelerated PhysX, would we ever see NV implement GPU-accelerated Havok? If they did, it would fly in the face of business logic, because they would be supporting a direct competitor to their own IP.

I'll repeat, I'm not happy with either of the current proposed solutions, because it leaves the IP for physics in the hands of a major hardware vendor. Of the two hands to leave it in, however, I'd pick Intel too, if I were AMD.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: dreddfunk
Just a question, why would it make sense for AMD to adopt PhysX as a standard implementation, with PhysX in the hands of Nvidia?
That would be a terrible idea. It would be akin to everyone having to adopt Unreal Engine 3 as the standard game engine.

Physics simulators are middleware, they are not standards. All AMD needs to do is push for an open standard API (which they have, OpenCL). If someone wants to write a physics simulator, they can write it for OpenCL and it will run on AMD's cards (and NVIDIA's cards, and Intel's crap, etc).
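
To illustrate, a rough sketch (core OpenCL calls only, no vendor extensions assumed) of how the same code picks up whatever GPUs the installed drivers expose:

/* Sketch: the same core OpenCL calls pick up GPUs from any vendor
   whose driver ships an OpenCL stack (AMD, NVIDIA, Intel, ...). */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        char vendor[128];
        cl_device_id devices[8];
        cl_uint num_devices = 0;

        clGetPlatformInfo(platforms[p], CL_PLATFORM_VENDOR,
                          sizeof(vendor), vendor, NULL);
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[128];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("%s GPU: %s\n", vendor, name);
        }
    }
    return 0;
}

The physics middleware itself would sit on top of calls like these; which vendor's silicon is underneath becomes a driver detail.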
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Wavey! How the hell are you man? How's the new job going?

However, to just dismiss the open standard path we are taking is wrong.

I don't think anyone is dismissing it at all; what most of us are questioning is why not do both? nVidia is.

Also, I see in this topic the proclamation that this is "Only about GPU Physics", well, actually it's not, now.

Given that PhysX is capable of running on both the CPU and GPU, how is this different? I understand it is new to Havok, but not to physics acceleration; one of the APIs has just been way behind the curve for a while now.

However, GPU programming involves several layers of abstraction, and that's why I refer to anything using PhysX or CUDA as software, running on top of the GPU HW.

Software-D3D-Driver-Hardware. How is 3D graphics rendering different? PhysX-CUDA-Driver-Hardware. Seems like the same level of abstraction to me.

If Ati were going to support the PhysX API on their GPU's, they'd still have to do the hard part of making it work on their HW with their own drivers, so what benefit would they have from adopting NV's standard as opposed to some other API which isn't owned by a competitor?

Havok is owned by their biggest competitor - the same competitor that is currently trying to sue them out of their largest market - and OpenCL was built on nV hardware. You are talking like one of their choices is a strong ally of theirs; this is very far removed from reality. Since they chose not to enter the fray themselves, they have limited their options to their largest competitor's offering or their second largest competitor's. Big difference between them: nV has a very vested interest in seeing GPU-based acceleration as the de facto standard, while Intel has a very vested interest in making sure that never happens (Larrabee will be x86 based after all - making ideal code paths for x86 makes more sense for them).

AMD CPU's just happen to support the same instruction set as Intel CPU's.

Not if Intel wins their ongoing lawsuit.

I'll reverse the question posed: if AMD were to implement GPU-accelerated PhysX, would we ever see NV implement GPU-accelerated Havok?

GPU accelerated Havok is already running on nV hardware as much as it is on ATi at the moment, if that answers your question. Advancing gaming = good. Holding back the industry = bad.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: BenSkywalker
Software-D3D-Driver-Hardware. How is 3D graphics rendering different? PhysX-CUDA-Driver-Hardware. Seems like the same level of abstraction to me.
The difference is with graphics the GPU can do things that the CPU can't, such as rendering pixel shader effects and such. With physics, I'm talking about the concept of interactive physics, not just drawing extra debris from an explosion. In that sense, while the technical details may be similar, the overall concept is different, because not only can the same effect be computed on the GPU as on the CPU, but the end result also needs to be accessible to the CPU in a scenario with interactive physics.

In graphics you have CPU>driver>GPU>Screen. In interactive physics you'd have CPU>driver>GPU/PPU>CPU.
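
Roughly, that round trip looks like the sketch below (the "integrate" kernel is a made-up toy and error handling is omitted; core OpenCL 1.0 API only), with the readback at the end being the part interactive physics depends on:

/* Sketch of the CPU -> GPU -> CPU round trip for a physics step.
   The "integrate" kernel is a made-up toy; error handling is omitted
   for brevity. Core OpenCL 1.0 API only. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void integrate(__global float *pos,                 \n"
    "                        __global const float *vel, float dt) \n"
    "{                                                            \n"
    "    int i = get_global_id(0);                                \n"
    "    pos[i] += vel[i] * dt;                                   \n"
    "}                                                            \n";

int main(void)
{
    enum { N = 1024 };
    float pos[N], vel[N];
    for (int i = 0; i < N; i++) { pos[i] = 0.0f; vel[i] = 1.0f; }

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", NULL);

    /* CPU -> GPU: upload the simulation state */
    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(pos), pos, NULL);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof(vel), vel, NULL);
    float dt = 0.016f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &dpos);
    clSetKernelArg(k, 1, sizeof(cl_mem), &dvel);
    clSetKernelArg(k, 2, sizeof(float), &dt);

    /* GPU: run one simulation step */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* GPU -> CPU: read the results back so game logic can react to them */
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof(pos), pos, 0, NULL, NULL);
    printf("pos[0] after one step: %f\n", pos[0]);

    clReleaseMemObject(dpos); clReleaseMemObject(dvel);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}

It's that final blocking readback, where gameplay code waits on simulation results, that makes interactive physics a different problem from fire-and-forget eye candy.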


Havok is owned by their biggest competitor - the same competitor that is currently trying to sue them out of their largest market - and OpenCL was built on nV hardware. You are talking like one of their choices is a strong ally of theirs; this is very far removed from reality. Since they chose not to enter the fray themselves, they have limited their options to their largest competitor's offering or their second largest competitor's. Big difference between them: nV has a very vested interest in seeing GPU-based acceleration as the de facto standard, while Intel has a very vested interest in making sure that never happens (Larrabee will be x86 based after all - making ideal code paths for x86 makes more sense for them).
While neither Havok nor PhysX are the ideal solution for AMD/Ati, at the moment Havok seems like the lesser of the two evils, since Intel does not yet have a competing HW counterpart. If Intel already had Larrabee on the market, and was pushing Havok to leverage its HW sales, then it would be just as much of a problem for AMD/Ati as adopting Nvidia's PhysX.


 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Modelworks
Originally posted by: chizow

Certainly more available than Havok, which won't even allow you to demo without authorization, or OpenCL, which again was only ratified a few months ago and only available to registered members.


Anyone is free to go to the Havok site and download it.
http://software.intel.com/sites/havok/

There is no authorization; they are just taking your name and email so they can send you updates and news. Enter junk if you like.

Hey, your first post said you're under NDA. Well, info has been released on the new AI.

Can you talk about that now? Other good stuff is out. Care to talk about it? Come on, feed the fishes.

 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Nemesis 1
Originally posted by: Modelworks
Originally posted by: chizow

Certainly more available than Havok, which won't even allow you to demo without authorization, or OpenCL, which again was only ratified a few months ago and only available to registered members.


Anyone is free to go to the Havok site and download it.
http://software.intel.com/sites/havok/

There is no authorization; they are just taking your name and email so they can send you updates and news. Enter junk if you like.

Hey, your first post said you're under NDA. Well, info has been released on the new AI.

Can you talk about that now? Other good stuff is out. Care to talk about it? Come on, feed the fishes.

I really can't say much just yet.
I can give you a clue about what it is related to:
http://area.autodesk.com/index...detail/meet_us_at_gdc/
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky
PhysX on the GPU does require CUDA
Please stop posting false information. CUDA provides an interface for NVIDIA GPUs. It will certainly run best on NVIDIA GPUs using CUDA, but they could use OpenCL or some other API to get CUDA running. It could also run on other GPUs without CUDA. Stop posting this misinformation.

Originally posted by: munky
So you admit that Nvidia intentionally tied GPU-PhysX to CUDA.
Much in the same way that DirectX is tied to windows. Why would they not use their own software?

Originally posted by: munky

In fact, the Wii GPU is not much different from the old GameCube GPU, and there's no way you'd be running PhysX on it.

They have PhysX running on the iPhone. Your point is moot.

Originally posted by: DaveBaumann
No it's not. "C for CUDA" is that. CUDA is a software stack that allows access to their hardware. The CUDA Driver model is more low level and more commonly used - this is specific to their hardware and likely the way PhysX gets to NVIDIA's hardware.

To clarify. This does not tie PhysX to NVIDIA hardware, it just provides access to it.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
We talked about NV PhysX already.

This is about ATI Havok. I supplied 3 nice links, most of which is already known, but those 3 things should give us enough to talk about without bringing in how AMD/ATI is putting the screws to NV. All the while NV has done everything in its power to stop DX10.1 (global illumination). The hypocrisy is overwhelming.

 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: chizow
...snip...

AMD's strategy is dedicated hardware support for the Havok API on the CPU in addition to the GPU, not the PhysX API. The advantages are obvious to developers.

As for when they will support a GPU accelerated physics engine, the answer is now. Havok had a better offer than Nvidia, obviously. AMD didn't block support for PhysX either, since AMD's SDK is available for Nvidia to download and use with ATI Stream.
"People like me" are advocating things that benefit all consumers, while "people like you" are guerrilla marketing Nvidia whenever you get the chance. Havok is clearly the better solution, as it is optimized for CPU and has been for years, for everyone, with a larger toolset as well. It has broader support, with more developers using it.
The marketing used for PhysX is laughable at best, though some might bite on Nvidia trying to make PhysX sound bigger than it is:
THQ Selects NVIDIA PhysX Technology For Use In Its WorldWide Studios
http://www.nvidia.com/object/io_1229607540213.html
Sounds big, yes? What they left out was:
"As a part of our long-standing partnership with Havok, nine out of our ten internal studios, including Relic, Rainbow and Volition, are actively using Havok Physics and other Havok products in development today," said Roy Tessler, THQ's senior vice president, production and worldwide studios.
http://www.havok.com/content/view/680/53/
This makes me laugh.
NVIDIA, Sony, ink deal to bring PhysX to the PS3
http://arstechnica.com/hardwar...g-physx-to-the-ps3.ars
Slow news day? Ageia already brought PhysX to the PS3 in 2005:
http://news.softpedia.com/news...ics-Library-5292.shtml
I can bring out more, but these were some recent ones that gave me a good laugh.
Go to every "major licensing agreement with PhysX" and you'll see that they already have broader support for Havok. And Havok supports all three consoles in addition to the PSP.
I wish to make one thing clear as well. That CUDA supports OpenGL and DX11 doesn't mean that OpenGL supports CUDA. As long as PhysX is a part of CUDA, it will remain unsupported by anyone other than Nvidia. Havok, on the other hand, will be supported by ATI via Stream through OpenGL, and Nvidia will have the option to support it as well. PhysX needs to be ported to OpenGL or die. Even there, PhysX will probably die, since it's offering less support than Havok even in major tools like Autodesk, as mentioned in this thread.

PhysX wasn't even a finalist in the 2008 Game developers award in middleware:
http://www.gamasutra.com/php-b..._index.php?story=21273

To sum it up:
Havok is already MUCH larger than PhysX and supported by all major players. If Firingsquad is correct:
OpenCL-powered Havok physics effects should be compatible with NVIDIA GPUs as well
http://firingsquad.com/news/ne...cle.asp?searchid=21450
PhysX is supported by Nvidia and the developers they throw money at for minor support (like THQ, where 9 of 10 studios use Havok). ROFL! :laugh:


 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Originally posted by: munky
PhysX on the GPU does require CUDA
Please stop posting false information. CUDA provides an interface for NVIDIA GPUs. It will certainly run best on NVIDIA GPUs using CUDA, but they could use OpenCL or some other API to get CUDA running. It could also run on other GPUs without CUDA. Stop posting this misinformation.
Can you run PhysX on Nvidia GPU's without CUDA? NO! It doesn't matter if Ati could use OpenCL for PhysX, because currently PhysX does not support OpenCL, and Ati would need to spend resources on porting it to OpenCL. And even if they did, PhysX IP is still owned and controlled by NV, it's not "open" in the same way as OpenCL, OpenGL or other open technologies. Why should Ati spend resources on promoting an API which is owned by NV?

Much in the same way that DirectX is tied to windows. Why would they not use their own software?
No it isn't, because Windows is HW-agnostic, in the sense that it runs on both Intel and AMD CPU's. CUDA, on the other hand, is a proprietary NV technology that only runs on their HW.

They have PhysX running on the iPhone. Your point is moot.
You may as well tell me PhysX runs on my wrist watch; it wouldn't matter, because it's not utilizing a GPU. Ati is in the GPU business, so keep that in mind.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Originally posted by: chizow
Now for what-ifs. Now that ATI has demonstrated a GPU accelerated OpenCL Havok client based on an industry standard, when will we see it in production?
We've still got to produce a compliant OpenCL stack yet, so that won't happen before our previously announced roadmap on ATI Stream SDK with OpenCL support.

What will ATI's response be if Nvidia also supports hardware accelerated Havok?
I don't know how much attention you've paid to the announcement of this, but we are not doing anything here that uses non-core OpenCL extensions; i.e. the net result is that all NVIDIA need to do is provide an OpenCL stack on the Windows platform and they will get support of this by default. This is one of the reasons why we are going this path and one of the key reasons why we are working with Havok here - they were open to taking this path (not least because it frankly opens things up further for them).
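
In other words, an application written against the core spec only needs a conformant stack to be present. A rough sketch (core API only, assuming nothing beyond the stock CL/cl.h headers and no vendor-specific extensions) of checking what a given stack reports:

/* Sketch: code written against the core OpenCL spec needs no vendor
   extensions, so any conformant stack (AMD, NVIDIA, ...) will run it.
   This just reports what the first GPU's stack exposes. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char version[128], extensions[4096];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        printf("No OpenCL stack installed\n");
        return 1;
    }
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, NULL);
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(extensions), extensions, NULL);

    printf("Core spec reported:  %s\n", version);
    printf("Optional extensions: %s\n", extensions);
    return 0;
}

Code that sticks to the core spec never has to look at the extensions list at all; it runs on whichever vendor's stack happens to be installed.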

If Nvidia supports Havok on their GPUs, will ATI support PhysX on their GPUs? Any answers would be appreciated thanks.
Simple. If NVIDIA pushes PhysX through standard OpenCL calls, much like the work we are doing with Havok, then PhysX will operate on all hardware that has an OpenCL software interface (be that AMD, NVIDIA, Intel, Sony, Sun, etc., etc.).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The difference is with graphics the GPU can do things that the CPU can't, such as rendering pixel shader effects and such

What on Earth makes you think CPUs can't handle pixel shaders? Not only can they, they do a lot more on that end than current GPUs - by a staggering amount, too. Check out SW:EP3 for some evidence of this (the movie, not any game). While we have made huge progress with shaders lately, we aren't remotely close to GPUs handling radiosity yet. The problem is how slow CPUs are at physics calculations; they aren't close (well, excluding Cell or comparable architectures).

In graphics you have CPU>driver>GPU>Screen. In interactive physics you'd have CPU>driver>GPU/PPU>CPU.

Would depend on how the engine was coded, but there would certainly be circumstances where you would want two-way communication. That will be a long time coming yet though; sadly, one of the key PC gaming companies decided that holding up physics advancement so they could play politics was their best option.

While neither Havok nor PhysX are the ideal solution for AMD/Ati, at the moment Havok seems like the lesser of the two evils, since Intel does not yet have a competing HW counterpart.

Wait, when did Intel exit the CPU market? I would have thought something like that would have been all over, apparently I missed it?

If Intel already had Larrabee on the market, and was pushing Havok to leverage its HW sales, then it would be just as much of a problem for AMD/Ati as adopting Nvidia's PhysX.

So now you are starting to see how shortsighted AMD's current approach is, I knew you'd come around

This is about ATI Havok.

Intel's Havok. Owned, fully, by Intel. Dictated, by Intel. Controlled, by Intel.

Havok is clearly the better solution, as it is optimized for CPU

PhysX has always run on CPUs, that has never been an issue even prior to nV's involvement with it.

Havok, on the other hand, will be supported by ATI via Stream through OpenGL

It's getting annoying with you trying to sound like you know what you are talking about while you keep using 'OpenGL' - it is OpenCL.

Havok is already MUCH larger than PhysX and supported by all major players. If Firingsquad is correct:

Of course nV is going to support Havok - not supporting a standard you have available does nothing but hurt your customers. Some companies care more about politics than their customer base, but that is to be expected and well within their rights.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Dave, what's the ETA for ATi OpenCL? Are Intel themselves working with game developers to push OpenCL? And can you list any games in development that support Havok OpenCL accelerated physics, and what their ETA is? Are you working with any of those developers?

Sorry for the barrage, but you aren't here that often.

I'd also note that several PhysX games will launch this year, and that while it's great ATi is looking to implement some sort of physics acceleration, do you see any reason today's GPU buyer should care just yet?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Modelworks
Anyone is free to go to the Havok site and download it.
http://software.intel.com/sites/havok/

There is no authorization; they are just taking your name and email so they can send you updates and news. Enter junk if you like.
Oh nice, did they change it recently? Last time I navigated to their download page I coulda sworn it had about 50 fields of info with an option to fax it in and wait for authorization. Good to know though, thanks.

Also, I don't think people are ignoring the fact Havok encompasses other production tools, but those tools as you said are modular and often redundant within a complete middleware game engine solution. Even one of your examples in Kynapse has ties to PhysX, via Morpheme.

There's certainly merit to your point about one being easier to implement for the developer, just as there's merit to one being technically superior in capability over the other. While an OpenCL Havok client would certainly eliminate that edge, that wasn't the case for the last 9 months.

Originally posted by: evolucion8
Unless you are a software engineer, all that you're saying is pure speculation. The same thing could be said of DirectSound and EAX, which don't work in Vista because of the vast difference in Vista's driver model, and yet they now work by translating the calls. So if there are DLLs to mess around with, there's a possibility of using the same concept to translate the calls and accelerate them in hardware, like Creative did with Alchemy, which translated DirectSound calls into OpenAL and got full hardware acceleration with all the same constraints and fidelity as originally made.
I wouldn't need to be a software engineer to understand existing physics content does not dynamically scale to hardware capabilities beyond the established parameters and limits the title shipped with. I can see this by simply upgrading or over/underclocking my CPU, where there are no additional effects, only perhaps additional performance for the existing effects. There are also various titles that allow you to mess with .ini settings, like the number of Havok CPU threads, which again yields no tangible increase in effects quality or performance (see: Bioshock). This is the same problem existing PhysX titles face, as the chance of developers going back and retrofitting their games with additional hardware PhysX effects is slim to none, especially when you consider most of them no longer support their games with patches.

I also find it funny that the very same EAX/wrapper analogy you're attempting to use here is suddenly good enough now, when apparently it wasn't months ago when I compared PhysX to EAX. Funny how that goes. In any case, it's inapplicable here, as a wrapper or upgraded GPU-accelerated library wouldn't do anything for existing content compiled to run on the CPU. Not to mention you might run into the problems ViRGE mentioned earlier if those back-ends are statically compiled and aren't easily changed.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: dreddfunk
Personally, I don't like the idea of the physics industry standard being in the hands of any major hardware player.

Yet, it sounds to my un-technical ears that some want AMD to make a Faustian choice: either admit that they're refusing to implement PhysX out of spite (and lose hardware sales as a result), or implement CUDA/PhysX and make two critical pieces of their competitor's IP into industry standards.

I'll reverse the question posed: if AMD were to implement GPU-accelerated PhysX, would we ever see NV implement GPU-accelerated Havok? If they did, it would fly in the face of business logic, because they would be supporting a direct competitor to their own IP.

I'll repeat, I'm not happy with either of the current proposed solutions, because it leaves the IP for physics in the hands of a major hardware vendor. Of the two hands to leave it in, however, I'd pick Intel too, if I were AMD.
I don't think it's quite a Faustian choice, as AMD is really forced to choose the lesser of two evils given they don't have a horse in the race when it comes to physics middleware. How's that Shakespearian idiom go... politics makes strange bedfellows. Nothing short of amazing that AMD has partnered up with Intel, a company looking to take away 75% of their core business as we speak.

I look at it more like the Prisoner's Dilemma, where AMD really can't win no matter what they do given they don't have a horse in the race when it comes to software, but would still clearly benefit the most from cooperating. They keep choosing to defect (from PhysX) when it's clearly in their best interest to cooperate. The other player, Nvidia (and Intel if you extend it), has more to gain overall regardless of what AMD chooses.

What you'll end up with is 60-70% of the GPU market supporting PhysX (Nvidia) and 100% of the GPU market supporting Havok (AMD + NV, assuming Nvidia supports OpenCL Havok, as capable and expected). While this may seem like a loss for Nvidia PhysX, they ultimately win, as their GPUs will have strengthened their position in the market by supporting both major physics SDKs while AMD can only claim support for one.

Also as ViRGE posted in reply, the only concern any consumer should have with regard to industry standards is an open standard API, which is what you have with OpenCL and DX11 to a lesser degree later this year. After that you'll have to bitch to your IHV or physics middleware provider if your HW doesn't support those standard APIs.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
Originally posted by: chizow

I don't think that's possible, as Havok isn't scalable beyond what's already coded into the game and limited by whatever physics/fidelity settings its shipped with. Meaning, just because I upgrade my CPU (or to a GPU-accelerated Havok library in the future), that doesn't mean Havok effects are going to scale dynamically to take advantage of that extra processing power.
Oh yes, I see what you mean now.

I was referring to the "pure" performance case where existing effects possibly run faster courtesy of a new Havok back-end that takes advantage of hardware.

This is unlike legacy PhysX titles which do not automatically offer a performance gain just because an nVidia GPU is installed in the system.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,979
126
I forgot to add, the bridge demo is quite cool. I could definitely see this being used in a WWII FPS in a mission about blowing up a railway bridge. In fact MoH Spearhead and CoD:UO have missions that do precisely that.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: instantcoffee
AMD's strategy is dedicated hardware support for the Havok API on the CPU in addition to the GPU, not the PhysX API. The advantages are obvious to developers.
Really, what's the advantage? 9 more months of the same CPU physics we've seen for the last 5-6 years? And what about the consumers? The longer it takes for GPU physics adoption from both IHVs, the longer it'll take for developers to implement additional GPU physics effects in games.

As for when they will support a GPU accelerated physics engine, the answer is now. Havok had a better offer than Nvidia, obviously. AMD didn't block support for PhysX either, since AMD's SDK is available for Nvidia to download and use with ATI Stream.
LMAO. So let's get this straight. You think AMD would allow Nvidia to write drivers for their hardware? Here's where SunnyD starts lecturing you about the obvious licensing and IP problems you'd encounter.

"People like me" are advocating things that benfits all consumers, while "people like you" are Guerrilla marketing Nvidia whenever you get the chance. Havok is clearly the better solution, as it is optimized for CPU for years for everyone with a larger toolset as well. It has broader support with more developers using it.
Rofl, that's funny, it seems you've done nothing but post misinformation and lies while blatantly ignoring facts to the contrary since you joined. As for Havok clearly being the better solution....lol...are you saying CPU physics is a better alternative to GPU physics? You're defending AMD's position to segregate the GPU physics market and you're advocating things that benefit all consumers? LOOOOOOOL.

The marketing used for PhysX is laughable at best, though some might bite on Nvidia trying to make PhysX sound bigger than it is:
THQ Selects NVIDIA PhysX Technology For Use In Its WorldWide Studios
http://www.nvidia.com/object/io_1229607540213.html
Sounds big, yes? What they left out was:
"As a part of our long-standing partnership with Havok, nine out of our ten internal studios, including Relic, Rainbow and Volition, are actively using Havok Physics and other Havok products in development today," said Roy Tessler, THQ's senior vice president, production and worldwide studios.
http://www.havok.com/content/view/680/53/
This makes me laugh.

NVIDIA, Sony, ink deal to bring PhysX to the PS3
http://arstechnica.com/hardwar...g-physx-to-the-ps3.ars
Slow news day? Ageia already brought PhysX to the PS3 in 2005:
http://news.softpedia.com/news...ics-Library-5292.shtml
I can bring out more, but these were some recent ones that gave me a good laugh.
Go to every "major licensing agreement with PhysX" and you'll see that they already have broader support for Havok. And Havok supports all three consoles in addition to the PSP.
Past titles and licensing agreements show exactly that, history. The point of those announcements is to show that publishers are giving developers the choice to implement whatever middleware they choose. There are more Havok titles than PhysX, I've never claimed otherwise, but there's also absolutely no denying there are more GPU-accelerated PhysX titles than Havok titles, offering more advanced physics simulations that aren't possible on current CPUs. Do licensing agreements automatically translate into more PhysX titles? No. Do they offer more potential for adoption, more options for developers, and ultimately, a higher probability of GPU accelerated titles in the future? Absolutely yes.

I wish to make one thing clear as well. That CUDA supports OpenGL and DX11 doesn't mean that OpenGL supports CUDA. As long as PhysX is a part of CUDA, it will remain unsupported by anyone other than Nvidia. Havok, on the other hand, will be supported by ATI via Stream through OpenGL, and Nvidia will have the option to support it as well. PhysX needs to be ported to OpenGL or die. Even there, PhysX will probably die, since it's offering less support than Havok even in major tools like Autodesk, as mentioned in this thread.
LOL, clear as mud. The only thing you've made clear is that you have no clue what you're talking about, and, to a lesser degree, that you're bent on promoting misinformation over all else.

PhysX wasn't even a finalist in the 2008 Game developers award in middleware:
http://www.gamasutra.com/php-b..._index.php?story=21273
LOL and? World of Warcraft hasn't won Best MMO since the year it launched, but that doesn't make it any less relevant.

To sum it up:
Havok is already MUCH larger than PhysX and supported by all major players. If Firingsquad is correct:
OpenCL-powered Havok physics effects should be compatible with NVIDIA GPUs as well
http://firingsquad.com/news/ne...cle.asp?searchid=21450
PhysX is supported by Nvidia and the developers they throw money at for minor support (like THQ, where 9 of 10 studios use Havok). ROFL! :laugh:
LOL, which is the end result I've been advocating all along, widespread adoption of GPU accelerated physics which will ultimately lead to accelerated implementation in games. Nvidia will be able to claim support for both "in the best interest of consumers" while AMD and people like you advocate segregation of the market, ultimately delaying any widespread use of GPU physics.
 