ATI Havok GPU physics apparently not as dead as we thought


Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: chizow
Originally posted by: dreddfunk
Personally, I don't like the idea of the physics industry standard being in the hands of any major hardware player.

Yet, it sounds to my un-technical ears that some want AMD to make a Faustian choice: either admit that they're refusing to implement PhysX out of spite (and lose hardware sales as a result), or implement CUDA/PhysX and make two critical pieces of your competitor's IP into industry standards.

I'll reverse the question posed: if AMD were to implement GPU-accelerated PhysX, would we ever see NV implement GPU-accelerated Havok? If they did, it would fly in the face of business logic, because they would be supporting a direct competitor to their own IP.

I'll repeat, I'm not happy with either of the current proposed solutions, because it leaves the IP for physics in the hands of a major hardware vendor. Of the two hands to leave it in, however, I'd pick Intel too, if I were AMD.
I don't think it's quite a Faustian choice, as AMD is really forced to choose the lesser of two evils given they don't have a horse in the race when it comes to physics middleware. How does that Shakespearean idiom go....politics makes strange bedfellows. Nothing short of amazing that AMD has partnered up with Intel, a company looking to take away 75% of their core business as we speak.

I look at it more like the Prisoner's Dilemma, where AMD really can't win here no matter what they do given they don't have a horse in the race when it comes to software, but would still clearly benefit the most from cooperating. They keep choosing to defect (from PhysX) when it's clearly in their best interest to cooperate. The other players, Nvidia (and Intel if you extend it), have more to gain overall regardless of what AMD chooses.

What you'll end up with is 60-70% of the GPU market supporting PhysX (Nvidia), 100% of the GPU market supporting Havok (AMD + NV assuming Nvidia supports OpenCL Havok, as capable and expected). While this may seem like a loss for Nvidia PhysX, they ultimately win, as their GPUs will have strengthened their position in the market by supporting both major physics SDKs while AMD can only claim support for one.

Also, as ViRGE posted in reply, the only concern any consumer should have with regard to industry standards is an open standard API, which is what you'll have with OpenCL, and to a lesser degree DX11, later this year. After that you'll have to bitch to your IHV or physics middleware provider if your HW doesn't support those standard APIs.

You really are clueless as to how far ahead ATI is when it comes to future games. Games using global illumination, for instance, will excel on the ATI products of the time. Add in DX11 and OpenCL Havok, and AMD going with Havok makes perfect sense. Games made to run on Intel Larrabee, I believe, will run remarkably well on AMD products once OpenCL/GL is widely implemented. Then you will see AMD/ATI has a very strong horse in this race.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: DaveBaumann
Originally posted by: chizow
Now for what-ifs. Now that ATI has demonstrated a GPU accelerated OpenCL Havok client based on an industry standard, when will we see it in production?
We've still got to produce a compliant OpenCL stack yet, so that won't happen before our previously announced roadmap on ATI Stream SDK with OpenCL support.

What will ATI's response be if Nvidia also supports hardware accelerated Havok?
I don't know how much attention you've paid to the announcement of this, but we are not doing anything here that uses non-core OpenCL extensions; i.e. the net result is that all NVIDIA need to do is provide an OpenCL stack on the Windows platform and they will get support of this by default. This is one of the reasons why we are going this path and one of the key reasons why we are working with Havok here: they were open to taking this path (not least because it frankly opens things up further for them).

If Nvidia supports Havok on their GPUs, will ATI support PhysX on their GPUs? Any answers would be appreciated thanks.
Simple. If NVIDIA pushes PhysX through standard OpenCL calls, much like the work we are doing with Havok, then PhysX will operate on all hardware that has an OpenCL software interface (be that AMD, NVIDIA, Intel, Sony, Sun, etc., etc.).
Awesome info, thanks! Very informative. So it sounds like AMD would only need to write a wrapper for their existing OpenCL driver for PhysX support if it's not compatible as-is. Also sounds like Nvidia hardware will run OpenCL Havok without issue as well. Now it's just a matter of when we'll see the OpenCL Havok SDK go into production.
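To make Dave Baumann's point concrete, here is a minimal sketch of what "core OpenCL only" looks like on the host side: every call below is core OpenCL 1.0 API with no vendor extensions, so the same code runs on any vendor's conformant stack (AMD, NVIDIA, Intel, ...). The "scale" kernel is a made-up stand-in, not anything from Havok's SDK.

```c
/* Minimal core-OpenCL host program: no vendor extensions anywhere,
   so any conformant OpenCL stack can run it unchanged.
   The "scale" kernel is a hypothetical stand-in for real physics work. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= k;"
    "}";

int main(void)
{
    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, NULL);          /* whichever vendor's stack is installed */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    float data[256];
    for (int i = 0; i < 256; i++) data[i] = (float)i;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, &err);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof factor, &factor);

    size_t global = 256;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);
    printf("v[10] = %f\n", data[10]);          /* expect 20.0 */
    return 0;
}
```

If PhysX were ported to calls like these, the same "works anywhere there's a driver" property Dave describes for Havok would apply to it as well.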

Originally posted by: BenSkywalker
Of course nV is going to support Havok - not supporting a standard you have available does nothing but hurt your customers. Some companies care more about politics than their customer base, but that is to be expected and well within their rights.
Exactly! It's amazing that there are actually people who claim to be advocating anyone's best interest when defending AMD's position on GPU physics, when it has done nothing but slow the adoption of enhanced GPU physics.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
Oh yes, I see what you mean now.

I was referring to the "pure" performance case where existing effects possibly run faster courtesy of a new Havok back-end that takes advantage of hardware.

This is unlike legacy PhysX titles which do not automatically offer a performance gain just because an nVidia GPU is installed in the system.
Yep, definitely seems possible to see performance gains for legacy titles, although I think the real key here is earlier adoption and simultaneous implementation of CPU and GPU physics content during development. That's really the only time meaningful physics enhancements can be implemented so the sooner everyone gets on board with all available options, the sooner devs can start putting it into games, whatever SDK they choose to use.

Originally posted by: BFG10K
I forgot to add, the bridge demo is quite cool. I could definitely see this being used in a WWII FPS in a mission about blowing up a railway bridge. In fact MoH Spearhead and CoD:UO have missions that do precisely that.
Heh ya, it reminded me of that bridge cutscene in Warhead as well. Instead of a prerendered cutscene, it might've been a playable interactive experience with GPU physics. Also reminded me of that scene in the movie Wanted, although I haven't played the game yet so I'm not sure if they did anything there.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: chizow
What you'll end up with is 60-70% of the GPU market supporting PhysX (Nvidia), 100% of the GPU market supporting Havok (AMD + NV assuming Nvidia supports OpenCL Havok, as capable and expected).

The only problem I see with PhysX: why should developers use it over the more mature and familiar Havok they've been using, once Havok allows GPU-accelerated physics?

If everyone will allow Havok's GPU-accelerated physics, "everyone" being nVidia - since they're nice, ATi - since they didn't want to use nVidia's, and Intel - since Havok is theirs, what's the point in PhysX?

The only area where PhysX has been utilized to a greater extent than Havok is GPU-accelerated physics. Seeing as developers are more than familiar with Havok, if Havok provides GPU-accelerated physics that "everyone" is supporting, why would developers bother with PhysX? If only nVidia would use it, and they will also use Havok, what's the point?

While some say ATi is being a Debbie-downer with their refusal to allow PhysX, because they aren't doing so and because nVidia would allow Havok-based GPU-accelerated physics, wouldn't that reduce PhysX to a check-box feature?

---------------

Let's pretend I'm a developer.

Let's pretend Havok has GPU-accelerated physics (GPU-AP) available.

NVidia allows PhysX - their GPU-AP AND Havok's GPU-AP.

Havok offers developers more than PhysX, and (again, if they have introduced their GPU-AP) is supported by all hardware vendors.

Why should I use PhysX?
 

Denithor

Diamond Member
Apr 11, 2004
6,300
23
81
bit-tech.net: AMD demonstrates Havok with GPU acceleration

Just as we expected, AMD has just demonstrated GPU-accelerated physics at the Game Developers Conference (GDC) in San Francisco. According to AMD, the session included a demonstration of Havok Cloth accelerated on AMD GPUs via OpenCL.

Commenting on AMD's demonstration of GPU-accelerated Havok physics, Havok's vice president, David Coghlan, said that "Havok is committed to delivering highly optimised cross-platform solutions to our game customers and we are pleased to be working with AMD to ensure that gamers enjoy a great user experience when running Havok-powered games on AMD platforms."

Coghlan also added that "unlocking the parallel processing capability of AMD's hardware provides real advantages to our customers, and the greater the total computing resources available, the better the gaming experience developers can deliver."

The technology has also been endorsed by Saber Interactive, developer of TimeShift and the Saber 3D engine. Saber's chief operating officer, Andrey Iones, said that "Havok's awesome toolset has allowed us to deliver astonishing physics interactions in our games, including detailed real-time destruction and complex ragdoll models, and we are excited about using ATI Stream technology to pursue more astounding in-game accomplishments."

The announcement coincided with Havok's revelation that Havok Cloth is, in fact, the fastest-selling Havok product ever. Originally released in March 2008, the Havok Cloth SDK allows developers to create realistic material effects, including clothing and environmental cloth such as rugs or flags, while also taking advantage of multi-threading across a variety of CPUs. This includes the Cell processor found in the PlayStation 3, and now crucially also supports AMD's GPUs via OpenCL.

As the technology has been demonstrated running on OpenCL, there's theoretically no reason why Nvidia wouldn't be able to support this on its own GPUs either. In fact, in a recent Q&A session, Nvidia's director of product management for PhysX, Nadeem Mohammad, said that "we would be thrilled to work with Havok and accelerate Havok on GeForce. I don't see that as conflict at all with our PhysX efforts; it would be a great complement." Mohammad also pointed out Nvidia had previously worked with Havok on Havok FX, and added that "we already have OpenCL drivers, and we were the first to demonstrate GPU acceleration in OpenCL."

In the meantime, however, Nvidia is still very much pushing its own PhysX technology, and recently revealed its new APEX tools at GDC as well. Among APEX's features is APEX Clothing which, like Havok Cloth, will enable game developers to easily create detailed cloth effects. The APEX toolset also includes tools for creating destructible objects and vegetation with realistic effects. Nvidia was also keen to point out that APEX is scalable across a variety of platforms, starting from the Wii and going through various processor technologies all the way to GPU acceleration via CUDA.
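As a rough idea of what cloth middleware like Havok Cloth or APEX Clothing actually computes each frame, here is a minimal mass-spring sketch in plain C: Verlet integration plus a few constraint-relaxation passes. The grid size, constants, and function names are illustrative only; this shows the general technique, not either SDK's API.

```c
/* Sketch of a mass-spring cloth step (general technique only; constants
   and names are illustrative, not Havok Cloth or APEX code). */
#include <math.h>

#define W 32          /* cloth grid width in particles  */
#define H 32          /* cloth grid height in particles */
#define REST 0.05f    /* rest length between neighbours */
#define ITERS 4       /* relaxation passes per step     */

typedef struct { float x, y, z; } Vec3;

static Vec3 pos[W * H], prev[W * H];   /* current and previous positions */

void cloth_init(void)                  /* lay the grid out flat */
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            Vec3 p = { x * REST, -y * REST, 0.0f };
            pos[y * W + x] = prev[y * W + x] = p;
        }
}

static void satisfy(int a, int b)      /* push a particle pair back to rest length */
{
    Vec3 d = { pos[b].x - pos[a].x, pos[b].y - pos[a].y, pos[b].z - pos[a].z };
    float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len < 1e-6f) return;
    float k = 0.5f * (len - REST) / len;   /* split the correction evenly */
    pos[a].x += k * d.x; pos[a].y += k * d.y; pos[a].z += k * d.z;
    pos[b].x -= k * d.x; pos[b].y -= k * d.y; pos[b].z -= k * d.z;
}

void cloth_step(float dt)
{
    /* Verlet integration under gravity: x' = 2x - x_prev + a*dt^2 */
    for (int i = 0; i < W * H; i++) {
        Vec3 p = pos[i];
        pos[i].x += p.x - prev[i].x;
        pos[i].y += p.y - prev[i].y - 9.81f * dt * dt;
        pos[i].z += p.z - prev[i].z;
        prev[i] = p;
    }
    /* Relax structural constraints; each edge is independent, which is
       why this maps so well onto many CPU cores or GPU threads. */
    for (int it = 0; it < ITERS; it++)
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                int i = y * W + x;
                if (x + 1 < W) satisfy(i, i + 1);
                if (y + 1 < H) satisfy(i, i + W);
            }
    for (int x = 0; x < W; x++) pos[x] = prev[x];  /* pin the top row so it hangs */
}
```

The per-edge independence in the relaxation loop is exactly the data parallelism the GDC demo exploits by moving the work onto the GPU via OpenCL.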
 

Denithor

Diamond Member
Apr 11, 2004
6,300
23
81
Originally posted by: josh6079
Let's pretend I'm a developer.

Let's pretend Havok has GPU-accelerated physics (GPU-AP) available.

NVidia allows PhysX - their GPU-AP AND Havok's GPU-AP.

Havok offers developers more than PhysX, and (again, if they have introduced their GPU-AP) is supported by all hardware vendors.

Why should I use PhysX?

My thoughts exactly after wading through this flame-fest of a thread.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: chizow

Exactly! It's amazing that there are actually people who claim to be advocating anyone's best interest when defending AMD's position on GPU physics, when it has done nothing but slow the adoption of enhanced GPU physics.

Exactly! It's amazing that there are actually people who claim to be advocating anyone's best interest when defending nVidia's decision not to adopt DX10.1, when it has done nothing but slow the adoption of features such as enhanced anti-aliasing performance, global illumination, and greater flexibility for developers.

So I guess it's more a matter of vision/taste here than anything else. PhysX is great, but Havok is a more complete package that runs great on the CPU, something PhysX won't do, especially if it's deliberately coded single-threaded. Mirror's Edge, anyone?
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: evolucion8

So I guess it's more a matter of vision/taste here than anything else. PhysX is great, but Havok is a more complete package that runs great on the CPU, something PhysX won't do, especially if it's deliberately coded single-threaded. Mirror's Edge, anyone?

:roll:

Pure FUD. PhysX runs just fine on the CPU. Probably just as well as Havok.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Wreckage

:roll:

Pure FUD. PhysX runs just fine on the CPU. Probably just as well as Havok.

Oh pleeeaassse, knowing that you are an nVidia fan: try to run Mirror's Edge with PhysX on your puny system and watch it slow to a crawl. The older games that supported PhysX won't even run without an AGEIA card, and if they do run, there are areas where the performance hit is too great. That doesn't happen with games on the Havok engine. So unless you have something smart to say to counter this, please keep it shut; your credibility is questionable at best.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: evolucion8

Oh pleeeaassse, knowing that you are an nVidia fan: try to run Mirror's Edge with PhysX on your puny system with an ATi card and watch it slow to a crawl,

Mirror's Edge PhysX is programmed for the GPU, not the CPU.

There are many games that use PhysX that don't require an Ageia card or a NVIDIA card.

It's like saying try to run DirectX 10 on your computer without a video card.


PhysX can be programmed to run on a GPU/CPU/game console/cell phone/etc.

You really should do a smidgen of research before making so many false statements.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Wreckage
Originally posted by: evolucion8

Oh pleeeaassse, knowing that you are an nVidia fan: try to run Mirror's Edge with PhysX on your puny system with an ATi card and watch it slow to a crawl,

Mirror's Edge PhysX is programmed for the GPU, not the CPU.

There are many games that use PhysX that don't require an Ageia card or a NVIDIA card.

It's like saying try to run DirectX 10 on your computer without a video card.


PhysX can be programmed to run on a GPU/CPU/game console/cell phone/etc.

You really should do a smidgen of research before making so many false statements.

Yeah, quite a lot of statements I made in the previous post, lol. I know that PhysX can be programmed everywhere; even my cat supports PhysX, because when he jumps on my trashcan and throws all the trash on the floor, I throw a shoe at him, and it bounces off him so realistically, I can see the PhysX of law right there, it's just breathtaking!!! Overall, Havok is far more complete than PhysX, period. Whether you like it or not.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: evolucion8
Exactly! It's amazing that there are actually people who claim to be advocating anyone's best interest when defending nVidia's decision not to adopt DX10.1, when it has done nothing but slow the adoption of features such as enhanced anti-aliasing performance, global illumination, and greater flexibility for developers.

So I guess it's more a matter of vision/taste here than anything else. PhysX is great, but Havok is a more complete package that runs great on the CPU, something PhysX won't do, especially if it's deliberately coded single-threaded. Mirror's Edge, anyone?
LMAO, is Global Illumination next on the list of talking points published in the Nvidia Bashing for Dummies handbook? Who's stopping the proliferation of DX10.1, global illumination, tessellation, or anything else? It's out there for devs to make use of; no one's stopping them. Nvidia isn't going to promote features their hardware doesn't support, which is very different from AMD and PhysX support.

As for PhysX not being able to run on CPUs, with Mirror's Edge running poorly as the example: it's simple enough, and a question I posed earlier. If Havok was capable of accelerating and producing similar effects, why haven't we seen them in games yet? Why are we only seeing similar simulations, features and capabilities now that they've demonstrated a GPU-accelerated OpenCL client? This part has nothing to do with software, APIs, drivers, or industry standards; it's a hardware limitation, simple as that.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Denithor
Originally posted by: josh6079
Let's pretend I'm a developer.

Let's pretend Havok has GPU-accelerated physics (GPU-AP) available.

NVidia allows PhysX - their GPU-AP AND Havok's GPU-AP.

Havok offers developers more than PhysX, and (again, if they have introduced their GPU-AP) is supported by all hardware vendors.

Why should I use PhysX?

My thoughts exactly after wading through this flame-fest of a thread.
The same reasons PhysX was able to co-exist with Havok prior to Nvidia's purchase or announcement of porting PhysX to CUDA and their GPUs. Even on a level playing field pre-GPU PhysX in terms of hardware (discounting the ~100k PPUs as insignificant), PhysX still drew enough support to be included in an impressive array of titles, despite Havok being the more established market leader. The exact reasons? Only devs would know for sure, but maybe price, support, availability, engine bundling, compatibility, cross-platform portability etc. influenced their decision.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: chizow

LMAO, is Global Illumination next on the list of talking points published in the Nvidia Bashing for Dummies handbook? Who's stopping the proliferation of DX10.1, global illumination, tessellation, or anything else? It's out there for devs to make use of; no one's stopping them. Nvidia isn't going to promote features their hardware doesn't support, which is very different from AMD and PhysX support.

Global illumination, the real deal, cannot be done in real time in DX10; it will barely run. Seems your experience reading Dummies handbooks is great; I haven't seen one here where I live. While it's true that nVidia doesn't have to promote unsupported features, they're also stalling them, influencing developers not to use them, or removing them like they did in Assassin's Creed, and teaching them how to get similar features with driver query crap that will eventually be removed.



As for PhysX not being able to run on CPUs, with Mirror's Edge running poorly as the example: it's simple enough, and a question I posed earlier. If Havok was capable of accelerating and producing similar effects, why haven't we seen them in games yet? Why are we only seeing similar simulations, features and capabilities now that they've demonstrated a GPU-accelerated OpenCL client? This part has nothing to do with software, APIs, drivers, or industry standards; it's a hardware limitation, simple as that.

Remember that AGEIA was already a hardware-based physics solution; Havok wasn't. And since you are not a hardware or software engineer, your opinion is just pure speculation again. Intel bought Havok and everything froze up; now they're picking up speed again. Mirror's Edge is a great example because it was purportedly single-threaded, so turning PhysX on impacts the game's performance terribly. Crysis is even more playable with those physics effects on and everything maxed.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: evolucion8
Global illumination, the real deal, cannot be done in real time in DX10; it will barely run. Seems your experience reading Dummies handbooks is great; I haven't seen one here where I live. While it's true that nVidia doesn't have to promote unsupported features, they're also stalling them, influencing developers not to use them, or removing them like they did in Assassin's Creed, and teaching them how to get similar features with driver query crap that will eventually be removed.
Assassin's Creed had Global Illumination cut because of Nvidia? Really? I personally think it was a second gunman on the grassy knoll. LOL. Anyway, if you and Nemesis want to create another thread about DX10.1, global illumination and custom-fitted tin foil hats, feel free. I'll be right over there shortly....seriously. :roll:

Remember that AGEIA was already a hardware-based physics solution; Havok wasn't. And since you are not a hardware or software engineer, your opinion is just pure speculation again. Intel bought Havok and everything froze up; now they're picking up speed again. Mirror's Edge is a great example because it was purportedly single-threaded, so turning PhysX on impacts the game's performance terribly. Crysis is even more playable with those physics effects on and everything maxed.
What does AGEIA hardware have to do with PhysX or Havok's capabilities on like hardware, i.e. the CPU? You're claiming PhysX performs poorly on the CPU based on performance resulting from enabling features that were never meant to run on a CPU. That's the whole point. If those advanced physics effects could run acceptably on CPUs, we would've seen them years ago. Yet neither PhysX, Havok, nor any other physics SDK was able to produce them in real time on the CPU. Why?

As for the nonsense about Mirror's Edge being single-threaded....do you own Mirror's Edge? I don't, but I'm extremely confident it's not single-threaded, being a UE3.0-based game, especially given all my UE3.0 games are some of the best-threaded and most CPU-intensive games that I own.

 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: chizow
What does AGEIA hardware have to do with PhysX or Havok's capabilities on like hardware, i.e. the CPU? You're claiming PhysX performs poorly on the CPU based on performance resulting from enabling features that were never meant to run on a CPU. That's the whole point. If those advanced physics effects could run acceptably on CPUs, we would've seen them years ago. Yet neither PhysX, Havok, nor any other physics SDK was able to produce them in real time on the CPU. Why?

PhysX was originally from AGEIA before nVidia bought it, and while it's true that PhysX runs poorly on a CPU because it was never meant to, Havok runs much better and does more things.

As for the nonsense about Mirror's Edge being single-threaded....do you own Mirror's Edge? I don't, but I'm extremely confident it's not single-threaded, being a UE3.0-based game, especially given all my UE3.0 games are some of the best-threaded and most CPU-intensive games that I own.

I do OWN Mirror's Edge, and monitoring with RivaTuner shows it only uses one CPU core; the others remain idle. While the UE3.0 engine is multi-threaded, it's up to the developers to choose to implement that in the game. Did BioShock support anti-aliasing in DX10? Did Gears of War support HDR? All those features are supported in UE3.0 and weren't implemented in those games because the developers didn't want to, but they're implemented in Mirror's Edge.

In Mirror's Edge, when you are running inside the building and the cops shoot at the windows and break them, it runs at single-digit FPS. Even if you are not looking at the windows and you move your character face-first into a wall, a position which usually skyrockets your FPS, the FPS stays like that until you restart the game. So what can you say about that? It doesn't happen to ATi users with an AGEIA card or nVidia cards.
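For what it's worth, spreading a physics step across CPU cores is a plain fork/join job, as in this sketch (POSIX threads; the body count and the gravity-only update are made up for illustration, and this is not Mirror's Edge, UE3, or PhysX code):

```c
/* Sketch: fork/join threading of a physics step across CPU cores.
   Purely illustrative; not code from Mirror's Edge, UE3, or PhysX. */
#include <pthread.h>

#define NTHREADS 4
#define NBODIES  10000

static float vel[NBODIES];                 /* one velocity component per body */

typedef struct { int begin, end; float dt; } Slice;

static void *integrate(void *arg)          /* worker: integrate one slice */
{
    Slice *s = (Slice *)arg;
    for (int i = s->begin; i < s->end; i++)
        vel[i] -= 9.81f * s->dt;           /* apply gravity to this body */
    return NULL;
}

void physics_step(float dt)
{
    pthread_t tid[NTHREADS];
    Slice slice[NTHREADS];
    int chunk = NBODIES / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {   /* fork: one slice per core */
        slice[t].begin = t * chunk;
        slice[t].end = (t == NTHREADS - 1) ? NBODIES : (t + 1) * chunk;
        slice[t].dt = dt;
        pthread_create(&tid[t], NULL, integrate, &slice[t]);
    }
    for (int t = 0; t < NTHREADS; t++)     /* join before the next frame */
        pthread_join(tid[t], NULL);
}
```

Whether a given game actually does this is the developer's call, which is exactly the point being argued above.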

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: BenSkywalker
What on Earth makes you think that CPUs can't handle pixel shaders? Not only can they, they do a lot more on that end than current GPUs - by a staggering amount, too. Check out SW:EP3 for some evidence of this (the movie, not any game). While we have made huge progress with shaders lately, we aren't remotely close to GPUs handling radiosity yet. The problem is how slow CPUs are at physics calculations; they aren't close (well, excluding Cell or comparable architectures).
In the context of this thread, which is interactive graphics in games, CPUs are not a viable option for pixel shaders, although I'm aware of their use in CG render farms.

Would depend on how the engine was coded, but there would certainly be circumstances where you would want two way communication. That will be a long time coming yet though, sadly one of the key PC gaming companies decided holding up physics advancement so they could play politics was their best option.
Seeing how the PhysX IP is owned by Ati's competitor, I'm not surprised in the least at their refusal to support it. It may disappoint HW enthusiasts, but as a business strategy it makes sense.


Wait, when did Intel exit the CPU market? I would have thought something like that would have been all over, apparently I missed it?
Why are we talking about CPUs, when they're supposedly too slow to matter for HW-accelerated physics? Besides, Ati is not in the CPU business.


So now you are starting to see how shortsighted AMD's current approach is, I knew you'd come around
Given the alternative, I'd say it's the lesser of two evils right now for AMD/Ati.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: evolucion8
while it's true that PhysX runs poorly on a CPU because it was never meant to, Havok runs much better and does more things.

Please stop posting this FUD. This is blatantly false information.

Once more, the PhysX in Mirror's Edge was NOT MEANT to run on the CPU. There are many games out there (Gears of War) that run CPU PhysX.

Now unless you have a smidgen of proof about Havok running better or having more features in regards to physics..... stop misleading people.

 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
chizow:
The exact reasons? Only devs would know for sure, but maybe price, support, availability, engine bundling, compatibility, cross-platform portability etc. influenced their decision.

Agreed. I think it's safe to say that PhysX is not out of the game simply because Havok is going to confront it with GPU-accelerated physics.

But, from what we've seen so far it seems that Havok - when they finally allow GPU-accelerated physics - does have a leg up in all of the areas you've mentioned from a developer's standpoint, except for possibly price. (Only devs would really know)

What I think would have to happen in order for PhysX to remain prevalent is advancement in PhysX itself. Once Havok becomes available for GPU-accelerated physics, the battle between the physics application will really pick up pace.

chizow:
PhysX still drew enough support to be included in an impressive array of titles, despite Havok being the more established market leader.

PhysX probably got its foot in the door by being the cheaper alternative. What gave it a boost was the PPU and - to a much larger extent - the GPU.

Take away that boost, put them on a level playing field where both Havok and PhysX can access the same processing units in a PC, and you'll see that developers are going to look for whichever provides the most, the easiest. As it stands, that's Havok. But, as mentioned, that doesn't mean PhysX will "die."

PhysX will have to devote more time and resources into developing areas that Havok currently has advantages with to remain competitive. The link that Denithor provided, and what Modelworks was going to provide, shows this. While Havok is going to finally provide GPU-AP, PhysX is:

bit-tech.net:
...reveal[ing] its new APEX tools at GDC... Among APEX's features is APEX Clothing which, like Havok Cloth, will enable game developers to easily create detailed cloth effects. The APEX toolset also includes tools for creating destructible objects and vegetation with realistic effects.

What will this increase in competition do to the price developers would have to pay for PhysX? Only they and nVidia would really know. I could see Havok as being slightly more expensive, but then we would be trying to estimate too much for the purpose of this discussion.

In summary, I think that for us this is good; we will be seeing physics competition pick up pace.

For nVidia, it's going to be more on their plate to compete for graphics and physics; they'll no longer hold the lead of being able to use the GPU while Havok relies on good ol' x86. That said, "more on their plate" is every processing unit vendor's goal, right?

For ATi, we may see PhysX run on Radeons still, as Dave Baumann suggested.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
evolucion8:
In Mirror's Edge, when you are running inside the building and the cops shoot at the windows and break them, it runs at single-digit FPS. Even if you are not looking at the windows and you move your character face-first into a wall, a position which usually skyrockets your FPS, the FPS stays like that until you restart the game. So what can you say about that? It doesn't happen to ATi users with an AGEIA card or nVidia cards.

What seems to be the culprit in your scenario is that the nVidia card's low frame rate comes from calculating the physics and rendering the graphics simultaneously on only one processing unit.

The reason why it probably doesn't happen (as much) with the "...ATi users with the AGEIA card or nVidia cards" is because those extra cards are calculating the physics while the ATi card is rendering the graphics.

I'm sure if you had two nVidia cards arranged to have one process the physics while the other the graphics or an nVidia card + AGEIA PPU it would be similar. Hence, it probably wouldn't slow down to the single digits.

edit: spelling
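Here is a sketch of the workload split josh6079 describes: when a second compute-capable card is present, give physics its own queue there so simulation stops competing with rendering for one chip. OpenCL is used purely for illustration (the PhysX-era setups used CUDA or the AGEIA PPU), and the function is hypothetical:

```c
/* Hypothetical setup: route physics to a second GPU when one exists,
   so the primary GPU can keep rendering. Illustrative only. */
#include <CL/cl.h>

cl_command_queue make_physics_queue(cl_context *out_ctx)
{
    cl_platform_id plat;
    cl_device_id devs[4];
    cl_uint n = 0;
    cl_int err;

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 4, devs, &n);

    /* Prefer a second GPU for physics; fall back to sharing the first.
       With two devices, the renderer keeps device 0 to itself. */
    cl_device_id phys_dev = (n > 1) ? devs[1] : devs[0];

    *out_ctx = clCreateContext(NULL, 1, &phys_dev, NULL, NULL, &err);
    return clCreateCommandQueue(*out_ctx, phys_dev, 0, &err);
}
```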
 

Denithor

Diamond Member
Apr 11, 2004
6,300
23
81
Originally posted by: Modelworks


You beat me to it
I told people that Havok using the GPU wasn't vaporware
There is also going to be more news about Havok coming shortly.

??

NDA or what?

Anyway, it doesn't even matter anymore - these guys are so busy flaming each other into kebabs that they don't even notice when actual news is posted.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Wreckage
Originally posted by: evolucion8
while it's true that PhysX runs poorly on a CPU because it was never meant to, Havok runs much better and does more things.

Please stop posting this FUD. This is blatantly false information.

Once more, the PhysX in Mirror's Edge was NOT MEANT to run on the CPU. There are many games out there (Gears of War) that run CPU PhysX.

Now unless you have a smidgen of proof about Havok running better or having more features in regards to physics..... stop misleading people.

Don't take it personally, just STAY ON TOPIC. I'm not misleading people; they can research and learn on their own, they're not dorks, unlike some guy who always favors nVidia over ATi using FUD, trolling and persuasive techniques.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: josh6079

What seems to be the culprit in your scenario is that the nVidia card's low frame rate comes from calculating the physics and rendering the graphics simultaneously on only one processing unit.

The reason why it probably doesn't happen (as much) with the "...ATi users with the AGEIA card or nVidia cards" is because those extra cards are calculating the physics while the ATi card is rendering the graphics.

I'm sure if you had two nVidia cards arranged to have one process the physics while the other the graphics or an nVidia card + AGEIA PPU it would be similar. Hence, it probably wouldn't slow down to the single digits.

edit: spelling

Actually, there are some results on the web using the AGEIA card with ATi and nVidia cards, and they prove it's even faster at calculating physics than using a 9600GT as a dedicated PhysX card. They can be found cheap on eBay, so I might buy an AGEIA card in the future if Havok doesn't take off.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: josh6079
evolucion8:
In Mirror's Edge, when you are running inside the building and the cops shoot at the windows and break them, it runs at single-digit FPS. Even if you are not looking at the windows and you move your character face-first into a wall, a position which usually skyrockets your FPS, the FPS stays like that until you restart the game. So what can you say about that? It doesn't happen to ATi users with an AGEIA card or nVidia cards.

What seems to be the culprit in your scenario is that the nVidia card's low frame rate comes from calculating the physics and rendering the graphics simultaneously on only one processing unit.

The reason why it probably doesn't happen (as much) with the "...ATi users with the AGEIA card or nVidia cards" is because those extra cards are calculating the physics while the ATi card is rendering the graphics.

I'm sure if you had two nVidia cards arranged to have one process the physics while the other the graphics or an nVidia card + AGEIA PPU it would be similar. Hence, it probably wouldn't slow down to the single digits.

edit: spelling

Out of curiosity, what's the CPU usage like in Mirror's Edge? In other words, given a decent quad-core CPU, does it serve any benefit for enhancing physics? Or does the game simply revert to low-detail physics in the absence of dedicated PhysX HW?
 