ATI Havok GPU physics apparently not as dead as we thought

Page 10 - AnandTech Forums

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: josh6079
Yes, indeed, it is. But I was talking about CUDA and OpenCL.
And I was talking about PhysX and Havok, based on Godfrey Cheng's statements:

  • AMD says PhysX will die

    "There is no plan for closed and proprietary standards like PhysX," said Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."
So again, you don't see anything wrong with AMD publicly condemning a closed and proprietary standard like PhysX, when they're actually supporting their own preferred closed and proprietary standards instead?

I'm confused. Dave Baumann clarified that they'd be happy to use either Havok or PhysX. So long as it's not through CUDA, they're fine.

The OP showed Havok running on Radeons through OpenCL. Had PhysX been available through OpenCL, why wouldn't they have used it?
And his comment from a few days ago doesn't absolve AMD of statements made over the last 9 months that were clearly deceptive and contradictory; it just offers clarity going forward.

Personally, I'm not sure. Cheng, who has more expertise and knowledge of PC history than I do, commented on whether or not it was a better alternative:

To summarize, it seems some believe that "proprietary interfaces", like CUDA, will be eclipsed by "collaborative industry interfaces" like OpenCL.

Seeing as there is a historical basis for such an assumption (e.g., his comments concerning S3 Metal, 3dfx's Glide, and Cg), it seems valid to say that resources would be better spent on "collaborative industry interfaces" like OpenCL than on CUDA in the short term.

To what degree this has really "hurt" GPU-accelerated physics advancements is controversial.
I think the problem is Godfrey wasn't keeping tabs on what he was saying in the press, given he turns right around and claims proprietary standards like Havok and DirectX11 are somehow superior.

If nVidia had released PhysX and PhysX alone to ATi, could ATi have used their Stream instead?
Most likely not, unless they wanted to purchase the source code and recompile it for their own Stream runtime, as well as write their own low-level driver API. The much easier route would've been to use CUDA's low-level driver API, which is essentially what they ended up doing anyway by writing an OpenCL driver API.

So nVidia didn't force CUDA on ATi? They left both PhysX and CUDA mutually exclusive and free to take one or the other?
Of course they're mutually exclusive... again, how do you think PhysX has been running on x86, PS3, Xbox 360, Wii, and everything else, even before CUDA existed? PhysX is not tied to CUDA in any way, shape or form. If AMD wanted to use PhysX and not CUDA, they could've paid $50k or whatever it is for the source code and recompiled it to run with their own Stream/Brook+ driver API, or whatever they were using at the time.

OpenCL is not a "copy" of CUDA though. From a developer's standpoint, they are different and it depends on what they want.
Actually that slide shows pretty clearly how similar they are. OpenCL essentially takes the place of the low-level CUDA Driver API with C for CUDA being the high-level runtime API. It also looks as if the low-level CUDA/OpenCL Driver API will only use a different compiler with most of the underlying C code remaining the same, probably with different headers (the ones listed on the OpenCL site).

CUDA Hierarchy Diagram
Tim Murray @ Nvidia Developer Forums
OpenCL API Registry (Spec .pdf and Header file)
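The claimed similarity is easy to illustrate with a toy sketch. The Python below rewrites a simple vector-add kernel written in C for CUDA into OpenCL C using a tiny keyword map; the map is a simplification for illustration, not a real source-to-source translator, but it shows how little of the kernel body actually changes:

```python
# Toy illustration: the same vector-add kernel expressed for CUDA and
# OpenCL differs mainly in qualifiers and index built-ins, not in the
# body of the C code. The mapping below is a simplification, not a
# real source-to-source translator.

CUDA_KERNEL = """
__global__ void vec_add(const float *a, const float *b, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = a[i] + b[i];
}
"""

# CUDA construct -> rough OpenCL C equivalent
KEYWORD_MAP = {
    "__global__ void": "__kernel void",
    "const float *": "__global const float *",
    "float *out": "__global float *out",
    "blockIdx.x * blockDim.x + threadIdx.x": "get_global_id(0)",
}

def cuda_to_opencl(src: str) -> str:
    """Naively rewrite CUDA-specific tokens into their OpenCL spellings."""
    for cuda_form, ocl_form in KEYWORD_MAP.items():
        src = src.replace(cuda_form, ocl_form)
    return src

if __name__ == "__main__":
    print(cuda_to_opencl(CUDA_KERNEL))
```

The kernel body (`out[i] = a[i] + b[i];`) survives untouched; only the entry-point qualifier, the pointer address-space qualifiers, and the work-item index built-in change.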

I'm not really saying it's anyone's fault because I don't know what exactly was hurt.

Processing physics on processors other than the CPU has been around since Ageia's PPU, but physics themselves haven't really done anything spectacular in light of that.
Again, look at installed user-base and it should become obvious as to why they didn't do anything spectacular in that time frame. Ageia's PPU sold ~100-200k units total. All-time. Nvidia increased that number exponentially overnight to 70 million when they released their CUDA PhysX driver, which has since grown to 100+ million.

This addresses some of the questions I've personally had regarding physics in general. Does all physics really need a GPU to calculate it? One as powerful as my 9800 GTX+ or an 8800 GT? Why haven't quad-core processors become almost a necessity in PC gaming yet? Why have there been so few improvements to physics, regardless of what is processing them, over the past 5 years or so?
It simply comes down to hardware limitations. GPUs excel at highly parallel instructions and floating point math, areas that have always been weaknesses historically on the CPU. As a comparison, the world's fastest desktop CPU Core i7 965 is capable of 70 Gflops. GT200 is capable of 933 Gflops. RV770 is capable of 1.2 Tflops. These aren't just paper specs either, these performance gains have been realized in just about every application suited to highly parallel computing (F@H, Video transcoding, physics, etc).
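Taking the quoted peak numbers at face value, the gap is easy to quantify. A back-of-the-envelope sketch (peak throughput only; real application speedups vary):

```python
# Back-of-the-envelope comparison of the peak single-precision numbers
# quoted above, in Gflops. These are peak throughput figures, not
# measured application performance.
PEAK_GFLOPS = {
    "Core i7 965 (CPU)": 70,
    "GT200 (Nvidia GPU)": 933,
    "RV770 (AMD GPU)": 1200,
}

cpu = PEAK_GFLOPS["Core i7 965 (CPU)"]
for name, gflops in PEAK_GFLOPS.items():
    print(f"{name}: {gflops} Gflops ({gflops / cpu:.1f}x the CPU)")
```

Even on paper, both GPUs sit more than an order of magnitude above the fastest desktop CPU of the day, which is the whole argument for offloading highly parallel work like physics.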

The GDC showed some pretty cool stuff. I really liked the cloth demo Ben linked earlier. But why has it taken so long to even get here?
As answered above, CPUs simply aren't capable of adequately handling the required calculations.

What is processing them isn't the answer, because that's varied from CPUs, PPUs, and GPUs alike recently. Couple that with the fact that the G80 and derivatives dominated the PC gaming market for so long, and I don't see any reason why physics are still where they are.
I find all of what you've written here extremely ironic given you don't see anything wrong with AMD's actions and press releases over the last 8-9 months. In any case, the answer is obvious as to why we haven't seen it sooner, the middleware and API didn't support GPUs until 9 months ago when Nvidia changed the landscape of physics acceleration overnight. And if they waited another 6 months for OpenCL instead of using CUDA, we'd see even less results than what we have now.

Oh, I know it's similar. I just know that it's not identical, and where there are differences there can be reasons for choices. ATi made theirs for reasons that lie within those differences.
Actually it does look like they're nearly identical, meaning the underlying C code is the same; they just use a different compiler and header file (again, see previous links).
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: chizow
So again, you don't see anything wrong with AMD publicly condemning a closed and proprietary standard like PhysX, when they're actually supporting their own preferred closed and proprietary standards instead?

No, no. Don't misconstrue me. I think I understand how ATi picking Havok physics is contradictory behavior when comparing Havok and PhysX alone.

It's just that I think I can understand their "picking between two poisons" if Havok was willing to give their SDK via OpenCL, especially if ATi is assuming CUDA will be non-existent in the near future.

I'm not saying I agree or disagree with their assumptions, but at least they gave reasons for why they had them based on history instead of sentiments.

And his comment from a few days ago doesn't absolve AMD of statements made over the last 9 months that were clearly deceptive and contradictory; it just offers clarity going forward.

I understand you feel the reasons given were deceptive, and that's well within enthusiasts' rights. Others feel the same about Ubisoft's reasoning for pulling DX10.1 support. I don't want to go astray into that can of worms; I'm just using it as an analogy of official claims vs. enthusiast doubt.

But again, had PhysX been available through OpenCL for the aforementioned demo, why wouldn't they have used it?

I think the problem is Godfrey wasn't keeping tabs on what he was saying in the press, given he turns right around and claims proprietary standards like Havok and DirectX11 are somehow superior.

I think many would agree that, currently, Havok is superior in many regards outside of GPU-accelerated physics. And that advantage is deteriorating with the arrival of OpenCL.

So nVidia didn't force CUDA on ATi? They left both PhysX and CUDA mutually exclusive and free to take one or the other?
Of course they're mutually exclusive... again, how do you think PhysX has been running on x86, PS3, Xbox 360, Wii, and everything else, even before CUDA existed?

Yes, I understand that it doesn't need CUDA to function. But it running on x86, PS3, Xbox 360, Wii, etc. are all examples of non-GPU acceleration, are they not?

Seeing as nVidia used CUDA to enable PhysX's GPU-accelerated functions, did they not couple both PhysX and CUDA together when offering it to ATi's Radeons?

They were allowing complete exclusion from CUDA if ATi were to use PhysX?

If AMD wanted to use PhysX and not CUDA, they could've paid $50k or whatever it is for the source code and recompiled it to run with their own Stream/Brook+ driver API, or whatever they were using at the time.

But that's not what bit-tech claimed three days ago. They claimed that nVidia was coupling PhysX with CUDA as far as GPU-accelerated physics are concerned. Link

bit-tech:
Nvidia has so far guarded its GPU PhysX technology behind a large CUDA-shaped wall, meaning that anyone who wants to use it has to use CUDA too.

Also, isn't Havok's physics free? Or does the tool-set lie separate from the source code as far as cost is concerned?

Originally posted by: chizow
OpenCL essentially takes the place of the low-level CUDA Driver API with C for CUDA being the high-level runtime API. It also looks as if the low-level CUDA/OpenCL Driver API will only use a different compiler with most of the underlying C code remaining the same, probably with different headers (the ones listed on the OpenCL site).

CUDA Hierarchy Diagram
Tim Murray @ Nvidia Developer Forums
OpenCL API Registry (Spec .pdf and Header file)

Interesting. Again, thanks for the informative links.

Again, look at installed user-base and it should become obvious as to why they didn't do anything spectacular in that time frame. Ageia's PPU sold ~100-200k units total. All-time. Nvidia increased that number exponentially overnight to 70 million when they released their CUDA PhysX driver, which has since grown to 100+ million.

That just furthers my questioning though. With all of that support for GPU-accelerated physics, with TWIMTBP titles being such a large chunk of the recent gaming library, AND with the G80 and its derivatives being the dominant GPUs for 2-3 years, why have features pertaining to physics remained so static?

Demos from GDC 09 were the first thing I've seen in a while that's been remotely impressive, imo.

In any case, the answer is obvious as to why we haven't seen it sooner, the middleware and API didn't support GPUs until 9 months ago when Nvidia changed the landscape of physics acceleration overnight. And if they waited another 6 months for OpenCL instead of using CUDA, we'd see even less results than what we have now.

That's probably true. Could one also say that OpenCL was necessary because of Havok and it being the primary choice of the majority of developers?

I find all of what you've written here extremely ironic given you don't see anything wrong with AMD's actions and press releases over the last 8-9 months.

Oh no, no. Like I said, I can see how their unwillingness to allow CUDA has been a factor in delaying GPU-accelerated physics.

But, I also understand that business is politics, and nVidia was unwilling to cleave PhysX from CUDA when regarding Radeon adoption.

Had nVidia been willing to let ATi go down the "path less traveled" in getting it to function through Stream, who's to say it wouldn't have happened sooner?

To me, these combined factors explain the delay.

Had I been in ATi's position, I would have probably taken the bait. What Godfrey Cheng termed as "...artificially disadvantag[ing] our platforms in relation to theirs" seemed more like fluff. He really needs to clarify what exactly he meant by that.

Actually it does look like they're nearly identical, meaning the underlying C code is the same; they just use a different compiler and header file (again, see previous links).

Yeah, it seems you're correct about their architectural similarities, but there are also "semantic" differences (i.e., open vs. proprietary).
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: josh6079
No, no. Don't misconstrue me. I think I understand how ATi picking Havok physics is contradictory behavior when comparing Havok and PhysX alone.

It's just that I think I can understand their "picking between two poisons" if Havok was willing to give their SDK via OpenCL, especially if ATi is assuming CUDA will be non-existent in the near future.

I'm not saying I agree or disagree with their assumptions, but at least they gave reasons for why they had them based on history instead of sentiments.
And like I said from the very beginning, I wouldn't have a problem if AMD came out and said they didn't want to support a competitor's IP and they felt the horse they were backing was better. That happens all the time in this industry, no problem. Just don't claim you're not supporting it for reasons that are disingenuous and then contradict yourselves by doing the exact same thing you just condemned.

I understand you feel the reasons given were deceptive, and that's well within enthusiasts' rights. Others feel the same about Ubisoft's reasoning for pulling DX10.1 support. I don't want to go astray into that can of worms; I'm just using it as an analogy of official claims vs. enthusiast doubt.

Heh, again with the Assassin's Creed example. The difference here is that Ubisoft's reason for pulling DX10.1 was valid: there were in fact verifiable rendering errors and omissions with AMD's DX10.1 parts. Nvidia has denied influencing the decision in any way, and there's really no reason to believe otherwise.

Curiously absent is any statement from AMD denying Ubisoft's claims about AMD's DX10.1 problems, affirming their belief that their DX10.1 path is rendering properly. Also curiously absent is any promise from AMD to work with Ubisoft in order to properly support DX10.1 in the future.

Assassin's Creed Controversy Discussion @ Tech Report

So again, Ubisoft's reason is valid based on verifiable facts, whereas AMD's reason for not supporting PhysX is contradictory, hypocritical and deceitful. Big difference there.

But again, had PhysX been available through OpenCL for the aforementioned demo, why wouldn't they have used it?
Probably for the same reasons they didn't adopt PhysX when the spec was ratified in December or in the months since: it didn't "make sense" yet or they were too busy working on their own "closed and proprietary standards".

I think many would agree that, currently, Havok is superior in many regards outside of GPU-accelerated physics. And that advantage is deteriorating with the arrival of OpenCL.
Perhaps, but that doesn't change the fact Godfrey Cheng's statements were contradictory and deceitful at the time when he claimed they wouldn't support PhysX because it was "closed and proprietary".

Yes, I understand that it doesn't need CUDA to function. But it running on x86, PS3, Xbox 360, Wii, etc. are all examples of non-GPU acceleration, are they not?

Seeing as nVidia used CUDA to enable PhysX's GPU-accelerated functions, did they not couple both PhysX and CUDA together when offering it to ATi's Radeons?

They were allowing complete exclusion from CUDA if ATi were to use PhysX?
Look, I've already broken it down numerous times to both you and others. PhysX is not tied to CUDA in any way, shape or form. I don't understand why people don't understand this, or simply refuse to. It's like they're clinging to this bit of FUD as if it were true or meaningful, when it's not.

CUDA is simply the architecture that includes the API, driver, and runtime libraries that sit between the hardware (GPU) and the application (PhysX). Nothing was stopping AMD from creating that entire HAL with their own API, drivers, and libraries if they wanted to, to suit whatever API or compute architecture they were pushing at the time. Again, there are plenty of examples showing this to be the case: the x86, PS3, Wii, and Xbox 360 versions of PhysX each use a different HAL, by necessity, to interface with different hardware. This has nothing to do with CUDA, as many of these PhysX solutions pre-date CUDA.
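The layering being described, with the physics middleware sitting on an interchangeable hardware abstraction rather than being welded to one compute API, can be sketched roughly like this (class names are hypothetical, not the actual PhysX SDK):

```python
# Rough sketch of the layering described above: the physics middleware
# talks to a hardware abstraction layer (HAL), and each platform or
# vendor supplies its own backend. Names are made up for illustration.
from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """HAL interface: whatever actually executes the simulation kernels."""
    @abstractmethod
    def integrate(self, positions, velocities, dt):
        ...

class CpuBackend(ComputeBackend):
    """x86/console path: plain loops on the CPU (pre-dates CUDA)."""
    def integrate(self, positions, velocities, dt):
        return [p + v * dt for p, v in zip(positions, velocities)]

class CudaBackend(ComputeBackend):
    """Nvidia's path: same engine, kernels dispatched through CUDA.
    (Stand-in only; here it just mimics the CPU result.)"""
    def integrate(self, positions, velocities, dt):
        return [p + v * dt for p, v in zip(positions, velocities)]

class PhysicsEngine:
    """The middleware layer: backend-agnostic, like PhysX atop a HAL."""
    def __init__(self, backend: ComputeBackend):
        self.backend = backend

    def step(self, positions, velocities, dt=0.016):
        return self.backend.integrate(positions, velocities, dt)

# The engine doesn't care which backend it runs on:
for backend in (CpuBackend(), CudaBackend()):
    engine = PhysicsEngine(backend)
    print(engine.step([0.0, 1.0], [1.0, -1.0]))
```

The point of the sketch is the seam: a vendor wanting the engine on new hardware writes a new backend behind the same interface, rather than adopting the other vendor's compute stack.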

But that's not what bit-tech claimed three days ago. They claimed that nVidia was coupling PhysX with CUDA as far as GPU-accelerated physics are concerned. Link

bit-tech:
Nvidia has so far guarded its GPU PhysX technology behind a large CUDA-shaped wall, meaning that anyone who wants to use it has to use CUDA too.

Also, isn't Havok's physics free? Or does the tool-set lie separate from the source code as far as cost is concerned?
And that's certainly one author's interpretation, probably based on similar misinformation from AMD quotes and interviews, despite all other information and technical resources indicating otherwise.

Also, from what I've seen, PhysX is free to use, whether personal or commercial, and $50k for the source code to port, troubleshoot or customize. Havok is $20k per commercial license; not sure if that's per seat or what, but it also includes the source code.

That just furthers my questioning though. With all of that support for GPU-accelerated physics, with TWIMTBP titles being such a large chunk of the recent gaming library, AND with the G80 and its derivatives being the dominant GPUs for 2-3 years, why have features pertaining to physics remained so static?

Demos from GDC 09 were the first thing I've seen in a while that's been remotely impressive, imo.
Again, I find it extremely difficult to reconcile the fact you ask these questions, yet don't see any fault in AMD's actions and statements over the last 9 months. Look at some timelines and it should become clear why. Nvidia acquired Ageia about a year ago and released CUDA PhysX drivers in July/August 2008, which should give you a reference starting point for GPU accelerated physics.

Also, you can find similar PhysX demos in the Nvidia Power Pack download like the Nurien demo. Havok's application is undoubtedly more impressive, but the underlying technology is the same and only enabled through the superior hardware capabilities of GPUs used for physics calculations.

That's probably true. Could one also say that OpenCL was necessary because of Havok and it being the primary choice of the majority of developers?
Absolutely not, as the scope and purpose of OpenCL extends far beyond physics or even GPUs. Also, as has been stated numerous times already by Ben, the spec was initially submitted by Apple, who has no direct interest in consumer-level physics or GPUs. You'll also notice Microsoft's support is not-so-curiously absent from the OpenCL standards board.

Oh no, no. Like I said, I can see how their unwillingness to allow CUDA has been a factor in delaying GPU-accelerated physics.

But, I also understand that business is politics, and nVidia was unwilling to cleave PhysX from CUDA when regarding Radeon adoption.

Had nVidia been willing to let ATi go down the "path less traveled" in getting it to function through Stream, who's to say it wouldn't have happened sooner?

To me, these combined factors explain the delay.

Had I been in ATi's position, I would have probably taken the bait. What Godfrey Cheng termed as "...artificially disadvantag[ing] our platforms in relation to theirs" seemed more like fluff. He really needs to clarify what exactly he meant by that.
Well again, everything I've linked to, including numerous interviews, SDKs and open source downloads, technical documents, etc., indicates nothing was stopping AMD from porting PhysX to [whatever API or architecture they wanted to] if they actually wanted to. Instead they chose to support an API that didn't yet exist, along with a closed and proprietary standard in Havok. No problem with that, companies do that all the time. But if AMD does that, they shouldn't turn around and condemn exactly what they themselves are doing with closed and proprietary standards, and claim their approach is somehow better, especially when they don't actually have anything to show.

At this point, based on the most recent quotes and developments I'm just glad GPU accelerated physics should see cross-support from both IHV and middleware vendors. But that certainly doesn't mean AMD is going to get off the hook for the last 8-9 months, especially now that they're promoting GPU accelerated Havok as the greatest thing since sliced bread.....just as I predicted they would months ago. I guess they're "ready" now and it finally "makes sense".
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: josh6079

I think many would agree that, currently, Havok is superior in many regards outside of GPU-accelerated physics.

How so? PhysX (outside of GPU) is used in a lot of top shelf games like Gears of War, Mass Effect, GRAW, Splinter Cell, etc.

I think it has exceeded Havok (outside of GPU) in many ways and is currently the only option for GPU acceleration.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
This is a Havok thread. So if you're going to talk PhysX, have the good sense to show video of what PhysX can do in AI, destruction, fluids, cloth, etc. Then we can make comparisons, rather than a bunch of cheap sales tactics.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: Wreckage
Originally posted by: josh6079

I think many would agree that, currently, Havok is superior in many regards outside of GPU-accelerated physics.

How so? PhysX (outside of GPU) is used in a lot of top shelf games like Gears of War, Mass Effect, GRAW, Splinter Cell, etc.

That's true. But (also outside of GPU) Havok is used in a plethora of titles - more than there are titles using PhysX.

Granted, part of that is because of its age.

I think it has exceeded Havok (outside of GPU) in many ways and is currently the only option for GPU acceleration.

I'm not saying I know which is "superior" (again, outside of GPU-AP).

I'm just saying that it seems like developers currently enjoy the bundling of Havok's utilities/tools more than PhysX's.

Even in this thread we had one give us his informed opinion:

Originally posted by: Modelworks
They [previous software-based Havok titles] don't have to be re-done. That is one thing they did do right in Havok. They made sure that each version remained compatible in the API. The only real difference between the versions is in how inverse kinematics is calculated, and that is very easy to adapt to a GPU.

Only Source? It is still used by Gamebryo and UE3. Even Blizzard is using it in Diablo 3 and Starcraft 2. It is up to the developer who purchases the license to decide what they will use. Most developers do not purchase an engine and use it for the physics, sound, graphics, networking, etc. They purchase what they need to produce the best end result from different sources and combine them into the final product.

So the renderer might be Gamebryo but the sound is Miles.

It is also supported in the major content creation applications at a very high level. PhysX is just physics. But Havok also has behavior.

I can not only create things that use physics but can also give them their own AI without ever leaving the API.

If I use PhysX then I would still have to use Havok to do the AI and pass arguments back and forth between the two API. More work for the same result....

Havok works best as a package covering animation, AI, behavior and physics. If I am working on a project and I use PhysX for the physics I still would need Havok for the AI , behavior and animation. I then have to deal with two API instead of one and also work out how to pass the information between the two. Or I can just use Havok from the start where everything was designed to work together. It is much easier for the developer....

I have used Havok for work for nearly 7 years. I like PhysX too, but Havok is just a much more mature product. It has been around longer and because of that has had time to grow and evolve....

There really is nothing that PhysX can do that Havok cannot and vice versa. When it comes down to it I would use Havok over PhysX because of the ease of use. PhysX is getting better, but it is not there yet. Havok is also supported fully in the major content creation apps for games and that alone makes it a priority over PhysX for most of the developers.

As I've mentioned earlier though, I expect this gap to shrink rapidly as time goes on.

PhysX and Havok have already begun to pick up pace in their competition. Again, as I said earlier, cheers to them. :beer:; ATi is under 21.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: BFG10K
Originally posted by: Frank Encruncher

Besides all that, most physics are terrible anyway.
Most will agree with me: until I can drop a wall on an opponent using a tank shell or fireball (genre of your choice), physics are wasteful.
Check out the interviews and trailers for Red Faction 3. All static fixtures (buildings, bridges, walls, etc) are destructible because they use real physics principles to perform load-bearing calculations. If the engine deems you've done enough damage to the right places, the entire building can collapse.

When developers first started implementing buildings, a lot of buildings instantly collapsed because they weren't "built" properly, and the engine decided they weren't strong enough to support their own weight. This required a dramatic change in the way of thinking about level design.

The irony is that the whole thing runs under software physics: an in-house engine for the calculations, and Havok for the actual physics effects. It's also PC and console compatible.

Which is exactly the kind of physics I want to see in games. I remember the original Red Faction also had interactively destructible environments, though in a more limited context, but nevertheless it was an integrated part of gameplay, not some physics eye-candy added as an afterthought. I find that kind of physics application much more beneficial and impressive than showing off fancy glass-shattering effects.
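The load-bearing idea described above can be sketched as a toy model: each structure tracks what its supports can carry, and once enough are destroyed the survivors can no longer bear the weight. Everything here (class, numbers) is made up for illustration; it has nothing to do with the actual Red Faction or Havok code:

```python
# Toy load-bearing model in the spirit of the mechanic described above:
# a structure collapses when its surviving supports can no longer carry
# its weight. Numbers and names are invented for illustration.
class Structure:
    def __init__(self, weight, support_strengths):
        self.weight = weight
        self.supports = list(support_strengths)  # load each support can bear

    @property
    def standing(self):
        # Collapse check: total remaining capacity vs. the structure's weight.
        return sum(self.supports) >= self.weight

    def destroy_support(self, index):
        self.supports.pop(index)
        return self.standing

bridge = Structure(weight=100, support_strengths=[40, 40, 40, 40])
print(bridge.standing)            # 160 capacity vs 100 weight: standing
bridge.destroy_support(0)         # 120 vs 100: still standing
print(bridge.destroy_support(0))  # 80 vs 100: collapses
```

The interesting gameplay consequence follows directly from the model: it doesn't matter *which* weapon removes a support, only that enough capacity is gone, which is what makes the destruction part of the mechanics rather than a canned effect.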
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Wreckage
Well back on topic... I doubt we will see a game using Havok + GPU this year or even next year. So we are mostly just discussing speculation at this point.

Well mister, you got a link for that? Because you just broke the rules big time. Find a link that supports that statement or remove the lie.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
X86 does not run native on Leopard

I honestly can't continue with this discussion; I have a hard time believing that any person capable of registering on these forums can be as ignorant as your posts make you seem. Do some research and figure out what you are talking about. I will give you a few quick pointers:

Windows isn't x86

x86 is an instruction set processors use

Windows isn't x86

x86 is what compilers output when they are creating code to run on x86 based processors

Windows isn't x86

New Macs run x86 based processors

And finally, Windows isn't x86


Somebody help this poor boy out. Windows is an OS that uses programs to do work or play, whatever.

If you take Sandy Bridge and install Windows, then install another program, it must be ported or it won't run on Sandy Bridge. After it's ported, then it will run. Likely on SNOW also.

For x86 to run on Sandy, SSE must be ported.

Ya, the front end takes the code and converts it for the backend. Is this a recent discovery you made? You bolded it.

So with AVX, SSE is recompiled, then it goes to the new compiler, then to the backend. Question is, what's the backend? It must be different. Is that x86 processor you're talking about the backend: RISC, CISC, or what is it?

 

waffleironhead

Diamond Member
Aug 10, 2005
6,924
437
136
Originally posted by: Nemesis 1
Originally posted by: Wreckage
Well back on topic... I doubt we will see a game using Havok + GPU this year or even next year. So we are mostly just discussing speculation at this point.

Well mister, you got a link for that? Because you just broke the rules big time. Find a link that supports that statement or remove the lie.

You don't need a link to speculate. If you did, you would have been banned long ago.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
X86 does not run native on Leopard

I honestly can't continue with this discussion; I have a hard time believing that any person capable of registering on these forums can be as ignorant as your posts make you seem. Do some research and figure out what you are talking about. I will give you a few quick pointers:

Windows isn't x86

x86 is an instruction set processors use

Windows isn't x86

x86 is what compilers output when they are creating code to run on x86 based processors

Windows isn't x86

New Macs run x86 based processors

And finally, Windows isn't x86

Rollo used to pull this same shit when he was cornered. It worked too, until everyone found out his reasons for the attacks. You're using the same tactic.

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: waffleironhead
Originally posted by: Nemesis 1
Originally posted by: Wreckage
Well back on topic... I doubt we will see a game using Havok + GPU this year or even next year. So we are mostly just discussing speculation at this point.

Well mister, you got a link for that? Because you just broke the rules big time. Find a link that supports that statement or remove the lie.

You don't need a link to speculate. If you did, you would have been banned long ago.

I bet I can find a link that says Havok on GPU will be out before 2011. Likely from Havok themselves. That makes it a bad post. I will take time to find the link.

This isn't a speculation-based thread. Read the TITLE. You're defending WRONG.

 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
Nemesis 1 - You need to knock it off with calling out various people in this thread, as well as the nonsensical rants about things that are unrelated to the thread.

AmberClad
Video Moderator
 

waffleironhead

Diamond Member
Aug 10, 2005
6,924
437
136
Originally posted by: Nemesis 1
Originally posted by: waffleironhead
Originally posted by: Nemesis 1
Originally posted by: Wreckage
Well back on topic... I doubt we will see a game using Havok + GPU this year or even next year. So we are mostly just discussing speculation at this point.

Well mister, you got a link for that? Because you just broke the rules big time. Find a link that supports that statement or remove the lie.

You don't need a link to speculate. If you did, you would have been banned long ago.

I bet I can find a link that says Havok on GPU will be out before 2011. Likely from Havok themselves. That makes it a bad post. I will take time to find the link.

This isn't a speculation-based thread. Read the TITLE. You're defending WRONG.

You can still be skeptical that it will ever surface in that time frame. No need for a link.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: waffleironhead
Originally posted by: Nemesis 1
Originally posted by: Wreckage
Well back on topic... I doubt we will see a game using Havok + GPU this year or even next year. So we are mostly just discussing speculation at this point.

Well mister, you got a link for that? Because you just broke the rules big time. Find a link that supports that statement or remove the lie.

You don't need a link to speculate. If you did, you would have been banned long ago.

Actually, if I go against the OP's title, it's not often that I don't get a link to back up my stance. If I am going along with the title, that leaves a little wiggle room and I don't need a link.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: AmberClad
Nemesis 1 - You need to knock it off with calling out various people in this thread, as well as the nonsensical rants about things that are unrelated to the thread.

AmberClad
Video Moderator

Ok, Amber. I kinda missed where I called anyone out in this thread. I did call viditor out in CPUs, but in a fun way. As far as off-topic goes, I am staying with Havok and CL. AMD isn't the only player affected here. We have Apple, Intel, and various CPUs that will handle physics loads differently. All part of the Intel/Havok/AMD/Apple future. CL is a big part of that.

Now, I don't see where PhysX belongs here. When people make statements about this or that, is it unreasonable to ask to see video proof of those statements? After all, Meteor is a strong vid.

If there's better footage out there, let's see it.

On the calling-out thing, I need help on this one. Please point it out so I can correct my mistake.

 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
Callouts:
Originally posted by: Nemesis 1
God, ya want to know about PhysX, start a thread on Havok. What's NV pay you guys?

Originally posted by: Nemesis 1
Rollo used to pull this same shit when he was cornered. It worked, too, until everyone found out his reasons for the attacks. You're using the same tactic.

You're continuing to derail this thread by arguing about the issue of providing links and arguing about whether or not you're violating the forum guidelines. If you have anything else not pertinent to the thread topic to say, take it to PM.

AmberClad
Video Moderator
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Amber, even though I said I wouldn't reply in this thread again, I must. I see where I got Ben confused as to what I was saying. It was my fault, as I chose my words incorrectly.

When I talk Intel or MS, it's always x86 or x86-64. So, this porting thing: let me reword it correctly and accurately. My bad.

I understood this; I just didn't write it correctly. On Sandy Bridge, when you run an MS OS on it and you want to use a program that uses SSE, that program has to be ported (recompiled) or it will not run on Sandy. There, I hope that clears that part up. I'm just so used to calling those programs x86. My error. The easiest port is all SSE2 with a prefix of (vec).

Now, back on topic. I don't care what you all think; my honest evaluation is that some of what Havok is bringing is exemplified by what's in the Meteor vid. I am happy. It's the best example I have seen to date, and I think ATI/Havok are going to have a nice platform to work off of. Should be great. I see Intel Larrabee/Havok having great physics as well, with a great platform.

NV PhysX looks interesting. The stuff I saw in the thread they posted, I liked; what I saw looked OK. And I said so and left, as I should, just being respectful of an NV thread as Virge requested some time ago. Is that not so, Virge? I hope that's not a rule violation.
 

Denithor

Diamond Member
Apr 11, 2004
6,300
23
81
We interrupt this flame-fest for the following news (courtesy of xbitlabs).

Havok, a wholly owned subsidiary of Intel Corp. and a provider of various middle-ware for game development, along with ATI, graphics products group of Advanced Micro Devices, demonstrated the industry's first cross-platform physics effects engine that enables realistic behaviour of cloth, fabric, hair, etc. Thanks to OpenCL, Havok Cloth middle-ware can be processed on both ATI and Nvidia graphics chips.

"Havok is committed to delivering highly optimized cross-platform solutions to our game customers and we are pleased to be working with AMD to ensure that gamers enjoy a great user experience when running Havok-powered games on AMD platforms. Unlocking the parallel processing capability of AMD's hardware provides real advantages to our customers, and the greater the total computing resources available, the better the gaming experience developers can deliver," said David Coghlan, vice president of development for Havok.

At Game Developers Conference 2009, Havok is showcasing industry-changing technical breakthroughs in real-time clothing simulation, marking the first time that movie-quality cloth has been seen on in-game characters. The ease of integration of the product is highlighted by the fact that the first games to feature Havok Cloth are launching in Spring 2009. It should be noted that those titles are based on Havok Cloth released in early 2008 and are not OpenCL-based.

Havok Cloth is a platform-optimized run-time software development kit and toolset that significantly increases the realism of game characters and environments by enabling character designers to add true-to-life, physically-based motion to garments, environmental objects and other deformable items like hair, bellies or tails. Havok Cloth also minimizes the time that artists spend on animating the behavior of character garments and environmental cloth in games.

Thanks to optimization of the latest Havok Cloth version for OpenCL standard, the new cloth simulation engine can utilize both graphics processing units that support OpenCL as well as traditional central processing units. Presently Havok Cloth can take advantage of ATI Stream technology supported by Radeon graphics chips. Later on it will also gain support of Nvidia GeForce, S3 Graphics Chrome and Intel Larrabee graphics processors.

But wait, there's more: YouTube video: Havok on AMD GPU

Most impressive demo I've seen yet, that could make for some superb effects in clothing, banners, curtains, etc in games.

I wonder if they could also use this kind of simulation for clothing designers to preview their work before prototyping? You would have to factor in fabric density, drape, stretch, etc.
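For the curious, the core of that kind of fabric simulation boils down to a mass-spring system. Here's a toy Python sketch of the idea — it has nothing to do with Havok's actual (proprietary) solver, and the `density` and `stiffness` parameters are just my own stand-ins for fabric weight and stretch resistance:

```python
# Toy mass-spring cloth: a grid of particles connected by springs,
# with the top row pinned so the fabric drapes under gravity.
import math

class Cloth:
    def __init__(self, rows, cols, spacing=0.1, density=0.2, stiffness=400.0):
        # density ~ kg per particle (fabric weight); stiffness ~ stretch resistance
        self.rows, self.cols = rows, cols
        self.mass = density
        self.k = stiffness
        self.rest = spacing
        self.pos = [[x * spacing, 0.0, -y * spacing]
                    for y in range(rows) for x in range(cols)]
        self.vel = [[0.0, 0.0, 0.0] for _ in range(rows * cols)]
        # pin the top row so the cloth hangs instead of free-falling
        self.pinned = set(range(cols))
        # structural springs: each particle to its right and lower neighbours
        self.springs = []
        for y in range(rows):
            for x in range(cols):
                i = y * cols + x
                if x + 1 < cols:
                    self.springs.append((i, i + 1))
                if y + 1 < rows:
                    self.springs.append((i, i + cols))

    def step(self, dt=0.005, gravity=-9.81, damping=0.02):
        forces = [[0.0, gravity * self.mass, 0.0] for _ in self.pos]
        for i, j in self.springs:
            dx = [self.pos[j][a] - self.pos[i][a] for a in range(3)]
            length = math.sqrt(sum(d * d for d in dx)) or 1e-9
            # Hooke's law: force along the spring toward its rest length
            f = self.k * (length - self.rest)
            for a in range(3):
                forces[i][a] += f * dx[a] / length
                forces[j][a] -= f * dx[a] / length
        # semi-implicit Euler with simple velocity damping
        for i in range(len(self.pos)):
            if i in self.pinned:
                continue
            for a in range(3):
                self.vel[i][a] = (self.vel[i][a] + forces[i][a] / self.mass * dt) * (1 - damping)
                self.pos[i][a] += self.vel[i][a] * dt

cloth = Cloth(rows=8, cols=8)
for _ in range(500):
    cloth.step()
# after settling, unpinned particles have sagged below the pinned row
```

The point of OpenCL here is that every particle's force/integration step is independent, so each one can be a work-item on whatever device — GPU or CPU — the runtime exposes. Real fabric preview would also need bending springs and self-collision, which is where the engine earns its keep.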

Now, we return you to your regularly scheduled flamewar.

/ducks for cover
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Denithor


Most impressive demo I've seen yet, that could make for some superb effects in clothing, banners, curtains, etc in games.

I wonder if they could also use this kind of simulation for clothing designers to preview their work before prototyping? You would have to factor in fabric density, drape, stretch, etc.

Now, we return you to your regularly scheduled flamewar.

/ducks for cover



Been using the cloth simulations for a bit now, and yes, it is amazing. It really was the last edge that PhysX had. I've been running them using Havok on Nvidia hardware, at that.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Denithor
But wait, there's more: YouTube video: Havok on AMD GPU

Most impressive demo I've seen yet, that could make for some superb effects in clothing, banners, curtains, etc in games.

I wonder if they could also use this kind of simulation for clothing designers to preview their work before prototyping? You would have to factor in fabric density, drape, stretch, etc.

Now, we return you to your regularly scheduled flamewar.

/ducks for cover
That demo was linked last week by wlee15, at a higher frame rate as well. But yes, no one has said it wasn't an impressive demo, and it certainly shows the potential of hardware physics in games, which is why some of us have been "pimping" accelerated physics since Day 1.

Originally posted by: Modelworks
Been using the cloth simulations for a bit now, and yes, it is amazing. It really was the last edge that PhysX had. I've been running them using Havok on Nvidia hardware, at that.
Now that's awesome news. :thumbsup:
 