ATI Havok GPU physics apparently not as dead as we thought


Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Originally posted by: Wreckage
Originally posted by: Zstream

Considering Havok can run on the PS2, Xbox, 360, PS3, GC and the PC, it is easy to prove it runs better, if not much more optimized.

PhysX can run on all of those as well... it even runs on the iPhone.

Not sure what your point is.

When did they start supporting the Xbox and PlayStation 2? You still need a CUDA-enabled card to accelerate PhysX. Just because the SDK is available does not mean acceleration is running. Who would want to run PhysX without hardware acceleration?

It does run on ATI hardware; the SDK is now available for the Wii.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: Zstream
Originally posted by: Wreckage
Originally posted by: Zstream

Considering Havok can run on the PS2, Xbox, 360, PS3, GC and the PC, it is easy to prove it runs better, if not much more optimized.

PhysX can run on all of those as well... it even runs on the iPhone.

Not sure what your point is.

When did they start supporting the Xbox and PlayStation 2? You still need a CUDA-enabled card to accelerate PhysX. Just because the SDK is available does not mean acceleration is running. Who would want to run PhysX without hardware acceleration?

It does run on ATI hardware; the SDK is now available for the Wii.
I don't recall it ever being announced for those platforms. They have SDKs for the current-gen consoles, but not the last-gen ones.

And the Wii implementation is a software (CPU-only) implementation, so it's a bit of a misnomer to say it runs on ATI hardware. The GPU in the Wii isn't fully programmable anyhow, so we'd be talking about something that's not possible.

Oh, and I'm sure plenty of people would want to use PhysX without hardware (GPU) acceleration. It's a solid physics system, and as far as I know it's in the same league as Havok performance-wise.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,980
126
Originally posted by: Zstream

Who would want to run Physx without hardware acceleration?
Most of the PhysX games at nzone run under software. I have several such titles: UT3 (I keep hardware physics disabled), Jericho, Hell's Highway, and MoH Airborne. They run fine on the CPU and have some decent physics effects.

PhysX is by no means unusable on the CPU, just like Havok isn't.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Originally posted by: BFG10K
Originally posted by: Zstream

Who would want to run Physx without hardware acceleration?
Most of the PhysX games at nzone run under software. I have several such titles: UT3 (I keep hardware physics disabled), Jericho, Hell's Highway, and MoH Airborne. They run fine on the CPU and have some decent physics effects.

PhysX is by no means unusable on the CPU, just like Havok isn't.

Oh, I agree; it's just that all the optimization effort for PhysX seems to go into the GPU path (IMO). That might not be true, but Havok at least appears to run decently on the CPU.
 

Frank Encruncher

Junior Member
Mar 28, 2009
2
0
0
My post turned into a two-part discussion as I was writing it.

Part 1: ATI and Havok

Most of this discussion is moot as far as why ATI went with Havok for physics.
It had little to do with who has the best physics engine or the most game developers.

As they say in the movie, "Show me the money."

You have to look at these companies as whole entities: AMD, Intel and Nvidia.
AMD's decision was based on what will sell the most hardware, now and in the future.
There has been a major shift toward platform sales since AMD bought ATI and Intel started on Larrabee.
It wasn't so much a choice between PhysX and Havok as one between CUDA (GPU) and OpenCL (CPU/GPU).
Computer parts sales for gaming are a small part of these companies' business. They make their money on OEM sales, mostly to businesses buying servers, work desktops and workstations.
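
To make that CUDA (GPU-only) versus OpenCL (CPU and GPU) distinction concrete, here is a minimal, illustrative sketch of OpenCL 1.0 host code that asks the first platform for both a CPU device and a GPU device. It is a hypothetical example only (error handling is trimmed and what gets listed depends entirely on the installed driver), not anything AMD, Intel or Nvidia ships.

Code:
#include <stdio.h>
#include <CL/cl.h>   /* Khronos OpenCL 1.0 headers; on Mac OS X this is <OpenCL/opencl.h> */

int main(void) {
    cl_platform_id platform;
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platform found.\n");
        return 1;
    }

    /* Unlike CUDA, the same API can target CPU devices and GPU devices. */
    const cl_device_type types[] = { CL_DEVICE_TYPE_CPU, CL_DEVICE_TYPE_GPU };
    const char *labels[]         = { "CPU", "GPU" };

    for (int t = 0; t < 2; ++t) {
        cl_device_id device;
        cl_uint count = 0;
        if (clGetDeviceIDs(platform, types[t], 1, &device, &count) == CL_SUCCESS && count > 0) {
            char name[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("%s device available: %s\n", labels[t], name);
        } else {
            printf("No %s device exposed by this driver.\n", labels[t]);
        }
    }
    return 0;
}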

There is a big fight coming up over 3D model rendering: movie CGI, climate modelling, protein folding and other fields, all of which will require GPGPU servers. Those are sales that make gaming look insignificant.

AMD and Intel will be pushing to sell whole networks (business and home): servers, desktops, workstations and upcoming GPGPU servers.

Do you think AMD and Intel would let nVidia's CUDA become the standard for GPGPU servers? AMD and Intel will push hard to sell their own solutions, which use both the CPU and the GPGPU.

As to nVidia being a participant in OpenCL, that was before AMD bought ATI and before Larrabee was a gleam in somebody's eye.

It's a case of AMD and Intel allowing nVidia to play in the game, but not letting them write the rules. nVidia is slowly going to be squeezed down to a third player because it doesn't have a CPU/GPGPU solution.

It all comes down to money (actually revenue, the money coming in).
For AMD, the sales added because of PhysX using CUDA are minuscule compared to the sales possible using OpenCL.

Part 2: Physics General

Besides all that, most game physics are terrible anyway.
Most will agree with me: until I can drop a wall on an opponent using a tank shell or a fireball (genre of your choice), physics is wasted effort.
No one cares how nicely their opponent's robes flow before you kill them (or are killed by them), or how realistic that flag looks.
Ragdoll effects look cool until you die because you were watching the silly effect instead of playing the game.
When I shoot a paint can I want to see paint splatter all over the place, not watch the can bounce a couple of times.
The bridge demo was nice as a "cut scene" for cinematic effect, but I want to drop a building on somebody.

Most of this eye candy is a waste of game developers' time and a waste of consumers' money on effects they'll rarely notice.

GPU physics needs to give gamers more relevant effects, like dropping a wall on an enemy, before consumers will spend their money on it.

In the end,
AMD's use of Havok was an overt act to show nVidia it was going to be a third player in future hardware sales. You may see PhysX on ATI cards in the future, but using OpenCL, not CUDA. (That's what this is all about: NOT CUDA.)


 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky

Isn't that what this thread is all about? ATI working to support Havok on their GPUs?
No, you stated that "ATI could also be right up there with their own implementation". We are talking about Havok, which is owned by Intel.

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Frank Encruncher
My post turned into a two-part discussion as I was writing it.

Part 1: ATI and Havok

Most of this discussion is moot as far as why ATI went with Havok for physics.
It had little to do with who has the best physics engine or the most game developers.

As they say in the movie, "Show me the money."

You have to look at these companies as whole entities: AMD, Intel and Nvidia.
AMD's decision was based on what will sell the most hardware, now and in the future.
There has been a major shift toward platform sales since AMD bought ATI and Intel started on Larrabee.
It wasn't so much a choice between PhysX and Havok as one between CUDA (GPU) and OpenCL (CPU/GPU).
Computer parts sales for gaming are a small part of these companies' business. They make their money on OEM sales, mostly to businesses buying servers, work desktops and workstations.

There is a big fight coming up over 3D model rendering: movie CGI, climate modelling, protein folding and other fields, all of which will require GPGPU servers. Those are sales that make gaming look insignificant.

AMD and Intel will be pushing to sell whole networks (business and home): servers, desktops, workstations and upcoming GPGPU servers.

Do you think AMD and Intel would let nVidia's CUDA become the standard for GPGPU servers? AMD and Intel will push hard to sell their own solutions, which use both the CPU and the GPGPU.

As to nVidia being a participant in OpenCL, that was before AMD bought ATI and before Larrabee was a gleam in somebody's eye.

It's a case of AMD and Intel allowing nVidia to play in the game, but not letting them write the rules. nVidia is slowly going to be squeezed down to a third player because it doesn't have a CPU/GPGPU solution.

It all comes down to money (actually revenue, the money coming in).
For AMD, the sales added because of PhysX using CUDA are minuscule compared to the sales possible using OpenCL.

Part 2: Physics General

Besides all that, most game physics are terrible anyway.
Most will agree with me: until I can drop a wall on an opponent using a tank shell or a fireball (genre of your choice), physics is wasted effort.
No one cares how nicely their opponent's robes flow before you kill them (or are killed by them), or how realistic that flag looks.
Ragdoll effects look cool until you die because you were watching the silly effect instead of playing the game.
When I shoot a paint can I want to see paint splatter all over the place, not watch the can bounce a couple of times.
The bridge demo was nice as a "cut scene" for cinematic effect, but I want to drop a building on somebody.

Most of this eye candy is a waste of game developers' time and a waste of consumers' money on effects they'll rarely notice.

GPU physics needs to give gamers more relevant effects, like dropping a wall on an enemy, before consumers will spend their money on it.

In the end,
AMD's use of Havok was an overt act to show nVidia it was going to be a third player in future hardware sales. You may see PhysX on ATI cards in the future, but using OpenCL, not CUDA. (That's what this is all about: NOT CUDA.)

Your point 2: oh, God, no. If I am in a game, to become fully immersed the sound has to be excellent and what I see has to fool my senses. Take ray tracing: in real-world combat there are many elements that keep your head on your shoulders, and with ray tracing you get those elements to aid you in the game; the reflection of an enemy behind you is the best example of many, many. Who cares about how a scene looks? Anyone who wants a cheap thrill without spending excessive money on other forms of entertainment or hobbies. Immersion is about fooling the senses. Sound and graphics are immersion, and immersion is the holy grail.

If the world you're in has elements that you need to survive, you will use those extra elements, drawing you further into the game. If a scene can occupy your senses enough, your gaming experience should be greatly enhanced. But I'm an old dreamer, so who knows.

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Originally posted by: munky

Isn't that what this thread is all about? ATI working to support Havok on their GPUs?
No, you stated that "ATI could also be right up there with their own implementation". We are talking about Havok, which is owned by Intel.

Which means they could be right up there with their own GPU implementation of Havok in games that use Havok. Or in Mirror's Edge, if Nvidia ports PhysX to OpenCL.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky

Isn't that what this thread is all about? ATI working to support Havok on their GPUs?
No, you stated that "ATI could also be right up there with their own implementation". We are talking about Havok, which is owned by Intel.

Which means they could be right up there with their own GPU implementation of Havok in games that use Havok. Or in Mirror's Edge, if Nvidia ports PhysX to OpenCL.

Munky, the POINT is that HAVOK is owned by INTEL. It doesn't matter; Havok and ATI have been working on this stuff for over a year. It doesn't matter; AMD is choosing how to implement physics across the GPU/CPU. Given that we're talking OpenCL here, I would imagine that Havok/ATI will use the full platform's resources, including the CPU.

I personally won't get sucked into the NV vs. Intel fight: Havok is Intel, and CUDA PhysX is NV.

Because I don't care what NV does, I will never buy another NV product, so it doesn't matter. But having AMD/ATI and Intel on the same page does matter: it gives me choice and compatibility.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If you're talking about the Meteor clip, yeah, that's it.

Banners waving from a slight breeze.

Foliage moving heavily from big gusts of wind (banners still only moving from a slight breeze).

Meteors come in and knock down large stone pillars, right next to the plants, which still only move from the gusts of wind (banners still in the slight breeze).

What few particle effects the meteor creates are not affected by either the slight breeze or the heavy gusts of wind.

It isn't just that the Meteor clip wasn't a great physics demonstration; it was flat-out bad. Seriously, the cloth and dress demos that Havok released earlier were far more impressive. You wouldn't release a lighting demo where some of the lights work on some objects only and none of the lights work on every object, but that is exactly what we are seeing in this Meteor demo. It almost looks like everything was simply following a scripted animation sequence and wasn't really using physics at all (not saying that is what was happening, just that it's what it looked like).
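
For contrast, here is a rough, hypothetical sketch (every name in it is made up, and it is certainly not the demo's actual code) of what a consistent simulation step looks like: one wind field is sampled and applied to every dynamic body, so banners, foliage and debris all respond to the same gust instead of to hand-picked scripted motion.

Code:
#include <math.h>
#include <stdio.h>
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3  position;
    Vec3  velocity;
    float inv_mass;   /* 0 for static geometry such as the stone pillars */
    float drag_area;  /* how strongly this body catches the wind */
} Body;

/* One wind field for the whole scene: a gusty breeze along +x that varies in time and space. */
static Vec3 wind_at(Vec3 p, float t) {
    Vec3 w = { 4.0f + 3.0f * sinf(0.5f * t + 0.1f * p.z), 0.0f, 0.0f };
    return w;
}

/* One integration step: the SAME wind field drives banners, foliage and debris.
   If some objects ignored it while others reacted, the scene would look scripted, not simulated. */
static void step_bodies(Body *b, size_t count, float t, float dt) {
    for (size_t i = 0; i < count; ++i) {
        if (b[i].inv_mass == 0.0f) continue;              /* static objects never move */
        Vec3 w = wind_at(b[i].position, t);               /* shared gust, not per-object scripting */
        float k = b[i].drag_area * b[i].inv_mass * dt;    /* crude drag: accel ~ area * wind / mass */
        b[i].velocity.x += w.x * k;
        b[i].velocity.y += w.y * k - 9.81f * dt;          /* wind plus gravity */
        b[i].velocity.z += w.z * k;
        b[i].position.x += b[i].velocity.x * dt;
        b[i].position.y += b[i].velocity.y * dt;
        b[i].position.z += b[i].velocity.z * dt;
    }
}

int main(void) {
    Body scene[2] = {
        { {0.0f, 5.0f, 0.0f}, {0, 0, 0}, 1.0f / 0.2f,  0.8f },  /* light banner, big area   */
        { {2.0f, 1.0f, 0.0f}, {0, 0, 0}, 1.0f / 20.0f, 0.1f },  /* heavy debris, small area */
    };
    for (int frame = 0; frame < 3; ++frame) {
        step_bodies(scene, 2, frame / 60.0f, 1.0f / 60.0f);
        printf("banner x=%.3f  debris x=%.3f\n", scene[0].position.x, scene[1].position.x);
    }
    return 0;
}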

However AMD cannot change the fact that their primary competitor in the GPU market owns PhysX and implemented it on their HW, which makes GPU-PhysX support a lot less appealing to AMD from a business perspective.

Now they are faced with the possibility of nVidia releasing a less than optimal OpenCL version of PhysX instead of having a part in hand-tuning a CUDA version. Actually, they can only release a sub-optimal version in reality; that's just the nature of the platform (not that that is bad, just part of being cross-platform).

There is a big fight coming up over 3D model rendering: movie CGI, climate modelling, protein folding and other fields, all of which will require GPGPU servers. Those are sales that make gaming look insignificant.

The gaming market dwarfs high-end visualization. Ask SGI, Realizm, 3DLabs etc. about that. nVidia was able to move in and take over that market (which they now share with AMD) just by using a bit of spare die space on their gaming parts, thanks to economies of scale.

As to nVidia being a participant in OpenCL

Apple and nVidia built OpenCL on nVidia hardware, and one of nVidia's VPs is the chair of the OpenCL group. They aren't 'being allowed' to play; they made the game with Apple in the first place. Not sure how people can get it mixed up with Intel and AMD, who were sitting around doing nothing while others were building the standard.

No one cares how nicely their opponent's robes flow before you kill them (or are killed by them), or how realistic that flag looks.

That is utterly wrong; I care, for one. Not saying it is the deciding factor when I am buying a game, but it certainly is a factor. Would I own Crysis if it wasn't the de facto standard for visuals on the PC? Doubtful. It's a decent shooter, but we have tons of those. The fact that it has stunning visuals sets it apart. Games that utilize physics to improve visuals certainly benefit from them considerably (not that Crysis does much of that, though games like KZ2 do).

GPU physics needs to give gamers more relevant effects, like dropping a wall on an enemy, before consumers will spend their money on it.

Sadly, we need to wait until everyone is on board with support for hardware-accelerated physics before we can see that, unless we want to start seeing nVidia-only games released. I don't want to go back to the Glide days.

With ray tracing you get those elements to aid you in the game; the reflection of an enemy behind you is the best example of many, many.

Rasterizers are actually much better at that particular example. Off-screen reflections are an enormous increase in rendering requirements for a ray tracer, but trivial by comparison for a rasterizer.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Nemesis 1
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky

Isn't that what this thread is all about? ATI working to support Havok on their GPUs?
No, you stated that "ATI could also be right up there with their own implementation". We are talking about Havok, which is owned by Intel.

Which means they could be right up there with their own GPU implementation of Havok in games that use Havok. Or in Mirror's Edge, if Nvidia ports PhysX to OpenCL.

Munky, the POINT is that HAVOK is owned by INTEL. It doesn't matter; Havok and ATI have been working on this stuff for over a year. It doesn't matter; AMD is choosing how to implement physics across the GPU/CPU. Given that we're talking OpenCL here, I would imagine that Havok/ATI will use the full platform's resources, including the CPU.

I personally won't get sucked into the NV vs. Intel fight: Havok is Intel, and CUDA PhysX is NV.

Because I don't care what NV does, I will never buy another NV product, so it doesn't matter. But having AMD/ATI and Intel on the same page does matter: it gives me choice and compatibility.

No, the point is not only who owns what, but how they're using it. Intel may own Havok, but they're not doing anything with it so far. Nvidia, on the other hand, not only owns PhysX, but is using PhysX to do pretty much everything I'd expect at this point. I already explained how these decisions fit into the future of GPGPU and all the companies involved, and AMD knows it, as does Nvidia, while most people here only seem to be concerned that Nvidia has physics on their GPU but not AMD.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
If you're talking about the Meteor clip, yeah, that's it.

Banners waving from a slight breeze.

Foliage moving heavily from big gusts of wind (banners still only moving from a slight breeze).

Meteors come in and knock down large stone pillars, right next to the plants, which still only move from the gusts of wind (banners still in the slight breeze).

What few particle effects the meteor creates are not affected by either the slight breeze or the heavy gusts of wind.

It isn't just that the Meteor clip wasn't a great physics demonstration; it was flat-out bad. Seriously, the cloth and dress demos that Havok released earlier were far more impressive. You wouldn't release a lighting demo where some of the lights work on some objects only and none of the lights work on every object, but that is exactly what we are seeing in this Meteor demo. It almost looks like everything was simply following a scripted animation sequence and wasn't really using physics at all (not saying that is what was happening, just that it's what it looked like).

However AMD cannot change the fact that their primary competitor in the GPU market owns PhysX and implemented it on their HW, which makes GPU-PhysX support a lot less appealing to AMD from a business perspective.

Now they are faced with the possibility of nVidia releasing a less than optimal OpenCL version of PhysX instead of having a part in hand-tuning a CUDA version. Actually, they can only release a sub-optimal version in reality; that's just the nature of the platform (not that that is bad, just part of being cross-platform).

There is a big fight coming up over 3D model rendering: movie CGI, climate modelling, protein folding and other fields, all of which will require GPGPU servers. Those are sales that make gaming look insignificant.

The gaming market dwarfs high-end visualization. Ask SGI, Realizm, 3DLabs etc. about that. nVidia was able to move in and take over that market (which they now share with AMD) just by using a bit of spare die space on their gaming parts, thanks to economies of scale.

As to nVidia being a participant in OpenCL

Apple and nVidia built OpenCL on nVidia hardware, and one of nVidia's VPs is the chair of the OpenCL group. They aren't 'being allowed' to play; they made the game with Apple in the first place. Not sure how people can get it mixed up with Intel and AMD, who were sitting around doing nothing while others were building the standard.

No one cares how nicely their opponent's robes flow before you kill them (or are killed by them), or how realistic that flag looks.

That is utterly wrong; I care, for one. Not saying it is the deciding factor when I am buying a game, but it certainly is a factor. Would I own Crysis if it wasn't the de facto standard for visuals on the PC? Doubtful. It's a decent shooter, but we have tons of those. The fact that it has stunning visuals sets it apart. Games that utilize physics to improve visuals certainly benefit from them considerably (not that Crysis does much of that, though games like KZ2 do).

GPU physics needs to give gamers more relevant effects, like dropping a wall on an enemy, before consumers will spend their money on it.

Sadly, we need to wait until everyone is on board with support for hardware-accelerated physics before we can see that, unless we want to start seeing nVidia-only games released. I don't want to go back to the Glide days.

With ray tracing you get those elements to aid you in the game; the reflection of an enemy behind you is the best example of many, many.

Rasterizers are actually much better at that particular example. Off-screen reflections are an enormous increase in rendering requirements for a ray tracer, but trivial by comparison for a rasterizer.

Good job. Now show me better, right now, or reveal your true intent.

This is a hybrid. If you looked at the scene as closely as you looked for programming errors, you would have seen the different lighting being used. I do believe Intel has said plainly that the hardware rendering pipeline will be around a bit longer.

But please post more of your outrageous BS on CL, with links, so I can reply in kind.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
OpenCL was initially conceived by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, Intel and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008 the Khronos Compute Working Group was formed[1] with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for the next 5 months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008.[2] This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.[3]

OpenCL is scheduled to be introduced in Mac OS X v10.6 ('Snow Leopard'). According to an Apple press release:[4]

Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.

AMD has decided to support OpenCL (and DirectX 11) instead of the now deprecated Close to Metal in its Stream framework.[5][6] RapidMind announced their adoption of OpenCL underneath their development platform, in order to support GPUs from multiple vendors with one interface.[7] Nvidia announced on December 9, 2008 to add full support for the OpenCL 1.0 specification to its GPU Computing Toolkit.[8]
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Now show me better, right now

Any of the demos Havok released this week at GDC; they are all MUCH better displays of physics. What Meteor showed is poor by CPU-based physics standards, honestly flat-out bad. Havok has shown FAR more impressive stuff themselves. This isn't about PhysX versus Havok; in that demo the physics sucked, and we have already seen that Havok can do much better.

You would have seen the different lighting being used.

I didn't mention anything about the lighting in that demo; I brought up a hypothetical demo that was supposed to show off lighting but sucked at it. That is what this demo did in terms of physics.

But please post more of your outrageous BS on CL, with links, so I can reply in kind.

You realize you yourself just posted what I was talking about? If it is outrageous BS, why did you bother to quote it yourself?
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
HPCwire >> Off the Wire
December 08, 2008

The Khronos Group Releases OpenCL 1.0 Spec



New industry standard unleashes the vast computing power of modern processors

SINGAPORE, Dec. 9 -- The Khronos Group today announced the ratification and public release of the OpenCL 1.0 specification, the first open, royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers and handheld/embedded devices. OpenCL (Open Computing Language) greatly improves speed and responsiveness for a wide spectrum of applications in numerous market categories from gaming and entertainment to scientific and medical software. Proposed six months ago as a draft specification by Apple, OpenCL has been developed and ratified by industry-leading companies including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel Corporation, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, TAKUMI, Texas Instruments and Umeå University. The OpenCL 1.0 specification and more details are available at http://www.khronos.org/opencl/.

"The opportunity to effectively unlock the capabilities of new generations of programmable compute and graphics processors drove the unprecedented level of cooperation to refine the initial proposal from Apple into the ratified OpenCL 1.0 specification," said Neil Trevett, chair of the OpenCL working group, president of the Khronos Group and vice president at NVIDIA. "As an open, cross-platform standard, OpenCL is a fundamental technology for next generation software development that will play a central role in the Khronos API ecosystem and we look forward to seeing implementations within the next year."

"We are excited about the industry-wide support for OpenCL," said Bertrand Serlet, Apple's senior vice president of Software Engineering. "Apple developed OpenCL so that any application in Snow Leopard, the next major version of Mac OS X, can harness an amazing amount of computing power previously available only to graphics applications."

OpenCL enables software developers to take full advantage of a diverse mix of multi-core CPUs, Graphics Processing Units (GPUs), Cell-type architectures and other parallel processors such as Digital Signal Processors (DSPs). OpenCL consists of an API for coordinating parallel computation and a programming language for specifying those computations. Specifically, the OpenCL standard defines:

A subset of the C99 programming language with extensions for parallelism.
An API for coordinating data and task-based parallel computation across a wide range of heterogeneous processors.
Numerical requirements based on the Institute of Electrical and Electronics Engineers' IEEE 754 standard.
Efficient interoperability with OpenGL, OpenGL ES and other graphics APIs.
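
To give a concrete feel for those pieces (the C99-based kernel language and the coordination API), here is a minimal, illustrative OpenCL 1.0 vector-add sketch. It is not taken from the specification or any vendor SDK; error handling is omitted, and CL_DEVICE_TYPE_DEFAULT simply lets the runtime pick whichever CPU or GPU device the installed driver exposes.

Code:
#include <stdio.h>
#include <CL/cl.h>   /* Khronos OpenCL 1.0 headers; Apple uses <OpenCL/opencl.h> */

/* Device-side part: a data-parallel kernel written in OpenCL's C99-derived language. */
static const char *src =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];   /* IEEE 754 single-precision add */\n"
    "}\n";

int main(void) {
    enum { N = 8 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Host-side part: the coordination API picks a device, builds the kernel and launches it. */
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vec_add", NULL);

    cl_mem ba = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem bb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem bc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    clSetKernelArg(k, 0, sizeof(ba), &ba);
    clSetKernelArg(k, 1, sizeof(bb), &bb);
    clSetKernelArg(k, 2, sizeof(bc), &bc);

    size_t global = N;                                         /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    for (int i = 0; i < N; ++i) printf("%g ", c[i]);
    printf("\n");
    return 0;
}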
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
Now show me better, right now

Any of the demos Havok released this week at GDC; they are all MUCH better displays of physics. What Meteor showed is poor by CPU-based physics standards, honestly flat-out bad. Havok has shown FAR more impressive stuff themselves. This isn't about PhysX versus Havok; in that demo the physics sucked, and we have already seen that Havok can do much better.

You would have seen the different lighting being used.

I didn't mention anything about the lighting in that demo; I brought up a hypothetical demo that was supposed to show off lighting but sucked at it. That is what this demo did in terms of physics.

But please post more of your outrageous BS on CL, with links, so I can reply in kind.

You realize you yourself just posted what I was talking about? If it is outrageous BS, why did you bother to quote it yourself?

Any of the demos Havok released this week at GDC; they are all MUCH better displays of physics. What Meteor showed is poor by CPU-based physics standards, honestly flat-out bad. Havok has shown FAR more impressive stuff themselves. This isn't about PhysX versus Havok; in that demo the physics sucked, and we have already seen that Havok can do much better.

Just one link is all we ask to see. Let's see it.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Apple and nVidia built OpenCL on nVidia hardware, and one of nVidia's VPs is the chair of the OpenCL group. They aren't 'being allowed' to play; they made the game with Apple in the first place. Not sure how people can get it mixed up with Intel and AMD, who were sitting around doing nothing while others were building the standard.

This is a flat-out freaking lie, as I showed.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Mac OS X 10.6 Snow Leopard: Reading from the Intel Cookbook
By Josh Bancroft (Intel) (69 posts) on June 10, 2008 at 10:08 pm
The Apple WWDC 2008 keynote has come and gone, and my wild speculation about what Apple might say about the next version of OS X, 10.6 code named "Snow Leopard" (and affectionately christened "Snot Leopard" thanks to a typo during my WWDC liveblogging ;-) ), that it would be announced as the operating system for a "netbook" or Mobile Internet Device powered by the Intel Atom processor, didn't come true. In fact, besides a brief reference to an after-lunch WWDC session (under NDA), Steve Jobs didn't say much about Snow Leopard at all. Since then, a few more details have become available, and Apple has put up a page with the (limited) info:

http://www.apple.com/macosx/snowleopard/

Much has been written about the more controversial questions - are they really not adding any new features? Are they going to drop PPC support? Is it going to be 64-bit only (and if so, what about early Intel Core Duo chips that aren't fully 64-bit capable?). I'll leave all that to the people who know what they're talking about. But what strikes me as interesting is that the few fundamental technologies they HAVE discussed look like a mirror image of the technologies Intel, and specifically, our group Intel Software Network, Intel's developer community, have been promoting and evangelizing to software developers for quite a while now.

First, I have to cling to my hope and dream that one day, Apple will release something along the lines of a "netbook", like the Asus Eee PC or the MSI Wind. Something like the MacBook Air, but much smaller. Apple's throwing fuel on that particular speculative fire with statements like this:

Snow Leopard dramatically reduces the footprint of Mac OS X, making it even more efficient for users, and giving them back valuable hard drive space for their music and photos.
Having recently paved and done a clean install of Mac OS X Leopard on my MacBook Pro, I can tell you that the operating system itself only takes up about 5.5 GB of hard drive space. Hard drives are growing in capacity and dropping in price at an astounding rate (did you ever dream you'd be able to pick up a terabyte of disk space for a couple hundred bucks?). So why would Apple care about reducing that 5-6 GB footprint, when drives are huge and cheap? Think SSD. Solid State Disks. Like the ones in the netbook devices. The Asus Eee PC I got to play with a while ago had a 4 GB SSD. Current models have 12 or 20GB. Fast, efficient, and no moving parts. Perfect for mobile devices. But still really expensive - you can get a 64GB SSD in a MacBook Air instead of the much slower 80GB hard drive, but it will cost you a cool $999 for the upgrade. SSDs are coming down in price, but they're still going to be expensive in any really large sizes for a while. So, if Apple was thinking of doing a Mobile Internet Device or netbook, it makes sense to squeeze OS X down as much as they can, to make, say, an affordable 16GB SSD a viable option that won't get hogged by just the OS.

Next, there's the new "Grand Central" technology, that focuses on taking full advantage of multicore processors:

"Grand Central," a new set of technologies built into Snow Leopard, brings unrivaled support for multicore systems to Mac OS X. More cores, not faster clock speeds, drive performance increases in today's processors. Grand Central takes full advantage by making all of Mac OS X multicore aware and optimizing it for allocating tasks across multiple cores and processors. Grand Central also makes it much easier for developers to create programs that squeeze every last drop of power from multicore systems.
Emphasis mine. Intel Software Network has been banging on the multicore drum for quite a while now, ever since it became clear that the future of processor performance was more and more cores working in parallel, rather than ever-increasing clock speeds. In fact, we have a whole multicore developer community (hosted by my awesome colleague, Aaron Tersteeg) dedicated to multicore programming resources, tools, learning, and access to the Intel experts who literally wrote the book on this stuff. I'm sure as Snow Leopard gets closer, you Mac developers will (hopefully) be seeing a lot more details from both Apple and Intel on how to make your apps sing on many-core processors. It's the biggest fundamental shift in computing since, say, the x86 architecture became the standard. I can't wait to see this gain broader acceptance and implementation.

Finally, Apple teases us with this little tidbit on the vaguely-named Open CL (Open Computing Language), apparently aimed at taking advantage of upcoming super-powerful GPUs for other computing tasks:

Another powerful Snow Leopard technology, OpenCL (Open Computing Language), makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit (GPU). With GPUs approaching processing speeds of a trillion operations per second, they're capable of considerably more than just drawing pictures. OpenCL takes that power and redirects it for general-purpose computing.
They don't name any one company's products or technologies, but it's well known that Nvidia and Intel are both working on many-core GPUs that support "GPGPU" - General Purpose (Computing) on the GPU. And again, my group, Intel Software Network, has a whole community (this one just freshly minted!) dedicated to what we call Visual Computing. Steve Pitzel hosts this community (Steve has more interesting stories than ANYONE I know - ask him some time!), and the super swanky page design came from our resident web development wizard, Kevin Pirkl. Intel has a little upcoming product called Larrabee that we think is going to really turn the notion of what a GPU is for on its head. Have you noticed how Nvidia has been getting very aggressive towards Intel, some might say even attacking? Yeah, it's because of Larrabee. And knowing Apple, they'll be right there, ready to take advantage of all of the advances in the visual computing world. Competition is a good thing.

Anyway, that's it for today's dose of idle speculation, and listening to me play armchair industry analyst. I have to say it feels pretty cool to work for a company (Intel) that has such influence over the world of technology. I get to see SO MANY COOL THINGS in the course of my job, I feel spoiled. And I try to share as much with you as I can - like tomorrow, I'll be filming demos at the Research@Intel event at the Computer History Museum in Mountain View. From the previews I've seen, some of this stuff is just freaky sci-fi cool. I can't wait to see it, shoot it, and get it out to you. As usual, I'd love to hear your thoughts, even if all you have to say is how wrong you think I am. Leave it in a comment!
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
This is a flat-out freaking lie, as I showed.

I apologize for being able to Google something, and for not being an idiot who listens to your comments and takes them as fact. Sorry man, but seriously, a few seconds' worth of research would make your posts far more valid.

Indeed Neil Trevett, formerly of 3D Labs and currently VP of Embedded Content at NVIDIA, heads up the OpenCL work group. Given the similarity between C for CUDA and OpenCL and the fact that, officially, Apple initiated the idea and put it forward for discussion (after having decided to equip its new Macs with NVIDIA products?), NVIDIA can in reality be seen as a joint instigator.

Link.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
This is a flat-out freaking lie, as I showed.

I apologize for being able to Google something, and for not being an idiot who listens to your comments and takes them as fact. Sorry man, but seriously, a few seconds' worth of research would make your posts far more valid.

Indeed Neil Trevett, formerly of 3D Labs and currently VP of Embedded Content at NVIDIA, heads up the OpenCL work group. Given the similarity between C for CUDA and OpenCL and the fact that, officially, Apple initiated the idea and put it forward for discussion (after having decided to equip its new Macs with NVIDIA products?), NVIDIA can in reality be seen as a joint instigator.

Link.

LOL, like Intel and Jobs aren't in bed together, LOL. Apple throws NV a bone, LOL.

Since Apple switched to NV chipsets they have lost market share, FACT! It's the economy, right? LOL.

You seem to think a VP at NV being the head of a group is a big deal. Sorry, that's not the case at all. It clearly states that all members have rights to add to the specs. This is an industry standard, not a game of payoffs; NV can't buy anyone here. Who is more important to Apple: Intel, or sickly NV?

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
Pretty easy.

LOL, not even close. Besides, that's in this game; just because you didn't see it... LOL, God that was funny.

A long time ago I told you guys, in a thread on Apple and Intel and what it means for gaming on Apple, that Intel and Apple were working on a standard outside of DX, and it was all true. But the NV people now want you to believe it was NV and Apple, LOL.

I believe the thread was called "Apple, Intel, and what it means to gaming." I had it bookmarked for just this reason, but IE8 dumped it.
 

nosfe

Senior member
Aug 8, 2007
424
0
0
Since Apple switched to NV chipsets they have lost market share, FACT! It's the economy, right? LOL.

That statement has so many flaws in it that I don't even know where to begin (or is my sarcasm detector broken again?).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Who is more important to Apple: Intel, or sickly NV?

You should study up a bit on the history of Apple; Intel has been a very, very small supplier for them on a historical basis. Apple is also very much the sort of company that would drop x86 whenever they felt like it and move to a completely different architecture; they have done it twice before.

You seem to think a VP at NV being the head of a group is a big deal.

I pointed out that he was because some exceptionally dishonest people were acting like OpenCL was an AMD/Intel proposal from the start; they were brought on board after Apple and nVidia had already been working on the standard. These things are fact.

LOL, not even close.

Then quite frankly you have no idea what you are talking about. I explained the issues with the rather shockingly poor demo vid you linked; the "physics" in it were horribly broken, and I pointed out easy ways for anyone to tell. If you can't understand that, I can't help you. Go to school and learn a little bit about computer science, perhaps?

Besides, that's in this game.

No, it isn't. Supposedly it is in the engine, but we don't have proof of that either. No one involved claims it is in the game.

I said Intel and Apple were working on a standard outside of DX, and it was all true.

Apple has had OpenGL for many years now; what are you talking about? Seriously, do you have any idea at all what the difference between OpenGL and OpenCL is? Honestly, it sounds like you don't have the slightest understanding of any of the technologies involved. You give the impression that you can read press releases and then put together exceedingly poorly written posts.
 