Project Offset Demo using Intel


Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Acanthus
Man Neme, no offense but your posts are hard to read my friend.

I agree it's a very nice engine.

As an aside, I don't see software physics overtaking hardware physics in the short term. Maybe 5 years down the road.

None taken. I understand. I am surprised I can do as well as I am doing, actually.

Well I see software physics overtaking hardware physics with the release of Larrabee.

Best part is it's getting closer. Not long now. I am impatient, but what can I do? Nothing but sit here and rot away.

 

zagood

Diamond Member
Mar 28, 2005
4,102
0
71
Originally posted by: Acanthus
Man Neme, no offense but your posts are hard to read my friend.

Agreed...I can literally only read one or two of your posts to get the gist of what the thread is about, then read responses.

IF what you're proposing is true, and IF the end result of the game uses the same tech and IF the average consumer can afford the hardware (i.e. the Intel processor that can run it with all the bells and whistles) then we're looking at an amazing new tech that will change the world of gaming as we know it.

CryEngine2 got two out of three, and look how that turned out.

I'll believe it when I see it.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Nemesis 1
Originally posted by: Acanthus
Man Neme, no offense but your posts are hard to read my friend.

I agree it's a very nice engine.

As an aside, I don't see software physics overtaking hardware physics in the short term. Maybe 5 years down the road.

None taken. I understand. I am surprised I can do as well as I am doing, actually.

Well I see software physics overtaking hardware physics with the release of Larrabee.

Best part is it's getting closer. Not long now. I am impatient, but what can I do? Nothing but sit here and rot away.

I honestly believe that graphics will be the last thing to be integrated onto the CPU as we head toward the "system on a chip" models Intel and AMD are both striving for.

The amount of real estate that a high-performance graphics solution sucks up would be a disproportionately large part of the CPU.

Physics and graphics are both insanely, almost infinitely parallel tasks. As the programmability of graphics cards increases, physics is being programmed to work on GPUs. I just see no reason to offload physics to the CPU when there are plenty of other things for the CPU to do.
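(To put the "almost infinitely parallel" point in concrete terms, here is a minimal sketch of a physics step where every particle updates independently, so the work splits cleanly across however many cores, or shader units, you have. The particle layout, thread count, and Euler integrator are illustrative assumptions, not anything from the Offset engine or any real middleware.)

```cpp
// Minimal sketch of an embarrassingly parallel physics step: each
// particle's update depends only on that particle, so the loop can be
// chopped across N worker threads (or mapped one-particle-per-thread
// on a GPU) with no synchronization inside the step.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

void integrate_range(std::vector<Particle>& p, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;   // gravity on the velocity
        p[i].x  += p[i].vx * dt; // simple Euler position update
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

void step(std::vector<Particle>& particles, float dt, unsigned workers) {
    std::vector<std::thread> pool;
    const size_t chunk = particles.size() / workers + 1;
    for (unsigned w = 0; w < workers; ++w) {
        const size_t begin = w * chunk;
        const size_t end = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(integrate_range, std::ref(particles), begin, end, dt);
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<Particle> particles(1000000);  // one million independent particles
    step(particles, 1.0f / 60.0f, std::max(1u, std::thread::hardware_concurrency()));
}
```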
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Nemesis 1
None taken. I understand. I am surprised I can do as well as I am doing, actually.

You are hanging in there pretty good I'd say. It doesn't take too much effort for folks to understand, I read ya just fine.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Acanthus
I honestly believe that graphics will be the last thing to be integrated onto the CPU as we head toward the "system on a chip" models Intel and AMD are both striving for.

The amount of real estate that a high-performance graphics solution sucks up would be a disproportionately large part of the CPU.

Physics and graphics are both insanely, almost infinitely parallel tasks. As the programmability of graphics cards increases, physics is being programmed to work on GPUs. I just see no reason to offload physics to the CPU when there are plenty of other things for the CPU to do.

At face value, and for the reasons you discuss plus the ones I know you know about but weren't wasting the time to list, I fully agree with this sentiment.

Except for one nagging feeling. I get this feeling anytime the technical folks (myself included) get their technical reasoning all lined up when it comes to the direction of technology, because we so typically end up getting proved wrong once a couple of process node generations have played out.

For thermal/TDP reasons I agree there seems to be no compelling reason why the compute power of a discrete processor should ever be integrated into the die containing the CPU core logic.

But if they did do it, why would they? Remove the boundary conditions on your logic tree regarding TDP and die-size limitations (as these are removed by the eventual sequential iteration to process node X...be it 22nm or 16nm or 11nm) and figure out what value it would bring, regardless of the negatives, and then decide if it is likely to happen or not.

In this regard I see it as being inevitable. It is heterogeneous processing, the best of all worlds. Do we really need 16-core processors? Or would we be better off packing 6-8 cores onto a die along with a graphics processing module that doubles as both a GPU and a CPU-like processing resource, as we see with CUDA today?

I agree with the logic that so long as the CPU guys are allocated the full TDP budget (~150W max practical) they are just going to keep stamping out CPUs with increasing core count and increasing cache size.

But if/when project management comes along and says "CPU guys, for 22nm you get 50% of the xtor and TDP budget, GPU guys you get the other half," then I think you'll see some nicely powered GPU offerings (mid-range stuff, not high-end of course) and a new heterogeneous processor paradigm that will boost the performance of newly compiled programs.

I haven't quite figured out how the memory question gets answered...where does the 1GB of GDDR5 go? Maybe integrated into the mobo itself, just like the onboard cache was integrated onto the mobo back in the early Pentium days? (Mine was literally a slot, just like a RAM DIMM; I could upgrade the onboard mobo cache to whatever size/speed I desired and could find on the market. Maybe video RAM goes the same direction too.)
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: Idontcare
Originally posted by: Nemesis 1
None taken. I understand. I am surprised I can do as well as I am doing, actually.

You are hanging in there pretty good I'd say. It doesn't take too much effort for folks to understand, I read ya just fine.

Yeah, I'm not bilingual and can't imagine how I'd probably sound trying to speak a foreign language.

:thumbsup: for being talented enough to come this far. Just keep at it and you'll get fluent in no time. Don't take pompous criticisms to heart; Engrish is pretty funny and it's no one's fault.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: josh6079
Originally posted by: Idontcare
Originally posted by: Nemesis 1
None taken. I understand. I am surprised I can do as well as I am doing, actually.

You are hanging in there pretty good I'd say. It doesn't take too much effort for folks to understand, I read ya just fine.

Yeah, I'm not bilingual and can't imagine how I'd probably sound trying to speak a foreign language.

:thumbsup: for being talented enough to come this far. Just keep at it and you'll get fluent in no time. Don't take pompous criticisms to heart; Engrish is pretty funny and it's no one's fault.

Thanks! LOL. The real sad part is I am American. I only know one language and I massacre it. LOL. Good intentions anyway, thanks.

 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Nemesis 1
Originally posted by: josh6079
Originally posted by: Idontcare
Originally posted by: Nemesis 1
None taken. I understand. I am surprised I can do as well as I am doing, actually.

You are hanging in there pretty good I'd say. It doesn't take too much effort for folks to understand, I read ya just fine.

Yeah, I'm not bilingual and can't imagine how I'd probably sound trying to speak a foreign language.

:thumbsup: for being talented enough to come this far. Just keep at it and you'll get fluent in no time. Don't take pompous criticisms to heart; Engrish is pretty funny and it's no one's fault.

Thanks! LOL. The real sad part is I am American. I only know one language and I massacre it. LOL. Good intentions anyway, thanks.

Well, there's trying to write English while having no distractions versus trying to write English while having serious pains in the ass :laugh: Just remember NOT to mix the big blue pills with whiskey (again) next time; we don't need any more of THOSE posts to try and decipher :laugh:
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
IDC, I'm still having trouble wrapping my head around high-end integrated GPUs no matter how small the process gets.

You can continuously increase the density of the chips, and graphics and physics will always be able to utilize the maximum amount of real estate for both surface area and TDP.

Because of the highly parallel nature of physics and graphics, you'll never see a situation where sacrificing that will be beneficial to gamers. Unless, of course, we reach a point where we have photorealism and physics that aren't discernible from reality.

When you reach that point, efficiency will be king, and only then can I see putting a "photorealistic" core on the CPU.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: Nemesis 1
Originally posted by: josh6079
Originally posted by: Idontcare
Originally posted by: Nemesis 1
None taken. I understand. I am surprised I can do as well as I am doing, actually.

You are hanging in there pretty good I'd say. It doesn't take too much effort for folks to understand, I read ya just fine.

Yeah, I'm not bilingual and can't imagine how I'd probably sound trying to speak a foreign language.

:thumbsup: for being talented enough to come this far. Just keep at it and you'll get fluent in no time. Don't take pompous criticisms to heart; Engrish is pretty funny and it's no one's fault.

Thanks! LOL. The real sad part is I am American. I only know one language and I massacre it. LOL. Good intentions anyway, thanks.

Haha, well I'm sure you're doing the best you can. That's all anyone can ask of you.

:beer:
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Acanthus
IDC, I'm still having trouble wrapping my head around high-end integrated GPUs no matter how small the process gets.

You can continuously increase the density of the chips, and graphics and physics will always be able to utilize the maximum amount of real estate for both surface area and TDP.

Because of the highly parallel nature of physics and graphics, you'll never see a situation where sacrificing that will be beneficial to gamers. Unless, of course, we reach a point where we have photorealism and physics that aren't discernible from reality.

When you reach that point, efficiency will be king, and only then can I see putting a "photorealistic" core on the CPU.

Yeah, you are thinking along the lines I am. Diminishing returns. 1->2 cores, big help. 2->4, not so much. 4->6, or 4->8, even less so. Amdahl's law sees to it, plus the fact that very few consumers operate their desktops in a fashion that saturates a quad, let alone an octo-core.

So in four years what do you do with 4x more transistors? Build 16-core chips that no one really needs... Remember how cool it was to get that 8x CD reader when upgrading from 4x or 2x, but after a couple of years no one cared whether their $20 bought them a 32x or a 36x CD-ROM? The read time for a 750MB disc was so small at 32x that it was hardly worthwhile to save a few seconds and buy the fastest.

Same with CPUs in four years. HD video isn't going to increase in resolution; 1080p is it. So who is really going to need a 16-core CPU that can render HD in 5 min versus 10 min for an 8-core, etc.?

In looking around for something to do with those transistors, the natural applications will be more and more cache plus GPU. It just seems like something that happens.

And yes, gaming runs into the same issue: who needs 1920x1280 at 300fps? Screens aren't going to get much larger than 30" and resolutions there have already peaked. So it's just going to be an end-game in terms of how much photo-realism the programmers want to program versus how much the consumer cares to see. Wii is successful in ways that PS3 and 360 are not.

That 2-yr node cadence is crazy in terms of what it can enable when you look down the road 6-8 yrs. What makes no sense to do today can suddenly make no sense to not do in 6 yrs.
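(A quick back-of-the-envelope on the diminishing returns described above, using Amdahl's law, speedup = 1 / ((1 - p) + p/n); the 80% parallel fraction is an assumed number purely for illustration.)

```cpp
// Amdahl's law sketch: with an assumed 80%-parallel workload, each
// doubling of core count buys less and less additional speedup.
#include <cstdio>

double amdahl_speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    const double p = 0.80;  // assumed parallel fraction, for illustration only
    for (int cores : {1, 2, 4, 8, 16}) {
        std::printf("%2d cores -> %.2fx speedup\n", cores, amdahl_speedup(p, cores));
    }
    // Prints roughly 1.00x, 1.67x, 2.50x, 3.33x, 4.00x: the 8 -> 16 step
    // adds only ~0.67x, which is the "who needs 16 cores" argument in numbers.
}
```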
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Acanthus
IDC, i'm still having trouble wrapping my head around high end integrated gpus no matter how small the process gets.

You can continuously increase the density of the chips, and graphics and physics will always be able to utilize the maximum amount of real estate for both surface area and TDP.

Because of the highly parallel nature of physics and graphics, youll never see a situation where sacrificing that will be beneficial to gamers. Unless of course we reach a point where we have photorealism and physics that arent discernable from reality.

When you reach that point, efficiency will be king and only then can i see putting a "photorealistic" core on the CPU.


Why are you having problems with this? Is it that you're so used to the way it's been that it's not possible to go back to the beginning? Because basically that's what we're talking about here. You're thinking in terms of mono. Think many cores that actually scale. Let's say just 8 Nehalems. Back in the beginning, had there been this kind of power, the GPU wouldn't ever have been created. Then, in terms of only 8 cores on die, add to that Sandy Bridge with AVX. I can tell ya, Intel won't even need a GPU with this for light work. Then you move to 22nm and add in FMA, and you're talking a huge amount of power over what 8-core Nehalem will have. By 4x at the least, and that's one generation after Nehalem: Sandy Bridge.

I think that's what people are struggling with. They find it impossible to believe that the Sandy generation is going to be 4x Nehalem in 3 years. Then we have to start using more than just the GPU for graphics; we have to use the whole platform. Actually, to me it seems easier now that they have the compute power for software. Also, I am not poor, but I count my pennies. The new DX stuff all the time is a pain in the ass. With what Intel is doing it doesn't matter what they do: Intel's setup will run with the correct software update, it's that simple. My grandson is turning 2 next week. He is exceptional. My daughter, bless her, has enough faith in me to allow me to care for him when she works, so I get him 2-3 days a week without the wife. It's very hard for me, but he is a special child. When he greets people he extends his hand and says "nice to meet you." I taught him that and much, much more. We are struggling with the concept of Ma'am and Sir. But he does understand the benefits of ass-kissing, something I could never do. DAMMIT. So he will get the sir thing.

I make it a game with him. I play the user-type ass-kisser and show how deceitful it is, where his part is based on respect and honor. I love it and he responds so well.

Where was I? Oh! Ya. All that to say I'm building him a Nehalem for his birthday. The wife is getting programs for elementary; he is not 2 yet. He plays his own DVDs, sets the Dish receiver correctly, and the TV channel. My daughter says, what do you want from him? I turned, grabbed the C++ programming book and said I want him to know this by the time he is 5. The trick is to make him want to learn it. We're so alike; I get him and he knows it and loves it.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
And yes, gaming runs into the same issue: who needs 1920x1280 at 300fps?

Enough rendering power to push Crysis at max settings at that level of performance would be close to 1/100th of what we will need for base-level radiosity (at 24fps). GPUs have a very, very long way to go before they approach the issues CPUs are running into. At a 1nm build process GPUs could still put every transistor to use, while CPUs sharing a die are going to be wasting valuable GPU die space. Perhaps CPUs will end up as glorified north bridges or something comparable within 10 years; 10 years out, if GPUs continue to slightly outpace Moore's guidelines for performance improvements, we will still need a couple more decades before most visualization issues are solved via brute-force computational power.

Screens aren't going to get much larger than 30" and resolutions there have already peaked.

No, they aren't even close yet. Pixel density on current 30" monitors is poor compared to this.

HD video isn't going to increase in resolution; 1080p is it.

2K and 4K are both already on the market; granted, it is professional-only at the moment, but 1080p isn't the end of the road.

I haven't quite figured out how the memory question gets answered...where does the 1GB of GDDR5 go?

1GB of GDDR5 for netbooks in six years, I assume? It won't be close to enough for contemporary desktop GPUs in that timeframe. From an engineering standpoint, I would think the pin-out issue would be a rather large one.

From a GPU software standpoint we have yet to see layered organic models (cloth on top of skin on top of muscles on top of organs on top of skeletons), we can't simulate tessellation properly yet, we aren't close to approaching a solid physics system, we still can't handle using 3D textures even in basic form (let alone properly deformable ideal setups), and we are still a very long way off from radiosity-based lighting (or one of the comparable rendering methods).

Wii is successful in ways that PS3 and 360 are not.

Absolutely, much as Yahoo Games is successful in ways that Steam is not. That does not stop the high-end market from being very viable as a business model.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I like your thinking anyway. So you believe ATI's EPIC backend is the wave of the future. I will not disagree with that kind of thinking. Fact is, I have hinted at it many times. Fact is, there is an old topic here. Would you like to read it? Funny thing is, it's pointing to exactly what's happening right now. I really would like to save the old topic for just the right time.

I know one guy here that's going to get a kick out of that topic: Idontcare. He will enjoy the day I bring that baby back. I doubt a lot of you guys will be liking it much. So far it's dead on, a couple of double reverses though. But as far as ATI goes, it's playing out exactly as it was stated. Intel threw x86 in the mix. Why? I really haven't figured that one out. I thought Intel would be EPIC also, but for me right now AVX has got me wondering: why the x86 port? Intel, did they change the backend on Sandy? Then there is the AMD puzzle. AMD is on record as saying or claiming that Bulldozer would use FMA. That's good, really good.

Problem is, I want to see FMA run on AMD's backend. I don't believe they can do it, but they can with an EPIC backend, AMD. I don't believe Intel can make FMA work on Nehalem, yet they claim it for Sandy at 22nm; I forget its code name. But Intel also has EPIC and it does use FMA, fused multiply-add. AMD says its FMA is 3-operand, one for FMA, so AMD is adding one operand. Nehalem is 2-operand whereas Sandy is 4-operand: two added, one for FMA to be active later at 22nm. Very strange happenings. But I do agree, I believe in the end it will be EPIC. I also believe the Elbrus compiler, now Intel's improved Elbrus compiler, will set the stage. Whether Intel helps AMD here depends entirely on AMD.
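(For reference, this is what a fused multiply-add looks like from code: a minimal sketch using the 3-operand _mm256_fmadd_ps intrinsic that eventually shipped for FMA-capable x86 CPUs. It computes a*b + c with a single rounding; it is only meant to illustrate the operation being argued about, not to settle whose encoding or operand count wins.)

```cpp
// Fused multiply-add sketch: d = a*b + c in one instruction with one
// rounding, eight floats at a time in a 256-bit AVX register.
// Requires FMA-capable hardware; compile with e.g. -mavx -mfma.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {2, 2, 2, 2, 2, 2, 2, 2};
    alignas(32) float c[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    alignas(32) float d[8];

    __m256 va = _mm256_load_ps(a);
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_load_ps(c);
    __m256 vd = _mm256_fmadd_ps(va, vb, vc);  // vd = va * vb + vc, fused
    _mm256_store_ps(d, vd);

    for (float v : d) std::printf("%.1f ", v);  // 3.0 5.0 7.0 ... 17.0
    std::printf("\n");
}
```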
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Before you ask what's so special about the Elbrus compiler: first thing is it can issue 9 or 10 threads per core. Imagine ATI's little cores getting 10 issues. It'd be a trick for ATI to do a 10-issue core though, LOL. ATI and NV both use FMA. I love this stuff. You get inside here and it's a universe of possibilities.

Second thing special about Elbrus is Intel got Boris in the deal, the first European Intel Fellow. I think he's the most valuable man on earth so far as compute goes. All the magic you want from NV/ATI hangs on their compilers. NV flat out has to change its tech, no question. That's why they're flopping around like a fish out of water. Thing I don't get is why on earth would NV want to build an x86 CPU. Clearly Intel is playing something here; what, I haven't a clue. Why didn't NV just buy that media outfit that emulated x86? Nothing that NV has talked about adds up. They don't need x86 like AMD/Intel. All they need is to emulate it with the massive compute power they have. True, they're a little short on cache, but that can be worked out. So where is it exactly that both ATI and NV run into the brick wall? I'll let ya guess.

The third thing about the Elbrus compiler: ya start adding things like cache and vertex work and this does magic. The math involved to do this stuff, pure genius. I bet getting inside that guy's head would be an experience. Oh ya, with Elbrus ya get to throw out a bunch of logic transistors that are costly.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Idontcare
Originally posted by: Acanthus
IDC, I'm still having trouble wrapping my head around high-end integrated GPUs no matter how small the process gets.

You can continuously increase the density of the chips, and graphics and physics will always be able to utilize the maximum amount of real estate for both surface area and TDP.

Because of the highly parallel nature of physics and graphics, you'll never see a situation where sacrificing that will be beneficial to gamers. Unless, of course, we reach a point where we have photorealism and physics that aren't discernible from reality.

When you reach that point, efficiency will be king, and only then can I see putting a "photorealistic" core on the CPU.

Yeah, you are thinking along the lines I am. Diminishing returns. 1->2 cores, big help. 2->4, not so much. 4->6, or 4->8, even less so. Amdahl's law sees to it, plus the fact that very few consumers operate their desktops in a fashion that saturates a quad, let alone an octo-core.

So in four years what do you do with 4x more transistors? Build 16-core chips that no one really needs... Remember how cool it was to get that 8x CD reader when upgrading from 4x or 2x, but after a couple of years no one cared whether their $20 bought them a 32x or a 36x CD-ROM? The read time for a 750MB disc was so small at 32x that it was hardly worthwhile to save a few seconds and buy the fastest.

Same with CPUs in four years. HD video isn't going to increase in resolution; 1080p is it. So who is really going to need a 16-core CPU that can render HD in 5 min versus 10 min for an 8-core, etc.?

In looking around for something to do with those transistors, the natural applications will be more and more cache plus GPU. It just seems like something that happens.

And yes, gaming runs into the same issue: who needs 1920x1280 at 300fps? Screens aren't going to get much larger than 30" and resolutions there have already peaked. So it's just going to be an end-game in terms of how much photo-realism the programmers want to program versus how much the consumer cares to see. Wii is successful in ways that PS3 and 360 are not.

That 2-yr node cadence is crazy in terms of what it can enable when you look down the road 6-8 yrs. What makes no sense to do today can suddenly make no sense to not do in 6 yrs.

I agree that if the hardware gets ahead of the software, integrating the GPU makes sense.

However, I don't think that will be the case. Typical player models in Unreal Engine 3 range from 60,000 to 250,000 polygons, but the source art for those models is well over 2 million.

Textures have a near-infinite amount of room to grow.

Physics is literally in its infancy.

If CPUs continue down the "more equally performing cores" path, then there will be a lot of untapped performance that could be offloaded to more complex AI, better and more complex compression algorithms for networking, more streaming data within games, and less per-core optimization work.

I'm just not seeing a 16-core Nehalem system in software rendering being able to overcome even a mid-range gaming system of today.

Hell, I'd like to see a Nehalem handle the original Unreal Tournament at max settings.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I would like to see that also, but in RT. As long as the frame rates are playable and smooth, I'm going for the eye candy, no question about it. Isn't Quake 4 being released to play on Larrabee? I'm sure I read that. Old game, new graphics. I know Daniel was working on it and did some demos. But I'm sure 4 is being retuned for Larrabee.

Stop, brakes: I misread that. 16 cores? LOL, you're kidding, right? An 8-core Sandy with AVX will do that easily.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Nemesis 1
I would like to see that also, but in RT. As long as the frame rates are playable and smooth, I'm going for the eye candy, no question about it. Isn't Quake 4 being released to play on Larrabee? I'm sure I read that. Old game, new graphics. I know Daniel was working on it and did some demos. But I'm sure 4 is being retuned for Larrabee.

Stop, brakes: I misread that. 16 cores? LOL, you're kidding, right? An 8-core Sandy with AVX will do that easily.

I will believe it when I see it, I guess.

We were told all about tile-based rendering, Mitosis, Intel's triumphant return to high-performance graphics a year ago, quantum computers for the home, IA64 being the wave of the future, Cell being the wave of the future...

I lose optimism quickly when someone proposes 1000%+ increases on a platform (software rendering).

Tile-based rendering and what is essentially 32 Atoms with 4-way hyperthreading doing software rendering at a reasonable performance level is certainly an ambitious project.

All I can really say about it is, we will see.

If 25 cores give us a 60fps average in Gears of War at 16x12, max settings, no AA, 16xAF, then 32 cores should give us around 75fps...

That is right around 8800GTX performance.

Intel will be starting out in the midrange if that is to be the performance level at launch.
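(For the curious, a bare-bones sketch of the tile-based software rendering idea mentioned above, not Intel's actual Larrabee pipeline: carve the framebuffer into tiles and let each core shade its tiles independently, so throughput scales roughly with core count. That is also where the 25-to-32-core extrapolation comes from: 60 fps x 32/25 = 76.8 fps under ideal linear scaling. The resolution, tile size, and placeholder "shader" here are assumptions for the example.)

```cpp
// Tile-based software rendering sketch (not Larrabee's real pipeline):
// split a 1600x1200 framebuffer into 64x64 tiles and let worker threads
// shade tiles independently. Tiles share no writable state, so the work
// scales roughly linearly with core count.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <utility>
#include <vector>

constexpr int W = 1600, H = 1200, TILE = 64;

// Placeholder "shader": fills the tile with a gradient. A real renderer
// would rasterize the triangles binned to this tile here instead.
void shade_tile(std::vector<uint32_t>& fb, int tx, int ty) {
    for (int y = ty; y < std::min(ty + TILE, H); ++y)
        for (int x = tx; x < std::min(tx + TILE, W); ++x)
            fb[y * W + x] = (x * 255 / W) << 16 | (y * 255 / H) << 8;
}

int main() {
    std::vector<uint32_t> framebuffer(W * H);

    // Build the tile list once, then let the workers pull tiles round-robin.
    std::vector<std::pair<int, int>> tiles;
    for (int ty = 0; ty < H; ty += TILE)
        for (int tx = 0; tx < W; tx += TILE)
            tiles.emplace_back(tx, ty);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            for (std::size_t i = w; i < tiles.size(); i += workers)
                shade_tile(framebuffer, tiles[i].first, tiles[i].second);
        });
    for (auto& t : pool) t.join();
}
```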
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Acanthus
I'm just not seeing a 16-core Nehalem system in software rendering being able to overcome even a mid-range gaming system of today.

Hell, I'd like to see a Nehalem handle the original Unreal Tournament at max settings.

That's my point: big fat cores and lots of them aren't nearly as efficient as a handful of big fat cores combined with a pile of SPEs. Look at what works in today's computers (quad-core CPU + xfire or SLI discrete GPUs) and skip ahead 4 or 6 years, when Moore's law has made xtor budgets even more silly stupid for the same thermal budget.

The homogeneous processing model is simpler to implement but it leaves performance on the table. I became a believer when I upgraded to the most recent version of TMPGEnc, which uses both my NV GPU and my Intel CPU to encode video.

If it can be done patchwork and piecemeal today, then these are the earliest signs we get of where the semiconductor evolution is taking us.

I'm not saying the discrete GPU market is going to be supplanted by an emerging fusion product lineup; that won't be the case any more than the emergence of multi-core CPUs led to the demise of multi-socket systems.

But as was the case with multi-core CPUs, we will see further marginalization (IMO) of discrete GPU systems in the desktop and notebook domains as xtor budgets continue to enable acceptable IGP performance.
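(A rough sketch of the heterogeneous dispatch pattern described above, splitting independent chunks of work between a GPU path and CPU threads. encode_on_cpu and encode_on_gpu are hypothetical placeholders, not TMPGEnc's or any real encoder's API; only the dispatch structure is the point.)

```cpp
// Heterogeneous dispatch sketch: a shared counter of work chunks, one
// consumer driving a (hypothetical) GPU path and a few consumers driving
// the CPU path. Faster resources naturally claim more of the chunks.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int TOTAL_CHUNKS = 64;
std::atomic<int> next_chunk{0};

void encode_on_cpu(int /*chunk*/) { /* placeholder: CPU encode of one chunk */ }
void encode_on_gpu(int /*chunk*/) { /* placeholder: GPU-offloaded encode of one chunk */ }

template <typename EncodeFn>
void worker(EncodeFn encode) {
    // Grab the next unclaimed chunk until the queue runs dry.
    for (int c = next_chunk++; c < TOTAL_CHUNKS; c = next_chunk++)
        encode(c);
}

int main() {
    std::vector<std::thread> pool;
    pool.emplace_back([] { worker(encode_on_gpu); });      // one GPU consumer
    for (int i = 0; i < 3; ++i)
        pool.emplace_back([] { worker(encode_on_cpu); });  // three CPU consumers
    for (auto& t : pool) t.join();
    std::printf("all %d chunks dispatched\n", TOTAL_CHUNKS);
}
```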
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Originally posted by: Idontcare
Originally posted by: Acanthus
I honestly believe that graphics will be the last thing to be integrated onto the CPU as we head toward the "system on a chip" models Intel and AMD are both striving for.

The amount of real estate that a high-performance graphics solution sucks up would be a disproportionately large part of the CPU.

Physics and graphics are both insanely, almost infinitely parallel tasks. As the programmability of graphics cards increases, physics is being programmed to work on GPUs. I just see no reason to offload physics to the CPU when there are plenty of other things for the CPU to do.

At face value, and for the reasons you discuss plus the ones I know you know about but weren't wasting the time to list, I fully agree with this sentiment.

Except for one nagging feeling. I get this feeling anytime the technical folks (myself included) get their technical reasoning all lined up when it comes to the direction of technology, because we so typically end up getting proved wrong once a couple of process node generations have played out.

For thermal/TDP reasons I agree there seems to be no compelling reason why the compute power of a discrete processor should ever be integrated into the die containing the CPU core logic.

But if they did do it, why would they? Remove the boundary conditions on your logic tree regarding TDP and die-size limitations (as these are removed by the eventual sequential iteration to process node X...be it 22nm or 16nm or 11nm) and figure out what value it would bring, regardless of the negatives, and then decide if it is likely to happen or not.

In this regard I see it as being inevitable. It is heterogeneous processing, the best of all worlds. Do we really need 16-core processors? Or would we be better off packing 6-8 cores onto a die along with a graphics processing module that doubles as both a GPU and a CPU-like processing resource, as we see with CUDA today?

I agree with the logic that so long as the CPU guys are allocated the full TDP budget (~150W max practical) they are just going to keep stamping out CPUs with increasing core count and increasing cache size.

But if/when project management comes along and says "CPU guys, for 22nm you get 50% of the xtor and TDP budget, GPU guys you get the other half," then I think you'll see some nicely powered GPU offerings (mid-range stuff, not high-end of course) and a new heterogeneous processor paradigm that will boost the performance of newly compiled programs.

I haven't quite figured out how the memory question gets answered...where does the 1GB of GDDR5 go? Maybe integrated into the mobo itself, just like the onboard cache was integrated onto the mobo back in the early Pentium days? (Mine was literally a slot, just like a RAM DIMM; I could upgrade the onboard mobo cache to whatever size/speed I desired and could find on the market. Maybe video RAM goes the same direction too.)

I always find it interesting when things come "full circle". I remember the days of having the VRAM in a slot, and it was a "big deal" to replace the 1MB with a 2MB chip. As for what this offers the general public, I think it would be great to return to the CPU/software being a viable solution for 2D and simple 3D. If spare processing power can fuel basic GPU functions, it would be a more elegant solution than either a low-end discrete card or a poor integrated GPU.

As for your comment regarding sharing the TDP, that could reap a lot of dividends for the user. We have benefited from both AMD and Intel focusing on power efficiency, and it would be great to see the GPU makers follow suit. The TDP of modern GPU designs is huge, and it would be great to focus on not only making them faster, but more efficient as well. A mid-range CPU with a low-to-mid-range GPU would be a great product.

 

waffleironhead

Diamond Member
Aug 10, 2005
6,924
437
136
Originally posted by: Nemesis 1

OK, OK. Stop! You are not looking at this picture correctly. OK. ATI/NV have their GPU world; they own the high-end market. They are masters of the hardware render pipeline.

Intel is not intruding on ATI/NV's hardware render world at all.

Intel is creating their own software render world. I think that is wonderful. It is Intel that's pushing ray tracing and software physics. It is Intel heading into a different world than we're used to. So when Intel decided to go magic (software), they bought 4 other game-related businesses, all having to do with software and threading.

In a very short time we will see more of what Intel is doing. Like this demo, which is outstanding. It really had everything but 2 elements. The camera motion was great. The motion blur all the time takes getting used to but is very realistic. The soft shadows were perfect and the light reflection was also very good.

Intel built hardware and put a bunch of software companies to work together on a unified engine. This is Intel's risk. They are betting that they can do it better. From what I've seen in this video, I am a believer. This was game footage.

So when Intel releases their Larrabee and this game, if it's a great game, better than anything current by a lot, and if it plays on Intel-only hardware, the crying will be loud and completely unfounded, as Intel is creating a new space and not intruding on the OLD hardware render pipeline.

Intel is going a different direction. If it's not better they will fail, it's that simple.

If it's great, all the AMD/ATI/NV trolls will crawl out of the woodwork saying how Intel did this or that.

The only thing Intel is doing here is going another direction with graphics. It's a huge gamble. But if software rendering is better (it is), Intel will be way ahead. NV could have been working in that same direction, but no. ATI, however, has been working in that direction, which is why there's the DX10.1 standard that NV tramples.

So if you have a problem with what Intel is doing, it's because you're a fanboi. If Intel fails, it changes nothing. If Intel succeeds, it changes everything. The only way Intel can succeed is by proving their tech superior. To do that Intel needed a game publisher and many other elements, and Intel spent the money. This demo is the first fruits we have seen of Intel's efforts. Anyone who wasn't impressed with that scene simply doesn't understand everything they are seeing here. It's remarkable, actually.

Well, if you could go ahead and lay some more info out there it would be appreciated, as you seem to be an Intel knowledge base of sorts.

When do you suppose that the hardware capable of running this program/game will be widespread? I am having a hard time picturing the time frames involved to bring all of this to the masses. Are you imagining the hardware needed to run Offset will be out in full effect 5 years down the road? Less? More?

It is not a question of the capabilities as much as the projected time frame of market usage. If, when this game/program is released to the public, there are only a few systems that can run it, I just don't see it taking off, so this seems to be a long, hard battle for Intel to fight. I wonder if they have the stamina to see it through to its final glory.

Is the hardware capable of running this going to be an add-on card? Is it also going to be capable of doing everything that current cards from ATI/Nvidia are capable of doing, or are we going to be forced to have 2 different cards to enjoy the best of both worlds?

It would not be the first time a superior (can't say if it will be or not) product has failed because of lack of adoption.

How do you see it playing out?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Idontcare
Originally posted by: Acanthus
I'm just not seeing a 16-core Nehalem system in software rendering being able to overcome even a mid-range gaming system of today.

Hell, I'd like to see a Nehalem handle the original Unreal Tournament at max settings.

That's my point: big fat cores and lots of them aren't nearly as efficient as a handful of big fat cores combined with a pile of SPEs. Look at what works in today's computers (quad-core CPU + xfire or SLI discrete GPUs) and skip ahead 4 or 6 years, when Moore's law has made xtor budgets even more silly stupid for the same thermal budget.

The homogeneous processing model is simpler to implement but it leaves performance on the table. I became a believer when I upgraded to the most recent version of TMPGEnc, which uses both my NV GPU and my Intel CPU to encode video.

If it can be done patchwork and piecemeal today, then these are the earliest signs we get of where the semiconductor evolution is taking us.

I'm not saying the discrete GPU market is going to be supplanted by an emerging fusion product lineup; that won't be the case any more than the emergence of multi-core CPUs led to the demise of multi-socket systems.

But as was the case with multi-core CPUs, we will see further marginalization (IMO) of discrete GPU systems in the desktop and notebook domains as xtor budgets continue to enable acceptable IGP performance.

I think the definition of acceptable is different between you and myself... We have to remember that developers go for the lowest common denominator when it comes to programming games... So if Intel can raise the floor on integrated and low-cost systems, I'm all for it.

However, I am discussing Larrabee's possibility of dethroning, or even being competitive with, "the traditional duo of graphics" on the high end.

I don't see it happening, at least not in round 1.

Intel won't "Conroe" the graphics card in 2009.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Acanthus
Originally posted by: Nemesis 1
I would like to see that also, but in RT. As long as the frame rates are playable and smooth, I'm going for the eye candy, no question about it. Isn't Quake 4 being released to play on Larrabee? I'm sure I read that. Old game, new graphics. I know Daniel was working on it and did some demos. But I'm sure 4 is being retuned for Larrabee.

Stop, brakes: I misread that. 16 cores? LOL, you're kidding, right? An 8-core Sandy with AVX will do that easily.

I will believe it when I see it, I guess.

We were told all about tile-based rendering, Mitosis, Intel's triumphant return to high-performance graphics a year ago, quantum computers for the home, IA64 being the wave of the future, Cell being the wave of the future...

I lose optimism quickly when someone proposes 1000%+ increases on a platform (software rendering).

Tile-based rendering and what is essentially 32 Atoms with 4-way hyperthreading doing software rendering at a reasonable performance level is certainly an ambitious project.

All I can really say about it is, we will see.

If 25 cores give us a 60fps average in Gears of War at 16x12, max settings, no AA, 16xAF, then 32 cores should give us around 75fps...

That is right around 8800GTX performance.

Intel will be starting out in the midrange if that is to be the performance level at launch.

Ya. I like this.

That Mitosis thing. You know me, right? When something is stinky I say so. Go look at the hardware slide of Mitosis. That shows how it combines software with hardware.

Then look at the names of the people involved in its development. Fast forward: Intel couldn't get Mitosis to work on the CPU. So did Intel trash all that work, or have bits and pieces worked their way into present tech? Then look over your shoulder at the new tech called Hydra. Look at the founders. Interesting.

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: waffleironhead
Originally posted by: Nemesis 1

OK, OK. Stop! You are not looking at this picture correctly. OK. ATI/NV have their GPU world; they own the high-end market. They are masters of the hardware render pipeline.

Intel is not intruding on ATI/NV's hardware render world at all.

Intel is creating their own software render world. I think that is wonderful. It is Intel that's pushing ray tracing and software physics. It is Intel heading into a different world than we're used to. So when Intel decided to go magic (software), they bought 4 other game-related businesses, all having to do with software and threading.

In a very short time we will see more of what Intel is doing. Like this demo, which is outstanding. It really had everything but 2 elements. The camera motion was great. The motion blur all the time takes getting used to but is very realistic. The soft shadows were perfect and the light reflection was also very good.

Intel built hardware and put a bunch of software companies to work together on a unified engine. This is Intel's risk. They are betting that they can do it better. From what I've seen in this video, I am a believer. This was game footage.

So when Intel releases their Larrabee and this game, if it's a great game, better than anything current by a lot, and if it plays on Intel-only hardware, the crying will be loud and completely unfounded, as Intel is creating a new space and not intruding on the OLD hardware render pipeline.

Intel is going a different direction. If it's not better they will fail, it's that simple.

If it's great, all the AMD/ATI/NV trolls will crawl out of the woodwork saying how Intel did this or that.

The only thing Intel is doing here is going another direction with graphics. It's a huge gamble. But if software rendering is better (it is), Intel will be way ahead. NV could have been working in that same direction, but no. ATI, however, has been working in that direction, which is why there's the DX10.1 standard that NV tramples.

So if you have a problem with what Intel is doing, it's because you're a fanboi. If Intel fails, it changes nothing. If Intel succeeds, it changes everything. The only way Intel can succeed is by proving their tech superior. To do that Intel needed a game publisher and many other elements, and Intel spent the money. This demo is the first fruits we have seen of Intel's efforts. Anyone who wasn't impressed with that scene simply doesn't understand everything they are seeing here. It's remarkable, actually.

Well, if you could go ahead and lay some more info out there it would be appreciated, as you seem to be an Intel knowledge base of sorts.

When do you suppose that the hardware capable of running this program/game will be widespread? I am having a hard time picturing the time frames involved to bring all of this to the masses. Are you imagining the hardware needed to run Offset will be out in full effect 5 years down the road? Less? More?

It is not a question of the capabilities as much as the projected time frame of market usage. If, when this game/program is released to the public, there are only a few systems that can run it, I just don't see it taking off, so this seems to be a long, hard battle for Intel to fight. I wonder if they have the stamina to see it through to its final glory.

Is the hardware capable of running this going to be an add-on card? Is it also going to be capable of doing everything that current cards from ATI/Nvidia are capable of doing, or are we going to be forced to have 2 different cards to enjoy the best of both worlds?

It would not be the first time a superior (can't say if it will be or not) product has failed because of lack of adoption.

How do you see it playing out?

I really have thought about this a lot, because it bothers me. Why would Intel take such a risk? Because you're right: if this game is Intel-exclusive, they're not going to sell a lot if it requires high-end hardware. So ya, perplexing.

Then ya say low-volume high-end gaming. Intel bought at least 4 software companies for that. You're right. But let's pretend that Intel is partnered with another company that sells complete systems and software in low-volume sales. A company that loves graphics.

A company who just developed a new open standard called CL. A company who is releasing a new OS that uses CL, an OS with new features for multi-threading called Grand Central. It's my hope that when this company releases its OS, Nehalem/Larrabee perform best on its platform compared to MS with DX11. Why? Because if you give these Mac people a more powerful gaming machine than MS has to offer, Intel will sell lots of Larrabees, and games, even if it's only 1 game. You know that's true. These guys are crazy fanbois. If you look at the available info we have now, you will see this is more than just a possibility. Let's not forget that a lot of these Macs are going to be DP. These Apple people have been sitting in the shadow of MS a long time. You let a RAY OF LIGHT (LOL) through and it will be a stampede. Let's see where Apple goes with memory; they have their choice.

 