Originally posted by: nosfe
well i didn't read it all, but from what i've seen he wasn't talking about software rendering but about using CUDA/Brook+/OpenCL/etc. instead of DirectX/OpenGL/etc.
Originally posted by: Arkaign
Originally posted by: apoppin
- intel talks the talk, but we saw the P4 walk; intel is capable of massive failure also
This is a grossly ignorant myth. P4 was not a failure at all, except in the enthusiast community after AMD64 hit the streets.
P4 Willamette was a slow start, and RDRAM didn't gel quite right, everything was too expensive and only the bleeding-edge types bought it, for little to no benefit over something like an Athlon Thunderbird on a decent board (hard to find a decent Socket A board back then though, before the NForce2 days).
P4 Northwood started a period of performance superiority for Intel that lasted a pretty long time. Northwood-A processors were roughly equivalent to their AMD-branded counterparts (e.g., a P4 2.0A was ~= an AXP2000+), then Northwood-B came out and would slightly edge its counterparts (e.g., a P4 2.4B would be slightly faster than an AXP2400+), and then the Northwood-C series started running away with the game at the end (e.g., a P4 2.8C would often outperform an AXP3200+ in encoding/games, and the 3.2C was pretty dominant in everything). Not helping AMD's case was the ridiculous overclockability of the Northwood chips. Among the AMD chips, a couple of gems stood out: some assorted mobile AXPs, and the Barton 2500+, which was a pretty reliable 3200+ stand-in. But it was a pretty long run of success for Intel even from an enthusiast perspective. From the market perspective, Intel just piled profit upon profit.
P4 Prescott was the beginning of the end of the P4 architecture. EVEN SO, there were many benchmarks where the P4 remained competitive, particularly encoding. AMD launched their finest hour around this time with the Athlon64 stuff, but people forget that there wasn't a huge performance delta at this time. A Prescott 3.4GHz was not noticeably slower than an Athlon64 3400+, although the 3400+ was hands-down a better product (cooler, great NV chipsets, excellent memory performance, great FPU, etc.). Whatever the case, Intel still sold gobs of chips, and continued to enjoy massive profits. The smart buyers and enthusiasts were buying AMD64 boxes, but the unwashed masses continued to buy Intel Inside for the most part.
Pentium D, Presler, etc, all were placeholders that moved performance slightly upward and offered new SKUs until Core2 was ready to hit.
Overall, P4 was a HUGE hit for Intel. Just look at the profits they made, and the long period of time in which a Northwood box was often the fastest thing you could build.
If you want to look at Intel failures, look at their stumbling around with Itanium, and with server offerings in general. They've always dominated desktops, and it looks likely to continue, but the server/backoffice environment is quite a different story. And of course i740 was no home run either lol.
Originally posted by: taltamir
please, do try reading it, and you will see what he is actually talking about.
TS: From my point of view, the ideal software layer is just to have a vectorizing C++ compiler for every architecture: NVIDIA, Intel, AMD, whoever. Let us write code in C++ to run on the GPU.
JS: It sounds like, instead of the standard CPU plus GPU configuration, we may just have many-core CPUs... or, sorry, not many-core general-purpose CPUs[...]
TS: No, I see exactly where you're heading.
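For what it's worth, the kind of code Sweeney is asking for is plain scalar C++ over independent data elements, something a vectorizing compiler could map onto SSE lanes or GPU threads with no graphics API in sight. A minimal sketch (the tone-map formula here is just an illustrative placeholder, not anything from the interview):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Each pixel is computed independently, so a vectorizing compiler
// (or a GPU backend) is free to run the loop body in parallel lanes.
// The shading math is a made-up Reinhard-style tone map for illustration.
std::vector<uint8_t> shade(const std::vector<float>& luminance) {
    std::vector<uint8_t> out(luminance.size());
    for (std::size_t i = 0; i < luminance.size(); ++i) {
        float l = luminance[i];
        float mapped = l / (1.0f + l);              // compress HDR value into [0,1)
        out[i] = static_cast<uint8_t>(mapped * 255.0f + 0.5f);
    }
    return out;
}
```

The point is that nothing in the loop mentions shaders, render states, or DirectX; whether it runs on a CPU or a GPU is a compiler/back-end decision.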
Originally posted by: taltamir
ok, i see a sales pitch for a graphics engine. it's not a review btw, it's about a million bullet points, AND I actually went through them (almost all, actually). what does this have to do with what, exactly?
Originally posted by: Nemesis 1
Originally posted by: taltamir
ok, i see a sales pitch for a graphics engine. it's not a review btw, it's about a million bullet points, AND I actually went through them (almost all, actually). what does this have to do with what, exactly?
I just found it interesting that it's done in C++. Keep in mind this was the engine before Intel bought the company. Much of the overview really doesn't mean anything for Larrabee, but for ATI DX10.1 it does. Intel's Larrabee software renderer should allow the Project Offset game to increase its capabilities to 2x what we know about the engine and the game. Add in Havok physics and this looks like what the article was talking about. It's exciting stuff. But that ATI Cinema 2.0 stuff is really, really cool. That's the kind of stuff that really gets me excited, to see that realism.
I remember the first time I played the FEAR demo. It actually gave me a rush. Had that game been done with the rendering techs underway, it would have literally scared the shit out of me. But it was a one-time thrill.
With the kind of graphics we're talking about, the screen should immerse you. That's what I want to see. Remember the first time you saw IMAX? The experience doesn't change; each viewing is the same intense visuals. That's what's cool.
Originally posted by: nosfe
i read it all and still don't see where he states that GPUs are history
TS: From my point of view, the ideal software layer is just to have a vectorizing C++ compiler for every architecture: NVIDIA, Intel, AMD, whoever. Let us write code in C++ to run on the GPU.
JS: It sounds like, instead of the standard CPU plus GPU configuration, we may just have many-core CPUs... or, sorry, not many-core general-purpose CPUs[...]
TS: No, I see exactly where you're heading.
please point out to me where he says that it's the end of GPUs; besides the article's title, all i'm seeing is the death of DirectX/OpenGL and the current way of programming for GPUs, plus a lot of talk about making C++ (CUDA/Brook+/OpenCL are all based on C) run on GPUs. Just because he says "software rendering" doesn't mean that it'll run on the CPU; technically speaking, "software" cannot render anything, it's hardware rendering all the way, it just depends on what hardware will render it. The thing is that CPUs and GPUs are different beasts, suited for different types of workloads; raster rendering is what GPUs were created for, so guess which of the two is better suited for the task
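To put the "it just depends on what hardware runs it" point in concrete terms: the inner loop of a rasterizer is a per-pixel coverage test like the textbook edge-function sketch below, and the exact same C++ could execute on a CPU core or a GPU lane. This is generic illustrative code, not anything from the article:

```cpp
#include <cassert>

// Signed area of triangle (a,b,c); positive when c lies to the left of
// edge a->b. This is the classic edge function used by rasterizers.
float edge(float ax, float ay, float bx, float by, float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

// Coverage test: is the pixel center (px,py) inside the CCW triangle?
// Every pixel is tested independently of every other pixel, which is
// exactly why this workload maps so naturally onto thousands of GPU threads.
bool inside(float px, float py,
            float x0, float y0, float x1, float y1, float x2, float y2) {
    return edge(x0, y0, x1, y1, px, py) >= 0.0f &&
           edge(x1, y1, x2, y2, px, py) >= 0.0f &&
           edge(x2, y2, x0, y0, px, py) >= 0.0f;
}
```

Run that loop over a framebuffer on a CPU and it's "software rendering"; hand it to fixed-function raster hardware or GPU threads and it's "hardware rendering". The math doesn't change, only the silicon does.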
What he is talking about at the end is Fusion-type processors, but with high-end graphics chips in them; that's way, way in the future though, unless there's some breakthrough in getting rid of the processor's heat. Don't forget that he doesn't know hardware engineering; he doesn't know what's feasible with current and near-future tech. He just writes code, and that's why he is talking about the coding aspect of graphics cards and daydreaming about a beautiful world where all he needs is C++, or some language based on it, to do his work.
Problem is, the real world doesn't quite work that way. Just look at web browsers: they still haven't all gotten together to make every browser render the same page the same way. Pages are still full of "tweaks" needed for different browsers, and there aren't any hardware restrictions there, unlike in the graphics world.
Originally posted by: apoppin
Originally posted by: Nemesis 1
Originally posted by: taltamir
ok, i see a sales pitch for a graphics engine. it's not a review btw, it's about a million bullet points, AND I actually went through them (almost all, actually). what does this have to do with what, exactly?
I just found it interesting that it's done in C++. Keep in mind this was the engine before Intel bought the company. Much of the overview really doesn't mean anything for Larrabee, but for ATI DX10.1 it does. Intel's Larrabee software renderer should allow the Project Offset game to increase its capabilities to 2x what we know about the engine and the game. Add in Havok physics and this looks like what the article was talking about. It's exciting stuff. But that ATI Cinema 2.0 stuff is really, really cool. That's the kind of stuff that really gets me excited, to see that realism.
I remember the first time I played the FEAR demo. It actually gave me a rush. Had that game been done with the rendering techs underway, it would have literally scared the shit out of me. But it was a one-time thrill.
With the kind of graphics we're talking about, the screen should immerse you. That's what I want to see. Remember the first time you saw IMAX? The experience doesn't change; each viewing is the same intense visuals. That's what's cool.
What is the difference between programming C++ in CUDA and using PhysX?
--it looks like nvidia is also heading for "fully programmable" GPUs - what you want to do on Larrabeast next year, you can evidently do *now* with Tesla and CUDA, or even with AMD's new SDK.
.. or am i missing some facts?