Originally posted by: BenSkywalker
On a more pedestrian side: Does anyone have an idea at what resolution the demo was run? And assuming, for a moment, that ray tracing were as ubiquitous as rasterization is today- how would AMD/NV's offerings fare at that ray tracing demo?
This is a bit of a loaded question. If nV/AMD ran that exact same demo right now their boards would suck running it, much as Larrabee does. The big difference is that Larrabee is built with nigh no compromises to run exactly that, while AMD/nV would just happen to be able to run it because of what their hardware is capable of; a huge portion of their die space would simply sit idle while doing it.

There are many, many reasons why ray tracing isn't a good choice for real time rendering- nVidia and ATi both know this quite well (dynamic geometry and diffuse lighting being huge stumbling blocks for ray tracing, and both key elements of games). Intel made a gamble that rasterizers would stop scaling, at which point ray tracing would start to catch up (you could have a million cores working on a ray traced scene with perfect scaling as long as the support systems were in place). The problem for Intel is that rasterization is scaling as fast as it ever has- so the rift is still growing at a significant pace, and it will be a very long time before ray tracing comes close to being even with rasterization for real time (likely more than a decade at least- and that is just talking speed; in terms of feature parity it will take much longer).
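To put that "perfect scaling" point in concrete terms, here is a rough C++ sketch (mine, nothing to do with the actual demo code): every primary ray is independent per pixel, so spreading pixels across threads scales almost linearly with core count, and the real limits are the shared support systems underneath (the acceleration structure, memory bandwidth).

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Color { float r, g, b; };

// Stand-in for a real ray/scene intersection walk; here it just returns a gradient
// so the sketch is self-contained.
Color traceRay(int x, int y, int width, int height)
{
    return { float(x) / width, float(y) / height, 0.5f };
}

void renderFrame(std::vector<Color>& framebuffer, int width, int height, unsigned numThreads)
{
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t] {
            // Interleaved scanlines per worker: no pixel depends on any other,
            // so adding cores adds throughput almost linearly. The catch is the
            // shared support systems every thread leans on (acceleration
            // structure, memory bandwidth), not the algorithm itself.
            for (int y = int(t); y < height; y += int(numThreads))
                for (int x = 0; x < width; ++x)
                    framebuffer[std::size_t(y) * width + x] = traceRay(x, y, width, height);
        });
    }
    for (auto& w : workers) w.join();
}

int main()
{
    const int width = 640, height = 480;
    std::vector<Color> framebuffer(std::size_t(width) * height);
    renderFrame(framebuffer, width, height, std::max(1u, std::thread::hardware_concurrency()));
}
```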
If this was the motivation, versus, say, a desire to diversify into alternative revenue sources such as the existing GPU market, then it would make sense that their priority is really to make sure the product can serve the GPGPU industry, and if it can also be sold as a discrete GPU then that is just extra.
Don't get me wrong, I think Intel would love to own the GPU gaming market, and the console space as well with Larrabee, but it appears to be a rather distant secondary concern.
So do you feel that it's sort of a bait and switch?
I think they are using the PCI-E slot as a Trojan horse for their next progression of computing platforms. They get into your system as a graphics card, but look how much faster it is now at all these other tasks.
That Intel may be trying very hard to mislead GPGPU makers into thinking they are going all out to be competitive in the graphics market, when all along, ever since the idea of Larrabee was first fostered, the goal was to get a multicore CPU to match or outperform massively parallel platforms such as CUDA or Stream?
I don't think they have fooled AMD or nVidia for an instant, but they are going to try and fool the typical consumer in a way. Look at the entire design of Larrabee: it is flat out bad for a GPU. Using pre-P2 cores bolted together with some modifications for running graphics? It can't be taken seriously as an attempt to enter the GPU market. What it does do, however, is put an x86 alternative to CUDA out there before CUDA gains too much traction. I think it could end up a mixed bag there too. Yes, Larrabee is x86 based, which is going to help some, but it is also an in-order design and relies heavily on vectorization for performance. In other words, almost no code base around is going to run fast on it without a rewrite anyway (anything old enough to be written for in-order cores isn't going to be heavily vectorized, and anything new enough to be heavily vectorized is unlikely to be written for in-order execution).
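To make the rewrite point concrete, here is a rough sketch (mine, and the 4-wide SSE is purely illustrative- Larrabee's vector unit is far wider): the same trivial loop written scalar and hand-vectorized. Only the second keeps a wide vector unit busy, and old code simply does not come that way.

```cpp
#include <immintrin.h>
#include <cstddef>

// Scalar SAXPY: one multiply-add per iteration, typical of an older code base.
// On a core whose performance lives in its vector unit, most of the hardware idles.
void saxpy_scalar(float a, const float* x, float* y, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Hand-vectorized SAXPY using 4-wide SSE (width chosen only for illustration).
// Old code doesn't look like this without a rewrite, which is the point.
void saxpy_vector(float a, const float* x, float* y, std::size_t n)
{
    const __m128 va = _mm_set1_ps(a);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i)   // scalar tail for leftover elements
        y[i] = a * x[i] + y[i];
}

int main()
{
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8] = {};
    saxpy_scalar(2.0f, x, y, 8);
    saxpy_vector(2.0f, x, y, 8);
}
```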
Larrabee's weakness as a GPU is one of the reasons I see nV as so utterly bullish going head to head with Intel on this one. They are shouting from the rooftops about how badly they are going to whip Intel in the GPU market because they know they are- and it isn't even going to be close. They don't go nearly as far when discussing ATi parts, as they know ATi is always going to be competitive (even on ATi's 'bad' cycles they are still in the ballpark; Intel won't be). nV is starting very early, and very confidently, to convince users that Intel can't make a GPU worth a damn- even though they know that isn't what Intel is really trying to do (at least in the near future). Because nV sells tens of millions of GPUs to gamers, they can sell parts to the HPC sector for a relative pittance when you look at the R&D spent developing them versus the volume those markets push. If Intel honestly wanted to make the fastest GPU on the market, put the same amount of money into it as they did into Larrabee, and made it on the same build process (which ATi/nV can't hope to compete with), they would at the very least be extremely competitive if not dominant- but that isn't their goal. They want to use the GPU market to subsidize further development and to work as a potential defensive measure if things were to go horribly wrong for Intel moving forward (say, the DX14 timeframe, when any ARM CPU combined with a GPU can run any normal application far faster than a CPU... maybe).
I think it is safe to say there are likely a lot more people at Intel worried about CUDA/Stream and Tegra than there are at AMD/nV worried about Larrabee. Maybe Intel will get serious with Larrabee2, or perhaps they will run back to their integrated doghouse with their tail between their legs like the last time they wanted to play in this market. However it plays out, in relative terms I have no problem stating that Larrabee will be much faster in GPGPU measures than in any gaming benchmarks.