Originally posted by: Idontcare
Originally posted by: Acanthus
I honestly believe that graphics will be the last thing to be integrated onto the CPU as we head toward the "system on a chip" models Intel and AMD are both striving for.
The amount of real estate that a high-performance graphics solution sucks up would be a disproportionately large part of the CPU.
Physics and graphics are both insanely, almost infinitely parallel tasks. As the programmability of graphics cards increases, physics is being programmed to run on GPUs. I just see no reason to offload physics to the CPU when there are plenty of other things for the CPU to do.
At face value, and for the reasons you discuss plus the ones I know you know about but didn't waste the time listing, I fully agree with this sentiment.
Except for one nagging feeling. I get this feeling any time the technical folks (myself included) line up their technical reasoning when it comes to the direction of technology, because we so often end up getting proved wrong once a couple of process node generations have played out.
For thermal/TDP reasons I agree there seems to be no compelling reason why the compute power of a discrete processor should ever be integrated into the die containing the CPU core logic.
But if they did do it, why would they? Remove the boundary conditions on your logic tree regarding TDP and die-size limitations (as these are removed by the eventual sequential iteration to process node X...be it 22nm or 16nm or 11nm), figure out what value it would bring regardless of the negatives, and then decide whether it is likely to happen or not.
In this regard I see it as being inevitable. It is heterogeneous processing, the best of all worlds. Do we really need 16-core processors? Or would we be better off packing 6-8 cores onto a die along with a graphics processing module that doubles as both a GPU and a pool of CPU-like processing resources, as we see with CUDA today?
I agree with the logic that so long as the CPU guys are allocated the full TDP budget (~150W max practical) they are just going to keep stamping out CPUs with increasing core count and increasing cache size.
But if/when project management comes along and says "CPU guys, for 22nm you get 50% of the xtor and TDP budget, GPU guys you get the other half" then I think you'll see some nicely powered GPU offerings (mid-range stuff, not high-end of course) and a new heterogeneous processor paradigm that will boost the performance of newly compiled programs.
I haven't quite figured out how the memory question gets answered...where does the 1GB of GDDR5 go? Maybe integrated into the mobo itself, just like the onboard cache was integrated onto the mobo back in the early Pentium days? (Mine was literally a slot, just like a RAM DIMM; I could upgrade the onboard mobo cache to whatever size/speed I desired and could find on the market. Maybe video RAM goes the same direction too.)