Originally posted by: JackyP
Yes, AMD has earned at least some level of admiration, but Idontcare, as an industry insider you should know that it is not that hard for them to keep up, because of diminishing returns. If they had done it while remaining profitable, it would have been quite a feat.
Intel can invest all the money they want, 10 times the R&D won't give them 10 times the performance in any case. No comparison is ever fair, both companies' employees are probably doing their best and I'm assuming are the best in their respective fields.
Am I wrong?
You are absolutely correct when you bring up the fact that diminishing returns in the pursuit of pushing the edge of technological capability will reduce the efficacy of larger and larger R&D budgets.
It is a philosophically similar situation to single-core versus multi-core processors: adding more and more cores produces higher performance (up to a peak, beyond which it actually causes harm), but with ever-diminishing efficiency for each successively added core.
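That core-scaling analogy can be sketched with Amdahl's law plus a small coordination penalty (this is my own illustration, not anything from the thread; the parallel fraction and per-core overhead are made-up numbers chosen only to show the gain-peak-decline shape):

```python
# Sketch: Amdahl's law with a per-core coordination penalty, illustrating
# diminishing (and eventually negative) returns from added cores.
# ASSUMPTIONS for illustration only: 95% parallel fraction, 0.3% overhead
# per extra core. Real workloads vary widely.

def speedup(cores, parallel_frac=0.95, overhead_per_core=0.003):
    # Amdahl's law: serial fraction limits the achievable speedup.
    amdahl = 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)
    # Coordination overhead grows with core count and eventually dominates.
    return amdahl * (1.0 - overhead_per_core * (cores - 1))

for n in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"{n:4d} cores -> {speedup(n):5.2f}x")
```

With these invented constants, each doubling of cores buys a smaller per-core return, and somewhere between 64 and 128 cores the speedup actually declines — the same shape being claimed for ever-larger R&D budgets.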
We could view Intel as having a multi-core R&D approach, and AMD not so much. So when both are attempting to optimize for a specific process technology or architecture asset, they can come pretty close to one another's results on that singular item. But Intel gets to reap the rewards of having many more such projects running in parallel, so the aggregate is predestined to be the superior product for the goal in mind (which for Intel is generally gross margins... even in P4 days the GMs held above 50% thanks to yield management and die-size management, among thousands of other things).
So yes, 10x more R&D won't mean 10x more performance... but 10x more R&D can mean 10x more employees, or 9x higher-paid employees, and with that elevated headcount (and cost structure) comes the expectation from shareholders that future earnings and GMs scale by 10x, and not 2x or 20x, absent some non-arbitrary damn good reason.
In other words, the burden of proof falls on Intel to demonstrate, through consistently delivering on their product roadmap, that they are optimizing/maximizing their position on the diminishing-returns curve relative to maximizing shareholder returns.
Now when it comes to comparing AMD to Intel in a "GM-normalized comparison of the efficacy of R&D budgets" there is an obvious disparity in the results. But this wasn't the case in my experience at Texas Instruments, where our GMs were close(r) to Intel's despite our ~4x lower R&D budget. In that case it was always apparent to us (because employees migrate in both directions, and invariably like to talk about the deltas in work environments) that we were literally running about 1/4 the number of fully funded projects in terms of parallel pathfinding and parallel integration projects for later downselection to the production-worthy process technology of choice.
It is a reality: 4x more R&D means 4x more parallel programs. Now, we got lucky sometimes and our one program in a certain area (say strained silicon) happened to overlap Intel's project portfolio, so we both ended up at the same endpoint albeit we spent, say, 1/4 as much getting there. But there are other glaringly obvious cases where Intel spent the money and got better technology in exchange (for example, their tungsten plug process is bar-none industry best: voidless and near-seamless W plugs, pure beauty when seen in SEM cross-section).
My point is really just this: Intel has no excuse not to succeed at every step, so no one really is surprised by their continued success, but AMD has every excuse for not succeeding. So we (meaning people who know enough about the industry to have educated opinions on these things, not the ignorant masses on public forums) tend to have lowered expectations anyway, and are always intrigued when they get 95% of the way to Intel versus 85 or 90%.
I'm by no means discounting the hard work done by individuals at either company; Intel engineers and project managers work just as many hours and spend just as few days with their families as the AMD guys (judging by reports of friends who work at both). This is merely a "big picture" viewpoint I am expressing here.
Originally posted by: Cookie Monster
AM3 Denebs will probably net them maybe ~5% extra performance thanks to DDR3 and an increase in uncore frequency (NB/IMC/L3). It's strange to see the desktop Phenom II having an uncore frequency of only 1.8GHz whereas the Shanghais are clocked at 2.2GHz.
Going by the scaling performance results from The Tech Report's review of both Shanghai and Deneb using two HPC programs (Euler3D and MyriMatch), it appears that the slower 1.8GHz L3$/NB clockspeed does not markedly reduce scaling performance for these 45nm chips:
The 2384 has 2.2GHz L3$/NB
The 2356 has 2.0GHz L3$/NB
The X4 940 has 1.8GHz L3$/NB
The X4 9950 has 2.0GHz L3$/NB
Stars Euler3D Scaling Graph
MyriMatch Proteomics Scaling Graph
These two cases obviously can't represent the entirety of applications a desktop user will encounter, but the data here does support the notion that above some minimum threshold of L3$ latency and bandwidth (maybe around 1.6GHz?), the performance gains from additional bandwidth and latency reductions in the L3$/NB uncore are well into diminishing returns.
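To picture that saturation claim, here is a toy model where performance approaches a ceiling as the uncore clock rises past a knee near 1.6GHz. The curve shape and every constant in it are assumptions for illustration only, not values fitted to the Tech Report data:

```python
import math

# Toy saturation model (ASSUMPTION, not fitted to any benchmark):
# relative performance rises quickly below a knee frequency (~1.6GHz)
# and flattens above it, so 1.8GHz vs 2.2GHz uncore differs very little.

def relative_perf(uncore_ghz, knee_ghz=1.6, steepness=3.0):
    # Returns a 0..1 fraction of the performance ceiling.
    return 1.0 - math.exp(-steepness * uncore_ghz / knee_ghz)

for f in (1.0, 1.6, 1.8, 2.0, 2.2):
    print(f"{f:.1f}GHz uncore -> {relative_perf(f):.3f} of ceiling")
```

Under these made-up constants, the step from 1.8GHz to 2.2GHz recovers only a couple of percent, while the same 0.4GHz step taken down near 1.0GHz would cost far more — which is the shape the Euler3D and MyriMatch scaling results seem to suggest.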