One place where I think the small-die strategy has helped AMD is in pricing flexibility. It has allowed them to effectively re-price previously high-end parts as the market pushes on. I think it also saves on R&D (if you can profitably sell last-gen's high-end parts in the mid-range, there is no need to do a separate midrange design each generation).
Two parts to this. First, during the last generation AMD made a bet on GDDR5 prices dropping rapidly. They won that bet. It was a risk, and hats off to them for putting it on the line. The reason I bring this up is that if you look at the current pricing issues, the GDDR3 on the GTX parts is the largest price difference between them and the 4xxx parts, not the chip. This gets compounded a bit because nV is using a higher bit width, which means a more complex PCB. I'm not saying AMD's chips aren't cheaper, but there isn't nearly the price rift some are making it out to be.
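To put rough numbers on that bet, here's a quick back-of-envelope in Python. The per-pin data rates are the published reference specs for the two cards; treat the rest as illustrative:

```python
# Why GDDR5 let AMD get away with a narrower (cheaper) bus:
# peak bandwidth = bus width in bits * per-pin data rate / 8 bits per byte.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

gtx260 = bandwidth_gb_s(448, 2.0)   # GDDR3 at ~2.0 Gbps/pin
hd4870 = bandwidth_gb_s(256, 3.6)   # GDDR5 at ~3.6 Gbps/pin

print(f"GTX 260 (448-bit GDDR3): {gtx260:.0f} GB/s")   # ~112 GB/s
print(f"HD 4870 (256-bit GDDR5): {hd4870:.0f} GB/s")   # ~115 GB/s
```

Same bandwidth either way, but the 448-bit bus means nearly twice the traces routed on the board, hence the pricier PCB.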
Second part: the G92 is the chip that has hung in for several generations, not its ATi counterparts (everything the red side was making when the G92 launched is long dead and buried). A larger chip is in several ways easier to scale up and down to account for yield issues, since a partially defective die can still ship as a cut-down part (a toy model of this below). It won't always be ideal on the margin side, but it does give them some flexibility in making the most of less-than-ideal yields. This isn't saying AMD's approach is wrong, just pointing out that nV's isn't wrong either. Two different approaches, pros and cons to each.
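For a feel of why salvage parts matter more on a big die, here's a toy Poisson yield model in Python. The defect density and die areas are made-up round numbers, purely for illustration:

```python
import math

D = 0.4  # defects per cm^2 (hypothetical)

def fully_working(area_cm2: float) -> float:
    # Poisson model: chance a die has zero defects
    return math.exp(-D * area_cm2)

def one_defect(area_cm2: float) -> float:
    # Chance of exactly one defect: salvageable as a cut-down SKU
    lam = D * area_cm2
    return lam * math.exp(-lam)

small, big = 2.6, 5.8  # ~260mm2 vs ~580mm2 dies (rough)

print(f"small die, perfect:     {fully_working(small):.0%}")  # ~35%
print(f"big die,   perfect:     {fully_working(big):.0%}")    # ~10%
print(f"big die,   salvageable: {one_defect(big):.0%}")       # ~23%
```

The big die yields far fewer perfect chips, but if a die with one dead cluster can be sold as a lower SKU, roughly a third of the wafer is still sellable instead of a tenth.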
As an aside, I'm really hoping for a top-to-bottom launch from NV early next year.
Honestly, I think there is close to no chance of that happening. With them launching their low-end DX10.1 parts, I think it is safe to say they are only going to push mid- to high-end parts with the GT300 core to start with.
But I think the strategy itself is a sound one, and one I'd like to see NV pursue. A GT360 that starts at $299 and gets pushed down to $175-180 when the next generation hits? Yes, please!
Isn't that precisely what they did with the G92 parts? If they had gone with GDDR5 and a 256-bit bus they could have done that with the GTX 260 too; they didn't, because they made the wrong bet. This generation it will be a bit easier to see the impact of die size, as there are fewer variables between the parts.
I give them 1-2 more years tops in the high end before they are gone for good.
First up, Intel needs to clear the lawsuit they are facing to see if they will be allowed to make graphics chips at all. Second, even using Intel's most aggressive projections, it is highly unlikely that the fastest Larrabee available in two years will be competitive with the GTX 295. Not the comparable part in that timeframe, the one available today. Intel poses no threat to nV or ATi in the high end for several years at the earliest.
nVidia has no viable future in the discrete graphics market. Once 2012 rolls around and AMD and Intel start incorporating their GPUs on die, they will have all-in-one solutions that nVidia won't be able to compete with.
So many ways this is wrong, and it isn't specific to nV by any stretch of the imagination.
First off, come 2012 the next generation of consoles is going to hit, meaning PC games are going to see a massive, instant spike in system requirements. I know PC gamers aren't used to it, but being tied to the consoles changes the normal ebb and flow of things considerably. A GPU that can push a 2011 title at 200FPS may be unplayable for titles hitting in late 2012. To frame this: the GS was a souped-up Voodoo1 (it didn't even match the Voodoo1 in everything), and its follow-up was a GF 7900-based part. That is a normal console-style evolution, and we will see something comparable in 2012. I state this mainly as an example of why GPU power is going to matter.
Now, if you believe that PC gaming is going to utterly die, then perhaps you will think this won't matter. I don't believe that at all.
So we get to the next segment, GPU comparisons. The i7 has 730 million transistors. The 5750 has 1.04 billion. Both chips are in the ~90W range for TDP (I'm using the lowest i7 numbers). For an on-die CPU/GPU combo chip to be competitive with today's mid-range, it would need to be packing ~2 billion transistors and pushing a ~180W TDP, with a die size of ~450mm2 give or take on Intel's process. Intel demands a 60% margin to enter a market; that's just how they do business. To reach that margin, even with their exceptional fabbing abilities, they would need to price the part far beyond the GTX 295. Granted, you are getting a CPU and a GPU in that deal, so that certainly must be taken into account (rough numbers below).
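A quick tally of those figures in Python, with the margin math made explicit. The transistor counts and TDPs are the ones quoted above; the $120 build cost is a purely hypothetical placeholder, there only to show how a 60% gross margin inflates the asking price:

```python
cpu_transistors_b, gpu_transistors_b = 0.73, 1.04   # i7, HD 5750
cpu_tdp_w, gpu_tdp_w = 90, 90                       # both ~90W parts

print(f"combined: ~{cpu_transistors_b + gpu_transistors_b:.1f}B transistors, "
      f"~{cpu_tdp_w + gpu_tdp_w}W TDP")             # ~1.8B, ~180W

# Gross margin m means price = cost / (1 - m).
def price_at_margin(cost: float, margin: float = 0.60) -> float:
    return cost / (1 - margin)

# Hypothetical $120 to build and package a ~450mm2 chip:
print(f"ask price: ${price_at_margin(120):.0f}")    # $300, before board costs
```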
That brings us to mid-level graphics plus a high-end CPU in terms of die space, certainly reasonable by general fabbing guidelines for Intel, a bit rougher for AMD, but here we hit the first major issue. AMD certainly has the graphics expertise to make it happen, but they don't have the fab capacity to handle something that complex on a large scale. Intel certainly has the fab capability, but they can't build a remotely decent GPU. Thinking that Intel could be even vaguely competitive with AMD transistor for transistor in graphics is absurd; the way things are shaping up, Larrabee will end up quite a bit larger than the largest AMD GPU while struggling to compete in the $100 segment. Neither company is in a good position to push this technology into the mid-range, for one reason or another. And that is the good news for them.
Then we get to bandwidth. Even in the lower mainstream market, the CPU socket doesn't have remotely enough bandwidth to feed both a CPU and a GPU (rough numbers below). Intel could double the bus width to compensate, but that drives up motherboard cost considerably and requires premium RAM to get the performance you are paying for. Between the added cost of the CPU, the motherboard, and the RAM, you are likely looking at somewhere around $200 in additional expenses to compete with a $100 video card.
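To see the gap, compare a typical desktop memory setup against a roughly $100 card. I'm using an HD 4850 as the example card; remember the socket bandwidth is also shared with the CPU:

```python
# Peak bandwidth in GB/s: channels * bits per channel * MT/s / 8 / 1000
def bw_gb_s(channels: int, bits_per_channel: int, mt_per_s: int) -> float:
    return channels * bits_per_channel * mt_per_s / 8 / 1000

dual_ddr3 = bw_gb_s(2, 64, 1333)     # dual-channel DDR3-1333
hd4850    = bw_gb_s(1, 256, 2000)    # 256-bit GDDR3 at ~2 Gbps/pin

print(f"dual-channel DDR3-1333: {dual_ddr3:.0f} GB/s (shared with CPU)")  # ~21
print(f"HD 4850 local VRAM:     {hd4850:.0f} GB/s (GPU-exclusive)")       # ~64
```

Triple the bandwidth, and the card doesn't have to share a byte of it with the CPU.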
For the next few years at least, on-die graphics are simply going to be a replacement for integrated chipset graphics. There is no chance they will even manage to be cost-effective in the mainstream segment, let alone the enthusiast space.