I would expect that mobile parts based on these dies would have to be clocked significantly lower and would thus lose much of their performance. Note in particular that the GK208 review indicates that Nvidia dropped to a 64-bit memory bus (with GDDR5 vs DDR3) and reduced ROPs and texture units to save die area, compensating by cranking up the clocks. GDDR5 is more expensive and not as power-friendly as DDR3.
The specs on the Oland part look like they are a match for Trinity/Richland (384 shaders, probably of the VLIW4 or 5 type, not GCN). Since Trinity and Richland lose to the Iris Pro, I wouldn't think this part would do any better (after clocks are pushed back down from 1GHz to a more typical mobile number to hit TDP).
In other words, you can hit similar performance EITHER by being "narrow and high clocked" - which has low die area (cost) but high power - OR by being "wider and lower clocked" - more costly but lower power. You can't have your cake and eat it too. Are you looking for a fast GPU at low cost and don't care about power? Then these are probably decent options (e.g. for desktop). If you have a constrained power envelope, that may not be viable.
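The trade-off above can be sketched as a back-of-the-envelope model. Dynamic power goes roughly as C·V²·f, and since voltage has to rise roughly with frequency in the DVFS range, power scales roughly with f³ per unit, while ideal throughput is just units × clock. All the numbers below are illustrative assumptions, not measured figures for any real part:

```python
def relative_power(units, clock_ghz, base_clock_ghz=1.0):
    """Power ~ units * f * V^2, with V assumed to scale with f (so ~ f^3)."""
    v_scale = clock_ghz / base_clock_ghz      # hypothetical V ~ f scaling
    return units * clock_ghz * v_scale ** 2

def relative_perf(units, clock_ghz):
    """Ideal throughput: shader units times clock."""
    return units * clock_ghz

# Narrow die at high clock vs a die twice as wide at half the clock:
narrow = (384, 1.0)    # (shader units, GHz)
wide   = (768, 0.5)

print(relative_perf(*narrow), relative_perf(*wide))    # 384.0 384.0 - same throughput
print(relative_power(*narrow), relative_power(*wide))  # 384.0 96.0  - ~4x power gap
```

Under these assumptions the wide/low-clocked design hits the same throughput at about a quarter of the power, at the cost of twice the die area - which is exactly the "cost vs power" choice described above.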
Which also answers, in part, blackened's question above:
The answer is likely: because desktop users probably aren't willing to pay a sufficient premium over the HD4600 to allow Intel to maintain good margins on the extra die area of the GT3/HD5100. Why? Because low/mid-end desktop discrete cards offer more performance at ~$75 - power is not as much of a constraint there. This creates a price squeeze: the GT3/HD5100 part can't be more than $30-40 above the HD4600 part on desktop.
Oland is GCN, and both it and the GK208 will be faster than the GT640M in the review in the OP; the GK208 also drops power consumption even further over the earlier cards. The desktop HD7730 has the same specs as Oland but is a salvage part of the Cape Verde GPU found in the HD7770. Oland is only found in mobile and OEM desktop.
The desktop HD7750 1GB cards have very low power consumption:
http://www.techpowerup.com/reviews/ASUS/HD_7750/24.html
http://www.techpowerup.com/reviews/HIS/HD_7750_iCooler/24.html
TechPowerUp do not use cheap measuring equipment - what they use for power measurements costs nearly $2000. They measure graphics card power consumption at the PCI-E slot and power connectors. That is under 45W for the entire card, including GPU, PCB, VRMs, cooler and GDDR5.
Iris Pro might be fast, but this is getting to Apple levels of "it's amazing" and the like. Once you start increasing the resolution, performance is not that hot, TBH.
The problem is that other companies are now starting to introduce things like DRAM stacking, as seen with the Amkor work with Hynix on the PS4. Nvidia has also announced the use of stacked DRAM on near-future cards, which will no doubt save on power and the size of cards.
The other problem is cost, especially considering how massive the Iris Pro-containing CPUs are. The GPU is frickin' massive (HD7790 levels) in die area and probably easily outpaces something like Cape Verde in transistor count. The CPU is bigger in total than the Core i7 4960X, excluding the L4 cache (which is made on a more expensive process), and even with a shrink to 14nm, Intel will have to increase EU count, etc. by a decent amount if they want a good performance increase. They might need to make other changes too, and as you can see, this is the same problem AMD and Nvidia have with their GPUs.

Moreover, all those billions spent on R&D, process development and fab building do not come cheap, so Intel won't sell these chips cheap - why should they? They have 100,000 people to pay. They have to amortise the cost somehow, and large desktop CPUs at lower prices are not the answer, it seems. The whole Iris Pro development was pushed by Apple to make thinner laptops which cost decent money. Intel are only going to do this if they can charge more for the privilege. It is only a small part of the market.
The future is in things like Atom, which are small, probably have high yields and are cheap to make. The market is a race to the bottom.
The L4 cache in Iris Pro is as big as an entire next-generation Atom SOC.
Even then, looking at HD4600-containing CPUs: even with the massive increase in EU count and the massive increase in bandwidth of Iris Pro, the performance scaling is not perfect, and this is the problem. People read way too much into marketing (GFLOPs, etc.) instead of seeing what is in front of them. Just doubling certain parts of a GPU does not always equate to a doubling of performance, and this has been evident for the last decade: you start to hit other bottlenecks in the design. It affects any company which makes GPUs or IGPs, from Qualcomm to Nvidia, especially once you have established a decent performance baseline. People have too short memories on computer forums, and computer companies always seem to "re-invent" the wheel. Meh. Just call me a cynic.
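The imperfect scaling described above is essentially Amdahl's law applied to a GPU pipeline: if only part of the frame time is shader-bound, doubling EUs speeds up only that part. A toy sketch (the 60/40 split between shader-bound and bandwidth-bound time is a made-up illustration, not a measured figure for any real IGP):

```python
def speedup(shader_fraction, shader_scale):
    """Overall speedup when only the shader-bound share of frame time
    is accelerated; the remaining (e.g. bandwidth-bound) share is fixed."""
    return 1.0 / (shader_fraction / shader_scale + (1.0 - shader_fraction))

# Assume 60% of frame time is shader-bound and EU count is doubled:
print(round(speedup(0.6, 2.0), 2))   # 1.43 - nowhere near 2x
```

That is why a doubled EU count plus a big bandwidth jump can still land well short of doubled benchmark results.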