I'm aware that the 3DMark 11 score is not representative of gaming scores; in this case the 20% is entirely due to the eDRAM.
I know people like to think about "what ifs", but in reality that's minute detail. It's the same as people imagining "what if" Nvidia's Maxwell had HBM, or AMD's Fury had a strong front end or Maxwell's power efficiency. The actual products, though, end up being pretty similar to each other: things like HBM and a better architecture don't coexist in one chip, and each is merely a differentiator that keeps them equally viable competitors.
Actually the GPU uses more power than the CPU. Iris Pro is not so powerful that the CPU must run at full throughput to load it at 100%. In the Hardware.fr link you can check the LuxMark power consumption in GPU mode with the CPU unloaded: that's a 50W difference once losses are accounted for, and with CPU + GPU loaded the CPU adds 21W...
You are assuming. The problem is that Carrizo basically isn't widely available to test your *theory* out. You could be right about 50W, slightly wrong, or entirely wrong. Not to mention no one is playing LuxMark; it isn't even a 3D benchmark anyway.
The stress test starts briefly at 36W and the device rapidly drops to 28W; the deltas at the SoC level are hence 24W and 17.5W, to which we can add the 0.5-0.7W that falls within the idle power consumption.
Indeed, those figures correlate perfectly with your numbers. Of course the 5010U can be set to a strict 15W or 10W, but then it won't achieve the same scores.
You should realize that if 10W were feasible at 2.1GHz, then the Y variants running at 1GHz would be at less than 2.5W real TDP.
Yet they are rated at 4.5W, and applying a raw square law points to almost 20W at 2.1GHz; let's assume it's 18W due to the uncore not scaling as much.
When Intel specs the same line, that is the 2C/4T parts, from 1.9GHz to 2.5GHz, it's obvious that the latter will have a TDP that is (2.5/1.9)^2 = 1.73x higher than the former; yet they are all specced at 15W, which is physically impossible.
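To make the arithmetic above explicit, here is a minimal sketch of the square-law scaling being applied. The pure f^2 assumption (power scaling with the square of frequency) is a simplification, and real chips deviate because voltage and uncore power don't track frequency the same way:

```python
# Square-law frequency scaling sketch: P2 = P1 * (f2 / f1)**2.
# Assumes power scales with frequency squared, which is a simplification;
# voltage scaling and uncore power change the real picture.

def scale_tdp(p1_watts: float, f1_ghz: float, f2_ghz: float) -> float:
    """Estimate TDP at f2 from a known TDP at f1 under a square law."""
    return p1_watts * (f2_ghz / f1_ghz) ** 2

# Y-series example: 4.5W rated at ~1GHz, scaled up to 2.1GHz.
print(scale_tdp(4.5, 1.0, 2.1))  # ~19.8W, "almost 20W" before any uncore correction

# 2C/4T line example: ratio between the 2.5GHz and 1.9GHz parts.
print((2.5 / 1.9) ** 2)          # ~1.73x, yet both are specced at 15W
```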
Better to acknowledge that their ratings are a mess than to keep denying the laws of physics, because that's the only thing left to try to "explain" the unexplainable...
You can't reliably prove that, because you are assuming the difference between 36W and 28W is entirely due to the SoC. Increasingly, the entire platform is becoming dynamic in its power usage. The only reason I quoted 10W is that it's just as inaccurate as the max power figure you are quoting; it could be a figure shown shortly after the test.
It can't be more efficient than Carrizo's GPU, by virtue of the same laws I explained above; it starts from too low a point to close the gap if it uses the same process as BDW.
While performance will be lower, it won't be by a significant degree (10-20%). That, however, means far lower power use, because of the lower average operating frequency and voltage. AMD shows the same thing: gains at desktop TDPs are pathetic, while being quite impressive at lower voltages.
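A minimal sketch of why this works, assuming the classic dynamic-power relation P proportional to C * V^2 * f; the specific voltage figures below are illustrative assumptions, not measured values:

```python
# Dynamic-power sketch: P is proportional to C * V^2 * f.
# The voltages here are illustrative, not measured; the point is that a
# modest frequency drop that also permits a lower voltage cuts power much
# faster than it cuts performance.

def dynamic_power(voltage: float, freq_ghz: float, cap: float = 1.0) -> float:
    """Relative dynamic power for a given voltage and frequency."""
    return cap * voltage ** 2 * freq_ghz

high = dynamic_power(voltage=1.10, freq_ghz=1.0)   # baseline operating point
low = dynamic_power(voltage=0.95, freq_ghz=0.85)   # ~15% lower frequency

print(f"performance loss: ~{(1 - 0.85) * 100:.0f}%")        # ~15%
print(f"power saving:     ~{(1 - low / high) * 100:.0f}%")  # ~37%
```

Under these assumed numbers, a ~15% frequency cut yields roughly a 37% power saving, which is the asymmetry being argued here.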
That's due to process, not uarch, and at high frequencies Intel has quite an advantage. You had to resort to Kaveri to try to make a point, but what differentiates that chip from Carrizo is essentially a more efficient process at average frequencies.
1) There's no Carrizo product to test that out; that's why I had to use the 7870K. Oh, and it's desktop to desktop, making it fair.
2) Minute detail, as I said in the beginning. There's always an assumption that someone will make a "dream device" combining the advantages of everyone.
The only thing I can tell you is this: Intel has the process advantage, AMD has HBM, and Nvidia has a good architecture. None of them has all three. I highly doubt you will get all that in one product, sadly. Someone *always* has something the others don't.