Iris Pro benchmarks are in, and... they're VERY GOOD


Khato

Golden Member
Jul 15, 2001
1,225
281
136
A chip that consumes as much as an A10-6700 65W desktop CPU
is considered a dedicated mobile chip??..

http://translate.google.de/translat...3/intel-iris-pro-5200-grafik-im-test/&act=url

The delta between idle and Crysis 3 is actually greater than the A10-6700's - the 4750HQ goes from 18W to 82W (64W delta) while the A10-6700 goes from 30W to 87W (57W delta). What's most interesting in those results, though, is that the delta power consumption in Crysis 3 between HD 4600 and Iris Pro 5200 is only 5 watts. That's 5 watts to go from ~27 fps to 45 fps.

Which raises the question of what's using so much power with Intel's graphics? Unfortunately it appears that the Windows-based processor power utility Intel offers just reports total package power. The Linux power governor version reports estimates for package, CPU cores, graphics, and uncore... but only in Linux.
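For anyone who wants that per-domain breakdown on Linux without Intel's tool, the kernel's powercap/RAPL interface exposes cumulative energy counters per domain (package, cores, and on some CPUs uncore/graphics). A minimal sketch of the conversion from two counter readings to average watts - the sysfs paths named in the comments are the standard intel-rapl layout, but availability varies by CPU and kernel, so treat this as an illustration:

```python
# Sketch: turn two cumulative RAPL energy readings into average power.
# On Linux the counters live under /sys/class/powercap/intel-rapl:0/energy_uj
# (package) with subdomains like intel-rapl:0:0 (cores) and intel-rapl:0:1
# (uncore or gfx, CPU-dependent). The exact layout is an assumption here;
# it varies by CPU and kernel version.

def average_watts(energy_uj_start, energy_uj_end, seconds,
                  max_energy_uj=2**32):
    """Convert two cumulative microjoule readings to average watts,
    handling one counter wraparound."""
    delta = energy_uj_end - energy_uj_start
    if delta < 0:               # counter wrapped between the two samples
        delta += max_energy_uj
    return delta / seconds / 1e6

# Made-up readings taken 1 second apart: 18 J consumed -> 18 W (idle-ish)
print(average_watts(1_000_000, 19_000_000, 1.0))   # 18.0
```

Sampling each domain's `energy_uj` before and after a benchmark run gives roughly the same estimates the Linux power governor reports.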
 

Kallogan

Senior member
Aug 2, 2010
340
5
76
Perf per watt is more or less the same. If I play with my i7 4700HQ locked at 2.5 GHz (the 4750HQ is locked at only 2.0 GHz when gaming) on my 750M, the power draw is 80-90W when gaming (73W for the Iris Pro system). An extra ~20% perf for the 750M on average for a few watts more. I'd say efficiency is nearly identical.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
That is a VERY optimistic prediction. In fact I would say it is totally impossible within the TDP constraints of a laptop, or even more so an ultrabook. Current Iris Pro is maybe half the speed of a desktop 7750, and the PS4 will probably be twice that. So you would need 4x the performance in the same power envelope. IMO it could take several generations to see that, if ever.

It might be possible to build a huge monster desktop chip like that, but I just don't see how it could be made to fit in a mainstream ultrabook power/thermal envelope.

I'm just summing up what ShintaiDK has claimed. The PS4 GPU is 1.8 TFLOPS, and he said the Broadwell Iris Pro iGPU will be 2 TFLOPS, all within the same or lower TDP than the current Haswell-based Iris Pro.




If we look at IB vs HW: for 2W more on the mobile front, you got FIVR, 256-bit paths and almost 3x the IGP peak power. All on the same node. And you doubt it can be doubled when going to 14nm?

And you both do know that shader performance is only one part of many on a GPU to determine gaming performance?
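For what it's worth, the peak-FLOPS figures being argued over here are simple multiplication. A quick sketch, assuming the commonly cited 16 FLOPs/clock per Haswell GT3 EU (two 4-wide FMA-capable SIMD units) - and, as the post above notes, peak shader throughput is only one of many inputs into gaming performance:

```python
# Back-of-envelope peak FLOPS: units x FLOPs-per-clock x clock.
# The 16 FLOPs/clock per Gen7.5 EU figure is the commonly cited one,
# not an official spec; treat these as illustrative numbers only.

def peak_gflops(units, flops_per_clock, mhz):
    return units * flops_per_clock * mhz / 1000

iris_pro_5200 = peak_gflops(40, 16, 1300)    # 40 EUs at 1.3 GHz max turbo
ps4_gpu       = peak_gflops(1152, 2, 800)    # 1152 shaders, FMA = 2 FLOPs

print(iris_pro_5200)   # 832.0  GFLOPS
print(ps4_gpu)         # 1843.2 GFLOPS -> the "1.8 TFLOPS" figure
```

On these assumptions, hitting "PS4-class" 1.8 TFLOPS would indeed need a bit over a 2x jump in EU count, clocks, or per-EU throughput over today's Iris Pro, before bandwidth or TDP even enter the picture.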
 
Last edited:

Zap

Elite Member
Oct 13, 1999
22,377
2
81
It doesn't look like Iris Pro is going to make it to an LGA 1150 CPU, as far as I can tell. :\

Because it wouldn't fit in the socket. Maybe at 14nm.

What about Iris (vanilla) with the 40 EU and no extra memory?

I would like to see Intel selling the dual core with HD 5000 (Iris Pro without the L4 cache) for a more reasonable price... desktop means easy access to higher clocks and DDR3-2133+, so the cache is less important, I think...

YES! But I think it is as likely as a Core i3 K series unlocked CPU. I'd buy one! Gimme a Haswell i3 with Iris HD 5000 and unlocked multiplier for $130. I'm still running Sandy Bridge and Ivy Bridge on my various PCs. I'd upgrade a few of them to Haswell if Intel would release such a beast.

The average consumer wants to be able to purchase a laptop, use it all day, MAYBE play some games, and do it at a relatively cheap price. Intel has been inching closer and closer to this

You lost me at "relatively cheap price." I don't think it likely that Intel will give "average consumers" better graphics at low prices.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
You lost me at "relatively cheap price." I don't think it likely that Intel will give "average consumers" better graphics at low prices.

I'm thinking 2-4 years from now. Right now, though, Intel is trying to show that they can actually compete.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Yes, the 750M is the one on the market now, and the comparison should be to the 750M. The 750M is more or less the same as the 650M - about plus 10% if I remember correctly; it's a rebrand. Remember to get the GDDR5 version.

The 750M and the similar AMD chips will do fine against the 5200 as long as prices stay what they are for Iris Pro. The Iris Pro does not make a difference as it is. But it's the start of the end of the midrange. A single chip is far more cost effective for the OEM, and DDR4 will take a toll on the dGPU.

The 750M is significantly faster if using GDDR5. Clocks for the 650M GDDR5 are generally 850/1000 with boost; with boost the 750M clocks around 1058/1250, or about 24% faster. Problem is, Nvidia took one step forward and two steps back with the 7xxM series, shrinking the bus for midrange chips to 64 bits and using a lot of DDR3 (the 660M runs at 950/1250 with boost and always uses GDDR5; a 750M with DDR3 at 1058 MHz is not going to be any faster).
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Perf per watt is more or less the same. If I play with my i7 4700HQ locked at 2.5 GHz (the 4750HQ is locked at only 2.0 GHz when gaming) on my 750M, the power draw is 80-90W when gaming (73W for the Iris Pro system). An extra ~20% perf for the 750M on average for a few watts more. I'd say efficiency is nearly identical.

I can second that. On my 3630QM/660M system, with Tomb Raider at the high performance setting and the 660M overclocked to 1058/1250, power use is 95 watts after 5 seconds, dropping to 86 watts (turbo reduced; TR doesn't require it). Extended gaming uses 85-88 watts and I get ~44 fps at my settings. Remove the O/C (950/1250) and power use drops to 82 watts and 40 fps. Changing the clocks to 835/1250 MHz (removing boost) gives me 74 watts and 36 fps.
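Turning those measured numbers into frames per watt makes the efficiency point concrete (whole-platform power, so the CPU's and display's fixed draw flattens the differences):

```python
# Frames-per-watt from the 660M measurements above. Whole-platform
# wall power, so fixed CPU/display draw is amortized over the frames.
measurements = {
    "OC 1058/1250":       (44, 86),   # (fps, watts, extended gaming)
    "stock 950/1250":     (40, 82),
    "no boost 835/1250":  (36, 74),
}

for label, (fps, watts) in measurements.items():
    print(f"{label}: {fps / watts:.3f} fps/W")
# OC 1058/1250: 0.512 fps/W
# stock 950/1250: 0.488 fps/W
# no boost 835/1250: 0.486 fps/W
```

All three land within ~5% of each other, which is the "efficiency is nearly identical" point: at fixed platform overhead, raising the GPU clock buys frames at roughly the same marginal watts.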
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
With updated Intel drivers, it outperforms the Nvidia GT 650M in most tests, and the 740M. Nvidia and AMD have a LOT to worry about here - no wonder Apple ditched Nvidia in the MacBook Pro. Note: this is the 20 core cut-down version. This will only get worse for nV/AMD with Broadwell.

nV/AMD will increasingly be shut out of the ultrabook market (most ultrabooks already do not have a dGPU) and limited ONLY to full-size gaming laptops, where TDP and size do not matter.



It gets better. This is the 20 core, 47W TDP version of Iris Pro. The 40 core, 53W TDP version will be even faster than this. The 40 core (or possibly more) version will be used in the upcoming 2013 Retina MacBook Pro.

http://uk.hardware.info/reviews/4776/intel-iris-pro-5200-graphics-review-the-end-of-mid-range-gpus


They were never in ultrabooks to start with, but now they won't even be in mainstream machines.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Hahaha. That is funny as hell. I can understand someone being enthusiastic about a specific part, but to me it looks like you are the marketing guy @Intel responsible for sales of Iris Pro...

There was a linked review in which a faster CPU + GTX 750M consumes only 7W more in Crysis 3. That is with dedicated GDDR5 memory for the GPU and whole banks of DDR3 dedicated only to the system and applications.
Your 47W TDP chip is drawing almost 2x the amount it can dissipate - you know what that means? After 5 minutes of acceptable gameplay the CPU (and GPU, since it is integrated) will start to throttle - enjoy your slide show!
If you think about it: more TDP - better!
On top of that, Iris Pro performs around GTX 650M level. Slightly faster on low details, slower on high. There is a long way to the 7970M from there.

What is quite funny is that notebookcheck ranks the HD 5200 Iris Pro a class below the HD 7970M:

http://www.notebookcheck.net/Intel-Iris-Pro-Graphics-5200.90965.0.html
http://www.notebookcheck.net/AMD-Radeon-HD-7970M.72675.0.html

Even the benchmarks listed on both webpages seem to more or less agree.

The same website ranks the following mobile graphics cards above the Intel IGP:

GeForce GT 750M *
Radeon HD 8850M *
Radeon HD 7850M
GeForce GTX 660M
Radeon HD 8790M *
Mobility Radeon HD 4870 X2
Quadro 4000M
GeForce GTX 470M
GeForce GTX 480M
Quadro K1100M *
GeForce GT 650M
GeForce GT 745M *
Radeon HD 7770M
GeForce GTX 560M
Radeon HD 8770M *
GeForce GT 740M
Quadro K2000M
GeForce GTS 450
GeForce GTX 260M SLI
Mobility Radeon HD 5870
Quadro 5000M
FirePro M4000
Radeon HD 7750M *
FirePro M7820
Radeon HD 6870M
Radeon HD 4850
GeForce 9800M GTX SLI
GeForce GTX 460M
GeForce GT 730M
GeForce GT 645M *
Radeon HD 8830M *
Quadro 3000M
Quadro FX 3800M
GeForce GTX 285M
Mobility Radeon HD 4870
GeForce GT 640M
Radeon HD 7730M
Radeon HD 8750M *
GeForce GT 735M *
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
What is quite funny is that notebookcheck ranks the HD5200 Iris Pro as another class below the HD7970M
Iris isn't even close to 7970M performance. Notebookcheck is generally fairly good with their ordering, though on products they haven't tested much they just stick them where they guesstimate they would go.

Iris Pro would probably go in that list between the 650M (GDDR5) and the 640M.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Iris isn't even close to 7970M performance. Notebookcheck is generally fairly good with their ordering, though on products they haven't tested much they just stick them where they guesstimate they would go.

Iris Pro would probably go in that list between the 650M (GDDR5) and the 640M.

They say it is in the same class as the GT 640M, so you are more or less correct. The HD 7970M is between desktop HD 7850 and HD 7870 level performance, or probably around GTX 660 level. Looking at the review in the OP, with decent settings at 1920x1080 the GT 640M is faster than the HD 5200.

That is for an IGP that is a bigger chip than the one found in an HD 7770 (and I think the HD 7790 also), and probably with more transistors too.

Nvidia has also recently released the 79mm² GK208:

http://hexus.net/tech/reviews/graphics/59081-nvidia-gainward-geforce-gt-640-rev-2-gk208/

AMD also has the 90mm² Oland:

http://www.techpowerup.com/gpudb/1853/radeon-hd-8670.html

I expect mobile graphics cards based on these will be quite energy efficient, and will not be very expensive, considering they are made on a common process.
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
You lost me at "relatively cheap price." I don't think it likely that Intel will give "average consumers" better graphics at low prices.

I'm with you here. It seems like desktop users, who on average spend more than most consumers, are really getting the shaft in this respect. Why not include HD 5000 or 5100 on desktop SKUs? I understand it, but at the same time I don't - Intel wants to segment their product line so as to create value where it's needed, I get that. But an i7 with HD 5100? Why not? Oh well.
 
Aug 11, 2008
10,451
642
126


If we look at IB vs HW. For 2W more on the mobile front, you got FIVR, 256bit paths and almost 3x the IGP peak power. All on the same node. And you doubt it can be doubled when going 14nm?

And you both do know that shader performance is only one part of many on a GPU to determine gaming performance?

Raw GFLOPS is far from gaming performance. The HD 7750 has the same GFLOPS as the highest-level Iris Pro, but I am sure it has much better gaming performance. Even if they can double the raw GFLOPS for GT4, the gaming performance will likely still be far less than a 7850's.
 

Blue_Max

Diamond Member
Jul 7, 2011
4,227
153
106
Does the desktop variant fare just as well, or is Iris Pro a mobile-only part?
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de


If we look at IB vs HW. For 2W more on the mobile front, you got FIVR, 256bit paths and almost 3x the IGP peak power. All on the same node. And you doubt it can be doubled when going 14nm?

And you both do know that shader performance is only one part of many on a GPU to determine gaming performance?
FIVR is not simply adding to TDP, because it also increases efficiency. Its efficiency curve is said to be flat (which IMO means it scales the number of active cells with power needs), and it delivers, or allows for, finer granularity in power management, plus maybe lower tolerances, reducing power consumption even further.

So in the end it might even help to reduce TDP at the other end.

The IGP and CPU cores use balanced power management. If this has been improved, there might be more room for the IGP. But what power is left for the CPU cores (compared to SB or IB) when the IGP is at peak power?

It's not as static as it was in the past.
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
The width is the same as LGA1150 chips.


There's not much area around it to fit the IHS on. Unless they design an entirely new IHS that might fit there.

Ah heck, I hadn't seen that direct comparison. You're probably right. Shame, I was looking forward to them bringing out an LGA GT3e in the Haswell Refresh.
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
Platform TDP yes, CPU TDP no.
OK, so what were the contributions of the 256-bit paths and the IGP, if FIVR adds 1-2W to TDP?

Please read the available information on FIVR. It not only allows the chip to save power by being more efficient than external VRs. External VRs also mean that power has to be routed through the package to the chip, and power flow can't be switched on and off as fast. This means you have significant up and down ramp phases, at least causing leakage power consumption in the involved chip areas. And then there likely are bigger tolerances to be included, to avoid voltage droop effects etc.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,548
2,545
146
Hmm, while this is a step in the right direction, it seems not ready yet. I would be interested if/when they put a high-end part in a nice 17" laptop that performed as well as high-end GPUs from AMD and Nvidia, IF the driver/game support was there, and for a reasonable price. Ultrabooks and Macs, meh.

With that said, Intel should take the technology further and try to reinvent Larrabee; they might give the GTX 780 a run for its money. Wouldn't that be awesome - then we would get some real competition in the GPU market.
 

Kallogan

Senior member
Aug 2, 2010
340
5
76
Well, it seems like 2014 will be a serious breakthrough year for APUs/iGPUs. As a moderate gamer and laptop user, I'll probably go for it, either via Kaveri or Broadwell, if prices become acceptable.
 

rootheday3

Member
Sep 5, 2013
44
0
66
Nvidia has also recently released the 79mm² GK208:

http://hexus.net/tech/reviews/graphics/59081-nvidia-gainward-geforce-gt-640-rev-2-gk208/

AMD also has the 90mm² Oland:

http://www.techpowerup.com/gpudb/1853/radeon-hd-8670.html

I expect mobile graphics cards based on these will be quite energy efficient, and will not be very expensive, considering they are made on a common process.

I would expect that mobile parts based on these dies would have to be clocked significantly lower and would thus lose much of their performance. Note in particular that the GK208 review indicates they dropped to a 64-bit memory bus (with GDDR5 vs DDR3) and reduced ROPs and texture units to save die area, compensating by cranking the clocks up. GDDR5 is more expensive and not as power friendly as DDR3.

The specs on the Oland part look like a match for Trinity/Richland (384 shaders, probably of the VLIW4 or 5 type, not GCN). Since Trinity and Richland lose to Iris Pro, I wouldn't think this part would do any better (after clocks are pushed back down from 1 GHz to a more typical mobile number to hit TDP).

In other words, you can hit similar performance EITHER by being "narrow and high-clocked" - low die area (cost) but high power - OR by being "wider and lower-clocked" - more costly but lower power. You can't have your cake and eat it too. Are you looking for a fast GPU at low cost and don't care about power? Then these are probably decent options (e.g. for desktop). If you have a constrained power envelope, they may not be viable.

which also answers in part blackened's question above:
It seems like desktop users who, on average, spend more than most consumers are really getting the shaft in this respect. Why not include HD5000 or 5100 on desktop SKUs?
The answer is likely: because desktop users probably aren't willing to pay a sufficient premium over HD 4600 to allow Intel to maintain good margins on the extra die area for GT3/HD 5100. Why? Because low/mid-end desktop discrete cards offer more perf at ~$75 - power is not as much of a constraint there. This creates a price squeeze: a GT3/HD 5100 part can't be priced more than $30-40 above an HD 4600 part on desktop.
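The "narrow and high-clocked" versus "wider and lower-clocked" tradeoff above can be sketched with the first-order CMOS dynamic-power relation P ~ C·f·V². The 0.7x clock / 0.8x voltage operating point below is an illustrative assumption, not a measured one, and real chips add leakage, uncore, and memory power on top:

```python
# First-order dynamic-power model: P ~ (active units) * f * V^2.
# In the upper clock range V scales roughly with f, so per-unit power
# grows ~f^3 - which is why "wide and slow" wins on power and loses on
# die area (cost). Illustrative only; coefficients are assumptions.

def relative_dynamic_power(units, f, v):
    return units * f * v * v

narrow = relative_dynamic_power(units=1.0, f=1.0, v=1.0)   # baseline
# Twice the units at 70% clock and ~0.8x voltage: similar throughput...
wide   = relative_dynamic_power(units=2.0, f=0.7, v=0.8)

print(narrow)           # 1.0
print(round(wide, 3))   # 0.896 -> same-ish perf, less power, twice the die
```

Which is exactly the cake-and-eat-it point: the wide/slow configuration spends die area (cost) to buy back power, and the narrow/fast one does the reverse.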
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
I would expect that mobile parts based on these dies would have to be clocked significantly lower and would thus lose much of their performance. Note in particular that the GK208 review indicates they dropped to a 64-bit memory bus (with GDDR5 vs DDR3) and reduced ROPs and texture units to save die area, compensating by cranking the clocks up. GDDR5 is more expensive and not as power friendly as DDR3.

The specs on the Oland part look like a match for Trinity/Richland (384 shaders, probably of the VLIW4 or 5 type, not GCN). Since Trinity and Richland lose to Iris Pro, I wouldn't think this part would do any better (after clocks are pushed back down from 1 GHz to a more typical mobile number to hit TDP).

In other words, you can hit similar performance EITHER by being "narrow and high-clocked" - low die area (cost) but high power - OR by being "wider and lower-clocked" - more costly but lower power. You can't have your cake and eat it too. Are you looking for a fast GPU at low cost and don't care about power? Then these are probably decent options (e.g. for desktop). If you have a constrained power envelope, they may not be viable.

which also answers in part blackened's question above:

The answer is likely: because desktop users probably aren't willing to pay a sufficient premium over HD 4600 to allow Intel to maintain good margins on the extra die area for GT3/HD 5100. Why? Because low/mid-end desktop discrete cards offer more perf at ~$75 - power is not as much of a constraint there. This creates a price squeeze: a GT3/HD 5100 part can't be priced more than $30-40 above an HD 4600 part on desktop.

Oland is GCN, and both it and the GK208 will be faster than the GT 640M in the review in the OP; the GK208 drops power consumption even further over the earlier cards. The HD 7730 for desktop has the same specs as Oland but is a salvage part of the Cape Verde GPU found in the HD 7770. Oland is only found in mobile and OEM desktops.

The desktop HD7750 1GB cards have very low power consumption:

http://www.techpowerup.com/reviews/ASUS/HD_7750/24.html
http://www.techpowerup.com/reviews/HIS/HD_7750_iCooler/24.html

TechPowerUp do not use cheap measuring equipment - what they use for power measurements costs nearly $2000. They measure graphics card power consumption at the PCI-E slot and power connectors. That is under 45W for the entire card, including GPU, PCB, VRMs, cooler and GDDR5.

Iris Pro might be fast, but this is getting to Apple levels of "it's amazing" and the like. Once you start increasing the resolution, performance is not that hot TBH.

The problem is that other companies are now starting to introduce things like DRAM stacking, as seen with the Amkor work with Hynix on the PS4. Nvidia has stated it will use stacked DRAM on near-future cards too, which will no doubt save on power and the size of cards.

The other problem is the cost, especially considering how massive the Iris Pro-containing CPUs are. The GPU is frickin' massive (HD 7790 levels) in die area and probably easily outpaces something like Cape Verde in transistor count. The CPU is bigger in total than the Core i7 4960X, excluding the L4 cache (which is made on a more expensive process), and even with a shrink to 14nm, Intel will have to increase EU count, etc., by a decent amount if they want a good performance increase. They might need to make other changes too, and as you can see this is the same problem AMD and Nvidia have with their GPUs.

Moreover, all those billions spent on R&D, process development and fab building do not come cheap, so Intel won't sell them cheap - why should they?? They have 100,000 people to pay. They have to amortise the cost somehow, and large desktop CPUs at lower prices are not the answer, it seems. The whole Iris Pro development was pushed by Apple to make thinner laptops, which cost decent money. Intel is only going to do this if they can charge more for the privilege. It is only a small part of the market.

The future is in things like Atom which are small,probably have high yields and are cheap to make. The market is a race to the bottom.

The L4 cache Iris Pro carries is as big as a whole next-generation Atom SoC.

Even then, looking at the HD 4600-containing CPUs, even with Iris Pro's massive increase in EU count and massive increase in bandwidth, the performance scaling is not perfect, and this is the problem. People read way too much into marketing (GFLOPS, etc.) instead of seeing what is in front of them. Just doubling certain parts of a GPU does not always equate to a doubling of performance - this has been evident for the last decade, as you start to hit other bottlenecks in the design, and it affects any company which makes GPUs or IGPs, from Qualcomm to Nvidia, especially once a decent performance baseline has been established. People have too-short memories on computer forums, and computer companies always seem to "re-invent" the wheel. Meh. Just call me a cynic.



 
Last edited:

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
If we rewind several years, you'll remember that AMD was ahead of Intel in terms of x86 performance. What happened?

When AMD was riding high with the Athlon, their management completely wasted that opportunity by throwing their money away and making poor business choices.

You are aware that Intel illegally paid large computer manufacturers not to use AMD processors for years, right? They got the hell sued out of them for it. That is "what happened".

AMD didn't make the money they should have made (and needed, to keep maintaining their lead) during the multi-year stretch when they had the best processors.

This is supposed to be common knowledge.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
You are aware that Intel illegally paid large computer manufacturers to not use AMD processors for years right? They got the hell sued out of them for it. That is "what happened".

AMD didn't make the money they should have made(and needed to continue maintaining their lead) during the multi year time when they had the best processors.

This is supposed to be common knowledge.

If you're trying to educate people, please stick to the rest of reality as well.

AMD delayed 65nm because they were confident in their lead and wanted to milk the process node further.
AMD didn't expand capacity, because they only wanted the premium segment.
AMD delayed and reduced R&D, because they were confident that Intel would never catch their K8.
AMD wasted all its savings on ATI.

Then you can blame Intel as much as you want for the rest.
 