Intel Skylake / Kaby Lake


dahorns

Senior member
Sep 13, 2013
550
83
91
But still, Skylake is on 14nm and Carrizo is on 28nm. So in a TDP-limited scenario, it seems like Intel should be able to pull ahead without having to resort to eDRAM. OTOH, we don't really know the true power consumption of either, and AMD reference platforms generally perform quite well compared to real-world devices.

I'd also consider that the U-series CPUs perform much better than their Carrizo counterparts.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
But still, Skylake is on 14nm and Carrizo is on 28nm. So in a TDP-limited scenario, it seems like Intel should be able to pull ahead without having to resort to eDRAM. OTOH, we don't really know the true power consumption of either, and AMD reference platforms generally perform quite well compared to real-world devices.

I would not say Intel has caught up to AMD or Nvidia just yet, but their graphics are getting pretty darn good. Compared to several years ago they are making significant progress. I look forward to seeing their IGP designs over the next few years.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Only 30% better at 15W than Carrizo. Am I reading that correctly?

30% better than the fastest 15W Carrizo inside AMD's reference platform, and almost as fast as a full-blown 35W FX-8800P. That's really impressive considering we were told by some users that it would become TDP-limited and barely beat HD Graphics 520 (let alone Carrizo).

And the best part is, no throttling after a Skyrim gaming session, 15 minutes of Prime95 and 10 minutes of Furmark.

AMD reference platforms generally perform quite well compared to real-world devices.

No question. The A10-8700P barely scores above ~1500 in the NotebookCheck review.


I would say the perf/SP for SKL GT2 is quite good - it is likely either TDP-limited (turboing up a narrow design hits P ~ f·V²; with f ~ V in the turbo regime, P ~ V³, so a design which is narrow but fast can get decent performance but isn't as efficient as running wide and slow) OR else frequency-limited (hitting the GPU's Fmax).

Given the scaling of SKL GT3e (48 EUs = 384 sp), where it can run wider but slower, the perf/watt and perf/sp for SKL GPUs actually look really good.

Thanks for the input.
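
To make that narrow-vs-wide argument concrete, here is a minimal back-of-envelope sketch in Python. The 24/48 EU counts come from the thread; the clocks and voltages are purely illustrative assumptions, not measured values:

```python
# Toy model of the quoted scaling argument: dynamic power goes as
# P ~ units * f * V^2, and in the turbo regime f scales roughly with V,
# so chasing clocks on a narrow GPU costs ~V^3.

def gpu_power(units, freq, volt):
    """Relative dynamic power, P ~ units * f * V^2 (arbitrary units)."""
    return units * freq * volt ** 2

# Narrow but fast: 24 EUs turboed high, needing a high voltage point.
narrow = gpu_power(units=24, freq=1.05, volt=1.00)

# Wide but slow: 48 EUs at half the clock; f ~ V lets voltage drop too.
wide = gpu_power(units=48, freq=0.525, volt=0.50)

# Nominal throughput (units * freq) is identical in both cases,
# yet the wide design draws roughly a quarter of the power.
print(f"narrow: {narrow:.1f}, wide: {wide:.1f}")  # narrow: 25.2, wide: 6.3
```

That rough factor-of-four gap at equal throughput is the whole case for GT3e running wider and slower.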

mikk said:
60%+ in the same power envelope is pretty good, much better than I expected.

Sky Diver and 3DMark11 graphics scores are ~30% faster compared to my i7-6700K @ HD 530 with DDR3-3000.

15W Skylake-U GT3e giving 95W Skylake-S GT2 a run for its money.
It also beat my expectations, and this bodes very well for the 28W models.
 
Last edited:

mikk

Diamond Member
May 15, 2012
4,173
2,211
136
Only 30% better at 15W than Carrizo. Am I reading that correctly?


Only? This would translate into 50% on real products, and much more than 50% in real gaming, since AMD does better in 3DMark11. In real games, Skylake GT2 ULT is enough to beat Carrizo 15W, by the looks of it.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
But still, Skylake is on 14nm and Carrizo is on 28nm. So in a TDP-limited scenario, it seems like Intel should be able to pull ahead without having to resort to eDRAM. OTOH, we don't really know the true power consumption of either, and AMD reference platforms generally perform quite well compared to real-world devices.

Carrizo is over twice the size. And you can offset power consumption with more transistors. So there isn't any excuse left. AMD hasn't moved on the GPU uarch side for ages.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Carrizo is over twice the size. And you can offset power consumption with more transistors. So there isn't any excuse left. AMD hasn't moved on the GPU uarch side for ages.

This. I think Intel has a really good product on their hands; time to give them some credit. To me, having Iris iGPUs and eDRAM here (U-series) is even more important than on desktops/AIOs, where you can usually find some fairly competent and cheap dGPUs.

I would like to see more laptops using Core i5 Skylake-U GT3e, something like a sleek ultrabook/convertible for less than $1000. Hopefully after Apple refreshes their MacBook Air line with these chips, others will follow.

It can easily replace some low-end mobile dGPUs like the GeForce 940M, and I doubt the cost of Skylake-U GT3e is much higher than Skylake-U GT2 + dGPU.

- 3DMark 11
GeForce 940M: ~2360
Iris Graphics 540: ~2600
 
Aug 11, 2008
10,451
642
126
@Shintai: But that is the point. AMD has basically done nothing on the GPU side for years and is at a two-node disadvantage, while Intel has gone through several generations of "improved" graphics, keeps throwing more transistors at the problem, and its mainstream solution still trails AMD. And let's face it, Iris Pro is not a mainstream solution. Maybe Skylake will bring it to more mainstream models, but so far it has been a niche product for the very top end.
 
Mar 10, 2006
11,715
2,012
126
This. I think Intel has a really good product on their hands; time to give them some credit. To me, having Iris iGPUs and eDRAM here (U-series) is even more important than on desktops/AIOs, where you can usually find some fairly competent and cheap dGPUs.

I would like to see more laptops using Core i5 Skylake-U GT3e, something like a sleek ultrabook/convertible for less than $1000. Hopefully after Apple refreshes their MacBook Air line with these chips, others will follow.

It can easily replace some low-end mobile dGPUs like the GeForce 940M, and I doubt the cost of Skylake-U GT3e is much higher than Skylake-U GT2 + dGPU.

- 3DMark 11
GeForce 940M: ~2360
Iris Graphics 540: ~2600

Great post, Sweepr.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
And let's face it, Iris Pro is not a mainstream solution. Maybe Skylake will bring it to more mainstream models, but so far it has been a niche product for the very top end.

Half the mobile i5 SKUs come with eDRAM. I would call that mainstream.

It also shows the potential of all Skylake CPUs having an eDRAM interface: new eDRAM SKUs can be made in a matter of months, should the demand be there.

For the NUC, both the i5 and i7 get eDRAM. You have to buy the i3 to avoid it, so to speak.
 
Aug 11, 2008
10,451
642
126
Maybe half the SKUs have eDRAM, but how does that translate to availability in the retail channel? I think that has yet to be seen. After all, if half the SKUs have it but 90% of the shipping models don't, that is meaningless. And relax, the 90% is just an example number. I think there are too few products available yet to say for sure. But if it follows past trends, retail availability will be sparse.
 
Mar 10, 2006
11,715
2,012
126
Maybe half the SKUs have eDRAM, but how does that translate to availability in the retail channel? I think that has yet to be seen. After all, if half the SKUs have it but 90% of the shipping models don't, that is meaningless. And relax, the 90% is just an example number. I think there are too few products available yet to say for sure. But if it follows past trends, retail availability will be sparse.

Remember that the GT3e models have quite large die sizes and are probably hard to make in light of Intel's 14nm troubles.
 

mikk

Diamond Member
May 15, 2012
4,173
2,211
136
AMD has basically done nothing on the GPU side for years


For years? AMD launched Kaveri last year; it was a big step over VLIW.

and is at a two-node disadvantage, while Intel has gone through several generations of "improved" graphics, keeps throwing more transistors at the problem, and its mainstream solution still trails AMD.


Intel doesn't invest as many GPU-related transistors or as much die size as AMD in their mainstream GT2 SKU. GT2 is a bit over 50 mm² on 14nm. "Through several generations" doesn't mean much; Intel started from a much lower base some years ago. Not to mention that AMD has gone through several generations of "improved" graphics as well over the years since Llano.
 

jpiniero

Lifer
Oct 1, 2010
14,840
5,456
136
It can easily replace some low-end mobile dGPUs like the GeForce 940M, and I doubt the cost of Skylake-U GT3e is much higher than Skylake-U GT2 + dGPU.

Except Intel doesn't optimize for every game like NVIDIA does. I imagine in most games the 940M is much faster.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
And the best part is, no throttling after a Skyrim gaming session, 15 minutes of Prime95 and 10 minutes of Furmark.

It is throttling. It runs at 500 MHz, and the score on Reddit is on par with the 520 and half of SB in the FFXIV bench.

It'll end up better than the 520 for sure, but whether it'll be 60% ahead like in the short-duration benchmarks is up in the air.
 

Cali3350

Member
May 31, 2004
127
11
81
Has me excited for the next MacBook Pro 13" though. Apple shells out for 28 W parts, and while they are clocked higher / run a higher Vcore on the CPU side, that's still a lot more thermal room for the GPU.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Yea.

I am just not impressed in light of the competition and what they used to push out. The latter may be due to the possibility that we are really on the last legs of "Moore's Law". They want to "delay" it forever, but each generation brings a smaller gain.

We didn't need pricey GT3e parts to get a 2x gain over the last generation; we just needed a new process/architecture.

It does not matter a bit whether GT3e i5s are available when essentially the only guaranteed users are Apple and Microsoft. Tell me when a GT3e i5 is available in a laptop for $699. It should certainly be doable: the Core i5-6260U is only $23 more than the Core i5-6200U. Yet we get $1500+ systems with only Core i7s and displays with ridiculous resolutions. Best Buy Canada has a Core i5-6200U laptop (an HDD, not an SSD, though) for $629. That's like $500 US. A $550 US Core i5-6260U device with otherwise the same specs should be possible. But it probably isn't going to happen.

Regarding competition - it's not AMD. AMD is quite far away now. Intel is essentially using extra GPU performance for "upselling" over AMD parts. eDRAM will mean GT3e parts do better in real games than in 3DMark.

The real competition is ARM - the strongest being Apple by far. True, Intel is shielded by the "x86" curtain, which is at the moment impenetrable. But the latest <5W A9X is merely 30-40% away in graphics from the "best and greatest" GT3e part. I find it so stupid that Intel's grip on the x86 market is essentially preventing any other player from entering it. I would very much like a device that an A9X-like chip would enable - 10-hour battery, 27x20 display, 1.6 lbs, $950, Core i5 CPU/GPU performance, fanless. Yep, imagine an iPad Pro running Windows 10.
 
Last edited:

dahorns

Senior member
Sep 13, 2013
550
83
91
The real competition is ARM - the strongest being Apple by far. True, Intel is shielded by the "x86" curtain, which is at the moment impenetrable. But the latest <5W A9X is merely 30-40% away in graphics from the "best and greatest" GT3e part.

What does ARM have to do with GPU development for the iPhone? Intel has used Imagination's PowerVR with their Atom products and I assume could do so again if it wanted. And it still seems to me that comparing graphics between x86 and mobile is complicated by the FP16 vs. FP32 issue. I think it is fair to say that Skylake's iGPU will vastly outperform the mobile competition at higher levels of precision. It is probably also true that, assuming Intel ever bothers to get FP16 driver support, most of the mobile parts will outperform it at the lower level of precision, since the mobile products are geared that way.

Finally, do we actually know the A9X is a <5W TDP part? I haven't seen any power numbers for it.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
The real competition is ARM - the strongest being Apple by far. True, Intel is shielded by the "x86" curtain, which is at the moment impenetrable. But the latest <5W A9X is merely 30-40% away in graphics from the "best and greatest" GT3e part.

A lot of assumptions here. I hope you're not basing this '30-40%' on 3DMark or GFXBench results, because it's apples and oranges for Windows vs iOS/Android devices, as AnandTech has repeatedly noted.

AnandTech said:
Futuremark's 3DMark is available on all platforms, although when throwing Windows into the mix we always have to take a bit more caution as the level of rendering precision is not always equal. This is due to the fact that lower precision rendering modes - widely available and regularly used on Android and iOS to boost performance and save on power consumption - are not commonly available on Windows PCs, which forces them to use high (full) precision rendering most of the time.

...On the tablet comparisons, I've installed the OpenGL version of GFXBench. Once again the Surface Pro 4 outperforms everything, although in this test the margin is not quite as high. As with 3DMark, on Windows PCs, GFXBench runs at high precision only due to limitations in OpenGL versus OpenGL ES.

On another note:

NotebookCheck said:
The Surface Pro 4 was 33% faster than the Surface Pro 3 in this test - a video rendering of a complex edit with fades, overlays and filters. The video was 8 minutes long and the render, up-scaled from 25 fps to 50 fps at 1920 x 1080, was completed in under 10 minutes on the Surface Pro 4. Watch the video for more details.

www.youtube.com/watch?v=w0VdE9IlZKg
 
Last edited:

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
INB4 the Intel brigade says that the sacred Intel iGPU will outclass the HBM dGPUs from NVIDIA and AMD.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
A lot of assumptions here. I hope you're not basing this '30-40%' on 3DMark or GFXBench results, because it's apples and oranges for Windows vs iOS/Android devices, as AnandTech has repeatedly noted.

What? You mean the precision thing? If you look at the mobile benchmark results for Intel graphics at NotebookCheck, the one in the Broadwell Core M performs exceptionally well compared to 15W-and-above parts, considering how badly they do in actual games.

They've likely been optimizing their architecture and drivers to better fit those mobile benchmarks.

Don't you think it's already silly that a dedicated CPU manufacturer with a CPU costing $300 is competing against a newly formed group that's not even the main focus of its company, with chips that probably cost $30 and have 1/3 the TDP and power use?

Years ago it would have been unfathomable to think Apple would do *THIS* well. I think arguing about precision is a blip compared to the big picture.

Plus, 16-bit precision means 2x the Flops of 32-bit. It won't net anywhere near 50% gains; usually doubling one resource gives about ~30%, which is why you need double the Flops, double the bandwidth, and double the fill rate to achieve 2x the performance.

*It's also nice to know that since Gen 8 supports 16-bit precision, comparing Cherry Trail results between Android and Windows will tell us the impact of the doubled Flops going to 16-bit.*
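
As a rough illustration of that "doubling one resource nets ~30%" rule of thumb, here is a toy Amdahl-style sketch in Python; the frame-time splits below are made-up assumptions, not measurements:

```python
# Frame time is spent across several limiters (ALU, bandwidth, fill
# rate). Doubling only the FP16 ALU rate speeds up only the ALU-bound
# slice of the frame, so the net gain is far below 2x.

def fp16_speedup(alu_frac, bw_frac, fill_frac, alu_gain=2.0):
    """Amdahl-style estimate: only the ALU-bound share benefits."""
    new_time = alu_frac / alu_gain + bw_frac + fill_frac
    return 1.0 / new_time

# Heavily ALU-bound workload (60% of frame time): ~1.43x from 2x Flops.
print(f"{fp16_speedup(0.60, 0.25, 0.15):.2f}x")
# More balanced workload (40% ALU-bound): ~1.25x.
print(f"{fp16_speedup(0.40, 0.40, 0.20):.2f}x")
```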
 
Last edited:

dahorns

Senior member
Sep 13, 2013
550
83
91
What? You mean the precision thing? If you look at the mobile benchmark results for Intel graphics at NotebookCheck, the one in the Broadwell Core M performs exceptionally well compared to 15W-and-above parts, considering how badly they do in actual games.

They've likely been optimizing their architecture and drivers to better fit those mobile benchmarks.

Don't you think it's already silly that a dedicated CPU manufacturer with a CPU costing $300 is competing against a newly formed group that's not even the main focus of its company, with chips that probably cost $30 and have 1/3 the TDP and power use?

Years ago it would have been unfathomable to think Apple would do *THIS* well. I think arguing about precision is a blip compared to the big picture.

Plus, 16-bit precision means 2x the Flops of 32-bit. It won't net anywhere near 50% gains; usually doubling one resource gives about ~30%, which is why you need double the Flops, double the bandwidth, and double the fill rate to achieve 2x the performance.

*It's also nice to know that since Gen 8 supports 16-bit precision, comparing Cherry Trail results between Android and Windows will tell us the impact of the doubled Flops going to 16-bit.*

I guarantee you the A-series processors have cost Apple substantially more than 30 dollars per chip. It just doesn't matter, since they make their money on the whole device. Again, I'm also not sure what any of that has to do with graphics performance, since Apple just uses someone else's technologies.

Finally, my understanding is that Intel's iGPUs have supported 16-bit precision in hardware for a while. But there has been no driver support, so it doesn't really matter. I honestly don't know if that has changed for its Android devices.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Take a look at the bar graph immediately above - see the 3DMark11 Performance score for the Surface Pro 4 (1584)? That is SKL-U GT2.
So ~1600 for SKL GT2 vs ~2000 for the Carrizo FX-8800P 15W in the table (SKL GT2 20% slower), and SKL GT2 on par with the A10-8600P Carrizo 15W.

Now consider that SKL GT2 = 24 EUs = 192 sp (Intel EUs are SIMD8). The Carrizo FX-8800P is 512 sp and the A10-8600P is 384 sp.

I would say the perf/SP for SKL GT2 is quite good - it is likely either TDP-limited (turboing up a narrow design hits P ~ f·V²; with f ~ V in the turbo regime, P ~ V³, so a design which is narrow but fast can get decent performance but isn't as efficient as running wide and slow) OR else frequency-limited (hitting the GPU's Fmax).

Given the scaling of SKL GT3e (48 EUs = 384 sp), where it can run wider but slower, the perf/watt and perf/sp for SKL GPUs actually look really good.

Actually, if you take the iGPU die area and normalize for the manufacturing process (28nm HDL vs 14nm FF), the Carrizo 512 sp iGPU would be close to Skylake's 192 sp GT2 if both were manufactured at 14nm FF.
So it takes double the iGPU die area for Skylake GT3 plus eDRAM to get slightly higher performance than Carrizo's smaller iGPU with ordinary 2133 MHz DDR3 memory (35W TDP).

Edit: rough die-size analysis of the iGPUs:

Carrizo iGPU: close to 95 mm² at 28nm HDL
Skylake GT2 (taken from the 4C/8T die): close to 43 mm² at 14nm FF

Intel 14nm FF is close to 2x denser (perhaps more?) than 28nm HDL.

That puts the Carrizo iGPU at 40-50 mm² at 14nm FF.
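
The same normalization in code form, using the numbers above as inputs; note that the 2x density factor is this post's own rough assumption:

```python
# Back-of-envelope die-area normalization from 28nm HDL to 14nm FF.
# All inputs are the rough estimates given above, not die-shot data.

CARRIZO_IGPU_MM2_28NM = 95.0      # Carrizo iGPU area at 28nm HDL
SKL_GT2_MM2_14NM = 43.0           # Skylake GT2 area at 14nm FF
DENSITY_28HDL_TO_14FF = 2.0       # assumed scaling ("perhaps more")

carrizo_at_14nm = CARRIZO_IGPU_MM2_28NM / DENSITY_28HDL_TO_14FF
print(f"Carrizo iGPU normalized to 14nm FF: ~{carrizo_at_14nm:.0f} mm^2")
# ~48 mm^2: in the same 40-50 mm^2 range as Skylake GT2, despite
# packing 512 sp vs GT2's 24 EUs * SIMD8 = 192 sp.
```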
 
Last edited:

mikk

Diamond Member
May 15, 2012
4,173
2,211
136
So it takes double the iGPU die area for Skylake GT3 plus eDRAM to get slightly higher performance than Carrizo's smaller iGPU with ordinary 2133 MHz DDR3 memory (35W TDP).


Slightly higher based on what? SKL GT2 at 15W is faster than Carrizo at 15W with a smaller die, manufacturing process normalized. I'm pretty sure a 28W SKL GT3e wouldn't be only a little faster in real gaming. Maybe in 3DMark, but surely not in real gaming.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Slightly higher based on what? SKL GT2 at 15W is faster than Carrizo at 15W with a smaller die, manufacturing process normalized. I'm pretty sure a 28W SKL GT3e wouldn't be only a little faster in real gaming. Maybe in 3DMark, but surely not in real gaming.

SKL GT2 at 15W is not faster than Carrizo at 15W in games. As for the die size, don't forget that the Carrizo APU is an SoC that also has the ARM core AND integrates the FCH, something the SKL 2C/4T GT2/GT3e die doesn't.

Also, I doubt the 28W SKL GT3e will be more than 10-20% faster than the 35W Carrizo with DDR3-2133.
 