Intel Broadwell Thread


krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Oh dear, please do not comment on topics you are apparently no expert on.
Let me first state that when idle, leakage is the single biggest contributor to power. In other use cases active power is (still) dominant. The ratio between these two values, however, shifts in favor of leakage with each smaller process node. What you should also know is that leakage does not merely double when moving down half a node. Leakage is one of the main effects working against Moore's law.
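To make that split concrete, here is a minimal sketch (all numbers are illustrative assumptions, not process data): dynamic power scales roughly as a·C·V²·f, leakage as V·I_leak, so even when both terms fall at a new node, the leakage share of the total can rise.

```python
# Minimal sketch of the active-vs-leakage power split.
# P_dyn = a*C*V^2*f (switching), P_leak = V*I_leak (static).
# Every value below is an illustrative assumption, not process data.

def chip_power(activity: float, cap_f: float, volts: float,
               freq_hz: float, i_leak_a: float) -> tuple[float, float]:
    """Return (dynamic, leakage) power in watts."""
    return activity * cap_f * volts**2 * freq_hz, volts * i_leak_a

for label, v, c, i_leak in [("older node", 1.1, 1.0e-9, 0.05),
                            ("newer node", 0.9, 0.7e-9, 0.10)]:
    p_dyn, p_leak = chip_power(0.1, c, v, 2e9, i_leak)
    share = p_leak / (p_dyn + p_leak)
    print(f"{label}: dyn={p_dyn:.2f} W leak={p_leak:.2f} W "
          f"({share:.0%} leakage)")
# older node: dyn=0.24 W leak=0.06 W (19% leakage)
# newer node: dyn=0.11 W leak=0.09 W (44% leakage)
```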

The main reason for going to 3D transistors/FinFETs is to gain more control over the geometry of the gate channel and over the electric field controlling the channel, thus reducing leakage current significantly.
To give you some idea: 22nm FinFET has much lower leakage than 28nm planar. I estimate that 14nm FinFET is about the same as 28nm planar for logic, but already worse for SRAM. 14nm FinFET is in any case worse than 22nm FinFET, because you cannot defeat physics.

At this point you need to think about clever designs, for instance the big.LITTLE concept. Keep in mind Intel is already past the one-off FinFET/tri-gate gain with respect to leakage.

Care to enlighten me: how is this supposed to improve when going down to 10nm, even if you consider that you are using compound materials? I mean, at some point you also need to reduce the thickness of the gate dielectric, where high-k materials are already used to extend Moore's law.

I have to understand this: does leakage go up in absolute terms when a node shrinks, e.g. 28->20 planar or 22->14 FinFET?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
First things first, I'd appreciate it if you refrained from the use of ad hominem.
Sorry if my post was perceived as an argument ad hominem. It was intended as advice.
Now back to the topic. You basically link random internet articles without contributing to the argument itself. 90% of the linked information does not even apply to the question of leakage at 22nm vs 14nm. Some of the articles discuss the move to high-k dielectrics a few years back.
The only thing that was really applicable to the discussion was a propaganda slide from Intel... nothing to add here.
I thought you would bring a real argument.

Interestingly if you go to
http://www.intel.com/content/www/us/en/silicon-innovations/standards-14nm-explained-video.html
they explicitly mention "reduction of active power"...no mention of leakage. Coincidence? I don't think so.

If you don't have a big R&D budget, sure, it might be cheaper to develop a mediocre big core and a mediocre little core, but SoCs like Apple's A series and Intel's Atom prove to me that a good core does not need a companion core.
It is not a question of budget, nor a question of mediocre vs. good cores; it is a general principle. Strange that you seem to blindly believe Intel propaganda yet dismiss ARM propaganda. In any case, looking at the issue objectively indicates that big.LITTLE is a reasonably good idea.

Even more surprising is that you consider Atom a non-mediocre SoC, yet it gets destroyed in performance by the competition at comparable power despite the big technology advantage. One would assume this thing blows anything still on planar out of the water, yet it does not. One would assume it blows anything out of the water that uses fully synthesizable cores and has no full-custom parts, yet it does not.
Anyway, let's not get into a discussion of what is mediocre and what is not; I have the feeling that someone with Intel sunglasses will not contribute objectively here.

I have to understand this: does leakage go up in absolute terms when a node shrinks, e.g. 28->20 planar or 22->14 FinFET?

Leakage current rises faster than the voltage drops at the same performance point. In addition, lowering the threshold voltage increases sub-threshold leakage. In fact, a few generations ago you practically had no sub-threshold leakage at Vgs = 0.
And as I already mentioned, leakage went down in absolute terms when moving from planar to 3D in conjunction with moving to the next smaller node. (witeken showed lots of material here.)
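For reference, the sub-threshold leakage described here follows the textbook exponential model; a small sketch (constants are generic assumptions, not figures for any real process) shows why lowering Vt by even 100 mV matters:

```python
# Textbook sub-threshold model: I_sub ~ I0 * exp((Vgs - Vt) / (n*kT/q)).
# I0 and n below are generic assumptions, not data for any real process.
import math

KT_Q = 0.0259  # thermal voltage kT/q at 300 K, in volts
N = 1.5        # sub-threshold slope factor (assumed)
I0 = 1e-6      # current at Vgs = Vt, in amps (assumed)

def i_sub(vgs: float, vt: float) -> float:
    """Sub-threshold drain current."""
    return I0 * math.exp((vgs - vt) / (N * KT_Q))

# Off-state (Vgs = 0) leakage for two threshold voltages:
# a 100 mV lower Vt raises off-current by roughly 13x.
for vt in (0.45, 0.35):
    print(f"Vt = {vt:.2f} V -> I_off = {i_sub(0.0, vt):.2e} A")
```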

Also, I have to say you are being a bit too doom and gloom about idle power. Power gating exists for a reason.
Well, it is not all doom and gloom, but it is a serious challenge. Power gating is already quite aggressive today. Props to Intel for reducing the overwhelming number of power states to C0, C1/C2 and C6 with Silvermont, which brings them closer to the ARM active-dormant-shutdown principle.
I'm not sure, though, that we will ever see C6/shutdown in light-use scenarios. That's where the big.LITTLE idea kicks in. I still don't know why witeken thinks this idea is stupid.
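The "never reaches C6 in light use" point comes down to break-even residency: an idle governor only picks a deep state if the predicted idle period outlasts that state's entry/exit cost. A rough sketch of that logic (state names follow Silvermont, but all the numbers are made up):

```python
# Sketch of the break-even logic an idle governor (e.g. Linux cpuidle)
# applies when picking a C-state: only enter a deep state if the
# predicted idle period outlasts its entry/exit cost. State names and
# numbers are illustrative assumptions, not Silvermont data.
from dataclasses import dataclass

@dataclass
class IdleState:
    name: str
    exit_latency_us: float   # wake-up cost
    break_even_us: float     # minimum residency for a net energy win

STATES = [                   # shallow -> deep
    IdleState("C1", exit_latency_us=2, break_even_us=4),
    IdleState("C2", exit_latency_us=50, break_even_us=150),
    IdleState("C6", exit_latency_us=200, break_even_us=1000),
]

def pick_state(predicted_idle_us: float) -> IdleState:
    """Deepest state whose break-even residency fits the predicted idle."""
    chosen = STATES[0]
    for state in STATES:
        if predicted_idle_us >= state.break_even_us:
            chosen = state
    return chosen

# In a light-use scenario (e.g. audio playback) wake-ups every ~500 us
# keep the core out of C6 -- the point being made above.
print(pick_state(500).name)    # -> C2
print(pick_state(5000).name)   # -> C6
```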
 
Last edited:

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Windows doesn't support big.LITTLE, so it's kind of moot whether Intel likes it or not.

If there's nothing technical that inherently stops it from being implemented, MS could just release a patch to enable support for it, just as it did with the AVX instruction set. I'm pretty sure there's nothing technical standing in the way. It might simply not work software-wise, just as DX10 and up doesn't play nice with Windows XP and older versions, but I seriously doubt it, because that was due to the change in the driver model, which wouldn't be the case with big.LITTLE. If there's no CPU supporting big.LITTLE in the x86 market, then why would MS direct any resources to enabling proper support for it?

Someone might raise the chicken-and-egg argument here, but I think in this case it's invalid. The changes to the scheduler in Windows were added after BD/HT hit the market, not the other way around; why would it be any different this time? In case someone forgot, those changes helped both HT and modular CPUs: physical cores/modules were loaded first, and only after the OS ran out of physical cores or whole modules were further threads assigned to them. With BD the situation wasn't so clear-cut, with some guys at AMD arguing that it would be better to load whole modules first, and that turbo frequency and the non-shared L2 cache would make up for the penalty of two threads sharing a single module. It was believed that turbo would compensate for the performance penalty, and the end result would be lower power thanks to the ability to put the unused module into a lower power state. How that turned out, we all know: turbo didn't live up to its expectations.
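For illustration, the "fill physical cores/modules before sharing" policy described above can be sketched in a few lines (the topology and numbers are a made-up example, not the actual Windows scheduler):

```python
# Toy illustration of the scheduling policy described above: place new
# threads on idle physical cores/modules first, and only double up on a
# core (HT sibling / second module slot) once every core is occupied.

def assign_thread(load_per_core: list[int]) -> int:
    """Return the index of the core the next thread should land on."""
    # Prefer a completely idle physical core...
    for core, load in enumerate(load_per_core):
        if load == 0:
            return core
    # ...otherwise pick the least-loaded core (share an SMT pair/module).
    return load_per_core.index(min(load_per_core))

loads = [0, 0, 0, 0]   # 4 physical cores, 2 hardware threads each
for _ in range(6):     # schedule 6 threads
    loads[assign_thread(loads)] += 1
print(loads)           # -> [2, 2, 1, 1]: cores filled before sharing
```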
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Sorry if my post was perceived as an argument ad hominem. It was intended as advice.
It was mainly about this sentence:
Oh dear, please do not comment on topics you are apparently no expert on.
And while I'm at it, you repeated the same fallacy:
I have the feeling that someone with Intel sunglasses will not contribute objectively here.
I don't tell you to look at my avatar and extrapolate my whole personality from there. Look at the content of my post.

Now back to the topic. You basically link random internet articles without contributing to the argument itself.
Let's not argue about things we agree on, so I'm going to post what I'm disputing:
I estimate that 14nm FinFET is about the same as 28nm planar for logic, but already worse for SRAM. 14nm FinFET is in any case worse than 22nm FinFET, because you cannot defeat physics.
I contend this statement is wrong. The wrong assumption you made is that 14nm is the same process as 22nm, only ~2x denser. That is demonstrably false:


By your logic, Intel's 32nm transistor should have had worse leakage because it didn't introduce a new technology like HKMG;
We're confident this new transistor can be scaled further, and development is already well under way on our next-generation 32-nm transistors using an improved version of high-k plus metal gate technology.
The same happened with strained silicon, which went into its 6th generation at 14nm.

90% of the linked information does not even apply to the question of leakage at 22nm vs 14nm. Some of the articles discuss the move to high-k dielectrics a few years back.
I just added those as extra information, as I said. But it should be clear that leakage will go anywhere but up.

The only thing that was really applicable to the discussion was a propaganda slide from Intel... nothing to add here.
I thought you would bring a real argument.
It isn't really propaganda because it's just used as technical information. You can't simply dismiss that information (Intel: "our 14nm has 2X lower leakage") and say your own "theory" is correct. I've given you a bunch of information from which it should be clear what will happen at 14nm, so it's up to you to provide more evidence for why I should believe you. I'm not going to believe you if you don't give me a reason to. But sure, I could be wrong. A definitive analysis from IDC would be helpful when Broadwell-K is released.

Interestingly if you go to
http://www.intel.com/content/www/us/en/silicon-innovations/standards-14nm-explained-video.html
they explicitly mention "reduction of active power"...no mention of leakage. Coincidence? I don't think so.
That is what I'd rather call propaganda (it's more information about the technology for the layman, but you get the point), which you were condemning one sentence ago.

It is not a question of budget, nor a question of mediocre vs. good cores; it is a general principle. Strange that you seem to blindly believe Intel propaganda yet dismiss ARM propaganda. In any case, looking at the issue objectively indicates that big.LITTLE is a reasonably good idea.
Where are your "objective" facts? Even setting aside the question of benefit, the extra little cores only give you an advantage when the CPU is not stressed anyway, and a downclocked CPU with power-gated cores already consumes little, so the total amount of energy saved really won't be much if you have designed a big core with good power management.

Even more surprising is that you consider Atom a non-mediocre SoC, yet it gets destroyed in performance by the competition at comparable power despite the big technology advantage.
I think we concluded about a year ago that the power consumption of Silvermont is vastly superior to that of any planar ARM core; the performance per watt of Silvermont is unchallenged. So yes, it is not a mediocre core, and it easily competes with Snapdragon 800.

One would assume this thing blows anything still on planar out of the water, yet it does not. One would assume it blows anything out of the water that uses fully synthesizable cores and has no full-custom parts, yet it does not.
It blows anything out of the water with regard to power consumption, but apparently only Intel has the tools to measure that. Silvermont is just a dual-issue architecture, so you shouldn't expect it to have a mind-boggling IPC.

Anyway, let's not get into a discussion of what is mediocre and what is not,

I'm not sure, though, that we will ever see C6/shutdown in light-use scenarios. That's where the big.LITTLE idea kicks in. I still don't know why witeken thinks this idea is stupid.
Because I haven't seen any evidence that it has any meaningful benefit for big companies that can afford to build a good architecture covering everything from idle to high-performance use cases.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
It blows anything out of the water with regard to power consumption, but apparently only Intel has the tools to measure that.

You mean only Intel has a voltmeter?

The Anand article is by far the worst article ever on AnandTech. It's so uncritical it's insane, and it's best laid to rest.

An Intel engineer happened to come by and visit Anand with a voltmeter, teaching us Ohm's law. And then together they busted the x86 power myth.

Yet nobody wants the CPU, and Intel has to pay OEMs crazy amounts to use it, losing $4B a year on mobile.

Perhaps Intel should give the OEMs a voltmeter and teach them Ohm's law, because apparently they don't know it. It's cheaper than $4B. Or perhaps give Samsung, Lenovo, Acer and Apple the link to the article?
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
On a BW-sized CPU the burden of decoding is of course of far less importance than on BT, and perhaps relatively insignificant.

But the problem is that when we talk BW size, we have the leakage challenge, as shown by the battery life problems. If leakage is not going to improve much with new nodes, there is clearly a challenge. We have a single product, so it's too early to know for sure, but as of now it clearly points to a challenge and a problem.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
On a BW-sized CPU the burden of decoding is of course of far less importance than on BT, and perhaps relatively insignificant.

But the problem is that when we talk BW size, we have the leakage challenge, as shown by the battery life problems. If leakage is not going to improve much with new nodes, there is clearly a challenge. We have a single product, so it's too early to know for sure, but as of now it clearly points to a challenge and a problem.
I'm tired of busting this myth, but the CPU is not the biggest contributor to platform power consumption in mainstream use cases. Certainly not when the screen is 13.3" and has 6MP to feed.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
I'm tired of busting this myth, but the CPU is not the biggest contributor to platform power consumption in mainstream use cases.

Agree. But people will prefer extra battery life to performance, imo. Anyway, this product seems to do neither convincingly despite its price. Instead of all this praising of whatever carries an Intel label, let's just say what it is: too slow and too power-hungry. It's just not fit for the low-TDP envelope, by a long stretch.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
I am looking forward to the first 18W TDP (?) parts for the ultrabooks. When are they due? And what can we expect from CPU and GPU performance?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I am looking forward to the first 18W TDP (?) parts for the ultrabooks. When are they due? And what can we expect from CPU and GPU performance?

I hope they will be launched at CES, but someone who has paid closer attention to the roadmap might correct me (it should be somewhere in Q1).
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Guys, even Intel says Broadwell-Y's Core M will only improve battery life by 13% over the Haswell-Y 6.5W SKUs in 3 different tests. They did a total of 4 tests, and in 3 of them battery life improved by 13%. Only 1 test got more than 13%, and that was the H.264 playback test; thanks to changes to the audio stack at both the hardware and software level, the battery life improvement there is 24%.

Source Slide For Numbers

The screen's power consumption is far higher than the SoC's.

So in sum, a 9-hour battery instead of an 8-hour battery, more or less. And if you are going to be using the tablet for 8 to 9 hours, I do not think you will be satisfied with 200 nits of brightness; you will often want a higher brightness, and thus even more power draw from the screen.
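A quick check of that arithmetic:

```python
# 13% better battery life turns an 8-hour runtime into roughly 9 hours.
baseline_hours = 8.0   # assumed Haswell-Y runtime for the comparison
print(f"{baseline_hours * 1.13:.1f} h")   # -> 9.0 h
```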

And the fact that every laptop maker is now pushing these super-high-res panels, with crazy resolutions like 3200x1800 on a 13.3" screen, is stupid and wasteful in terms of power consumption. Windows cannot do proper scaling at such a resolution, so you will often have to run the panel at a lower resolution for it to stay usable. Thus you really do not get the benefit of the higher DPI and sharper text. On Android, iOS, and OS X this is different, but even Apple uses 220 ppi for a 15" screen, not something stupid like 276 ppi for a 13.3" screen. Now your brain may shrug off 276 vs 220 as not much of a difference, only 25%-ish more dense, but you have to remember the density increases in both the x and y directions, so we are talking about 58% more pixels once you take x*y.
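The density math checks out; here is a short sketch of the calculation:

```python
# Pixels per inch from resolution and diagonal, and why a ~25% linear
# density gap becomes ~58% more pixels in area (x and y both scale).
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density along the screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"{ppi(3200, 1800, 13.3):.0f} ppi")  # 3200x1800 @ 13.3in -> ~276
print(f"{ppi(1920, 1080, 13.3):.0f} ppi")  # 1920x1080 @ 13.3in -> ~166
print(f"{(276 / 220) ** 2:.2f}x pixels")   # areal ratio -> ~1.57x
```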

And 58% more pixels will do murder on your battery life.

The OEMs are so goddamn stupid sometimes. 1920x1080 would have been ~166 ppi; sure, that is not marketable as "retina", but it is the highest you would want to go until Windows fixes its goddamn scaling issues.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Intel should have had some "ultrabook" standard for Core M and limited it to 1080p.
A super-high-res screen is neat and all, but 90% of the people purchasing a laptop in the segment Core M is supposed to enter couldn't care less about "QHD" screens.

OEMs find the most ridiculous things to compete over. Why are we competing over high-resolution screens on laptops of this caliber...

Core M isn't even a "bad" product. Not amazing by any means but the majority of products I'm seeing Core M advertised for essentially defeat the purpose of Core M.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Intel should have had some "ultrabook" standard for Core M and limited it to 1080p.
A super-high-res screen is neat and all, but 90% of the people purchasing a laptop in the segment Core M is supposed to enter couldn't care less about "QHD" screens.

I agree, and I wish it would happen, but it is not going to happen (more on that later). I really want Intel to make an awesome tablet and release 4 versions: $250 (Atom, 2GB), $500 (Core M Pentium/low-i3-type SKU, 4GB), $750 (i5 with great turbo), and $1000 (i7, more storage, more RAM). I know Intel has the engineering expertise to do a better job than the OEMs, just like Microsoft did a great job (though not perfect) with its tablets.




But Intel will not do this, and here is why:
It would be great for consumers but horrible for the OEMs. Intel's real customers in the short term are the OEMs, not the people buying the final product. It is only over a period of several years, as opposed to a 3-month quarter (which is forever in the business sense), that end customers dictate Intel's future.

Thus Intel will not be making a gold-standard tablet that is competitively priced. If they did, they would piss off their OEMs, and pissing off your biggest customers is not good business sense. You will not see Intel release a competitively priced high-end tablet such as a Microsoft Surface or a Google Nexus 7 2012/2013. In those cases the makers were trying to create a brand-new market that never existed before. Furthermore, with the 2013 model it was nowhere near as cut-throat in pricing, and even now Google is being less competitive with the pricing of the Nexus 6 and Nexus 9.

Intel's, Google's, and Microsoft's goal is not to make a kick-ass product but to make money; this is due to their revenue structure and how they make money. Apple, on the other hand, wants a kick-ass product, since they are vertically integrated instead of horizontally integrated. Now, it is a question of degrees, and Google has done many things to make themselves more vertically integrated on the software side with Google Play Services, even though they allow more horizontal options on the hardware side (you have to make $50 phones for India and Africa but also $800 phablets, after all).

This table sums up the differences and the advantages and disadvantages pretty well (table image omitted):

Source

Microsoft only did the Surface line when they had to. It took 3 years of getting their butt kicked (iPad 1 in 2010, Surface 1st gen in 2012, Surface Pro 1 in 2013), and I would argue it was not until late 2013, with the Surface Pro 2/3 and the Surface 2, that they got the device right. And their app store still sucks.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,348
642
121
Is Core M the only Broadwell variant available for 2014?

This roadmap is REALLY confusing me right now.
So essentially in 2014 we got Haswell Refresh and Broadwell's Core M, and that's it?
Then in 2015 we get Broadwell for desktop/laptop, and then Skylake?

I'm trying to decide whether I want to purchase a laptop now or wait, and I can't seem to find a gaming review of Core M. (It's not for gaming, obviously, but it should perform similarly to my current laptop, which is all I actually want. My Core 2 Duo/9800M GTS should be beaten by a Core M + HD 5300, I think, right?) I just can't find the info I need.

Is AnandTech going to do an in-depth review of a Core M laptop? Did I miss that review?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I'm tired of busting this myth, but the CPU is not the biggest contributor to platform power consumption in mainstream use cases. Certainly not when the screen is 13.3" and has 6MP to feed.
No one was claiming that the CPU is the biggest contributor when the screen is on. However, it is still a contributor and cannot simply be neglected, in particular since leakage is increasing in absolute terms.
In a close-to-idle use case like listening to music with the screen off, I guess the CPU will never reach a C6+ state. So do not assume the screen is always on whenever the CPU has some work to do.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Is Core M the only Broadwell variant available for 2014?

Core M is the only 14nm chip released this year. Originally the better Broadwell SKUs and even the Cherry Trail Atom were supposed to come out this year, but they got delayed. Now such SKUs are going to start being released at CES, and certain SKUs may not become commonly available until around Q2 (i.e., April or later).

Supposedly this is the fabs' fault, but who knows.

Eventually AnandTech will probably do a Core M piece with hands-on numbers. They already covered some of the theory of the architecture at IDF, in the September 2014 timeframe.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
I hope they will be launched at CES, but someone who has paid closer attention to the roadmap might correct me (it should be somewhere in Q1).

Do we know if Skylake is Q3 or Q4 for the 10-15W ULV parts? Anyway, we are in for a treat in all of 2015. It's going to be a fine year as a consumer if you've got the $. I could use a new Samsung 13.3" ultrabook with a 4K OLED, Skylake and some crazy-fast SSD in Q4. And then hope Win10 is going to be a new Win7/XP. (Yes, I want the 270 ppi even if it's PenTile on OLED.)
 
Last edited:

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
I could use a new Samsung 13.3" ultrabook with a 4K OLED
And then hope Win10 is going to be a new Win7/XP. (Yes, I want the 270 ppi even if it's PenTile on OLED.)
Since it is OLED this would be nice, but Windows 10 had better handle resolution scaling correctly or you are in a world of hurt.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Citation needed.
You still do not get it. This fact comes from the laws of (quantum) physics.
Nothing you have linked in this thread points to evidence that Intel has somehow defeated physics. In fact, aside from the propaganda slides, everything points to the conclusion that leakage has increased. In particular, decreasing the channel length while increasing the fin height to sustain Ion, and decreasing Vt, are clear indicators of increased leakage.
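In compact form, the relation being invoked here is the standard sub-threshold swing model (generic textbook symbols, not process-specific figures):

```latex
% Off-current vs threshold voltage in the sub-threshold swing model:
% every S millivolts shaved off V_t costs one full decade of leakage.
I_{\mathrm{off}} \propto 10^{-V_t/S}, \qquad
S = n\,\frac{kT}{q}\,\ln 10 \;\ge\; 60\ \mathrm{mV/decade}\ \text{at}\ 300\ \mathrm{K}
```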
 
Last edited: