Intel Broadwell Thread


Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
More bad assumptions. I would like nothing more than to have the data you are looking for as it's fascinating data. I just haven't seen any even remotely rigorous testing that we could use to draw these conclusions. I'm not saying you are wrong. Just that you are drawing conclusions from incomplete data.

We'll hopefully see more relevant data when the BDW-U and desktop processors come out, when we'll have a slightly more applicable comparison.

That's not a bad assumption. BDW needs a substantially higher voltage at 2.6 GHz than Haswell; when squared (power follows a square law as a function of voltage), the ratio is higher than the theoretical reduction in parasitic capacitance allowed by a node shrink, so this latter parameter will not compensate enough for the higher voltage. Indeed, Intel stated that the current 14nm process used for the BDW Core M is to be replaced by a new one.
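To make the square-law point concrete, here is a minimal back-of-the-envelope sketch (the voltages and capacitance scaling below are hypothetical placeholders, not measured Haswell/Broadwell figures): dynamic power goes roughly as C·V²·f, so the V² penalty can eat into, or even outweigh, the capacitance savings from a shrink.

```python
# Rough sketch of the square-law argument: dynamic power ~ C * V^2 * f.
# The voltage and capacitance numbers here are hypothetical placeholders,
# not measured Haswell/Broadwell values.

def dynamic_power_ratio(v_new, v_old, cap_scale):
    """Ratio of new dynamic power to old at the same frequency.

    v_new, v_old : supply voltages (V)
    cap_scale    : switched-capacitance ratio (new/old) from the node shrink
    """
    return cap_scale * (v_new / v_old) ** 2

# Example: assume the shrink cuts switched capacitance to ~0.7x, but the new
# part needs ~0.95 V where the old one ran ~0.85 V at the same 2.6 GHz clock.
ratio = dynamic_power_ratio(v_new=0.95, v_old=0.85, cap_scale=0.70)
print(f"Dynamic power ratio (new/old): {ratio:.2f}")   # ~0.87 here
# If the voltage gap were wider, the ratio would exceed 1.0, i.e. the V^2
# penalty would swamp the capacitance savings.
```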
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Seems odd that Broadwell only "partially" decodes H.265 through the iGPU when upcoming Snapdragons will fully decode it.

No real HEVC hardware decoding until Skylake seems a long way off.

It's not like HEVC is mainstream yet, but still.

Gen8 can decode and encode HEVC just fine. It simply requires more energy because it isn't fixed function.
 

Kallogan

Senior member
Aug 2, 2010
340
5
76
Yes, so basically if you buy a Core M now, it will be less efficient at decoding HEVC than upcoming Snapdragons. For tablets etc. it matters for battery life, I guess.
 

kimmel

Senior member
Mar 28, 2013
248
0
41
That's not a bad assumption. BDW needs a substantially higher voltage at 2.6 GHz than Haswell; when squared (power follows a square law as a function of voltage), the ratio is higher than the theoretical reduction in parasitic capacitance allowed by a node shrink, so this latter parameter will not compensate enough for the higher voltage. Indeed, Intel stated that the current 14nm process used for the BDW Core M is to be replaced by a new one.

Let's look at something that could cause the behavior you are seeing which has virtually nothing to do with the process in the way you are thinking.

During design, you time your circuits from A->B etc. to make sure the data you need gets to where it needs to go in the time allotted. These paths are characterized in simulations, using data provided by the fab, before you attempt to manufacture anything. This is true for Intel or Qcom or anyone who does circuit design, really.

Now, at some point you'll have a characterization of circuits which turns out to be overly optimistic (or pessimistic, which is another discussion entirely). Most of the time you'll have enough timing margin on these paths. However, if this happens on a path that you are unable to speed up with your allowed knobs, a potential "fix/workaround" is to boost the voltage on the domain, which can help correct for the slowness of the path. These types of things tend to be improved in later steppings of a product as they are low-hanging fruit.

Now, the situation listed above isn't really a problem with the process per se. The FETs in Core M could indeed be faster and lower-leakage than those of the old process, but from the voltage+frequency information alone you'll never know. I have no idea what really is the cause of the data points you are showing. Nor do you.

Something you could potentially look at and draw conclusions on is the efficiency of the entire design (process+design, compared against power+work done), as it's something you can measure more accurately. However, singling out "inferior transistors / inferior process" is something you cannot do with your current data set.
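As a toy illustration of that voltage-boost workaround (a sketch only, using the generic alpha-power-law delay model with made-up constants, nothing Intel-specific): if silicon comes back slower than the characterization predicted, raising the supply on that domain pulls the path back inside its timing budget.

```python
# Illustration of "boost the voltage to cover a slow path" using the classic
# alpha-power-law delay model: delay ~ V / (V - Vt)^alpha.  All constants are
# made up; real timing sign-off uses fully characterized libraries.

def path_delay(v, k=1.0, vt=0.35, alpha=1.3):
    """Relative delay of a critical path at supply voltage v (arbitrary units)."""
    return k * v / (v - vt) ** alpha

target = path_delay(0.80)     # delay the path was supposed to have at 0.80 V

# Suppose silicon came back 10% slower than characterized (k = 1.10):
# walk the supply up until the slow path meets the original timing target again.
v = 0.80
while path_delay(v, k=1.10) > target:
    v += 0.005
print(f"Supply needed to cover the slow path: ~{v:.2f} V")   # roughly 0.86-0.87 V here
```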
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
Let's look at something that could cause the behavior you are seeing which has virtually nothing to do with the process in the way you are thinking.

During design, you time your circuits from A->B etc. to make sure the data you need gets to where it needs to go in the time allotted. These paths are characterized in simulations, using data provided by the fab, before you attempt to manufacture anything. This is true for Intel or Qcom or anyone who does circuit design, really.

Now, at some point you'll have a characterization of circuits which turns out to be overly optimistic (or pessimistic, which is another discussion entirely). Most of the time you'll have enough timing margin on these paths. However, if this happens on a path that you are unable to speed up with your allowed knobs, a potential "fix/workaround" is to boost the voltage on the domain, which can help correct for the slowness of the path. These types of things tend to be improved in later steppings of a product as they are low-hanging fruit. I'm not saying that this is the case here.

Now, the situation listed above isn't really a problem with the process per se. The FETs in Core M could indeed be faster and lower-leakage than those of the old process, but from the voltage+frequency information alone you'll never know. I have no idea what really is the cause of the data points you are showing. Nor do you.

Something you could potentially look at and draw conclusions on is the efficiency of the entire design (process+design, compared against power+work done), as it's something you can measure more accurately. However, singling out "inferior transistors / inferior process" is something you cannot do with your current data set.


Good points, but this doesn't really apply in this case, as it's not a completely new design; it's an iteration of an existing uarch, so its building blocks are well characterized even though the process has been shrunk.

Second point is that we're not in extreme conditions where a newly designed part could struggle to reach the expected speed; we are at 2.6 GHz, which is a relatively low frequency, so if a voltage boost is needed at this clock rate it means the process is sluggish. To summarize, I think they didn't get the expected conductance at a given voltage, hence the boosted voltage (transistor transconductance not high enough).
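To put a rough number on that transconductance claim, here is a sketch (hypothetical voltages and a generic alpha-power-law drive model, nothing measured from Core M) that backs out how much weaker the drive would have to be to explain a given voltage bump at a fixed clock.

```python
# Sketch: if a part needs v_high instead of v_nom to hit the same clock, and
# gate delay ~ Cload * Vdd / Ion with Ion ~ k * (Vdd - Vt)^alpha, then equal
# delay at both voltages implies a ratio of effective drive factors k.
# Voltages and constants are hypothetical, not measured Core M values.

VT, ALPHA = 0.35, 1.3

def implied_drive_ratio(v_nom, v_high):
    """k_actual / k_expected implied by needing v_high instead of v_nom
    to reach the same gate delay (same clock)."""
    return (v_high / v_nom) * ((v_nom - VT) / (v_high - VT)) ** ALPHA

print(f"Implied drive ratio: {implied_drive_ratio(0.85, 0.95):.2f}")
# ~0.88 with these placeholder numbers, i.e. roughly 12% less drive current at
# a given voltage would be enough to force that kind of supply bump.
```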
 

kimmel

Senior member
Mar 28, 2013
248
0
41
Good points, but this doesn't really apply in this case, as it's not a completely new design; it's an iteration of an existing uarch, so its building blocks are well characterized even though the process has been shrunk.

Second point is that we're not in extreme conditions where a newly designed part could struggle to reach the expected speed; we are at 2.6 GHz, which is a relatively low frequency, so if a voltage boost is needed at this clock rate it means the process is sluggish. To summarize, I think they didn't get the expected conductance at a given voltage, hence the boosted voltage (transistor transconductance not high enough).

More assumptions. Using "new process node" and "well characterized" without a "not" between them is just flat-out wrong. You are massively underestimating the complexity involved.

Why is it so hard for people to admit they don't have enough data to draw conclusions? "I don't know" should be in any educated person's vocabulary.
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
More assumptions. Using "new process node" and "well characterized" without a "not" between them is just flat-out wrong. You are massively underestimating the complexity involved.

Why is it so hard for people to admit they don't have enough data to draw conclusions? "I don't know" should be in any educated person's vocabulary.

If this was due to the CPU itself, then why did Intel state that they'll use another process for the next stepping of the Core M?

In principle, as you pointed out, you do a quick fix before modifying the CPU, not the full process...

Why is it so hard to admit, to borrow your own words?
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Good points, but this doesn't really apply in this case, as it's not a completely new design; it's an iteration of an existing uarch, so its building blocks are well characterized even though the process has been shrunk.

If we're talking about a tick, where the process spec is still getting rolled out, how can you say it's "well characterized" unless you mean "it's logically MOSTLY well defined, but physically (area/timing/power) not well defined until the process spec rolls out"?
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
If we're talking about a tick, where the process spec is still getting rolled out, how can you say it's "well characterized" unless you mean "it's logically MOSTLY well defined, but physically (area/timing/power) not well defined until the process spec rolls out"?

Because they have the tools to extract their process characteristics out of real silicon, so simulations will be accurate even at the transistor level, and routing characteristics extraction is accurate by definition. In principle there will be an acceptable prediction of the circuit's behaviour, unless the process does not yield the expected results, be it in transistor characteristics or, more likely, in characteristics dispersion; this latter case will often require a higher voltage if the dispersion is too high.
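As a rough illustration of why dispersion, rather than the typical device, tends to be what pushes the voltage up, here is a small Monte Carlo sketch with made-up Vt statistics: the supply has to cover the slow tail of the distribution, not the average transistor.

```python
# Minimal Monte Carlo sketch: with wider Vt dispersion, the slowest paths on a
# die force a higher supply even if the *typical* transistor is on target.
# All numbers (mean Vt, sigmas, alpha) are made up for illustration.
import random

ALPHA, VT_MEAN = 1.3, 0.35

def slowest_path_delay(vdd, vt_sigma, n_paths=10000, seed=1):
    """Worst alpha-power-law delay over n_paths randomly drawn thresholds."""
    random.seed(seed)
    worst = 0.0
    for _ in range(n_paths):
        vt = random.gauss(VT_MEAN, vt_sigma)
        worst = max(worst, vdd / (vdd - vt) ** ALPHA)
    return worst

target = slowest_path_delay(0.80, vt_sigma=0.015)   # tight dispersion: baseline

# With looser dispersion, sweep Vdd up until the worst path meets the same target.
v = 0.80
while slowest_path_delay(v, vt_sigma=0.030) > target:
    v += 0.005
print(f"Supply needed to cover the wider-dispersion slow tail: ~{v:.2f} V")
```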
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Because they have the tools to extract their process characteristics out of real silicon, so simulations will be accurate even at the transistor level, and routing characteristics extraction is accurate by definition. In principle there will be an acceptable prediction of the circuit's behaviour, unless the process does not yield the expected results, be it in transistor characteristics or, more likely, in characteristics dispersion; this latter case will often require a higher voltage if the dispersion is too high.

Are we talking as if we were designing a "tick" project for the first time or something else? Maybe I missed a couple posts earlier. Process specs from test chips sort of go like:

*** Ignore the below if I completely missed your point.
If we're talking about a tick project (and clearly I'm speaking with a design bias and egging on the process guys):
1) Process test chip returned this, it's kind of fubared but we think we can deliver this, run with these specs.
2) Ok, so we made some progress, but RC delays are a little worse than we expected, run with these new specs.
3) Ok, we're pretty sure this is what things will look like, run with these new specs.

and occasionally...

4) Tape out happened, holy crap, you need to use these revised specs, please fix everything.


So... what are we talking about?
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
Are we talking as if we were designing a "tick" project for the first time or something else? Maybe I missed a couple posts earlier. Process specs from test chips sort of go like:

*** Ignore the below if I completely missed your point.
If we're talking about a tick project (and clearly I'm speaking with a design bias and egging on the process guys):
1) Process test chip returned this, it's kind of fubared but we think we can deliver this, run with these specs.
2) Ok, so we made some progress, but RC delays are a little worse than we expected, run with these new specs.
3) Ok, we're pretty sure this is what things will look like, run with these new specs.

and occasionally...

4) Tape out happened, holy crap, you need to use these revised specs, please fix everything.


So... what are we talking about?

The process needs modifications for the chip to work as projected; a new stepping, using said modified process, will be released as soon as possible.

That's what Intel stated; they didn't talk of redesigning the chip itself, yet there are tons of people who refuse to admit this, hence the never-ending discussion about what is publicly known. Anyway, it's the first time I see some Intel-supporting members denying Intel's official statements, not talking of you of course.

http://www.hardware.fr/news/13891/idf-stepping-f-core-m.html

"We were told twice that this PCN actually announces a new stepping of Broadwell-Y, the F stepping, linked to changes in the 14nm process."

I think that's clear enough.
 

oobydoobydoo

Senior member
Nov 14, 2014
261
0
0
Leakage is too high with the current revision of Intel's 14nm FF, but they will fix it before 2H 2015. Clearly Intel has put out NDAs to prevent people from reviewing BW in its current state. I don't think Apple will even accept 14nm BW in its current state. Lenovo is OK with screwing its customers, and make no bones about it: anybody who bought a Yoga 3 Pro got screwed. 14nm FF in its current state is a worse-performing but denser process than 22nm, and categorically worse than TSMC 20nm planar, and Samsung has 20nm FF running great!
 

kimmel

Senior member
Mar 28, 2013
248
0
41
Because they have the tools to extract their process characteristics out of real silicon, so simulations will be accurate even at the transistor level, and routing characteristics extraction is accurate by definition. In principle there will be an acceptable prediction of the circuit's behaviour, unless the process does not yield the expected results, be it in transistor characteristics or, more likely, in characteristics dispersion; this latter case will often require a higher voltage if the dispersion is too high.

They don't extract every corner case in a complex design. They characterize the common cases. The first products through a new process always have a fun ride with process corner cases.

You are trying to indicate that somehow 14nm is an exception and is fundamentally flawed compared to earlier technologies. What you are really seeing is innovation at work. It's not always pretty. Doing things that no one has done before is hard. Posting random comments on the internet is easy.
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
The process needs modifications for the chip to work as projected; a new stepping, using said modified process, will be released as soon as possible.

That's what Intel stated; they didn't talk of redesigning the chip itself, yet there are tons of people who refuse to admit this, hence the never-ending discussion about what is publicly known. Anyway, it's the first time I see some Intel-supporting members denying Intel's official statements, not talking of you of course.

http://www.hardware.fr/news/13891/idf-stepping-f-core-m.html

"We were confirmed twice that this PCN is actualy announcing un new stepping of BDW-Y, the F stepping, due to changes in the 14nm process".

I think that it s clear enough.

Ah, now I'm caught up. What you said is a possibility, since the nature of stepping work outside bug fixes is to do small refinements to design or process anyway. However, as a counterpoint, you can't really get a process-to-process comparison based on the final voltage/frequency result, since the design has been timing/power optimized for the target frequency under a given process assumption. If that assumption was wrong, you end up with a very suboptimal design.
 

Excessi0n

Member
Jul 25, 2014
140
36
101
I don't understand this person. Does anyone understand this person? 30% less efficient than 22nm. You are straight up stating this.

He's a lunar-landing conspiracy theorist, so it's really not much of a stretch for him to believe that Intel would take such a massive step back. :awe:

It's best to just ignore him.
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
Ah, now I'm caught up. What you said is a possibility, since the nature of stepping work outside bug fixes is to do small refinements to design or process anyway. However, as a counterpoint, you can't really get a process-to-process comparison based on the final voltage/frequency result, since the design has been timing/power optimized for the target frequency under a given process assumption. If that assumption was wrong, you end up with a very suboptimal design.

We're in agreement on the possibilities. My take is that they got either too much input capacitance, which is unlikely as this parameter is well modelled in simulations, or, more probably, that the final process transistors didn't get the expected transconductance value; that is, their conductance when switched on by a 0.7V gate voltage is too low for the projected frequency. They can increase this parameter straightforwardly, but that would increase leakage if done the simplest way, that is by reducing the threshold voltage.
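To show why lowering Vt is the "simplest but leaky" knob, here is a toy comparison using the standard subthreshold-leakage relation (generic textbook constants, not Intel 14nm data): off-current grows exponentially as the threshold comes down.

```python
# Toy illustration of the Vt/leakage trade-off: subthreshold leakage scales
# roughly as Ioff ~ exp(-Vt / (n * kT/q)), so lowering Vt to get more drive
# at the same gate voltage costs static power exponentially.
# Constants are generic textbook values, not Intel 14nm data.
import math

KT_Q = 0.026   # thermal voltage at ~300 K, in volts
N = 1.3        # subthreshold slope (ideality) factor, a typical value

def leakage_ratio(vt_new, vt_old):
    """Relative off-current after lowering the threshold from vt_old to vt_new."""
    return math.exp((vt_old - vt_new) / (N * KT_Q))

# Example: shaving 50 mV off the threshold voltage.
print(f"Leakage increase for a 50 mV Vt reduction: ~{leakage_ratio(0.30, 0.35):.1f}x")
# ~4.4x with these numbers: a large static-power penalty for a modest drive gain.
```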
 