But it's not the same 14nm.
Well, no. It'll probably be cheaper.
But it's not the same 14nm.
Does it matter if it crushes it, or that Apple has more margin on a good-enough-performing product?
They switched previously because the previous uarch was far behind/obsolete.
Except that in 2016 Apple will have 14nm ARM products, not 20nm, and Intel will still be at 14nm.
We were talking about 10nm x86, which is due sometime in 2016 (barring any problems). There will be an overlap period where both are at 14nm (at least for Apple).
Intel is on a roll with the delays.
I doubt we will see 10 nm before mid 2017.
Intel is on a roll with the delays.
I doubt we will see 10 nm before mid 2017.
I'd imagine delays have a bit of a domino effect. I don't know how the quote got screwed up, so anyway...
We've only seen one recent significant node delay, at 14nm. That's hardly a trend line. If you have some facts to support your doubts, then please share them. I'd be interested.
Given that 14nm is a 2015 event for all intents and purposes, yeah there is practically no way 10nm is going to be relevant in 2016. Definitely a 2017 deal, if that.
But it's not the same 14nm.
Doesn't Intel have multiple teams working on different nodes - or is it all lockstep? I'm curious as to why you think that, practically speaking, even 2017 is iffy.
Intel is on a roll with the delays.
I doubt we will see 10 nm before mid 2017.
If ARM CPUs were competitive at 28nm planar against 22nm FinFET, then you can imagine how much better they will compete using 14nm FinFET against Intel's 14nm FinFET.
My general rule of thumb is that opinions based on uninformed observations, emotions, and so on are always wrong. This one is no exception.
If ARM CPUs were competitive at 28nm planar against 22nm FinFET, then you can imagine how much better they will compete using 14nm FinFET against Intel's 14nm FinFET.
They have multiple teams, but you have to depreciate 14nm first and then start 10nm production. So far, Intel's 14nm is a low-volume, low-power process for 2014; the high-performance process will only be ready from 2015 onwards. So they will have to push 10nm later, into 2017 or even 2018.
So you say Intel didn't learn from 14nm and will have yield issues again?
Yeah, due for 2016, just like how Broadwell was due for Q2 2014 :\
This is ridiculous. The primary factor holding back the introduction of a new process is its yields, not the idea that you have to please the accounting department.
They have multiple teams, but you have to depreciate 14nm first and then start 10nm production. So far, Intel's 14nm is a low-volume, low-power process for 2014; the high-performance process will only be ready from 2015 onwards. So they will have to push 10nm later, into 2017 or even 2018.
It's not really a question of learning from mistakes, although there were certainly mistakes to be learned from. The problem is that resources that could have gone toward accelerating the development of 10nm have been tied up with the delay of 14nm.
So you say Intel didn't learn from 14nm and will have yield issues again?
Why does Intel need a lower cost structure and increased performance compared to its competitors? Gee, I wonder.
Intel's portfolio in the Windows market doesn't need new nodes to dominate. It's just an expense if it's forced.
And looking at the ARM war, I don't see new nodes as the most important factor.
Why does Intel then have to get 10nm running full steam in 2017?
It's not really a question of learning from mistakes, although there were certainly mistakes to be learned from. The problem is that resources that could have gone towards accelerating the development of 10nm have been tied up with the delay of 14nm.
Intel is in a very unlucky position in the mobile market, one that was difficult to foresee 5 years ago when Bay Trail and the 14/22nm plans were laid:
Apple is hell-bent on its own architecture, which is performing extremely impressively
Samsung has grown into a dominant player, and it happens not only to make its own designs but also the newest processes on top of them
That means most of the market is closed. And to make it worse, Qualcomm has executed perfectly with a SoC that is a perfect match for the market's needs
As icing on the sour cake, Chinese manufacturers dump all kinds of chips on the market for next to nothing
That is just a crazy market to enter.
It's still up in the air. Intel's made good progress on "post-silicon" channel materials. Not sure if they will be ready in time for 10nm, though. It's all about money at this point. The concept is sound, the method is solid... it just needs refinement so that it can be integrated in a cost-effective manner.
This makes sense to me. I've seen it happen, and it's a classic engineering problem.
In the last con call, Krzanich said, "We have done no changes or shift to our 10-nanometer schedule but we won't really talk about 10-nanometer schedules until next year."
So I suppose we won't have a real answer till (sometime) next year. I imagine Intel needs a better sense of where yields and gate performance are at before giving more details.
Are the rumors still trending around GAA and possible use of SiGe?
Intel's portfolio in the Windows market doesn't need new nodes to dominate. It's just an expense if it's forced.
And looking at the ARM war, I don't see new nodes as the most important factor.
Why does Intel then have to get 10nm running full steam in 2017?
I don't think changing ISAs is something Apple would be afraid to do.
Prior to 2006, all Macs used PowerPC, after all.
So I think the question is not whether Apple will change ISAs, but when.
Refining their ARM processor to be both high-frequency and high-IPC while maintaining good performance per watt will take time. But I would assume Apple views the work as worth the effort, since they would gain full control of all the hardware features on their most high-end products (Macintosh).
Okay, now with that out of the way... what does Intel do to counter this eventuality?
Does Intel begin to promote big cores in ways it never did in the past? Maybe instead of four small cores in some SoCs we begin to see a single big core (with Hyper-Threading enabled)? (I.e., a smaller version of Core M with all, or most, of the big-core features enabled: AVX feature sets, etc.)
Don't forget Intel's growing process advantage.
FinFET was only introduced one node ago, so I don't think GAA is necessary yet. However, the last big update on electron mobility was back at 90nm with strained silicon.
Are the rumors still trending around GAA and possible use of SiGe?
The obvious answer: it won't last. There's no doubt the situation is the result of a lack of competition, and enabling those features is a good means of keeping the competition at bay. For the time being there is no need for them to do so, and they are profiting from that; however, they hold those cards to play whenever they need to.
Intel's process advantage is a true advantage, but I feel they are squandering too much of it at the low end by disabling so many transistors on the entry-level chips.
Take for example the Celeron dual cores. They are 130mm2 dies on 22nm with half the cache and the iGPU disabled, Hyper-Threading disabled, and AVX disabled. On top of that, the clocks are much lower than other chips in their TDP class. That seems awfully wasteful to me, and I'm not sure how long that kind of strategy will last once the competition with ARM really heats up.
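To see why shipping a large die with most of it fused off looks wasteful, here is a rough back-of-the-envelope sketch of dies per wafer and yield versus die area. This is purely illustrative: the defect density and the hypothetical smaller die size are assumptions I've picked for demonstration, not Intel figures, and the formulas are the standard textbook approximations.

```python
import math

# All numbers below are assumptions for illustration, not actual Intel data.
WAFER_DIAMETER_MM = 300.0   # standard 300 mm wafer
DEFECT_DENSITY = 0.1        # assumed defects per cm^2

def dies_per_wafer(die_area_mm2, diameter_mm=WAFER_DIAMETER_MM):
    """Common gross-dies-per-wafer approximation (area term minus edge loss)."""
    radius = diameter_mm / 2.0
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2.0 * die_area_mm2))

def yield_fraction(die_area_mm2, d0_per_cm2=DEFECT_DENSITY):
    """Poisson yield model: Y = exp(-A * D0), with A converted to cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# Compare the full 130 mm^2 die against a hypothetical trimmed 80 mm^2 die
# (i.e., a die without the disabled cache/iGPU area fabricated at all).
for area in (130, 80):
    gross = dies_per_wafer(area)
    good = gross * yield_fraction(area)
    print(f"{area} mm^2: ~{gross} gross dies/wafer, ~{good:.0f} good dies")
```

Under these assumed numbers the smaller die yields noticeably more good dies per wafer, both from fitting more candidates on the wafer and from each die being less likely to catch a defect, which is the intuition behind the "wasteful" complaint above.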
While I understand what you're saying, you really don't want a single core. There are significant performance degradations involved in regressing from dual core to single core, even if you're not running multithreaded software.
With that mentioned, I don't deny that Intel has a right to product segmentation. I just hope they would look for different ways of doing it than they have in the past. (Personally, I would take a fast-clocked single-core SoC with HT, AVX, and AVX2 enabled over a slow-clocked two-thread dual core with AVX/AVX2 disabled. It should also cost Intel less to make, while broadening software developers' access to AVX and AVX2.)
Well, do keep in mind that silicon straining is still improving over time. Today, PMOS performance has nearly caught up to NMOS performance, with each successive node upping the ratio of germanium to silicon in the SiGe source/drain, whereas PMOS used to be significantly behind NMOS. Replacement HKMG actually synergized very well with strained silicon, too.
FinFET was only introduced one node ago, so I don't think GAA is necessary yet. However, the last big update on electron mobility was back at 90nm with strained silicon.