[SeekingAlpha] Cherry Trail not ramping until March 2015


witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Sure, but if there have just been 4+ generations of material advances to keep leakage at bay, one would expect to see new advances at the other end of the efficiency duality, even if strained silicon is still useful in its 6th generation.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The obvious answer: it won't last. There's no doubt the situation is the result of a lack of competition, and enabling such features is a good means of keeping its competition at bay. For the meantime, there is no need for them to do so, and they are profiting from that; however, they have those cards in hand to play whenever they need to.

If they have to wait until ARM gets to that point, then they have waited too long. I would hope they would have a processor design to intercept ARM at a much lower point. Atom already exists for that purpose; I only wish it had more of the advanced features and performance of the big core (like the AVX extensions that Intel has worked on).

While I understand what you're saying, you really don't want a single core. There are significant performance degradations involved in regressing from dual to single, even if you're not running multithreaded software.

It would be a move from a slow clocked 2C/2T processor to a fast clocked 1C/2T processor on desktop and probably mobile as well. So it is still a two thread processor.

The problem with these budget cores is that they just don't bring in enough revenue to justify making a "built from the ground up" Celeron.

Intel already does have a specific die for low cost processors. These, of course, are the quad-core Atom processor dies of various sorts. What they would have to do is take the tablet-based quad-core Atom die and replace the cores with a single big core with hyperthreading enabled. Large two-thread processor > four-thread processor made up of small cores; it works for Apple's tablets... so I say, why not Intel as well?

Then take that 1C/2T SOC design and scale up to laptops and desktops like Intel already does with Bay Trail-M and Bay Trail-D.

The only thing that's really viable would be to fab dies without LLC, but with mask costs as high as they are, that hasn't been an option for a while. That used to be the way things were done, though.

Low level cache, do you mean L1 or L2?

If so, I have to wonder what would happen to performance per watt for mobile. As I understand things, processor cache is fairly low power.

Now with this idea I am proposing, the single big core with HT (1C/2T) basically replaces the quad core atom tablet dies and the desktop/mobile Celeron 2C/2T (which are based on the Core i3 die). So the mask costs should be spread out even more than they are today with just the atom quad core tablet chips.

Then leave atom for the phones, Avoton, and IoT, etc.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Low level cache, do you mean L1 or L2?

If so, I have to wonder what would happen to performance per watt for mobile. As I understand things, processor cache is fairly low power.

LLC == Last Level Cache. Typically L3$ in CPUs (though it's eDRAM, or L4$, in CPUs with Iris Pro). Relatively low power, but with a large number of xtors (or a large portion of the die). When it comes to tablets and phones, any drop in TDP matters. Since mobile apps typically have a much smaller memory footprint than desktop software packages, dropping the L3$ wouldn't hurt performance as much as it would on a laptop or PC.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Also, with GAA and other emerging technologies, it's less of a question of "is it necessary," and more of a question of "is it better, both physically and economically?" Or, if it is not necessarily better on the cost front, is the cost penalty outweighed by the performance improvement? Right now, the performance is very sound, but it's just uneconomical at this point in time to implement either GAA or SiGe/Ge/III-V.

This is my thinking - Intel had a lot of trouble pushing FinFET this far, so another geometry may be needed. We don't know the exact cause of the delay, but getting the fins that tall and thin couldn't have been a cakewalk. There is a paper from Intel (c. 2011): http://download.intel.com/newsroom/...esearch_Enabling_Breakthroughs_Technology.pdf which points towards Omega or Tri-gate xtors. I'm guessing SiGe for higher mobility (in part to maintain fast gate switching and a high current density at such small geometries, but obviously there is a lot more to it). It seems, from my limited knowledge, that Tri-gate would be much easier to manufacture than GAA.

Sure, but if there have just been 4+ generations of material advances to keep leakage at bay, one would expect to see new advances at the other end of the efficiency duality, even if strained silicon is still useful in its 6th generation.

Yes. Again, I think a new geometry and high mobility gate material will be necessary to get good electrostatics. Even if Intel sticks with FinFETs, how do they get the current and switching speeds they need without SiGe or another high mobility material? And, AFAIK, Intel is saving III-V materials for single digit nodes.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
This is my thinking - Intel had a lot of trouble pushing FinFET this far, so another geometry may be needed. We don't know the exact cause of the delay, but getting the fins that tall and thin couldn't have been a cakewalk.
No. We know the exact cause of the delay. It has everything to do with multiple patterning.

http://intelstudios.edgesuite.net/im/2013/archive/qa1/archive.html

At around 24:00.

There is a paper from Intel (c. 2011): http://download.intel.com/newsroom/...esearch_Enabling_Breakthroughs_Technology.pdf which points towards Omega or Tri-gate xtors.

I'm guessing SiGe for higher mobility (in part to maintain fast gate switching and a high current density at such small geometries, but obviously there is a lot more to it). It seems, from my limited knowledge, that Tri-gate would be much easier to manufacture than GAA.
Huh, in which year do you live? Tri-gate transistors have been in production since 2011. FinFET isn't going to disappear.


And, AFAIK, Intel is saving III-V materials for single digit nodes.
The same presentation you linked actually points towards III-V at 10nm. But in general, it isn't clear; some sources say III-V, others say Ge, others say both.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
This is my thinking - Intel had a lot of trouble pushing FinFET this far, so another geometry may be needed. We don't know the exact cause of the delay, but getting the fins that tall and thin couldn't have been a cakewalk.

Making them tall isn't necessarily the challenge - it is a challenge, but a relatively straightforward one to solve; rather, the challenge is evenly doping the fin from top to bottom and all the way through, given that the fins are so close to each other and your implant angles are ever more restricted to near-vertical vantages.

Not that that problem can't be solved, it just requires a different approach to doping the fins if they want to keep making the fins taller and closer together in future nodes.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
Making them tall isn't necessarily the challenge - it is a challenge, but a relatively straightforward one to solve; rather, the challenge is evenly doping the fin from top to bottom and all the way through, given that the fins are so close to each other and your implant angles are ever more restricted to near-vertical vantages.

Not that that problem can't be solved, it just requires a different approach to doping the fins if they want to keep making the fins taller and closer together in future nodes.


Thanks!
 
Aug 27, 2013
86
0
0
Not much different than the A7, CPU-wise

Yes and no. Apple has done something rather atypical in targeting long-run performance within the thermal constraints of a phone. They've clearly sacrificed some top-end short-run performance for that long-run consistency, which is rather unusual considering the usage model in phones, but it shouldn't be ignored in evaluating the A8.
 
Aug 27, 2013
86
0
0
I don't think changing ISAs is something Apple would be afraid to do.

Prior to 2006, all Macs used PowerPC, after all.

So I think the question is not will Apple change ISAs, but when will they change ISAs?

Refining their ARM processor to be both high frequency and high IPC while maintaining good performance per watt will take time. But I would assume Apple views the work as worth the effort, since they would have full control of all the hardware features on their highest-end products (Macintosh).

Okay, now with that out of the way.....What does Intel do to counter this eventuality?

Does Intel begin to promote big core in ways they never did in the past? Maybe instead of four small cores in some SoCs, we begin to see a single big core (with hyperthreading enabled)? (i.e., a smaller version of Core M with all (or most) of the big-core features enabled: AVX feature sets, etc.)
Apple certainly is more willing than any other vendor to change ISAs; they have a very loyal userbase that is willing to endure inconvenience.
Pros:

  1. Profitability. As long as Apple is able to reach a large enough scale and spread the development costs over enough chips, it can be far more profitable for them to have Samsung or TSMC manufacture their own designs than to keep paying Intel $200+ per chip.
  2. Flexibility. Don't underestimate the advantages of having the option to bring critical things on-die, e.g., Qualcomm. It enables new functionality, drops overall cost, allows better form factors, etc.
  3. NIH. Don't discount the sheer ego factor; Apple has a serious case of NIH (not-invented-here syndrome).

Cons:

  1. Always the risk that Intel will make a process or CPU design breakthrough that would leave them at a disadvantage. While they are on the same platform as everyone else, they have the money to pay for the earliest release and the best SKU from Intel.
  2. Scale will be much, much harder to achieve with Macs than it was with iPhones and iPads. We aren't talking about an insignificant development cost.
  3. The A7 & A8 have had the advantage of being fast followers; they haven't had to develop new technologies for cutting-edge CPU performance. It's all well and good to have 70% of the performance of Intel's Core designs; it's a different proposition to actually exceed them on worse process tech.
I think it's a closer decision than most of the posters here think. IMO, if Apple feels like Samsung and/or TSMC can close the process gap to under a year (on a process optimized for higher performance than a smartphone-optimized one like HPM) and they can design a core that has 85% of the peak performance of a current Core design, Apple is gone.
 
Aug 27, 2013
86
0
0
They've switched architectures twice, from Motorola 68k to PowerPC and then from PowerPC to Intel. They could easily do it again to go from Intel to ARM.

They could do it, but there is very little that is easy about it. Much like Microsoft's struggles with RT, an ARM-based OS <whatever #> would have a serious tail of software compiled for Intel. It's not insurmountable, but it's a serious pain in the neck for your user base, so you had better have some compelling user feature to offset the inconvenience, and Apple having higher dollar margins isn't it. It would need to be significantly better battery life, functionality, form factor, etc., not just more profitable for Apple.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
No. We know the exact cause of the delay. It has everything to do with multiple patterning.

http://intelstudios.edgesuite.net/im/2013/archive/qa1/archive.html

At around 24:00.

Huh, in which year do you live? Tri-gate transistors are being produced since 2011. FinFET isn't going to disappear.

The same presentation you linked actually points towards III-V at 10nm. But in general, it isn't clear; some sources say III-V, others say Ge, others say both.
Thank you for your response.

First, thanks for the link. Around the 24-minute mark it's mentioned that extracting more capability out of existing patterning equipment is the problem - not specifically multi-patterning. I'll take your word for it. If that's the case, then Intel learned something from the 14nm problems, but it is also going to be even harder to build FinFETs @ 10nm.

Second, my bad. I meant Pi- or Omega-FETs as the next step:

Though it seems that if one were to choose Omega-FETs, why not just go all the way to GAA? In that case it seems like Pi or GAA would be the next step (or not, looking at IDC's post).

Lastly, the time frame in the article I linked is off. Intel has been able to stretch some materials tech further, hence my thinking that SiGe might be enough of an advance for the next node.

I'm no expert - I'm just reading what I can and hoping someone with experience is able to share more. I understand the basic device physics and quantum mechanical models; I don't understand the chemistry or the details of exactly how all the etching, deposition and doping work together in an actual wafer production module.
 

jpiniero

Lifer
Oct 1, 2010
14,839
5,456
136
They could do it but there is very little that is easy about it. Much like Microsoft's struggles with RT, an ARM based OS <whatever #> would have a serious tail of software compiled for Intel.

Objective-C (and presumably Swift) is very portable. Most of what is on both the iOS and OS X stores would be easy to port, provided it was updated for the (presumably) new interface. Obviously any x86 apps with C code would be more work, but I imagine Apple would start enforcing portability requirements to get on the App Store as part of the migration. It'd be far cleaner than the mess it would be on Windows.
 
Aug 27, 2013
86
0
0
Objective-C (and presumably Swift) is very portable. Most of what is on both the iOS and OS X stores would be easy to port, provided it was updated for the (presumably) new interface. Obviously any x86 apps with C code would be more work, but I imagine Apple would start enforcing portability requirements to get on the App Store as part of the migration. It'd be far cleaner than the mess it would be on Windows.

Yes and no. While what you are saying is technically correct, there are a host of issues beyond that: getting developers to do it for small titles on a new platform with initially small numbers (chicken/egg problem), deployment, user confusion, etc. The lack of initial software support also severely impacts early adopters, and if any Apple-based chip doesn't have power to burn, the emulation experience will be weak. It's possible, but it's a tough nut to crack and it's never easy.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
So to sum things up concisely, here is what I would have liked to have seen happen for Intel in 2015:

1. Cherry Trail and Braswell canceled.

2. Celeron (based on Broadwell Core i3 die) canceled.

3. #1 and #2 replaced with a fully enabled Broadwell single core with HT (ie, 1C/2T) and 16 Gen 8 EUs.

4. Broadwell Pentium variants continue to be based on Broadwell Core i3 die.

Essentially Core and Atom based Celerons merge into one new SOC die which is low in cost and helps promote the adoption of Intel big core instruction sets.

Pentium and Core i3 continue to be based off the same die and use a separate PCH.

Unless I am missing something, doesn't such a scheme make more sense than the current one Intel is using?
 

Shivansps

Diamond Member
Sep 11, 2013
3,873
1,527
136
CT is like an "ARM A7" and Broadwell an "A15".

In that respect they are both fine, if only Broadwell prices were lower.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
CT is like an "ARM A7" and Broadwell an "A15".

In that respect they are both fine, if only Broadwell prices were lower.

I'm not sure that comparison is entirely analogous, because Intel uses binning and market segmentation tactics we don't see (so much) in the ARM world.

For example, I feel both desktop Celerons (e.g., G1820, G1840, etc.) and ULV mobile Celerons (e.g., 2955U, 2980U, etc.) are examples of processors clocked very low for their respective TDP classes.

Because I feel the low designated clocks (for Celeron desktop and mobile) are not being set out of necessity, I would like to see the same overall performance from less silicon. A 1C/2T big-core SoC should be able to get the same job done via higher clocks while still fitting within the necessary TDP classes.
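
The trade-off being argued here can be sketched with some idealized arithmetic. Everything in this sketch is hypothetical: the clocks, the ~25% Hyper-Threading uplift, and the assumption that throughput scales linearly with cores and clock (real workloads scale less than linearly):

```python
# Idealized throughput arithmetic for the "one fast core vs. two slow
# cores" argument. All figures here are hypothetical illustrations.

def throughput(cores: int, ghz: float, smt_yield: float = 1.0) -> float:
    """Aggregate throughput as cores x clock x SMT benefit (idealized)."""
    return cores * ghz * smt_yield

# A slow-clocked 2C/2T part vs. a fast-clocked 1C/2T part with a
# hypothetical ~25% Hyper-Threading uplift:
slow_2c2t = throughput(cores=2, ghz=1.4)
fast_1c2t = throughput(cores=1, ghz=2.3, smt_yield=1.25)

print(f"2C/2T @ 1.4 GHz: {slow_2c2t:.2f}  |  1C/2T @ 2.3 GHz: {fast_1c2t:.2f}")
```

Under these toy numbers the two land within a few percent of each other, which is the shape of the argument; whether a real 1C/2T die could hold that clock within the same TDP class is the open question.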

And as far as atom goes, there is no need to get rid of it entirely. It can still exist in phones, micro servers and IoT. But unless someone can convince me otherwise I feel there is no need to go there for tablets. Instead bring on a 1C/2T big core SOC with Core M-like characteristics.

1C/2T big core SOC to replace quad core atom SOC for tablets and Core (and atom) based Celerons for desktop and mobile.
 
Last edited:

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
So to sum things up concisely, here is what I would have liked to have seen happen for Intel in 2015:

1. Cherry Trail and Braswell canceled.

2. Celeron (based on Broadwell Core i3 die) canceled.

3. #1 and #2 replaced with a fully enabled Broadwell single core with HT (ie, 1C/2T) and 16 Gen 8 EUs.
Unfortunately, you didn't take me seriously enough when I said that you really don't want a single core processor. REALLY.

Tell me, have you seen even one single core processor on the market, outside of the embedded space? There's a good reason for this -- even when your applications are all single threaded, single cores are still tremendously inferior to a dual core.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Tell me, have you seen even one single core processor on the market, outside of the embedded space? There's a good reason for this -- even when your applications are all single threaded, single cores are still tremendously inferior to a dual core.

I wouldn't be tremendously worried about whether the processor was 1C/2T or 2C/2T.

What matters is the total amount of performance.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The A8 doesn't have about the same IPC as Haswell. Let's not try and fool anyone about that part.

Also, Apple would have to mess up the software support for their entire userbase, and depend even more on paying $10,000 for initial wafers and multibillion-dollar R&D. And Apple knows it can get first dibs with Intel.

I don't think so. On an IPC basis, the improved Cyclone is quite close to Haswell:

http://ark.intel.com/products/75802/Intel-Core-i5-4200Y-Processor-3M-Cache-up-to-1_90-GHz
http://browser.primatelabs.com/geekbench3/575799
http://browser.primatelabs.com/ios-benchmarks

Integer:
Core i5-4200Y (1.4 GHz base, 1.9 GHz max) - 1780
A8 (1.4 GHz) - 1609

The Haswell Core i5 has the benefit of turbo and a FinFET process. I am confident that Apple's A9 on TSMC 16FF+ will be very competitive with Intel's 14nm Core M. Apple just needs to deliver 2 key features: SMT and turbo. I am sure they are waiting for a 16nm FinFET process to deliver those two features without increasing power draw significantly. :thumbsup:
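
As a sanity check on the "quite close" claim, one can normalize the quoted Geekbench 3 integer scores by clock. This is a rough sketch; the key unknown is what clock the 4200Y actually held during the run, so both bounds are shown:

```python
# Per-GHz comparison of the Geekbench 3 integer scores quoted above.
# The i5-4200Y's clock during the run is unknown (1.4 GHz base,
# 1.9 GHz turbo), so both bounds are computed.

a8_score, a8_ghz = 1609, 1.4
i5_score = 1780
a8_per_ghz = a8_score / a8_ghz

for label, i5_ghz in [("1.4 GHz base", 1.4), ("1.9 GHz turbo", 1.9)]:
    ratio = a8_per_ghz / (i5_score / i5_ghz)
    print(f"i5-4200Y at {label}: A8 per-GHz score = {ratio:.0%} of Haswell's")
# ~90% if the i5 stayed at base clock, ~123% if it held full turbo
```

Geekbench subtests differ across platforms, so treat this as a rough bound rather than a real IPC measurement.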
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
But which 28nm ARM CPU was actually competitive against 22nm FF, in terms of both performance and efficiency?

Intel ATOM Z3770 Quad Core
Single Core = 964
Multi Core = 3045

Qualcomm MSM8974AC Snapdragon 801 Quad Core
Single Core = 986
Multi Core = 2937

NVIDIA K1 on Shield Tablet
Single Core = 1099
Multi Core = 3476

Well, I believe those are pretty competitive in performance and efficiency. Not to mention the Atom is way slower in iGPU, and the Snapdragon 800 and 801 were used in mobile phones (lower TDP headroom).
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Unless you have actual power consumption numbers, those benchmarks mean nothing. You must calculate performance per watt with 2 variables, not one. For example, we know that the S800 throttles a lot. In the case of the Nexus 5, its performance is halved within seconds, while the Z3770 doesn't experience any throttling as far as I know.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Unless you have actual power consumption numbers, those benchmarks mean nothing. You must calculate performance per watt with 2 variables, not one. For example, we know that the S800 throttles a lot. In the case of the Nexus 5, its performance is halved within seconds, while the Z3770 doesn't experience any throttling as far as I know.

The Nexus 5 is a 5" phone; the Z3770 hasn't been used in anything below 8-10" tablets.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
The Nexus 5 is a 5" phone; the Z3770 hasn't been used in anything below 8-10" tablets.

Sure, but we can do some well-educated estimates.

1) We know that Apple's A7 consumes 3W during the Kraken benchmark.

2) We know that the Z3770 consumes at most 2.4W during Cinebench, so we'll take this worst-case number.

3) Cyclone needs 5617ms.

4) Z3770 scores 4599ms.

=> Doing the math, one finds that Silvermont is 53% faster per joule. This isn't exactly apples to apples, since they run different software, we only have an upper bound on Z3770 power consumption, and the power measuring technique used for the iPad has an unknown margin of error as well; but since the performance per watt numbers are so far apart, it should be safe to conclude that Silvermont is superior. No surprise.
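
The arithmetic behind that 53% figure can be reproduced directly from the numbers above (with the same caveats: different benchmarks, an upper bound on Z3770 power, and unknown measurement error on the iPad side):

```python
# Energy per benchmark run (joules = watts x seconds), from the
# figures quoted above: A7 at ~3 W over 5617 ms of Kraken, Z3770 at
# <= 2.4 W over 4599 ms of Cinebench. Cross-benchmark, so an estimate.

a7_energy = 3.0 * 5.617        # ~16.85 J per run
z3770_energy = 2.4 * 4.599     # ~11.04 J per run

# Work per joule = runs completed per joule spent; the ratio of the
# two energies gives Silvermont's advantage.
advantage = a7_energy / z3770_energy - 1
print(f"Silvermont does ~{advantage:.0%} more work per joule")  # ~53%
```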

Comparing against Qualcomm's SoCs will yield the same result, since both SoCs are built on the same process, giving them similar characteristics. Also note that Qualcomm uses a much higher clock speed, so if anything, efficiency would be even lower.
 
Last edited:

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I don't think so. On an IPC basis, the improved Cyclone is quite close to Haswell:

http://ark.intel.com/products/75802/Intel-Core-i5-4200Y-Processor-3M-Cache-up-to-1_90-GHz
http://browser.primatelabs.com/geekbench3/575799
http://browser.primatelabs.com/ios-benchmarks

Integer:
Core i5-4200Y (1.4 GHz base, 1.9 GHz max) - 1780
A8 (1.4 GHz) - 1609

The Haswell Core i5 has the benefit of turbo and a FinFET process. I am confident that Apple's A9 on TSMC 16FF+ will be very competitive with Intel's 14nm Core M. Apple just needs to deliver 2 key features: SMT and turbo. I am sure they are waiting for a 16nm FinFET process to deliver those two features without increasing power draw significantly. :thumbsup:
There is absolutely no point in doing cross platform hardware comparisons to try to determine who's got the better what.

Also, TSMC's 16nm process will only be roughly equal to Intel's 22nm process in performance, and even then, TSMC's PMOS performance lags significantly. They will still be over a node behind in performance. (Actually, I just took another look at their numbers from IEDM 2010... their PMOS was much improved over IEDM 2009 -- ~1100/100. Not bad, but still behind Intel's 45nm at the same voltage for NMOS. No doubt they've improved since 2010, but it just goes to show how far behind TSMC is -- roughly 3 years.)
 
Last edited: