S|A: (rumour) Apple dumps Intel from laptop lines


drizek

Golden Member
Jul 7, 2005
1,410
0
71
"Semi-accurate" is right. He mentions how awesome his moles are, but they were semi-wrong on this one:

The difference is that the metal notebooks are going to be based on mobile Nehalems, while the plastic ones will stick with Penryn Core 2 CPUs. This may seem counter-intuitive, but the reason is GPUs. On the aluminum models, there is room for an external GPU, so that is exactly what you get. The plastic ones don't have room for a separate GPU, so they are forced to stick with Penryn and the bug infested 9400.

As an owner of a painfully Penryny, painfully Aluminumy, and painfully recently purchased MacBook Air, I have to say that he was wrong on this. Until a couple of weeks ago, all Apple systems smaller than 15" were using Penryn. He was right about the graphics part, but Apple only switched the MBP13 to Nehalem/SB recently.

Anyway, from the title, I was hoping they were going with Llano rather than ARM. The current STARS CPU setup is pretty dated, but it's not bad, and I think the 13" MBP could really use Llano graphics, never mind that Llano is quad-core while Apple is using dual-core SB.

Also, Apple loves multi-socket on the desktop. What if... they went with Zambezi? Quad socket, 64-core Mac Pro anyone? Then they could advertise it as the fastest PC ever and not be lying about it.
 

zeekr

Member
Nov 3, 2009
30
0
61
I agree with the Ars analysis on this.

All the information available indicates that Ivy Bridge would wipe the floor with a "performance" ARM-based processor for the foreseeable future, and we have a hard time believing users would willingly downgrade from an Ivy Bridge-based laptop to an ARM-based laptop all other things being equal.
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
Look at the MacBook Air; ARM doesn't need to exist. 7 hours of battery life in a 0.68" shell. It literally can't get any thinner; I can already notice a tiny bit of flex in it.

So unless ARM chips can scale really, really well, I just don't see it. There might be a niche for it in some sort of workstation-class laptop: a 17" laptop with a multi-core/multi-die/multi-socket ARM setup that allows for extremely high performance in custom applications and long idle battery life. But that is a really niche market.
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
I think a lot of people are forgetting a lot of things. The best current ARM CPUs are on 45nm. They have two cores and a die size of 100-120mm^2. AMD got its C-50 on 40nm to about 75mm^2. The important thing, though, is that the A5 sips power, some 80 mW. Granted, that can go higher, but AMD can barely get anything below 5 W! Intel does even worse, as far as can be seen from the likes of Atom. So everybody is saying that ARM can't compete with Sandy Bridge. ARM can; the current CPUs can't. They aren't meant to. Sandy Bridge is 32nm, 2x the die size (of a rather monolithic ARM die; most of ARM's previous designs were far smaller), 3x the clock speed, 2-4x the core count, and who knows how much more power. It could be up to 1,000x more power, but for most uses Sandy Bridge is probably using closer to 200-500x more power than an A5. Just imagine how fast an ARM CPU would be if it were 32nm, 4+ cores, 2.5+ GHz, 300mm^2+, and used more than, say, 25 W. That's where ARM is headed, and the fact that it isn't there at the moment isn't because 'ARM' is slow; it's because nobody has pushed it yet. Nvidia will; that's why they project 100x-faster ARM CPUs by 2015, IN THE SAME FORM FACTOR!!! With ARM, there is no slow going; it's performance doubling after performance doubling. With x86, we have reached the limit, so to speak: 10% here, 25% there, maybe 30% after two years.
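Quick back-of-the-envelope on those ratios (a Python sketch; the Sandy Bridge TDPs are typical published figures assumed for illustration, not measurements):

Code:
# Rough power-ratio arithmetic for the claims above; all wattages are
# ballpark figures, not measurements.
a5_power_w = 0.080              # Apple A5, the ~80 mW figure quoted above
sandy_bridge_tdp_w = {          # assumed typical Sandy Bridge TDPs
    "mobile dual-core": 35.0,
    "desktop quad-core": 95.0,
}

for name, watts in sandy_bridge_tdp_w.items():
    ratio = watts / a5_power_w
    print(f"{name}: {watts:.0f} W is about {ratio:.0f}x the A5's 80 mW")

# mobile dual-core: 35 W is about 438x   -> inside the 200-500x band
# desktop quad-core: 95 W is about 1188x -> near the 1,000x extreme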
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
They wanted to ditch x86 and go with Itanium at the time so they would have an architecture they wouldn't have to license to AMD.



There is an x86 cross licensing agreement between the two companies. Nobody goes begging.



Here's the odds of AMD building a CPU containing both the x86 and ARM instruction sets:
If it happens in the next several years I will buy you one

1. Yes, I know that; that's what I meant in the first place.

2. Yes, I know that too, but before the Athlon 64 was released AMD didn't have SSE3; Intel offered it in exchange for AMD's x64 license.

3. Hey, AMD can build an APU, so why can't they build something like that? Remember before the Athlon 64 was released? People were talking about how to transition to 64-bit, and Intel was even talking about building a dual-core hybrid that combined an Itanium core and x86, but then AMD poured on their pixie dust and the Athlon 64 was born, shocking everybody including Intel. So it's not entirely impossible.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Intel would love to ditch x86. . . for their own proprietary ISA that they have complete control over. Not for ARM.

The ONLY way Intel would ever transition to ARM is if they have no choice. They would much rather stay in the x86 market, where their profit margins are MUCH higher and their only competition is AMD. Why would Intel ever want to switch to selling cheap ARM processors and competing against several other ARM licensees?

It's not impossible that Intel could get back into the ARM market, but they won't go willingly. It will only happen if ARM starts seriously eating Intel's lunch in mobile, PCs, and servers. And even then, they would still keep x86 around for legacy and niche applications, much in the way they keep making Itanium CPUs.

I think you misunderstand me. I don't mean Intel will use the ARM architecture; it never will, especially in its current position. What I mean is that Intel can and will ditch x86 for a more efficient architecture.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Is there a reason why Intel can't do their own ARM SoCs on this new node or the next for phones/tablets? I mean, no one else's SoC could come close in power usage, could it? Everyone else is on 45nm, right?

They could make a mint and have the fab capacity to back it up.
 

Khato

Golden Member
Jul 15, 2001
1,225
280
136
I think a lot of people are forgetting a lot of things. The best current ARM CPUs are on 45nm. They have two cores and a die size of 100-120mm^2. AMD got its C-50 on 40nm to about 75mm^2. The important thing, though, is that the A5 sips power, some 80 mW. Granted, that can go higher, but AMD can barely get anything below 5 W! Intel does even worse, as far as can be seen from the likes of Atom.

Sure the A5 sips power, but it sure had better, considering its comparatively anemic performance. As well, I'm pretty sure that even it ends up at more than just 80 mW, considering that figure matches up with ARM's stated dynamic power consumption for that design. Either way, the A8 definitely requires more power, and then once you get up to a dual-core A9 they state 0.5 W for the 800 MHz 'power optimized' core design and 1.9 W for the 2 GHz 'performance optimized' core design. Remember, those figures are -only- for the actual processor core, which doesn't include L2 cache, graphics implementation, memory interface, or any other necessary I/O.
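The scaling implied by those two published figures is worth spelling out (a quick Python sketch using only the numbers above):

Code:
# Scaling implied by ARM's published Cortex-A9 design figures quoted above.
power_opt_hz, power_opt_w = 800e6, 0.5   # 'power optimized' design
perf_opt_hz, perf_opt_w = 2000e6, 1.9    # 'performance optimized' design

freq_ratio = perf_opt_hz / power_opt_hz   # 2.5x the clock
power_ratio = perf_opt_w / power_opt_w    # 3.8x the power

print(f"{freq_ratio:.1f}x the clock costs {power_ratio:.1f}x the power")
# Power grows faster than frequency because higher clocks also need higher
# voltage (dynamic power scales roughly with f * V^2).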

As for Atom... it's nowhere near as bad as many love making it out to be. Though, of course, it could be so much better if Intel had actually put resources into improving it instead of just letting it linger pretty much unchanged since introduction. Part of the impression of poor power consumption is certainly due to the fact that its initial implementation used a horrible chipset, while the other part is thanks to the simple fact that the N and D varieties used by netbooks/nettops are 'low cost' products, not low power. I mean, they start at 6.5W and go up to 13W just for the processor, with another 2.1W for the NM10 I/O hub. Meanwhile the actual low-power implementation, the Z series, is at 3W or less for the processor and then 0.75W for the SM35 - and that's all on Intel's 45nm process too. I won't find just a simple process shrink of the current design that interesting, though... rather, I'm anxious to see what the Atom core redesign will end up looking like, not to mention when it'll arrive. It's clear that they can get down to ARM active power levels, especially if they'd leverage their process technology advantage, but compared to the A9 I'd bet that performance would be merely comparable, and ARM almost surely wins at idle power.
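Side by side, those platform budgets look like this (a sketch; the numbers are just the TDPs quoted above):

Code:
# Rough Atom platform power budgets from the TDP figures quoted above.
nm10_hub_w = 2.1               # I/O hub paired with the N/D parts
sm35_hub_w = 0.75              # hub paired with the Z series

atom_nd_cpu_w = (6.5, 13.0)    # 'low cost' N/D netbook/nettop CPUs
atom_z_cpu_max_w = 3.0         # 'low power' Z series, "3W or less"

nd_lo, nd_hi = (w + nm10_hub_w for w in atom_nd_cpu_w)
z_max = atom_z_cpu_max_w + sm35_hub_w

print(f"Atom N/D + NM10: {nd_lo:.1f}-{nd_hi:.1f} W")  # 8.6-15.1 W
print(f"Atom Z + SM35:   up to {z_max:.2f} W")        # up to 3.75 W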
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
The A5 is going quad-core in a few months on 28nm, all in a tablet- and even smartphone-friendly power envelope. A 28nm A9 versus a 45nm Atom is no contest. Atom is multithreaded, but it is still only one core. Also, if I'm not mistaken, Atom is in-order while the A9 is out-of-order.


Is there a reason why Intel can't do their own ARM SoCs on this new node or the next for phones/tablets? I mean, no one else's SoC could come close in power usage, could it? Everyone else is on 45nm, right?

They could make a mint and have the fab capacity to back it up.

I think there are a number of reasons why they won't.

1. Because they're Intel. They're invested in x86 and their current designs. They don't want to license an architecture from a competitor, especially since doing so essentially means the death of x86.
2. There isn't a lot of money to be made here.
3. Manufacturing isn't a big deal in terms of power consumption. ARM by its nature has very low idle power draw. Even if Intel builds it on their best stuff, the idle life of a phone will be the same as if Samsung or GloFo made it, since idle battery life is basically determined by the wireless radios and not the chipset. Turning off the 3G on my Palm Pre will allow it to last for days on a charge, compared to less than a day with it on and downloading emails and such. I bet most smartphones can run for weeks just idling on any old ARM chip. Intel can only offer benefits in terms of performance, and,
4. Performance doesn't really matter. We hit the sweet spot a couple of years ago with the ~600 MHz Cortex-A8. Since then all smartphones have basically been "snappy enough" to do smartphoney things. The 1 GHz A8 iPad is good enough for tablety things. The current crop of dual-core A9s is already overkill and, with quad cores on the horizon, there just isn't all that compelling a reason to have a more powerful chip in your phone. Also, Qualcomm has shown that there is quite a bit of room to play with existing designs and processes. They can deliver up to 1.5 GHz dual cores in a phone now on 40nm. Intel might be able to offer 2 GHz on 22nm, but is anyone really willing to pay a premium for that? I'm not sure I would. I wouldn't have said that two or three years ago, but I'm quite happy with my 1 GHz A8 at this point and see no reason to upgrade. If I had a 1.5 GHz dual core, I would be even less motivated.
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
That's the inverse here: Atom & Bobcat both have more than enough power right now. D-Link dumped the Tegra 2 in favor of Atom for the Boxee Box because it couldn't deliver on the output.

The current refresh of Atom is 32nm and the next will be 22nm, while AMD hits 28nm. Both companies can scale back on voltage & clocks, so it just comes down to how much they can cut energy usage.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Well, you know, I've been in South Korea, Japan, Europe, and the US recently, and apart from the US with its atrocious networks (although as long as AT&T stays out you may still stand a chance!) I don't see many problems from that direction.
We're all backwards hicks, what can I say? Ultimately, I don't know what AT&T is going to do. Everyone knows Verizon is the only network worth anything, and most people I know on AT&T, like myself, inherited AT&T, and will move away as soon as there is a reason to get into a contract (travel, home w/ bad service, phone that only Verizon offers while we're still network-locked, etc.).

But, for phones/tablets to be able to operate as basically thin clients, doing everything in the cloud, would take wireless networks with performance/capacity that I haven't even seen proposed, and would only be halfway efficient if the computing was distributed across the network, as well. It's been a pipe dream since long before wireless telephony, and every time it looks almost here, there are either too many logistical roadblocks, or there are powerful enough 'fat client' devices to not bother.
You guys though? Goodness you should really hope it doesn't take off - but just looking at the US is a bit narrow for such trends.
Are there cell networks bringing tens to hundreds of MB/s (big B), per cell user, out to the boonies in most of the world? (Maybe they'll only use 1 MB/s right now, but when they fire up a 3D game, you have to send all the video over the network...) Most places can't even do that wired, yet.
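To put a rough number on "send all the video over the network" (a sketch; the resolution, framerate, and compression ratio are assumptions for illustration):

Code:
# Back-of-the-envelope bandwidth for streaming a device's screen from the
# cloud; resolution/framerate/compression are illustrative assumptions.
width, height = 1024, 768    # assumed tablet-class display
bytes_per_pixel = 3          # 24-bit color
fps = 60

uncompressed_mb_s = width * height * bytes_per_pixel * fps / 1e6
compressed_mb_s = uncompressed_mb_s / 50    # assume ~50:1 video compression

print(f"uncompressed: {uncompressed_mb_s:.0f} MB/s per user")    # ~142 MB/s
print(f"~50:1 compressed: {compressed_mb_s:.1f} MB/s per user")  # ~2.8 MB/s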

It might be worth trying, if performance/watt had an absolute limit, and we had already hit it, but I see no reason not to expect 30% or more year-over-year improvements in mobile performance for many years to come. Doing it in the cloud would waste resources, when it will be so cheap to have all the computational power you need right in your pocket, instead. Only latency-insensitive tasks, that can be best taken care of by a distributed system, will be primarily "cloud"-based (hello, Google). The rest is all pipe dreams from the minds of buzzword addicted marketing weenies.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Is there a reason why Intel can't do their own ARM SoCs on this new node or the next for phones/tablets? I mean, no one else's SoC could come close in power usage, could it? Everyone else is on 45nm, right?

They could make a mint and have the fab capacity to back it up.

http://en.wikipedia.org/wiki/XScale

See no reason why not. They'd need to expand their existing ARM license to include the newer cores from what I can tell.

The thing with Intel is that they will only pursue diversification if the diversification does not dilute their gross-margins.

Nvidia, Qualcomm, TI - they can all get away with 40% gross margins when pricing their ASP targets and their respective shareholders are accustomed to it.

No "inordinate gross margin expectation" premium has been built into their stock price like it has for Intel.

We saw it with Intel's earlier ventures into HDTVs and mobile phones. They got in just far enough to realize the crushingly desperate gross-margin landscape of those markets and then they bailed. Same with Larrabee and the discrete GPU market.

It's not enough for Larrabee to be competitive, it must also be able to command such a good ASP that it will deliver >50% gross-margins for Intel or else they cannot justify getting into the market to begin with.

With Apple, it's the same story: their stock price is built upon crazy-high expectations of continued superior gross margins. You don't get great margins if your suppliers (Intel) are pricing their product to you such that they too get great margins.

Seeing how the iPads are ARM-based products, and selling like hotcakes with good margins, I can see Apple licking their chops and planning to migrate the iPad platform "upstream" into the laptop form factor as well.

Who needs Intel if your customer doesn't care whose CPU is powering their iApps?
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
Sure the A5 sips power, but it sure had better, considering its comparatively anemic performance. As well, I'm pretty sure that even it ends up at more than just 80 mW, considering that figure matches up with ARM's stated dynamic power consumption for that design. Either way, the A8 definitely requires more power, and then once you get up to a dual-core A9 they state 0.5 W for the 800 MHz 'power optimized' core design and 1.9 W for the 2 GHz 'performance optimized' core design. Remember, those figures are -only- for the actual processor core, which doesn't include L2 cache, graphics implementation, memory interface, or any other necessary I/O.

As for Atom... it's nowhere near as bad as many love making it out to be. Though, of course, it could be so much better if Intel had actually put resources into improving it instead of just letting it linger pretty much unchanged since introduction. Part of the impression of poor power consumption is certainly due to the fact that its initial implementation used a horrible chipset, while the other part is thanks to the simple fact that the N and D varieties used by netbooks/nettops are 'low cost' products, not low power. I mean, they start at 6.5W and go up to 13W just for the processor, with another 2.1W for the NM10 I/O hub. Meanwhile the actual low-power implementation, the Z series, is at 3W or less for the processor and then 0.75W for the SM35 - and that's all on Intel's 45nm process too. I won't find just a simple process shrink of the current design that interesting, though... rather, I'm anxious to see what the Atom core redesign will end up looking like, not to mention when it'll arrive. It's clear that they can get down to ARM active power levels, especially if they'd leverage their process technology advantage, but compared to the A9 I'd bet that performance would be merely comparable, and ARM almost surely wins at idle power.

Well, if they are projected to be that high, for one I don't believe they are; the A15 is supposed to use even less than the A5. Secondly, they couldn't possibly be used in phones if their power was that high, since that alone exceeds what the battery could sustain, and that would be horribly dumb. I'd like to see where you are getting this info. If they wouldn't be used in phones, then they would obviously be faster, but seeing as how there are no numbers for that, I can't say on that one.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Wouldn't Intel-built ARM SoCs with a 50% margin on them still be competitive, if not cheaper than the competition? I mean, if they did this and it came down to a price war, Intel could cut prices more easily than the rest due to the fab advantage, right? Same as they can do currently against AMD.

That's ignoring that Intel's chips could be faster as well as use less power than the competition's SoCs (assuming Intel does a good job, which I assume they would, since they seem to be damned good in this area).
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
Wouldn't Intel-built ARM SoCs with a 50% margin on them still be competitive, if not cheaper than the competition? I mean, if they did this and it came down to a price war, Intel could cut prices more easily than the rest due to the fab advantage, right? Same as they can do currently against AMD.

That's ignoring that Intel's chips could be faster as well as use less power than the competition's SoCs (assuming Intel does a good job, which I assume they would, since they seem to be damned good in this area).

Damned good? I wouldn't say so, seeing Atom. There are two main problems with this, though. Problem number one is that Intel is x86; the odds of them leaving it are between 0 and 0.1%. Problem number two is that an Intel CPU at 22nm with Tri-Gate probably couldn't even reach the power efficiency of ARM CPUs. x86 = faster; ARM = more power efficient. ARM can be as fast as x86, and x86 could use as little power as ARM, but seeing as how the mobile market is dominated by ARM, that won't likely change. Furthermore, Intel already said their 22nm Tri-Gate will cost more to make than normal 22nm, and obviously more than 32nm. Take Tri-Gate out and Intel will have a hard time getting to the needed power level; put it back in and you have a more expensive CPU that most phone and tablet manufacturers won't really see the need for. You do have a point on the superior fabs, though.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Wouldn't Intel-built ARM SoCs with a 50% margin on them still be competitive, if not cheaper than the competition? I mean, if they did this and it came down to a price war, Intel could cut prices more easily than the rest due to the fab advantage, right? Same as they can do currently against AMD.

That's ignoring that Intel's chips could be faster as well as use less power than the competition's SoCs (assuming Intel does a good job, which I assume they would, since they seem to be damned good in this area).

The answer is self-evident, is it not? You think Intel forgot to run the numbers or are just avoiding ARM and 50% gross-margins (if they are possible) for the heck of it?

If they could, then they would. It's really not any more complicated than that.

The same goes with AMD. It is no accident that AMD has avoided jumping into the existing, already competitive, ARM landscape.

What does it tell you that, given the choice between competing with Intel under the boot of its effective monopoly versus competing with dozens upon dozens of other industry titans in the ARM landscape, AMD chooses to pursue an all-x86 product lineup?

ARM is the ISA of last resort; it means you've given up considerable product-differentiation leverage and are now attempting to eke out a living by selling an otherwise commoditized product.

Apple sees money in this because they aren't supplying the chips to anyone else; they sell the finished product to the end-user and keep all the profits for themselves.

TI doesn't see money in this; that is why they have been pursuing a strategy of getting out of digital CMOS and becoming ever more an analog company.

Companies like Nokia and Qualcomm are not interested in buying your ARM chip if you are going to insist on selling it to them at prices that enable you to command a 50% gross-margin. It just doesn't work that way. Intel observed this with Itanium too.

Nvidia has an angle with their superior GPU IP, they can bootstrap their GPU to an existing ARM ISA and have a product that can be easily/readily differentiated and that differentiation can be leveraged into a gross-margin premium. Maybe.

If Intel or AMD beats them to the market by virtue of their aggressive x86/Fusion products, then Nvidia may need Denver simply to survive, versus it being the proverbial pot of gold at the end of the rainbow.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
works fine for me, yesterday too

IDC - ta for the info

Ares1214 - I was referring to them making their own ARM SoC on the newest node, not an x86 chip

anyhoo, IDC killed that thought off lol
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Nvidia has an angle with their superior GPU IP, they can bootstrap their GPU to an existing ARM ISA and have a product that can be easily/readily differentiated and that differentiation can be leveraged into a gross-margin premium. Maybe.

Nvidia didn't become the big daddy of the GPU world by commanding high margins on their products; they did it by maintaining an aggressive release cycle that allowed them to hit the entire market in short order and rarely let any competitors maintain a lead over them before their next product was out. Nvidia is used to iterating products much faster than the typical ARM giants, and the willingness to cannibalize their own product is what may make them a big player in the market. Of course, their competition in the ARM world is larger than anything they ever faced in the GPU world.
 

Khato

Golden Member
Jul 15, 2001
1,225
280
136
Well, if they are projected to be that high, for one I don't believe they are; the A15 is supposed to use even less than the A5. Secondly, they couldn't possibly be used in phones if their power was that high, since that alone exceeds what the battery could sustain, and that would be horribly dumb. I'd like to see where you are getting this info. If they wouldn't be used in phones, then they would obviously be faster, but seeing as how there are no numbers for that, I can't say on that one.

Numbers are courtesy of ARM's product page for the A9, under the 'performance' tab - http://www.arm.com/products/processors/cortex-a/cortex-a9.php It's also quite important to realize that going from the power-optimized to the performance-optimized design results in 2.5x the performance for almost 4x the power.

I do find it interesting that the A9 is the only design where they're willing to give actual core power figures. But those numbers match up quite nicely with the simple fact that there's nothing magical about ARM's design, despite the fact that many sure want to believe that such is the case. Once they start offering comparable performance to x86, their power usage goes up accordingly. Heh, and battery life tests of phones using the Tegra 2 actually match up with the 'power optimized' design figures pretty well - specifically, the LG Optimus 2X with its 5.6 Whr battery manages 3.85 hours on the AnandTech 'gaming' battery life test. (It should be noted that the AP20H version of Tegra 2 uses the VFPv3-D16 FPU instead of the SIMD NEON 'FPU', and hence should be a bit lower power than the default A9 ARM hard macros.)
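For what it's worth, the arithmetic behind that battery life figure (a quick Python sketch; the split between SoC and the rest of the phone is not something the test measures):

Code:
# Average platform power implied by the battery life test quoted above.
battery_whr = 5.6       # LG Optimus 2X battery capacity
test_hours = 3.85       # AnandTech 'gaming' battery life result

avg_platform_w = battery_whr / test_hours
print(f"average whole-phone draw: {avg_platform_w:.2f} W")  # ~1.45 W

# That ~1.45 W covers the screen, radios, and SoC together, which is
# consistent with the cores drawing on the order of ARM's ~0.5 W 'power
# optimized' figure rather than the ~1.9 W 'performance optimized' one.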
 

sonoran

Member
May 9, 2002
174
0
0
>>>Once they start offering comparable performance to x86, their power usage goes up accordingly.

Yes, it should be noted that ARM and x86 both have to follow the same laws of physics. You don't get something for nothing.
 

Ares1214

Senior member
Sep 12, 2010
268
0
0
Numbers are courtesy of ARM's product page for the A9, under the 'performance' tab - http://www.arm.com/products/processors/cortex-a/cortex-a9.php It's also quite important to realize that going from the power-optimized to the performance-optimized design results in 2.5x the performance for almost 4x the power.

I do find it interesting that the A9 is the only design where they're willing to give actual core power figures. But those numbers match up quite nicely with the simple fact that there's nothing magical about ARM's design, despite the fact that many sure want to believe that such is the case. Once they start offering comparable performance to x86, their power usage goes up accordingly. Heh, and battery life tests of phones using the Tegra 2 actually match up with the 'power optimized' design figures pretty well - specifically, the LG Optimus 2X with its 5.6 Whr battery manages 3.85 hours on the AnandTech 'gaming' battery life test. (It should be noted that the AP20H version of Tegra 2 uses the VFPv3-D16 FPU instead of the SIMD NEON 'FPU', and hence should be a bit lower power than the default A9 ARM hard macros.)
"delivers its peak performance of 4000 DMIPS while consuming less than 250mW per CPU when selected from typical silicon."

Same page... I said as low as 80 mW, and that would be 1,000x less power than a typical desktop CPU. I then said that on a more typical basis it would probably be 250-500x less power. Your 2 W number was nowhere near.
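As a rough efficiency check on those figures (a sketch; the desktop DMIPS and TDP numbers are ballpark assumptions for comparison, not from ARM's page):

Code:
# Perf/W implied by ARM's quoted A9 figures vs a typical desktop chip.
# ARM numbers are from the quote above; desktop numbers are assumptions.
a9_dmips, a9_watts = 4000.0, 0.25              # "4000 DMIPS ... less than 250mW"
desktop_dmips, desktop_watts = 100_000.0, 95.0 # assumed quad-core desktop

print(f"A9:      {a9_dmips / a9_watts:,.0f} DMIPS/W")            # 16,000 DMIPS/W
print(f"desktop: {desktop_dmips / desktop_watts:,.0f} DMIPS/W")  # ~1,053 DMIPS/W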
 