AMD's Feldman says ARM processors have some advantages over x86 chips


Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
Servers are changing to meet this new workload by adopting ARM chips, which are better suited for this role than traditional x86 servers.

I have a feeling Mr. Feldman would view things a bit differently if his company was kicking Intel's ass in the server market rather than the other way around. Sounds like mostly sour grapes to me.
 

ams23

Senior member
Feb 18, 2013
907
0
0
The problem for AMD is that they don't have low enough power/die size/cost to compete effectively against ARM, and they don't have high enough performance to compete effectively against Intel. AMD did not realize until far too late that challenging Intel in the x86 market would always be a losing proposition in the long term. That is why they are jumping ship to ARM now.
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
Who, except Nvidia, is marketing SoCs for one year only? The life of a typical SoC is three years at the bleeding edge, longer in embedded.

Everyone? Samsung with Exynos, Apple with the AX series, Qualcomm with Snapdragon, Nvidia with Tegra, TI with OMAP, for example. Apple and Qualcomm have their own custom cores, and Nvidia is designing its own as well. Last year's Tegra 3 is a lazy bum by today's standards.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Everyone? Samsung with Exynos, Apple with the AX series, Qualcomm with Snapdragon, Nvidia with Tegra, TI with OMAP, for example. Apple and Qualcomm have their own custom cores, and Nvidia is designing its own as well. Last year's Tegra 3 is a lazy bum by today's standards.

None of those chips you mentioned has a 12-month life span on the market, except for Tegra.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
I have a feeling Mr. Feldman would view things a bit differently if his company was kicking Intel's ass in the server market rather than the other way around. Sounds like mostly sour grapes to me.

Oh, what did you expect? This is the same management that declared Moore's Law dead because they could not afford it anymore.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
I have a feeling Mr. Feldman would view things a bit differently if his company was kicking Intel's ass in the server market rather than the other way around. Sounds like mostly sour grapes to me.

Me too. Marketing occupies a unique position though within a company in that they know exactly how difficult it is to sell their company's products in light of the competition. This gives them the insight and motivation to judge themselves (as a company) rather harshly but truthfully.

What engineer on the big-core x86 team is going to view their current project as a waste of time? There is an inherent personal conflict of interest there when it comes to an individual justifying to themselves, their spouses, their children and extended family why it is that they spend so much time at the office "working" instead of coming home earlier in the evening or taking more time off for family vacation.

If your project really were second-best and overrated, then you'd be hard pressed to justify your personal choices in terms of time investment and commitment to the work team.

But marketing is not so clouded; they can take it or leave it when it comes to calling it like it is and demonizing the engineering departments, so long as it isn't going to come back to haunt them in terms of job loss.

Why would an AMD marketing guy demonize the products of his own employer's engineering teams? Probably because he realizes he needs to light a fire under their asses, and the management that is resourcing them, if he is going to have anything worth trying to sell in a couple years down the road. (remember he sees things now, internally, that we won't hear about publicly for years yet)

He may well be trying to right a listing ship that few realize is listing at the moment.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Everyone? Samsung with Exynos, Apple with the AX series, Qualcomm with Snapdragon, Nvidia with Tegra, TI with OMAP, for example. Apple and Qualcomm have their own custom cores, and Nvidia is designing its own as well. Last year's Tegra 3 is a lazy bum by today's standards.
Qualcomm, at least, keeps them on the market for ~3 years, sometimes longer. There are new phones coming out, for instance, with 2-year-old chips, and many of those are shrunk refreshes of even older chips (for instance, you might be able to walk into Walmart today and buy a brand-new phone with an MSM7227A or MSM7625A). Note that no one has said anything about release schedules, only about obsolescence/EOL.
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
None of those chips you mentioned has a 12-month life span on the market, except for Tegra.

A year or less. Just an example:

http://en.wikipedia.org/wiki/Exynos_(system_on_chip)

Exynos 3 Single (Cortex-A8, 45nm): 2010
Exynos 4 Dual (Cortex-A9, 45nm): 2011
Exynos 4 Dual (Cortex-A9, 32nm): 2011
Exynos 4 Quad (Cortex-A9, 32nm): 2012
Exynos 5 Dual (Cortex-A15, 32nm): Q3 2012
Exynos 5 Octa (Cortex-A15, 28nm): Q2 2013

Feel free to take a look at Qualcomm SoC release cadence if Samsung's not enough.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
A year or less. Just an example:

http://en.wikipedia.org/wiki/Exynos_(system_on_chip)

Exynos 3 Single (Cortex-A8, 45nm): 2010
Exynos 4 Dual (Cortex-A9, 45nm): 2011
Exynos 4 Dual (Cortex-A9, 32nm): 2011
Exynos 4 Quad (Cortex-A9, 32nm): 2012
Exynos 5 Dual (Cortex-A15, 32nm): Q3 2012
Exynos 5 Octa (Cortex-A15, 28nm): Q2 2013

Feel free to take a look at Qualcomm SoC release cadence if Samsung's not enough.

I didn't know that Samsung was on the same cadence as Nvidia. I would not count the die shrinks as new SoCs, though.
 

Third_Eye

Member
Jan 25, 2013
37
0
0
I have a feeling Mr. Feldman would view things a bit differently if his company was kicking Intel's ass in the server market rather than the other way around. Sounds like mostly sour grapes to me.
Heh heh!
More like Andrew Feldman, who IIRC came to AMD through the SeaMicro acquisition, so his fabric interconnect is useful for the dense-server world that ARM is targeting. No wonder he is playing that up.

Also, one thing companies do post-acquisition is try to prove the "inflated value" of the merged/acquired division. That is standard in corporate America, especially noticeable at the last two companies I worked for. Especially at my current place of work, where our CEO plays these word games to justify things to our parent's shareholders....
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
The ever-tightening cadence is one reason why TI, Freescale, and ST-E bowed out of smartphones. Even the shrinks often contain other changes around the SoC. Intel is also releasing new smartphone SoCs at least once a year now, not even counting separate tablet SoCs. Chinese and Taiwanese SoC makers are pumping out new iterations at least as quickly, probably even more so.

nVidia is actually well behind Samsung, Apple, and Qualcomm when it comes to introducing new mobile SoCs - the gap between Tegra 3 and Tegra 4 showing up in devices is over 1.5 years. nVidia said they were adopting a tick-tock strategy and were supposed to release a 28nm Tegra 3, but that never happened.

A new mobile product using a 3-year-old or even 2-year-old SoC is practically unheard of; sometimes really cheap stuff like the Archos Carbon 35 ends up with old SoCs that were presumably cleared out at very low prices, but even that's uncommon (and said old SoC is only from Q4 2011 anyway).
 

ams23

Senior member
Feb 18, 2013
907
0
0
Most of the players in the ultra-mobile space are adopting a one-year design cadence, at least for the time being. In NVIDIA's case, T4 will come to market a few months later than expected, but NVIDIA plans to release three chips (T4, T5, T6) within the next two years. T2, T3, and T4 were designed serially, one after another; that will not be the case with future generations.

In general, ultra-mobile performance has been increasing at a rate faster than the one predicted by Moore's Law, but at some point over the next few years the percentage gains will start to taper off year after year.
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
The problem for AMD is that they don't have low enough power/die size/cost to effectively compete against ARM, and they don't have high enough performance to effectively compete against Intel.

It may be because I've been spending too much time reading up on the history of computers, but this immediately struck me as similar to what happened to the minicomputer industry -- they got squeezed out by cheaper, smaller, more powerful mainframes on the one side, and microcomputers on the other.
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0

Company that makes ARM chips and not x86 chips says that ARM is the future and not x86. Quick, someone call Reuters!

There's enough sour grapes around here to start a winery.

Sure, ARM is going to have a future, but the death of x86 has been predicted before. Many times.

As for supercomputers, they're the Italian sportscars of the computer industry -- they get lots of attention, they give those involved with them prestige, and they have basically nothing to do with the rest of the market.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
As for supercomputers, they're the Italian sportscars of the computer industry -- they get lots of attention, they give those involved with them prestige, and they have basically nothing to do with the rest of the market.

lmao, they also aren't powered by one-cylinder engines, which is what an ARM CPU would equate to in the automotive world.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
As for supercomputers, they're the Italian sportscars of the computer industry -- they get lots of attention, they give those involved with them prestige, and they have basically nothing to do with the rest of the market.

This is so very true. I had the "luxury" of having access to a supercomputer once, time allocated to me and a bunch of my fellow lab mates to run computational chemistry experiments.

The allocated time was paid for by a grant, and it was a princely sum of money that we had to pay (per core-hour) to use the supercomputer (on the order of $50 per core-hour, based on a 100% utilization rate, all metered and monitored of course).

So our grant afforded some 10 of us the opportunity to each have about 100 hours of "supercomputer" time for our individual projects. Only what happened in reality is that one of the people decided to load up their computation job on a Friday evening and didn't bother checking on it until Monday morning...and in the span of those 60 hours they managed to occupy enough cores, at a high enough utilization rate (and still not get the job done), that the entire grant was consumed (and then some), so the other 9 of us were out of luck thereafter.

So what did we do? COTS. For a few thousand bucks we built ourselves our own little cluster, and it actually outperformed the supercomputer in terms of jobs/day and CPU cycles, because the supercomputer (while fast if you could use all 1000+ cores) was financially prohibitive to gain access to.

So all you could do was gain access to maybe 1-2% of its hardware power...and 1-2% of the power of a supercomputer is easily within the realm of the processing horsepower of a small cluster that might cost $10k (vastly less than the initial grant money we obtained for use with the supercomputer).
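
(A quick back-of-the-envelope sketch in Python of that weekend blowout. The $50/core-hour rate and the 10 people x 100 hours of time are from the story above; the 200-core job size is a made-up illustration, and the budget assumes those 100 hours each were core-hours.)

Code:
rate_usd_per_core_hour = 50        # metered rate from the grant
budget_core_hours = 10 * 100       # assumption: 100 hours each were core-hours

wall_clock_hours = 60              # Friday evening to Monday morning
cores_occupied = 200               # hypothetical size of the greedy job

consumed = wall_clock_hours * cores_occupied   # 12,000 core-hours
cost = consumed * rate_usd_per_core_hour       # $600,000 at the metered rate
overrun = consumed / budget_core_hours         # 12x the entire group grant

print(f"{consumed:,} core-hours, ${cost:,}, {overrun:.0f}x the budget")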

I walked away from that experience realizing that supercomputers serve a purpose, like jet fighters or nuclear attack submarines, but the purpose they serve is hardly anything like what the industries that created them actually serve (aerospace and marine engineering, where 98% of the TAM is commercial enterprise, not government).

Very little of what has gone into making supercomputers over the past 20 years has actually "trickled down" to what sits in our desktop and laptop computers today. The software is no better for it, and at best the hardware can be said to be marginally better for it, but only if you happen to be using your Ethernet port instead of wireless networking. The microarchitectural innovations responsible for your single- and multi-threaded performance were absolutely not motivated by anything learned in the development of supercomputers.

They benefit from a falsely perceived value-add in so many ways in that regard.
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
Car analogies are almost as annoying as people quoting walls of text for a single "agreed" or "+1".
 

dastral

Member
May 22, 2012
67
0
0
was financially prohibitive to gain access to
I assume that for some calculations time becomes more important than cost? I mean, you can't wait 50 years for a single i7 to finish climate/nuke simulations. So while the supercomputer costs 1000 times more, getting the result in 1 year instead of 50 still remains very desirable, even if the overall cost-efficiency goes down the drain and ends up twenty times worse....
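
(A minimal sketch of that tradeoff in Python, using only the hypothetical numbers from this post: 1000x cost, 50x speedup.)

Code:
cost_multiplier = 1000   # supercomputer costs 1000x the single machine
speedup = 50             # result in 1 year instead of 50

# Cost per unit of work rises by cost/speedup...
efficiency_penalty = cost_multiplier / speedup   # 20x worse cost-efficiency
# ...but the calendar win can still dominate for time-critical simulations.
years_saved = 50 - 1

print(f"{efficiency_penalty:.0f}x worse cost-efficiency, {years_saved} years saved")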
 

Maximilian

Lifer
Feb 8, 2004
12,603
9
81
This is so very true. I had the "luxury" of having access to a supercomputer once, time allocated to me and a bunch of my fellow lab mates to run computational chemistry experiments. [snip]

+1
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
I've noticed that nearly 100% of the time, the person complaining about someone's analogy is the same person who is unable to refute said analogy. COTS is the only efficient way to use computers if you need to do anything computationally intensive but don't have Sandia Labs' yearly budget.
 