Who, except Nvidia, is marketing SoCs for one year only? Life of a common SoC is 3 years in the bleeding edge, more in embedded.
Only due to cost.
Servers are changing to meet this new workload by adopting ARM chips, which are better suited for this role than traditional x86 servers.
Everyone? Samsung with Exynos, Apple with Ax, Qualcomm with Snapdragon, Nvidia with Tegra, TI with OMAP, for example. Apple and Qualcomm have their own custom cores, and Nvidia is designing its own as well. Last year's Tegra 3 is a lazy bum by today's standards.
I have a feeling Mr. Feldman would view things a bit differently if his company was kicking Intel's ass in the server market rather than the other way around. Sounds like mostly sour grapes to me.
Qualcomm, at least, keeps them out for ~3 years, sometimes longer. There are new phones coming out, for instance, with 2-year-old chips, and many of those are shrunk refreshes of even older chips (for instance, you might be able to walk into Walmart today and buy a brand-new phone with an MSM7227A or MSM7625A). Note that no one has said anything about release schedule, only about obsolescence/EOL.
None of those chips you mentioned has a 12-month life span on the market, except for Tegra.
A year or less. Just an example:
http://en.wikipedia.org/wiki/Exynos_(system_on_chip)
Exynos 3 Single (Cortex-A8, 45nm): 2010
Exynos 4 Dual (Cortex-A9, 45nm): 2011
Exynos 4 Dual (Cortex-A9, 32nm): 2011
Exynos 4 Quad (Cortex-A9, 32nm): 2012
Exynos 5 Dual (Cortex-A15, 32nm): Q3 2012
Exynos 5 Octa (Cortex-A15, 28nm): Q2 2013
Feel free to take a look at Qualcomm SoC release cadence if Samsung's not enough.
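(If it helps to see that cadence laid out, here's a rough back-of-envelope sketch using the Exynos dates above. The specific release months are my own guesses to the nearest quarter, purely for illustration.)

```python
# Rough cadence check using the Exynos dates listed above. Quarter/year
# entries are approximated to a single month (assumed), so the gaps are
# only illustrative.
from datetime import date

releases = [
    ("Exynos 3 Single", date(2010, 6, 1)),       # 2010 (month assumed)
    ("Exynos 4 Dual 45nm", date(2011, 4, 1)),    # 2011 (month assumed)
    ("Exynos 4 Quad", date(2012, 4, 1)),         # 2012 (month assumed)
    ("Exynos 5 Dual", date(2012, 9, 1)),         # Q3 2012 (month assumed)
    ("Exynos 5 Octa", date(2013, 5, 1)),         # Q2 2013 (month assumed)
]

for (prev_name, prev), (name, cur) in zip(releases, releases[1:]):
    months = (cur.year - prev.year) * 12 + (cur.month - prev.month)
    print(f"{prev_name} -> {name}: ~{months} months")

# Each flagship is superseded within roughly a year, which is the point
# being argued about SoC shelf life at the top end.
```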
He! He!
The problem for AMD is that they don't have low enough power/die size/cost to effectively compete against ARM, and they don't have high enough performance to effectively compete against Intel.
As for supercomputers, they're the Italian sportscars of the computer industry -- they get lots of attention, they give those involved with them prestige, and they have basically nothing to do with the rest of the market.
I assume that for some calculations time becomes more important than cost?
Car analogies are almost as annoying as people quoting walls of text for a single "agreed" or "+1".
This is so very true. I had the "luxury" of having access to a supercomputer once, time allocated to me and a bunch of my fellow lab mates to run computational chemistry experiments.
The allocated time was paid for by a grant, and it was a silly, princely sum of money that we had to pay per core-hour to use the supercomputer (on the order of $50 per core per hour at 100% utilization, all metered and monitored of course).
So our grant afforded some 10 of us the opportunity to each have about 100 hours of "supercomputer" time for our individual projects. Only, what happened in reality is that one of those people loaded up their computation job on a Friday evening and didn't bother checking on it until Monday morning... and in the span of those 60 hours they occupied enough cores at a high enough utilization rate (and still didn't get the job done) that the entire grant was consumed (and then some), so the other 9 of us were out of luck thereafter.
So what did we do? COTS. For a few thousand bucks we built our own little cluster, and it actually outperformed the supercomputer in terms of jobs/day and CPU cycles, because the supercomputer (while fast if you could use all 1000+ cores) was financially prohibitive to gain access to.
So all you could realistically do was gain access to maybe 1-2% of its hardware power... and 1-2% of the power of a supercomputer is easily within the realm of the processing horsepower of a small cluster that might cost $10k (vastly less than the initial grant money we obtained for use with the supercomputer).
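(For a rough sense of the economics: the $50/core-hour rate, ~100 hours per person, the 60-hour weekend job, and the ~$10k cluster are all from the story above; the 17-core job size and the simple metering model are my own assumptions, so treat this as a back-of-envelope sketch only.)

```python
# Back-of-envelope economics of metered supercomputer time vs. a small COTS
# cluster. Figures marked "from the post" come from the story above; the rest
# are illustrative assumptions.

CORE_HOUR_RATE = 50.0        # $/core-hour (from the post)
PEOPLE = 10                  # grant shared by ~10 lab mates (from the post)
HOURS_PER_PERSON = 100       # ~100 hours each, assumed to mean core-hours

grant_core_hours = PEOPLE * HOURS_PER_PERSON        # 1,000 core-hours
grant_dollars = grant_core_hours * CORE_HOUR_RATE   # $50,000

# One job left running Friday evening to Monday morning (~60 wall-clock hours).
# At ~17 cores (assumed), that alone exhausts the whole allocation:
weekend_core_hours = 60 * 17                        # 1,020 core-hours

# A one-time ~$10k cluster purchase breaks even after only a few hundred
# metered core-hours (ignoring per-core performance differences), after which
# every additional cycle is effectively "free".
cluster_cost = 10_000
break_even = cluster_cost / CORE_HOUR_RATE          # 200 core-hours

print(f"Grant: {grant_core_hours} core-hours (${grant_dollars:,.0f})")
print(f"Weekend job: ~{weekend_core_hours} core-hours -> allocation gone")
print(f"${cluster_cost:,} cluster breaks even after ~{break_even:.0f} core-hours")
```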
I walked away from that experience realizing that supercomputers serve a purpose, like jet fighters or nuclear attack submarines, but the purpose they serve is hardly anything like what the industry that created them actually serves (aerospace and marine engineering, where 98% of the TAM is in commercial enterprise, not government).
Very little of what has gone into making supercomputers over the past 20 years has actually "trickled down" to what sits in our desktop and laptop computers today. The software is no better for it, and at best the hardware can be said to be marginally better for it, but only if you happen to be using your Ethernet port instead of wireless networking. The microarchitectural innovations responsible for your single- and multi-threaded performance were absolutely not motivated by anything learned in the development of supercomputers.
They benefit from a falsely perceived value-add in so many ways in that regard.