Moo.
Unfortunately, I'll likely be livestock too when Has-E rolls around.
Moo.
Even in the quote you selected I stated that it hurts the foundries, not the SoC designers. You're quite correct that the development costs for the SoC designers aren't all that bad, even for Qualcomm and Apple with their custom cores. (It's actually similar for Intel as well.) But it costs the foundries just as much to develop new processes and build new fabs as it does Intel. If Intel starts taking substantial market share in the mobile realm it's going to hit the foundries that make those chips hard... especially because if they slip further behind than they already are, Intel's market share will only grow further.
Son, this ain't Slashdot. No need for a car analogy
I think the fab situation gets particularly complicated because, as I understand it, some things are best made on the latest and greatest (smallest, e.g. 14 nm) feature size, such as the fastest/most powerful CPUs. But in its early years of availability, that node is also the most expensive.
Other things DON'T really have to be made in the most expensive, smallest, latest fab plants, which allows those chips to be made in huge quantities and sold for relatively small amounts of money.
(Going beyond my understanding of fab economics) I have read that, confusingly to me at least, the latest and smallest feature-size fab plants can also produce the cheapest chips, because the tiny die size allows many more of the same ICs to be produced per (same-sized) wafer.
Where I'm confused is that some articles claim the smallest-feature-size plants produce the cheapest chips, and yet the older (larger feature size) plants ALSO make the cheapest chips, because the technology is OLD, which makes the plant (fab) much cheaper to buy.
In other words, if old AND new chip plants BOTH produce the cheapest chips, then it cancels out and they BOTH produce chips at the same price.
Maybe the complication is that new chip plants need to sell in huge quantities to make the cheapest chips and recoup the huge investment, whereas OLD chip plants DON'T necessarily need huge volumes to turn a profit, as they cost much less in the first place.
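That trade-off can be put into numbers. Below is a toy sketch, with entirely invented wafer prices and die sizes, of why articles can claim either kind of node produces the "cheapest" chips:

```python
import math

# Toy model of per-die wafer economics on an old vs. a new process node.
# All numbers are illustrative assumptions, not real foundry prices.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough dies-per-wafer estimate (ignores edge loss and defects)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

# Hypothetical scenario: a node shrink halves the die area,
# but the newer node's wafers cost far more to buy.
old_node = {"wafer_cost": 3000.0, "die_area": 100.0}  # mature, cheap wafers
new_node = {"wafer_cost": 7000.0, "die_area": 50.0}   # leading edge, pricey wafers

for name, node in (("old", old_node), ("new", new_node)):
    n = dies_per_wafer(300, node["die_area"])
    print(f"{name} node: {n} dies/wafer, ${node['wafer_cost'] / n:.2f} per die")
```

With these made-up numbers the mature node actually wins per die; a bigger shrink or a drop in leading-edge wafer prices flips the result, which is exactly why both claims show up in print.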
In the "old" computer days we had Mainframe computers, and they were sort of killed off by Mini-computers. Then LARGE-Microprocessors killed off mini-computers.
So I assume IDC means that (Intel) Microprocessors (Large, high power consuming, meant for desktops) are killed off in coming years by (Arm) SMALL-Microprocessors (small, low power consuming, meant for hand held devices).
Arm processors come in at prices around a few dollars.
Higher end haswell processors come in at prices around a few hundred dollars.
Usually when there are such big price differentials, history suggests that the thing costing 100 times more disappears.
E.g. in the 1960s, desktop calculators cost something like $8,000.
Within years, hand-held calculators came in at $1.
How many shops stock $8,000 "desktop" calculators now ?
There is a very good reason why you've read about both sides of that situation and have grappled with which side is correct... the reason being that either side can be true, depending on certain conditions.
As is true with everything in life, you have fixed costs and per-part costs. When you amortize the fixed costs over the volume produced (adding them to the per-part costs), you get a net cost per part that traces a curve as a function of volume.
For an IC the fixed costs include amortization of the node's development expense as well as the IC's development expense. The variable cost is the production expense associated with producing the wafer itself (which will not include the amortized expense of developing the node if you are an IDM, but will include it if you are fabless and buying wafers from a foundry).
The IC development cost doesn't go away with time, but the node development cost will. And the price per wafer is always higher on newer nodes than older nodes. But contrary to prices for newer nodes, which tend to go down over time (for a while, anyway), prices for older nodes actually go up over time to account for inflation and price increases in labor, electricity, water, and other consumables.
This all conspires to create a nicely shaped price curve which depends entirely on the complexity of the IC (impacts design costs and ultimately the die-size), the intended performance capability of the IC (clockspeeds, power consumption, i.e. general marketability), and the volume of parts one is intending to sell (what is the TAM?).
And of course the curve is not static, nor is one going to produce and sell all the intended parts over the course of 30 days. So the price curve itself has a third dimension (time) that captures expected shifts in both demand (which impacts the ASP as a function of time) and variable production costs (shifts in currency exchange rates, shipping expenses, production costs from inflation, etc.).
The goal, maximizing the area under that curve with respect to gross margins and total profits, requires some hedging in advance of designing the IC itself.
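The amortization described above can be sketched in a few lines; every dollar figure here is an invented placeholder, not a real design or node cost:

```python
# Sketch of the cost-per-part curve described above: fixed development
# costs amortized over volume, plus a constant variable cost per part.
# All dollar figures are invented for illustration only.

def cost_per_part(volume: int,
                  ic_dev_cost: float = 50e6,       # assumed IC design cost
                  node_amortization: float = 20e6, # assumed share of node cost
                  variable_cost: float = 12.0) -> float:
    """Net cost per part at a given production volume."""
    fixed = ic_dev_cost + node_amortization
    return fixed / volume + variable_cost

for volume in (1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} units -> ${cost_per_part(volume):.2f} per part")
```

At low volume the fixed costs dominate; at high volume the curve flattens toward the variable cost, which is the shape the paragraphs above describe.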
ARM prices also tend to go up fast with performance. The S4, for example, is estimated to have a $30 CPU cost. And we are not talking about list price here.
This is a pretty large oversimplification, even to the extent that it applies to this situation at all. Mainframes aren't dead even today.
When two products or technologies clash in the marketplace, one of two things happens: they coexist, or one drives the other out. Which occurs depends on whether one is nearly completely better than the other, or whether each is better than the other in some way.
A good current example is SSDs and HDDs. An SSD is smaller, lighter, faster, and (arguably) more reliable, but an HDD is significantly cheaper. And so they each have a market niche. If and when the price of SSDs drops below that of HDDs, or even close, HDDs will likely be driven out of the market.
Same thing with CPUs right now: ARM chips are not strictly better than x86 CPUs; they are only better in certain circumstances. Not only do I not see it as obvious that ARM will ever be strictly better than x86, I see a good chance the opposite may happen. x86 has been left for dead too many times in the past, and has proven remarkably difficult to kill off, something even Intel knows from repeated direct experience.
ARM processors that cost a few dollars can't do what higher-end x86 processors can. If they could, Intel wouldn't be able to sell the latter for hundreds of dollars. This isn't complicated.
Again, this is not a valid analogy, because you are talking about equivalent items where one is strictly better than the other. That's not the case with ARM versus x86.
ARM prices also tend to go up fast with performance. The S4, for example, is estimated to have a $30 CPU cost. And we are not talking about list price here.
Yep, ARM SOC costs are commensurate based on performance, and there are a wide variety of products available. Obviously you can buy extremely low performance yet high battery life ARM SOCs which are dirt cheap, but there are products at the other end of the spectrum as well.
That's exactly what has been bugging me as I worry about possible future CPUs.
Even if Intel is partially matched or even beaten (probably unlikely) by other CPU manufacturers, the competing CPUs could well cost broadly equivalent amounts of money. Their main advantage to consumers would then be increased competition against Intel, forcing Intel to charge reasonable prices for its processors.
NOT mega-cheap, very powerful ARM processors (bah bah, double bah, I want my 1,000,000,000,000,000-core, low-cost personal cluster, and change from the $1).
If an ARM CPU performed like a 4770, it would also cost something like a 4770. It's an illusion to think we suddenly get cheap CPUs. R&D and other expenses still need to be paid.
If an ARM CPU performed like a 4770, it would also cost something like a 4770. It's an illusion to think we suddenly get cheap CPUs. R&D and other expenses still need to be paid.
Sadly, I'm sure you are right. I am still secretly hoping that potentially bigger "Arm" sales, may lower the prices, as the fixed R&D would be divided into a larger number of sold units.
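For what it's worth, that hope is real but bounded: amortized R&D shrinks with volume, while per-unit production cost does not. A toy calculation with invented figures:

```python
# Toy numbers only: the fixed R&D bill and per-unit production cost
# below are made-up placeholders, not real figures for any CPU.
rd_cost = 200e6   # assumed fixed R&D for a high-end core
unit_cost = 25.0  # assumed per-unit production cost: the floor volume can't remove

def amortized_unit_cost(units: int) -> float:
    """Cost per unit: amortized R&D plus the irreducible production cost."""
    return rd_cost / units + unit_cost

for units in (5_000_000, 50_000_000, 500_000_000):
    print(f"{units:>11,} units -> ${amortized_unit_cost(units):.2f} per unit")
```

Even at 100x the volume the cost per unit only approaches the production floor, so a big core never becomes "mega cheap" on volume alone.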
Any custom cores cost extra, a lot extra.
And the default ARM core is pathetic. You get exactly what you pay for.
If an ARM CPU performed like a 4770, it would also cost something like a 4770. It's an illusion to think we suddenly get cheap CPUs. R&D and other expenses still need to be paid.
I agree with Shintai here; costs associated with the complexity of an IC do not scale linearly.
They don't scale linearly with the design team, nor the layout team, nor the validation team, nor the yield team, nor the packaging team...
To make little things become bigger things, you pay an outsized expense for that privilege.
Look at nature. Look at ants: awesome performance/weight ratios. Scale that ant up to human size on paper and you find yourself saying silly things like "an ant the size of a human could lift a school bus over its head and walk at the equivalent of 30 kph with it!"... but nature shows you can't scale an ant to human size (its exoskeleton cannot scale like that), so the hypothetical scenario is completely absurd and irrelevant at the same time.
An ARM processor vs a big-core processor like the 4770 is like that IMO.
(Also, not a grammatical nitpick, just an FYI: it is "paid", not "payed". English sucks as a rules-based language; it happens.)
Moo.
I'm selling cowbells engraved with Intel logos. OK, not really, but I should.
I will stop it at once, as I don't want to 'brake' forum rules, exceed the analogy speed limits, or get honked at and have my forum driving licence confiscated.
Anyway, sorry, I know that analogies do not always go down well.
Heh, no worries. Just couldn't resist poking fun at the car analogy, since on Slashdot, the usual stuff is :
1) Car analogy
2) Imagine a Beowulf cluster of those
3) Can it run Quake
4) I for one welcome our *** overlords
5) This will be the year of the Linux desktop