I completely disagree. Soon enough they will all hit the same ILP-extraction dead end that Intel (and soon AMD) hit. It's easier to follow than to trailblaze, so don't expect the meteoric performance jumps we're seeing right now to continue for long. There isn't much more that can be improved in hardware for absolute performance, not without completely blowing up power budgets to the point where just adding more cores is far more efficient.
And even though it looks like Apple and soon Samsung are gaining on Intel and AMD, remember that those two have been stuck on 14nm for years now, while Apple and Samsung are getting the benefit of a node shrink. Once Intel and AMD move to 7nm (well, 10nm for Intel), the bar will be set higher for ARM designs to beat.
What is there to catch up to? This claim that "it will stop at some point" has been repeated over and over, and has been proven wrong again and again. It was first said to me on this forum when the A9 was released. Now we're at the A11, with 60% better single-core performance since then. And they (and Samsung) are achieving this with 4W SoCs! That fact can't be repeated enough. Compare that to Intel's 4.5W Core-M CPUs and there is a clear difference in actual performance between them.
We might as well mention Intel's attempt at entering the smartphone segment with their Atom SoCs, which, despite a ton of resources behind them and plenty of attempts, were still considerably inferior to the Qualcomm, Samsung and Apple alternatives. After a stupendous amount of subsidies, Intel ended up acknowledging defeat and exiting the segment altogether. When a company of Intel's size, with their enormous R&D, walks away from a huge money-making machine like the smartphone sector, it should tell you something… Intel also knew these SoCs would eventually catch up to and threaten the laptop segment, which they currently dominate.
Also, you say they (Apple, Samsung, Qualcomm, etc.) have the benefit of a smaller process node, which I'm not sure is actually the case. One, because Intel's 14nm++ is as good as (or better than) TSMC's and Samsung's 10nm. And two, because past node shrinks never led Intel to build larger cores or increase performance in any meaningful way. SB to IB gave us next to no difference in performance. HW, which was on the same process node as IB, had a "huge" leap of ~10%. HW to BW again had little to no difference, and BW to SL -- again on the same process node -- was around a ~6% jump.
People on this forum have constantly defended Intel, claiming they aren't doing more because more can't be done. But look at the mobile platform. Even AMD are beating Intel's ass with their first mobile Ryzen attempt, with almost equal CPU performance and a whopping ~250% GPU performance advantage. And they're doing that on an inferior 14nm process. Now Apple and Samsung are catching up to them, with similar CPU performance at a third of the power usage. And there's no sign of their yearly performance improvements stopping either. For Apple that's now down to 15-20% improvements -- but that's still as much as Intel did in 5 redacted years with 3 different architectures.
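(A rough back-of-the-envelope check, using the per-generation figures I quoted above and assuming they compound: ~0% for SB→IB, ~10% for IB→HW, ~0% for HW→BW and ~6% for BW→SL works out to 1.10 × 1.06 ≈ 1.17, i.e. roughly 17% in total, which sits right in the middle of the 15-20% Apple now delivers in a single year.)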
28nm to 14nm (or rather "fake 14nm", since TSMC's and GloFo's definition seems to differ from Intel's) gave us a performance increase of around 70-80% in, for example, the GPU segment, where AMD and NVIDIA took advantage of the smaller nodes to increase transistor counts. What did 32nm to 14nm give us in the desktop CPU segment? An IPC increase of 20%... I've never understood that. Then again, we're talking about the same high-tech company whose IHS soldering is so bad that a simple delid and removal of the residual glue improves temperatures by 15C. A company whose thermal paste is about as effective as toothpaste in terms of cooling ability. A company whose "let's do redacted-all" attitude has now let ARM catch up to them on mobile -- something that has been happening for a long while. Hell, even Qualcomm, with their weak SoCs, are now striking deals with Microsoft, Lenovo, HP and others to use Snapdragon chips in their future products. Intel are even being pressured in the server segment.
It kind of makes you wonder where their R&D budget has been going.
While I disagree about the GPU point, I do agree that we should be more careful about how we interpret comments. Otherwise it just degrades the quality of the discussion. +1
I agree as well, which is why I won't allow Thala to lie his way out of this situation by claiming I'm putting words in his mouth.
In Thala's comment about valuing GPU perf more than singlethreaded perf, I sarcastically remarked "Yes, 10% higher GPU performance is clearly better than 70% higher singlethreaded performance..."
To which Thala responded (to make it easier, I have shortened part of the post; you can find it in its entirety here).
Might depend on the particular use-case...when firing up a game CPU performance will gain you nothing while you see immediate benefits of higher GPU performance.
To be clear i am talking about user-experience and do not just go by numbers, which would indicate that 70%>10%.
He is very well aware we're talking about a 70% singlethreaded advantage and a 10% GPU performance disadvantage for the E9810 vs the SD845. And yet he goes on about how he values the GPU performance increase more, again and again and again.
You've been told several times already, but there is NO profanity in the tech areas.
AT Mod Usandthem