The obvious answer: it won't last. There's no doubt the situation is the result of a lack of competition, and enabling such features is a good means of keeping competitors at bay. For the time being there's no need for them to do so, and they're profiting from that, but they have those cards in hand to play whenever they need to.
While I understand what you're saying, you really don't want a single core. There are significant performance degradations involved in regressing from dual core to single core, even if you're not running multithreaded software.
The problem with these budget cores is that they just don't bring in enough revenue to justify making a "built from the ground up" Celeron.
The only thing that's really viable would be to fab dies without LLC, but with mask costs as high as they are, that hasn't been an option for a while. That used to be the way things were done, though.
Low level cache, do you mean L1 or L2?
If so, I have to wonder what would happen to performance per watt for mobile. As I understand it, processor cache is fairly low power.
Also, with GAA and other emerging technologies, it's less of a question of "is it necessary," and more of a question of "is it better, both physically and economically?" Or, if it is not necessarily better on the cost front, is the cost penalty outweighed by the performance improvement? Right now, the performance is very sound, but it's just uneconomical at this point in time to implement either GAA or SiGe/Ge/III-V.
Sure, but if there have just been 4+ generations of material advances to keep leakage at bay, one would expect to see new advances at the other end of the efficiency duality, even if strained silicon is still useful in its 6th generation.
Huh, in which year do you live? Tri-gate transistors have been in production since 2011. FinFET isn't going to disappear. There is a paper from Intel (c. 2011): http://download.intel.com/newsroom/...esearch_Enabling_Breakthroughs_Technology.pdf which points towards Omega or Tri-gate xtors.
I'm guessing SiGe for higher mobility (in part to maintain fast gate switching and high current density at such small geometries, though obviously there is a lot more to it). It seems, from my limited knowledge, that Tri-gate would be much easier to manufacture than GAA.
The same presentation you linked actually points towards III-V at 10nm. But in general, it isn't clear; some sources say III-V, others say Ge, others say both. And, AFAIK, Intel is saving III-V materials for single-digit nodes.
This is my thinking - Intel had a lot of trouble pushing FinFET this far, so another geometry may be needed. We don't know the exact cause of the delay, but getting the fins that tall and thin couldn't have been a cakewalk.
Making them tall isn't necessarily the challenge - it is a challenge, but a relatively straightforward one to solve. Rather, the challenge is evenly doping the fin from top to bottom and all the way through, given that the fins are so close to each other and implant angles are ever more restricted to near-vertical.
Not that that problem can't be solved; it just requires a different approach to doping the fins if they want to keep making them taller and closer together in future nodes.
Not much different than the A7, CPU-wise.
Apple certainly is more willing than any other vendor to change ISAs; they have a very loyal userbase that is willing to endure inconvenience. I don't think changing ISAs is something Apple would be afraid to do.
Prior to 2006, all Macs were using PowerPC, after all.
So I think the question is not will Apple change ISAs, but when will they change ISAs?
Refining their ARM processor to be both high frequency and high IPC while maintaining good performance per watt will take time. But I would assume Apple views the work as worth the effort, since they would have full control of all the hardware features on their highest-end products (the Macintosh line).
Okay, now with that out of the way... what does Intel do to counter this eventuality?
Does Intel begin to promote big cores in ways they never did in the past? Maybe instead of four small cores in some SoCs we begin to see a single big core (with Hyper-Threading enabled)? I.e., a smaller version of Core M with all (or most) of the big-core features enabled (AVX feature sets, etc.).
They've switched architectures twice: from Motorola 68k to PowerPC, and then from PowerPC to Intel. They could easily do it again to go from Intel to ARM.
Thank you for your response. No. We know the exact cause of the delay. It has everything to do with multiple patterning.
http://intelstudios.edgesuite.net/im/2013/archive/qa1/archive.html
At around 24:00.
They could do it, but there is very little that is easy about it. Much like Microsoft's struggles with RT, an ARM-based OS <whatever #> would have a serious tail of software compiled for Intel.
Objective-C (and presumably Swift) is very portable. Most of what is on both the iOS and OS X stores would be easy to port, provided it was updated for the (presumably) new interface. Obviously any x86 apps with C code would be more work, but I imagine Apple would start enforcing portability requirements for getting on the App Store as part of the migration. It'd be far cleaner than the mess it would be on Windows.
Cherry Trail is like an "ARM A7" and Broadwell an "A15".
In that respect they are both fine, if Broadwell prices would come down.
Unfortunately, you didn't take me seriously enough when I said that you really don't want a single-core processor. REALLY. So to sum things up concisely, here is what I would have liked to see happen for Intel in 2015:
1. Cherry Trail and Braswell canceled.
2. Celeron (based on Broadwell Core i3 die) canceled.
3. #1 and #2 replaced with a fully enabled Broadwell single core with HT (i.e., 1C/2T) and 16 Gen8 EUs.
Tell me, have you seen even one single-core processor on the market, outside of the embedded space? There's a good reason for this -- even when your applications are all single-threaded, single cores are still tremendously inferior to a dual core.
The A8 does not have roughly the same IPC as Haswell. Let's not try to fool anyone about that part.
Also, Apple would have to mess up the software support for their entire userbase, and depend even more on paying $10,000 for initial wafers and multi-billion-dollar R&D. And Apple knows it can get first dibs with Intel.
But which 28nm ARM CPU was actually competitive against 22nm FF? In terms of both performance and efficiency.
Unless you have actual power consumption numbers, those benchmarks mean nothing. Performance per watt is calculated from two variables, not one. For example, we know that the S800 throttles a lot: in the case of the Nexus 5, its performance is halved within seconds, while the Z3770 doesn't experience any throttling as far as I know.
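To make the two-variable point concrete, here's a toy perf-per-watt calculation in Python. Every number below is hypothetical, invented purely to illustrate how throttling can flip a comparison; none are real measurements of either chip:

```python
def perf_per_watt(score, watts):
    """A benchmark score only becomes meaningful once divided by power draw."""
    return score / watts

# Hypothetical, illustrative numbers -- not real measurements of either SoC.
s800_burst = perf_per_watt(1000, 4.0)      # unthrottled burst state
s800_sustained = perf_per_watt(500, 2.5)   # score halves once throttling kicks in
z3770_steady = perf_per_watt(900, 3.0)     # assumed steady-state, no throttling

# The ranking depends entirely on which state you happen to measure.
print(z3770_steady > s800_burst > s800_sustained)  # True
```

A raw benchmark run only captures one of these states, which is why a score without a matching power figure tells you nothing about efficiency.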
The Nexus 5 is a 5" phone; the Z3770 hasn't been used in anything below 8-10" tablets.
There is absolutely no point in doing cross-platform hardware comparisons to try to determine who's got the better what. I don't think so. On an IPC basis, the improved Cyclone is quite close to Haswell:
http://ark.intel.com/products/75802/Intel-Core-i5-4200Y-Processor-3M-Cache-up-to-1_90-GHz
http://browser.primatelabs.com/geekbench3/575799
http://browser.primatelabs.com/ios-benchmarks
Integer (Geekbench 3):
Core i5-4200Y (1.4 GHz base, 1.9 GHz max) - 1780
A8 (1.4 GHz) - 1609
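Dividing those scores by clock speed gives a rough per-GHz figure (a crude IPC proxy). A quick Python sketch of the arithmetic; note that the i5's turbo means its true per-clock number lies somewhere between two bounds, since Geekbench doesn't report the clock it actually sustained:

```python
# Geekbench 3 integer scores from the links above.
i5_score, a8_score = 1780, 1609

# Per-GHz figures. The i5 turbos from 1.4 up to 1.9 GHz, so its real
# per-clock score falls somewhere between these two bounds.
i5_per_ghz_if_base = i5_score / 1.4    # upper bound: never left base clock
i5_per_ghz_if_turbo = i5_score / 1.9   # lower bound: sustained max turbo
a8_per_ghz = a8_score / 1.4            # the A8 runs a fixed 1.4 GHz

print(round(i5_per_ghz_if_base), round(i5_per_ghz_if_turbo), round(a8_per_ghz))
# -> 1271 937 1149
```

The A8's per-GHz figure lands between the i5's two bounds, which is exactly why "close to Haswell IPC" is defensible or not depending on how much turbo you assume the i5 used.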
The Haswell Core i5 has the benefit of turbo and a FinFET process. I am confident that an Apple A9 on TSMC 16FF+ will be very competitive with Intel's 14nm Core M. Apple just needs to deliver two key features - SMT and turbo. I am sure they are waiting for a 16nm FinFET process to deliver those two features without increasing power draw significantly. :thumbsup: