The iPhone is around two-thirds of Apple's massive revenue. They certainly aren't short of resources, or of motivation for where to spend that revenue. Why is anyone shocked that their chips improve rather quickly?
What's silly is underestimating Apple's boatload of money.
http://intelstudios.edgesuite.net/im/2015/pdf/2015_InvestorMeeting_Bill_Holt_WEB2.pdf
slide 19
30% of the die is SRAM on A9.
9% of the die is SRAM on SKL 2+2.
That is a pretty big difference. Also, the A9 figure doesn't count reg files.
http://browser.primatelabs.com/geekbench3/compare/4093592?baseline=3833358
The iPad Pro looks terrible compared to Skylake! :)
edit: whoops, meant to edit the previous post.
Just making sure we're on the same page: this is about graphics precision, not CPU core precision.
For example....
http://imgtec.com/powervr/graphics/series7xt/
Take a look at the table on that page. Executing the "same" code at lower precision would most likely give you better benchmark numbers as...
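To make the precision point concrete, here's a tiny NumPy sketch (my own illustration, nothing from that page): `np.float16` stands in for a GPU's FP16 path and `np.float32` for full FP32. The "same" arithmetic gives visibly different answers, which is why precision matters when comparing benchmark numbers.

```python
import numpy as np

# Illustrative only: np.float16 stands in for a GPU's FP16 path,
# np.float32 for the FP32 path. Same source math, different results.
third32 = np.float32(1.0) / np.float32(3.0)
third16 = np.float16(1.0) / np.float16(3.0)
print(float(third32), float(third16))  # FP16 keeps only ~3 decimal digits

# Accumulating many small terms makes the gap obvious: once the FP16
# running total is big enough, adding 0.0001 rounds away to nothing.
acc32 = np.float32(0.0)
acc16 = np.float16(0.0)
for _ in range(10000):
    acc32 = np.float32(acc32 + np.float32(0.0001))
    acc16 = np.float16(acc16 + np.float16(0.0001))
print(float(acc32), float(acc16))  # FP32 lands near 1.0; FP16 stalls at 0.25
```

The flip side is that FP16 halves the register and bandwidth cost per operation, which is exactly why running a benchmark at lower precision tends to score higher.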
Being slightly more specific: there aren't Atom cores in KNL; there are Atom cores in KNC. It's not pedantic. They are virtually unrecognizable after the modifications. I highly doubt even what remains of the common logic has the same layout between KNL and Silvermont.
If I actually wanted...
Atom cores are not used in Xeon Phi. They may have started with that design but it's not even close after modifications.
http://www.realworldtech.com/knights-landing-details/
Officially, how does anyone outside of Intel or an NDA know what server Skylake is? Yes, we have AVX-512 leaks aplenty, and since we know consumer Skylake doesn't have AVX-512, that would seem to be a big difference. However, positing the development history of an unreleased core on an unreleased...
What segmentation are we talking?
Are we talking about product line segmentation?
Atom, Core, Xeon
Nothing really overlaps there so I'm assuming you don't mean that.
Are we talking about binning?
6600 vs 6700?
Frankly Intel would be ecstatic to be able to sell every part they make as the...
So are you saying there's competition or not? ARM and AMD are competing, but Intel has a monopoly? And Intel's competitors aren't as good as an Intel that doesn't innovate?
I can't make heads or tails of your reasoning.
For the workloads I run on a phone, performance has been good enough, so 70% on an artificial workload is not really significant to me. Now, if you could double my battery life I would be vastly happier, so this battery test is much more interesting to me. Maxing your CPU will help to isolate...
Why do I care?
Again assuming the data is corroborated:
If you are playing a game, or, as was mentioned earlier, running VLC or doing anything mildly processor-intensive, you will most likely get less battery life simply through the luck of the draw of your chip's manufacturer, and yet you paid the same amount...
It's a simple test for exposing differences between silicon. It may not be what you or Apple are looking for in a test, but it is informative nonetheless.
If we get a larger sample size and the data holds up then sure there is a superior and inferior version. You may not care, but not caring...
And I've handed my iPhone 6s (no idea which chip is in it) to my wife, and she commented on how warm/hot it was, even though I wasn't really doing anything other than browsing the web, recording some videos, and updating some apps. Battery life is merely OK, but it's always so hard to tell with new phones...
You're holding the phone wrong. :colbert:
If Geekbench is a good benchmark, then the difference is significant to everyone. Are you suggesting the workloads are artificial and not applicable?
I'm sure Intel dreams of the mainstream moving to chips which have dozens of companies competing in the same performance levels instead of 1 or 2. :rolleyes:
Thanks for perfectly explaining why comparing them is mostly pointless. You also forgot that different datasets cause cache-residency changes, which makes it even worse.
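To show what I mean by cache residency, here's a rough sketch (the array sizes and the `stream_gbps` helper are my own arbitrary picks, not anything from a real benchmark): the exact same reduction touches the same total number of bytes, but one working set sits in cache while the other streams from DRAM, so the two "scores" can differ by a large factor on the same chip.

```python
import time
import numpy as np

def stream_gbps(arr, reps):
    """Sum `arr` `reps` times; return (achieved GB/s, checksum)."""
    t0 = time.perf_counter()
    total = 0.0
    for _ in range(reps):
        total += float(arr.sum())
    dt = time.perf_counter() - t0
    return arr.nbytes * reps / dt / 1e9, total

# Same total bytes touched, very different working sets:
small = np.ones(32 * 1024, dtype=np.float64)       # 256 KiB: cache-resident
big = np.ones(32 * 1024 * 1024, dtype=np.float64)  # 256 MiB: DRAM-bound

bw_small, _ = stream_gbps(small, 1024)
bw_big, _ = stream_gbps(big, 1)
print(f"cache-resident: {bw_small:.1f} GB/s, DRAM-bound: {bw_big:.1f} GB/s")
```

So if two "comparable" benchmarks use different dataset sizes, they aren't measuring the same thing at all.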
Remember the custom per-benchmark overclocking switches used to show ARM chips in a better light?
The best part is how posting benchmarks vs powerpoint triggers some global warming conspiracy reactions in people.
You forgot the most important part: control which benchmark you use to claim the gen-on-gen performance gain, and don't publicly disclose which benchmark that is. I'm fairly certain I could find a benchmark to tell me whatever I wanted on whatever architecture I wanted.
As someone who went from a Q6600 to a 4770K (both with SSDs and the same video card) in the last year: you apparently have a high tolerance for slowness. I immediately noticed a marked speedup in just about everything, from boot to game load times to transcode times. The Q6600 is not in any way, shape...
I'm not sure what you mean by your comment, as a Core 2 Duo running Win 8.1 or 10 is much nicer than Vista from a performance standpoint. The focus has been on low power rather than performance optimization ever since Apple released the iPhone. :confused:
The thing is, Krzanich is a fab guy more than anything. If he thinks he can meet his manufacturing goals without making them public, then I'm inclined to believe him. I am familiar with what goes into many of these transitions, though I'm sure not to the level you are.