Question Geekbench 6 released and calibrated against Core i7-12700


Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
The M2 Max (posted by Nothingness above) and the M4 lose by a lot to the 12900K. The M3 Max barely scrapes out a win, which is embarrassing considering it is what, ~19 times bigger in transistor count? 19 freaking times. Let that sink in... Of course any modern high-end chip from AMD or Intel walks all over it, but sure, let's compare it to the 12900K, no problem.

The year old M2 Ultra beats Intel's latest and best dual socket server CPU in MT and ST, despite the Intel CPUs having 96 more cores, and using way more power. That's an extremely disappointing showing from Intel's best.

 
Last edited:

Bencher

Member
Apr 21, 2022
54
10
51
The year old M2 Ultra beats Intel's latest and best dual socket server CPU in MT and ST, despite the Intel CPUs having 96 more cores, and using way more power. That's an extremely disappointing showing from Intel's best.

Naaah, a chip 1/10th the size of the M2 Ultra embarrasses it. Please, let's stop. It is what it is, man.

 

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
Naaah, a chip 1/10th the size of the M2 Ultra embarrasses it. Please, let's stop. It is what it is, man.


So why can't Intel compete at the high end then? Pretty disappointing. They have to use so many more cores and way more silicon area just to almost catch up. I guess Intel just sucks at making high core count CPUs.
 

Bencher

Member
Apr 21, 2022
54
10
51
So why can't Intel compete at the high end then? Pretty disappointing. They have to use so many more cores and way more silicon area just to almost catch up. I guess Intel just sucks at making high core count CPUs.
What do you mean they can't compete in the high end? Their desktop part already decimates the M2 Ultra you just linked, even though it has 1/20th the transistor count.

Let's wait for a 300B-transistor chip, and then Apple might be onto something.
 

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
What do you mean they can't compete in the high end? Their desktop part already decimates the M2 Ultra you just linked, even though it has 1/20th the transistor count.

Let's wait for a 300B-transistor chip, and then Apple might be onto something.

I just showed you their latest and best high core count CPU that even when running dual socket with 120 cores and over double the silicon area can’t beat Apple’s 24 core CPU from a year ago. It is what it is.
 

Bencher

Member
Apr 21, 2022
54
10
51
I just showed you their latest and best high core count CPU that even when running dual socket with 120 cores and over double the silicon area can’t beat Apple’s 24 core CPU from a year ago. It is what it is.
Obviously their latest and greatest is the CPU that scores the highest, which is the 14900KS. Why compare against an Intel CPU that scores lower than that? You might as well compare to a Pentium 4 then; sure, an M2 Ultra decisively beats a Pentium 4. But it loses to a 14900KS, which has 1/20th the transistor count. Kinda pathetic, but keep up the defense force.

I don't understand why you keep going at it. It's obvious that Apple isn't putting those transistors to good use, since their chips are highly inefficient, requiring 20x more transistors than competing chips, and even then they get humiliated.
 

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
Obviously their latest and greatest is the CPU that scores the highest, which is the 14900KS. Why compare against an Intel CPU that scores lower than that? You might as well compare to a Pentium 4 then; sure, an M2 Ultra decisively beats a Pentium 4. But it loses to a 14900KS, which has 1/20th the transistor count. Kinda pathetic, but keep up the defense force.

So Intel should just quit server CPUs I guess, just sell desktop chips only. Why even put out server CPUs? Not very smart. It’s amazing they sell any when everyone could just buy a 14900k. There must be some reason this isn’t what happens in the real world, right?

Maybe it has something to do with its bottom-of-the-industry efficiency, just a guess.

 

Bencher

Member
Apr 21, 2022
54
10
51
So Intel should just quit server CPUs I guess, just sell desktop chips only. Why even put out server CPUs? Not very smart. It’s amazing they sell any when everyone could just buy a 14900k. There must be some reason this isn’t what happens in the real world, right?

Maybe it has something to do with its bottom-of-the-industry efficiency, just a guess.
Sure, Intel should quit servers and Apple should quit the CPU space. Win for everybody, right?
 

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
Sure, Intel should quit servers and Apple should quit the CPU space. Win for everybody, right?

Apple is vertically integrated; they don't need to quit making CPUs even if they fall behind on performance, because it's about the whole package. Additionally, the vast majority of their CPUs are sold in passively cooled devices, where they dominate in performance and efficiency. They're fine.

It's interesting that you think Intel's server CPUs are worthless, though; I wouldn't have gone that far, but everyone can have their own opinion.
 

Bencher

Member
Apr 21, 2022
54
10
51
Apple is vertically integrated; they don't need to quit making CPUs even if they fall behind on performance, because it's about the whole package. Additionally, the vast majority of their CPUs are sold in passively cooled devices, where they dominate in performance and efficiency. They're fine.

It's interesting that you think Intel's server CPUs are worthless, though; I wouldn't have gone that far, but everyone can have their own opinion.
Honestly, I don't really care what Intel does with their servers. They don't interest me. From the little I do know, they dominate the market with the majority of sales, but honestly, who cares.

Regarding Apple's domination, in all seriousness, what does "they dominate in performance and efficiency" even mean? Compared to what? Anything that uses that many transistors - and is therefore as expensive - probably beats the crap out of the M2 Ultra you mentioned before. For some reason you (and reviewers) are comparing the M2 Ultra to desktop parts like the 14900K or the 7950X, which is freaking insane to me.

To put things into context, the Mac Studio with the M2 Ultra can set you back anywhere from $4K to $10K depending on the configuration. A prebuilt PC with a 14900 will set you back $1K. Maybe $1.5K if you put ungodly amounts of RAM in there. So yes, obviously the freaking $10K Mac should be wiping the floor with the $1.5K PC. Only problem is, it doesn't. It's more efficient, sure, but it ain't faster, and the reason it's more efficient is that it's freaking HUGE. It's for the same reason the Xeon Platinum you linked before is more efficient than the 14900K. I mean... duh?

When you actually compare the M2 Ultra with an equivalent workstation PC, I'm sitting here comfortably expecting it to get blasted into space. $10K basically buys a 64-core Epyc chip, 3x 4080 Supers, and half a TB of RAM. In every review I've seen, the M2 Ultra is trading blows with the 14900 in performance; it obviously doesn't hold a candle to a 64-core Epyc and an army of 4080s. So what domination are you talking about, really?

This is from LTT; it looks sad, but sure, put a positive spin on it. CPU- or GPU-based, the Ultras are getting beaten to a pulp by desktop parts people use to play LoL and Call of Duty.




 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
Honestly, I don't really care what Intel does with their servers. They don't interest me. From the little I do know, they dominate the market with the majority of sales, but honestly, who cares.

Regarding Apple's domination, in all seriousness, what does "they dominate in performance and efficiency" even mean? Compared to what? Anything that uses that many transistors - and is therefore as expensive - probably beats the crap out of the M2 Ultra you mentioned before. For some reason you (and reviewers) are comparing the M2 Ultra to desktop parts like the 14900K or the 7950X, which is freaking insane to me.

To put things into context, the Mac Studio with the M2 Ultra can set you back anywhere from $4K to $10K depending on the configuration. A prebuilt PC with a 14900 will set you back $1K. Maybe $1.5K if you put ungodly amounts of RAM in there. So yes, obviously the freaking $10K Mac should be wiping the floor with the $1.5K PC. Only problem is, it doesn't. It's more efficient, sure, but it ain't faster, and the reason it's more efficient is that it's freaking HUGE. It's for the same reason the Xeon Platinum you linked before is more efficient than the 14900K. I mean... duh?

When you actually compare the M2 Ultra with an equivalent workstation PC, I'm sitting here comfortably expecting it to get blasted into space. $10K basically buys a 64-core Epyc chip, 3x 4080 Supers, and half a TB of RAM. In every review I've seen, the M2 Ultra is trading blows with the 14900 in performance; it obviously doesn't hold a candle to a 64-core Epyc and an army of 4080s. So what domination are you talking about, really?

This is from LTT; it looks sad, but sure, put a positive spin on it

[Attachments: LTT benchmark charts]

I said very specifically where they are dominating, so maybe re-read my post?

As far as the rest, it's almost like context matters. Comparing a desktop chip to a workstation chip doesn't make much sense, kind of like how comparing a top-of-the-line desktop chip to a base chip meant for tablets and passive devices doesn't make much sense either. Yet you seem to be the only one insisting on making such comparisons, but only when it favors one side.

As far as the LTT comparison, it doesn't have a single Intel APU without a discrete GPU up against the Apple chips, so it's another pointless comparison to make. I mean, the RTX 4090 is over 600 mm² and can draw over 400 W on its own. Why not compare MTL against the base Apple chips in Blender? That's a much more appropriate comparison. MTL actually uses significantly more silicon area than the Apple chips, but it's the closest comparison.

In terms of cost, congratulations, you just told us something everyone has known for decades. Apple's pricing sucks for the performance you get. Again, if you buy Apple, you are buying into a whole ecosystem, the price of each individual component is only a very small part of the price they charge. If you don't like it or don't think it's worth the price, don't buy it. I've never owned an Apple computer for this reason. Nothing about that changes the technical comparison though.
 

Bencher

Member
Apr 21, 2022
54
10
51
I said very specifically where they are dominating, so maybe re-read my post?

As far as the rest, it's almost like context matters. Comparing a desktop chip to a workstation chip doesn't make much sense, kind of like how comparing a top-of-the-line desktop chip to a base chip meant for tablets and passive devices doesn't make much sense either. Yet you seem to be the only one insisting on making such comparisons, but only when it favors one side.

As far as the LTT comparison, it doesn't have a single Intel APU without a discrete GPU up against the Apple chips, so it's another pointless comparison to make. I mean, the RTX 4090 is over 600 mm² and can draw over 400 W on its own. Why not compare MTL against the base Apple chips in Blender? That's a much more appropriate comparison. MTL actually uses significantly more silicon area than the Apple chips, but it's the closest comparison.

In terms of cost, congratulations, you just told us something everyone has known for decades. Apple's pricing sucks for the performance you get. Again, if you buy Apple, you are buying into a whole ecosystem, the price of each individual component is only a very small part of the price they charge. If you don't like it or don't think it's worth the price, don't buy it. I've never owned an Apple computer for this reason. Nothing about that changes the technical comparison though.
Uhm, the M2 Ultra has more than twice the transistor count of the 4090. A 4090 + a 4080 + a 13900K, and you still have a lower transistor count than the M2 Ultra. Saying that the 4090 draws 400 W on its own is nonsensical; it's so much faster than the M2 Ultra that you can cut its power in half and still end up faster. I assume you know this already; you are just trying to find a silver lining.

Why should the LTT comparison have an Intel APU without a dGPU?? Do you freaking realize - again - that the M2 Ultra has 20 times the transistor count of the 13900K he is comparing it against? Are you basically saying that the Apple chips dominate - as long as they are being compared against CPUs that have 1/20th their transistor count? What are you actually saying?

I don't think Apple's pricing sucks at all. If you want a chip as fast as the 13900K + a 4090 combined, you have to make it huge. And that's what Apple did. They made a giant 140B-transistor chip, then cut down its power to make it efficient. And that's why I'm not surprised by its supposed efficiency, because any 140B-transistor chip can hit those numbers simply by being clocked low. Comparing it to a 13900K and saying "oh look, the M2 Ultra is more efficient" is just beyond silly. That's like power limiting a 4090 to 200 W and then comparing it to a 4050 while pointing at its efficiency. Well yeah, obviously a 4090 limited to low power will be much more efficient than the 4050; it's a chip five times bigger. That's not an achievement. An achievement would be if the 4090 were as big as the 4050 while being more efficient. Now that, sure.
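[Editor's note: the "wide and slow" efficiency argument above can be sketched with a toy DVFS model. Assuming dynamic power scales roughly as C·f·V² and voltage rises roughly linearly with clock over the usable range (a textbook simplification, not measured silicon), doubling the hardware and halving the clock delivers the same throughput at about a quarter of the dynamic power:]

```python
# Toy DVFS model (a sketch, not measured data): dynamic power scales
# roughly as C * f * V^2, and V rises roughly linearly with f across
# the usable range, so dynamic power goes roughly as f^3.

def relative_power(freq_ratio: float, width_ratio: float = 1.0) -> float:
    """Power vs. a baseline design, assuming equal per-clock efficiency.

    width_ratio: how much more parallel hardware the design has.
    freq_ratio:  its clock relative to the baseline.
    """
    return width_ratio * freq_ratio ** 3

# Same throughput (~ width * clock) two ways:
narrow_fast = relative_power(1.0, width_ratio=1.0)  # 1x hardware, full clock
wide_slow = relative_power(0.5, width_ratio=2.0)    # 2x hardware, half clock

print(f"narrow/fast: {narrow_fast:.2f}")  # 1.00
print(f"wide/slow:   {wide_slow:.2f}")    # 0.25
```

Under those assumptions, a big chip run at low clocks is more efficient almost by construction, which is the point being made here about the M2 Ultra vs. a 13900K.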
 

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
Uhm, the M2 Ultra has more than twice the transistor count of the 4090. A 4090 + a 4080 + a 13900K, and you still have a lower transistor count than the M2 Ultra. Saying that the 4090 draws 400 W on its own is nonsensical; it's so much faster than the M2 Ultra that you can cut its power in half and still end up faster. I assume you know this already; you are just trying to find a silver lining.

Why should the LTT comparison have an Intel APU without a dGPU?? Do you freaking realize - again - that the M2 Ultra has 20 times the transistor count of the 13900K he is comparing it against? Are you basically saying that the Apple chips dominate - as long as they are being compared against CPUs that have 1/20th their transistor count? What are you actually saying?

I don't think Apple's pricing sucks at all. If you want a chip as fast as the 13900K + a 4090 combined, you have to make it huge. And that's what Apple did. They made a giant 140B-transistor chip, then cut down its power to make it efficient. And that's why I'm not surprised by its supposed efficiency, because any 140B-transistor chip can hit those numbers simply by being clocked low. Comparing it to a 13900K and saying "oh look, the M2 Ultra is more efficient" is just beyond silly. That's like power limiting a 4090 to 200 W and then comparing it to a 4050 while pointing at its efficiency. Well yeah, obviously a 4090 limited to low power will be much more efficient than the 4050; it's a chip five times bigger. That's not an achievement. An achievement would be if the 4090 were as big as the 4050 while being more efficient. Now that, sure.

First, transistor count is almost meaningless, unless you can tell me the cost per transistor that Apple is paying versus Intel for each of their chips. Maybe you have that info, and that's why you are so adamant about using this metric. If so, let's hear it. If not, then it's just another pointless metric that you think makes Intel look good.

The rest of your post is just more nonsense. You like comparing different classes of chips when the comparison favors what you want to be true, while ignoring all the negatives, but if the shoe is on the other foot, the comparison makes no sense. You've completely exposed yourself and your motives. You'll quickly find that this type of posting won't be received well here. Good luck.
 

Bencher

Member
Apr 21, 2022
54
10
51
First, transistor count is almost meaningless, unless you can tell me the cost per transistor that Apple is paying versus Intel for each of their chips. Maybe you have that info, and that's why you are so adamant about using this metric. If so, let's hear it. If not, then it's just another pointless metric that you think makes Intel look good.

The rest of your post is just more nonsense. You like comparing different classes of chips when the comparison favors what you want to be true, while ignoring all the negatives, but if the shoe is on the other foot, the comparison makes no sense. You've completely exposed yourself and your motives. You'll quickly find that this type of posting won't be received well here. Good luck.
What Apple pays per transistor, I agree, is meaningless; who said it isn't? But if you are comparing ARM vs x86 ceteris paribus (everything else being equal), what matters is how much performance you get per transistor on x86 vs how much you can get on ARM. Or in other words, per die area; but since they are on different nodes, the only way to compare that would be with... TRANSISTORS! It doesn't take a genius, does it? If Intel were on the same node, I wouldn't be using transistors, I'd be using die sizes.

So at the end of the day, any chip Apple produces loses horribly to any chip of equivalent transistor count (or die area, if they were on the same node) from either x86 vendor, be it Intel or AMD (or even Nvidia), and it's not even close. But for some reason you are comparing 140B-transistor chips to 5B-transistor chips and concluding that Apple is dominating. If you can't see the folly of your comparison, it's the Apple-tinted glasses.

I'll remain open-minded, and here, I'll make you an offer. Since you don't like GB6 anymore, show me any review testing a number of workloads and concluding that any Apple chip is faster than the equivalent-transistor-count chip from AMD or Intel. I honestly looked myself but haven't found any; usually they are comparing them against the 14900K and 7950X, and even against those it is trading blows. So if it can't even beat a desktop chip that people use to play Call of Duty, what chance does it have against an actual workstation? But please, go ahead and show me otherwise.

EG1. Just to demonstrate how incredibly crazy your comparison is: if the 12900K and the M2 Ultra were on the same node, you'd basically be comparing a 1,000 mm² die vs a 31.3 mm² die. One thousand vs thirty. LOL, yeah, Apple is totally dominating.
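[Editor's note: the 1,000 vs 31.3 figure can be reproduced with simple linear scaling. Intel doesn't publish a 12900K transistor count, so the ~4.2B value below is back-solved from the poster's own ratio; it is an assumption for illustration only, not an official number:]

```python
# Reproducing the "1,000 mm^2 vs 31.3 mm^2" normalization (a sketch).
# Intel publishes no 12900K transistor count; 4.2e9 is back-solved
# from the poster's own ratio and is purely illustrative.

M2_ULTRA_TRANSISTORS = 134e9        # Apple's published figure
ASSUMED_12900K_TRANSISTORS = 4.2e9  # assumption, not an official number

ratio = M2_ULTRA_TRANSISTORS / ASSUMED_12900K_TRANSISTORS
m2_normalized_area = 1000.0  # mm^2, the post's same-node baseline

print(f"transistor ratio: {ratio:.1f}x")                                 # ~31.9x
print(f"12900K at equal density: {m2_normalized_area / ratio:.1f} mm^2") # ~31.3
```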




Banned once, now banned twice.
Please stay away.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
What Apple pays per transistor, I agree, is meaningless; who said it isn't? But if you are comparing ARM vs x86 ceteris paribus (everything else being equal), what matters is how much performance you get per transistor on x86 vs how much you can get on ARM. Or in other words, per die area; but since they are on different nodes, the only way to compare that would be with... TRANSISTORS! It doesn't take a genius, does it? If Intel were on the same node, I wouldn't be using transistors, I'd be using die sizes.

So at the end of the day, any chip Apple produces loses horribly to any chip of equivalent transistor count (or die area, if they were on the same node) from either x86 vendor, be it Intel or AMD (or even Nvidia), and it's not even close. But for some reason you are comparing 140B-transistor chips to 5B-transistor chips and concluding that Apple is dominating. If you can't see the folly of your comparison, it's the Apple-tinted glasses.

I'll remain open-minded, and here, I'll make you an offer. Since you don't like GB6 anymore, show me any review testing a number of workloads and concluding that any Apple chip is faster than the equivalent-transistor-count chip from AMD or Intel. I honestly looked myself but haven't found any; usually they are comparing them against the 14900K and 7950X, and even against those it is trading blows. So if it can't even beat a desktop chip that people use to play Call of Duty, what chance does it have against an actual workstation? But please, go ahead and show me otherwise.

EG1. Just to demonstrate how incredibly crazy your comparison is: if the 12900K and the M2 Ultra were on the same node, you'd basically be comparing a 1,000 mm² die vs a 31.3 mm² die. One thousand vs thirty. LOL, yeah, Apple is totally dominating.

I never compared the M2 Ultra to a desktop CPU; you really should read my posts more carefully.

Even on the same node, transistor counts aren’t very meaningful unless you know the density achieved (depends greatly on design choices, not just the process) and how the transistors are being counted (dummy gates, IO, etc.).

Intel doesn’t even publish their transistor counts as far as I know, but if you have that info for their recent products, that would be interesting info.
 

Geddagod

Golden Member
Dec 28, 2021
1,250
1,223
106
I never compared the M2 Ultra to a desktop CPU; you really should read my posts more carefully.

Even on the same node, transistor counts aren’t very meaningful unless you know the density achieved (depends greatly on design choices, not just the process) and how the transistors are being counted (dummy gates, IO, etc.).

Intel doesn’t even publish their transistor counts as far as I know, but if you have that info for their recent products, that would be interesting info.
SRF will have ~100 billion transistors (dunno if it's the 144C or 288C variant), Emerald Rapids has 61 billion, SPR 46 billion, Raptor Lake 12 billion.
 

Hitman928

Diamond Member
Apr 15, 2012
5,656
8,926
136
SRF will have ~100 billion transistors (dunno if it's the 144C or 288C variant), Emerald Rapids has 61 billion, SPR 46 billion, Raptor Lake 12 billion.

As far as I know, those are estimates based on die area and process node but could have pretty big error margins.
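[Editor's note: to illustrate how wide those margins can be, here is a sketch of the usual estimation method, die area multiplied by an assumed average density. The die size is the widely reported Raptor Lake-S figure; the density values are hypothetical guesses chosen to bracket the estimate, not measurements:]

```python
# Transistor-count estimates are typically die area x assumed density
# (a sketch; the density values below are hypothetical guesses).

DIE_AREA_MM2 = 257  # widely reported Raptor Lake-S (8P+16E) die size

density_guesses_mtr_per_mm2 = {
    "logic-heavy guess": 60,
    "mixed logic/SRAM/IO guess": 47,
    "conservative guess": 35,
}

for label, density in density_guesses_mtr_per_mm2.items():
    transistors = DIE_AREA_MM2 * density * 1e6
    print(f"{label:>26}: {transistors / 1e9:4.1f}B")

# The same die lands anywhere from ~9B to ~15B transistors depending
# on the density assumed, hence the big error margins.
```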
 

Geddagod

Golden Member
Dec 28, 2021
1,250
1,223
106
What apple pays per transistor I agree is meaningless, who said it isn't? But if you are comparing arm vs x86 ceteris paribus (everything else being equal) what matters is how much performance do you get per transistor on x86 vs how much you can get on arm. Or in other words, per die area, but since they are on a different nodes the only way to compare that would be with....TRANSISTORS! It doesn't take a genius, does it? If intel was on the same node I wouldn't be using transistors, I'd be using die sizes.

So at the end of the day, any chip Apples produces loses horribly to any equivalent chip in transistor count (or DIE area if they were on the same node) from either x86 vendor, be it Intel or AMD (or even nvidia) and it's not even close. But some reason you are comparing 140b transistor chips to 5b transistor chips and conclude that Apple is dominating. If you can't see the folly of your comparison it's the apple tinted glasses.

I'll remain open minded and here, I'll make you an offer. Since you don't like GB6 anymore, show me any review testing a number of workloads and concluding any apple chip is faster than the equivalent transistor count chip from amd or intel. I honestly looked myself but haven't found any, usually they are comparing them against 14900k and 7950x, and even against those it is trading blows. So if it can't even beat a desktop chip that people use to play call of duty, what chance does it have against an actual workstation? But please, go ahead and how me otherwise.

EG1. Just to demonstrate how incredibly crazy your comparison is, if the 12900k and the m2 ultra were on the same node, you'd be basically comparing a 1000mm die vs a 31.3mm die. One thousand vs thirty. LOL, yeah, apple is totally dominating.
You could pretty easily compare Intel 4 RWC and TSMC N5 Apple M1 with the data David Huang's perf/watt testing provided us.

The M1 scores ~20% higher at 11 watts package power vs RWC.
Comparing just core+cache power:


It's even worse.
Area comparisons between the M1 core and RWC, since they are on similar nodes, would look something like this:
RWC core + private cache: 5.33 mm²
M1 core + private cache: 2.28 mm²
Ok, but that's not really fair, considering the M1 still has a sizable L2 shared with other cores that doesn't get counted. Also, AMX seems to be separate from the core?
We can do this a couple of ways:
M1 core with the shared L2, L2 control logic, and AMX area split up equally per core: 3.889 mm²
Or we can just compare an entire quad-core P-core complex:
RWC quad-core complex: 30.588 mm² (I think I was being a bit generous towards Intel with this calc)
RWC quad-core complex without L3: 21.32 mm²
M1 quad-core complex: 15.555 mm²
Ok, now we can also look at total perf:

Just looks like a worse core from every perspective tbh. Perf, power, and area.
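[Editor's note: the 3.889 mm² effective-core figure falls straight out of the quad-complex numbers above. A small sketch of that amortization, using only the areas from this post:]

```python
# Amortizing the M1 P-cluster's shared structures (L2, L2 control
# logic, AMX) equally across cores, using the mm^2 figures above.

CORES = 4
M1_CORE_PRIVATE = 2.28    # core + private caches
M1_QUAD_COMPLEX = 15.555  # whole quad-core complex incl. shared area

shared_per_core = (M1_QUAD_COMPLEX - CORES * M1_CORE_PRIVATE) / CORES
effective_core = M1_CORE_PRIVATE + shared_per_core
print(f"M1 effective core area: {effective_core:.3f} mm^2")  # ~3.889

# The same figures also give the headline area ratios vs RWC:
RWC_CORE_PRIVATE = 5.33
RWC_QUAD_NO_L3 = 21.32
print(f"core-only ratio:    {RWC_CORE_PRIVATE / M1_CORE_PRIVATE:.2f}x")  # ~2.34x
print(f"quad (no L3) ratio: {RWC_QUAD_NO_L3 / M1_QUAD_COMPLEX:.2f}x")    # ~1.37x
```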
 

Jan Olšan

Senior member
Jan 12, 2017
334
448
136
The year old M2 Ultra beats Intel's latest and best dual socket server CPU in MT and ST, despite the Intel CPUs having 96 more cores, and using way more power. That's an extremely disappointing showing from Intel's best.

Just stop. It's known that Geekbench 6 simply doesn't come close to stressing (scaling to) processors with even modest core counts. Pretending it evaluates a server CPU fairly (much less a dual-socket system) in these Apple-vs.-something posts is typically a sign of ignorance or bad faith.

The GB6 MT test probably rewards having +10% performance on a few P-cores more than having 10× more cores / 10× more actual multithreaded performance. That's not an actual observation, but it may even be close, lol.

Only look at ST scores for the benchmark.
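[Editor's note: GB6's MT score comes from a shared cooperative workload, and its exact weighting isn't public, so the claim above can't be checked directly. But an Amdahl-style toy model with an assumed serial fraction shows the flavor of the saturation being described; the 0.5 below is an arbitrary assumption, not a property of GB6:]

```python
# Toy Amdahl-style model of MT saturation (a sketch; the 0.5 serial
# fraction is an arbitrary assumption, not a property of GB6).

def mt_score(st_score: float, cores: int, serial_frac: float = 0.5) -> float:
    """Scale an ST score by Amdahl's law: speedup = 1 / (s + (1 - s) / n)."""
    return st_score / (serial_frac + (1.0 - serial_frac) / cores)

few_fast = mt_score(st_score=110, cores=8)    # +10% ST, 8 cores
many_slow = mt_score(st_score=100, cores=80)  # 10x the cores

print(f"8 fast cores:    {few_fast:.0f}")   # ~196
print(f"80 slower cores: {many_slow:.0f}")  # ~198
```

Under that assumption, the 8-core part with +10% ST lands within about 1% of the part with ten times the cores, which is exactly the "rewards a few faster P-cores" behavior described above.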
 

Elfear

Diamond Member
May 30, 2004
7,116
696
126
Just stop. It's known that Geekbench 6 simply doesn't come close to stressing (scaling to) processors with even modest core counts. Pretending it evaluates a server CPU fairly (much less a dual-socket system) in these Apple-vs.-something posts is typically a sign of ignorance or bad faith.

The GB6 MT test probably rewards having +10% performance on a few P-cores more than having 10× more cores / 10× more actual multithreaded performance. That's not an actual observation, but it may even be close, lol.

Only look at ST scores for the benchmark.
Not trying to speak for @Hitman928, but I believe his response to another poster's wildly illogical claims was very much tongue-in-cheek. GB6 is a poor MT test (ESPECIALLY across products for different markets), so why are we testing Apple tablet chips against Intel desktop SKUs?
 

Nothingness

Platinum Member
Jul 3, 2013
2,811
1,502
136
You could pretty easily compare Intel 4 RWC and TSMC N5 Apple M1 with the data David Huang's perf/watt testing provided us.
[Attachment: perf/watt chart]
The M1 scores ~20% higher at 11 watts package power vs RWC.
Comparing just core+cache power:
[Attachments: core+cache power charts]
It's even worse.
Area comparisons between the M1 core and RWC, since they are on similar nodes, would look something like this:
RWC core + private cache: 5.33 mm²
M1 core + private cache: 2.28 mm²
Ok, but that's not really fair, considering the M1 still has a sizable L2 shared with other cores that doesn't get counted. Also, AMX seems to be separate from the core?
We can do this a couple of ways:
M1 core with the shared L2, L2 control logic, and AMX area split up equally per core: 3.889 mm²
Or we can just compare an entire quad-core P-core complex:
RWC quad-core complex: 30.588 mm² (I think I was being a bit generous towards Intel with this calc)
RWC quad-core complex without L3: 21.32 mm²
M1 quad-core complex: 15.555 mm²
Ok, now we can also look at total perf:
[Attachment: performance chart]
Just looks like a worse core from every perspective tbh. Perf, power, and area.
Great post, thanks!

I bet that won't prevent the other poster from still claiming Apple loses, given that the complete SoC is much larger. He's not even able to understand that one can defend GB6 while being aware of its weaknesses; for some, it's either black or white.

But that was definitely worth your time, I learned a lot 😀
 
Reactions: Hitman928

Nothingness

Platinum Member
Jul 3, 2013
2,811
1,502
136
Only look at ST scores for the benchmark.
Exactly, and that's what I told Bencher very early in the discussion. But he insists on taking the MT score and picking the whole SoC's area/transistor count. As I said, I'm pretty sure he just wants to "demonstrate" his preconceived idea that Apple's CPUs are poorly implemented. Not worth continuing the discussion IMHO.
 

poke01

Golden Member
Mar 8, 2022
1,529
1,795
106
The best class of Apple chip to compare with the Ultra 9 or i9 is the Max chip, not a base Mx tablet chip.

Apple is ahead because they planned this far ahead; for all the flaws Apple has, their planning and execution are top notch and matched only by Nvidia. They have one of the best silicon hardware teams, and quite frankly Intel is behind. It is sad to see a "lifestyle" company crap on Intel, whose sole purpose is CPUs.
 