Apple A12 benchmarks


Eug

Lifer
Mar 11, 2000
23,752
1,286
126
The interesting question is when Apple decides to get back into the server market. Server products have very high margins, and are Intel's bread and butter; if Apple can come up with a chip that has better perf/watt than Intel's best offerings, and scale it up to massive multicore levels with the expected professional features, then there's a good chance they can bite off some market share even without x86 support. An advantage is that server parts generally care less about super-high per-core clock speeds than desktop/workstation parts (and this is the one area left where Intel still has a sizable edge on the competition).

Most other ARM-based server chips have fallen flat, but Apple's hardware (and reputation) is good enough that they could make it work.
I'm not in the industry, but I've read lots of posts here and there saying that Apple's server group pissed off a lot of potential customers. While the hardware was well respected, customers didn't like Apple's lack of flexibility and sub-par on-site service support. Furthermore, they did not like that Apple was so secretive: there was no way to plan for future hardware and software upgrades because nobody ever knew what the roadmap was.

Basically, it seemed the main market for their servers was small to medium-sized businesses, and Apple did not seem to want to cater to large corporate clients, which can be much more demanding than businesses with 50 employees.

I may be totally off on this, but that's the gist of what I got when Apple servers were still a thing... and then Apple simply killed off the entire division.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The 2700U has Geekbench scores of 3500/10,000 (single/multi, rough average) while the 8650U has scores of 5500/18,000. So even comparing within the same x86 architecture, at the same TDP, Intel has an 80% advantage over AMD based on multithreaded Geekbench. Does anyone believe it is actually that much faster? I certainly don't.

The top scores show 40% difference, not 80%, but that's still quite large.

https://browser.geekbench.com/v4/cpu/compare/7820178?baseline=4534181

PC benchmarks only show 10-20% difference between the two.
https://www.anandtech.com/show/12709/the-acer-swift-3-sf31541-review-ryzen-meets-laptop/3

Where it differs as radically as Geekbench is on mobile benchmarks such as Kraken/Octane/WebXPRT.
 

BeepBeep2

Member
Dec 14, 2016
86
44
61
The 8650U is actually only a 15W chip, assuming TDP-up doesn't change its max turbo frequency.
It doesn't matter; the chip is not running at the maximum turbo frequency under multi-threaded load. Only single-thread will be similar, but if Intel normally allocates >15W to a single core for the maximum turbo frequency (highly likely), that will be a little lower too.

Remember that with Turbo Boost 2.0 the CPU can violate the TDP temporarily.
Yes.

I never said the 8700k was using six times the power of the 8650U. I said it had six times the TDP, which is correct. But this whole thread is based on TDP comparisons, so if that is not valid, then the A12 benchmarks are no more valid. I have not seen one post of actual measured power usage in this thread.

In any case, if Intel has such horrible, inept CPU designers, AMD's must be even worse, if you accept Geekbench as the sole benchmark of CPU performance. The 2700U has Geekbench scores of 3500/10,000 (single/multi, rough average) while the 8650U has scores of 5500/18,000. So even comparing within the same x86 architecture, at the same TDP, Intel has an 80% advantage over AMD based on multithreaded Geekbench. Does anyone believe it is actually that much faster? I certainly don't.
There are no Linux results in the database for the 2700U. The highest score for the 8650U (25w TDP-up) in Windows is 5.28K/17.2K. Normally, they score around 5K/15K.

AMD (2700U) is at ~4K/11K. Manufacturers have been reluctant to scale the TDP above 15W for those so far, from what I've gathered. (Platform power consumption seems higher overall, especially at idle, which results in lower battery life.) Regardless, AMD is about 25% behind in GB4 from what I see.

In other real-world workloads, AMD seems to be a lot closer than 25% back. They win in some highly threaded workloads and lose in others.


I'm going to be honest: I hate Geekbench. At least I don't hate it as much as UserBench. There is such a spread of results in GB4... the 8th Gen U-series chips from Intel are also all over the map in general. In some 15W configurations, the 8550U outperforms the 8650U, and some laptops with those chips only score 12-13K multithreaded in GB4.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I'm going to be honest: I hate Geekbench. At least I don't hate it as much as UserBench. There is such a spread of results in GB4...

The spread only exists because user submitted results are accepted.

In professional reviews, everything in their power is done to make the systems as equal as possible. You can't expect a user submitted result to be like that.

Linux results don't exist for the 2700U, but Android results do, and Android's gain over Windows is about the same as Linux's gain over Windows.
 

BeepBeep2

Member
Dec 14, 2016
86
44
61
The spread only exists because user submitted results are accepted.

In professional reviews, everything in their power is done to make the systems as equal as possible. You can't expect a user submitted result to be like that.

Linux results don't exist for the 2700U, but Android results do, and Android's gain over Windows is about the same as Linux's gain over Windows.
I don't agree. Many reviewers leave Geekbench out of their test suites because of the inconsistency. Maybe laptop reviewers use it most, but even then, the spread is all over the place across different devices, even when the results look more equal in other benchmarks.

Are you assuming the gain is the same, or have you tested both? I'm well aware of the Android results. Even then, it just makes the spread between AMD and Intel in performance even smaller.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I don't agree. Many reviewers leave Geekbench out of their test suites because of the inconsistency.

The spread is true with 3DMark and PCMark too, because users don't care about having equal driver versions, checking whether the system has thermal problems, whether demanding background tasks are running, HDD vs SSD, or whether it's running dual-channel memory.

I agree Geekbench sucks for PC, because better tests exist on PC. At least if you are doing Geekbench tests, you need to break the scores down into Integer/FP, because Crypto and Memory are also part of the final score.
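Since Crypto and Memory feed into the headline number, the breakdown matters. As a rough sketch of how section scores combine into a composite (the weights below are illustrative assumptions, not Primate Labs' exact published figures):

```python
# Toy Geekbench-style composite: a weighted geometric mean of section
# scores. Weights here are illustrative guesses, not official GB4 values.
from math import prod

def composite(scores: dict, weights: dict) -> float:
    """Weighted geometric mean of section scores."""
    return prod(scores[k] ** weights[k] for k in scores)

weights = {"crypto": 0.05, "integer": 0.45, "fp": 0.30, "memory": 0.20}

# Same hypothetical CPU subscores, dual-channel vs single-channel memory.
fast_mem = {"crypto": 4000, "integer": 5000, "fp": 5000, "memory": 5500}
slow_mem = dict(fast_mem, memory=3500)

print(round(composite(fast_mem, weights)))
print(round(composite(slow_mem, weights)))
```

Even with identical Integer/FP subscores, the weaker memory configuration drags the composite down by several percent, which is one reason user-submitted results scatter so much.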

Are you assuming the gain is the same, or have you tested both? I'm well aware of the Android results. Even then, it just makes the spread between AMD and Intel in performance even smaller.

I'm not sure what you are referring to here. I was responding to this quote "There are no Linux results in the database for the 2700U."

I should have clarified. I normally categorize it only for top single thread scores, because that's the easiest way to isolate for variables like thermals and arbitrary clocks. You can't really do that in an easy fashion for multi-thread scores. When looking at ST, Linux/Android gains ~10% over Windows. In multi-thread, it does seem a bit different.

In other real-world workloads, AMD seems to be a lot closer than 25% back. They win in some highly threaded workloads and lose in others.

I'm being nitpicky, but it should be

"33% ahead"

rather than

"25% behind"

Because it's the more accurate statement and doesn't confuse as much.
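The arithmetic behind that nitpick, as a quick sanity check:

```python
# If AMD scores 75% of Intel's result, AMD is "25% behind" but Intel is
# 1/0.75 - 1 = 33% ahead; the two phrasings use different baselines.

def pct_behind_to_pct_ahead(behind_pct: float) -> float:
    """Convert 'B is X% behind A' into 'A is Y% ahead of B'."""
    ratio = 1.0 - behind_pct / 100.0      # B's score as a fraction of A's
    return (1.0 / ratio - 1.0) * 100.0    # A's lead measured against B

print(round(pct_behind_to_pct_ahead(25.0)))  # -> 33
print(round(pct_behind_to_pct_ahead(50.0)))  # -> 100
```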
 

Nothingness

Platinum Member
Jul 3, 2013
2,758
1,413
136
Even SPEC CPU shows variations in score >5%. Some reboot the OS and remove as much as possible to get the best possible score.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
but if Intel normally allocates >15W to a single core for the maximum turbo frequency (highly likely), that will be a little lower too.

I did testing on the XPS 12 Ultrabook. PL1 is set to 15W and PL2 is set to 17W.

Did two tests, both with the power adapter connected. The first one got a little over 2K in ST and 5K in MT, but with background processes running. In that scenario, with the CPU close to 100%, it never went above 14W.

Second test, I made sure the updates were completed so the CPU could get back down to low-power mode. Then I ran Geekbench on Windows 10 64-bit. Brief spikes to 13W were shown, but most tests hovered in the 7-10W range. I saw the CPU reach the max Turbo of 3GHz two or three times, which indicates ST testing at 9W or so. Got the highest Geekbench 4 score for the XPS 12 9Q23: 3075 in ST and a bit over 5700 in MT.

So, I doubt Geekbench goes anywhere near the 15W TDP in ST.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Testing on the Core i3 7100 desktop. 3.9GHz Turbo clock, Windows 10 64-bit.

ST: 4860, MT: 9533, https://browser.geekbench.com/v4/cpu/8244686

Had HWInfo64 on the side. CPU package power again hovers at a very low point, while the CPU frequently reaches 3.9GHz. I've seen 21W for a second or so, and 16W for about 5 seconds. For the rest of the roughly two-minute benchmark, package power stayed between 5-8W.

I doubt even the 8700K reaches anywhere near 100W on Geekbench.

Update: The 16W/21W number is only reached on the second run of the test, which must be the multi-threaded portion. 16W during HDR, 21W during Speech Recognition. On the first runs they are still the top 2 highest for power, but numbers are much lower, about half. All the other tests use much less power.
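For anyone who wants to reproduce this kind of package-power logging without HWInfo64: on Linux, the kernel's powercap interface exposes a cumulative RAPL energy counter. A minimal sketch (assumes an Intel CPU with the intel-rapl driver loaded; the sysfs path can differ per machine and reading it may require root):

```python
# Average CPU package power from two readings of the cumulative RAPL
# energy counter (microjoules). The counter wraps periodically; that is
# ignored here for brevity.
import os
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def avg_power_watts(energy_start_uj: int, energy_end_uj: int, seconds: float) -> float:
    """Average package power in watts over the measurement interval."""
    return (energy_end_uj - energy_start_uj) / 1e6 / seconds

def read_energy_uj() -> int:
    with open(RAPL_ENERGY) as f:
        return int(f.read())

if __name__ == "__main__" and os.path.exists(RAPL_ENERGY):
    e0 = read_energy_uj()
    time.sleep(1.0)          # run the workload of interest here instead
    e1 = read_energy_uj()
    print(f"package power: {avg_power_watts(e0, e1, 1.0):.1f} W")
```

Sampling like this around a Geekbench run gives an averaged figure comparable to the HWInfo64 observations above, though brief spikes get smoothed out.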
 
Last edited:

BeepBeep2

Member
Dec 14, 2016
86
44
61
The spread is true with 3DMark and PCMark too, because users don't care about having equal driver versions, checking whether the system has thermal problems, whether demanding background tasks are running, HDD vs SSD, or whether it's running dual-channel memory.

I agree Geekbench sucks for PC, because better tests exist on PC. At least if you are doing Geekbench tests, you need to break the scores down into Integer/FP, because Crypto and Memory are also part of the final score.
This is part of the reason I said the spread is larger. Most other PC benchmarks are not as sensitive to crypto or memory bandwidth/latency. PCs are often configured with different memory speeds, which have a relatively large influence on the overall GB4 score, larger than the real-world performance impact seen in other benchmarks.

I'm not sure what you are referring to here. I was responding to this quote "There are no Linux results in the database for the 2700U."

I should have clarified. I normally categorize it only for top single thread scores, because that's the easiest way to isolate for variables like thermals and arbitrary clocks. You can't really do that in an easy fashion for multi-thread scores. When looking at ST, Linux/Android gains ~10% over Windows. In multi-thread, it does seem a bit different.

You correctly guessed what I was referring to. I don't see any Android x86 results for KBL-R, and I don't see any Linux results for Raven Ridge. I was asking if you had tested these yourself, or were just assuming based on incomplete information. OS overhead may differ too, even if the compiler alone makes a lot of the difference.

I'm being nitpicky, but it should be

"33% ahead"

rather than

"25% behind"

Because it's the more accurate statement and doesn't confuse as much.


I did testing on the XPS 12 Ultrabook. PL1 is set to 15W and PL2 is set to 17W.

Did two tests, both with the power adapter connected. The first one got a little over 2K in ST and 5K in MT, but with background processes running. In that scenario, with the CPU close to 100%, it never went above 14W.

Second test, I made sure the updates were completed so the CPU could get back down to low-power mode. Then I ran Geekbench on Windows 10 64-bit. Brief spikes to 13W were shown, but most tests hovered in the 7-10W range. I saw the CPU reach the max Turbo of 3GHz two or three times, which indicates ST testing at 9W or so. Got the highest Geekbench 4 score for the XPS 12 9Q23: 3075 in ST and a bit over 5700 in MT.

So, I doubt Geekbench goes anywhere near the 15W TDP in ST.
What CPU is in that? A 3rd Gen i5/i7? I appreciate your testing, but your tests have little relevance to modern platforms (8th Gen Intel, AMD Raven Ridge) due to changes both companies have made in turbo behavior, and the fact that their new CPUs consistently violate the stated TDP rules as long as the CPU is within thermal limits. The CPUs are binned only to meet the specified base clock at the default TDP (15W). Otherwise, performance and power consumption are highly dependent on the thermal solution. It would not surprise me at all if an 8th Gen Intel CPU consumes well over 15W, or even close to 25W, in a single-threaded workload, especially when configured for 25W TDP-up. (I am aware that TDP and power consumption are not the same.) However, I don't have the resources to test.

PL2 on Intel 8th Gen U-series is 44W, and Intel specifies a maximum Processor IA Core current of 64 amps. (All 8th Gen U-series laptops should be able to deliver at least 64 amps to the CPU cores.)
 

BeepBeep2

Member
Dec 14, 2016
86
44
61
You missed the Desktop CPU testing. That's a CPU with no limits and TDP at 51W.

Geekbench goes nowhere near that. Actually, it seems to use less power on average than the 3517U.
I see it, but you can't make a direct comparison with these 8th Gen SKUs; the 8650U reaches a 300 MHz higher frequency and will consume more power at the same voltage due to higher thermals (transistor leakage current increases with temperature).

Platform power also matters most when you're sucking it out of a battery. I'm just saying, it wouldn't surprise me if the 8650U uses 15W or more in its single-threaded workloads.

Back to the Apple A12: the original point I was trying to make was that Apple may be using a similar turbo scheme, especially in phones and tablets with short "burst" workloads. Most smartphones also only reach their target frequency under load when temperature is low, likely at power consumption much higher than the SoC's specified TDP, and then throttle to stay within a defined temperature threshold.

Most smartphones also slow down when the battery is significantly depleted or after some period of cell aging, even with a "healthy" battery, to keep the input voltage from dropping too low and causing an unexpected shutdown; only Apple took a lot of criticism for it in recent news. This leads me to believe that the power consumption of these SoCs during heavy workloads is significantly higher than 4.5W.
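A toy model of that battery-sag mechanism (all numbers are made-up illustrations, not Apple's actual parameters): the governor caps power so the predicted terminal voltage, V_oc - I*R_int, stays above the brown-out cutoff, and an aged cell's higher internal resistance shrinks that budget dramatically.

```python
# Toy brown-out model: the largest sustained power draw that keeps the
# battery terminal voltage above the shutdown cutoff. Illustrative only.

def max_safe_power_w(v_oc: float, r_int_ohm: float, v_cutoff: float) -> float:
    """Largest power draw keeping terminal voltage >= v_cutoff.

    With P = V_term * I and V_term = v_oc - I * r_int, the limit sits at
    V_term = v_cutoff, so I_max = (v_oc - v_cutoff) / r_int.
    """
    i_max = (v_oc - v_cutoff) / r_int_ohm
    return v_cutoff * i_max

# Fresh cell near full charge: low internal resistance, generous budget.
print(round(max_safe_power_w(4.2, 0.10, 3.0), 1))
# Aged cell at low charge: resistance up, open-circuit voltage down.
print(round(max_safe_power_w(3.6, 0.35, 3.0), 1))
```

Under these made-up numbers the fresh cell tolerates an order of magnitude more burst power than the aged one, which is exactly the behavior the slowdown controversy was about.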
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I see it, but you can't make a direct comparison with these 8th Gen SKUs; the 8650U reaches a 300 MHz higher frequency and will consume more power at the same voltage due to higher thermals (transistor leakage current increases with temperature).

I gave this some thought and realized the answer is right in front of me. The 8550U also gets ~5400 ST in Geekbench, with a Turbo of 4GHz, a mere 100MHz up from the 7100, which might as well be identical.

I also told you the Kabylake 7100 is more efficient than the 3517U despite its 51W TDP, and despite the fact that it's on a desktop platform.

I also told you in the previous posts that in MT workloads it reaches 16/21W power, and only in the top 2 most demanding tests, while the rest is under 10W, and single-threaded tests are well under that.

Kabylake 7100, on a Desktop, running at 3.9GHz, uses 4-6W while running Geekbench.
Bolded so you don't "forget" or "miss" it this time. Which may be the whole point of this thread! What's the point of arguing about TDP figures when Geekbench doesn't even load the chip to a third of it, and often a tenth?
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Kabylake 7100, on a Desktop, running at 3.9GHz, uses 4-6W while running Geekbench.

Didn't you say 16W and 21W for some benchmarks, and between 5-8W most of the time? Looks like your numbers keep getting smaller the more you talk about them.
Other than that, agreed: TDP is totally useless when trying to reason about efficiency.
 

scannall

Golden Member
Jan 1, 2012
1,948
1,640
136
Apple has just been killing it at CPU design for a while now. Though that seems to make some people angry. Not sure why, cool new tech is always good.
 
Reactions: FIVR and IEC

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Didn't you say 16W and 21W for some benchmarks, and between 5-8W most of the time? Looks like your numbers keep getting smaller the more you talk about them.

Nope. Go read the back-and-forth between me and him.

He said basically

15W can be easily reached on Single Thread.

I said

4-6W on Single Thread but about double in Multi-Thread.

I also said

Certain parts of the benchmark do a little more: 8-10W on ST and 16-21W on MT.

What you are doing is called selective quoting to favor your arguments.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
What you are doing is called selective quoting to favor your arguments.

I did not quote selectively, nor did I make an argument. I was just wondering why your numbers are lower than in your original statement.

And yes, power is very workload dependent. That is why I mentioned in my previous posts that, in order to reason about efficiency, you need to fix the workload and measure the power instead of using TDP as the metric.
 
Reactions: FIVR and BeepBeep2

BeepBeep2

Member
Dec 14, 2016
86
44
61
So much of what I said got completely ignored, or only bits and pieces of it were analyzed from a very limited perspective to drill the point about single-threaded wattage / TDP into the ground. That point wasn't even stated clearly by the other person in the first place, though the disparaging reply that later reiterated it with even smaller numbers tried to make it seem so.

I would like to point out, too, that I already differentiated TDP (which relates to package power / PL1) from platform power, and stated that platform power is more important, though I did mention TDP in my posts.

So why would I have mentioned TDP at all, or intel "violating" the specified TDP?
Intel defines TDP: “TDP. . . The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software. . . TDP includes all power dissipated on-die from VDD, VDDNB, VDDIO, VLDT, VTT and VDDA."

PL1 usually equals this specification, and the package power reported by software is compared against TDP / PL1 / PL2. I did state that I believe Intel routinely violates its specified TDP (PL1), especially with the 8th Gen U-series, which requires a 64A-capable VRM and a 44W PL2 on most notebooks. That means both package power and performance of the 8th Gen CPUs vary greatly depending on the TDP(-up) specification (PL1 15W / 25W) and on whether the thermal solution can actually sustain much higher clock speeds for longer periods at PL2 (usually 44W).
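The PL1/PL2 interaction can be sketched as a toy simulation: Turbo Boost 2.0 tracks a moving average of package power (modeled here as an exponentially weighted average with time constant tau) and allows bursts up to PL2 while that average stays under PL1. The limits and time constant below are illustrative, not taken from any datasheet.

```python
# Toy PL1/PL2 turbo-budget simulation. While the running average of
# package power is below PL1, the CPU may burst to PL2; once the average
# reaches PL1, power is clamped to PL1. All values are illustrative.

def simulate(pl1=15.0, pl2=44.0, tau=28.0, demand=44.0, seconds=60, dt=1.0):
    """Return per-step package power for a sustained heavy load."""
    avg = 0.0                  # EWMA of package power (the turbo budget)
    alpha = dt / tau
    trace = []
    t = 0.0
    while t < seconds:
        # Burst to the demanded power (capped at PL2) while under budget.
        power = min(demand, pl2) if avg < pl1 else pl1
        avg += alpha * (power - avg)
        trace.append(power)
        t += dt
    return trace

trace = simulate()
print(trace[0], trace[-1])   # starts at 44.0 (PL2 burst), settles to 15.0 (PL1)
```

With these numbers, a sustained 44W demand runs at PL2 for roughly the first dozen seconds before clamping to 15W, which is why short benchmark runs can show power far above the rated TDP.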

My statements weren't meant to apply only to Geekbench, either. Also, it makes little sense to prove me wrong by testing a desktop CPU with half the cores, a significantly lower running temperature (which, as I already mentioned, affects power consumption), and significantly less cache, when I was speculating about a laptop CPU with twice the cores and threads, much higher temperature (resulting in higher leakage current), and a turbo speed 300 MHz higher (which, of course, was conveniently swapped for an 8550U in a reply to prove a newly fabricated point).

https://www.notebookcheck.net/Dell-XPS-13-9370-i5-8250U-4K-UHD-Laptop-Review.279736.0.html
https://www.notebookcheck.net/fileadmin/_processed_/8/5/csm_stresscpu1_cc30c28da7.png

So in the above links, it is interesting that we see a Core i5-8250U in the Dell XPS 13 allowed 46W maximum package power until the CPU bordered Tjunction Max, at up to 3.4 GHz (settling to 25W @ 2.5 GHz). Cinebench results are also much better during the first run, with significant falloff afterwards as reported by the reviewer, and I'm betting that power consumption during that initial run is very high.

Of course, they are stress testing with Prime95, but that is still a maximum of 11.5W package power per core at only 3.4 GHz. I must have been crazy to claim it was highly likely that Intel allows CPUs to exceed 15W package power in single-threaded loads at much higher speeds and voltages, right? That hypothesis was really not that far-fetched.

Initially, I was more interested in the maximum power Apple or Intel allow their processors to draw under different loads; the comments about Intel and its TDP specification were never the main focus, though another user made them that. I was wondering if Apple's turbo behavior is similar to Intel's, allowing very high power draw for short periods or during very short benchmark runs.

We don't actually have solid data for the Intel chips, except half-truths from a user with a processor based on a different die, and no data for Apple, so it doesn't really matter. This thread should get back on topic.
 
Last edited:
Aug 11, 2008
10,451
642
126
The top scores show 40% difference, not 80%, but that's still quite large.

https://browser.geekbench.com/v4/cpu/compare/7820178?baseline=4534181

PC benchmarks only show 10-20% difference between the two.
https://www.anandtech.com/show/12709/the-acer-swift-3-sf31541-review-ryzen-meets-laptop/3

Where it differs as radically as Geekbench is on mobile benchmarks such as Kraken/Octane/WebXPRT.

I was using a rough average of the posted GB4 scores for the 8650U (https://browser.geekbench.com/v4/cpu/search?dir=desc&q=i7-8650u&sort=score) and the 2700U (https://browser.geekbench.com/v4/cpu/search?utf8=✓&q=2700u).

Obviously the scores will vary based on the source, but the point still stands that *any* x86 CPU is behind the A12, and AMD is even farther behind than Intel. Also, AMD is much closer to Intel in real-world workloads than Geekbench indicates.
 
Reactions: scannall

jpiniero

Lifer
Oct 1, 2010
14,841
5,456
136
Obviously the scores will vary based on the source, but the point still stands that *any* x86 CPU is behind the A12, and AMD is even farther behind than Intel. Also, AMD is much closer to Intel in real-world workloads than Geekbench indicates.

Again, some of the tests are memory-speed dependent, and Ryzen is going to get thumped by Core in SGEMM and SFFT due to not having 256-bit vector units. That is realistic performance for a workload that vector-heavy (which may not be much outside of HPC, but it is realistic). You cannot look at the total score to make a proper comparison.
 

FIVR

Diamond Member
Jun 1, 2016
3,753
911
106
I was using a rough average of the posted GB4 scores for the 8650U (https://browser.geekbench.com/v4/cpu/search?dir=desc&q=i7-8650u&sort=score) and the 2700U (https://browser.geekbench.com/v4/cpu/search?utf8=✓&q=2700u).

Obviously the scores will vary based on the source, but the point still stands that *any* x86 CPU is behind the A12, and AMD is even farther behind than Intel. Also, AMD is much closer to Intel in real-world workloads than Geekbench indicates.

You should call Intel and tell them about this finding; they could use it as their next great slogan now that "intel inside" is dead.


Something like "Yes, our SoCs suck. Especially when compared to Apple. But do not worry: this small company with no marketshare to speak of in our most important market sucks more."


Perhaps it could be shortened to Intel: "It could be worse!"
 
Last edited:

FIVR

Diamond Member
Jun 1, 2016
3,753
911
106
Apple is clearly on another level to any x86 CPU or SoC manufacturer. There is simply no way they can compete.


Intel should've been able to compete with all its money. However, due to terrible management spanning the last decade, they have lost the edge they had in manufacturing, and their design teams were never any better than AMD's anyway (arguably, they have always been worse).


I simply don't see a situation where AMD or Intel ever catch up to Apple again. Eventually Apple won't have any need for such component manufacturers and will produce all chips in house.
 

BeepBeep2

Member
Dec 14, 2016
86
44
61
Apple is clearly on another level to any x86 CPU or SoC manufacturer. There is simply no way they can compete.


Intel should've been able to compete with all its money. However, due to terrible management spanning the last decade, they have lost the edge they had in manufacturing, and their design teams were never any better than AMD's anyway (arguably, they have always been worse).


I simply don't see a situation where AMD or Intel ever catch up to Apple again. Eventually Apple won't have any need for such component manufacturers and will produce all chips in house.
If AMD gets anything close to GloFo's stated 55% reduction in power and improves their architecture somewhat at 7nm, they will be closer. A Zen 2 SoC could provide around the same GB4 performance at a "7.5W" package TDP as the A12 does. Unless I've missed some other information, we also have no clue how much power the A12 actually consumes for these results, and there is nothing on a similar node from Intel or AMD to compare it to.

As far as Intel goes, 10nm looks really, really screwed. The i3-8121U looked worse than 7th Gen KBL-U.
 
Aug 11, 2008
10,451
642
126
You should call Intel and tell them about this finding; they could use it as their next great slogan now that "intel inside" is dead.


Something like "Yes, our SoCs suck. Especially when compared to Apple. But do not worry: this small company with no marketshare to speak of in our most important market sucks more."


Perhaps it could be shortened to Intel: "It could be worse!"
Maybe you should call them; it seems you have taken it on as your sole mission in life to deride Intel whenever possible and excuse AMD. The point was that even among x86 CPUs, the GB4 results between AMD and Intel are wildly different from everyday use.
 
Reactions: Zucker2k