Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,953
1,567
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24,576 concurrent threads
2.6 Teraflops (see the note below)
82 Gigatexels/s
41 Gigapixels/s

16-core neural engine
Secure Enclave
USB 4
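
A quick sanity check on those GPU numbers (a sketch, assuming the widely reported ~1.28 GHz GPU clock, which Apple doesn't publish): each of the 128 execution units contains 8 FP32 ALUs, and each ALU can retire a fused multiply-add (2 FLOPs) per cycle, so 128 × 8 × 2 × ~1.28 GHz ≈ 2.6 TFLOPS. The 24,576 concurrent threads figure likewise works out to 24 threads in flight per ALU (24,576 / 1,024).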

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options are 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).
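
Incidentally, you can read the cluster layout above straight off a machine: macOS exposes it through sysctl, via the hw.perflevel0.* (performance) and hw.perflevel1.* (efficiency) keys present on Apple Silicon Macs. A minimal Swift sketch:

```swift
import Foundation

// Read an integer sysctl value by name; returns nil if the key doesn't exist.
func sysctlInt(_ name: String) -> Int? {
    var value: Int64 = 0
    var size = MemoryLayout<Int64>.size
    guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
    return Int(value)
}

// hw.perflevel0.* = performance cluster, hw.perflevel1.* = efficiency cluster
let pCores = sysctlInt("hw.perflevel0.physicalcpu") ?? 0
let eCores = sysctlInt("hw.perflevel1.physicalcpu") ?? 0
let pL2MB  = (sysctlInt("hw.perflevel0.l2cachesize") ?? 0) / (1 << 20)
let eL2MB  = (sysctlInt("hw.perflevel1.l2cachesize") ?? 0) / (1 << 20)

print("\(pCores) P-cores sharing \(pL2MB) MB L2, \(eCores) E-cores sharing \(eL2MB) MB L2")
```

On an M1 this should print 4 P-cores sharing 12 MB of L2 and 4 E-cores sharing 4 MB, matching the list above.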

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s (see the bandwidth note below)
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC (H.265), and ProRes
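
A note on that ~100 GB/s bandwidth figure: it follows from the memory configuration (assuming LPDDR5-6400 on a 128-bit bus, as teardowns reported): 6400 MT/s × 16 bytes per transfer = 102.4 GB/s. By the same arithmetic, the M1's LPDDR4X-4266 works out to 68.25 GB/s.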

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:
Mar 11, 2004
23,409
5,823
146
Let me follow up the above with two more references.

The first is

You don't have to buy into the argument being made for why AI matters economically. What matters is that plenty of people DO buy into the argument. Which means, as I see it, there are two takeaways relevant to recent discussions:

1 - If energy usage is going to grow as rapidly as expected, those with an edge in inferences per joule will have a substantial advantage. This would appear to work in Apple's favor, both in terms of (we expect) being able to offload more inference locally [which may still mean higher energy usage, but Apple isn't paying for it] AND in terms of Apple probably being able to provide the highest inferences per joule, even at scale.
The latter is not certain, but seems likely given Apple's obsessive (past and present) concern with reducing energy use anywhere and everywhere. One could imagine that new architectures designed from the ground up for inference might be more efficient, but I've not yet seen any indication of such.

Which suggests that things like Apple clusters and Apple-sold compute services have perhaps a more promising future (in terms of lower TCO) than it seems right now. Remember, our concern is, say, half a decade out; not just today's LLMs but the (possible? ridiculous?) future in which LLMs are no longer just a cute trick but the equivalent of the spreadsheet or the compiler, the tool that defines the work (and output, and compensation) of various professionals...

2 - The talk includes a slide, 13 minutes in, that I have not seen elsewhere, giving the amount of energy used in the US by the largest data center companies. The interesting item I see there is that Apple comes in at 2 GW - substantially behind Google and MS, but 2/3 of Amazon, and the same size as Meta/Facebook (and twice the size of X/Twitter).

People have frequently scoffed that Apple's native data center footprint is insignificant (or, more sensibly, have wondered what it is). This gives us elements of an answer - it's as large as Meta's, and not too different from Amazon's.
Which in turn suggests that if it makes business sense for those companies to develop various chips (eg Meta's inference server chip, or Graviton+Trainium+Nitro), it makes as much sense for Apple to do so -- REGARDLESS of whether these "server" chips are sold externally... Apple may be slightly smaller, but their server chip development is probably also cheaper given the infrastructure they can reuse. And Apple's footprint may grow rapidly, not just once Private Cloud Compute takes off, but also if/as they see value in moving other Apple services off AWS storage or Trainium training or whatever else they currently outsource.

My second recommendation link is

Again, you don't have to buy into my love of Mathematica; that's not the point. The point is that Mathematica is a vast system, ridiculously powerful but also, as a consequence, difficult [or at least slow, in terms of constant lookups] to use as soon as you move out of the area in which you usually work. This provides an extremely powerful and useful tool for improving that situation. I've not used things like Copilot for, say, C++, but this feels to me like not just what I'd hope for from Copilot but a whole lot more, in terms of handling optimization, refactoring, providing quick solutions, and so much more.

Now imagine something similar for other tools that are too complex for one person to fully understand - Photoshop, or Blender, or even Linux system management, or (and these may well exist as prototype internal tools) "assistants" for working on the Apple code base, or the MS code base -- tools that make use of company-wide conventions, can easily point you to possibly already-written versions of the function you want, and can at least provide a first pass at possible performance, security, or obsolescence issues, etc. Presumably most of the training material that went into the Wolfram Assistant (internal documentation, stackoverflow posts, code repositories, etc.) is available in more or less similar form inside Apple or MS.

It's with this in mind that the first part of my comment, I think, might make more sense. Look, sure, it's possible that we have, in 2024, gone as far as this particular set of ideas will take us; that the Wolfram Assistant's successes (and failures) are as good as it gets, that ChatGPT-4o (or whatever it is these days) is as good as it gets for this kind of interactive chat, and that nVidia's or Google's chip layout experiments are also as good as it gets. But it seems foolish to assume that, given the past two years.
Meaning that even IF you don't see anything that excites you in the current crop of LLM assistants, all that *probably* means is that someone hasn't yet created one for your particular interests.
But 2025 might be the year such an assistant is released for Windows sysadmins... Or for Linux kernel coders... Or for Star Wars fan fiction writers... Or...

Wolfram basically has everything in place to do this "first". Well, sure, maybe Copilot is first, but Wolfram is an "independent developer" in a way that possibly suggests, to people who are not Microsoft or Apple or Google, some combination of "hey, I could do that" and "OMG, if we don't do this but our competitors do".
The other tricky thing is that Wolfram has a history of charging for its products, so no-one is surprised (or ranting all over the internet) that this is an add-on cost. The same idea is *possible* for organizations that work with open source (for example Blender could be free but charge a monthly fee for an assistant, likewise for Ubuntu. Even Google could offer a Google Premium [cf X/Twitter Premium] that gives you ad-free search, a much more powerful AI element to the search, and various other things - some amount of image generation or video generation? Summaries of web sites based on knowledge of what interests you?).

Would these all then back down in the face of the usual mindless screams and rants from the masses? Hmm. We have a once-in-a-generation chance to restructure after the known issues of the free (i.e. ad-supported) web...

People also bought into Beanie Babies and HawkTuah coin. The argument that things are worthwhile just because some people think they should have value is, well, it's why the world f'ing sucks now. No idea why you're even talking like that while trying to claim that AI actually has value for everyone (and then proceeding to cite a specific use case that doesn't apply to 99.999% of the world's population).

Certainly there are legitimate and good uses of AI, but as you show, that's largely for highly specialized stuff (you're straight up delusional if you think normal people are using AI for math, outside of trying to cheat on tests and homework), by people working with very complex things that would otherwise be vastly time-consuming. Normal people are not using it for that, and it's getting tiring hearing people tout what they know is a lie (hence the constant couching: "you don't have to believe it's the best thing ever, just know that some people do, so therefore it is the most amazing thing ever!"). You're exactly like Sam Altman declaring that AI will "solve physics", trying to justify your optimism while AI burns humankind to the ground (and that's before we even use it for weapons).

That you're arguing it'll fix software issues, when the companies that would directly benefit from that now aren't even bothering to do so, and instead are trying to fraudulently sell it as being capable of things they know it isn't (there was a recent article citing internal Microsoft employees saying how Copilot cannot do most of the things they're advertising it can do now - that's straight up fraud and Microsoft knows it), you'd think should be a sign for you. I'm sure we'll hold Microsoft and the other AI sellers accountable for their fraud and lies... (nah, instead we'll let them accelerate climate change so dbags can make nudes of female co-workers/classmates and write papers for them so they can spend more time doomscrolling social media).

In another thread someone posted similar stuff to yours, claiming Google was doing that and it was shipping in the Pixel 6. And yet performance on Google's phones has been notoriously bad for years, meaning it's more smoke being blown, because if that's the result, then it's not at all impressive, and it seems like they'd have been much better off keeping all the people they fired to do that stuff. Further, it's not like it's resolved any of the issues. If AI coding were so amazing, you'd think Google would have used it to make a real, proper competitor to iMessage (which they've always been able to do, and as we found out, they're certainly not above paying Apple to get what Google wants).

There are a bunch of good uses for AI. I haven't seen any company market any of those tangible uses, because that stuff is a tool for someone with knowledge and expertise in a specific field to use, not something sold to the masses with claims of it solving all the world's problems. The people capable of using it for such are the ones who were already investigating it and helped build the stuff to accomplish those needs. Since you like to make real-world examples, what I'm talking about is stuff like surgery robots. They are amazing machines, doing incredible things, helping and built with input from proficient surgeons. The companies shouldn't be marketing them to tie someone's shoelaces or cut the crust off your sandwich. That's the equivalent of your argument for how AI actually totally is the best thing ever for everyone: look, people proving math theorems and working on theoretical physics can use AI, so you totally need to replace your calculator with a data center!

I digress. Instead, can we please stop posting this AI blather that has nothing to do with the actual topic? Nothing you posted is about Apple's AI. There's no reason for you to be posting it in this thread. Punt this to Highly Technical or OT since it belongs there far more than here.

That’s a good point; to be honest, I actively forget that particular mouse exists.

Understandable. The iPad Pro I bought earlier this year came with one, but I hadn't bothered using it ('cause I had a hunch I'd hate it and the stupid charge port on the bottom of the mouse), until I bought a Mac mini. I was surprised at how natural scrolling felt with it (zero learning curve; it's better than scrolling on a touchpad/screen, although I haven't used MacBook touchpads in a long while, so maybe that's just as good). I'd been using a Logitech MX Master 2S and was not a fan of scrolling on Apple devices, but I didn't use it often enough to delve into trying to change things. I need to test whether it switches between Apple devices or if I'll have to manually switch it every time, which could be a pain, but might not be a huge deal with how Sidecar and similar work (where you're basically controlling one device).
 
Reactions: Vattila

Mopetar

Diamond Member
Jan 31, 2011
8,200
7,027
136
People also bought into Beanie Babies and HawkTuah coin. The argument that things are worthwhile just because some people think they should have value is, well, it's why the world f'ing sucks now.

You'll need to propose some other system of value if you think there's a problem with that one. It's how the world works, how it's always worked, and alternative systems that tried to set values in other ways have failed catastrophically.

If you think that there's some ideal, perfect solution to this problem I have some bad news for you. Also, the world is the best it's ever been for the greatest percentage of people living on it. I don't think you have a good understanding of the past or how awful it was for most people.

If other people want to invest their money in AI because they see great potential in it, why should you care? It's not your money. Perhaps you should list all of the things you're investing in so that some jerk on the internet can similarly tear you to shreds for being so foolish for not doing as they themselves would have done.
 
Reactions: Vattila

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,754
106
Source
While the jury is deliberating in the Qualcomm-ARM case, and we wait, here is an interesting and somewhat related topic: what ALA (Architecture License Agreement) rate does Apple pay Arm?
This report from The Information suggests it is less than 30 cents per device, no matter how many cores the device has...

In the emails revealed during the case, Arm execs were looking to unwind two ALAs. One was Qualcomm's and the other was "Fender", which I assumed was Apple. That seems to be correct...
This is interesting. ARM's coming not only for Qualcomm, but Apple too.

The information article that is referenced (paywalled):

 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,754
106
Apple and Qualcomm aren't the only players who are making custom ARM cores. There are also:

- Ampere
- Huawei
- Google (rumoured)
- Nvidia (rumoured)

No doubt, they are all watching this case closely.

It also raises the question: why are there so many companies doing custom ARM cores via ALAs?

Are TLA (Technology License Agreement, i.e. stock Cortex) cores not performant enough? Are TLA royalty rates too high? Both?

Huawei of course has a legitimate reason for doing custom cores (sanctions). I am wondering more about Google and Nvidia...
 

jdubs03

Golden Member
Oct 1, 2013
1,155
799
136
Apple and Qualcomm aren't the only players who are making custom ARM cores. There are also:

- Ampere
- Huawei
- Google (rumoured)
- Nvidia (rumoured)

No doubt, they are all watching this case closely.

It also raises the question: why are there so many companies doing custom ARM cores via ALAs?

Are TLA (Technology License Agreement, i.e. stock Cortex) cores not performant enough? Are TLA royalty rates too high? Both?

Huawei of course has a legitimate reason for doing custom cores (sanctions). I am wondering more about Google and Nvidia...
I think they were making these decisions not knowing how good the X925 was going to be. That was quite a big leap in performance. Should be interesting to see how the X930 fares.
 

gdansk

Diamond Member
Feb 8, 2011
3,768
6,015
136

Doug S

Diamond Member
Feb 8, 2020
3,005
5,167
136
Source

This is interesting. ARM's coming not only for Qualcomm, but Apple too.

The information article that is referenced (paywalled):


I can't read the article, but Apple has a signed contract and isn't acquiring any third-party ARM licensees (and definitely wouldn't try to do so now), so there isn't anything ARM can do about it. If ARM didn't like the terms, they shouldn't have negotiated the new deal with Apple not that long ago - one that runs out beyond 2040.

Interesting aside: usually when I see a paywalled article I'll google the headline to see if some blogger somewhere has summarized it, or, depending on the site, whether the same article is carried unpaywalled elsewhere (msn.com seems to do this a lot). When I googled it (or actually DuckDuckGo'ed it, but googling got me the same thing), I was presented with a lot of articles, some dating back more than a year, with the same or a very similar title. They also have bizarre errors I can't imagine the original article would have made, since you would recognize those errors and not post the article if it contained them.

This article claimed that Apple's relations with ARM soured in 2016 when Apple tried to buy Nuvia, which threatened ARM's position in the industry. Later it claimed that Apple DID buy Nuvia. I'm 99% sure this was written by AI based on the article you linked. Does it look like it has a lot of the same stuff, or is this some totally different AI hallucination?
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,754
106
This article claimed that Apple's relations with ARM soured in 2016 when Apple tried to buy Nuvia, which threatened ARM's position in the industry. Later it claimed that Apple DID buy Nuvia. I'm 99% sure this was written by AI based on the article you linked. Does it look like it has a lot of the same stuff, or is this some totally different AI hallucination?
I have no idea, because I haven't read that paywalled article from The Information. I only linked it because that's what the person who made the tweet referenced.

But yeah, that thing you linked is probably some AI-generated slop.
 

Eug

Lifer
Mar 11, 2000
23,953
1,567
126
Oh definitely. I use an app called Scroll Reverser when using a USB mouse on macOS. Natural scrolling sucks.
For the USB mouse, did they take the scroll direction config option away? You should be able to turn “Natural scrolling” off for mice in System Settings -> Mouse and not need an extra app. It’s been a minute, as I got too used to the trackpad, but for scroll wheels on mice I 1000% agree that setting needs to be flipped.
They did not take that option away. I just use the regular built-in scrolling setting in Sequoia 15.2 to turn off "natural scrolling".



BTW, "natural scrolling" has been an option in OS X for eons. The main change is that more recent macOS versions turned it on by default, whereas previously the default setting was off.
 
Reactions: retnuh

Auream

Junior Member
Sep 4, 2020
1
2
81
BTW, "natural scrolling" has been an option in OS X for eons. The main change is that more recent macOS versions turned it on by default, whereas previously the default setting was off.
I'm pretty sure they switched the default to "natural" quite a while ago, perhaps Mac OS X 10.7 or 10.8? The annoying thing, at least if you have a non-Apple mouse (perhaps the Magic Mouse is different), is that you can't set "natural" for the trackpad but "standard" for a mouse wheel. That's why I need to use BetterTouchTool (although Scroll Reverser or a dozen other apps will do the job as well; BTT just does a bunch of other things I like).
 
Reactions: Gideon and Viknet

Eug

Lifer
Mar 11, 2000
23,953
1,567
126
For those of you looking to buy one of those new M4 series Macs, and not wanting to pay the Apple tax on storage:

I can confirm that the current crop of ASM2464PD-based USB 4 NVMe enclosures does not suffer from the compatibility issues that plagued USB 3 drives and enclosures. Occasionally with USB 3 SSDs, after sleep, the machine would wake up with a message saying the drive had been disconnected incorrectly. It would automatically reconnect, but that disconnect/reconnect could sometimes cause glitches in certain software.

This has NEVER happened with my 4 TB USB 4 drive (Samsung 990 Pro in a Qwiizlab ES40UR enclosure) paired with my M4 Mac mini over the past month, probably because macOS sees it as a PCIe drive.



I've since set up a second 4 TB USB 4 drive (Kioxia XG8 in a Hagibis MC40 enclosure), this time through a Thunderbolt 4 / USB 4 hub, for a total of 8 TB in external PCIe drives on this 512 GB M4 Mac mini. Hopefully the results are equally good through that hub.
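
If you want to confirm that macOS really is enumerating one of these enclosures as PCIe/NVMe rather than USB mass storage, here's a minimal Swift sketch that just shells out to system_profiler; drives in PCIe-tunneled mode appear under the NVMe section, while plain USB 3 enclosures show up under SPUSBDataType instead:

```swift
import Foundation

// Print the NVMe (PCIe-attached) section of System Information.
let task = Process()
task.executableURL = URL(fileURLWithPath: "/usr/sbin/system_profiler")
task.arguments = ["SPNVMeDataType"]
let pipe = Pipe()
task.standardOutput = pipe

do {
    try task.run()
    task.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    print(String(data: data, encoding: .utf8) ?? "")
} catch {
    print("Couldn't launch system_profiler: \(error)")
}
```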



I'm pretty sure they switched the default to "natural" quite a while ago, perhaps MacOS 10.7 or 10.8?
Yes, it was a long time ago. I just checked and it was 10.7 that they made natural scrolling the default, but it was available as an option before that.

The annoying thing, at least if you have a non-Apple mouse (perhaps the Magic Mouse is different), is that you can't set "natural" for the trackpad but "standard" for a mouse wheel.
Yeah, that's pretty damn stupid.
 
Last edited:

Eug

Lifer
Mar 11, 2000
23,953
1,567
126

I was the only person working on it for ~2 years, and I wrote the majority of the code in the first version that shipped. That said, I’m definitely glad that I eventually found someone else (and later a whole team) to work on it with me, and it wouldn’t have been as successful without that.
When people think of a binary translator, they usually just think of the ISA aspects, as opposed to the complicated interactions with the OS etc. that can consume just as much (or even more) engineering effort overall.
 

Mopetar

Diamond Member
Jan 31, 2011
8,200
7,027
136
Not really, just separate pieces of silicon that still end up packaged together on the SoC. Apple doesn't have any products or markets where they would have one without the other. The pieces that get combined still need to be designed around that and while this can enable the ability to exchange modules, it still requires careful design with that in mind. It just helps with yields in the same way that AMD had better results with their chiplets than trying to make monolithic CPUs.

Perhaps they'll get into the server space, but they don't have the software ecosystem for it and have generally been perfectly happy selling Mac minis in bulk to anyone that wants to rack mount them and deal with the rest of the logistics.

If anything, it may just mean more variety in the kinds of configurations you can get in one of their top-end Pro products. Currently you get a lot of GPU even if you may not want it, and this move may enable them to sell a computer with far more CPU (or GPU, NPU, etc.) cores for the users who want that. It won't be cheap, but the people who want it will still pay, because it's cheaper for them.
 

fkoehler

Senior member
Feb 29, 2008
214
175
116
I've only just recently turned to the dark side; however, Apple nowadays is clearly different from the Intel, PPC, and Moto days of the past.

Good article on Apple's nascent servers at https://appleinsider.com/articles/2...th-broadcom-to-make-ai-processors-for-servers
When you re-task the team instrumental in moving you from Intel to Apple Silicon, from a secretive high-performance chip project to the AI server chip, it's quite possible Apple is re-thinking taking another bite of that market.

A number of folks believe Apple's past server history is predictive of its future, and they could easily be wrong.
1. Apple has now constructed full s/w and h/w homogeneity on a level we haven't seen before, have we? Maybe IBM AIX, I suppose.
2. Apple has seen its truly unbelievable capital-hoovering machine met, and actually bested year-on-year, by nVidia. And nVidia has already made a play for ARM; as a rapidly rising and almost equally monstrously funded competitor, Apple would be nuts to ignore this threat.
3. Apple is of a size and momentum that it needs another industry with deep pockets available to continue its growth and capital extraction. AI is easy low-hanging fruit it can also pick, and redefining a part of the server ecosystem with its own vision of The Apple Way checks several of those boxes.

A couple of years ago I predicted Apple was going to be in for some plateauing after the M1 and its loss of key microarchitecture talent. A couple of folks pooh-poohed that, but the M2 and M3 were exactly that.
I'm more convinced now than then that Apple recognizes several serious threats/challenges to its continued growth/expansion slope.
nVidia is marshaling resources at a level similar to Apple's, which is really akin to limitless fusion, at least while the AI hype train rolls on.
If nVidia continues sucking all the capital out of the world while R&D'ing equally hard, Apple could see itself eclipsed hardware-wise, much as 'rear-view mirror, glued-together chips' Intel was by AMD.

Whoops, did I go O/T?

Yes, I expect Apple to start having more and more of these "interesting" things coming out in 2025. Apple has to be looking at that giant untapped server market and figuring out that the lessons learned before, while true, could be flipped on their heads if one postulates that the main lesson should be that Apple didn't redefine the entire server concept the way it usually does.
They certainly have the capital to start sharding off portions of that ecosystem, like cloud, and a competitor breathing down their neck to boot.
 

poke01

Diamond Member
Mar 8, 2022
3,035
4,008
106
Not really, just separate pieces of silicon that still end up packaged together on the SoC. Apple doesn't have any products or markets where they would have one without the other. The pieces that get combined still need to be designed around that and while this can enable the ability to exchange modules, it still requires careful design with that in mind. It just helps with yields in the same way that AMD had better results with their chiplets than trying to make monolithic CPUs.

Perhaps they'll get into the server space, but they don't have the software ecosystem for it and have generally been perfectly happy selling Mac minis in bulk to anyone that wants to rack mount them and deal with the rest of the logistics.

If anything, it may just mean more variety in the kinds of configurations you can get in one of their top-end Pro products. Currently you get a lot of GPU even if you may not want it, and this move may enable them to sell a computer with far more CPU (or GPU, NPU, etc.) cores for the users who want that. It won't be cheap, but the people who want it will still pay, because it's cheaper for them.
So Panther Lake but with actually good efficient packaging?
 
Reactions: Mopetar

Doug S

Diamond Member
Feb 8, 2020
3,005
5,167
136
3. Apple is of a size and momentum that it needs another industry with deep pockets available to continue its growth and capital extraction. AI is easy low-hanging fruit it can also pick, and redefining a part of the server ecosystem with its own vision of The Apple Way checks several of those boxes.

By the time Apple could get into the AI market en masse, the gold rush will be over. Few people seriously believe that the market for the AI picks and shovels Nvidia is selling is going to keep growing without end. The LLM strategy for achieving "intelligence" is already showing its limits with the problems OpenAI is having with ChatGPT 5, and even entrants trying to be extra careful to limit their AIs to minimize issues are getting egg on their face (see Siri's hallucinated "summary" of a BBC story that claimed Mangione had shot himself).

Once demand stops growing and supply catches up, AI computation will become a commodity, with players in that market unable to extract the massive margins Nvidia is currently enjoying. It makes perfect sense for Apple to build its own capability internally, since it already has most of the pieces and can tailor it exactly to its needs. Selling to third parties is a totally different animal, and if Apple tries, it will just be a lot of wasted money and effort. They can afford to waste an awful lot of money without doing any real harm to the overall business, but that effort would be better directed at their forte, which is the premium mass consumer market.
 