Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,777
1,349
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the number of GPU cores). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:



M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, ProRes
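(For reference, a back-of-the-envelope sketch of where the ~100 GB/s unified memory figure above comes from, assuming the commonly reported 128-bit LPDDR5-6400 configuration; the bus width and transfer rate here are assumptions, not Apple-published specs.)

```swift
// Rough derivation of the ~100 GB/s figure, assuming a 128-bit LPDDR5-6400 bus.
let busWidthBits = 128.0           // assumed total memory bus width
let transfersPerSecond = 6_400e6   // LPDDR5-6400: 6400 MT/s
let bytesPerTransfer = busWidthBits / 8.0
let peakGBps = transfersPerSecond * bytesPerTransfer / 1e9
print("Peak unified memory bandwidth ≈ \(peakGBps) GB/s")   // ≈ 102.4 GB/s
```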

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

name99

Senior member
Sep 11, 2010
482
362
136
I still maintain that since there's a ton of overlap between what GPUs, NPUs, and now AMX/SME do, building something that can fill all those roles makes a lot of sense. Now obviously there's a role for smaller-batch, lower-latency results and large-batch, higher-throughput results, so you can't get rid of the SME unit, but theoretically you could supplement it for really big tasks. At any rate, something flexible that could be partitioned, so that when you're pushing the GPU with a CAD program or game and your AI demand is low, it is mostly doing GPU tasks and only a small number of its cores are doing AI, and the reverse when you're just sitting at the GUI and whatever task you're running can use all the NPU power it can get.

Obviously there isn't perfect overlap between the implementation of the various functions, but there's enough that I think someone will eventually solve this. Apple is best positioned to do so since it controls the whole platform and doesn't have to support third-party GPUs, but Nvidia is inarguably closer to that goal as far as hardware design. The software piece is probably the harder nut to crack though.
(1) Dark silicon means (i.e., is an acknowledgement of the fact that) you can't run all the silicon at full speed all the time. So you might as well optimize different parts of the silicon for different jobs.

(2) The CPU is optimized for latency, other hardware is optimized for throughput. Yes you could use the CPU for everything -- at the cost of a lot more power and area. This was what Intel tried to force on the world for 15 years, and is the primary reason they are where they are right now, having lost mobile and rapidly losing much of the data center to nVidia.

(3) For Apple/ARM specifically, there's an obvious hierarchy of when you use what.
If you have a small throughput task (think a small regular loop), you execute it on NEON.
If the loop touches more than ~1,000 items, the overhead of going to SME/AMX is probably justified.
If the loop touches more than ~1,000,000 items, the overhead of going to the GPU (or ANE, depending on exactly what's being done) is probably justified.

This hierarchy isn't going to change except in the details. Apple will probably keep hacking away at the overhead costs to make using AMX/SME and the GPU justified for smaller tasks, simply because of the power benefits.
But the reason each of these (NEON, then AMX/SME, then GPU) is cheaper is because each gives up successively more control; each demands successively more structure in the algorithm and data access being implemented. Which is great IF THAT MEETS YOUR TASK -- but every task is different.
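A toy sketch of that hierarchy, using the rough crossover points from this post; the thresholds and names are illustrative assumptions, not Apple's actual dispatch logic:

```swift
// Illustrative only: pick a compute target by problem size, using the rough
// crossover points described above. Real code would also weigh data layout,
// precision, and the setup/teardown overhead of each unit.
enum ComputeTarget { case neonSIMD, amxSME, gpuOrANE }

func suggestedTarget(elementCount: Int) -> ComputeTarget {
    switch elementCount {
    case ..<1_000:     return .neonSIMD   // small regular loops stay on the CPU vector units
    case ..<1_000_000: return .amxSME     // offload overhead amortized over ~1k+ items
    default:           return .gpuOrANE   // very large batches justify GPU/ANE dispatch cost
    }
}

print(suggestedTarget(elementCount: 500))        // neonSIMD
print(suggestedTarget(elementCount: 50_000))     // amxSME
print(suggestedTarget(elementCount: 5_000_000))  // gpuOrANE
```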
 

pj-

Senior member
May 5, 2015
483
251
136
The analogy is inaccurate, as it assumes that there are capacities and capabilities that the PC can provide over and above post-PC devices. Everything that the general population would want to do with PCs will be possible, and preferable, on post-PC devices. They will have all of the compute power, memory, and storage capacity to do all the tasks and give all the results that people use computers for.

Not that there won't be enthusiasts who continue to use them because they're into the technology. But PCs will not be trucks; they will be horses. Horses used to be the backbone of transportation, but now only people who enjoy being around horses ride them. PC enthusiasts will have the same relationship. I will probably be one who keeps one or more PCs around.



iPadOS 18: yet another year of Apple putting a powerful, fully capable processor in a device, and of people who want to use it as a PC being frustrated by the operating system's inability to perform like the other PCs they have access to.

It is on purpose. Apple knows there are people who want full PC capability, and they are more than capable of providing it. They could have put a smooth, fully functional traditional PC environment on the iPad years ago, even before they split iPadOS off from iOS.

The reason is that while the iPad can do the "look at display, type on keyboard, point at stuff" that is the foundation of the PC paradigm, that is not its future, which is why the iPad will always be an inferior experience when used that way. A maximal "PC experience" is not why the M4 is in the new iPad, nor any M chip for that matter.

Another key aspect of what a PC is, is how people get the results they want from it. In addition to "look at display, type on keyboard, point at stuff," there is the software on the PC. Currently you have to find software with functions that let you manipulate your data to get the results you want.

MS Office, Photoshop, Blender, Ableton, DaVinci Resolve, QuickBooks, Visual Studio, etc. These are all examples of the PC paradigm. Developers define a set of tasks that would allow their users to get useful results, then code those functions in a systematic way. Then the user has to learn how to get to, and use, those functions to achieve the specific results they want with their own data. Most of those users do not want to learn the software; it is simply necessary for them to get the results they want.

While technically all computers owned by persons are personal computers, I am talking about the death of the Personal Computer: established by IBM and Microsoft in 1981, given graphical user interfaces by Apple and Microsoft in the mid-1980s, and, with many refinements, essentially what we are still using.

This is what is going to die for the general market, and the change will affect what hardware people use to get their results. The post-PC era, the shoots of which are in 2024/2025 just barely piercing up above the soil, is the machine learning era.

Instead of providing functions that users have to learn to get their results, deep learning algorithms will be given the data that allows them to acquire the capability of providing the results the user wants, specific to that user's datasets.

From years ago to any given moment now, Apple could put on the iPad a most fantastic way of organizing, keeping track of, and finding your files. They could, in fact, have also made improvements to filesystem management on Macs. The reason they haven't is that they're not interested in iPads replacing PCs by being as good as or better than PCs at what those PCs do.

For nearly every user, files and filesystems are the best current means to achieve their goal, which is to get the results they want from their data. The "What is a PC?" plan for iPads is just that: the user gets results from their data upon request, with no thought of or relationship to a filesystem, if there even is one. Already, large language models can extract the data you want from a single giant multi-GB blob of model weights.

Machine learning researchers are now working hard to figure out how to extract all types of data from these blobs of model weights: text, image, and video (including particular subjects in the image and video data), and numbers and patterns in those numbers of any type, whether financial, scientific, engineering, or more.

There is still a lot of work needed for it to be effective, accurate, and comprehensive. But once users are able to retrieve and interact with their data upon request, very few will care how smooth their device's filesystem is, or whether it even has one.

That is what I mean by the death of the PC: the system of users organizing and tracking data in a set of files, then using applications with sets of functions that the user must learn so they can pull data from those files and work on it with those functions.

And it absolutely will happen.

I don't think it's particularly helpful to ascribe thoughtful intent to things which apple is capable of doing but chooses not to.

They could have done widgets a decade before they did, or customizable app locations, or an ipad calculator app, or multitasking, or any of a number of other things they chose to not do, until they did.

The m4 ipad pro i'm typing this on feels like a big phone with a few "productivity" features haphazardly slapped on. Not a whole lot about it feels intentional, really. Even when I'm not doing laptop-y things (like typing long-ish forum posts) that it's clearly not intended for, it is a more generally awkward experience than I was expecting as a first time ipad owner.

Also, while I generally agree with your vision of what computing may look like in the future, I don't see how that would require or even benefit from the form factor of an ipad vs. a laptop or PC or anything else. I can envision headless voice controlled systems like Siri that deeply understand my data and intentions and are therefore actually useful, or voice/thought controlled systems that project images through unobtrusive AR glasses. What I don't see in the future is a large battery powered slab of glass that is a bad phone and a worse laptop.

I think apple's intent is better described as "we don't want you to do that" which is not the same as "in the future you won't need to do that"
 
Reactions: dr1337

Eug

Lifer
Mar 11, 2000
23,777
1,349
126
The m4 ipad pro i'm typing this on feels like a big phone with a few "productivity" features haphazardly slapped on. Not a whole lot about it feels intentional, really. Even when I'm not doing laptop-y things (like typing long-ish forum posts) that it's clearly not intended for, it is a more generally awkward experience than I was expecting as a first time ipad owner.
For surfing forums, Safari on the iPad is treated as a desktop browser, whereas Safari on the iPhone is treated as a phone browser. IOW, the interfaces for forum posts on iPads and iPhones are fundamentally different.

However, if you're using the on-screen keyboard to type on the iPad, I agree that it can be awkward and can get tiresome quickly. If you want to use the iPad as a laptop, it's best to configure it as one, with a Magic Keyboard or another third-party keyboard.

The ironic part though is that an iPad with Magic Keyboard will weigh more than a Mac of similar size.
 
Reactions: igor_kavinski

pj-

Senior member
May 5, 2015
483
251
136
For surfing forums, Safari on the iPad is treated as a desktop browser, whereas Safari on the iPhone is treated as a phone browser. IOW, the interfaces for forum posts on iPads and iPhones are fundamentally different.

However, if you're using the on-screen keyboard to type on the iPad, I agree that it can be awkward and can get tiresome quickly. If you want to use the iPad as a laptop, it's best to configure it as one, with a Magic Keyboard or another third-party keyboard.

The ironic part though is that an iPad with Magic Keyboard will weigh more than a Mac of similar size.
I use the Magic Keyboard, which is actually great in terms of the typing and trackpad feel. If I am ever going to type more than 10 words, I snap it to the keyboard, where I can type 5x faster. The weight is borderline comical, but I mostly use it on the couch or in bed, and at worst it is a stable stand.

Most apps are of course built on the assumption that you will not be using a physical keyboard and a lot of minor annoyances result from that when you do use a keyboard. I am also kinda disappointed in the app ecosystem in general. I see a lot of abandoned apps on the store, or things that are just the iphone version.

I feel like I'm in a ferrari going to the grocery store with this thing
 
Reactions: igor_kavinski

The Hardcard

Member
Oct 19, 2021
182
273
106
You are speaking word salad.

The whole point of matrix multiply is that it's ridiculously NON-memory-bandwidth intensive. It's a running joke in HPC that if you're boasting about your great matrix multiply performance what you're actually saying is our memory sucks.
There *are* neural ops that are bandwidth intensive but matrix multiply is not one of them. The concern with quantizing (and otherwise shrinking) large LLMs is primarily to reduce memory footprint; reduced execution bandwidth is just a nice side benefit that reduces power a little.

Second I have no idea why you imagine the ANE is (sharing?) the CPU L2. Why would it? Or what does this even mean in the context of say an M Pro with two P clusters.

The ANE has its own local storage (manually managed, so not a "cache" but plays the same sort of role as an L2 cache) along with DMA into that storage.
I don't know that anyone has tested the bandwidth into the ANE, but I expect it's "appropriate" given that Apple seems to get this correct for every other unit on the SoC.

The fact that the entire internet is telling you that the bandwidth of an NPU is what matters (when they aren't telling you that the TOPs of an NPU is what matters...) doesn't make it true.
I expect there are also papers by other people, but there are DEFINITELY papers by Apple that discuss this issue, comparing the performance [quality and speed] of multiple networks relevant to Apple's interests to both required bandwidth and required TOPs, and the conclusion is that there's very little correlation right now...
This is inaccurate. Running high-parameter neural networks on local devices is bandwidth limited. Matrix operations are efficient; however, when, as in transformers, every weight needs to be moved through the GPU for each token, it adds up to a lot of data.

The math is simple. Take Llama 3 70B. At 4-bit quantization, the model weights occupy 35 GB of RAM. For a Max chip (all generations are equally fast here, since this is memory bandwidth dependent) it is straightforward: 400 GB/s of memory bandwidth divided by 35 GB of model weights puts you at a theoretical maximum of 11.4 tokens per second, or about 9 words per second. In real-world use, people are getting about 8 tokens, or 6.5 words, per second, which is tolerable but not ideal.

The next Ultra will fit 4-bit quantized Llama 3 400B, but 800 GB/s of memory bandwidth divided by 200 GB of model weights gives you at most 4 tokens a second. Very slow. Memory bandwidth is critical to practical usefulness. Remember, people are trying to chat with these models; around 3 words per second makes for a slow chatbot that will leave many listeners or readers impatient.
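The arithmetic above as a minimal sketch, using the numbers quoted in this post (real-world throughput lands below this theoretical ceiling):

```swift
// Decode is bandwidth bound: every weight streams through the compute units
// once per generated token, so tokens/s ≈ memory bandwidth / model size.
func maxTokensPerSecond(bandwidthGBps: Double, weightsGB: Double) -> Double {
    bandwidthGBps / weightsGB
}

// Llama 3 70B at 4-bit ≈ 35 GB of weights on a 400 GB/s Max:
print(maxTokensPerSecond(bandwidthGBps: 400, weightsGB: 35))   // ≈ 11.4 tokens/s
// Llama 3 400B at 4-bit ≈ 200 GB of weights on an 800 GB/s Ultra:
print(maxTokensPerSecond(bandwidthGBps: 800, weightsGB: 200))  // 4.0 tokens/s
```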

While token generation speeds could be improved (LPDDR5X-10700 giving the Max 668 GB/s, or better yet, LPDDR6), the concern I am raising is the biggest weakness in using Macs for large language models (and probably other aspects of generative AI as well): they do horribly on large prompt processing and handling large context windows. These are compute limited; for simple prompts it's not a big deal, but for huge 3,000+ token prompts and extended back-and-forth sessions that require the model to keep track of the entire chat, this causes Mac performance to plummet to abysmal response times.

The metric here is time to first token. For a 14,000-token context window, Nvidia cards maintain a chatty time to first token (less than 10 seconds), while an M2 Ultra may take 20 or more *minutes* to start responding. Unacceptably poor for chatting. This is currently the main roadblock for Nvidia users otherwise tempted by the siren song of a top-RAM Mac's ability to run huge-parameter language models. Having to wait half an hour for every response is a deal breaker for many.
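A hedged sketch of why prompt processing behaves so differently: prefill cost scales with prompt length and is bounded by matrix-multiply compute (roughly 2 FLOPs per parameter per prompt token for a dense transformer), not by streaming the weights once. The sustained-TFLOPS figures below are placeholders for illustration, not measured numbers, and the estimate ignores attention's quadratic term, so real time to first token is worse:

```swift
// Very rough prefill estimate: time ≈ 2 * params * promptTokens / sustained FLOPS.
// Assumes perfect utilization, so it is a lower bound on time to first token.
func prefillSeconds(paramsBillions: Double, promptTokens: Double, sustainedTFLOPS: Double) -> Double {
    (2.0 * paramsBillions * 1e9 * promptTokens) / (sustainedTFLOPS * 1e12)
}

// Hypothetical sustained throughputs, purely for illustration:
print(prefillSeconds(paramsBillions: 70, promptTokens: 14_000, sustainedTFLOPS: 25))   // ≈ 78 s
print(prefillSeconds(paramsBillions: 70, promptTokens: 14_000, sustainedTFLOPS: 300))  // ≈ 6.5 s
```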

That is the basis for my posts about increasing matrix multiply compute on Macs. I didn't bring up the location of the ANE because I think it's using the L2 cache. Instead, again, my question is how much memory bandwidth is available in that area of the chip, given that the CPU clusters don't use the full SoC bandwidth.

The point is that boosting the TOPS of the ANE higher than what the GPU is capable of (and I don't know what the low-precision TOPS of the M3 Max GPU is) won't help the next Max chip run Llama 3 70B satisfactorily if the compute block the model weights move through to generate tokens does not have access to the full 400 GB/s of SoC bandwidth. The number of tokens generated per second will drop by the same percentage that the ANE's memory bandwidth is of the total SoC bandwidth.

If Apple uses the ANE as the path to increase matmul TOPS then the token generation will be limited by how much memory bandwidth it has access to. That’s why I brought that up.

Hopefully now you can see why it is important. Given that, do you know how much memory bandwidth the ANE has access to? I am contending that it is important for Apple to increase matrix compute in a location on the die where it can cycle the model weights through at full SoC memory bus speed. If the ANE has access to all 800 GB/s in an Ultra, then it's a great place to jack up the TOPS; otherwise…
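And the scaling concern itself, sketched under the same assumptions; the ANE bandwidth figure is a placeholder, since as noted it doesn't appear to have been publicly measured:

```swift
// If the unit doing the matmuls can only pull weights at a fraction of the
// SoC's total bandwidth, decode throughput drops by that same fraction.
let socBandwidthGBps = 400.0           // M-series Max, full SoC bandwidth
let weightsGB = 35.0                   // Llama 3 70B at 4-bit
let assumedANEBandwidthGBps = 100.0    // placeholder, not a measured value

print(socBandwidthGBps / weightsGB)          // ≈ 11.4 tokens/s at full bandwidth
print(assumedANEBandwidthGBps / weightsGB)   // ≈ 2.9 tokens/s at 1/4 of the bandwidth
```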

Well, Apple would also need to allow it to be directly programmed. I don't think the Accelerate framework is acceptable to the state-of-the-art machine learning community.
 
Last edited:
Reactions: mikegg
Jul 27, 2020
19,315
13,250
146
The iOS security layer that prohibits what you describe dates to when the iPhone was first developed.

Apple is not about to redesign two of their platforms, with 10x the user share of MacOS to appease a handful of MacOS users.

They could easily do it if they weren't arrogant and adamant about forcing their users to eat whatever dogfood they prepare. Who says that security layer cannot be carefully opened up to allow macOS compatibility yet kept secure from malicious users/software? With so many brilliant software engineers at their disposal, all they have to do is utter the command and it will be done. But like that developer I quoted earlier said, they are cowards. And in my eyes they would still be cowards even with macOS support, unless they allow official Linux boot support on the iPad.
 

johnsonwax

Member
Jun 27, 2024
72
137
66
They could easily do it if they weren't arrogant and adamant about forcing their users to eat whatever dogfood they prepare. Who says that security layer cannot be carefully opened up to allow macOS compatibility yet kept secure from malicious users/software? With so many brilliant software engineers at their disposal, all they have to do is utter the command and it will be done. But like that developer I quoted earlier said, they are cowards. And in my eyes they would still be cowards even with macOS support, unless they allow official Linux boot support on the iPad.
We just had a global systems failure that hinged on 3rd party software on the largest PC platform in the world that was designed to solve this very problem which Apple solved in iOS in 2008. If you're going to throw something like this out there - pick a better day to do it. You'll look less dumb.
 

poke01

Golden Member
Mar 8, 2022
1,923
2,447
106
We just had a global systems failure that hinged on 3rd party software on the largest PC platform in the world that was designed to solve this very problem which Apple solved in iOS in 2008. If you're going to throw something like this out there - pick a better day to do it. You'll look less dumb.
Yeah, agreed. Also, Linux had the same problem a couple of months ago.

Security is an illusion: nothing is secure, and privacy is a fool's errand. What Linux offers is freedom from Apple and MS, but it doesn't give you security or privacy; that is up to the user to maintain, which is the freedom Linux offers power users.

Allowing Linux to be bootable on the iPad goes against what an iPad stands for: a middle ground between a phone and a computer.
 
Jul 27, 2020
19,315
13,250
146
Allowing Linux to be bootable on the iPad goes against what an iPad stands for: a middle ground between a phone and a computer.
Not everyone would install Linux on it. But the option should be there for those who want to. Heck, I'll go as far as suggesting that Apple introduce a subscription feature for running Linux. One more income stream for them.
 

The Hardcard

Member
Oct 19, 2021
182
273
106
I don't think it's particularly helpful to ascribe thoughtful intent to things which apple is capable of doing but chooses not to.

They could have done widgets a decade before they did, or customizable app locations, or an ipad calculator app, or multitasking, or any of a number of other things they chose to not do, until they did.

The m4 ipad pro i'm typing this on feels like a big phone with a few "productivity" features haphazardly slapped on. Not a whole lot about it feels intentional, really. Even when I'm not doing laptop-y things (like typing long-ish forum posts) that it's clearly not intended for, it is a more generally awkward experience than I was expecting as a first time ipad owner.

Also, while I generally agree with your vision of what computing may look like in the future, I don't see how that would require or even benefit from the form factor of an ipad vs. a laptop or PC or anything else. I can envision headless voice controlled systems like Siri that deeply understand my data and intentions and are therefore actually useful, or voice/thought controlled systems that project images through unobtrusive AR glasses. What I don't see in the future is a large battery powered slab of glass that is a bad phone and a worse laptop.

I think apple's intent is better described as "we don't want you to do that" which is not the same as "in the future you won't need to do that"
My vision of future computing has nothing to do with hardware form factors, iPad or otherwise. It is the software that fully determines what that vision is.

The iPad is just something that Apple is doing currently on its journey to that goal. There’s nothing about iPad hardware that dictates whether it will or will not be around at the end of that goal.

And Apple is extremely deliberate and intentional about every aspect of the products they provide, software and hardware. I am an old head who got into computing before the Macintosh and Windows existed. I heavily use an M1 iPad Pro, and I could easily and joyfully use macOS on it, but it is clear to me that they don't intend to put macOS on the iPad and that it is pointless to wait for it to happen.

“We don’t want you to do that” is not an Apple intent. They are trying to provide a secure and stable environment, which limits what a lot of enthusiasts want to do with their computing devices. There are no arbitrary restrictions.
 

johnsonwax

Member
Jun 27, 2024
72
137
66
Not everyone would install Linux on it. But the option should be there for those who want to. Heck, I'll go as far as suggesting that Apple introduce a subscription feature for running Linux. One more income stream for them.
So, I do think that Apple should be required to provide indefinite OS support on all of their hardware, and allow users to install linux or some alternate OS on Mac hardware - providing support to Asahi and/or Microsoft if needed. Apple wouldn't exactly be burdened by providing security patches for their OSes indefinitely.

It doesn't make sense to apply that to iPad when there's no similar demand to do that for other restricted hardware like PS5, etc.
 
Reactions: scannall

scannall

Golden Member
Jan 1, 2012
1,960
1,678
136
So, I do think that Apple should be required to provide indefinite OS support on all of their hardware, and allow users to install linux or some alternate OS on Mac hardware - providing support to Asahi and/or Microsoft if needed. Apple wouldn't exactly be burdened by providing security patches for their OSes indefinitely.

It doesn't make sense to apply that to iPad when there's no similar demand to do that for other restricted hardware like PS5, etc.
It would be a good thing to provide driver blobs to Asahi. They likely already have a Windows on ARM Boot Camp built and working, but until Microsoft starts selling Windows on ARM licenses without a device, they couldn't legally release it.
 
Jul 27, 2020
19,315
13,250
146
So, I do think that Apple should be required to provide indefinite OS support on all of their hardware, and allow users to install linux or some alternate OS on Mac hardware - providing support to Asahi and/or Microsoft if needed. Apple wouldn't exactly be burdened by providing security patches for their OSes indefinitely.
Not even sure if you are the same person saying all that, and Doug will probably disagree with it, but I appreciate you saying that.
 

johnsonwax

Member
Jun 27, 2024
72
137
66
Not even sure if you are the same person saying all that, and Doug will probably disagree with it, but I appreciate you saying that.
I don't dispute that there is platform lock-in that regulators should be able to address. My issue with the EU is that they're doing it in the dumbest possible way. The Mac is already, by its nature, a security-compromised platform, so requiring a degree of openness around that isn't unreasonable to me. Forcing Apple to provide drivers, or indefinite support for a minimal bootloader into a hypervisor environment, also doesn't seem unreasonable. Requiring Apple to provide indefinite support for iOS hardware isn't unreasonable given the other platform-size requirements that trigger regulatory intervention. But requiring Apple to tear down security on the platform, which is the primary customer story for iOS, is just stupid and destructive.

So you demanding linux support on M4 Macs isn't unreasonable. Demanding it on M4 iOS devices, is. It's like asking for a gas engine in your Tesla. You're missing the goddamn point.
 
Reactions: scannall

Doug S

Platinum Member
Feb 8, 2020
2,671
4,511
136
Not everyone would install Linux on it. But the option should be there for those who want to. Heck, I'll go as far as suggesting that Apple introduce a subscription feature for running Linux. One more income stream for them.

Why do you keep banging on about wanting Apple to completely change their business to conform to YOUR wishes?

There is not a big market out there for Apple to allow people to run Linux on an iPad Pro. If they made it free they probably wouldn't get 100 people to do it. If you want to run Linux on a tablet, get an Android tablet, flash a new ROM, and go to town. Oh but that's too much work, right? You want someone else to make it easy for you and have it "just work" with minimal effort, I suppose? Huh, what a concept!
 
Reactions: MuddySeal

johnsonwax

Member
Jun 27, 2024
72
137
66
I think Apple is fine without DIY/linux/windows; certainly Intel and MS proved that this week.
As Apple has improved their device longevity, their OS support hasn't kept pace. These M1 devices are going to last a decade and still be as fast as new Windows machines being sold. Users should have the means to keep those devices in use and in support. And if that's Linux, that's reasonable; Apple can hand off support to them. Apple can also afford to keep some greybeards on staff to keep security patches coming for 30 years of OS releases.
 