Discussion AMD Acquires Xilinx


DisEnchantment

Golden Member
Mar 3, 2017
1,687
6,235
136

Straight from the Horse's mouth
Strategic transaction strengthens AMD’s industry-leading technology portfolio

  • Expands AMD’s rapidly growing data center business
  • Xilinx, the No. 1 provider of adaptive computing solutions, increases AMD TAM to $110 billion
  • Immediately accretive to AMD margins, cash flow and EPS
  • All stock transaction with combined enterprise value of approximately $135 billion

All stock. No cash. Could not have been a better way to make use of that inflated market valuation.
Lots of people were questioning why AMD was not capitalizing on its market valuation. Well, this is it.
Management and Board of Directors
Dr. Lisa Su will lead the combined company as CEO. Xilinx President and CEO, Victor Peng, will join AMD as president responsible for the Xilinx business and strategic growth initiatives, effective upon closing of the transaction. In addition, at least two Xilinx directors will join the AMD Board of Directors upon closing.

They got decent advisors too.
Advisors
Credit Suisse and DBO Partners are acting as financial advisors to AMD and Latham & Watkins LLP is serving as its legal advisor. Morgan Stanley is acting as lead financial advisor to Xilinx. BofA Securities is also acting as a financial advisor and Skadden, Arps, Slate, Meagher & Flom LLP is serving as legal counsel.

Ex-AMD exec rejoins the family, bringing Xilinx with him
“We are excited to join the AMD family. Our shared cultures of innovation, excellence and collaboration make this an ideal combination. Together, we will lead the new era of high performance and adaptive computing,” said Victor Peng, Xilinx president and CEO. “Our leading FPGAs, Adaptive SoCs, accelerator and SmartNIC solutions enable innovation from the cloud, to the edge and end devices. We empower our customers to deploy differentiated platforms to market faster, and with optimal efficiency and performance. Joining together with AMD will help accelerate growth in our data center business and enable us to pursue a broader customer base across more markets.”
 
Last edited:

Joe NYC

Platinum Member
Jun 26, 2021
2,331
2,942
106
If you all allow me to toot my horn on this happy occasion for me as an AMD shareholder, I predicted the AMD+Xilinx merger back in 2019:

What do you think would be the next acquisition opportunity for AMD?

I think the best one would be in networking, someone with innovative switches. I have not researched this market. The only one who might be pissed off would be Cisco, not sure if any others.

Another, and this would be a big one, would be MediaTek, but it might be too big. That one would piss off primarily Samsung...
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
You mean they'll move open source ROCm to closed source? Doesn't seem like you know what you are talking about.

Open source and proprietary are not mutually exclusive concepts. Just because ROCm is open source doesn't mean it'll run on anything else but AMD HW ...

Back to the main argument at hand, people should not assume that SYCL will be in their plans because it absolutely isn't, as explicitly laid out in the presentation. AMD eventually wants their FPGAs to be programmed not with SYCL but with HIP, as is already the case with their GPUs. This fact in plain sight goes over people's heads all the time ...
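To make that concrete, here is a minimal sketch of what HIP code for an AMD GPU looks like today (plain HIP runtime API, compiled with hipcc); whether and how this model carries over to FPGAs is exactly what remains to be shown ...

// Minimal sketch: HIP vector add as it is written for AMD GPUs today.
// Assumes the standard HIP runtime API and the hipcc compiler; whether the
// same model extends to FPGAs is AMD's roadmap claim, not shown here.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, bytes);
    hipMalloc((void**)&db, bytes);
    hipMalloc((void**)&dc, bytes);
    hipMemcpy(da, a.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), bytes, hipMemcpyHostToDevice);

    vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // same triple-chevron launch as CUDA

    hipMemcpy(c.data(), dc, bytes, hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", c[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}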
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
I think supercomputing users are interested in SYCL.

It's just too bad that they don't have an army of compiler engineers to make this happen, and it would be antithetical to extracting maximum performance ...

Making SPIR-V kernel compilers for every hardware vendor would mean introducing another layer of abstraction with performance costs. Instead of just having a simple and powerful offline compiler where the source language is directly translated to native hardware binaries, you now have a source language that gets compiled to some SPIR-V virtual machine (or another multi-architecture intermediate representation), where a driver will have to JIT the result to produce native hardware binaries ...

I can't seriously believe HPC programmers would pass on the joys of offline compilation such as more performance, no drivers, no JITing, and best of all fewer bugs/lower maintenance. Abstracting all hardware behind a virtual machine just ruins everything, all in the name of an industry that clearly forsakes portable standards ...
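To make the objection concrete, here is a hedged sketch of the run-time step being criticised, written against the standard OpenCL 2.1+ host API (error handling omitted; "kernel.spv" and "vadd" are hypothetical names): the application ships SPIR-V and the vendor driver compiles it to native code at run time, instead of the binary being produced once, offline, by something like hipcc or nvcc ...

// Hedged sketch of the run-time JIT path described above: the application
// ships a SPIR-V module ("kernel.spv" is a hypothetical file name) and the
// vendor driver compiles it to native hardware binaries at run time.
// Standard OpenCL 2.1+ host API; error handling omitted for brevity.
#include <CL/cl.h>
#include <fstream>
#include <vector>

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

    // The offline front-end only got as far as SPIR-V; no native ISA exists yet.
    std::ifstream f("kernel.spv", std::ios::binary);
    std::vector<char> il((std::istreambuf_iterator<char>(f)),
                         std::istreambuf_iterator<char>());

    cl_program prog = clCreateProgramWithIL(ctx, il.data(), il.size(), nullptr);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);  // driver JITs here, at run time

    cl_kernel k = clCreateKernel(prog, "vadd", nullptr);  // hypothetical kernel name
    // ... set arguments and enqueue as usual ...

    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}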
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
Open source and proprietary are not mutually exclusive concepts. Just because ROCm is open source doesn't mean it'll run on anything else but AMD HW ...

Back to the main argument at hand, people should not assume that SYCL will be in their plans because it absolutely isn't, as explicitly laid out in the presentation. AMD eventually wants their FPGAs to be programmed not with SYCL but with HIP, as is already the case with their GPUs. This fact in plain sight goes over people's heads all the time ...
May I suggest a new user name: ThatBlackNWhitePainter?

HIP is a reimplementation of CUDA. Since AMD's compute focus so far has been GPUs, CUDA is the ecosystem to topple, and HIP's compatibility with CUDA is a requirement of that. Xilinx with its FPGAs is in no such situation. And as @Vattila mentioned before, Intel's oneAPI is also looking to expand compatibility. This is essentially turning into a competition of open tools ecosystems, where the one with the best TAM support is likely to win. CUDA being limited to Nvidia won't be it, ROCm HIP extending it to AMD may just be a temporary extension of that limitation, and SYCL may well turn out to be the common ground with the widest support.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
HIP is a reimplementation of CUDA. Since AMD's compute focus so far has been GPUs, CUDA is the ecosystem to topple, and HIP's compatibility with CUDA is a requirement of that. Xilinx with its FPGAs is in no such situation. And as @Vattila mentioned before, Intel's oneAPI is also looking to expand compatibility. This is essentially turning into a competition of open tools ecosystems, where the one with the best TAM support is likely to win. CUDA being limited to Nvidia won't be it, ROCm HIP extending it to AMD may just be a temporary extension of that limitation, and SYCL may well turn out to be the common ground with the widest support.

HIP is NOT a reimplementation of CUDA so let's just get this out of the way. The only similarity HIP has with CUDA is their programming concepts, which AMD attempts to mimic as much as possible. Everything else between them is different, such as the compilers: NVCC is the accepted CUDA compiler, while HIP-CLANG is the appropriate compiler for HIP ...

As for Intel being interested in making their compute stack compatible with other vendors, we'll see if that's actually the case, but they have yet to follow through ...

For now, expect to program Xilinx FPGAs with HIP in the future, because that's what they'll officially support. Expecting SYCL to somehow unify everything else is unfounded hype when there's no sign of this from other hardware vendors ...
 

Vattila

Senior member
Oct 22, 2004
805
1,394
136
If anything, AMD acquiring Xilinx means death for SYCL. Xilinx will likely drop all of their future plans for working with SYCL and will start doubling down on their proprietary ROCm stack.

That is what I fear, too. Well, I do not fear for the future of SYCL — I feel pretty confident it will be bright, eventually. But I fear Nvidia and AMD will leave it to Intel and others to promote it, while Nvidia tries to hold on to their head start with CUDA, and AMD focusses all their resources on providing a painless migration path from CUDA to ROCm with HIP and automated translation tools (HIPify).

That said, I have been encouraged lately by AMD's apparent participation in the SYCL Working Group, according to SYCL chair Michael Wong's latest slides:



SYCL Webinar - December 2021 - YouTube

Notably absent is Nvidia. However, AMD+Xilinx, Intel, Arm and Qualcomm have all joined. Also to be noted is that SYCL chair Michael Wong, VP of R&D at CodePlay, is a long-time C++ standard committee member, highly influential in setting the direction for ISO C++, especially regarding support for programming heterogeneous systems. The explicit aim is for SYCL to be subsumed by ISO C++ at some point. The adopted development model for SYCL will track ISO C++.

people should not assume that SYCL will be in their plans because it absolutely isn't, as explicitly laid out in the [Xilinx + AMD Converged Runtime Technology Demonstration (@SuperComputing 2020)].

There is nothing in that presentation that says anything at all (and hence nothing "explicit") about SYCL, regrettably.

Whatever the plans are for the programming model, i.e. to extend HIP to FPGAs or adopt SYCL as a more general heterogeneous programming model (which seems to be the inevitable direction of ISO C++), we know that SYCL can be very nicely implemented using ROCm and consequently CUDA as backends, as hipSYCL has so brilliantly demonstrated. CodePlay is working on direct support for Nvidia and AMD GPUs in DPC++/oneAPI as well, and I presume they will use ROCm and CUDA as backends to achieve that. With that support in place, and with DPC++ upstreamed to the mainstream Clang C++ compiler, things are starting to look good for SYCL on Linux. If Microsoft could just get aboard and replace their outdated C++ AMP dialect with SYCL, forcing AMD to put some effort into ROCm support on Windows, we would have pretty broad industry support for SYCL.
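To illustrate how little the backend matters from the programmer's point of view, here is a small sketch (SYCL 2020 syntax) that just lists whatever platforms and devices the installed SYCL implementation exposes; with hipSYCL those can be ROCm or CUDA devices, with DPC++ Level Zero, OpenCL or CUDA ones, and the source does not change.

// Small sketch (SYCL 2020 syntax): list whatever the installed SYCL
// implementation exposes. With hipSYCL these can be ROCm or CUDA devices,
// with DPC++ Level Zero/OpenCL/CUDA ones; the source code does not change.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    for (const auto& platform : sycl::platform::get_platforms()) {
        std::cout << platform.get_info<sycl::info::platform::name>() << "\n";
        for (const auto& device : platform.get_devices()) {
            std::cout << "  " << device.get_info<sycl::info::device::name>() << "\n";
        }
    }
    return 0;
}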

Making SPIR-V kernel compilers for [blah blah blah]

Those are backend details. SYCL concerns the programming model, akin to HIP and CUDA C++. The kernel code to be run on the accelerator is written in pure C++. What it is compiled into is an implementation detail not specified by the SYCL standard.

HIP is NOT a reimplementation of CUDA so let's just get this out of the way.

Are you trying to trip us up on a technicality? Of course, due to copyright and trademark law, AMD cannot call their programming model CUDA, but HIP is as close as they can get without breaking any laws, it seems. Functions have identical specifications, with names differing only in prefix ("cu" vs "hip"). Likewise, the support libraries are similarly specified and named (e.g. "cuBLAS" vs "hipBLAS/rocBLAS"). The following porting guide for the LUMI supercomputer is a good read on the similarities:

"The HIP API syntax is very similar to the CUDA API, and the abstraction level is the same, meaning that porting between the two is easy [...] The big distinction to make here is that HIP is the parts that are portable between AMD and Nvidia hardware, whereas the rest of [the] ROCm stack is for AMD hardware. [...] The way HIP handles the GPU hardware is the same as CUDA, as such the terminology is very similar. [...] Generally, the kernel code looks the same for CUDA and HIP and the same C++ language features are supported. [...] The HIP runtime API generally mirrors the CUDA one, simply by replacing the cuda text in the call with hip gets you the equivalent HIP runtime call in most cases."

Preparing codes for LUMI: converting CUDA applications to HIP - LUMI (lumi-supercomputer.fi)
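As a concrete (and hedged) illustration of that "replace cuda with hip" point, here is the same memory round-trip written against the HIP runtime, with the CUDA equivalents shown in comments; this is essentially the textual rewrite HIPify performs.

// Hedged illustration of the "replace cuda with hip" mapping the LUMI guide
// describes; the CUDA equivalents are shown in the comments and are what a
// tool like hipify-perl or hipify-clang rewrites mechanically.
#include <hip/hip_runtime.h>   // CUDA version would include <cuda_runtime.h>

int main() {
    const size_t bytes = 1024 * sizeof(float);
    float host[1024] = {};
    float* dev = nullptr;

    hipMalloc((void**)&dev, bytes);                       // cudaMalloc((void**)&dev, bytes);
    hipMemcpy(dev, host, bytes, hipMemcpyHostToDevice);   // cudaMemcpy(..., cudaMemcpyHostToDevice);
    hipDeviceSynchronize();                               // cudaDeviceSynchronize();
    hipMemcpy(host, dev, bytes, hipMemcpyDeviceToHost);   // cudaMemcpy(..., cudaMemcpyDeviceToHost);
    hipFree(dev);                                         // cudaFree(dev);
    return 0;
}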



The only similarity HIP has with CUDA is their programming concepts, which AMD attempts to mimic as much as possible.

AMD is obviously aiming for the same programming model and libraries with a few name changes — I take that as pretty much everything of essence to a user. AMD strives to make the porting process effortless with automated tools (HIPify) doing most of the work.

Hopefully, my long reply has swayed your negative take on SYCL somewhat, but if not, hopefully it has been helpful to other readers. Those with a genuine interest in SYCL are invited to participate in my SYCL thread in the Programming forum:

 
Last edited:

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
But I fear Nvidia and AMD will leave it to Intel and others to promote it, while Nvidia has the head start with CUDA, and AMD focusses all their resources on providing a painless migration path from CUDA to ROCm with HIP and automated translation tools (HIPify).
Xilinx and its own ecosystem won't disappear into thin air, and AMD would be ill-advised to throw away Xilinx' efforts. There will be convergence, and with open source tools you want to expand the audience, not alienate significant portions of it. So unless something better replaces both it and HIP/CUDA at once, SYCL will be part of it.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
That is what I fear, too. Well, I do not fear for the future of SYCL — I feel pretty confident it will be bright, eventually. But I fear Nvidia and AMD will leave it to Intel and others to promote it, while Nvidia has the head start with CUDA, and AMD focusses all their resources on providing a painless migration path from CUDA to ROCm with HIP and automated translation tools (HIPify).

That said, I have been encouraged lately by AMD's apparent participation in the SYCL Working Group, according to SYCL chair Michael Wong's latest slides:

SYCL Webinar - December 2021 - YouTube

Notably absent is Nvidia. However, AMD+Xilinx, Intel, Arm and Qualcomm have all joined. Also to be noted is that SYCL chair Michael Wong, VP of R&D at CodePlay, is a long-time C++ standard committee member, highly influential in setting the direction for ISO C++, especially regarding support for programming heterogeneous systems. The explicit aim is for SYCL to be subsumed by ISO C++ at some point. The adopted development model for SYCL will track ISO C++.

The fact that there are less-than-ideal actors like ARM or Qualcomm, who are notorious for their subpar implementations of Khronos standards, and that no major ISVs like Google (which sponsors TensorFlow) or Facebook (which sponsors PyTorch) are backing the standard should be seen as a red flag. AMD and Xilinx participating in the SYCL working group doesn't inspire any more confidence than before ...

AMD also "participated" in the OpenGL and OpenCL working groups before, and their implementations on both counts are disasters. What should matter to anyone is whether this participation is done in "good faith"; otherwise you get bad actors like Nvidia still participating in the OpenCL working group ...

There is nothing in that presentation that says anything at all (and hence nothing "explicit") about SYCL, regrettably.

Whatever the plans are for the programming model, i.e. to extend HIP to FPGAs or adopt SYCL as a more general heterogeneous programming model (which seems to be the inevitable direction of ISO C++), we know that SYCL can be very nicely implemented using ROCm and consequently CUDA as backends, as hipSYCL has so brilliantly demonstrated. CodePlay is working on direct support for Nvidia and AMD GPUs in DPC++/oneAPI as well, and I presume they will use ROCm and CUDA as backends to achieve that. With that support in place, and with DPC++ upstreamed to the mainstream Clang C++ compiler, things are starting to look good for SYCL on Linux. If Microsoft could just get aboard and replace their outdated C++ AMP dialect with SYCL, forcing AMD to put some effort into ROCm support on Windows, we would have pretty broad industry support for SYCL.

I think I need to show this slide ...



Do you see any room on this slide where AMD will somehow develop SYCL, or any room for SYCL at all? Because I sure don't, since AMD is definitely expecting everyone to program their FPGAs with HIP, exactly as shown in this slide ...

Those are backend details. SYCL concerns the programming model, akin to HIP and CUDA C++. The kernel code to be run on the accelerator is written in pure C++. What it is compiled into is an implementation detail not specified by the SYCL standard.

Then how exactly is SYCL an improvement over the last industry standard, OpenCL, in terms of portability? OpenCL was pushed aside in the industry for other solutions like CUDA or ROCm because implementations were not portable across each other, so if SYCL is broken to the same extent it will naturally meet the same demise OpenCL once did: being pushed aside for non-portable alternatives such as CUDA/ROCm ...

Are you trying to trip us up on a technicality? Of course, due to copyright and trademark law, AMD cannot call their programming model CUDA, but HIP is as close as they can get without breaking any laws, it seems. Functions have identical specifications, with names differing only in prefix ("cu" vs "hip"). Likewise, the support libraries are similarly specified and named (e.g. "cuBLAS" vs "hipBLAS/rocBLAS"). The following porting guide for the LUMI supercomputer is a good read on the similarities:

"The HIP API syntax is very similar to the CUDA API, and the abstraction level is the same, meaning that porting between the two is easy [...] The big distinction to make here is that HIP is the parts that are portable between AMD and Nvidia hardware, whereas the rest of [the] ROCm stack is for AMD hardware. [...] The way HIP handles the GPU hardware is the same as CUDA, as such the terminology is very similar. [...] Generally, the kernel code looks the same for CUDA and HIP and the same C++ language features are supported. [...] The HIP runtime API generally mirrors the CUDA one, simply by replacing the cuda text in the call with hip gets you the equivalent HIP runtime call in most cases."

It is not as trivial a statement as you make it out to be. If HIP were truly a "reimplementation" in the strict sense of the word, then the EXACT same syntax would work in both cases, you wouldn't have a different compiler as opposed to just using NVCC, you wouldn't need a source-to-source translation tool like HIPify as you mention later on, and HIP would unconditionally accept PTX assembly as an intermediate representation. AMD's HIP implementation does none of this ...
 
Reactions: NTMBK and Vattila

itsmydamnation

Platinum Member
Feb 6, 2011
2,863
3,417
136
AMD also "participated" in the OpenGL and OpenCL working group before and their implementations on both accounts are disasters. What should matter specifically to anyone is if this participation is done in "good faith" otherwise you get bad actors like Nvidia still participating in the OpenCL working group ...
That's reinventing history. AMD/ATi in effect dropped OpenGL once it was basically dead in the markets they cared about. When I had an RV770 at launch, its OpenGL performance was just fine: https://www.anandtech.com/Show/Index/2553?cPage=3&all=False&sort=0&page=8
 
Reactions: lightmanek

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
That's reinventing history. AMD/ATi in effect dropped OpenGL once it was basically dead in the markets they cared about. When I had an RV770 at launch, its OpenGL performance was just fine: https://www.anandtech.com/Show/Index/2553?cPage=3&all=False&sort=0&page=8

Outside of AAA games or professional productivity applications on Windows, where AMD actually does try to optimize their OpenGL stack, their OpenGL drivers were arguably what one could describe as a travesty, especially in cases like native Linux game ports, emulation, or even Minecraft with mods prior to the GL 3.2 update ...

Either way it doesn't change the fact that AMD still has a bad OpenGL implementation, and just because OpenGL fell out of relevance doesn't mean AMD should've held back the entire standard until the very end. At least AMD was smart enough to quit the OpenCL working group years ago, as opposed to making it endure their bikeshedding like they did for OpenGL ...

"Bikeshedding" is a real concern with committee-developed standards that should not be brushed aside so easily, since that sounds exactly like what's happening behind SYCL. How AMD can still show up in these meetings when they aren't even working on an implementation is beyond me, but an even bigger mystery is how others like Codeplay or Intel are still somehow tolerating this behaviour without sanctioning/ignoring AMD. AMD's behaviour is even more atrocious in the SYCL working group than in the OpenGL working group and is just as bad as Nvidia's participation in the OpenCL working group ...
 

Bigos

Member
Jun 2, 2019
138
322
136
AMD has arguably the best OpenGL driver there is: it's called radeonsi and is part of the Mesa open source suite.

Example benchmarks (some of these are Vulkan benchmarks): https://www.phoronix.com/scan.php?page=article&item=radeon-rx6600xt-linux&num=6

I have no idea about the Windows OpenGL driver though, it is probably trash as you mention (I remember having issues with Minecraft as well on Windows).

Also please note that many OpenGL issues stem from applications using the compatibility profile. It is essentially undocumented, and people most likely wrote these applications on Nvidia, so whatever the Nvidia proprietary driver did was "standard". The core profile, on the other hand, is rigorously tested with an official conformance test suite and is a lot more stable in terms of behavior across various drivers.
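For completeness, here is a small sketch of explicitly requesting a core profile context, using GLFW (any windowing library that lets you set context attributes works the same way), so an application never relies on the loosely specified compatibility path.

// Small sketch: explicitly request an OpenGL core profile context with GLFW,
// so the application never relies on the compatibility profile's behavior.
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);  // no deprecated features
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);          // required on macOS

    GLFWwindow* window = glfwCreateWindow(640, 480, "core profile", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    // ... load GL function pointers and render; only core entry points are available ...

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}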
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Also please note that many OpenGL issues stem from applications using the compatibility profile. It is essentially undocumented, and people most likely wrote these applications on Nvidia, so whatever the Nvidia proprietary driver did was "standard". The core profile, on the other hand, is rigorously tested with an official conformance test suite and is a lot more stable in terms of behavior across various drivers.

Even if one doesn't use "compatibility profiles" or deprecated OpenGL features as you mention, AMD's official OpenGL driver is still bad, and their community drivers (radeonsi) aren't usable for professional applications either. You're basically shooting yourself in one foot or the other by picking one over the other. If you want to do any professional work you have to put up with AMD's subpar official implementation, but if you choose the community implementation to get higher gaming performance then you can't get any work done. No matter which OpenGL implementation you choose for AMD, it'll be bad in one way or another ...

Conformance tests are trash, and developers have straight up unambiguously singled out AMD as being both slower and more buggy compared to their leading competitor ...
 

Vattila

Senior member
Oct 22, 2004
805
1,394
136
What do you think would be the next acquisition opportunity for AMD?

For me it is too early to say. I think Lisa Su is the cautious type, and I think she'll focus on making AMD+Xilinx a success before making another substantial acquisition. The integration of Xilinx alone is realistically a multi-year and considerable endeavour (the AMD+ATI forced marriage almost wiped the company out; hopefully AMD+Xilinx turns out more harmonious). Smaller deals may of course happen as opportunities arise, but those are hard to predict. As for weaknesses that need reinforcement, software is the obvious one. If they continue to lag behind Nvidia — and perhaps also Intel's oneAPI, as the latter has gained tremendous momentum now, due to openness and support for the SYCL standard — then they may need to do something. Perhaps CodePlay will come up for sale; they are already a notable player in AMD's key area (high performance and heterogeneous compute) and they have very competent people (Michael Wong, in particular) and top level clients (e.g. USA national labs). CodePlay is a UK company, though — I don't know how that will play in current and future geopolitics. Maybe partnering will suffice. Lisa Su is good at that.
 
Last edited:

Vattila

Senior member
Oct 22, 2004
805
1,394
136
Then how exactly is SYCL an improvement over the last industry standard, OpenCL, in terms of portability?

I think the portability aspect of OpenCL vs SYCL is moot.

As programming models though, they differ vastly. If you go through any presentation on SYCL that mentions OpenCL you will have the advantages spelled out, so I will not go over them here. Go to the SYCL discussion thread. Let me just say, the keywords are single-source and productivity. As a C++ programmer myself, I have never had the slightest temptation to have even a brief look at OpenCL to learn about heterogeneous system programming. All I've heard and sensed is that it is a horrible C-based programming model with clunky implementation and driver issues (something you seem to have been deeply scarred by). On the other hand, Microsoft's (i.e. the excellent Herb Sutter's) C++ AMP dialect had me tempted, and SYCL (which in many ways resembles C++ AMP) even more so, to the point I have started to read up on it (ref. the free SYCL introduction book Data Parallel C++). The important keywords for me are modern C++.
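To show what "single-source" means in practice, here is a small hedged sketch of a SYCL vector add (SYCL 2020 syntax): host code and kernel live in one C++ translation unit, and the kernel is just a lambda, a property OpenCL's separate C kernel strings never had.

// Small sketch (SYCL 2020 syntax): single-source vector add.
// Host and device code share one C++ file; the kernel is an ordinary lambda.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // the implementation picks a device (GPU if available)
    {
        sycl::buffer<float, 1> ba{a.data(), sycl::range<1>{n}};
        sycl::buffer<float, 1> bb{b.data(), sycl::range<1>{n}};
        sycl::buffer<float, 1> bc{c.data(), sycl::range<1>{n}};

        q.submit([&](sycl::handler& h) {
            sycl::accessor A{ba, h, sycl::read_only};
            sycl::accessor B{bb, h, sycl::read_only};
            sycl::accessor C{bc, h, sycl::write_only};
            h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffers go out of scope: results are copied back to the vectors

    std::cout << "c[0] = " << c[0] << "\n";  // prints 3
    return 0;
}

Note there is no separate kernel string compiled at run time; the same file is compiled for host and device by the SYCL compiler.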

In any case, I reiterate that SYCL has been successfully implemented on top of ROCm and CUDA (hipSYCL) and oneAPI/Level Zero (DPC++). So the backend implementations available are already miles ahead of what OpenCL achieved in terms of robustness and support (on Linux, that is). Since SYCL has implementations on top of the vendors' favoured frameworks, you are much less likely to see the issues of the past stemming from poor vendor support and languishing implementations. Also, studies show that SYCL has good performance portability versus CUDA, OpenCL and Kokkos/Raja, while getting top marks on programmer productivity metrics. See the SYCL thread for references.

Notably, I don't think OpenCL was ever much used in supercomputing, for instance, while SYCL is being adopted at a rapid pace. In time, SYCL will hopefully trickle down into the client space as well, as the SYCL frontend implementers and the backend vendors broaden their focus (to the Windows platform, in particular).

I think you need to take a wider perspective here (rather than focusing on details such as SPIR-V, drivers and other horrible nerdy nightmare stuff) so that you don't miss the forest for the trees. For programming heterogeneous systems, programmers want an open, simple, powerful and productive programming model with robust and well-supported implementations. SYCL is getting there on Linux, and Windows is hopefully next.
 
Last edited:

Vattila

Senior member
Oct 22, 2004
805
1,394
136
So the acquisition is finally closed.

Yeah! Lisa + Victor — what a team!



"Former Xilinx CEO Victor Peng will join AMD as president of the newly formed Adaptive and Embedded Computing Group (AECG). AECG remains focused on driving leadership FPGA, Adaptive SoC and software roadmaps, now with the additional scale of the combined company and the ability to offer an expanded set of solutions including AMD CPUs and GPUs."

AMD Completes Acquisition of Xilinx :: Advanced Micro Devices, Inc. (AMD)

PS. So Victor takes control of the software roadmaps. Great!
 

Saylick

Diamond Member
Sep 10, 2012
3,385
7,151
136

Good short read from Patrick on the acquisition. Big surprise that AMD+Xilinx have been working together behind the scenes already, so it goes to show that the two companies have a good working relationship regardless of whether or not the acquisition would have been approved.
Combined IP products in 2023

So when will we see the combined fruits of the AMD-Xilinx combination? Earlier than anyone might have imagined. Lisa Su told me we should see the first AMD processor with Xilinx AI IP in 2023. Apparently, AMD has had a long-term development agreement with Xilinx and was able to work together even as the deal was getting approved. I believe it took AMD and ATI many years after the deal completed to get combined IP products.


Maybe instead of closing this thread, we turn it into the official Xilinx news thread. Any developments from Xilinx, or rather AMD's Adaptive and Embedded Computing Group, we put in here?
 

moinmoin

Diamond Member
Jun 1, 2017
4,994
7,765
136
So Victor takes control of the software roadmaps. Great!
"Xilinx offers industry-leading FPGAs, adaptive SoCs, AI engines and software expertise"
Nice indeed!

Big surprise that AMD+Xilinx have been working together behind the scenes already
Really? I always had the impression the good working relationship was there first and that's why they considered the merger to begin with. So I'm surprised anybody would consider that a "big surprise". But I agree it's certainly a good sign for the future working of the combined company.
 