[Rumor - WCCFTech] AMD Arctic Islands 400 Series Set To Launch In Summer of 2016


MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
Let's use Tonga as an example. AMD said GCN2.0 on 16nm is aiming to deliver 2X the perf/watt of old GCN parts.

That means 380/380X's successors would be aiming for 174% / 192% on this chart:

[chart omitted]

There you go, it's theoretically enough for them to beat Fury/Fury X with a Tonga successor. Tonga is more or less this gen's low-end chip. (Tonga = low end, Hawaii = mid-range, Fiji = high-end this gen). That means in the theoretical Arctic Islands 16nm generation, Hawaii successor's performance also goes up 2X, Fury X's also 2X.

It stands to reason that a true mid-range next gen 16nm AI card should smoke Fury/Fury X based on AMD's own 2X perf/watt claims. Now, are they going to price those cards at old historical mid-range HD5850/5870/7850/7870 prices? Not a chance.
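
To make the arithmetic concrete, here is a minimal sketch of that projection. The 87%/96% baselines are assumptions back-solved from the 174%/192% figures above, not numbers from AMD:

```python
# Projecting successor performance from a claimed perf/watt gain.
# Baseline chart positions are assumed (back-solved from 174%/192%).

PERF_PER_WATT_GAIN = 2.0  # AMD's claimed generational improvement

baseline = {
    "R9 380": 0.87,   # assumed relative performance on the chart
    "R9 380X": 0.96,  # assumed relative performance on the chart
}

for card, perf in baseline.items():
    # At an unchanged power budget, performance scales with perf/watt.
    successor = perf * PERF_PER_WATT_GAIN
    print(f"{card} successor: {successor:.0%} on the same chart")
```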

Tonga's really only the low end because they didn't bother releasing a GCN 1.2 version of anything smaller. In pretty much any generation other than the second half of 28nm, Tonga's die would be in the range of AMD's other biggest dies. It's really the 5th largest GPU ATI has ever made, smaller only than Fiji (596mm2), Hawaii (438mm2), R600 (420mm2), and Cayman (389mm2). We're "benefiting" from the soon to be 4 year old 28nm process in that regard; it's so cheap that AMD is willing to put out monster dies.

We'll only get midrange AI performance that smokes Fury X if AMD releases a midrange die around the same size as Hawaii. That's not historically how they've operated though, and we might have to wait another two or three years, until GCN 2.1 and South American Islands or GCN 2.2 and Islands-That-Napoleon-Has-Been-Stranded-On, to get those massive dies that deliver 2x the performance of the current top-end part.
 

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
I expect both firms to start pricing their faster next-gen cards at $650. There is no point not to. The market has spoken and paid $550 for the 7970, then $550 for the 290X, and now $650 for the Fury X. At this point the general market isn't looking at price/performance or die sizes or where a chip falls within its generation. If something is 20-30% faster than the last gen's flagship, they can price it at $650+. The 680/980 are as much proof as they need that this works. Since AMD is no longer interested in price/performance, why would they price a card 30-40% faster than the 980 Ti/Fury X at $400-550? The only way for that to happen is if the competitor outperforms them so dramatically that they are forced to lower prices.
No way will they price the GTX 970's successor at 500-550 USD. We're talking about a 300mm2 GPU. That is in no way a flagship.

They need some card in the 300-400 USD range, and that will be the 1070.
The 1080 will cost the same as the GTX 980: 500-550 USD.
Why did the GTX 970 cost 320 USD when it was at the same level as the 700 USD 780 Ti? And why did the GTX 980 cost 550 USD when it was 15% faster than the 700 USD 780 Ti?

The big Pascal GTX 980 Ti/Titan X successor will cost 650-1000 USD.

Same with AMD.
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
No way will they price the GTX 970's successor at 500-550 USD. We're talking about a 300mm2 GPU. That is in no way a flagship.

They need some card in the 300-400 USD range, and that will be the 1070.
The 1080 will cost the same as the GTX 980: 500-550 USD.
Why did the GTX 970 cost 320 USD when it was at the same level as the 700 USD 780 Ti? And why did the GTX 980 cost 550 USD when it was 15% faster than the 700 USD 780 Ti?

The big Pascal GTX 980 Ti/Titan X successor will cost 650-1000 USD.

Same with AMD.

The 970 was really weird. It offered 85% of the performance of the 980 but only cost 60% as much, and it pretty much invalidated the reason for the 980's existence once the 980 Ti stole its halo. The 670 launched at 80% of the cost of the 680 ($400 vs $500), and I'd guess (barring AMD really pushing them) that the 1070 will be $399 and the 1080 will be $550. I'm expecting a die size regression from GM204 to GP104 though, so $550 would be reasonable for a card that's 10-30% faster than a 980 Ti at launch.
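
As a rough sanity check on that value gap, here's a quick perf-per-dollar comparison (a sketch using the $329/$549 launch MSRPs and the 85% relative-performance figure above):

```python
# Perf-per-dollar behind "85% of the performance at 60% of the price".

cards = {
    "GTX 980": {"price": 549, "rel_perf": 1.00},  # launch MSRP, baseline perf
    "GTX 970": {"price": 329, "rel_perf": 0.85},  # launch MSRP, ~85% of a 980
}

baseline_ppd = cards["GTX 980"]["rel_perf"] / cards["GTX 980"]["price"]

for name, c in cards.items():
    ppd = c["rel_perf"] / c["price"]
    print(f"{name}: {ppd / baseline_ppd:.2f}x the 980's perf per dollar")
```

The 970 lands at roughly 1.42x the 980's perf per dollar, which is the value gap described above.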

That being said, I wouldn't be terribly surprised if RS was right. I'd probably just wait another year for the bigger dies to come out and for some DX12 games to actually launch at that point. Maybe add another two 290s if I can find someone selling them with blocks mounted just for laughs while I wait.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
No way will they price the GTX 970's successor at 500-550 USD. We're talking about a 300mm2 GPU. That is in no way a flagship.

GTX670 = $399
GTX670 4GB = $459
GTX680 2GB = $499
GTX680 4GB = $579
Die size 294mm2

HD7970 = $549
Die size 352mm2

I put up with this because with bitcoin mining I paid $0, but such a perk isn't available in 2016, so...if a GPU costs $600+ it better be the real flagship. That's just me, and I know most people will not care and will just want a faster card no matter what the die size is, etc.

They need some card in the 300-400 USD range, and that will be the 1070.
The 1080 will cost the same as the GTX 980: 500-550 USD.

Ya, for $300-400 they'll probably use a cut-down GP104. It's also possible for them to introduce 3 mid-range cards; remember, NV did exactly that last time with the 660 Ti.

Tying this back to AMD, AMD could do the same thing.

Why did the GTX 970 cost 320 USD when it was at the same level as the 700 USD 780 Ti? And why did the GTX 980 cost 550 USD when it was 15% faster than the 700 USD 780 Ti?

The big Pascal GTX 980 Ti/Titan X successor will cost 650-1000 USD.

Same with AMD.

We'll see. Before NV dropped the true $700 flagship 780 Ti, they first milked the market with the $500 680, then the $650 780, and only by the end of 2013 did they release the real flagship 780 Ti for $700.

The 780 Ti ended up 2X faster than the 580. Looking at it another way: why would AMD/NV release a $650 flagship that's 80-100% faster than Fury X/980 Ti in 2016 when they can just release a $550-650 card that's 30-40% faster in 2016, profit from it, then release another $650 card 30-40% faster in 2017? It's exactly what both of them did to us starting in 2012.
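
For what it's worth, the compounding works out almost exactly; a minimal sketch treating the 30-40% yearly steps as multiplicative:

```python
# Two successive 30-40% cards compound to roughly the same 80-100%
# jump a single full-node flagship would have delivered at once.

for yearly_gain in (0.30, 0.40):
    two_steps = (1 + yearly_gain) ** 2 - 1
    print(f"two cards at +{yearly_gain:.0%} each -> +{two_steps:.0%} total")
```

Two +30% steps compound to +69% and two +40% steps to +96%, which brackets the 80-100% range a full node jump historically delivered.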

There is another reason for using this strategy -- you have something new to up-sell in 2017. Back in the day, all one had to do was buy 1 flagship per generation and there was no incentive to upgrade until the next gen:

FX5900U vs. FX5950U, 6800U vs. 6800UE, GTX280 vs. 285, GTX480 vs. 580

9700Pro vs. 9800XT, X800XT vs. X850 XT PE, X1900XTX vs. X1950XTX, HD4870 vs. 4890, etc.

But then someone got smart and said hmm...why would we give them an 80-100% increase for $650 when we can split the generation into parts and slowly trickle down the performance increases? That way they (us gamers) always feel like our GPUs are outdated by next year's card, and we maximize profits by marketing each new faster card as that year's new flagship. Brilliant, brilliant. :wub:

That being said, I wouldn't be terribly surprised if RS was right. I'd probably just wait another year for the bigger dies to come out and for some DX12 games to actually launch at that point. Maybe add another two 290s if I can find someone selling them with blocks mounted just for laughs while I wait.

I would love to be wrong. But considering the new way of launching GPUs since 2012 is working so well for one of them (record profits, record profit margins, record revenues, record market share), and the other has no $$$ to magically pull off a chip 80-100% faster just like that, as they are severely lagging behind in perf/watt, it stands to reason that more likely than not the company on top will milk the next gen just like they milked us with the last 2, and the company behind will be content just to keep up. You pretty much know which firm is which.

In any case though, even if 2016's cards are just 30-40% faster than the 980 Ti, that's still a heck of a lot better than what we get in the CPU market, so I guess I can't be too negative. 30-35% faster @ 1440p than Fury X would actually be a viable upgrade for 290/290X users.
 
Last edited:

turtile

Senior member
Aug 19, 2014
618
296
136
I don't see how increased efficiency is going to make the cards faster by the same ratio. I would assume the dies will all be smaller due to the extra cost of 16FF+ vs 28nm.

It makes more sense to keep 28nm rebadges at the low end, replace the Fury series with a 35% jump on a smaller die, and then release a really high-end card that almost doubles performance but at near double the cost.
 

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
In any case though, even if 2016's cards are just 30-40% faster than the 980 Ti, that's still a heck of a lot better than what we get in the CPU market, so I guess I can't be too negative. 30-35% faster @ 1440p than Fury X would actually be a viable upgrade for 290/290X users.

Yeah, GPU manufacturers have a huge advantage in that they can really drive innovation to suit their hardware and extract the full benefit from any changes they make. CPU vendors add cores or put in new features like AVX that give a massive boost in certain workloads, and then get to watch as no one really uses those functions and everyone keeps writing single-threaded code for years.
 

jpiniero

Lifer
Oct 1, 2010
14,841
5,456
136
My Guess on AI compared to Pascal:

GP100 = AI Bigger ($1049+)
GP102 = AI Smaller ($699+)
GP104 (GDDR5X) = Fiji ($350-$500)
GP106 (GDDR5?) = Grenada? ($200-$300)
 

flopper

Senior member
Dec 16, 2005
739
19
76
I would love to be wrong. But considering the new way of launching GPUs since 2012 is working so well for one of them (record profits, record profit margins, record revenues, record market share), and the other has no $$$ to magically pull off a chip 80-100% faster just like that, as they are severely lagging behind in perf/watt, it stands to reason that more likely than not the company on top will milk the next gen just like they milked us with the last 2, and the company behind will be content just to keep up. You pretty much know which firm is which.

In any case though, even if 2016's cards are just 30-40% faster than the 980 Ti, that's still a heck of a lot better than what we get in the CPU market, so I guess I can't be too negative. 30-35% faster @ 1440p than Fury X would actually be a viable upgrade for 290/290X users.

GCN lasts longer as a tech since it holds up better over time. The issue with GPUs is that they're the part we actually replace: a CPU we don't upgrade as often, since that usually needs a new motherboard too, but a GPU we upgrade whenever a new game comes out.
Seems to me AMD bet on a 20nm die, but once that node was canceled they really didn't have a good answer.

Nvidia's strategy paid off better.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't see how increased efficiency is going to make the cards faster by the same ratio. I would assume the dies will all be smaller due to the extra cost of 16FF+ vs 28nm.

What do you mean by the same ratio? You mean why would a 596mm2 16nm AI chip have an 80-100% performance increase over Fury X? Because this is what has happened every generation with every major node shrink. Since graphics performance scales with more shaders, ROPs, and TMUs and is highly parallel in nature, the more of these functional units the GPU makers can add, the higher the performance. The unknowns are the prices we pay and how quickly the higher-end SKUs/large-die (i.e., flagship) chips are going to be released vs. the lower-end and mid-range products.

[die size charts omitted]
AMD has been forced to increase die sizes to remain competitive because the competition is making 480-600mm2 die flagships (see above charts).

HD2900XT = 80nm 420mm2
Then
HD3870 = 55nm 190mm2
HD4870 = 55nm 256mm2
HD4890 = 55nm 282mm2
HD5870 = 40nm 334mm2
HD6970 = 40nm 389mm2
HD7970 = 28nm 365mm2 (*this is a GPU-Z error, the actual size per AMD is 352mm2)
R9 290X = 28nm 438mm2
R9 Fury X = 28nm 596mm2

The competition is likely to have a 500-600mm2 flagship on 16nm, so if AMD doesn't, they will get destroyed. Since over 2 decades of GPU history we already know that a full node shrink + new architecture brings roughly an 80-100% increase in perf/watt, that means with 250-300W power usage, the next gen's AMD flagship cards should be faster than Fury X by 80-100%. The real question is over how many years this 80-100% will be spread out before we see a new generation in 2018.
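
A minimal sketch of that scaling argument, assuming the Fury X's 275W typical board power and normalizing its performance to 1.0 for illustration:

```python
# At a fixed power budget, performance scales directly with perf/watt,
# so an 80-100% perf/watt gain implies an 80-100% faster flagship.

FURY_X_POWER_W = 275.0  # Fury X typical board power
FURY_X_PERF = 1.0       # normalized baseline performance

for gain in (1.8, 2.0):  # 80% and 100% perf/watt improvements
    next_gen_power = FURY_X_POWER_W  # same 250-300W class budget assumed
    next_gen_perf = FURY_X_PERF * gain * (next_gen_power / FURY_X_POWER_W)
    print(f"{gain:.1f}x perf/watt at equal power -> {next_gen_perf:.1f}x Fury X")
```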
 

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
The competition is likely to have a 500-600mm2 flagship on 16nm, so if AMD doesn't, they will get destroyed. Since over 2 decades of GPU history we already know that a full node shrink + new architecture brings roughly an 80-100% increase in perf/watt, that means with 250-300W power usage, the next gen's AMD flagship cards should be faster than Fury X by 80-100%. The real question is over how many years this 80-100% will be spread out before we see a new generation in 2018.

I'm not sure the bolded really follows. nVidia has for at least the last decade put out a 500-600mm^2 die in each architecture, but until the last couple of years AMD never followed suit. R600 was the closest, but after that they were content to cede the large die and the performance crown to nVidia and compete on efficiency and price, and they were a lot more successful doing it than they are today.

It's just as possible that AMD and nVidia will release AI and GP104 in similar die sizes with nVidia also releasing a big GP100, and AMD not following suit with a 500mm^2+ die until the next gen and competing against Volta.
 

MrTeal

Diamond Member
Dec 7, 2003
3,586
1,748
136
10nm GPUs in 2018? Hard to believe, seeing so many problems with 14/16nm and Intel's own 10nm node.

New generation, not necessarily new node. I think we have to come to terms with the fact that we'll probably have two full architectures on each node before we get a shrink.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm not sure the bolded really follows. nVidia has for at least the last decade put out a 500-600mm^2 die in each architecture, but until the last couple years AMD never followed suit. R600 was the closest, but after that they were content to cede the large die and performance crown to nVidia and compete on efficiency and price, and they were a lot more successful doing it than they are today.

By all financial and market share metrics, this strategy was not successful because it wasn't sustainable. Do you think Rory Read and Lisa Su raised prices back to $550 starting with the HD7970, and now to $650, because the old strategy worked better? No, it's because the old strategy didn't work, so they tried a new strategy, but thus far they have failed on the execution side by launching late, with poor reference blowers, less than stellar drivers (esp. the HD7000 series micro-stutter with CF), or being supply constrained. The execution this gen was also flawed by withholding the R9 380/380X/390/390X to launch alongside the Fury products.

It's just as possible that AMD and nVidia will release AI and GP104 in similar die sizes with nVidia also releasing a big GP100, and AMD not following suit with a 500mm^2+ die until the next gen and competing against Volta.

If you read AT's or other Fury X reviews, this is clearly against AMD's new GPU strategy and against what has actually happened to AMD's graphics chips since the HD5870 era. I already provided the die size trends - they keep growing. In the past, AMD chose to use dual mid-range products in CF to combat the competitor's flagship GPU. With newer game engines like UE4, and some developers flat out failing to provide proper multi-GPU support, it would be riskier for AMD to rely on that strategy now.

Furthermore, to be able to support higher prices and healthier profit margins, AMD must have the performance to go hand in hand with them. Lisa Su has also specifically outlined that she wants to shift AMD away from its budget image. The only way to achieve these goals simultaneously is to deliver high-end/class-leading performance. That is not possible with 300-380mm2 16nm flagship HBM2 products if your competitor is going to go against you with 500mm2+ products. If the competitor's flagship is 35-50% faster, by definition the best AMD card would only end up 3rd, 4th or 5th from the top, which is disastrous for the image of their products.

While it's true that we may not see $650 500mm2 consumer GPUs in 2016, to suggest that AMD is suddenly going to go back to sub-400mm2 flagships is an odd prediction to make in light of what's actually happening. I mean, it's odd to me how you even arrived at this conclusion, considering AMD has been purposely moving away from small-die flagships. Outside of gaming, the only way for AMD to gain market share in the professional and compute segments is also to have competitive high-end cards, and that's not possible with mid-range die "flagships". IMO, your prediction seems contrary to AMD's Radeon Technologies Group's goals.

Also, I think you are forgetting something, no? Think about it: Fiji is 596mm2, so how in the world is AMD going to increase performance 70-80% with next gen's flagship while keeping it a 300-380mm2 chip? Or are you predicting that AMD's best single GPU over the next 2 years will be only 20-30% faster than the Fury X?

10nm GPUs in 2018? Hard to believe, seeing so many problems with 14/16nm and Intel's own 10nm node.

Not necessarily. 28nm has served AMD for 3 flagship cards: 7970/7970GHz, R9 290X and Fury X. It isn't a guarantee that 2018 GPUs have to be made on 10nm to advance performance. AnandTech and other industry experts anticipate that GPU makers will need to rely more heavily on pairing new GPU architectures with existing nodes because easy node shrinks are now in the past.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
WCCFtech reiterates the rumor -- again take it with a giant salt bucket:

Rumor: AMD Greenland GPU Coming Next Summer On 14nm – To Enter Mass Production In Q2 2016

"AMD’s Radeon 400 series flagship GPU “Greenland” is reportedly debuting next summer and is set to enter mass production in Q2 2015 on the Samsung/Globalfounries 14nm LPP process. According to this latest rumor, AMD’s graphics flagship will enter mass production specifically at the end of the second quarter of next year around June, in time for its introduction at the end of the summer and around the back to school shopping season." ~ WCCFtech
 
Mar 10, 2006
11,715
2,012
126
WCCFtech reiterates the rumor -- again take it with a giant salt bucket:

Rumor: AMD Greenland GPU Coming Next Summer On 14nm – To Enter Mass Production In Q2 2016

"AMD’s Radeon 400 series flagship GPU “Greenland” is reportedly debuting next summer and is set to enter mass production in Q2 2015 on the Samsung/Globalfounries 14nm LPP process. According to this latest rumor, AMD’s graphics flagship will enter mass production specifically at the end of the second quarter of next year around June, in time for its introduction at the end of the summer and around the back to school shopping season." ~ WCCFtech

Why a bucket of salt?

Theories of the phrase's origin include Pliny the Elder's Naturalis Historia, regarding the discovery of a recipe for an antidote to a poison.[2] In the antidote, one of the ingredients was a grain of salt. Threats involving the poison were thus to be taken "with a grain of salt", and therefore less seriously.

https://en.wikipedia.org/wiki/Grain_of_salt

A bucket of salt would outright kill you, and would be even more serious than the "threat" that the grain of salt would serve to neutralize.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Why a bucket of salt?

A bucket of salt would outright kill you and would be even more serious than the "threat" that the grain of salt would serve to neutralize

Sorry if I used the term incorrectly as I meant to imply that it's even less credible than taking it with a grain of salt --> need a full bucket given the source.

It's cuz some rumours out of WCCFtech end up as good as dead, even though they seemed reasonable at the time.

Right now they are predicting that mass production will only start in June of 2016, which suggests we wouldn't even see retail availability in volume until September 2016. Considering that, outside of Fiji, AMD is selling 2012-2013 GPUs/architectures, if they pull off something like that and have nothing new for sale until late Q3 2016, their market share could dip to 5-10%.

This site is all over the place. In October 2015, they reported that AMD's R9 400 was already taped out and was just waiting for mass production of TSMC's 16nm.
"It was confirmed back in August that AMD is in fact one of TSMC’s clients for the 16nm node. That, in addition to historical precedent and a number of reports indicate that AMD will indeed manufacture its next generation GPUs using TSMC’s 16nm process.

It usually takes about 6 months from tape out to market availability."

Now read their latest rumor I posted above regarding retail availability + who is manufacturing the GPUs. :sneaky:

Bucket of salt needed to reconcile.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
AMD should be focusing on perf/area, not perf/watt on their slides ...

That means getting performance statistics for the latest games including GameWorks featured ones and tailoring their hardware design around those ...

Try beating Nvidia at their own game (pun intended) ...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
AMD should be focusing on perf/area, not perf/watt on their slides ...

If the competitor increases perf/watt 2X across its product stack and AMD achieves the same, AMD automatically loses since they are currently way behind. That means they need to increase perf/watt by more than 2X. Also, without an improvement in perf/watt, they won't get many design wins in laptops, which means it doesn't even matter if games run better on their architecture since there simply won't be dGPUs of theirs in laptops to start with.

Try beating Nvidia at their own game (pun intended) ...

Cannot win this game because they'd have to outspend their competitor - not possible at this time.

On paper, AMD is in a very dangerous space right now for GPUs because they already used up the HBM1 advantage over GDDR5 and yet they are still behind in perf/watt, perf/mm2 and overclocking headroom. Since both firms are aiming for a 2X increase in perf/watt, on paper, AMD can't win.
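
To put a number on why merely matching a 2X gain isn't enough, here's the gap arithmetic; the 1.5x starting deficit is an illustrative assumption, not a measured figure:

```python
# If both vendors double perf/watt, the relative gap is unchanged;
# closing it from behind requires a larger multiplier.

competitor_pw = 1.5      # assumed current lead (AMD normalized to 1.0)
amd_pw = 1.0
generational_gain = 2.0  # both vendors' claimed ~2x improvement

competitor_next = competitor_pw * generational_gain
required = competitor_next / amd_pw  # multiplier AMD needs just for parity
print(f"AMD needs {required:.1f}x its current perf/watt just to tie")
```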
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
If the competitor increases perf/watt 2X across its product stack and AMD achieves the same, AMD automatically loses since they are currently way behind. That means they need to increase perf/watt by more than 2X. Also, without an improvement in perf/watt, they won't get many design wins in laptops, which means it doesn't even matter if games run better on their architecture since there simply won't be dGPUs of theirs in laptops to start with.

That's why AMD should focus on perf/area instead of perf/watt: even if AMD doesn't beat Nvidia in the latter, there's a decent chance they could take the performance crown, and that could go a long way toward AMD being recognized as the best option, not the cheaper one ...

These ultra high end gaming laptops are practically useless without a continuous power source, so what difference does it make in the end if another comparable device only lasts half an hour longer in an intensive session? You could always attract OEMs with cheaper prices when your parts are cheaper to produce, and if the heat output gets uncomfortable you have the option of lowering the clocks ...

Did Nvidia's focus on perf/watt give them any success in the mobile market, where they sued their competitors and later got thwarted by the likes of puny (tinier than AMD) Imagination Technologies with the iPhone 6S?

It's clear that newer transistor technology will lower power more or less automatically, but the bigger challenge is getting higher perf/transistor ...

Cannot win this game because they'd have to outspend their competitor - not possible at this time.

On paper, AMD is in a very dangerous space right now for GPUs because they already used up the HBM1 advantage over GDDR5 and yet they are still behind in perf/watt, perf/mm2 and overclocking headroom. Since both firms are aiming for a 2X increase in perf/watt, on paper, AMD can't win.

AMD doesn't need to outspend their competitor; Imagination Technologies, who are FAR smaller than AMD, found success supplying GPU designs to Apple ...

In fact, GPUs are much easier to design than CPUs since there's tons of duplication in a lot of the circuits, so what would otherwise be designing billions of transistors turns into designing tens of millions of transistors ...

I suspect the reason why AMD wasn't able to push out as many microarchitectural changes to their GPUs was because of Zen ...
 

MeldarthX

Golden Member
May 8, 2010
1,026
0
76
Also, a reworked Fury could easily be used; people are getting confused. Fury is just the GPU chip; it's not tied to HBM1, as the memory controller could also run HBM2. It wouldn't be a problem to use a reworked Fury with HDMI 2.0a and DP 1.3 as a mid-range chip.

But from what little we have, Arctic Islands looks to be a full rework of GCN, expanding where they need to and finding power savings.

We know Pascal will be more GCN-like, as async compute sucks for Nvidia on DX12. We have a few DX12 games coming, though honestly I don't know if they will use async compute. *If they don't it will be a shame, but it sounds like they will.*

That's in Feb; the landscape is changing. Those expecting huge things from Pascal will, I think, be disappointed, as it's not a radical change from Maxwell. Also, if everyone remembers, the last couple of times Nvidia tried a die shrink and a memory change at the same time, they failed miserably. I don't have faith they can do it this time either.

AMD will have a much better understanding of HBM; their memory controllers are always better than Nvidia's, especially on new memory.

AMD's push will also be to own VR, so I think big things will come from Greenland; and with AMD's legendary silence when it comes to their video cards, there are going to be a lot of false rumors until it lands.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
AMD's push will also be to own VR, so I think big things will come from Greenland; and with AMD's legendary silence when it comes to their video cards, there are going to be a lot of false rumors until it lands.

I sure hope they go where the big money is at the time of release, and that is not VR but 1080p/1440p, and from 2017 onward 4K.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Nobody said DX11 games are going to disappear, but the heavyweights in high-end hardware requirements will be the DX12 titles.

Such as? I think last time we discussed this we got maybe 6 games that are DX12 for 2016.
 