Official AMD Polaris Review Thread: Radeon RX 480, RX 470, and RX 460


dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
LOOOL... but the 480 is about to fight the 1060... however, I can't see the 1060 winning, since it's a 50% cut of the 1080... that would be worse than a 980!

PS: Is it just me, or has Intel Iris Pro faded into eternal oblivion?
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
That's not true. I'm usually very favorable towards AMD, but this is a failure. 14nm FF, on such a small chip, should not be chewing through 150W during gaming.

Compare it to its predecessor in this class; anyone remember Pitcairn, the 7850 and 7870?

Polaris 10 is worse when you factor in the node jump.

That tells me GloFo failed them. Call it what it is, but FinFET should NOT be struggling to hit 1.26GHz with massive power consumption. The entire point of the FinFET transistor is to minimize current leakage, allowing higher clock speeds and better voltage tolerances.

Now, I would have considered the RX 480 a much better product if its gaming load were ~100-110W. That would have implied AMD low-balled the clocks to hit perf/W, leaving more performance on the table for overclockers and custom cards. But it's right at the edge.

As a gamer, I still think it's a great GPU at the price.

* I bought 2x RX 480 8GB, $379 AUD each.

GTX 970 3.5GB & AMD 390 8GB are ~$449 AUD. GTX 980 4GB is $629 AUD (got a price cut last week from its usual $749!!).

390X 8GB is $529 AUD. 1070s are ~$779 here and 1080s are $1199, ridiculous prices.

Logically, you can't say the RX 480 is a bad GPU for the price. It is good for gamers to have that performance class down at mainstream prices.

But as a tech enthusiast, I am very disappointed to see such a small FinFET chip suck down that much power. To me, that's a failure, most likely GloFo's, but in the final analysis AMD takes the blame, because they should have known better and been more honest about expectations.

You don't get to stand there and claim 2.8x perf/W and talk about all the efficiency and coolness you get from 14nm FF when the card runs at 82C and at the limits of its PCB's power delivery.

I can tell you right now, with facts, that 1.26GHz is beyond the optimal clocks for this process. Why? Look here:

1.4GHz OC with an aftermarket cooler:

http://oc.jagatreview.com/2016/06/t...deon-rx480-ke-1-4ghz-dengan-cooler-3rd-party/

Power usage jumps to 183W, which is insane for such a small clock speed bump.

All of this screams that AMD was forced to clock it outside its optimal zone, because the node is giving them such a bad result.

I raised these points in the other thread and some of you accused me of being negative on AMD (falsely, even). But AMD doesn't get to go to a new node AND HYPE UP efficiency gains and talk about 2.8x perf/W while ending up so far behind Pascal on perf/W.

This is what my logic tells me. I don't need to sugar-coat the analysis, because I am not a blind fanboy.

That's what some people here don't seem to understand. I am a tech enthusiast. I love to see progress. AMD failed here. I agree with everything you said. I'm actually more than fair towards AMD, almost irresponsibly fair sometimes. I want AMD to do well; I've always preferred them over Nvidia.

I was looking to sidegrade from my GTX 970 (I needed HDMI 2.0), which powers my 4K TV, just to lower its power consumption. I wanted ~110 watts. I'm definitely not doing that now. Let's not forget, the idle power consumption on the RX 480 is hideous too!! Man, GloFo sucks big time.

As a tech enthusiast, I am disgusted by how awful Polaris 10 turned out. There is no excuse for such poor power efficiency from a node shrink this significant. Well, the excuse is GloFo, but you get my point.
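
For anyone who wants to sanity-check the overclocking numbers quoted above, here is a rough back-of-the-envelope sketch in Python. It assumes board power is dominated by dynamic switching power scaling with frequency times voltage squared; the 150W/1.266GHz and ~183W/1.4GHz figures come from the quoted post and the linked article, everything else is a simplification.

Code:
# Rough sanity check of the quoted OC figures. Assumes board power is
# dominated by dynamic switching power, which scales roughly with
# frequency * voltage^2; leakage, fan and VRM losses are ignored.

f0, p0 = 1.266, 150.0   # stock boost clock (GHz) and board power (W)
f1, p1 = 1.400, 183.0   # overclocked figures from the linked article

# Power expected from the clock bump alone (no voltage change):
print(f"From the clock bump alone: ~{p0 * (f1 / f0):.0f} W")

# The extra power beyond that implies roughly this much more voltage:
v_ratio = ((p1 / p0) / (f1 / f0)) ** 0.5
print(f"Implied voltage increase: ~{(v_ratio - 1) * 100:.0f}%")

# Perf/W change if performance scaled perfectly with clock (optimistic):
print(f"Perf/W change: {((f1 / f0) / (p1 / p0) - 1) * 100:+.0f}%")

Even with performance assumed to scale perfectly with clock, perf/W goes backwards by roughly 9%, which is the "clocked past its optimal zone" behaviour being described.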
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Absolutely not true, if you remember any of the history.

Pitcairn was very efficient.

The 7950 and 7970 were very efficient too, actually almost the same as Kepler.

It did not get bad until the GHz Edition and Hawaii came along.

Though even Hawaii, if you compare it to its competitor, big Kepler, was actually close. These days Hawaii destroys big Kepler at similar power usage.

With Maxwell, NV made a big leap forward, leaving AMD behind. But Fury & Nano actually brought them back very close; depending on the review, the Asus Fury & Nano beat Maxwell on perf/W.

Even Fury X vs. 980 Ti: similar power usage (~235W vs. 250W), similar performance. These were all on 28nm at TSMC, the same as NV's GPUs.

AMD went to Polaris, an enhanced GCN designed for perf/W, and they failed. I don't think it's the architecture when they've shown in the past that they can do it. The difference this time is the node.

I said performance parts. Pitcairn is not a performance part.

I could dig up the numbers, but we all know that when the GTX 680 (Kepler) came out, it was not only faster in the games of the day but also consumed less power than the 7970 (Tahiti). Just look at AT's own review - http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/19

Then with Hawaii, AMD somewhat went the other way compared to NVIDIA in terms of power consumption - http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review/15

When you look at the power efficiency of Fermi (40nm) -> Kepler (28nm) -> Maxwell (28nm) -> Pascal (16nm), each generation has shown significant perf/W gains (with uarch changes to get there). But in terms of Tahiti (28nm) -> Hawaii (28nm) -> Fiji/Tonga (28nm) -> Polaris (14nm), it's quite disappointing, because the gains have been relatively flat. They've done nothing substantial in the uarch department to address power consumption, other than minor tweaks plus HBM at one point.

GCN's power consumption doesn't seem to scale linearly (more like exponentially), because I'm thinking the low-to-mid-range Polaris/GCN4 parts will be quite good, like Pitcairn, in terms of power efficiency. But for the performance parts, the power characteristics get out of control. Like you mentioned, the only time they've made inroads here was thanks to HBM.

It's quite sad from an engineering point of view that a GTX 1070 has similar if not better power consumption figures than the RX 480, while being twice as fast in some cases. However, when you compare it to AMD's previous-gen Hawaii, it's a massive improvement; AT's own figures show total system power dropping from 428W (R9 390X) to 301W.

Ultimately though, most of that saving would have come from the node jump. And it's hard to ignore the fact that all of AMD's mid-to-high-end GCN products have been power hungry and underperforming versus the competition overall. The gap has widened even more with Pascal. We're also about to find out where things stand when the RX 480 goes up against the 1060.

Yet I guess it's hard to blame them; they can't afford to make huge architectural changes given their limited R&D budget.
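
As a rough illustration of what those system-level figures imply at the card level, here is a small sketch. The 428W and 301W numbers are the total-system figures cited above; the platform overhead and the relative-performance ratio are placeholder assumptions, not measurements.

Code:
# Converts the quoted total-system power figures into an approximate
# card-level perf/W comparison. PLATFORM_W and PERF_390X_VS_480 are
# assumptions for illustration only.

PLATFORM_W = 90.0          # assumed CPU/board/PSU overhead under gaming load
PERF_390X_VS_480 = 1.05    # assumed: the two cards roughly trade blows

card_390x = 428.0 - PLATFORM_W   # power attributed to the R9 390X
card_480  = 301.0 - PLATFORM_W   # power attributed to the RX 480

gain = (1.0 / card_480) / (PERF_390X_VS_480 / card_390x) - 1.0
print(f"Estimated RX 480 perf/W gain over the 390X: ~{gain * 100:.0f}%")

Change the assumptions and the exact number moves, but it stays in the "big generational improvement, still well behind Pascal" territory described above.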
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
It's further disappointing, because if a 232 mm^2 Polaris chip is pulling 150W+ there's no hope for AMD to be competitive with Vega unless something changes.

Right. There were people in this thread claiming that perf/area was the critical metric when extrapolating to Vega. I kept telling them that, from an architecture perspective, perf/power is much more critical, and it is perf/power where AMD needs to succeed going from Polaris to Vega.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
That's not true. I'm usually very favorable towards AMD, but this is a failure. 14nm FF, on such a small chip, should not be chewing through 150W during gaming.

Compare it to its predecessor in this class; anyone remember Pitcairn, the 7850 and 7870?

Polaris 10 is worse when you factor in the node jump.

That tells me GloFo failed them. Call it what it is, but FinFET should NOT be struggling to hit 1.26GHz with massive power consumption. The entire point of the FinFET transistor is to minimize current leakage, allowing higher clock speeds and better voltage tolerances.

Now, I would have considered the RX 480 a much better product if its gaming load were ~100-110W. That would have implied AMD low-balled the clocks to hit perf/W, leaving more performance on the table for overclockers and custom cards. But it's right at the edge.

As a gamer, I still think it's a great GPU at the price.

* I bought 2x RX 480 8GB, $379 AUD each.

GTX 970 3.5GB & AMD 390 8GB are ~$449 AUD. GTX 980 4GB is $629 AUD (got a price cut last week from its usual $749!!).

390X 8GB is $529 AUD. 1070s are ~$779 here and 1080s are $1199, ridiculous prices.

Logically, you can't say the RX 480 is a bad GPU for the price. It is good for gamers to have that performance class down at mainstream prices.

But as a tech enthusiast, I am very disappointed to see such a small FinFET chip suck down that much power. To me, that's a failure, most likely GloFo's, but in the final analysis AMD takes the blame, because they should have known better and been more honest about expectations.

You don't get to stand there and claim 2.8x perf/W and talk about all the efficiency and coolness you get from 14nm FF when the card runs at 82C and at the limits of its PCB's power delivery.

I can tell you right now, with facts, that 1.26GHz is beyond the optimal clocks for this process. Why? Look here:

1.4GHz OC with an aftermarket cooler:

http://oc.jagatreview.com/2016/06/t...deon-rx480-ke-1-4ghz-dengan-cooler-3rd-party/

Power usage jumps to 183W, which is insane for such a small clock speed bump.

All of this screams that AMD was forced to clock it outside its optimal zone, because the node is giving them such a bad result.

I raised these points in the other thread and some of you accused me of being negative on AMD (falsely, even). But AMD doesn't get to go to a new node AND HYPE UP efficiency gains and talk about 2.8x perf/W while ending up so far behind Pascal on perf/W.

This is what my logic tells me. I don't need to sugar-coat the analysis, because I am not a blind fanboy.

This is all 100% true. I said the same myself; at $199 it's a good buy for the card, but the chip itself is an epic engineering failure vs. the competition. AMD would have been best served by scrapping Vega long ago and concentrating on making Polaris 10 and Polaris 11 much more efficient than they are now.
 
Feb 19, 2009
10,457
10
76
I couldn't have said it better. I'll root for AMD for no other reason than I tend to like an underdog, but this release is unacceptable in my opinion.

Multiple reviewers have had cards that draw power above spec in a way that could be dangerous.

It's further disappointing, because if a 232 mm^2 Polaris chip is pulling 150W+ there's no hope for AMD to be competitive with Vega unless something changes. I hope that it's just an issue with GF and that AMD can sort it out, either by changing to a different fab or just waiting for the process to mature.

I wouldn't link it to Vega. New uarch & HBM2 (Vega on AMD's roadmap is a step up in perf/W compared to Polaris). We don't know where it's going to be produced either; GF would be a fail for a big chip.

But for me as a gamer, I just hope titles like BF1, Deus Ex: MD and this TW: Warhammer DX12 patch bring CF support.
 
Feb 19, 2009
10,457
10
76
This is all 100% true. I said the same myself; at $199 it's a good buy for the card, but the chip itself is an epic engineering failure vs. the competition. AMD would have been best served by scrapping Vega long ago and concentrating on making Polaris 10 and Polaris 11 much more efficient than they are now.

Yes, and when GP106 arrives, imagine half a GP104: it's going to be around 100W gaming load, with performance somewhere between a 970 and a 980.

It's going to basically match the RX 480 and beat it in perf/W by 50%.
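
A quick check of that projection, using the post's own estimates (neither power figure is a measurement):

Code:
# If a hypothetical GP106 card matched RX 480 performance at ~100 W while
# the RX 480 draws ~150 W in games, the perf/W gap follows directly.
rx480_w, gp106_w = 150.0, 100.0
print(f"Projected perf/W lead at equal performance: "
      f"{(rx480_w / gp106_w - 1) * 100:.0f}%")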

What does this equate to? For those of you who still don't appreciate why I keep raising perf/W...

THINK about what made the 750 Ti and 950/960 so popular. It's not just the NV branding. It's because they were very low power. They could go into any crap OEM system and be just fine; no PSU upgrade needed.

You simply cannot say that about the 380 power class that the RX 480 falls into. It's a tier higher.

The 1060 will automatically win that mainstream battle, because the RX 480 is power hungry for the mainstream. It will be the 960 vs. 380 situation all over again: similar performance, but the 960 won that fight on perf/W.

As for Vega/Polaris: Polaris is IP8, Vega is IP9, a new architecture. Don't draw conclusions about Vega from Polaris; they won't be accurate. They also need to hurry up on Vega, because once the 1060 is out, RX 480 sales will die down and they'll be competing against an NV lineup with strong products in the mainstream, mid-range and performance segments.
 

tvdang7

Platinum Member
Jun 4, 2005
2,242
5
81
It's been maybe 6 years since I built a computer, and I was ready this summer to build around this 480. The performance is a little underwhelming. For someone building new, we're stuck in a hard spot: the 480 is not faster than a 290X/390X, and the 1070/1080 aren't easily obtainable, price- or supply-wise. I want to get a 2K (1440p) monitor, but that leaves me no choice but to buy a last-gen Fury X (which I do not want, since it sucks so much power) or a 980/980 Ti. If AMD were to release, or at least announce, the 490, I wouldn't be as disappointed, but for now all I can do is wait and see how the 1060 turns out.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Well, I'm thinking Vega's improved perf/W will primarily come from HBM2; GCN5 will just be further minor tweaks. That's my expectation for it.
 
Feb 19, 2009
10,457
10
76
It's been maybe 6 years since I built a computer, and I was ready this summer to build around this 480. The performance is a little underwhelming. For someone building new, we're stuck in a hard spot: the 480 is not faster than a 290X/390X, and the 1070/1080 aren't easily obtainable, price- or supply-wise. I want to get a 2K (1440p) monitor, but that leaves me no choice but to buy a last-gen Fury X (which I do not want, since it sucks so much power) or a 980/980 Ti. If AMD were to release, or at least announce, the 490, I wouldn't be as disappointed, but for now all I can do is wait and see how the 1060 turns out.

Don't waste your money on last-gen. Maxwell is not going to age well.

The 1080 is around 25% faster than the 980 Ti in current games, but look at the newest one, Mirror's Edge: it's already 40% faster there. NV hasn't optimized drivers for the older generation in new games since Kepler.

Don't waste money on a Fury X; 4GB is just not future-proof for 1440p.

The RX 480 is actually about 390X level in newer games, slower in older games. It's a strange situation, but after reading a lot of reviews I noticed it runs better in recent titles, especially NV-sponsored ones.

This suggests there's room to improve over time via drivers, i.e. it *should* age well.

I was in the same spot last night, deciding what to buy since I sold my 290X for $250 USD. There's nothing else competitive in the price range. The only step up that can justify itself is a custom 1070.

So at the moment, either go with the RX 480, or pay more for a 1070.
 
Feb 19, 2009
10,457
10
76
Well, I'm thinking Vega's improved perf/W will primarily come from HBM2; GCN5 will just be further minor tweaks. That's my expectation for it.

Doesn't really matter where they get it from, but Vega is due for a big jump in perf/W over Polaris.

That's why I'm not concerned about Vega. Polaris is just what we have now, and it's decent for gamers but crap for tech enthusiasts and forum warriors.
 
Feb 19, 2009
10,457
10
76
I said performance parts. Pitcairn is not a performance part.

I could dig up the numbers, but we all know that when the GTX 680 (Kepler) came out, it was not only faster in the games of the day but also consumed less power than the 7970 (Tahiti).

That was at its launch, before AMD's Omega drivers boosted performance across the board by 5-20% in many games.

[performance charts]

Power:

[power consumption charts]
I had a reference 7970 for mining; I know it wasn't power hungry.

In the charts above you can see the 7850 and 7870. These are basically the Polaris 10 equivalents.

See how much less power those mainstream chips used?

This is the problem with the RX 480. It's targeted at the mainstream in price, but the mainstream does NOT appreciate such high power usage. It's bad for OEM systems, where gamers want to plug in a new GPU without hassles.

Blanket statements like "GCN is power hungry" are simply not true. Pitcairn and Tahiti were excellent, and still are today. Hawaii was decent (the custom variants that didn't throttle) compared to the Titan and 780/Ti, its intended competitors, and it obviously aged even better. It only looked bad once Maxwell arrived, and even then Fiji was decent and Nano was great.

With the switch to Polaris, whether due to 14nm FF or to AMD, performance per TFLOP and performance per watt have regressed relative to what the node should deliver.
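
To put the raw-throughput side of that in numbers, here is a small sketch comparing Pitcairn and Polaris 10. The shader counts and clocks are the official specs; the board-power column uses the official TDP figures as rough stand-ins for gaming power, so treat the perf/W column as approximate.

Code:
# FP32 throughput and rough throughput-per-watt for Pitcairn vs Polaris 10.
# Board-power values are official TDPs used as approximations, not
# measured gaming power.

cards = {
    # name: (stream processors, clock in GHz, board power in W)
    "HD 7850 (Pitcairn)":  (1024, 0.860, 130.0),
    "HD 7870 (Pitcairn)":  (1280, 1.000, 175.0),
    "RX 480 (Polaris 10)": (2304, 1.266, 150.0),
}

for name, (sp, ghz, watts) in cards.items():
    tflops = sp * 2 * ghz / 1000.0     # 2 FLOPs per SP per clock (FMA)
    print(f"{name}: {tflops:.2f} TFLOPs, {tflops / watts * 1000:.0f} GFLOPs/W")

On paper the throughput-per-watt jump looks healthy; the complaint above is that delivered game performance per TFLOP, and per watt, hasn't kept pace with that paper figure given a full node shrink plus FinFET.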
 

Elixer

Lifer
May 7, 2002
10,376
762
126
That tells me GloFo failed them. Call it what it is, but FinFET should NOT be struggling to hit 1.26GHz with massive power consumption. The entire point of the FinFET transistor is to minimize current leakage, allowing higher clock speeds and better voltage tolerances.
...
All of this screams that AMD was forced to clock it outside its optimal zone, because the node is giving them such a bad result.
I agree, something isn't right with the GloFo 14nm LPP process.
FinFETs are supposed to bring a big reduction in power, so what forced AMD to pump so much voltage into these chips?
If you look at the GPU-Z shots, most of the cards are revision C7, so it looks like AMD tried seven times to get this power monster under control (I doubt it was anything else).
Raja was saying they had Polaris 11 up and running first, then Polaris 10, so I think it's safe to assume Polaris 11 used the 14nm LPE process to hit the power requirements needed for the OEM deadline (which has already passed), with 14nm LPP used for Polaris 10.

To make this easier to understand for people who love to flame: the issue we're talking about here doesn't mean these cards use more power than the cards they replace. They don't; it's lower.

The issue is that the 14nm node was supposed to deliver greater power savings overall, and we aren't seeing savings that big from GloFo's 14nm LPP process. In fact, it looks like the same power curve as the 28nm process AMD's CPUs are on, which is why this is puzzling.

The 480 is still a good card for its target audience, but it should have been using even less power than what we're seeing.

Samsung has made products on 14nm LPP & LPE, and from what I read they aren't having the same power issues, but then again, their chips aren't as big as what AMD is building.

The question is why: what is it that needs so much more power than expected from the new 14nm node?
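
To illustrate the shape of the problem being described, rather than any real 14LPP data, here is a toy power model: once clocks push past a chip's efficient range, each extra MHz needs extra voltage, and power climbs much faster than frequency. Every constant below is invented for illustration.

Code:
# Toy model of GPU board power vs. clock. Dynamic power ~ C * V^2 * f,
# plus a crude leakage term that also grows with voltage. The constants
# are made up and do not characterize GloFo 14nm LPP.

def board_power(freq_ghz, c_eff=90.0, v_min=0.95, v_slope=0.6,
                knee_ghz=1.1, leak_w_per_v=30.0):
    """Very rough board-power estimate (W) for a given core clock (GHz)."""
    # Voltage stays near v_min up to the 'knee', then climbs with frequency.
    voltage = v_min + v_slope * max(0.0, freq_ghz - knee_ghz)
    dynamic = c_eff * voltage ** 2 * freq_ghz   # ~C * V^2 * f
    leakage = leak_w_per_v * voltage            # static power, rises with V
    return dynamic + leakage

for f in (1.0, 1.1, 1.2, 1.266, 1.35, 1.4):
    print(f"{f:.3f} GHz -> ~{board_power(f):.0f} W")

The exact numbers are meaningless; the point is the shape of the curve: small clock gains past the knee cost disproportionate amounts of power, which matches the behaviour being complained about in this thread.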
 

Rezist

Senior member
Jun 20, 2009
726
0
71
That's not true. I'm usually very favorable towards AMD, but this is a failure. 14nm FF, on such a small chip, should not be chewing through 150W during gaming.

Compare it to its predecessor in this class; anyone remember Pitcairn, the 7850 and 7870?

Polaris 10 is worse when you factor in the node jump.

That tells me GloFo failed them. Call it what it is, but FinFET should NOT be struggling to hit 1.26GHz with massive power consumption. The entire point of the FinFET transistor is to minimize current leakage, allowing higher clock speeds and better voltage tolerances.

Now, I would have considered the RX 480 a much better product if its gaming load were ~100-110W. That would have implied AMD low-balled the clocks to hit perf/W, leaving more performance on the table for overclockers and custom cards. But it's right at the edge.

As a gamer, I still think it's a great GPU at the price.

* I bought 2x RX 480 8GB, $379 AUD each.

GTX 970 3.5GB & AMD 390 8GB are ~$449 AUD. GTX 980 4GB is $629 AUD (got a price cut last week from its usual $749!!).

390X 8GB is $529 AUD. 1070s are ~$779 here and 1080s are $1199, ridiculous prices.

Logically, you can't say the RX 480 is a bad GPU for the price. It is good for gamers to have that performance class down at mainstream prices.

But as a tech enthusiast, I am very disappointed to see such a small FinFET chip suck down that much power. To me, that's a failure, most likely GloFo's, but in the final analysis AMD takes the blame, because they should have known better and been more honest about expectations.

You don't get to stand there and claim 2.8x perf/W and talk about all the efficiency and coolness you get from 14nm FF when the card runs at 82C and at the limits of its PCB's power delivery.

I can tell you right now, with facts, that 1.26GHz is beyond the optimal clocks for this process. Why? Look here:

1.4GHz OC with an aftermarket cooler:

http://oc.jagatreview.com/2016/06/t...deon-rx480-ke-1-4ghz-dengan-cooler-3rd-party/

Power usage jumps to 183W, which is insane for such a small clock speed bump.

All of this screams that AMD was forced to clock it outside its optimal zone, because the node is giving them such a bad result.

I raised these points in the other thread and some of you accused me of being negative on AMD (falsely, even). But AMD doesn't get to go to a new node AND HYPE UP efficiency gains and talk about 2.8x perf/W while ending up so far behind Pascal on perf/W.

This is what my logic tells me. I don't need to sugar-coat the analysis, because I am not a blind fanboy.

This is probably the answer right here; they should have gone to TSMC. It probably also means Zen is on hold until this is fixed, and since it's out of AMD's hands they can't release Zen. GF will probably kill AMD.
 
Feb 19, 2009
10,457
10
76
I agree, something isn't right with the GloFo 14nm LPP process.
FinFETs are supposed to bring a big reduction in power, so what forced AMD to pump so much voltage into these chips?
If you look at the GPU-Z shots, most of the cards are revision C7, so it looks like AMD tried seven times to get this power monster under control (I doubt it was anything else).
Raja was saying they had Polaris 11 up and running first, then Polaris 10, so I think it's safe to assume Polaris 11 used the 14nm LPE process to hit the power requirements needed for the OEM deadline (which has already passed), with 14nm LPP used for Polaris 10.

To make this easier to understand for people who love to flame: the issue we're talking about here doesn't mean these cards use more power than the cards they replace. They don't; it's lower.

The issue is that the 14nm node was supposed to deliver greater power savings overall, and we aren't seeing savings that big from GloFo's 14nm LPP process. In fact, it looks like the same power curve as the 28nm process AMD's CPUs are on, which is why this is puzzling.

The 480 is still a good card for its target audience, but it should have been using even less power than what we're seeing.

Samsung has made products on 14nm LPP & LPE, and from what I read they aren't having the same power issues, but then again, their chips aren't as big as what AMD is building.

The question is why: what is it that needs so much more power than expected from the new 14nm node?

Definitely, FinFET should behave much better than this. It could be GF, it could be an AMD design failure. We don't know for sure, but one can lean towards GF for historical reasons.

But here's a thought: Apple. There are two scenarios here, and I don't know which is truer; I'll just mention one.

Apple takes the best-binned chips on leakage and power metrics, because they need them to fit in the new Macs. The rest of the chips are the ones that didn't make the cut, so they have worse perf/W.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
I agree, something isn't right with the GloFo 14nm LPP process.
FinFETs are supposed to bring a big reduction in power, so what forced AMD to pump so much voltage into these chips?
If you look at the GPU-Z shots, most of the cards are revision C7, so it looks like AMD tried seven times to get this power monster under control (I doubt it was anything else).
Raja was saying they had Polaris 11 up and running first, then Polaris 10, so I think it's safe to assume Polaris 11 used the 14nm LPE process to hit the power requirements needed for the OEM deadline (which has already passed), with 14nm LPP used for Polaris 10.

To make this easier to understand for people who love to flame: the issue we're talking about here doesn't mean these cards use more power than the cards they replace. They don't; it's lower.

The issue is that the 14nm node was supposed to deliver greater power savings overall, and we aren't seeing savings that big from GloFo's 14nm LPP process. In fact, it looks like the same power curve as the 28nm process AMD's CPUs are on, which is why this is puzzling.

The 480 is still a good card for its target audience, but it should have been using even less power than what we're seeing.

Samsung has made products on 14nm LPP & LPE, and from what I read they aren't having the same power issues, but then again, their chips aren't as big as what AMD is building.

The question is why: what is it that needs so much more power than expected from the new 14nm node?

67DF:C7 is the variant ID of the chip, not the stepping. It's the same as saying "RX 480"; 67DF:C4 is likely the RX 470, for instance.

I'm not totally convinced this is completely, or even at all, a GloFo issue. Whatever bug is causing the card to overdraw juice from the PCI-E slot could very well be pulling gobs more power than the chip/board actually needs. AMD is already looking into it. Possibly a bug in that new BTC function? A BIOS update is probably coming shortly.
 

lukart

Member
Oct 27, 2014
172
8
46
LOOOL... but the 480 is about to fight the 1060... however, I can't see the 1060 winning, since it's a 50% cut of the 1080... that would be worse than a 980!

PS: Is it just me, or has Intel Iris Pro faded into eternal oblivion?


From what I know, it will beat the 980... so yeah, AMD might be in trouble.
I've also heard $239, but I'm not 100% positive on that one.

I was right a month ago about P10 being slower than the 390X; guess whether I'm going to be right again.
 

busydude

Diamond Member
Feb 5, 2010
8,793
5
76
I know there is a lot of heated discussion going on, but I am getting myself an RX 480.
 