AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
I don't agree that GCN is a good architecture. It WAS good in the Fermi and Kepler days, but it is clearly inferior to Maxwell/Pascal by a large margin. It's as if NVIDIA had stayed on Fermi for 6 years and only added some new stuff like delta color compression. That's what AMD is doing: staying on their Fermi for 6 years now.
They need to change it, because GCN is an old, power-hungry architecture like Fermi was.
What part of GCN do you want changed? Because GCN is a bunch of IP that constantly gets updated. Memory controller? Media blocks? Shaders?
The shaders and front end are both the most dated parts of the architecture, since while they were tweaked, they never got changes that have large ripple effects throughout the device, meaning nothing too major. But supposedly that's what's happening with Vega. So what exactly do you want them to do?

And in case you didn't realize, Pascal has its roots in G80. That's right, the good ol' 8800 series of cards. Nothing gets done from scratch, and if you look at architectures backwards, they're all built the way you describe. New things get added, things get adjusted to fit in the new things, and refinements of existing things are done. It's never just throw the previous design out the door and hope for the best next time around. You solve the problems presented previously and move forward. (With the exceptions I described previously)
If anything GCN is relatively more modern, as AMD moved to a more compute capable architecture later than NVIDIA, meaning if you trace the architectures back, GCN is relatively younger.

GCN is good precisely because AMD can use it to compete with Tesla, Quadro and Geforce - all with one architecture. Just like on the CPU front with Zen.
Pascal is no different in that regard. The differences between GP102 and GP100 aside from the memory controllers and the cores implemented (FP16 and FP64) can be boiled down to more compartmentalization to help utilization in workloads that would otherwise have problems, by having 64 CUDA cores per SM instead of 128. Gaming doesn't really have that problem, so they saved die space by staying with 128 CUDA cores per SM like Maxwell. They could have done it AMD style and just have GP100 on the gaming market, they would just eat lower margins like AMD does.
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
CUDA is a fast-moving target. The incompatibility percentage increases with each new CUDA release.
Thus it's a useless effort...
A lot of the scientific community, especially individuals and in particular students, for whom coding in a familiar C++ environment is far easier than coding in CUDA due to various constraints, will find this extremely useful. In spite of what you might think, CUDA adoption outside of large groups and collaborations can be practically nonexistent, depending on which field you're in.

Why do you think FORTRAN is still in use today? It's not because it's the fastest, but because it's the easiest to pick up and use for scientists who care about the computations first and foremost, not about implementation details.
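
To illustrate the point, here is a minimal sketch (purely illustrative, not tied to any specific project mentioned in this thread) of the kind of numerical kernel a scientist can write in plain standard C++ (C++17 parallel algorithms), with none of the CUDA-specific machinery of kernels, launch grids, or explicit device memory management:

Code:
    // SAXPY (y = a*x + y) in standard C++17, parallelized via execution policies.
    #include <algorithm>
    #include <execution>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t n = 1'000'000;
        const double a = 2.0;
        std::vector<double> x(n, 1.0), y(n, 3.0);

        // The standard library / runtime decides how to parallelize this;
        // the scientist only writes the math.
        std::transform(std::execution::par, x.begin(), x.end(), y.begin(), y.begin(),
                       [a](double xi, double yi) { return a * xi + yi; });

        std::cout << y[0] << '\n';  // prints 5
    }

The equivalent hand-written CUDA version needs a kernel, explicit host/device allocations and copies, and launch-configuration tuning, which is exactly the kind of implementation detail the post above is talking about.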
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Pascal is no different in that regard. The differences between GP102 and GP100 aside from the memory controllers and the cores implemented (FP16 and FP64) can be boiled down to more compartmentalization to help utilization in workloads that would otherwise have problems, by having 64 CUDA cores per SM instead of 128. Gaming doesn't really have that problem, so they saved die space by staying with 128 CUDA cores per SM like Maxwell. They could have done it AMD style and just have GP100 on the gaming market, they would just eat lower margins like AMD does.
Yes, it was not until Maxwell, and to a much lesser extent Kepler, that NVIDIA started using lightly modified versions of the same underlying architecture for both the gaming and HPC markets. G80 and Fermi were always compute-first architectures.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
GPUs aren't good for mining at all.
Sure, that's why GPUs are used to do it... Clearly you don't realize that the term GPU is antiquated and not even accurate for the way they currently function. Modern GPUs are general processors that are good at operations that benefit from their parallelism. Currencies that can be mined on GPUs were specifically developed to be ASIC-resistant; they were designed to run on GPUs. Your statement is laughably ignorant.

You need a gazillion of them to even get a decent profit. In addition to that, you can't buy GPUs for mining now and think that in 6-10 months you can break even when difficulty is skyrocketing.
This is a gross oversimplification. Generally speaking, the price of cryptos goes up as the difficulty goes up. Perhaps it's more that the price tends to catch up with the difficulty. Either way, that's why there is a mining craze right now. You don't need a gazillion of them to make a profit. Smaller "home rigs" that people have been mining on since ETH launched are still viable. They are also the ones making a MINT right now, because their wallets are nice and fat. Again, pure ignorance on your part.

The people that mine are either big players, or the same people who think they're stock investors and also mine: running 1 GPU 24/7 and making like $50 a month. Woohoo, you are awesome lol. Mining is too damn painful unless you rent a room in a warehouse. Running a leaf blower, or like 10 of these loud GPUs, 24/7 in your house, with thousands of watts spewing into your house, and maybe you break even in half a year or even a year. Yeah, no thank you.

Again, it depends when you started mining. Also, it IS awesome when a computer can literally print money. I have a wallet of ETH that I mined on a single GPU when ETH first turned on. I have a bunch more money now that the price is up; let me just say it's a lot more than $50. Have fun with your storytelling. It's preventing you from making money. I find it laughable that every one of your comments on this forum is nothing more than ignorant storytelling to fit whatever narrative you want to push. There is a reason why few people bother to respond to you. My advice: more research or experience, less posting.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
And in case you didn't realize, Pascal has its roots in G80. That's right, the good ol' 8800 series of cards. Nothing gets done from scratch, and if you look at architectures backwards, they're all built the way you describe. New things get added, things get adjusted to fit in the new things, and refinements of existing things are done. It's never just throw the previous design out the door and hope for the best next time around. You solve the problems presented previously and move forward. (With the exceptions I described previously)
If anything GCN is relatively more modern, as AMD moved to a more compute capable architecture later than NVIDIA, meaning if you trace the architectures back, GCN is relatively younger.

Exactly. Nothing gets done from scratch. These companies tweak and optimize each new generation of CPUs or GPUs, learning from their previous generations. Even clean-sheet CPU designs like Zen reuse a lot of the useful, well-working pieces of existing designs, as AMD did by using parts of their small-core Jaguar and big-core Bulldozer.
 

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
What part of GCN do you want changed? Because GCN is a bunch of IP that constantly gets updated. Memory controller? Media blocks? Shaders?
The shaders and front end are both the most dated parts of the architecture, since while they were tweaked, they never got changes that have large ripple effects throughout the device, meaning nothing too major. But supposedly that's what's happening with Vega. So what exactly do you want them to do?

And in case you didn't realize, Pascal has its roots in G80. That's right, the good ol' 8800 series of cards.
Nope, G80 was the first GPU with unified shaders, and they were clocked at more than 2x the rest of the GPU (576 MHz for the GPU, but the shaders ran at 1350 MHz). They kept that until Kepler, where they changed back to a single clock for the whole GPU and went from 512 SPs at a high clock in Fermi (744 MHz GPU / 1544 MHz shaders) to 2880 SPs below 1 GHz with the 780 Ti, with the shaders at the same clock as the rest of the GPU.

GCN is just bad. Compare Vega to GP102:
It is bigger, so it should be faster than the full GP102, which is only the Titan Xp. But it won't be faster than the Titan Xp; it will probably be slower by 30-40%.
It uses HBM2, so it should use way less power, because GDDR5X is more power hungry. But it won't; Vega will probably eat around 50 W more (300 W TDP vs 250 W).
So as you can see, GCN is inferior and old. It can't keep up with the latest NV architecture, and they need to change it radically, not just add new stuff to 6-year-old GCN.

They need the same kind of change as Fermi to Kepler, not just new stuff added to old GCN.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
bla bla bla personal attacks.
bla bla bla clearly upset and angry.

Here, let me quote today's Ethereum mining situation:
An RX 580 running 24/7 for an entire month will give you a profit of about $50. That is taking today's value into account.

RX 580s sell for about $500 today. But let's go down to $300, just to say you made a great deal. It will take you 6 months (like I said) to pay for the GPU, running it all the time. Getting one for $500 will take you 10 months...
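
As a quick sanity check on that arithmetic (a rough sketch using only the post's own figures of about $50/month per RX 580 and a $300 or $500 card; electricity, difficulty growth, and coin price swings are ignored):

Code:
    // Naive break-even estimate using the figures quoted in the post above.
    #include <iostream>

    int main() {
        const double monthly_profit_usd = 50.0;          // the post's RX 580 estimate
        const double card_prices_usd[] = {300.0, 500.0}; // "great deal" vs. today's price

        for (double price : card_prices_usd) {
            std::cout << "$" << price << " card: ~" << price / monthly_profit_usd
                      << " months to break even\n";      // 6 and 10 months, as claimed
        }
    }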

So 6 months have passed and now you are beginning to make money.
Whoopdidooo, how awesome.

Keeping an eye on the GPU for power loss, replacing dead GPUs only to have to mine for more months to make up the cost again, heat spewed into your house, leaf-blower noise all the time...

What a silly thing to do as an amateur.
But I guess this makes you feel like you are a "miner".
The same as the people who think they invested in AMD and are stock owners when they bought stock for $20. What a big player you are.

In the meantime you have the big players with hundreds of these GPUs running in a warehouse, with proper ventilation and power available, who can make all the noise they want without bothering anyone, and who are truly making a profit once they have passed the break-even point.
All laughing at the amateurs who do this inside their houses.

No, please. The only smart people are the ones who invest in cryptocurrency as its price goes up along with the difficulty, who don't do anything other than let other people mine for them, or the people above with their huge farms of mining equipment.

Neither of you amateur miners impresses anyone but yourselves.
If you think it's fun, by all means keep doing what you love, occupying the GPUs for nothing but mining, but you are hardly a "miner".
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Nope, G80 was the first GPU with unified shaders, and they were clocked at more than 2x the rest of the GPU (576 MHz for the GPU, but the shaders ran at 1350 MHz). They kept that until Kepler, where they changed back to a single clock for the whole GPU and went from 512 SPs at a high clock in Fermi (744 MHz GPU / 1544 MHz shaders) to 2880 SPs below 1 GHz with the 780 Ti, with the shaders at the same clock as the rest of the GPU.

GCN is just bad. Compare Vega to GP102:
It is bigger, so it should be faster than the full GP102, which is only the Titan Xp. But it won't be faster than the Titan Xp; it will probably be slower by 30-40%.
It uses HBM2, so it should use way less power, because GDDR5X is more power hungry. But it won't; Vega will probably eat around 50 W more (300 W TDP vs 250 W).
So as you can see, GCN is inferior and old. It can't keep up with the latest NV architecture, and they need to change it radically, not just add new stuff to 6-year-old GCN.

They need the same kind of change as Fermi to Kepler, not just new stuff added to old GCN.
There's no correlation between having a bigger die and getting more performance, within a generation. GTX 285 wasn't consistently faster than the HD 4890, given the differences between their die sizes. Same with HD 5870 and GTX 480, or with the HD 6970 and GTX 580.

GCN dies are bigger because there's no bifurcation between a compute-oriented GPU and a gaming-oriented GPU. It seems you're unable to grasp this simple fact.
 
Last edited:

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
Here, let me quote today's Ethereum mining situation:
An RX 580 running 24/7 for an entire month will give you a profit of about $50. That is taking today's value into account.

RX 580s sell for about $500 today. But let's go down to $300, just to say you made a great deal. It will take you 6 months (like I said) to pay for the GPU, running it all the time. Getting one for $500 will take you 10 months...

So 6 months have passed and now you are beginning to make money.
Whoopdidooo, how awesome.

Keeping an eye on the GPU for power loss, replacing dead GPUs only to have to mine for more months to make up the cost again, heat spewed into your house, leaf-blower noise all the time...

What a silly thing to do as an amateur.
But I guess this makes you feel like you are a "miner".
The same as the people who think they invested in AMD and are stock owners when they bought stock for $20. What a big player you are.

In the meantime you have the big players with hundreds of these GPUs running in a warehouse, with proper ventilation and power available, who can make all the noise they want without bothering anyone, and who are truly making a profit once they have passed the break-even point.
All laughing at the amateurs who do this inside their houses.

No, please. The only smart people are the ones who invest in cryptocurrency as its price goes up along with the difficulty, who don't do anything other than let other people mine for them, or the people above with their huge farms of mining equipment.

Neither of you amateur miners impresses anyone but yourselves.
If you think it's fun, by all means keep doing what you love, occupying the GPUs for nothing but mining, but you are hardly a "miner".


Your argument is fairly valid for today's state of crypto mining but that hasn't always been the case. Do you have any idea how many of us have made hundreds of thousands, and in some cases millions, off crypto mining in our homes? Yeah, late adopters are kidding themselves if they think they'll make it big but BTC and ETH have been very good to many of us. I personally have seen over a 20x ROI on ETH thus far (I plan to mine for another year or so with my current hardware) and I had a similar experience with BTC. There's no need for the negativity and insults.
 
Reactions: Cloudfire777

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I really don't know what the hell is going on with Vega. Clock for clock with Fiji, and with 40% (4.2B) more transistors, Vega is showing absolutely zero performance gain? What are those extra transistors for?
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
Nope, G80 was the first GPU with unified shaders, and they were clocked at more than 2x the rest of the GPU (576 MHz for the GPU, but the shaders ran at 1350 MHz). They kept that until Kepler, where they changed back to a single clock for the whole GPU and went from 512 SPs at a high clock in Fermi (744 MHz GPU / 1544 MHz shaders) to 2880 SPs below 1 GHz with the 780 Ti, with the shaders at the same clock as the rest of the GPU.

GCN is just bad. Compare Vega to GP102:
It is bigger, so it should be faster than the full GP102, which is only the Titan Xp. But it won't be faster than the Titan Xp; it will probably be slower by 30-40%.
It uses HBM2, so it should use way less power, because GDDR5X is more power hungry. But it won't; Vega will probably eat around 50 W more (300 W TDP vs 250 W).
So as you can see, GCN is inferior and old. It can't keep up with the latest NV architecture, and they need to change it radically, not just add new stuff to 6-year-old GCN.

They need the same kind of change as Fermi to Kepler, not just new stuff added to old GCN.
How does your little clockspeed story go against what I said? You can still trace things backwards, with the architectures building upon what came before.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Your argument is fairly valid for today's state of crypto mining but that hasn't always been the case. Do you have any idea how many of us have made hundreds of thousands, and in some cases millions, off crypto mining in our homes? Yeah, late adopters are kidding themselves if they think they'll make it big but BTC and ETH have been very good to many of us. I personally have seen over a 20x ROI on ETH thus far (I plan to mine for another year or so with my current hardware) and I had a similar experience with BTC. There's no need for the negativity and insults.
I was actually thinking about Bitcoin when the whole thing was new. I had the hardware, i.e. a powerful GPU, but didn't bother because I thought it was something that would never catch on. I was also ready to pull the trigger on one of the first ASICs that mined like crazy. If I had purchased one, I would have been one of the few who actually got hold of one, since the manufacturers only delivered a handful to the first couple of waves of people who ordered. After that they held on to them and mined for themselves, or just scammed people who thought they had ordered the biggest money-printing device on the planet but never got a delivery.

Still kicking myself for not going for Bitcoin when it was new.

Yeah, I don't disagree at all that early adopters could make a lot of money. I say could, because I know some who had a lot of Bitcoin but sold when they were worth like $20. The rich people are those who held on to them. Clever bastards.
 

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
How does your little clockspeed story go against what I said? You can still trace things backwards, with the architectures building upon what came before.
When was the last time AMD did something like the Fermi-to-Kepler changes? Well, it was 6 years back, when they created GCN and moved from VLIW to GCN. Vega is basically 2x a 7970 plus some added stuff on top.
Fermi to Kepler was a totally different architecture.
And AMD needs that again, because GCN is old and it's not working for them. They need something completely new.
 
Reactions: crisium

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
There's no correlation between having a bigger die and getting more performance, within a generation. GTX 285 wasn't consistently faster than the HD 4890, given the differences between their die sizes. Same with HD 5870 and GTX 480, or with the HD 6970 and GTX 580.

GCN dies are bigger because there's no bifurcation between a compute-oriented GPU and a gaming-oriented GPU. It seems you're unable to grasp this simple fact.
Back then ATI was waaaaaay ahead of NV, because VLIW was better than anything NV had at the time, so yes, you just said it: ATI was better back then, and now it's NV who is better, because GCN is old and not working for AMD.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
videocardz.com
AMD Radeon Vega Frontier News Roundup
https://videocardz.com/70837/amd-radeon-vega-frontier-news-roundup
So far, AMD has not confirmed the exact die size of Vega. Raja Koduri did mention that Vega 10 die is smaller than 500 mm2 and later in a tweet, he suggested that the actual size is the closest perfect square to that number. This means 484 mm2. The only problem is: it was never officially confirmed.



Ryan Shrout reached out to AMD to confirm if the die is really 484 mm2, but yet again, AMD did not confirm it. There’s absolutely no reason to keep this information secret anymore. Same applies to the number of transistors.

I made this chart to give you a comparison between the last-gen 4096-core part and the current-gen 14 nm GPU. While the size of the GPU has doubled compared to Polaris, the core count did not. The density of stream processors is lower, but thanks to the higher clock speed the actual performance per mm2 is closer to the RX 580. If we were to assume the same transistor density as the 14 nm Polaris part, we would expect around 11 billion transistors.

AMD VEGA 10 vs FIJI XT vs POLARIS 10

Specifications      | AMD Vega 10 (Vega Frontier) | AMD Fiji XT (R9 Fury X) | AMD Polaris 20 (RX 580)
Fabrication Node    | 14 nm                       | 28 nm                   | 14 nm
Die Size            | 484 mm2                     | 596 mm2                 | 232 mm2
Transistors         | ?                           | 8.9 B                   | 5.7 B
Transistors / mm2   | ?                           | 14.9 M                  | 24.6 M
Stream Processors   | 4096                        | 4096                    | 2304
Cores per mm2       | 8.46                        | 6.87                    | 9.93
TFLOPS              | 13.1                        | 8.2                     | 5.8
GFLOPS / mm2        | 23.4                        | 13.7                    | 24.9
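
As a back-of-the-envelope check on where figures like these come from, here is a small sketch. The ~1.6 GHz boost clock for Vega Frontier and the "same transistor density as Polaris" assumption are mine for illustration, not AMD-confirmed numbers:

Code:
    // Rough arithmetic behind the chart above (illustrative sketch, not official data).
    #include <iostream>

    int main() {
        // Vega 10 inputs: SP count from the chart, the unconfirmed 484 mm2 die size,
        // and an assumed ~1.6 GHz boost clock.
        const double sp = 4096.0;
        const double clock_ghz = 1.6;
        const double die_mm2 = 484.0;
        const double polaris_density = 24.6e6;  // transistors per mm2, from the chart

        // Peak FP32: 2 FLOPs per SP per cycle (fused multiply-add).
        const double tflops = 2.0 * sp * clock_ghz / 1000.0;   // ~13.1 TFLOPS
        const double transistors = polaris_density * die_mm2;  // ~11.9 billion

        std::cout << "Peak FP32: " << tflops << " TFLOPS\n";
        std::cout << "Transistor estimate at Polaris density: "
                  << transistors / 1e9 << " billion\n";  // ballpark of the article's ~11 B
    }

The GFLOPS/mm2 column is simply the TFLOPS figure divided by the die size (e.g. 8.2 TFLOPS / 596 mm2 ≈ 13.7 for Fiji).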
The video starts at die size discussion:

AMD Vega DirectX 12_1
HardwareLuxx has a story about Vega's DirectX 12 support. According to information contained in the drivers, Vega will feature DirectX feature level 12_1, which is the highest tier of DX12 support. This means that Vega will support Conservative Rasterization and Rasterizer-Ordered Views, both already supported by Pascal.



Source: HardwareLuxx, Overclock3D

AMD Radeon Vega Frontier clock to clock with R9 Fury X: no IPC gain?
Another interesting story was published by GamersNexus. They tested the Frontier and the Fury X clock for clock (1050 MHz) to compare the actual IPC gain between the two parts. Surprisingly, Vega shows zero performance gain where games are concerned. However, Vega does perform better in SPECviewperf, a benchmark Vega Frontier was actually designed for. This means there are some architectural upgrades, but they are unlikely to be reflected in games. Frontier only does better in games because of its higher clock speed.





Source: GamersNexus

AMD Radeon Vega Frontier in CrossFire
Now let's move on to CrossFire. PCPerspective had the exclusive first review of Vega Frontier, and now they have the first Vega Frontier CrossFire review. Multi-GPU drivers for Vega are clearly not optimized yet. Depending on the resolution, we are looking at a 12 to 84% gain. Only in The Witcher 3 at 4K did two Frontiers outperform a single GTX 1080 Ti.

Radeon Vega FE CrossFire versus:

Game / Resolution              | vs Vega Frontier Edition | vs GTX 1080 Ti | vs GTX 1080 | vs Fury X
Dirt Rally, 2560×1440          | +20%                     | -24%           | +7%         | +39%
Dirt Rally, 3840×2160          | +54%                     | -2%            | +47%        |
Fallout 4, 2560×1440           | +12%                     | -23%           | +3%         | +43%
Fallout 4, 3840×2160           | +16%                     | -29%           | +1%         |
Grand Theft Auto V, 2560×1440  | +19%                     | -20%           | -18%        | +38%
Grand Theft Auto V, 3840×2160  | +84%                     | -7%            | +27%        |
Hitman, 2560×1440              | -1%                      | -30%           | -15%        | +25%
Hitman, 3840×2160              | +0%                      | -33%           | -16%        |
The Witcher 3, 2560×1440       | +42%                     | -13%           | +18%        | +64%
The Witcher 3, 3840×2160       | +75%                     | +4%            | +46%        |

(No Fury X figure is listed for the 3840×2160 rows.)
Source: PCPerspective

AMD Radeon Vega Frontier: Gaming Mode vs Pro Mode has no performance difference?
Another story from GamersNexus focuses on the Frontier's built-in modes. As you may remember, there are Pro and Game modes for the Frontier. As the names suggest, one should be for gaming while the other should be for professional applications. In reality, both modes offer the same performance no matter the use. This means that currently the only difference between the modes is that toggling them switches the Radeon Settings UI to hide the unnecessary features.



Source: GamersNexus

AMD Radeon Vega Frontier has 16GB, but is it enough for 4K?
The PC Games Hardware team also has the Frontier. They have two videos: one is a short unboxing and the other is a raw performance overview with Tomb Raider. They are running it in 4K and demonstrating how much memory Tomb Raider in DX12 really needs.

Raja Koduri earlier said that RX Vega might also launch in a 16GB variant (honestly, I don't see why it shouldn't), but chances are this is going to be a limited edition of some sort; maybe that's the path the water-cooled RX Vega will take?

If Frontier's performance figures are in any way representative of RX Vega, then we should expect a part that fits in between the GTX 1080 and GTX 1080 Ti. The former has 8GB of GDDR5X memory and 320 or 352 GB/s of bandwidth (since there is also an 11 Gbps edition), while the latter has 11GB of GDDR5X with 484 GB/s of bandwidth. If Vega is to compete with the 1080 Ti, then it should also have more memory, as the bandwidth is essentially the same.
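
For what it's worth, those bandwidth figures follow from the usual bus-width times data-rate arithmetic. A small sketch; the ~1.89 Gbps HBM2 pin speed used for Vega FE is my assumption, back-calculated from its advertised ~483 GB/s, not a number from the article:

Code:
    // Memory bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps.
    #include <cstdio>

    double bandwidth_gbs(double bus_bits, double gbps_per_pin) {
        return bus_bits / 8.0 * gbps_per_pin;
    }

    int main() {
        std::printf("GTX 1080, 256-bit GDDR5X @ 10 Gbps:    %.0f GB/s\n", bandwidth_gbs(256, 10.0));   // 320
        std::printf("GTX 1080, 256-bit GDDR5X @ 11 Gbps:    %.0f GB/s\n", bandwidth_gbs(256, 11.0));   // 352
        std::printf("GTX 1080 Ti, 352-bit GDDR5X @ 11 Gbps: %.0f GB/s\n", bandwidth_gbs(352, 11.0));   // 484
        std::printf("Vega FE, 2048-bit HBM2 @ ~1.89 Gbps:   %.0f GB/s\n", bandwidth_gbs(2048, 1.89));  // ~484
    }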
 
Last edited:

Veradun

Senior member
Jul 29, 2016
564
780
136
"So far, AMD has not confirmed the exact die size of Vega. Raja Koduri did mention that Vega 10 die is smaller than 500 mm2 and later in a tweet, he suggested that the actual size is the closest perfect square to that number. This means 484 mm2. The only problem is: it was never officially confirmed.

Ryan Shrout reached out to AMD to confirm if the die is really 484 mm2, but yet again, AMD did not confirm it. There’s absolutely no reason to keep this information secret anymore. Same applies to the number of transistors."

The reason is simple: they haven't launched it yet.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
"So far, AMD has not confirmed the exact die size of Vega. Raja Koduri did mention that Vega 10 die is smaller than 500 mm2 and later in a tweet, he suggested that the actual size is the closest perfect square to that number. This means 484 mm2. The only problem is: it was never officially confirmed.

Ryan Shrout reached out to AMD to confirm if the die is really 484 mm2, but yet again, AMD did not confirm it. There’s absolutely no reason to keep this information secret anymore. Same applies to the number of transistors."

The reason is simple: they haven't launched it yet.

They have launched Vega. People have it. RX Vega is the same size as Vega FE.

The reason is probably simpler: embarrassment.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
"So far, AMD has not confirmed the exact die size of Vega. Raja Koduri did mention that Vega 10 die is smaller than 500 mm2 and later in a tweet, he suggested that the actual size is the closest perfect square to that number. This means 484 mm2. The only problem is: it was never officially confirmed.

Ryan Shrout reached out to AMD to confirm if the die is really 484 mm2, but yet again, AMD did not confirm it. There’s absolutely no reason to keep this information secret anymore. Same applies to the number of transistors."

The reason is simple: they haven't launched it yet.
The Vega Frontier Edition is out? I'm confused: how isn't it launched?
You're saying the card is out, but the underlying chip isn't?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Regarding the overall shortcomings of GCN, this post probably gives the best overview.

Highlights:
AMD's design spends so much space and energy tilting at windmills that it's at a massive size and energy disadvantage compared to NVIDIA's pared-down/graphics-oriented architecture. This chip uses twice the die size and twice the power of a GP104 for the same performance.

By "tilting at windmills" I mean that AMD devotes an enormous amount of hardware to solving general scheduling problems and work-stealing and spinning off async tasks to try and fill its pipeline bubbles. All of that die space and power go to something that is not drawing graphics on your monitor. AMD tries to optimize shitty instruction streams in hardware, rather than optimizing them on the CPU where it's fast like NVIDIA does and spitting out a single optimized command queue that minimizes pipeline bubbles in the first place. GPUs are not a general architecture and it's perfectly sensible to feed them in the way they like to be fed. It's not bad, it's not "cheating", etc etc. It's called bare-metal optimization.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Back then ATI was waaaaaay ahead of NV, because VLIW was better than anything NV had at the time, so yes, you just said it: ATI was better back then, and now it's NV who is better, because GCN is old and not working for AMD.
This isn't about who was better back then. It's about your shaky claims that something with a bigger die size must be better.
 