Vega/Navi Rumors (Updated)

Page 59 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Status
Not open for further replies.

zinfamous

No Lifer
Jul 12, 2006
111,143
30,099
146
What matters at the end..?
OK, buy a 1080 Ti & Vega this summer. See which one is more powerful in two years' time, when you are playing 2018 releases.

And again, Pascal is EOL, bro. Nvidia's tech is EOL; that is why they are redesigning a new chip with new technology for 2018. Right?


But you really give yourself away here:
" ..but that is pure speculation to assume this happens, and is essentially based on ignoring the fundamental history of AMD's design with relation to actual developers. It is the same story over and over again, and I am not sure why you keep buying it. "

What fundamental part of history..?
That AMD makes CPUs, APUs and GPUs..? While Nvidia only has experience in making a GPU..? Or the fundamental aspect that Nvidia doesn't have, nor has invested in, an HSA uArch..?


History..?
Nvidia has always had a GPU advantage, because Jen-Hsun Huang ALWAYS penned better fab deals than AMD's CEOs. (Vega & Volta will be on par.) So it will come down to tech, not fab. Nvidia's CUDA tech has dried up.

You are substituting wild speculation for (not-yet-existing) data to make your argument. I think everyone but you knows where the beef is, here.
 

zinfamous

No Lifer
Jul 12, 2006
111,143
30,099
146
If we were going by history, we would expect Ryzen CPUs to be crap, as Bulldozer was.

With new technology, nothing is 100% sure. Until we see how Vega's hardware performance translates into software performance, your claims about it being more advanced are baseless.

100% that.

Further, comparing AMD's gains with Ryzen against Intel as a basis for Vega vs. nVidia is also ludicrous. Not to knock AMD, but when it comes to CPU design, they were largely helped by physical limits on x86 design, which Intel had more or less reached about three years ago, hence the justified (and sometimes unjustified) claims of Intel simply stagnating on performance. This is not the case in the GPU sector, and is why nVidia has been crushing it at 25-30% gains each year over that same period. AMD has been fairly good here as well, but you can't compare these designs across sectors.
 

Glo.

Diamond Member
Apr 25, 2015
5,802
4,776
136
100% that.

Further, comparing AMD's gains with Ryzen against Intel as a basis for Vega vs. nVidia is also ludicrous. Not to knock AMD, but when it comes to CPU design, they were largely helped by physical limits on x86 design, which Intel had more or less reached about three years ago, hence the justified (and sometimes unjustified) claims of Intel simply stagnating on performance. This is not the case in the GPU sector, and is why nVidia has been crushing it at 25-30% gains each year over that same period. AMD has been fairly good here as well, but you can't compare these designs across sectors.
Funniest part is that this is actually correct. The only way for current CPU uArch designs to improve more than 3-5% each generation is to get rid of legacy hardware, move to something new and fresh, and leave the legacy to emulation.

End of off-topic.
 

w3rd

Senior member
Mar 1, 2017
255
62
101
And I think you know now why I was so hesitant to bring the "rumor" to this forum.

And you seem to be extremely confident in what you know about the Vega GPUs.

Nobody is buying what you are selling.

P.S. What does "Nvidia's one-year-old architecture is not as advanced as Vega" mean to you? How does it translate into actual SOFTWARE performance?

Explain to us how it doesn't.
Then attempt to explain to us all how AMD is going to fold and that Vega is just faster GCN, etc. Why are you seemingly dismissing Vega, but not Volta..? Because no proof..? What makes you think Volta will be more powerful than Pascal..? See what I did there..?

I find your form of reasoning ignorant & illogical. And essentially you are saying AMD cannot compete and their brand-new GPU technology is no better than what Nvidia released a year ago. Without any proof.



Show a link of Vega performance of any kind. You have no facts to base your opinion on. You are speculating and passing it off as fact. You're not fooling anyone here.

Well, in contrast to your argument/logic, can you show me a link where Volta is better than Pascal..? Or tell us why you would assume that older Pascal is better than Volta, even though we have no benchmarks for Volta.

Essentially, you are saying there is zero evidence, so therefore Vega is worse than Pascal, because Nvidia, yo..

You are not being objective, so if I can see it, then you can too. Everyone understands/knows that little Vega will match the GTX 1080 and probably sell for $100 less. And big Vega will push Pascal aside in power, price & efficiency.

Why do you feign ignorance..? The writing is on the wall.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
When people are comparing Pascal's theoretical peak TFLOPs to Vega's theoretical peak TFLOPs as an indicator of performance, they seem to be assuming that Pascal is operating at 100% efficiency. Perhaps Vega is more efficient.
 

Glo.

Diamond Member
Apr 25, 2015
5,802
4,776
136
Explain to us how it doesn't.
Then attempt to explain to us all how AMD is going to fold and that Vega is just faster GCN, etc. Why are you seemingly dismissing Vega, but not Volta..? Because no proof..? What makes you think Volta will be more powerful than Pascal..? See what I did there..?

I find your form of reasoning ignorant & illogical. And essentially you are saying AMD cannot compete and their brand-new GPU technology is no better than what Nvidia released a year ago. Without any proof.
Read my post twice.

You seem to be failing at the simplest thing, which is reading comprehension. Once you fix that, then try to prove to everyone that Vega is better.

It might be. The part you seem not to understand is this bit from my quote: "how hardware performance translates into software performance". Until we see how it performs, your speculation is baseless.

What you are doing is creating unnecessary hype over the hardware.
 

w3rd

Senior member
Mar 1, 2017
255
62
101
Look in the mirror, people..!
I find it odd that so many people here believe DICE, or Bethesda (etc.), would not code for efficiency on AMD's new Vega cores/tech..?

Without explaining to us why..?

Is it fanboyism? Ignorance? Bias? Or the mentality of past practice, when the industry was in bed with Nvidia & did exclusive PhysX (The Way It's Meant To Be Played) coding...? Or used CUDA for its own ends..?

And that somehow, game developers & software companies today simply will not find any use at all for Vega's new tech..?

Ever...?


So therefore, AMD will forfeit any benefit it has and just compete with Pascal in raw math calculations, and not real-world usage & games..? Is that the argument some of you people are trying to have here..?
 

zinfamous

No Lifer
Jul 12, 2006
111,143
30,099
146
Look in the mirror, people..!
I find it odd that so many people here believe DICE, or Bethesda (etc.), would not code for efficiency on AMD's new Vega cores/tech..?

Without explaining to us why..?

Is it fanboyism? Ignorance? Bias? Or the mentality of past practice, when the industry was in bed with Nvidia & did exclusive PhysX (The Way It's Meant To Be Played) coding...? Or used CUDA for its own ends..?

And that somehow, game developers & software companies today simply will not find any use at all for Vega's new tech..?

Ever...?


So therefore, AMD will forfeit any benefit it has and just compete with Pascal in raw math calculations, and not real-world usage & games..? Is that the argument some of you people are trying to have here..?

It seems that you simply don't understand the difference between making bold claims on pure, untested speculation and using actual data. You aren't fooling anyone.

Yeah, DICE and Bethesda may just do that, and that would be great. But look at what happened when BF1, "built for AMD!", was released, and compare 480 performance to the 1060 in that title. The 480 was not demonstrably better. It has gotten better, but it took about 6 months. The same issue with Polaris: a greater TFLOP advantage on PAPER compared to the comparable 1060, but roughly equal actual performance.

Again: where's the beef?

You are confusing people's legit skepticism over your unfounded claims for some anti-AMD argument. Calling us [nVidia] fanbois? lol. Good luck with that. You aren't explaining anything to anyone. You are making the most irrational claims anywhere, yet demanding non-existent proof from others to prove you wrong? What is your deal? Not yet gotten through the critical thinking and rational argument phase of your current education curriculum?
I don't think you will last long here unless you adjust your attitude, my friend.
 

Mopetar

Diamond Member
Jan 31, 2011
8,110
6,754
136
When people are comparing Pascal's theoretical peak TFLOPs to Vega's theoretical peak TFLOPs as an indicator of performance, they seem to be assuming that Pascal is operating at 100% efficiency. Perhaps Vega is more efficient.

Comparing peak theoretical performance across architectures like that doesn't work. You can look back and see that Fury had substantially higher theoretical FLOPs than the 980 Ti, but often didn't perform nearly as well. A lot of that was said to be due to problems utilizing all of the Fury's hardware, so if AMD has focused on solving that problem, perhaps it will be more indicative, but I wouldn't accept it as a given.
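To put numbers on that point: a quick sketch, using approximate public shader counts and clocks for Fury X and the 980 Ti (the utilization figures are purely illustrative assumptions, not measurements):

```python
# Peak single-precision TFLOPs: shaders * clock (GHz) * 2 FLOPs/clock (FMA).
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000.0

fury_x  = peak_tflops(4096, 1.05)  # ~8.6 TFLOPs on paper
g980_ti = peak_tflops(2816, 1.08)  # ~6.1 TFLOPs at a typical boost clock

# On paper the Fury X leads by ~40%...
paper_lead = fury_x / g980_ti - 1

# ...but delivered performance is roughly peak * utilization. If the wider
# chip is harder to keep fed (hypothetical utilization numbers below),
# the paper lead evaporates:
delivered_fury = fury_x * 0.65
delivered_980  = g980_ti * 0.90
print(f"paper lead: {paper_lead:.0%}, delivered ratio: {delivered_fury / delivered_980:.2f}")
```

The point of the sketch is only that the utilization term, which a TFLOPs comparison silently sets to 1.0 for both chips, can dominate the outcome.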
 

w3rd

Senior member
Mar 1, 2017
255
62
101
edit:
prime example: "Until we see how Vega's hardware performance translates into software performance, your claims about it being more advanced are baseless."

May I ask, why not just: "Until I see how hardware performance translates into software performance..." I won't believe you (AMD)..!


You and others are suggesting that AMD's Vega can't compete with Pascal because nobody has knocked on your door and shown you proof. Therefore Nvidia tech is better, because AMD sucks.

Nvidia Yo..!


I am asking: since when has later technology (from either company) not been better than old tech..? What reasons are you giving for your lack of logic as to why Vega's tech isn't better, or more powerful, than Pascal..?

Where is your proof that AMD's next-gen tech is piss-poor and no greater than something Nvidia developed? Not only that (developed), but released a year ago.

Because no matter your answer, it comes back to Volta vs Pascal. If you believe Volta can be better than Pascal, then you have to believe Vega will be too.

If not, then see Navi.... right? Because I already said that by this time next year, the GTX 1080 Ti will be under $400.

1080p
1440p
1440p wide
1600p
4k


That is the market.
AMD can sell the Radeon 480 for 99 bucks via the 580, locking up 1080p & 1440p, leaving the 1080 & 1080 Ti fighting the middle tiers.

Vega is 4k.. Pascal is not!
 

Glo.

Diamond Member
Apr 25, 2015
5,802
4,776
136
Comparing peak theoretical performance across architectures like that doesn't work. You can look back and see that Fury had substantially higher theoretical FLOPs than the 980 Ti, but often didn't perform nearly as well. A lot of that was said to be due to problems utilizing all of the Fury's hardware, so if AMD has focused on solving that problem, perhaps it will be more indicative, but I wouldn't accept it as a given.
Compute benchmarks show a different story. If you are basing this only on gaming benchmarks, you may be right, but in compute the R9 390X is on par with the GTX 980 Ti. The Fury X is faster.

Here is an example:
RX 480 being 10% slower than GTX 1070 on average in compute benchmarks.

How come? One has 5.8 TFLOPs (RX 480); the other has 6.5 TFLOPs of compute power (GTX 1070).
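Worked out, those two figures imply the cards deliver almost identical performance per theoretical TFLOP in compute; a quick sketch using the numbers above:

```python
# Figures quoted above: RX 480 ~10% slower on average despite fewer TFLOPs.
rx480_tflops, gtx1070_tflops = 5.8, 6.5
perf_ratio   = 0.90                           # RX 480 / GTX 1070 measured compute perf
tflops_ratio = rx480_tflops / gtx1070_tflops  # ~0.89 on paper

# Delivered performance per theoretical TFLOP, normalized to the 1070:
per_tflop = perf_ratio / tflops_ratio
print(f"RX 480 perf per TFLOP vs GTX 1070: {per_tflop:.2f}x")  # ~1.01x
```

So in compute workloads, at least on these averages, the per-TFLOP efficiency of the two architectures is a wash.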

P.S. About the obvious trolling in this section of the forum: I think some people have turned into religious fanatics over the AMD brand right now. "Believe before you see it!" God, we're doomed.

 

w3rd

Senior member
Mar 1, 2017
255
62
101


I am not a troll and I am completely unbiased. My posts/questions point to reasoning & logic.
I own & buy whatever.

thats^ the w3rd.
 

Mopetar

Diamond Member
Jan 31, 2011
8,110
6,754
136
Compute tasks can typically get closer to peak performance as most are embarrassingly parallel. The problem is that when you have some kind of mixed workload, the additional SPs become under-utilized.

I don't think anyone was legitimately going to say that AMD wasn't king for consumer compute considering that's all people run for dedicated mining rigs. However, you can't just look at the TFLOPs and draw such a broad comparison.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Comparing peak theoretical performance across architectures like that doesn't work. You can look back and see that Fury had substantially higher theoretical FLOPs than the 980 Ti, but often didn't perform nearly as well. A lot of that was said to be due to problems utilizing all of the Fury's hardware, so if AMD has focused on solving that problem, perhaps it will be more indicative, but I wouldn't accept it as a given.

I'm not. But people seem to be accepting it as a given that it won't increase in efficiency. Personally, I think it will. If that is the case, the performance delta will be higher than the delta between Pascal's peak and Vega's peak. Forget about Fury; the architecture is completely different.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Well, in contrast to your argument/logic, can you show me a link where Volta is better than Pascal..? Or tell us why you would assume that older Pascal is better than Volta, even though we have no benchmarks for Volta.

I never made any such claim. It is now apparent you are trolling. I'm out.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Explain to us how it doesn't.
Then attempt to explain to us all how AMD is going to fold and that Vega is just faster GCN, etc. Why are you seemingly dismissing Vega, but not Volta..? Because no proof..? What makes you think Volta will be more powerful than Pascal..? See what I did there..?
Okay, please listen to yourself. How can we assume Volta is more powerful than Pascal? Because it's made by the same company, as an evolution of the aforementioned architecture. Is that so damn hard to grasp? Then again, no-one here is claiming Volta will be the bee's knees. In fact, no-one knows sh*t about Volta performance. Except possibly some engineers at Nvidia.
You're arguing that because Vega is made after Pascal, it must be faster. There is no relation between the two. Heck, I could go out and design a GPU tomorrow. Will it also be faster than Pascal, given that it would be newer? All we can assume is that Vega will be faster than Fiji and Polaris. Why? Because those are the architectures it builds on, and thus is set to improve on. Might it be a lot better? A lot faster, and a lot more efficient? Sure! But we don't know that. We can't know that until we actually get to see some benchmark numbers.
I find your form of reasoning ignorant & illogical. And essentially you are saying AMD cannot compete and their brand-new GPU technology is no better than what Nvidia released a year ago. Without any proof.
No. What we're saying is that a) Polaris is a damn good GPU architecture. No doubt about it. b) Pascal is a significantly better GPU architecture. More power efficient, better frequency scaling. Does it have deficiencies even when compared to Polaris? Absolutely! But its gaming performance is significantly better at the same die size and power level. If you deny that, you're denying reality. The 480 is a bigger die than the 1060, consumes ~20% more power, and performs roughly the same.
Well, in contrast to your argument/logic, can you show me a link where Volta is better than Pascal..? Or tell us why you would assume that older Pascal is better than Volta, even though we have no benchmarks for Volta.
Who here is saying that? You're the one bringing up Volta! Seriously, leave that alone.
Essentially, you are saying there is zero evidence, so therefore Vega is worse than Pascal, because Nvidia, yo..
NO. We're saying that because there's zero evidence, it's just illogical to assume that AMD has pulled another Ryzen-like coup out of their hat. They might have, sure. But as of now, we don't know. Will Vega be better than Polaris? I don't doubt that for a second. But will it perform more than 30% better for the same power? That would be nigh on miraculous. It can still happen, but assuming that it'll happen, like you are, is just silly. You're setting yourself up for disappointment.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91

Yeah I posted that a few days ago:

http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...da-dice-talk-from-gdc-2017-frostbite.2501218/

Very cool tech.

AMD has stated that a lot of the features in Vega will take a while to be used optimally. Just like async compute on Hawaii, which took years before it was ever used. It will probably have amazing longevity but show up "weak" at launch, with many engines not yet supporting a lot of its new architecture features.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Yeah I posted that a few days ago:

http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...da-dice-talk-from-gdc-2017-frostbite.2501218/

Very cool tech.

AMD has stated that a lot of the features in Vega will take a while to be used optimally. Just like async compute on Hawaii, which took years before it was ever used. It will probably have amazing longevity but show up "weak" at launch, with many engines not yet supporting a lot of its new architecture features.

This push towards intelligent upscaling with special hardware registers is super cool. If we keep pushing resolution we're going to have to find more efficient ways of drawing those pixels than the brute force method of today. When the new 1080 Ti that came out after at least 2 full video card generations (4 half/bifurcated generations) is the first card to actually be comfortable at 60fps 4k with settings reasonably high, you know that there has to be a better way to approach the resolution problem. Way, way beyond my pay grade but super interesting nonetheless
 

Mopetar

Diamond Member
Jan 31, 2011
8,110
6,754
136
I'm not. But people seem to be accepting it as a given that it won't increase in efficiency. Personally, i think it will. If that is the case, the performance delta will be higher than the delta between Pascal peak and Vega peak. Forget about Fury, the architecture is completely different.

I think it's a given that they're working more on increasing the performance of their shaders (or at least the ability to feed all of them more effectively) as opposed to just adding more. However, it's still not a good idea to make performance comparisons in anything but compute workloads with perfect scaling based on TFLOPs alone, especially when it hasn't worked well in the past.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
This push towards intelligent upscaling with special hardware registers is super cool. If we keep pushing resolution we're going to have to find more efficient ways of drawing those pixels than the brute force method of today. When the new 1080 Ti that came out after at least 2 full video card generations (4 half/bifurcated generations) is the first card to actually be comfortable at 60fps 4k with settings reasonably high, you know that there has to be a better way to approach the resolution problem. Way, way beyond my pay grade but super interesting nonetheless

Yep, Raja was talking about wanting to push 16K per eye at high frame rates (144-240 Hz!) for things like VR.

This part, is the most exciting, where Riguer added: "LiquidVR will continue to drive hardware and software technologies that will ultimately lead to the nirvana of VR: 16K/eye, 144Hz and above refresh rate, and virtually no latency, all in a wireless, small form factor package".

http://www.tweaktown.com/news/50820/amd-chasing-16k-per-eye-32k-144hz-above-vr/index.html

But when it came to the future of GPUs, Koduri said that VR will drive a large portion of this and that he won't be happy until we get 3D graphics that support 16K screens, and at 240Hz - yeah, 16K @ 240FPS. Just as Neo said in The Matrix: "whoa". Koduri says that this is when we'll reach the point of "true immersion that you won't be able to tell apart from the real-world". Koduri makes me excited about the future of not just RTG, but GPUs and VR, especially with quotes like that.

But VB asked Koduri "you're very excited about VR" to which the RTG boss responded: "We're just entering the VR era. You see all these 4K headsets. If you see what it takes to drive a VR headset, the pixel rate requirements are almost doubled up compared to the previous iteration. That's driving up demand for discrete graphics quite a bit. Now, if you push that forward, when you get to 16K by 16K resolution, 120Hz, you get to a pixel rate of 6 billion pixels per second. We're not going to get there if we just rely on Moore's Law. We have to do disruptive things to get there. That's the goal of our division, to get to the immersive era. We need to double up our products and technology, step-by-step. You'll see the key initiatives and technology this year (Polaris). You'll see more next year". (Vega)

He added:

There are several opportunities to take us to the immersive era. We'll be working with game developers and engine developers and so on. If we keep on the current trajectory, we need a million [uncertain - 6:12] per year to get us to the immersive era. This includes the performance you need not just at 200 watts. We need this performance at five watts, so that the VR experience is completely mobile. You'll need that sense of presence.

When I set the goal, I said, "We need to get here in our lifetime." We can't do that with Moore's law and hardware alone. We have to unleash software on this problem. We've been working with developers on all of these ideas. How can we get 16K by 16K displays refreshing at 240Hz with the picture that you want to draw? Developers want more control, on their side. They want console level GPU access on the PC.

What they've been able to achieve on consoles in the current generation, versus the current high-end PC: the current high-end PC specs are at least four to eight times faster than current consoles. The Fury X is an eight teraflop machine. The PS4 is a two teraflop machine. It's four times more compute in that single Fury. You can build a dual Fury PC. But PC doesn't give you that much better an experience with cutting edge content, because they can extract more performance from a console. They're also investing a lot of IP into that architecture. They're doing some really clever things that are not possible on the PC yet.
http://www.tweaktown.com/news/49693/amds-graphics-boss-vr-needs-16k-240hz-true-immersion/index.html

AMD have added support for MultiView Rendering (http://gpuopen.com/amd-liquidvr-multiview-rendering-in-serious-sam-vr/) and MultiRes Rendering (https://youtu.be/kfq0Z41oIyE) as well so yeah... lots of stuff to push for high fps at crazy high resolutions and I'm sure a lot of vega / navi will be pushing that even farther.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
When was the last time AMD matched Nvidia in performance-per-TFLOP across equal generations? At least when they were both the latest architectures, so Kepler post-2014 doesn't count.

Boost clocks also sully things a bit nowadays, but it may not matter, since I don't think it has happened in the boost era.

Looking at aggregate indexes like TPU or ComputerBase, say, I'd be curious when it has happened across a broad range of games.
 

Det0x

Golden Member
Sep 11, 2014
1,264
4,020
136
No. You have no basis for speculating like that, let alone with the level of certainty you're giving off. The only demo we've seen of Vega has shown it slightly beating a 1080 in a relatively friendly game. We don't know how well (or badly) optimized the drivers were at that point, the clocks of the ES chip in question, or really anything else. We don't know the specs of any Vega GPU. Do I believe AMD has the knowledge, skills and engineers to compete with Pascal, and even Volta? Sure. But it would take a lot for them to leapfrog Pascal completely. After all, they'd need to (at the very least) catch up on efficiency for that to be feasible. Or do you see AMD launching a 300W+ GPU with Vega? That would be suicide. Rather than getting your hopes up for something sensational, how about we all keep a sane critical distance and hope AMD stays competitive? I'd much rather get a happy surprise than be disappointed.

You should follow this thread more closely... Posts #1330 and #1343

Some CompuBench results




VideoCardz - AMD Vega with 64 Compute Units spotted

Not sure about frequencies... are these final, or will they change for production cards?

BTW, CompuBench detected 4GB VRAM only

If we assume Vega should be running at least @ 1500 MHz core (look further down for "proof"), then we can extrapolate what the scores would be once clocks are finalized.

1500/1200 should net a 25% performance increase, assuming 100% clock-speed scaling.

Face detection: 1080 Ti is ~60% faster than Vega
"simulated 1500 MHz Vega" = 194.58
1080 Ti = 313.26

TV optical flow: Vega is ~4% faster than 1080 Ti
"simulated 1500 MHz Vega" = 72.13
1080 Ti = 69.22

Ocean surface: Vega is ~20% faster than 1080 Ti
"simulated 1500 MHz Vega" = 4760.57
1080 Ti = 3944.8

Particle simulation: Vega is ~16% faster than 1080 Ti
"simulated 1500 MHz Vega" = 2123.35
1080 Ti = 1816.88

T-Rex: 1080 Ti is ~7% faster than Vega
"simulated 1500 MHz Vega" = 18.48
1080 Ti = 19.773

Video composition: Vega is ~5% faster than 1080 Ti
"simulated 1500 MHz Vega" = 202.2
1080 Ti = 192.586

Bitcoin mining: Vega is ~7% faster than 1080 Ti
"simulated 1500 MHz Vega" = 1509.07
1080 Ti = 1408.061
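The comparison above can be sketched and re-checked in a few lines (scores as quoted; the "faster by" percentages fall out directly, remembering that the 1500 MHz Vega figures already assume perfect clock scaling from the 1200 MHz leak):

```python
# benchmark: (simulated 1500 MHz Vega score, GTX 1080 Ti score), from the post above
scores = {
    "Face detection":      (194.58, 313.26),
    "TV optical flow":     (72.13, 69.22),
    "Ocean surface":       (4760.57, 3944.8),
    "Particle simulation": (2123.35, 1816.88),
    "T-Rex":               (18.48, 19.773),
    "Video composition":   (202.2, 192.586),
    "Bitcoin mining":      (1509.07, 1408.061),
}

for name, (vega, ti) in scores.items():
    # Percentage lead of whichever card is ahead in this benchmark.
    delta = (max(vega, ti) / min(vega, ti) - 1) * 100
    leader = "Vega" if vega > ti else "1080 Ti"
    print(f"{name}: {leader} ahead by ~{delta:.0f}%")
```

Running this reproduces the quoted deltas to within rounding (e.g. ~61% for face detection, ~21% for ocean surface), so the arithmetic in the post is internally consistent even if the perfect-scaling assumption itself is optimistic.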

Now the big question is: how much above 1500 MHz will the consumer Vega 10 be running?
After all, we got that 1.5 GHz number from a professional workstation-card leak, and as we know, consumer versions tend to be clocked higher..

Not bad, considering the 1500 MHz Vega 10 should be running in a 225 W thermal envelope.



https://videocardz.com/65521/amd-vega-10-and-vega-20-slides-revealed
VideoCardz said:
With a single precision compute of 12 TeraFLOPs per second on a GPU with 4096 cores, and considering TeraFLOPs is a function of Clock Speed * 2 Instructions Per Clock * Cores, you are looking at a Vega 10 graphics card that is clocked at roughly 1500 MHz assuming it has the same amount of NCUs. Considering the fact that the MI25 is passively cooled and also if the consumer version has less cores, it will be clocked significantly higher than the 1500 MHz mark!

https://videocardz.com/67275/amd-vega-spotted-with-4096-cores-and-8gb-2048-bit-memory

Just yesterday we found the first CompuBench result revealing 64 Compute Units in Vega. Today we share another leak, which confirms 64 CUs, but also memory subsystem configuration.

AMD Radeon RX Vega: 4096 Stream Processors and 8GB memory
SiSoft benchmark detected 64 Compute Units on 687F:C3 device (so not C1 like in the previous leak). This device has 8GB 2048-bit memory configuration, which means two HBM2 stacks, each 4GB and 1024-bit.

This particular variant is clocked at 1200 MHz, which translates into 9.8 TFLOPs (in theory) single-float. Quite some distance from AMD's promised performance number of 12 TFLOPs, so it's clear that performance can increase by at least 25% once the engineers have refined the clocks.
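A quick check of that arithmetic, using the 4096-core count from the leak and the 2-FLOPs-per-clock (FMA) factor from the quoted VideoCardz formula:

```python
# TFLOPs = cores * 2 FLOPs/clock (FMA) * clock; core count from the leak.
CORES = 4096

def sp_tflops(clock_mhz: float) -> float:
    """Theoretical single-precision TFLOPs at a given core clock."""
    return CORES * 2 * clock_mhz * 1e6 / 1e12

print(f"{sp_tflops(1200):.2f} TFLOPs @ 1200 MHz")  # ~9.83, matching the SiSoft leak
print(f"{sp_tflops(1500):.2f} TFLOPs @ 1500 MHz")  # ~12.29, AMD's promised ~12 TFLOPs
```

So the 9.8 TFLOPs figure at 1200 MHz and the ~1500 MHz implied by the 12 TFLOPs target are consistent with each other.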

Here are the GTX 1080's scores for comparison. And this Vega part is ahead by a reasonable amount, even if it's only running on alpha drivers and clocks (1200 MHz): http://imgur.com/z68xbLd


So indeed, we have at least some basis for believing AMD will open a can of whoopass.
 
Last edited: