Ryzen's poor performance with Nvidia GPUs. Foul play? Did Nvidia know?


Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Unreal Engine has a long development process optimizing for a given API. You're not going to have a completely revamped DX12-only engine that sheds all the previous baggage just because Microsoft decided to port Gears of War to the PC.

I never claimed otherwise, and I even acknowledged that Gears of War 4 isn't a 100% DX12 title. But it's as much a DX12 title as Ashes of the Singularity, or any other DX12 title that's currently available.

Telling me it's inadmissible because you don't like what the results mean is just ridiculous.

What's so enlightening about that? GPU-bound tests showed better scaling on the Fury X vs the 980 Ti. Also, this is straight from Kyle Bennett:

The entire point of DX12 and Vulkan is to reduce the CPU overhead, not GPU overhead

So if AMD still gains a lot from DX12 at GPU bound settings, then it reveals that their DX11 driver is totally inefficient and even garbage compared to NVidia's.


Honestly, I couldn't care less what Kyle thinks about the benchmark. It's just a benchmark, and it's non-deterministic. But as an indication of DX12 performance, it's one of the best tools we have, because more than any other current 3D engine, the Nitrous Engine is the most geared towards DX12.


That's an old benchmark with old drivers. Totally inadmissible. Also, Total War: Warhammer is one of those games that runs much faster in DX11 on NVidia.

In fact, the game runs faster on NVidia in DX11 than it does on AMD in DX12.
 
May 11, 2008
20,041
1,289
126
Sorry I missed this post, but those benchmarks I posted only come at those low resolutions, which is the point of CPU benchmarking because it takes the burden off of the GPU and puts it on the CPU.

That makes sense, but I prefer to see what happens at resolutions people actually play at.
I always have this idea that as the resolution increases, it is the GPU that has to do all the heavy lifting. But the CPU also has to provide the GPU with more data and test more data. The GPU driver is also handling more data, at least that makes sense to me.
I mean, I always wonder if the 3D engine has to do more work when the resolution increases. I wonder if collision detection and all that stuff would also increase as the resolution increases.
So the burden on the CPU would increase as well, but obviously not as much as what the GPU experiences.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Honestly, I couldn't care less what Kyle thinks about the benchmark. It's just a benchmark, and it's non-deterministic. But as an indication of DX12 performance, it's one of the best tools we have, because more than any other current 3D engine, the Nitrous Engine is the most geared towards DX12.
Given the context of what you said in this paragraph alone, I have serious doubts whether you know what 'non-deterministic' means. You yourself admitted that it's 'just a benchmark'.
I never claimed otherwise, and I even acknowledged that Gears of War 4 isn't a 100% DX12 title. But it's as much a DX12 title as Ashes of the Singularity, or any other DX12 title that's currently available.

Telling me it's inadmissible because you don't like what the results mean is just ridiculous.
No it isn't - the way you worded it seems to imply that every game out there supporting DX12 implements it in the same way, while the reality cannot be any further from that.
The entire point of DX12 and Vulkan is to reduce the CPU overhead, not GPU overhead

So if AMD still gains a lot from DX12 at GPU bound settings, then it reveals that their DX11 driver is totally inefficient and even garbage compared to NVidia's.
Yeah, then please explain why even in GPU-bound tests the RX 480 performs so similarly in DX11 and DX12, and can even get better performance in DX12 in some instances, while much more powerful cards like the GTX 1080 and GTX 1080 Ti on average have slightly lower performance in DX12 compared to DX11?
https://www.hardocp.com/article/2017/03/22/dx12_versus_dx11_gaming_performance_video_card_review/
That's an old benchmark with old drivers. Totally inadmissible. Also, Total War: Warhammer is one of those games that runs much faster in DX11 on NVidia.

In fact, the game runs faster on NVidia in DX11 than it does on AMD in DX12.
Who cares if it's old? If you can't explain it then it exposes your double standards - you are doing the same thing that you accuse me of by dismissing data that doesn't agree with your views.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That makes sense, but I prefer to see what happens at resolutions people actually play at.
I always have this idea that as the resolution increases, it is the GPU that has to do all the heavy lifting. But the CPU also has to provide the GPU with more data and test more data. The GPU driver is also handling more data, at least that makes sense to me.
I mean, I always wonder if the 3D engine has to do more work when the resolution increases. I wonder if collision detection and all that stuff would also increase as the resolution increases.
So the burden on the CPU would increase as well, but obviously not as much as what the GPU experiences.

I'm pretty sure that isn't how it works. The CPU has nothing to do with putting pixels on screen, so the CPU burden decreases as resolution increases because the GPU is churning out frames at a slower pace. At lower resolutions it's the exact opposite, where the GPU is churning out hundreds of frames per second and the CPU workload is increased as a result.

The CPU just tells the GPU what to draw. Also CPU workloads like collision detection, physics, animation etcetera don't scale with resolution to my knowledge.
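
Here's a rough back-of-the-envelope sketch of what I mean (the numbers are completely made up, just to illustrate the idea): the CPU cost to prepare a frame stays roughly the same at any resolution, so total CPU load simply tracks however many frames per second the GPU can actually deliver.

```python
# Toy model: per-frame CPU cost is (roughly) independent of resolution,
# so total CPU load tracks the frame rate the GPU can sustain.
# All numbers here are invented purely for illustration.

CPU_MS_PER_FRAME = 4.0  # hypothetical driver + game-logic cost per frame

def cpu_utilization(fps: float) -> float:
    """Fraction of each second the CPU spends preparing frames."""
    return min(1.0, fps * CPU_MS_PER_FRAME / 1000.0)

# Hypothetical GPU-limited frame rates at different resolutions.
scenarios = {"1080p": 200.0, "1440p": 120.0, "4K": 60.0}

for res, fps in scenarios.items():
    print(f"{res}: {fps:.0f} fps -> CPU busy {cpu_utilization(fps):.0%} of the time")
```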
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I'm not dismissing that the driver is at fault. I think it is, somehow, but not intentionally. Likely it's just that NVidia has to optimize their drivers for AMD's SMT implementation. This is the first SMT-enabled CPU that AMD has ever produced to my knowledge, and it's obviously different from Intel's. That, combined with the fact that AMD never sent any samples to NVidia to test, means that NVidia never got a chance to optimize their drivers for Ryzen.

Intel on the other hand has had SMT capable CPUs for years out in the field, and NVidia has definitely optimized for it as seen by the benchmarks I posted a few pages back.
Which is exactly what I hypothesized myself. Not that the game devs screwed it up.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Given the context of what you said in this paragraph alone, I have serious doubts whether you know what 'non-deterministic' means. You yourself admitted that it's 'just a benchmark'.

I remember reading somewhere that Ashes of the Singularity has unscripted A.I. routines or some such, which make the gameplay segments very different each time, and this obviously affects the benchmark. Every time you run the benchmark, the A.I. does a different routine, which can affect the performance outcome in subtle ways.

But maybe I'm wrong *shrugs.*

No it isn't - the way you worded it seems to imply that every game out there supporting DX12 implements it in the same way, while the reality cannot be any further from that.

Well I'm sorry you have bad reading comprehension, because I never said anything like that at all.

Yeah, then please explain why even in GPU-bound tests the RX 480 performs so similarly in DX11 and DX12, and can even get better performance in DX12 in some instances, while much more powerful cards like the GTX 1080 and GTX 1080 Ti on average have slightly lower performance in DX12 compared to DX11?
https://www.hardocp.com/article/2017/03/22/dx12_versus_dx11_gaming_performance_video_card_review/

This topic has already been beaten to death ad nauseam in the VC&G forums. I suggest you search for some threads there rather than railroad this one by going grossly off topic.

Who cares if it's old? If you can't explain it then it exposes your double standards - you are doing the same thing that you accuse me of by dismissing data that doesn't agree with your views.

*Shrugs* Feel free to start a thread in the VC&G section where it belongs, rather than knocking this thread completely off topic.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
I remember reading somewhere that Ashes of the Singularity has unscripted A.I. routines or some such, which make the gameplay segments very different each time, and this obviously affects the benchmark. Every time you run the benchmark, the A.I. does a different routine, which can affect the performance outcome in subtle ways.

But maybe I'm wrong *shrugs.*



Well I'm sorry you have bad reading comprehension, because I never said anything like that at all.



This topic has already been beaten to death ad nauseam in the VC&G forums. I suggest you search for some threads there rather than railroad this one by going grossly off topic.



*Shrugs* Feel free to start a thread in the VC&G section where it belongs, rather than knocking this thread completely off topic.
Firstly you *CLEARLY* said AotS is as much a DX12 game as Gears of War 4.

Secondly, I wonder who was railroading the thread in the first place with Ghost Recon Wildlands DX11 benchmarks at 720p when the rest were having a discussion on DX12 performance on NVIDIA, which seemed to be the real culprit. It doesn't merit yet another thread when the primary discussion here is centered around CPU scalability and overall negative performance delta in DX12 compared to DX11 on NVIDIA.
 
Reactions: DarthKyrie

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
I'm pretty sure that isn't how it works. The CPU has nothing to do with putting pixels on screen, so the CPU burden decreases as resolution increases because the GPU is churning out frames at a slower pace. At lower resolutions it's the exact opposite, where the GPU is churning out hundreds of frames per second and the CPU workload is increased as a result.
That is just rubbish. CPU utilization does not decrease with increased resolution. It does the same work regardless of the resolution. It is only the GPU that is affected by changes in resolution. That is why at 4K the graphs are all equal no matter what the CPU because it is the GPU that cannot keep up with the increased demand of outputting lots of pixels.

On lower resolutions faster CPUs are in the lead because they can dispatch the job to the GPU more quickly.

If what you said was true then people who played Crysis back in the day at 1024x768 instead of 1600x1200 would have had a worse experience because their CPU was 'doing more work' at the lower resolution.
 
Reactions: DarthKyrie
May 11, 2008
20,041
1,289
126
I'm pretty sure that isn't how it works. The CPU has nothing to do with putting pixels on screen, so the CPU burden decreases as resolution increases because the GPU is churning out frames at a slower pace. At lower resolutions it's the exact opposite, where the GPU is churning out hundreds of frames per second and the CPU workload is increased as a result.

The CPU just tells the GPU what to draw. Also CPU workloads like collision detection, physics, animation etcetera don't scale with resolution to my knowledge.

Well, I do not agree that the CPU has less work to do when the GPU has more work because of the resolution increase. I think there is a slight increase because of the GPU driver, but not enough to be a serious limit.
But I agree (I have been thinking about it) that the 3D engine does all the collision calculations (bullets, and where the vehicles and characters move around and run into things) and other AI calculations based on the matrix calculations and 3D position values of all the polygons, and does so for each frame. That happens before the GPU actually translates 3D space to 2D space and does all the post-processing shader work to get a nice picture, and it is usually done by the CPU as far as I know. But with async compute, I wonder if there will be solutions where they can do such stuff on the GPU as well.
I mean, a GPU is highly parallel and can do 3D position collision detection much faster and better than a CPU if the GPU has instructions that are designed for that kind of work. A lot of 3D position data could be compared and tested in parallel. I wonder if the GPU would be used for that in the future, or is already utilized for things the CPU used to do. Or maybe programmers utilize CPU SIMD instructions for that.
 
May 11, 2008
20,041
1,289
126
Well, this is an interesting article from the NVidia devblog about implementing collision detection on the GPU instead of the CPU. I guess the GPU driver would be a lot busier in these cases.
And async compute (DMA engines), and especially HSA-like architectures (with zero copy), would be ideal for it because sharing data between the CPU and GPU is so much easier and faster than traditional over-the-bus communication.

https://devblogs.nvidia.com/parallelforall/thinking-parallel-part-i-collision-detection-gpu/

And for a highly detailed explanation (which i only partially understand):
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch33.html
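
Just to sketch the idea (this is not the CUDA approach from those articles, only a simplified data-parallel illustration, and all the numbers are made up): instead of a nested CPU loop, you can overlap-test every pair of axis-aligned bounding boxes in one big vectorized pass, which is exactly the kind of work a GPU (or wide SIMD) chews through.

```python
import numpy as np

# Simplified data-parallel broad-phase sketch (illustrative only, not the
# CUDA implementation from the linked articles): overlap-test all pairs of
# axis-aligned bounding boxes (AABBs) in one vectorized pass.

rng = np.random.default_rng(0)
n = 512
centers = rng.uniform(0.0, 100.0, size=(n, 3))      # box centers in 3D
half_sizes = rng.uniform(0.5, 2.0, size=(n, 3))     # box half-extents

lo = centers - half_sizes                            # (n, 3) min corners
hi = centers + half_sizes                            # (n, 3) max corners

# Boxes i and j overlap iff they overlap on every axis:
# lo[i] <= hi[j] and lo[j] <= hi[i] for x, y and z.
overlap = ((lo[:, None, :] <= hi[None, :, :]).all(axis=2) &
           (lo[None, :, :] <= hi[:, None, :]).all(axis=2))
np.fill_diagonal(overlap, False)                     # ignore self-collisions

pairs = np.argwhere(np.triu(overlap))                # count each pair once
print(f"{len(pairs)} potentially colliding pairs out of {n * (n - 1) // 2}")
```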
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Firstly you *CLEARLY* said AotS is as much a DX12 game as Gears of War 4.

It is, because it's a hybrid. The Nitrous Engine may have been designed from the ground up first and foremost with DX12 in mind, but it's still handicapped by its DX11 path. Compare that to Gears of War 4, which is DX12-only but built on an engine designed primarily for DX11. Either way, neither is fully DX12.

Secondly, I wonder who was railroading the thread in the first place with Ghost Recon Wildlands DX11 benchmarks at 720p when the rest were having a discussion on DX12 performance on NVIDIA, which seemed to be the real culprit. It doesn't merit yet another thread when the primary discussion here is centered around CPU scalability and overall negative performance delta in DX12 compared to DX11 on NVIDIA.

Those screenshots were posted in response to several members posting erroneous information concerning NVidia's CPU scaling, which pertained to the actual conversation. What you want to do is start a conversation that has nothing to do with the actual topic.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That is just rubbish. CPU utilization does not decrease with increased resolution. It does the same work regardless of the resolution. It is only the GPU that is affected by changes in resolution. That is why at 4K the graphs are all equal no matter what the CPU because it is the GPU that cannot keep up with the increased demand of outputting lots of pixels.

Thanks for proving my point that you have reading comprehension issues.

I NEVER said that CPU utilization decreased with increased resolution. I said that CPU utilization decreased because the GPU's framerate output has decreased, as a result of the higher resolution.

It's not rocket science man. 4K resolution puts more stress on the GPU, which reduces framerate. Reduced framerate leads to less CPU activity.
 

Elixer

Lifer
May 7, 2002
10,376
762
126
Why do we need drivers anyway? Everything should work right out of the box, like Linux... (really, 90% of the drivers for Linux are in the kernel)
What? Why would you think that?
Drivers are needed for every OS.
While there are some distros that try to make it seamless so hardware acceleration works, they just try and guess the right drivers.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Well, I do not agree that the CPU has less work to do when the GPU has more work because of the resolution increase. I think there is a slight increase because of the GPU driver, but not enough to be a serious limit.

I never said that the CPU has less work because of the resolution increase. I said that the CPU has less work because of the reduced framerate.

You can test this yourself if you want. Start up any game and test with Vsync enabled and then disabled. You'll notice that CPU usage increases when Vsync is disabled, because the GPU is drawing frames at a faster pace than it does with Vsync on.

But with async compute, I wonder if there will be solutions where they can do such stuff on the GPU as well.
I mean, a GPU is highly parallel and can do 3D position collision detection much faster and better than a CPU if the GPU has instructions that are designed for that kind of work. A lot of 3D position data could be compared and tested in parallel. I wonder if the GPU would be used for that in the future, or is already utilized for things the CPU used to do. Or maybe programmers utilize CPU SIMD instructions for that.

I used to be a big proponent of GPU physics, but honestly, CPU physics has come a long way over the years due to multithreading and SIMD optimization. Some of the effects that used to be possible only with GPU physics a few years ago (e.g. cloth simulation) now run very quickly on the CPU with PhysX 3.x. NVidia completely rewrote the PhysX API starting with version 3.0 to make effective use of multithreading and SIMD.

So I think that CPU physics has actually become viable for advanced physics effects. The only real problem of course is that quad cores aren't really enough to go balls out, as they get bogged down too easily with modern games.
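
To give a rough sense of why that kind of rewrite matters (this is just NumPy vectorization standing in for the kind of hand-tuned SIMD a real physics engine would use, and the particle counts are arbitrary): doing the math on whole arrays at once, instead of one object at a time, is the same data-parallel pattern.

```python
import time
import numpy as np

# Illustration only: NumPy vectorization standing in for hand-tuned SIMD.
# Integrate a batch of particles one step: scalar loop vs whole-array math.

n = 100_000
dt = 1.0 / 60.0
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)

pos = np.random.rand(n, 3).astype(np.float32)
vel = np.zeros((n, 3), dtype=np.float32)

def step_scalar(pos, vel):
    for i in range(len(pos)):        # one particle at a time
        vel[i] += gravity * dt
        pos[i] += vel[i] * dt

def step_vectorized(pos, vel):
    vel += gravity * dt              # whole arrays at once (SIMD-friendly)
    pos += vel * dt

t0 = time.perf_counter()
step_scalar(pos.copy(), vel.copy())
t1 = time.perf_counter()
step_vectorized(pos.copy(), vel.copy())
t2 = time.perf_counter()
print(f"scalar loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.3f}s")
```

The whole-array step typically finishes far faster than the per-particle loop on the same core, and that's before any multithreading is even added on top.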
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
I do NOT think Nvidia did anything to intentionally sabotage Ryzen's performance with their cards. I think Nvidia, like many others, simply did not have enough time to adjust their software for Ryzen CPUs.
 
Reactions: 3DVagabond

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Those screenshots were posted in response to several members posting erroneous information concerning NVidia's CPU scaling, which pertained to the actual conversation. What you want to do is start a conversation that has nothing to do with the actual topic.
If you had followed the discussion, it was mainly about DX12 CPU scaling and NVIDIA. It was you who derailed the thread with benchmarks at 720p with Ghost Recon Wildlands DX11.


I NEVER said that CPU utilization decreased with increased resolution. I said that CPU utilization decreased because the GPU's framerate output has decreased, as a result of the higher resolution.
Your explanation of why CPU utilization should decrease at higher resolution - because GPU does more work, and hence slows down - is completely wrong. It isn't borne out by facts or experience.

And it is comical to see you attempt to prove this point in your own fashion - by telling another poster to test with Vsync on and off.

First it was lower resolution which made the CPU do more work, then it turned to uncapped FPS by disabling Vsync which did the same.

None of which are true.
 
Reactions: DarthKyrie

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Your explanation of why CPU utilization should decrease at higher resolution - because GPU does more work, and hence slows down - is completely wrong. It isn't borne out by facts or experience.

I'm seriously beginning to think that you're trolling me. Either that, or your reading skills are truly abysmal. I said that the lower CPU utilization at higher resolutions was caused by LOWER FRAMERATE output from the GPU.

This is the second time you've totally misunderstood what I've said. If you do it again, I'm going to assume you can't read and I'm just going to put you on ignore.

And it is comical to see you attempt to prove this point in your own fashion - by telling another poster to test with Vsync on and off.

How is this comical? Vsync demonstrates what I'm talking about perfectly. Vsync disabled leads to higher CPU utilization, and Vsync enabled lowers it.

Or do you believe that the CPU is working just as hard at 30 FPS as it does at 60 FPS?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
I'm seriously beginning to think that you're trolling me. Either that, or your reading skills are truly abysmal. I said that the lower CPU utilization at higher resolutions was caused by LOWER FRAMERATE output from the GPU.

This is the second time you've totally misunderstood what I've said. If you do it again, I'm going to assume you can't read and I'm just going to put you on ignore.



How is this comical? Vsync demonstrates what I'm talking about perfectly. Vsync disabled leads to higher CPU utilization, and Vsync enabled lowers it.

Or do you believe that the CPU is working just as hard at 30 FPS as it does at 60 FPS?
Your entire explanation of lower CPU utilization at higher resolution due to the GPU doing more work and hence outputting lower fps which in turn reduces the load on the CPU is completely devoid of reason.

Throwing in Vsync to the equation is another poor attempt of deviating from what you said originally. If you cannot prove that lower resolution leads to more CPU utilization when FPS is uncapped, then I'd assume that you have no idea what you're talking about.

Throwing phrases like 'common knowledge', 'rocket science' into the mix doesn't do you any favor when something can be demonstrated by simple experiment.

To prove that you are wrong and that you shifted goal-posts the moment you talked about Vsync, I hereby provide concrete evidence:

 
Reactions: DarthKyrie

TimCh

Member
Apr 7, 2012
55
52
91
Your entire explanation of lower CPU utilization at higher resolution due to the GPU doing more work and hence outputting lower fps which in turn reduces the load on the CPU is completely devoid of reason.

Throwing in Vsync to the equation is another poor attempt of deviating from what you said originally. If you cannot prove that lower resolution leads to more CPU utilization when FPS is uncapped, then I'd assume that you have no idea what you're talking about.

Throwing phrases like 'common knowledge', 'rocket science' into the mix doesn't do you any favor when something can be demonstrated by simple experiment.

To prove that you are wrong and that you shifted goal-posts the moment you talked about Vsync, I hereby provide concrete evidence:

CPU usage (in the rendering pipeline) per frame is, generally speaking, constant no matter the resolution; if the frame rate drops due to higher resolution, the CPU usage drops too.

AI, physics, etc. might run independently of the frame rate.
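
A minimal sketch of that split (the loop structure and all the numbers are hypothetical): simulation work like AI and physics ticks at a fixed rate no matter the frame rate, while the per-frame render-submission cost on the CPU is the part that rises and falls with fps.

```python
# Hypothetical game-loop model: simulation (AI/physics) runs on a fixed
# timestep regardless of render rate, while render-submission CPU cost is
# paid once per frame and therefore scales with fps. Numbers are invented.

SIM_HZ = 60.0               # fixed simulation tick rate
SIM_MS_PER_TICK = 2.0       # invented CPU cost per simulation tick
RENDER_MS_PER_FRAME = 3.0   # invented CPU cost to build/submit one frame

def cpu_ms_per_second(fps: float) -> float:
    sim_cost = SIM_HZ * SIM_MS_PER_TICK       # constant, fps-independent
    render_cost = fps * RENDER_MS_PER_FRAME   # tracks the frame rate
    return sim_cost + render_cost

for fps in (144, 60, 30):   # e.g. CPU-bound low res / capped / GPU-bound 4K
    print(f"{fps:>3} fps -> {cpu_ms_per_second(fps):.0f} ms of CPU work per second")
```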
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
CPU usage (in the rendering pipeline) per frame is, generally speaking, constant no matter the resolution; if the frame rate drops due to higher resolution, the CPU usage drops too.

AI, physics, etc. might run independently of the frame rate.
Yes, that is what I'm saying. When you have uncapped FPS, or when you are comparing at a fixed FPS cap, changing the resolution would not do anything to CPU utilization. The lower FPS one gets at higher resolution is simply due to the load on the GPU increasing at each step.

Capping the frame rate at 30, 60 or 120 FPS at a fixed resolution and the consequent increase in CPU utilization going from a lower to a higher CAPPED FPS is completely different.

Note that this is only applicable in games with a fixed viewport. Games like RTSes and sidescrollers, where resolution determines how much stuff is rendered on screen, would show increased CPU utilization with increased resolution, as sketched below.
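
A tiny sketch of that caveat (the map, object positions and viewport sizes are all made up): when a higher resolution literally widens the visible area, more objects fall inside the view, and the CPU has more to cull, animate and submit each frame.

```python
import numpy as np

# Made-up example: in a game where higher resolution shows more of the map,
# more objects land inside the visible area, so per-frame CPU work grows.

rng = np.random.default_rng(1)
units = rng.uniform(0.0, 10_000.0, size=(5_000, 2))   # unit positions on a big map

def visible_count(view_w: float, view_h: float, center=(5_000.0, 5_000.0)) -> int:
    """Count units inside an axis-aligned viewport centred on `center`."""
    cx, cy = center
    inside_x = np.abs(units[:, 0] - cx) <= view_w / 2
    inside_y = np.abs(units[:, 1] - cy) <= view_h / 2
    return int(np.count_nonzero(inside_x & inside_y))

# Pretend the world area shown scales with resolution (fixed zoom level).
for res, (w, h) in {"1280x720": (2560, 1440), "2560x1440": (5120, 2880)}.items():
    print(f"{res}: {visible_count(w, h)} units visible -> more CPU work per frame")
```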
 
Reactions: DarthKyrie

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Yes, that is what I'm saying. When you have uncapped FPS, or when you are comparing at a fixed FPS cap, changing the resolution would not do anything to CPU utilization. The lower FPS one gets at higher resolution is simply due to the load on the GPU increasing at each step

Wrong!!

LOL, you are too funny. TimCh said the exact same thing that I've been saying, yet you "think" he is saying the same thing as you when he isn't.

TimCh is mirroring what I've been saying, which is that while resolution does not directly affect CPU performance, it can indirectly affect it because of the change in GPU performance.

So even after two people have tried explaining this simple concept to you, you still can't intellectually grasp it.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Wrong!!

LOL, you are too funny. TimCh said the exact same thing that I've been saying, yet you "think" he is saying the same thing as you when he isn't.

TimCh is mirroring what I've been saying, which is that while resolution does not directly affect CPU performance, it can indirectly affect it because of the change in GPU performance.

So even after two people have tried explaining this simple concept to you, you still can't intellectually grasp it.
Are you daft?

Watch the video I posted. Then tell me how resolution changes affect CPU utilization. You were the first to bring Vsync into the discussion when it was purely about resolution, with no FPS limit.

Why should I accept your nonsense when there is VIDEO EVIDENCE to the contrary, and I can just run a game and test it on my own?
 
Reactions: DarthKyrie

dogen1

Senior member
Oct 14, 2014
739
40
91
That is just rubbish. CPU utilization does not decrease with increased resolution. It does the same work regardless of the resolution.

It does the same amount of work per frame (unless the game adjusts the LOD based on resolution maybe). Assuming higher resolution reduces the framerate, the CPU will be doing less work.
 

Puffnstuff

Lifer
Mar 9, 2005
16,036
4,799
136
This question involves both Ryzen CPUs and Nvidia GPUs, and there is no possible way to separate the two. I had to choose the Nvidia forum or the CPU forum, and since the CPU is the main focus, I posted it here.
I want to use this thread to gauge people's opinion on the Ryzen/Nvidia performance issue. Do you think Nvidia was totally innocent regarding this? I think they stand to benefit a lot from Ryzen performing badly with their GPUs. It takes money and mindshare away from AMD, plain and simple. My opinion is that Nvidia was fully aware of the issue when no one else was, and they decided it wasn't their responsibility to fix it. That's what I think. What do you think? There are many nuanced responses that are possible, but I made it simple.

Did Nvidia know about the problem and choose not to fix it? Yes or no.
Seems to me that in most benchmarks even a Radeon performs better with an Intel CPU than with a Ryzen, so what's your point?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
It does the same amount of work per frame (unless the game adjusts the LOD based on resolution maybe). Assuming higher resolution reduces the framerate, the CPU will be doing less work.
Nope.

Stalker: Clear Sky, 1366x768, High Preset [screenshot]

With 1932x1086 DSR [screenshot]

If anything, I shifted the bottleneck to the GPU, which dropped the frame rate by 10 fps, and I got slightly higher utilization on the first core.

So, nope this entire premise of increased GPU load decreasing CPU utilization is FLAWED and MISLEADING.

 
Reactions: bononos