[bitsandchips]: Pascal to not have improved Async Compute over Maxwell


Abwx

Lifer
Apr 2, 2011
11,167
3,862
136
No, you still can't wrap your head around the fact that a buffer that empties faster can fill sooner, but whatever.

Since there are only a few frames stored waiting to be displayed, this discussion is moot, as only a few dozen MB are required to manage the output data flow.

If 10 rendered frames were stored permanently, that would require 160MB for 8 million pixels; I think there are only a few rendered frames stored at a given moment waiting to be displayed.
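A minimal back-of-the-envelope check in Python, assuming a 4K-class output (roughly 8 million pixels) and 32-bit colour; the 160MB figure above would correspond to about 2 bytes per pixel instead:

```python
# Back-of-the-envelope framebuffer sizing. Assumed values: 4K output,
# 32-bit colour, 10 frames kept in flight.
width, height = 3840, 2160
bytes_per_pixel = 4
frames_in_flight = 10

frame_mb = width * height * bytes_per_pixel / 1024**2
print(f"one frame: {frame_mb:.0f} MB, "
      f"{frames_in_flight} frames: {frames_in_flight * frame_mb:.0f} MB")
# -> one frame: 32 MB, 10 frames: 316 MB
```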
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
This is so off topic at this point.

This already happened on these forums.

None of this is really meaningful in relation to the thread. If you guys want to talk about Gameworks conspiracies I think you should start a new thread about it.

Amen. This is exactly the type of thread they like to lock around here. People who don't want it locked please ignore those who do.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
HBM's memory bandwidth advantage is between the GPU and the VRAM, not between the card and system memory. So if you need more than 4GB of VRAM, your performance tanks regardless of HBM. Having HBM isn't going to make your access to system RAM any faster than a card with GDDR5.

You aren't simplifying anything, you're just continuing to show you don't actually know what HBM is.

I'm not even saying 4GB is or is not enough. I'm pointing out the argument bias and goalpost shifting, using the 4GB debate that happened with the 980 and then again with the Fury X as an example of it.

And that's part of the confusion. You see, with DX11 games, memory management can be tuned via the software driver. AMD hired two engineers to tackle this with their Fiji lineup. Here's what they're able to do:

In instances where a game requires more than 4GB of VRAM...

- Dynamic Memory (system memory) is used to store data that is not required for a given frame.

- Data that is required for a frame is stored in the HBM framebuffer. So you're always pulling data from HBM and not system memory, even if you go over 4GB. That means no slowdowns.

- Replicated data is removed as well, because a lot of games replicate textures out of laziness.

- The PCIe bus is used to transfer data back and forth between system memory (dynamic memory) and the HBM framebuffer.

End result is that even if you spill into system memory, performance doesn't tank.
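A rough sketch of the general idea in Python, with hypothetical resource names and budgets; this is an illustration of per-frame residency management, not AMD's actual driver logic:

```python
# Illustrative sketch only (hypothetical names/budgets, not AMD's driver).
# Idea: keep this frame's working set resident in the 4GB HBM pool; demote
# everything else to system memory ("dynamic memory") reachable over PCIe.

VRAM_BUDGET_MB = 4 * 1024

def manage_residency(resources, needed_this_frame):
    """resources: dict of name -> size in MB. Returns (vram, sysram) sets."""
    vram, sysram, used = set(), set(), 0
    for name in sorted(needed_this_frame, key=lambda n: -resources[n]):
        if used + resources[name] <= VRAM_BUDGET_MB:
            vram.add(name)                  # hot data stays in HBM
            used += resources[name]
        else:
            sysram.add(name)                # spill: streamed over PCIe
    for name in resources:
        if name not in vram and name not in sysram:
            sysram.add(name)                # cold data lives in system RAM
    return vram, sysram

resources = {"albedo": 1500, "normals": 1200, "shadow_atlas": 900,
             "streaming_pool": 2000, "ui": 100}
print(manage_residency(resources, {"albedo", "normals", "ui"}))
```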
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It's running on the CPU. If NVIDIA cards were to have an advantage it would need to run on the GPU.

Their DX11 API overhead is lower than AMD's. Running CPU PhysX is going to affect AMD's performance before it affects nVidia's. So, you dial it up just enough to kill your competitor's performance but not yours. Job done.

The question is: can the same work be done with less CPU overhead? Likely so, since other games don't suffer the same performance-killing CPU overhead running PhysX on the CPU.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
And that's part of the confusion. You see, with DX11 games, memory management can be tuned via the software driver. AMD hired two engineers to tackle this with their Fiji lineup. Here's what they're able to do:

In instances where a game requires more than 4GB of VRAM...

- Dynamic Memory (system memory) is used to store data that is not required for a given frame.

- Data that is required for a frame is stored in the HBM framebuffer. So you're always pulling data from HBM and not system memory, even if you go over 4GB. That means no slowdowns.

- Replicated data is removed as well, because a lot of games replicate textures out of laziness.

- The PCIe bus is used to transfer data back and forth between system memory (dynamic memory) and the HBM framebuffer.

End result is that even if you spill into system memory, performance doesn't tank.

In other words, you can reduce the impact of not having enough memory by having dedicated engineers tweaking the drivers to individually optimise each game to work around it. That obviously has a number of downsides:
1) It requires dedicated engineers - I bet in one year's time they will no longer exist, once Fury is no longer the top card and AMD no longer cares as much about its performance.

2) It requires fixes for all games. Sure, the top games right now will be fixed, but there will be others that don't get patched. Some games might have mid-life updates that don't get re-patched.

Really, you are safer buying a card with enough VRAM for its level of performance to start with.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
In other words, you can reduce the impact of not having enough memory by having dedicated engineers tweaking the drivers to individually optimise each game to work around it. That obviously has a number of downsides:
1) It requires dedicated engineers - I bet in one year's time they will no longer exist, once Fury is no longer the top card and AMD no longer cares as much about its performance.

2) It requires fixes for all games. Sure, the top games right now will be fixed, but there will be others that don't get patched. Some games might have mid-life updates that don't get re-patched.

Really, you are safer buying a card with enough VRAM for its level of performance to start with.

Given that the R9 290 series is receiving similar tweaks, and that it was released in 2013/2014, I'd say it's pretty fair to say that AMD will keep this up for quite some time.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Oh and it only requires fixes for games which go over the framebuffer. There are only a few. Rise of the Tomb Raider and GTA V come to mind.
 

flopper

Senior member
Dec 16, 2005
739
19
76
Given that the R9 290 series is receiving similar tweaks, and that it was released in 2013/2014, I'd say it's pretty fair to say that AMD will keep this up for quite some time.

Yeah, and they didn't release a card that claimed 4GB of RAM but had 3.5GB, lying to the public.
Kind of a difference
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
The 970 also needs tweaks for every game so the drivers don't map meaningful assets onto that 0.5GB and performance goes down the drain. Talk about being the same lol
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Given that the R9 290 series is receiving similar tweaks, and that it was released in 2013/2014, I'd say it's pretty fair to say that AMD will keep this up for quite some time.

The 290 isn't bottlenecking on memory because it's a slower card running at lower settings; 4GB is just about perfect for its level of performance, so it doesn't need the tweaks. It's also an even older card, so that's not really a compelling argument that AMD will keep those engineers on Fury.

Anyway this is pretty off topic...
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
The 290 isn't bottlenecking on memory because it's a slower card running at lower settings; 4GB is just about perfect for its level of performance, so it doesn't need the tweaks. It's also an even older card, so that's not really a compelling argument that AMD will keep those engineers on Fury.

Anyway this is pretty off topic...

Crossfire?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
And that's part of the confusion. You see, with DX11 games, memory management can be tuned via the software driver. AMD hired two engineers to tackle this with their Fiji lineup. Here's what they're able to do:

In instances where a game requires more than 4GB of VRAM...

- Dynamic Memory (system memory) is used to store data that is not required for a given frame.

- Data that is required for a frame is stored in the HBM framebuffer. So you're always pulling data from HBM and not system memory, even if you go over 4GB. That means no slowdowns.

- Replicated data is removed as well, because a lot of games replicate textures out of laziness.

- The PCIe bus is used to transfer data back and forth between system memory (dynamic memory) and the HBM framebuffer.

End result is that even if you spill into system memory, performance doesn't tank.

I don't disagree with that, with a caveat, and a big one. That's the end result if you have a large enough frame buffer to allow that memory management to work as designed, making it transparent to the end user as it swaps data in and out. That won't always be the case, though. If it were that easy and always worked, nVidia and AMD wouldn't bother with 4GB+ of expensive HBM/GDDR5 memory. They'd use 2GB or 1GB, cutting the costs on VRAM while at the same time upping the TDP of the GPU for even better performance.

I've personally experienced performance tanking when VRAM gets saturated. It's the sole reason I traded my 2GB 680 SLI for my current 980Ti when I went to a 1440p screen. I had every intention of holding on to the 680s until Pascal before that.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
It doesn't affect the gaming experience, but affects reviews. People use reviews and performance indexes to make purchasing decisions. You can take it from here.


It's not a strange thought process: if you can turn off said features, you can compare performance in both cases (on/off) and evaluate their impact. Or do you reckon the game/driver bug manifests itself only with GW features turned off?

It's good to remain skeptical in the face of all the GW doom & gloom this forum may paint, but it ain't so good to create a thinking pattern that discards any evidence to the contrary.


You noticed that too? :sneaky:
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
And that's part of the confusion. You see, with DX11 games, memory management can be tuned via the software driver. AMD hired two engineers to tackle this with their Fiji lineup. Here's what they're able to do:

In instances where a game requires more than 4GB of VRAM...

- Dynamic Memory (system memory) is used to store data that is not required for a given frame.

- Data that is required for a frame is stored in the HBM framebuffer. So you're always pulling data from HBM and not system memory, even if you go over 4GB. That means no slowdowns.

- Replicated data is removed as well, because a lot of games replicate textures out of laziness.

- The PCIe bus is used to transfer data back and forth between system memory (dynamic memory) and the HBM framebuffer.

End result is that even if you spill into system memory, performance doesn't tank.

Not really. HBM is irrelevant for this, and if you need more system RAM than the buffer can reasonably handle, performance will still tank.

HBM or GDDR5 has little to no effect on buffering through system RAM. The HBM or GDDR5 interface only transmits data between the GPU and the VRAM. All communication with system RAM occurs over the PCI-e bus and the RAM interface (DDR3/4), which has the same bandwidth regardless of whether GDDR5 or HBM is used on the graphics card.

If a game requires more than 4 GB, some of that data will be stored in system RAM. Data can be transferred back and forth between the GPU and system RAM, but the speed is limited by the PCI-e interface and by the bandwidth of the system RAM. As game logic uses bandwidth from system RAM, the DDR3/4 bandwidth tends to be the limiting factor (easily seen in games where increasing the DDR3/4 speed increases performance even when VRAM is hugely in excess).

Is some sort of buffering to system RAM possible? Yes. But the performance of the buffering system depends on drivers/game engine design, PCI-e bandwidth, RAM bandwidth, etc., none of which are exclusive to HBM.
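Rough peak figures from published specs make the point; exact numbers vary by platform, but any spill path runs at PCIe/system-RAM speed regardless of what the card's local memory is:

```python
# Approximate peak bandwidths (GB/s, one direction) from published specs:
links = {
    "HBM1 (Fury X, 4 stacks)":  512.0,
    "GDDR5 (GTX 980, 256-bit)": 224.0,
    "dual-channel DDR3-1600":    25.6,
    "PCIe 3.0 x16":              15.75,
}
for name, bw in sorted(links.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} {bw:7.2f} GB/s")
# Spilling past VRAM always runs at the ~16-26 GB/s tier at best,
# whether the card's local memory is HBM or GDDR5.
```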
 

thilanliyan

Lifer
Jun 21, 2005
11,912
2,130
126
I don't disagree with that, with a caveat, and a big one. That's the end result if you have a large enough frame buffer to allow that memory management to work as designed, making it transparent to the end user as it's swapping data in and out. That won't always be the case though. If it were that easy and always work, nVidia and AMD wouldn't bother with 4GB+ of expensive HBM/GDDR5 memory. They'd use 2GB or 1GB. Cut the costs on vram while at the same time upping the TDP of the GPU for even better performance.

Do any games absolutely require that, for example, 5GB of data is stored in VRAM at any one time? I'm asking because I've read that some games change the amount of VRAM required based on what is available. If that is the case, and if for example any game really only requires 2GB of the data in VRAM at any one time, then there is the possibility for the 4GB buffer (with tweaks) to perform as well as the 6GB buffer (without tweaks). And as mentioned previously, if the 4GB buffer can be emptied and filled more quickly than the 6GB buffer, why couldn't it perform as well?

Now IF the game actually requires a full 6GB of data all at once, then yes, the 4GB card would probably suffer, but do any games really need that much VRAM in a given instant?
 
May 11, 2008
20,055
1,290
126
Polaris will support ROV and CR. That's what the "Primitive Discard Acceleration" is all about.

ROV is a performance-reducing feature btw.

What is clear to me is that Maxwell 3.0 (Pascal) needs HBM2 to shine. This is why AMD GCN3 (Fiji) is able to keep up with a GTX 980 Ti (GM200) at 4K despite having a lower ROP count and clockspeed.

NVIDIA went from an 8:1 ratio of ROPs to memory controllers in Kepler (GK110) to a 16:1 ratio with GM20x. In other words, Kepler (GK104/GK110) had a ratio of 8 ROPs per 64-bit memory controller.

So a 256-bit memory interface would give us 4 x 8 or 32 ROPs, and a 384-bit memory interface would give us 6 x 8 or 48 ROPs.

What NVIDIA did was boost that to 16 ROPs per 64-bit memory controller with GM20x. So a 256-bit memory interface now powers 64 ROPs and a 384-bit memory interface now powers 96 ROPs.
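That ratio arithmetic, as a quick check:

```python
# ROPs = (bus width / 64 bits per controller) * ROPs per controller
def rops(bus_bits, rops_per_mc):
    return (bus_bits // 64) * rops_per_mc

print(rops(256, 8), rops(384, 8))    # Kepler ratio:  32 48
print(rops(256, 16), rops(384, 16))  # Maxwell ratio: 64 96
```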

NVIDIA added (delta) color compression, which works on colored pixels and texels but not random pixels and texels, in order to make up for the lack of memory bandwidth. It helped out a bit but still couldn't keep up with GCN2's 64 ROPs under random scenarios, or with GCN3's ROPs under both random and colored scenarios.

What we're looking at, then, is NVIDIA's initial Pascal offerings being somewhat nice but not delivering the performance people seem to think they will. GP100, paired with HBM2, will be able to deliver the needed bandwidth for NVIDIA's bandwidth-starved 96 ROPs (Z testing, pixel blending, anti-aliasing, etc. devour immense amounts of bandwidth). Therefore I don't think we're going to see more than 96 ROPs in GP100. What we're instead likely to see are properly fed ROPs.

If the "GTX 1080" comes with 10 Gbps GDDR5X memory on a 256-bit memory interface then we'd be looking at the same 64 ROPs that the GTX 980 sports and the same 16:1 ratio (16 ROPs per 64-bit memory controller), but with 320GB/s of memory bandwidth as opposed to 224GB/s on the GTX 980. So the GTX 1080 (320GB/s) should deliver similar performance/clk to a GTX 980 Ti (336GB/s) at 4K, despite sporting 64 ROPs to the GTX 980 Ti's 96.

NVIDIA will likely set the reference clocks on the GTX 1080 higher in order to obtain faster performance than a reference-clocked GTX 980 Ti. So the performance increase of a GTX 1080 over a GTX 980 Ti, as it pertains to 4K performance, will likely be down to higher reference clocks.

I also think that the GTX 1080 will sport the same, or around the same, number of CUDA cores as a GTX 980 Ti (2,816). I could be entirely off, but that's what I think.

As for FP64, NVLink and FP16 support, those are nice for a data centre but mean absolutely nothing for gamers... Sad, I know.

So what we're looking at from NVIDIA, initially, is GTX 980 Ti performance (or slightly higher performance) at a lower price point with GP104. The real fun will start with GP100 by end of 2016/beginning of 2017.


On the RTG/AMD front..

RTG replaced the geometry engines with new geometry processors. One notable new feature is primitive discard acceleration, which is something GCN1/2/3 lacked. This allows future GCN4 parts (Polaris/Vega) to prevent certain primitives from being rasterized. Unseen tessellated meshes are "culled" (removed from the rasterizer's workload). Primitive Discard Acceleration also means that GCN4 will support Conservative Rasterization.

Basically, RTG have removed one of their weaknesses in GCN.

As for the hardware scheduling, GCN still uses an Ultra Threaded Dispatcher which is fed by the Graphics Command Processor and ACEs.


AMD replaced the Graphics Command Processor and increased the size of the command buffer (section of the frame buffer/system memory dedicated to keeping many to-be executed commands). The two changes, when coupled together, allow for a boost in performance under single threaded scenarios.

How? My opinion is that if the CPU is busy handling a complex simulation or other CPU-heavy work under DX11, you generally get a stall on the GPU side, where the GPU idles, waiting on the CPU to finish the work it is doing so that it can continue to feed the GPU.

By increasing the size of the command buffer, more commands can be placed in waiting, so that while the CPU is busy with other work, the Graphics Command Processor still has a lot of buffered commands to execute. This averts a stall.
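A toy model of why a deeper command buffer helps, with made-up tick costs (illustrative only): if the CPU normally submits ahead of the GPU, a deep buffer keeps the GPU fed through a CPU stall, while a shallow one starves:

```python
# Toy model: the GPU drains 1 command per tick; the CPU submits up to 2 per
# tick (running ahead) unless it is stalled on other work for `stall` ticks.
def gpu_idle_ticks(buffer_depth, stall, ticks=100, stall_at=50):
    queued, idle = 0, 0
    for t in range(ticks):
        cpu_stalled = stall_at <= t < stall_at + stall
        if not cpu_stalled:
            queued = min(buffer_depth, queued + 2)  # CPU submits ahead
        if queued:
            queued -= 1                             # GPU executes one command
        else:
            idle += 1                               # GPU starves
    return idle

print(gpu_idle_ticks(buffer_depth=2, stall=10))    # 9: shallow buffer starves
print(gpu_idle_ticks(buffer_depth=32, stall=10))   # 0: deep buffer hides stall
```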

So 720p/900p/1080p/1440p performance should be great under DX11 and Polaris/Vega.


Another nifty new feature is instruction prefetching. Instruction prefetch is a technique used in central processing units to speed up the execution of a program by reducing wait states (GPU idle time).

Prefetching occurs when a processor requests an instruction or data block from main memory before it is actually needed. Once the block comes back from memory, it is placed in a cache (and GCN4 has increased its cache sizes as well). When the instruction/data block is actually needed, it can be accessed much more quickly from the cache than if it had to be requested from memory. Thus, prefetching hides memory access latency.

In the case of a GPU, the prefetch can take advantage of the spatial coherence usually found in the texture mapping process. In this case, the prefetched data are not instructions, but texture elements (texels) that are candidates to be mapped onto a polygon.

This could mean that GCN4 (Polaris/Vega) will boost texturing performance without needing to rely on more texturing units. This makes sense when you consider that Polaris will be a relatively small die containing far fewer CUs (Compute Units) than Fiji, and that the Texture Mapping Units are found in the CUs. By reducing texel fetch wait times, you get more efficient use out of the Texture Mapping Units on an individual basis. Kind of like higher TMU IPC.
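A toy illustration of the latency-hiding idea (made-up tick costs, not GCN's actual mechanism): overlapping the next texel fetch with current work removes the fetch latency from the critical path whenever shading takes at least as long as the fetch:

```python
# Toy model: fetching a texel costs LATENCY ticks; shading with it costs
# SHADE ticks. With prefetch, the next fetch overlaps the current shade.
LATENCY, SHADE = 8, 10

def total_ticks(n_texels, prefetch):
    t = LATENCY                       # the first texel always waits on memory
    for _ in range(n_texels - 1):
        if prefetch:
            t += max(SHADE, LATENCY)  # next fetch hidden behind shading
        else:
            t += SHADE + LATENCY      # shade, then stall on the next fetch
    return t + SHADE                  # shade the final texel

print(total_ticks(1000, prefetch=False))  # 18000
print(total_ticks(1000, prefetch=True))   # 10008 -> latency mostly hidden
```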

On top of all this we have the new L2 cache, improved CUs for better shader efficiency, new memory controllers, etc.


So what we're looking at from AMD is Fury X performance (or slightly more) at a reduced price point for DX12, and higher-than-Fury X performance for DX11. Just like with NVIDIA, the real fun starts with Vega by end of 2016/beginning of 2017.


In conclusion,

We have Maxwell 3.0 facing off against a refined GCN architecture. Micron just announced that mass production of GDDR5X is set for this summer, so both AMD and NVIDIA are likely to use GDDR5X. It will be quite interesting to see the end result.

So does lacking Asynchronous compute + graphics matter? Absolutely.

Thank you for the good post. :thumbsup:
Makes this subforum interesting again.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Do any games absolutely require that, for example, 5GB of data is stored in VRAM at any one time? I'm asking because I've read that some games change the amount of VRAM required based on what is available. If that is the case, and if for example any game really only requires 2GB of the data in VRAM at any one time, then there is the possibility for the 4GB buffer (with tweaks) to perform as well as the 6GB buffer (without tweaks). And as mentioned previously, if the 4GB buffer can be emptied and filled more quickly than the 6GB buffer, why couldn't it perform as well?

Now IF the game actually requires a full 6GB of data all at once, then yes, the 4GB card would probably suffer, but do any games really need that much VRAM in a given instant?

As of today, most games are fine with 4GB of VRAM up to 4K resolutions. But that's not what we are really debating here. We are debating HBM vs GDDR5 when you actually NEED more VRAM than you have, be it 2GB, 4GB, 6GB, 8GB, etc. Which Enigmoid broke down better than I ever could.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
I don't disagree with that, with a caveat, and a big one. That's the end result if you have a large enough frame buffer to allow that memory management to work as designed, making it transparent to the end user as it swaps data in and out. That won't always be the case, though. If it were that easy and always worked, nVidia and AMD wouldn't bother with 4GB+ of expensive HBM/GDDR5 memory. They'd use 2GB or 1GB, cutting the costs on VRAM while at the same time upping the TDP of the GPU for even better performance.

I've personally experienced performance tanking when VRAM gets saturated. It's the sole reason I traded my 2GB 680 SLI for my current 980Ti when I went to a 1440p screen. I had every intention of holding on to the 680s until Pascal before that.

Memory size gets added for two reasons: need and speed. If you need more room, you can go with bigger chips or add more chips. If you need your memory to be faster you can clock it higher or add more chips, and thus make the connection wider. The overlap of adding more chips has been driving VRAM counts for a while, which is an area where NV was able to claw back some cost by using compression to lower bandwidth requirements.
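A small sketch of that coupling, assuming standard 32-bit GDDR5 devices (no clamshell mode): adding chips widens the bus (more bandwidth) and adds capacity at the same time:

```python
# Assumes standard 32-bit GDDR5 devices; bus width scales with chip count.
CHIP_BITS = 32

def bus_capacity_bw(n_chips, chip_gbit, gbps_per_pin):
    bus_bits = n_chips * CHIP_BITS
    capacity_gb = n_chips * chip_gbit / 8
    bandwidth = bus_bits / 8 * gbps_per_pin
    return bus_bits, capacity_gb, bandwidth

print(bus_capacity_bw(8, 4, 7))   # (256, 4.0, 224.0)  ~GTX 980-like
print(bus_capacity_bw(12, 4, 7))  # (384, 6.0, 336.0)  ~GTX 980 Ti-like
```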
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Not really. HBM is irrelevant for this, and if you need more system RAM than the buffer can reasonably handle, performance will still tank.

HBM or GDDR5 has little to no effect on buffering through system RAM. The HBM or GDDR5 interface only transmits data between the GPU and the VRAM. All communication with system RAM occurs over the PCI-e bus and the RAM interface (DDR3/4), which has the same bandwidth regardless of whether GDDR5 or HBM is used on the graphics card.

If a game requires more than 4 GB, some of that data will be stored in system RAM. Data can be transferred back and forth between the GPU and system RAM, but the speed is limited by the PCI-e interface and by the bandwidth of the system RAM. As game logic uses bandwidth from system RAM, the DDR3/4 bandwidth tends to be the limiting factor (easily seen in games where increasing the DDR3/4 speed increases performance even when VRAM is hugely in excess).

Is some sort of buffering to system RAM possible? Yes. But the performance of the buffering system depends on drivers/game engine design, PCI-e bandwidth, RAM bandwidth, etc., none of which are exclusive to HBM.

HBM is better than GDDR5 at swapping/refreshing/reading/writing data at the same time, which may have some benefit.

I don't remember the details, but it can do multiple of those across the stack in a single clock cycle.
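For reference, that concurrency comes from HBM's channel structure: each HBM1 stack exposes 8 independent 128-bit channels that can service different requests at once, and the same channel math gives the headline bandwidth:

```python
# HBM1 channel structure: 8 independent 128-bit channels per stack,
# 1 Gbps per pin on first-generation HBM.
CHANNELS_PER_STACK, CHANNEL_BITS, GBPS_PER_PIN = 8, 128, 1

def hbm_bandwidth_gb_s(stacks):
    pins = stacks * CHANNELS_PER_STACK * CHANNEL_BITS
    return pins / 8 * GBPS_PER_PIN

print(hbm_bandwidth_gb_s(1))  # 128.0 GB/s per stack
print(hbm_bandwidth_gb_s(4))  # 512.0 GB/s -> Fury X's headline figure
```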
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
HBM is better than GDDR5 at swapping/refreshing/reading/writing data at the same time, which may have some benefit.

I don't remember the details, but it can do multiple of those across the stack in a single clock cycle.

True, but the limiting factor is system bandwidth and the PCIe interface. This advantage of HBM simply does not matter when you are so hampered by system RAM.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Memory size gets added for two reasons: need and speed. If you need more room, you can go with bigger chips or add more chips. If you need your memory to be faster you can clock it higher or add more chips, and thus make the connection wider. The overlap of adding more chips has been driving VRAM counts for a while, which is an area where NV was able to claw back some cost by using compression to lower bandwidth requirements.

That's exactly my point: you can't simply say performance doesn't tank because you can swap assets in and out of system RAM, because if that were always the case, there would never be a need for more VRAM and my 2GB 680s would have seen a much longer life span.

Now we have people using complete guesswork about how HBM isn't affected because it's faster. If you're having to fall back on a 40GB/s pipe with higher latency, HBM's bandwidth advantage is almost completely negated.
 

C@mM!

Member
Mar 30, 2016
54
0
36
That's exactly my point: you can't simply say performance doesn't tank because you can swap assets in and out of system RAM, because if that were always the case, there would never be a need for more VRAM and my 2GB 680s would have seen a much longer life span.

Now we have people using complete guesswork about how HBM isn't affected because it's faster. If you're having to fall back on a 40GB/s pipe with higher latency, HBM's bandwidth advantage is almost completely negated.

You really need to stop using absolutes. No one's denying that dropping down into system RAM is slower; it's that if you have to, in an apples-to-apples comparison between HBM and GDDR5, it will impact you less on HBM.

As for whether you need more than 4GB of buffer, benchmarks show that at the present day it's not a huge issue. But yes, as games start using more of the buffer out of necessity rather than 'because it's there', that bottleneck will of course become more apparent.

Talk about flogging a dead horse mate.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
You really need to stop using absolutes. No one's denying that dropping down into system RAM is slower; it's that if you have to, in an apples-to-apples comparison between HBM and GDDR5, it will impact you less on HBM.

Except it won't, lol. It will impact you all the same. HBM doesn't do anything special in that regard. If you don't like hearing me say this then you need to stop responding. When I hear nonsense, which is "HBM is affected less", I'll reply calling it nonsense because, quite simply, that's a made-up statement. If you feel the horse is dead, move along. You've already said "you're done" with this argument several posts back. Was that a made-up statement too?
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Except it won't, lol. It will impact you all the same. HBM doesn't do anything special in that regard. If you don't like hearing me say this then you need to stop responding. When I hear nonsense, which is "HBM is affected less", I'll reply calling it nonsense because, quite simply, that's a made-up statement. If you feel the horse is dead, move along. You've already said "you're done" with this argument several posts back. Was that a made-up statement too?

Then do you want to explain why Rise of the Tomb Raider doesn't kill the Fury when using 5+ GB of RAM?

http://www.hardocp.com/article/2016/02/29/rise_tomb_raider_graphics_features_performance/13

The AMD Radeon R9 Fury X VRAM behavior does make sense though if you look toward the dynamic VRAM. It seems the onboard dedicated VRAM was mostly pegged at or near its 4GB of capacity. Then it seems the video card is able to shift its memory load out to system RAM, by as much as almost 4GB at 4K with 4X SSAA. If you combine the dynamic VRAM plus the on board 4GB of VRAM the numbers come out to equal numbers much higher than the AMD Radeon R9 390X and closer to what the GeForce GTX 980 Ti achieved in terms of dedicated VRAM.

Kudos to the AMD Radeon R9 Fury X for not breaking or seeming to run out of VRAM with no long pauses. Surprisingly there weren't long pauses like one would expect when running out of VRAM capacity. It must be that the dynamic VRAM is able to be leveraged and keep the video card from stalling out.

Performance is of course not playable, and seems to be lower than the GeForce GTX TITAN X on all accounts. Yet, they are still very similar in performance despite the Fury X being technically slower. The performance drops enabling 2X and 4X SSAA seem to be similar between both video cards. HBM doesn't seem to be showing any performance advantages here.

It scaled even better than the 12 GB Titan X. Even though both were obviously at unplayable frame rates, the 4GB of HBM wasn't holding it back, even though roughly half the working set was in system RAM on the Fury X versus fully dedicated VRAM on the Titan X (7.8-9.4GB dedicated on the Titan X vs 4GB dedicated + 2.5-3.8GB dynamic on the Fury X).
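Recomputing the totals from the [H]ardOCP numbers quoted above:

```python
# From the quoted [H]ardOCP figures (GB):
fury_dedicated, fury_dynamic = 4.0, (2.5, 3.8)
titan_dedicated = (7.8, 9.4)

fury_total = tuple(fury_dedicated + d for d in fury_dynamic)
print(f"Fury X effective: {fury_total[0]}-{fury_total[1]} GB, "
      f"Titan X dedicated: {titan_dedicated[0]}-{titan_dedicated[1]} GB")
# -> Fury X effective: 6.5-7.8 GB, Titan X dedicated: 7.8-9.4 GB
```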
 