Discussion Nvidia Blackwell in Q1-2025


Heartbreaker

Diamond Member
Apr 3, 2006
4,653
6,109
136
(Note: I may be wrong about frame prediction, 'cos I'm assuming here that Jensen was being honest and not deceptive when he said the 50 series uses Transformers to predict forward instead of "buffering" and interpolating like CNNs.)

Issues here.

Assuming Jensen was not being deceptive is the first problem: the entire marketing campaign around FG for DLSS 3, and now 4, is built on deception.

The change from CNN to Transformer was for DLSS spatial upscaling (and the related ray reconstruction), not frame generation.

The actual change for FG was replacing the Optical Flow HW with a more efficient AI model (which means they could backport it to the 30 series at minimum, but won't). This doesn't give it magic powers: you still need the end frame to interpolate between the two.

Also, I don't think you will find he actually made any such claim during the presentation; that comment was an informal, enthusiastic AI-salesman remark made off the cuff later. Jensen is not Lex Luthor: he doesn't personally develop all the algorithms and likely doesn't understand in detail how all this stuff works. He's just boundlessly optimistic about AI being able to do anything, so of course it's "predicting frames".

I'm extremely confident that we will find, when third parties test this, that it has the same 1 frame lag for 2X operation (and even more for 3x/4x).
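A toy calculation (my own sketch, not a measured figure) of why interpolation implies roughly one base frame of extra lag:

```python
def added_lag_ms(base_fps: float) -> float:
    """Interpolation can't display anything between real frames N and N+1
    until N+1 has finished rendering, so frame N's on-screen arrival
    slips by about one base frame time -- regardless of the multiplier."""
    return 1000.0 / base_fps

for fps in (30, 60, 120):
    print(f"{fps:>3} fps base -> ~{added_lag_ms(fps):.1f} ms of added lag")
```

Higher multipliers don't add more of this buffering lag; they just slot more generated frames into the same held interval.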
 

CP5670

Diamond Member
Jun 24, 2004
5,633
733
126
that 4090 in 4k

Seems so; on Nvidia's slides it's 4K DLSS performance mode.
The graphics here are also nothing special; there is no reason for it to perform like this. I'm playing Horizon Zero Dawn from 2017, it looks very similar to this, and it runs at a constant 120 fps at all times. I bet the fps doubles if you turn ultra settings down to high with no noticeable visual difference.
 

Golgatha

Lifer
Jul 18, 2003
12,310
790
126
A good example of the issues of Frame-gen:


We'll get much more after launch. It's certainly not for twitch-based shooters, but can work very well in some other titles
Going back to my previous comment about DLSS not being in my value calculation: pure raster is the majority of the calculation for me because, even IF DLSS works in the game I'm playing, it's not guaranteed to work well. Pure raster performance benefits all games equally.
 

coercitiv

Diamond Member
Jan 24, 2014
6,956
15,590
136
IF DLSS works in the game I'm playing, it's not guaranteed to work well. Pure raster performance benefits all games equally.
Actually there's a whole new layer on top of this: strong raster performance practically guarantees you'll be able to make the most out of DLSS and FG as well.

FG behaves better when your framerate is already high; it's a "win more" kind of tech. DLSS likely also benefits from a higher base FPS (it still has lots of temporal dependencies), but the cherry on top is that once you don't really need upscaling... you can use DLAA instead. That's a pure IQ win, hands down. Once upscaling is out of the picture and your frames are very well defined using DLAA... guess who's going to introduce fewer artifacts... FG, of course

To paraphrase Jensen, the more raster you buy, the more AI win you get!
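To put rough numbers on the DLAA point, here's a sketch of internal render resolutions at 4K output. The scale factors are the commonly cited DLSS per-axis ratios (Balanced is approximate); treat them as assumptions, not official spec:

```python
# Commonly cited DLSS render-scale factors (per axis); DLAA renders natively.
# Balanced's 0.58 is approximate -- treat these values as assumptions.
MODES = {"DLAA": 1.0, "Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

out_w, out_h = 3840, 2160  # 4K output
for mode, s in MODES.items():
    w, h = round(out_w * s), round(out_h * s)
    print(f"{mode:12s} renders {w}x{h} ({s * s:.0%} of output pixels)")
```

The lower the internal resolution, the more the upscaler has to invent; at DLAA there's nothing to invent, which is the point above.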
 

SiliconFly

Golden Member
Mar 10, 2023
1,925
1,280
96
This is the original quote:
When we asked [Jensen Huang] how DLSS 4 multi frame generation works and whether it was still interpolating, Jensen boldly proclaimed that DLSS 4 "predicts the future" rather than "interpolating the past." That drastically changes how it works, what it requires in terms of hardware capabilities, and what we can expect in terms of latency.

He didn't explicitly mention the AI model that predicts the future frame, and I assumed it was transformers. But he did say the fake frames are predicted, not interpolated.


The actual change for FG was replacing the Optical Flow HW with a more efficient AI model (which means they could backport it to the 30 series at minimum, but won't). This doesn't give it magic powers: you still need the end frame to interpolate between the two.
No. With the new AI model for FG, I don't think the end frame is needed.

It appears they removed the Optical Flow Accelerators to run the new AI model purely on Tensor cores. This directly means the same thing I've been saying all along: this AI model doesn't care if it's predicting or interpolating (either forward or backward). And such models tend to be less complex, more efficient and more performant with less input, like 1 frame rather than 2 frames per cycle. It all depends upon the training. If it's trained to predict forward, it will do just that. And depending on what they've said and revealed (both on and off the record), it obviously points to prediction rather than interpolation.

[Jensen] ... and likely doesn't understand in detail how all this stuff works. ...
OMG! This is the ****est thing I've ever heard! He's the authority on AI. He knows more about the AI models that run in his products than us.

I'm extremely confident that we will find, when third parties test this, that it has the same 1 frame lag for 2X operation (and even more for 3x/4x).
On the contrary, I'm extremely confident that we'll find that the latency has reduced from 4090 to 5090 after 3rd party reviews (even with 3X FG).
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,653
6,109
136
OMG! This is the ****est thing I've ever heard! He's the authority on AI. He knows more about the AI models that run in his products than anybody else.

Sure, ~30K employees, and Jensen knows more about everything than any of them.


On the contrary, I'm extremely confident that we'll find that the latency has reduced from 4090 to 5090 after 3rd party reviews (even with 3X FG).

We will see soon enough whose confidence was misplaced.
 

SiliconFly

Golden Member
Mar 10, 2023
1,925
1,280
96
Sure, ~30K employees, and Jensen knows more about everything than any of them.
He doesn't need to know every single detail. But saying he doesn't even understand the basics of his own GPU is just not right.

We will see soon enough whose confidence was misplaced.
We will. And my solid bet is on prediction.

'Cos prediction requires a smaller, less complex and faster AI model than interpolation. Of course it requires a supercomputer to train, which they already have. And it only gets better as they train more.
 

coercitiv

Diamond Member
Jan 24, 2014
6,956
15,590
136
This is the original quote:
When we asked [Jensen Huang] how DLSS 4 multi frame generation works and whether it was still interpolating, Jensen boldly proclaimed that DLSS 4 "predicts the future" rather than "interpolating the past." That drastically changes how it works, what it requires in terms of hardware capabilities, and what we can expect in terms of latency.

He didn't explicitly mention the AI model that predicts the future frame, and I assumed it was transformers. But he did say the fake frames are predicted, not interpolated.



No. With the new AI model for FG, I don't think the end frame is needed.

It appears they removed the Optical Flow Accelerators to run the new AI model purely on Tensor cores. This directly means the same thing I've been saying all along: this AI model doesn't care if it's predicting or interpolating (either forward or backward). And such models tend to be less complex, more efficient and more performant with less input, like 1 frame rather than 2 frames per cycle. It all depends upon the training. If it's trained to predict forward, it will do just that. And depending on what they've said and revealed (both on and off the record), it obviously points to prediction rather than interpolation.


OMG! This is the ****est thing I've ever heard! He's the authority on AI. He knows more about the AI models that run in his products than us.
The source you quoted from is likely this Tom's Hardware article:

Here are two updates introduced in the article:
Update: No, it's not. The initial understanding that multi frame generation uses interpolation is the correct one. That's all we can say for now, and hopefully this clears up any confusion. More details to come at a later date.
[We think the misunderstanding or incorrect explanation stems from the overlap between multi frame generation and Reflex 2. There's some interesting stuff going on with Reflex 2, which does involve prediction of a sort, and that probably got conflated with frame generation.]

You need to understand that in the modern era, lying and bending the truth are always on the menu. This is why we need to question everything that seems too good to be true or possible.
 

SiliconFly

Golden Member
Mar 10, 2023
1,925
1,280
96
[We think the misunderstanding or incorrect explanation stems from the overlap between multi frame generation and Reflex 2. There's some interesting stuff going on with Reflex 2, which does involve prediction of a sort, and that probably got conflated with frame generation.]

That settles it.
 

Ranulf

Platinum Member
Jul 18, 2001
2,654
1,899
136
There already are monitors that can either do FHD @ 480 Hz or UHD @ 240 Hz:


4K monitors can do lossless scaling to 1080p and 1440p monitors to 720p so this might actually become a thing

About time they fixed the native resolution problem. Wake me up when I can get a 27-32'' screen that can do it for $200 or less. Every time I walk into a Costco or Best Buy I see $400-500 65'' LCD TVs or $250 43'' TVs in 4K. If I'm lucky I can get a 27-inch 1440p monitor for $175.
 
Reactions: Keller_TT

Golgatha

Lifer
Jul 18, 2003
12,310
790
126
About time they fixed the native resolution problem. Wake me up when I can get a 27-32'' screen that can do it for $200 or less. Every time I walk into a Costco or Best Buy I see $400-500 65'' LCD TVs or $250 43'' TVs in 4K. If I'm lucky I can get a 27-inch 1440p monitor for $175.
The 65in 4k set is about 70 ppi while a 1440p 27in monitor is about 110 ppi, so I'm not seeing an issue here.
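Those ppi figures check out; a quick sketch of the standard diagonal-pixel calculation:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"65in 4K TV:  {ppi(3840, 2160, 65):.0f} ppi")   # ~68
print(f"27in 1440p:  {ppi(2560, 1440, 27):.0f} ppi")   # ~109
```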
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,653
6,109
136
We will. And my solid bet is on prediction.

'Cos prediction requires a smaller, less complex and faster AI model than interpolation. Of course it requires a supercomputer to train, which they already have. And it only gets better as they train more.


1 minute later:


Update: No, it's not. The initial understanding that multi frame generation uses interpolation is the correct one. That's all we can say for now, and hopefully this clears up any confusion. More details to come at a later date.

1 minute to "that didn't age well". A new record.
 

GTracing

Senior member
Aug 6, 2021
276
646
106
Is predictive frame gen even feasible? When the user starts an attack, or turns the camera, or an enemy moves, or whatever, any predicted frames are going to be mispredicted. It would add stuttering, and it wouldn't reduce latency where it matters.
 
Reactions: KompuKare

gdansk

Diamond Member
Feb 8, 2011
3,768
6,017
136
Is predictive frame gen even feasible? When the user starts an attack, or turns the camera, or an enemy moves, or whatever, any predicted frames are going to be mispredicted. It would add stuttering, and it wouldn't reduce latency where it matters.
No, I don't think so. People were only entertaining it because they gave Jensen the benefit of the doubt.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,283
5,389
136
Actually there's a whole new layer on top of this: strong raster performance practically guarantees you'll be able to make the most out of DLSS and FG as well.

FG behaves better when your framerate is already high; it's a "win more" kind of tech. DLSS likely also benefits from a higher base FPS (it still has lots of temporal dependencies), but the cherry on top is that once you don't really need upscaling... you can use DLAA instead. That's a pure IQ win, hands down. Once upscaling is out of the picture and your frames are very well defined using DLAA... guess who's going to introduce fewer artifacts... FG, of course

To paraphrase Jensen, the more raster you buy, the more AI win you get!

DLAA doesn't get enough love.
 

Hulk

Diamond Member
Oct 9, 1999
4,938
3,367
136
No, I don't think so. People were only entertaining it because they gave Jensen the benefit of the doubt.
I like to play the "believer" for a few minutes at these things. "OMG! This is amazing!" Then after thinking it through, getting more details, it's like "Okay, sales BS with a whiff of truth."

You know, trust but verify. The 4090=570 hoax was fun for the 38 seconds it lasted.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,283
5,389
136
I like to play the "believer" for a few minutes at these things. "OMG! This is amazing!" Then after thinking it through, getting more details, it's like "Okay, sales BS with a whiff of truth."

You know, trust but verify. The 4090=570 hoax was fun for the 38 seconds it lasted.

Wait, an RX 570 is equal to a 4090? Let me check eBay real quick! I know it was an obvious typo. Just having a little fun.
 

poke01

Diamond Member
Mar 8, 2022
3,037
4,018
106
Here is a detailed, well-researched post about all the RTX 50 Blackwell improvements disclosed so far:


Every Architectural Change For RTX 50 Series Disclosed So Far

**Disclaimer:** Flagged as a rumor due to cautious commentary on publicly available information.

Some key changes in the Blackwell 2.0 design, i.e. the RTX 50 series, have been overlooked in the general media coverage and on Reddit. Those are covered here in addition to the more widely reported changes. That said, we still need the whitepaper for the full picture.

The info is derived from the official keynote and the NVIDIA website post on the 50 series laptops.

If you want to know what the implications are, [this igor’sLAB](https://www.igorslab.de/en/ces-2025-nvidia-introduces-the-geforce-rtx-50-series-with-blackwell-architecture-and-sets-new-standards/) article is good, and [this article](https://www.tomshardware.com/pc-components/gpus/nvidia-announces-rtx-50-series-at-up-to-usd1-999) by Tom’s Hardware is also worth reading for additional details and analysis.

**Neural Shaders**

Hardware support for neural shaders is the result of integrating neural networks into the programmable shader pipeline. This is possible because Blackwell has tighter co-integration of Tensor and CUDA cores, which optimizes performance. In addition, Shader Execution Reordering (SER) has been enhanced with software- and hardware-level improvements; for example, the new reorder logic is twice as efficient as Ada Lovelace’s. This increases the speed of neural shaders.

**Improved Tensor Cores**

New support for [FP6 and FP4](https://www.tomshardware.com/pc-components/gpus/nvidia-blackwell-rtx-50-series-gpus-everything-we-know) is functionality ported from datacenter Blackwell, part of the Second-Generation Transformer Engine. To drive Multi Frame Generation, Blackwell’s Tensor cores have doubled throughput (INT8 and other formats) vs Ada Lovelace, and 4x with FP4.

**Flip metering**

The display engine has been updated with flip-metering logic that allows for much more consistent frame pacing with Multi Frame Generation and Frame Generation on the 50 series.
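As a back-of-the-envelope illustration (my own arithmetic, not an NVIDIA figure), even pacing with frame generation means subdividing each real-frame interval evenly:

```python
def flip_interval_ms(base_fps: float, mult: int) -> float:
    """Ideal time between displayed frames when each real-frame interval
    is split evenly across the generation multiplier."""
    return 1000.0 / (base_fps * mult)

# 60 fps base with 4x MFG should flip roughly every 4.17 ms; uneven flips
# at that scale show up as judder, hence dedicated metering logic.
print(f"{flip_interval_ms(60, 4):.2f} ms per flip")
```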

**Redesigned RT cores**

The ray-triangle intersection rate is doubled yet again, to 8x per RT core, as it has been with every generation since Turing. Here’s the ray-triangle intersection rate per SM at iso-clocks for each generation:

1. Turing = 1x
2. Ampere = 2x
3. Ada Lovelace = 4x
4. Blackwell = 8x

As with previous generations, no changes to BVH traversal or ray-box intersections have been disclosed.

The new SER implementation also seems to benefit ray tracing, as per the [RTX Kit site](https://developer.nvidia.com/rtx-kit#iq771z):

”*SER allows applications to easily reorder threads on the GPU, reducing the divergence effects that occur in particularly challenging ray tracing workloads like path tracing. New SER innovations in GeForce RTX 50 Series GPUs further improve efficiency and precision of shader reordering operations compared to GeForce RTX 40 Series GPUs.”*

Like Ada Lovelace’s SER, it’s likely that the additional functionality requires integration in games, but it’s possible these advances are simply low-level hardware optimizations.

RT cores are getting enhanced compression designed to reduce memory footprint. Whether this also boosts performance and bandwidth, or simply implies a smaller BVH storage cost in VRAM, remains to be seen. If it’s SRAM compression then this could be “sparsity for RT” (the analogy is high level, don’t take it too seriously), but the technology behind it remains undisclosed.

All these changes to the RT core compound, which is why NVIDIA made this statement:

”*This allows Blackwell GPUs to ray trace levels of geometry that were never before possible.”*

This also aligns with NVIDIA’s statements about the new RT cores being made for RTX Mega Geometry (see the [RTX 5090 product page](https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/)), but what this actually means remains to be seen. We can, however, infer reasonable conclusions from the Ada Lovelace whitepaper:

”*When we ray trace complex environments, tracing costs increase slowly, a one-hundred-fold increase in geometry might only double tracing time. However, creating the data structure (BVH) that makes that small increase in time possible requires roughly linear time and memory; 100x more geometry could mean 100x more BVH build time, and 100x more memory.”*

The [RTX Mega Geometry SDK](https://www.youtube.com/watch?v=5PHBXY0FI5o&t=110s) takes care of reducing BVH build time and memory costs, which allows for up to 100x more geometric detail and support for infinitely complex animated characters. But we still need much higher ray intersection rates and effective throughput (coherency management), and all the aforementioned advances in the RT core logic should accomplish that. With additional geometric complexity in future games, the performance gap between generations should widen further.
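The quoted scaling argument can be sketched numerically. This is a toy model of the whitepaper’s claim (trace cost tracks BVH depth, roughly the log of triangle count, while build cost is roughly linear), with a made-up base triangle count:

```python
import math

def relative_costs(geometry_mult: float, base_tris: float = 1e6):
    """Toy model: traversal cost tracks BVH depth (~log2 of triangle
    count); build time and memory scale roughly linearly. base_tris
    is an arbitrary illustrative scene size."""
    trace = math.log2(base_tris * geometry_mult) / math.log2(base_tris)
    build = geometry_mult
    return trace, build

t, b = relative_costs(100)
print(f"100x geometry: ~{t:.2f}x trace cost, {b:.0f}x build time and memory")
```

This matches the whitepaper’s “100x more geometry might only double tracing time” in spirit: the traversal ratio barely moves while build cost explodes.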

**Hardware Advances Powering MFG and Enhanced DLSS Transformer Model**

With Ampere, NVIDIA introduced sparsity, a feature that allows pruning of trained weights in the neural network. This compression enables up to a 2x increase in effective memory bandwidth and storage, and up to 2x more math throughput. Ada Lovelace doubles these theoretical benefits with structural sparsity support.

For the new MFG and FG, and the transformer-enhanced Ray Reconstruction, Upscaling and DLAA models, it’s likely they’re built from the ground up to utilize most or all of the architectural benefits of the Blackwell architecture: structural sparsity, sparsity for dense math, and FP4/FP6/FP8 support (Second-Generation Transformer Engine).

It’s unknown whether the DLSS transformer models can benefit from sparsity; it’ll depend on the implementation. Whether the DLSS CNN models use the sparsity feature also remains undisclosed, but it’s unlikely given the nature of the neural network.

NVIDIA said the new DLSS 4 transformer models for ray reconstruction and upscaling have [2x more parameters and require 4x higher compute](https://nvidianews.nvidia.com/news/nvidia-blackwell-geforce-rtx-50-series-opens-new-world-of-ai-computer-graphics). The real-world ms overhead vs the CNN model is unknown, but don’t expect a miracle; it will be significantly higher than the CNN version’s. This is a performance-vs-visuals trade-off.

Here’s the FP16 tensor math throughput per SM for each generation at iso-clocks:

1. Turing: 1x
2. Ampere: 1x (2x with sparsity)
3. Ada Lovelace: 2x (8x with sparsity + structural sparsity), 4x FP8 (not supported previously)
4. Blackwell: 4x (16x with sparsity + structural sparsity), 16x FP4 (not supported previously)

And as you can see, the deltas in theoretical FP16 throughput, FP4-FP8 tensor math support (Transformer Engine) and sparsity will worsen model ms overhead and VRAM storage cost on every previous generation. Note this is relative, as we still don’t know the exact overhead and storage cost of the new transformer models.
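One concrete way to see why FP8/FP4 support matters for these models: weight storage shrinks linearly with precision. The 100M-parameter size below is made up for illustration; actual DLSS model sizes are undisclosed:

```python
def weight_mib(params_m: float, bits: int) -> float:
    """Weight storage in MiB for params_m million parameters at a given
    precision (bits per weight)."""
    return params_m * 1e6 * bits / 8 / 2**20

# Hypothetical 100M-parameter model at each precision the Tensor cores support.
for bits, fmt in ((16, "FP16"), (8, "FP8"), (4, "FP4")):
    print(f"{fmt:4s}: {weight_mib(100, bits):6.1f} MiB of weights")
```

The same scaling applies to math throughput on hardware with native low-precision support, which is why older generations without FP8/FP4 pay a double penalty: more VRAM and slower inference.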

**Blackwell CUDA Cores**

During [the keynote](https://www.youtube.com/watch?v=uDup7cYNU6c&t=498s) it was revealed that the Ada Lovelace and Blackwell SMs are different. This is based on the limited information given by Jensen during the keynote:

*"...there is actually a concurrent shader teraflops as well as an integer unit of equal performance, so two dual shaders: one is for floating point and the other is for integer."*

In addition, [NVIDIA's website](https://www.nvidia.com/en-us/geforce/news/rtx-50-series-graphics-cards-gpu-laptop-announcements/) mentions the following:

*"The Blackwell streaming multiprocessor (SM) has been updated with more processing throughput"*

How this implementation differs from Ampere and Turing remains to be seen. We don’t know if it’s a beefed-up version of the dual-issue pipeline from RDNA 3, or if the datapaths and logic for each FP and INT unit are doubled as in Turing. A doubled-Turing layout is most likely, as RDNA 3 doesn’t advertise dual issue as doubled cores per CU. If it’s an RDNA 3-like implementation and NVIDIA still advertises the doubled cores, then it’s as bad as the Bulldozer marketing blunder, which had only 4 true cores but advertised them as 8.

Here’s the two options for Blackwell compared on a SM level against Ada Lovelace, Ampere, Turing and Pascal:

1. Blackwell dual issue cores: 64 FP32x2 + 64 INT32x2
2. Blackwell true cores: 128 FP32 + 128 INT32
3. Ada Lovelace/Ampere: 64 FP32/INT32 + 64 FP32
4. Turing: 64 FP32 + 64 INT32
5. Pascal: 128 FP32/INT32

Many people seem baffled by how NVIDIA managed more performance (Far Cry 6) per SM with the 50 series despite the sometimes lower clocks compared to the 40 series. This could explain some of the increase.

**Media and Display Engine Changes**

Display:

”*Blackwell has also been enhanced with PCIe Gen5 and DisplayPort 2.1b UHBR20, driving displays up to 8K 165Hz.”*

The media engine’s encoder and decoder have been [upgraded](https://videocardz.com/newz/nvidia-geforce-rtx-50-series-adds-support-for-422-color-format-video-decoding-and-encoding):

”*The RTX 50 chips support the 4:2:2 color format often used by professional videographers and include new support for multiview-HEVC for 3D and virtual reality (VR) video and a new AV1 Ultra High-Quality Mode.”*

Hardware support for 4:2:2 is new, and the 5090 can decode up to 8x 4K 60 FPS streams per decoder.

5% better quality with HEVC and AV1 encoding, plus 2x speed for H.264 video decoding.

**Improved Power Management:**

”*For GeForce RTX 50 Series laptops, new Max-Q technologies such as Advanced Power Gating, Low Latency Sleep, and Accelerated Frequency Switching increases battery life by up to 40%, compared to the previous generation.”*

”*Advanced Power Gating technologies greatly reduce power by rapidly toggling unused parts of the GPU.*

*Blackwell has significantly faster low power states. Low Latency Sleep allows the GPU to go to sleep more often, saving power even when the GPU is being used. This reduces power for gaming, Small Language Models (SLMs), and other creator and AI workloads on battery.*

*Accelerated Frequency Switching boosts performance by adaptively optimizing clocks to each unique workload at microsecond level speeds.*

*Voltage Optimized GDDR7 tunes graphics memory for optimal power efficiency with ultra low voltage states, delivering a massive jump in performance compared to last-generation’s GDDR6 VRAM.”*

Laptops will benefit more from these changes, but desktops should still see some gains, probably mostly from Advanced Power Gating and Low Latency Sleep, though it’s possible they could also benefit from Accelerated Frequency Switching.

**GDDR7**

Blackwell uses GDDR7 which lowers power draw and memory latencies.

**Blackwell’s Huge Leap in Compute Capability**

The ballooned compute capability number of Blackwell 2.0, i.e. the 50 series, remains an enigma. In one generation it has jumped by 2.9, whereas from Pascal to Ada Lovelace it increased by 2.8 over three generations. Whether this supports Jensen’s assertion that consumer Blackwell is the biggest architectural redesign since 1999, when NVIDIA introduced the GeForce 256, the world’s first GPU, remains to be seen. The increased number could have something to do with neural shaders and tighter Tensor/CUDA core co-integration, plus other undisclosed changes. But it’s too early to say where the culprits lie.

For reference here’s the official compute capabilities of the different architectures going all the way back to CUDA’s inception with Tesla in 2006:

1. Blackwell: 12.8
2. Enterprise – Blackwell: 10.0
3. Enterprise – Hopper: 9.0
4. Ada Lovelace: 8.9
5. Ampere: 8.6
6. Enterprise – Ampere: 8.0
7. Turing: 7.5
8. Enterprise – Volta: 7.0
9. Pascal: 6.1
10. Enterprise – Pascal: 6.0
11. Maxwell 2.0: 5.2
12. Maxwell: 5.0
13. Big Kepler: 3.5
14. Kepler: 3.0
15. Small Fermi: 2.1
16. Fermi: 2.0
17. Tesla: 1.0 + 1.3
 
Reactions: Tlh97

poke01

Diamond Member
Mar 8, 2022
3,037
4,018
106
”*For GeForce RTX 50 Series laptops, new Max-Q technologies such as Advanced Power Gating, Low Latency Sleep, and Accelerated Frequency Switching increases battery life by up to 40%, compared to the previous generation.”*

”*Advanced Power Gating technologies greatly reduce power by rapidly toggling unused parts of the GPU.*

*Blackwell has significantly faster low power states. Low Latency Sleep allows the GPU to go to sleep more often, saving power even when the GPU is being used. This reduces power for gaming, Small Language Models (SLMs), and other creator and AI workloads on battery.*

*Accelerated Frequency Switching boosts performance by adaptively optimizing clocks to each unique workload at microsecond level speeds.*
This could also be used in their upcoming laptops, so it’s good to see some major updates here.
 
Reactions: coercitiv

Saylick

Diamond Member
Sep 10, 2012
3,798
8,666
136
Is predictive frame gen even feasible? When the user starts an attack, or turns the camera, or an enemy moves, or whatever, any predicted frames are going to be mispredicted. It would add stuttering, and it wouldn't reduce latency where it matters.
It's doable, but I think it will be problematic and lead to actual fake frames. How can an AI predict user input? How would it predict animations where objects change direction suddenly without explicitly learning each animation? How would it predict objects which appear out of nowhere or appear from behind a wall?

No, I don't think so. People were only entertaining it because they gave Jensen the benefit of the doubt.
I think people need to stop giving Jensen the benefit of the doubt just because he's the CEO of the most valuable company at a time when the AI bubble is hyped like crazy. Just because he's successful doesn't mean he's automatically right. I'd argue it's more likely he's wrong, because the people who are going to be the most knowledgeable in a particular subject are the engineers who developed it. CEOs tend to have a broad, general understanding of things. This error in his statement about how DLSS 4 MFG works is likely a result of that.
 
Reactions: poke01