[TT] Pascal rumored to use GDDR5X..


Azix

Golden Member
Apr 18, 2014
1,438
67
91
Seems the Nano is even shorter than the 970, so it's somewhat true. You most definitely can't physically make GDDR5 cards much smaller, because you need space for the power circuitry and VRAM.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
If HBM 2 is in short supply, why wouldn't they use GDDR5X for their midrange designs? Since when is video card size a priority for anyone besides the very very tiny niche of people that build very specific HTPCs? Man, some of these arguments are really reaching. If AMD is smart, they'll follow in NVIDIA's footsteps or keep seeing production shortfalls and declining market share. I mean, what are they at right now, 18% and falling? Should be interesting to see what the conference call tomorrow reveals.

Most, if not all of the "arguments", are reaching.

This absolutely killed me; the bold part.
"You proved that video cards can be made more small without HBM but you did not disprove that HBM does brings in savings when it came to PCB size ...

As if this argument EVER mattered. Savings when it comes to PCB size. Because everyone looks for PCB savings.
Some real head shakers around here.

Question. Is the Fury Nano at all diminished from its other Fury brethren? Clocks? Shaders? ROPs? Texture units?
 
Last edited:

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
It's impressive what Nvidia did with the full GTX 980 for laptops. Imagine making it into an ITX card.



If an ITX design war ever happened, HBM would certainly win, but Happy Medium brings up two good points:

1) GDDR5, when engineered right, can get close enough in PCB size (the 970 ITX is nearly as small as the Nano), and 2) they will still slap fat heatsinks and multiple fans on cards anyway since it makes a card quieter and cooler (as a user of a fatass but cool and quiet HBM card, I can attest that this is a good decision).

The power consumption and overall memory bandwidth advantage of HBM2 will leave GDDR5X behind eventually. But I see no reason to question that GDDR5X is a good stopgap until prices and production of HBM2 get into full swing.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Yes it is, see the picture of the GTX 970 I posted above.

You might have to elaborate on what "overall reduction in card size" means.
Are you saying that without HBM, cards cannot be made smaller?
If so, you're wrong.

You need to rebait your hook.
There is no hook or bait.

Showing a GTX 970 means nothing. So what? Is the GTX 970 the first mini-ITX card ever? There are a ton of mITX cards besides the GTX 970, so I don't get your point.

Again, HBM brings an overall reduction in card size.
Are you disputing this fact?

This isn't an Nvidia vs AMD thing; that's why I'm not referencing any cards on either side myself. It's irrelevant: HBM will allow smaller, faster cards on average. I'm happy. End point.

Instead of a GTX 970-class card in an mITX form factor, we can get a GTX 980 Ti-class card with HBM in an mITX form factor, and it's in that form factor because it's just that efficient.

Don't see why any person in the world would not be excited for that, but apparently you are not.
 
Last edited:
Silverforce11

Feb 19, 2009
10,457
10
76
AMD would likely take the same approach and put GDDR5X on its mid-range and low-end stuff. If not for cost reasons, then think about this: HBM has had a troubled birth, and yields as well as the TSV stacking procedure were said to be difficult... how likely is it that HBM2 is ready for prime time to meet the demands of high-volume low-to-mid-range SKUs?

Not very likely. This leak makes perfect logical sense given the info about GDDR5X.

On the high end, if not for consumers then for HPC: Teslas and FirePros will need HBM2 to compete against the 3D memory on Intel's next-gen Phi.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Most, if not all of the "arguments", are reaching.

This absolutely killed me; the bold part.
"You proved that video cards can be made more small without HBM but you did not disprove that HBM does brings in savings when it came to PCB size ...

As if this argument EVER mattered. Savings when it comes to PCB size. Because everyone looks for PCB savings.
Some real head shakers around here.

Question. Is the Fury Nano at all diminished from its other Fury brethren? Clocks? Shaders? ROPs? Texture units?

That was exactly my point, Keysplayr, actually. It's my own statement everyone is responding to, so yes, that's what I said: that HBM has a great benefit in reducing card size.

Why ANYONE would be upset with this is beyond me.


It allows for a smaller and more efficient card to be made. That's why I'm excited for HBM. Why you guys want to bring the Nano into this, I don't know. HBM allows for smaller designs, as shown by Nvidia's own debut of Pascal. So yes, why would I NOT be excited for that? Instead of needing a full-length card, we can get big Pascal performance that fits into a mini case? Or Arctic Islands?

Being able to fit more powerful GPUs into smaller spaces is cool... and exciting.

Are you saying you're against that and would prefer Pascal to be as long of a card as possible?
 

Spanners

Senior member
Mar 16, 2014
325
1
0
Yeah, you're right, just like this HBM1 GTX 970 Mini... oh no, never mind, it had GDDR5.

*img snipped*

Maybe it was this tiny HBM Fury card?

*img snipped*

Point is, it doesn't need HBM to be small, or have to be small because of HBM.

What a disingenuous way to try and make a point. Fact is, HBM allows for smaller cards, all other factors being equal. Showing a 10mm GDDR card or a 10m HBM card doesn't change that.
 

lilltesaito

Member
Aug 3, 2010
110
0
0
Most, if not all of the "arguments", are reaching.

This absolutely killed me; the bold part.
"You proved that video cards can be made more small without HBM but you did not disprove that HBM does brings in savings when it came to PCB size ...

As if this argument EVER mattered. Savings when it comes to PCB size. Because everyone looks for PCB savings.
Some real head shakers around here.

Question. Is the Fury Nano at all diminished from its other Fury brethren? Clocks? Shaders? ROPs? Texture units?

Not sure how you can keep poking at people about showing something they never said, yet not answer when people keep asking you to show the cost or show any facts.

I also do not think they are doing GDDR5X to save money; they are most likely doing it because they do not have enough HBM to go around. If they did, I bet they would use it and charge more because of it.
So this whole cost thing should not even be an issue whatsoever. Nvidia has no problem raising prices on video cards.

And now this is turning into an AMD vs Nvidia thing because someone said that HBM would help make the boards smaller.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
Unless the majority or all of the efficiency gains are leveraged for lower power consumption, the high-performance cards are still going to have comparatively higher power draw. This means that the larger cooling solutions will still have great benefits in terms of noise and temperatures.

Which AMD board designs get the most recommendations? The largest triple-fan solutions from Sapphire. Which Fury design gets the most attention? The largest one, from Sapphire.

Even if we go down to the lower-power-draw, mid-level GTX 970, which card would get recommended the most purely in terms of board design (basically not factoring in things like price and warranty)? The huge Gigabyte Windforce 3X.

Would the majority of the market really be interested in moving from the larger two/three-fan heatsinks to smaller single-fan heatsinks?

For me personally, if all else were equal and space permitting, I'd still prefer a huge heatsink on any higher-performance card. Only down in the low-100W range or lower would I say you can arguably get acceptable noise levels, at least to me, with a small single-fan heatsink solution like the ones used on the Nano or the mini-ITX cards.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
It seems pretty likely that nV aren't producing cards with HBM until the end of 2016 simply because they can't do it any sooner. Remember that nV altered their roadmap and switched from HMC to HBM pretty late in the game. AMD spent a lot of resources developing the tech; nV isn't going to be able to just swap out a few components and have a card with HBM. All they've been able to show so far is something with some plastic squares glued on. I suppose there were no wood screws, at least. There is and will be a lot of marketing hype to try and persuade investors that it's just around the corner, but I bet we won't see an nV card with HBM until very late 2016, possibly slipping into 2017.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Unless the majority or all of the efficiency gains are leveraged for lower power consumption, the high-performance cards are still going to have comparatively higher power draw. This means that the larger cooling solutions will still have great benefits in terms of noise and temperatures.

Which AMD board designs get the most recommendations? The largest triple-fan solutions from Sapphire. Which Fury design gets the most attention? The largest one, from Sapphire.

Even if we go down to the lower-power-draw, mid-level GTX 970, which card would get recommended the most purely in terms of board design (basically not factoring in things like price and warranty)? The huge Gigabyte Windforce 3X.

Would the majority of the market really be interested in moving from the larger two/three-fan heatsinks to smaller single-fan heatsinks?

For me personally, if all else were equal and space permitting, I'd still prefer a huge heatsink on any higher-performance card. Only down in the low-100W range or lower would I say you can arguably get acceptable noise levels, at least to me, with a small single-fan heatsink solution like the ones used on the Nano or the mini-ITX cards.

What you choose to do is your own business. If you prefer the larger designs, great.

The option is there. Seriously again, I'm confused, are you guys AGAINST the implementation of new technology that is faster, consumes less power, and allows for smaller cards if you want them?

The odd thing is, next gen, when these cards are available, no one will say, "Hey, let's go back to GDDR5!"
But because AMD is currently using HBM and Nvidia doesn't yet have it in their chips, there is some kind of mental block going on on this forum where people are caught up in the current "battle" of cards and can't appreciate an advancement in technology.

Quite sad.
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
I found it funny to see the mini 970 thrown around as an argument. The fact that the 970 isn't even top of the line relative to the rest of nV's lineup makes it funnier.

AMD could take Tonga and cram it into a mini 970 PCB for all they care, considering the similar power requirements and both using GDDR5 connected with a 256 bit bus. That wouldn't be as impressive as what GM204 can do, anyway, under those constraints. Still, it'd be the perfect comparison.

On the other hand you have Fiji crammed into that ridiculously small PCB (nano) putting out performance comparable to the 980Ti with a joke of a heatsink, if you're inclined to set the power limit a little higher than 0% so powertune doesn't throttle it as often.







If nV/a third party manages to cram a full GM200 implementation in the PCB size of a Fury/Nano with an accompanying cooling solution, then we could have a decent comparison point. That doesn't seem feasible for them until HBM2 Pascal arrives.

Mini 970 can't be compared to Fiji (something enabled by HBM, after all) and its many versions with a straight face. 4 HBM stacks occupy almost the same space as one or two GDDR5 modules. Performance isn't even in the same category (even with Nano being such a "weird" product and priced as it is). It's apples and oranges.
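
For a sense of the scale involved, here's a rough back-of-the-envelope sketch. The package sizes are approximations from memory, so treat them as assumptions rather than datasheet figures: an HBM1 stack is roughly 5.5 x 7.3 mm, while a typical 170-ball GDDR5 package is around 12 x 14 mm.

Code:
# Approximate footprints only -- assumed figures, not datasheet-exact.
hbm1_stack_mm2 = 5.5 * 7.3      # ~40 mm^2 per HBM1 stack
gddr5_pkg_mm2 = 12 * 14         # ~168 mm^2 per 170-ball GDDR5 package

print(f"4 HBM1 stacks:   ~{4 * hbm1_stack_mm2:.0f} mm^2")
print(f"1 GDDR5 package: ~{gddr5_pkg_mm2:.0f} mm^2")

Which is roughly the comparison above: four stacks take about as much board area as a single GDDR5 package, and a 256-bit card needs eight of those packages.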


-------------

HBM frees up TDP that becomes available to the GPU itself for improved performance, on top of all the benefits relative to GDDR5 (and now GDDR5X) that we've already gone over and over in lots of threads around here. There's the interposer cost and increased complexity to pay for at first, but it clearly is the future and makes a lot of sense for GPUs and whatever other products it enables.

Just look at Fiji. It's self contained. You only have to add the power components and you're done. It's elegant. No need to bother with all those GDDR5 modules and bus routing on the PCB anymore. Both Fiji PCBs are just that, connectors, sockets and power components in a compact package. An AIO/triple fan open air cooler does it for Fury/X, a ridiculously tiny open air blower is enough for TDP limited Fiji (clock GCN to >1000MHz and you have a problem, nothing new here). Maybe GM200 could do well enough if TDP limited with such a small heatsink, but then you probably can't fit it in such a small PCB, not without some witchcraft. Maybe higher density GDDR5 modules could help with that.

Considering that nV is doing a node shrink (harder than usual, planar -> finfets) + new architecture, using something similar to GDDR5 would eliminate the jump to a new memory type and allow more room for success (we all remember Fermi and the 40nm + new arch + GDDR5 combo...). Still, I don't expect them to use GDDR5X for GP100. The high end requires HBM2! They have a 250-300W TDP budget for the entire card; use a power-hungry memory type (GDDR5X) and you have less and less of that available to the GPU itself. Clock HBM/HBM2 high enough to reach, say, 1TB/s and you still won't be anywhere near the power required to drive GDDR5/X to half of that bandwidth on a 384-bit bus. Yeah yeah, node shrink + new arch is a huge deal in terms of performance brought to the table over current 28nm hardware, but why limit the increase with an inferior technology? Especially when the absolute performance and perf/W enabled by those two other new factors are even higher than before, every watt counts more than ever!
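
To put some rough numbers on the bandwidth side, a back-of-the-envelope sketch using commonly quoted per-pin rates (ballpark figures, not official specs):

Code:
# Theoretical peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8.
# Per-pin rates below are ballpark figures commonly quoted for each memory type.
def peak_gbs(bus_width_bits, per_pin_gbps):
    return bus_width_bits * per_pin_gbps / 8

configs = [
    ("GDDR5,  384-bit @ 7 Gbps",             384, 7.0),
    ("GDDR5X, 384-bit @ 10 Gbps",            384, 10.0),
    ("HBM1,   4 stacks (4096-bit) @ 1 Gbps", 4096, 1.0),
    ("HBM2,   4 stacks (4096-bit) @ 2 Gbps", 4096, 2.0),
]

for name, width, rate in configs:
    print(f"{name}: ~{peak_gbs(width, rate):.0f} GB/s")

The wide-and-slow HBM interface is what buys that bandwidth at lower interface power; the exact power numbers depend on the implementation, so I won't pretend to know them.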

Fiji allowed GCN to become what it is today (even after almost 5 years) because of the improved power management and the freed TDP from the use of HBM that could now be allocated for the GPU itself. nV stands to benefit from this, too.



GP100 ought to make the jump to HBM2. Hell, GP104 could also be a monster with HBM2 (just watch them price it like the 680/980; margins should be enough to justify the change). If Pascal supports both memory standards, then, well, nV is the master of marketing and Apple-like brainwashing; they could very well sell a GDDR5X flagship and then make the change to HBM2 once it's 100% ready. Going forward, just leave GDDR5/X for the low end, that's where it belongs.

Just no more 970-like shenanigans. That's all I ask for this next round.


--------------

Question. Is the Fury Nano at all diminished from its other Fury brethren? Clocks? Shaders? ROPs? Texture units?

The Nano has a full Fiji, capped to a 1000MHz / 175W maximum, and configured such that PowerTune throttles the clock speed/voltage according to the load. The lighter the load, the less power consumed, and the closer the average clock speed is to the 1000MHz maximum.

If you lift the PowerTune setting from 0% (175W) you get an almost fixed 1000MHz clock speed in most games, with a little more fan noise and higher temperatures, although this depends heavily on the workload at hand. There isn't much headroom because the power delivery is cut down and targeted for that TDP. Considering how small the heatsink is, it manages quite well. If you overclock on top of that, well, it gets overwhelmed. Nothing unexpected here. GCN is most efficient in the 800-1000MHz range. Once you get higher than that, power starts to get out of control (at least for Hawaii and Fiji; Tahiti/Pitcairn aren't as dense and don't "suffer" as much). They should try to fix this for Arctic Islands. Maxwell performs so well in part because of how well it clocks.
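
Conceptually it's a governor loop: sample board power, drop a clock state when you're over the cap, climb back toward 1000MHz when there's headroom. This is just a toy sketch, not AMD's actual PowerTune implementation, and every number in it is made up for illustration:

Code:
# Toy power-limit governor -- illustrative only, not Fiji's real DPM/PowerTune logic.
BASE_LIMIT_W = 175          # Nano-style board power target
POWERTUNE_OFFSET = 0.0      # e.g. 0.10 if you raise the slider by +10%

clock_states_mhz = [300, 500, 700, 850, 925, 1000]   # hypothetical clock states
state = len(clock_states_mhz) - 1                    # start at the 1000MHz cap

def measured_power(clock_mhz, load):
    """Fake telemetry: power grows with clock speed and load."""
    return 40 + 0.14 * clock_mhz * load

limit = BASE_LIMIT_W * (1 + POWERTUNE_OFFSET)
for step, load in enumerate([0.6, 0.9, 1.0, 1.0, 0.7]):     # a varying game load
    power = measured_power(clock_states_mhz[state], load)
    if power > limit and state > 0:
        state -= 1          # over the cap: throttle one state down
    elif power < 0.95 * limit and state < len(clock_states_mhz) - 1:
        state += 1          # headroom: climb back toward the maximum
    print(f"step {step}: load {load:.1f}, measured ~{power:.0f}W "
          f"-> next clock {clock_states_mhz[state]} MHz")

The lighter the load, the more time it spends pinned at the top state, which matches what the card does in practice.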

For how unbalanced GCN is in its Fiji form relative to what Hawaii has to offer, and how much hardware it packs for non-gaming workloads, it's nothing short of amazing for an almost five-year-old architecture. AMD wasn't kidding when they said GCN was in it for the long haul when they presented it with the 7970. DX12 eliminating AMD's subpar (though good enough) DX11 driver and its overhead should make (and is already making) things more interesting, but that's something for another time and another discussion.
 
Last edited:

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I'd just like to point out that the size of the PCB does affect the cost, e.g. the amount of materials used. This, however, is a small part of the overall BOM cost (normally a lot smaller once manufacturing numbers reach the high tens of thousands). Unless we know how many layers, how many components, PCB-specific features like aluminium core or blind/buried vias, or quite simply the complexity of the layout, it's quite hard to say whether a smaller-looking PCB is cheaper or not, because those dictate the price a LOT more than the material costs.
 
Last edited:

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
Well, there is clearly a tradeoff: all the bus routing complexity that was once in the PCB is now in the interposer. Costs shift from one to the other. The PCB then becomes quite simple, I think, relatively speaking.

I suppose that going forward, the interposer and memory/GPU assembly process will result in less and less failed product as it gets perfected: better yields, cheaper. Just how much more expensive this way of doing things is right now vs well-established and well-understood GDDR5, we do not know.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I guess your universe doesn't accommodate the fact that this interposer allows, at the very least, a simpler AND smaller PCB.
Yes, there are new costs.
Yes, there are new savings.
Net is the important figure.
You don't know and neither do most of us here, so please don't try to portray yourself as an expert. If you do get verifiable numbers, please post them, as I'm sure a lot of us would like an accurate view of costs.

Just like the guarantee that the consoles wouldn't have APUs in them and that Maxwell was definitely 20nm. He doesn't know.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
http://hexus.net/tech/news/graphics/79513-samsung-starts-mass-produce-8gb-gddr5-dram/

http://www.kitguru.net/components/g...ns-mass-production-of-8gb-gddr5-memory-chips/

Those faster and higher density GDDR5 chips are also out there to be used, if possible.

Needing only half the number of chips would be good.

Well, for a given capacity, which was never a problem - see professional cards.
GDDR5/X lacks per-chip bandwidth, which in turn requires a wide memory bus, for example 256 bits wide. Now, that 256-bit bus is actually made up of eight 32-bit controllers, each paired with a GDDR5/X memory module. That's eight modules minimum for a 256-bit bus. Increasing density doesn't change that.
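
A quick sketch of that arithmetic (my own illustration, ignoring clamshell mode, which can double the chips per controller but never reduce them): the chip count is fixed by the bus width, while density only changes capacity.

Code:
# Minimum GDDR5/X chip count is set by bus width, not by chip density.
def min_chips(bus_width_bits, bits_per_chip=32):
    return bus_width_bits // bits_per_chip

def capacity_gb(bus_width_bits, chip_density_gbit):
    return min_chips(bus_width_bits) * chip_density_gbit / 8

for density_gbit in (4, 8):     # 4Gb chips vs the new 8Gb Samsung chips
    print(f"256-bit bus, {density_gbit}Gb chips: {min_chips(256)} chips minimum, "
          f"{capacity_gb(256, density_gbit):.0f} GB")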

Let's stick to 28nm because it is considerably cheaper, easier, and has better performance. 14/16nm, go home.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
If HBM volumes are low and prices high, I wouldn't be surprised if we see more HBM products in laptops than desktops in 2016.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Well, there is clearly a tradeoff: all the bus routing complexity that was once in the PCB is now in the interposer. Costs shift from one to the other. The PCB then becomes quite simple, I think, relatively speaking.

I suppose that going forward, the interposer and memory/GPU assembly process will result in less and less failed product as it gets perfected: better yields, cheaper. Just how much more expensive this way of doing things is right now vs well-established and well-understood GDDR5, we do not know.

Yeah, there is definitely a trade-off. I guess all the physical memory-related connections (from the GPU to VRAM) are moved onto the interposer, but then you've still got the connections from the GPU to everything else, along with the separate VRM for the HBM, still on the PCB. I think this is where the majority of the space savings come from.

You make a good point. After a certain point in time, the yields will be good and production techniques perfected, i.e. lower cost, etc. It will be the future of memory and will make GDDR obsolete even for the low end. But that's a fair way off because the technology is simply too raw at the moment, for lack of a better term.

I think it's sensible to assume that GDDR5 is cheaper than HBM, all things considered, as of now (and within 1-2 years). It would be interesting to know the real cost difference in percent, but I'm thinking it's substantial enough, seeing as Micron is continuing on with GDDR technology, with GDDR5X as a stopgap solution or perhaps a cheaper alternative.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I'm going to laugh when the next-generation Nvidia card with GDDR5X ends up being faster than the next AMD card, with 8GB of HBM2! OK, I will laugh and cry at the same time.

You have to understand that there are some performance metrics (compute-related) where the original Tahiti is still superior to Maxwell. So, when you want to laugh, remember you are only talking about the aspect of using a GPU as a toy, not as a serious piece of high-tech hardware. Also, while the software (games) isn't even taking full advantage of GCN hardware yet, it has actually held its own quite well performance-wise.

Listen though, it's perfectly fine for people to have their priorities aligned to fps/W and nothing else. I'm not trying to tell you what should be important to you. Just realize there is more to it than the narrow performance parameters you purchase for.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
AMD would likely take the same approach and put GDDR5X on its mid-range and low-end stuff. If not for cost reasons, then think about this: HBM has had a troubled birth, and yields as well as the TSV stacking procedure were said to be difficult... how likely is it that HBM2 is ready for prime time to meet the demands of high-volume low-to-mid-range SKUs?

Not very likely. This leak makes perfect logical sense given the info about GDDR5X.

On the high end, if not for consumers then for HPC: Teslas and FirePros will need HBM2 to compete against the 3D memory on Intel's next-gen Phi.

Actually, if AMD can get their hands on it, they will put HBM on anything they can. It's their baby and they've invested heavily in it.
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
The option is there. Seriously again, I'm confused, are you guys AGAINST the implementation of new technology that is faster, consumes less power, and allows for smaller cards if you want them?

No, what you don't understand is that few people care about any of these things except performance when it comes to high-end video cards. Less power? Really? You think owners of exotic sports cars are arguing about who gets better fuel economy? So long as it isn't like 5 MPG, no one shopping for them is going to care how "economical" they are. If the difference in power bills between a Nano and a 980 Ti is something you lose sleep over, you're involved in the wrong hobby. The only reason anyone cared about the terrible power usage of the 290X was the godawful OEM cooler AMD slapped on it, which made the card miserable to live with.

Card size? Again, who cares? Cases are probably one of the least frequently upgraded components of a PC. Has anyone been sitting around hoping for a Nano-sized card because their case can't accommodate a regular-sized video card and they have been stuck with onboard video for the last 3 years? I own a Silverstone Fortress FT02, pictured below (not my system).



As you can see, there is ample space for expansion cards. I have no complaints about this case and plan to keep it for years to come. Give me a non-idiotic reason why I should want a smaller video card when I have such a case.

As I've already discussed earlier in the thread I'm not blinded by shiny new things. The age of the tech means nothing to me.

Just because you can come up with some niche scenario where any of these things matter doesn't mean the rest of us need to care. This isn't an Nvidia vs AMD debate. As long as the video card is within reasonable bounds on all the things you listed, the only one I care at all about is performance. Up to this point HBM has not demonstrated any real-world benefit. The speed of the individual components is meaningless to me; only the performance of the finished product matters. If Matrox comes out of nowhere and releases a card with EDO DRAM that is faster than a card using HBM, that's what I will buy.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Seriously again, I'm confused, are you guys AGAINST the implementation of new technology that is faster, consumes less power, and allows for smaller cards if you want them?

You know better than this, Tential - or at least I hope so.

Nobody has said that GDDR5 is a better technology than HBM, but you provide no context for your rant. What everyone has been saying, at least those of us who think this is a logical step, is that HBM just isn't ready for prime time yet on a broad mainstream basis.

Silverforce said as much when he said:

Silverforce11 said:
HBM has had a troubled birth, and yields as well as the TSV stacking procedure were said to be difficult... how likely is it that HBM2 is ready for prime time to meet the demands of high-volume low-to-mid-range SKUs?

Exactly. GDDR5X is a stop-gap solution that is logical and reasonable while we wait for the full maturation of HBM2 (and further generations). This isn't inconsistent.

For you to try to paint people as anti-HBM because we're shills for NV or whatever is laughable. I don't think anyone can accuse Silverforce of being an NV shill, if you look at his post history, but at least he's intellectually honest enough to concede the current limitations of HBM without descending into a rant attacking everyone who understands this as unpaid shills for NV.

Honestly, why is it so difficult to understand this?
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
What you choose to do is your own business. If you prefer the larger designs, great.

The option is there. Seriously again, I'm confused, are you guys AGAINST the implementation of new technology that is faster, consumes less power, and allows for smaller cards if you want them?

The odd thing is, next gen, when these cards are available, no one will say, "Hey, let's go back to GDDR5!"
But because AMD is currently using HBM and Nvidia doesn't yet have it in their chips, there is some kind of mental block going on on this forum where people are caught up in the current "battle" of cards and can't appreciate an advancement in technology.

Quite sad.

I'm not sure who exactly you're lumping me in with but I'm certainly not against HBM itself.

I do, however, find the focus on selling the size advantage quite overstated, although I understand why it was focused on in the marketing materials, as it is a very tangible and easily communicable idea. But to me this seems quite far down the list in terms of the draw of HBM for high-performance desktop cards.

In practice, and even for the large majority of SFF builds, the size advantage is not really meaningful given that there is still a preference for much larger cooling solutions on high-performance cards. Even if we take both sides' 2x perf/W quotes at face value, it is likely that the higher-end performance range will still be roughly in the 200W area at the very least, if they want any meaningful performance increases.
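
Roughly the arithmetic behind that, with purely illustrative numbers: required power scales with the performance target divided by the perf/W gain, so even a 2x perf/W jump keeps a meaningfully faster card near 200W.

Code:
# Illustrative only: required power = today's power * desired speedup / perf-per-watt gain.
current_power_w = 275        # hypothetical ballpark for a current high-end card
perf_per_watt_gain = 2.0     # taking the "2x perf/W" claims at face value

for speedup in (1.3, 1.5, 1.7):
    needed_w = current_power_w * speedup / perf_per_watt_gain
    print(f"{speedup:.1f}x faster -> ~{needed_w:.0f} W")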

On the chance my perception is wrong and the market does shift en masse towards smaller heatsinks with higher-speed fans to compensate, then I guess I would be speaking out against that happening.

Let's say a year or two from now people post on here asking which Fury 2 with HBM2 model to get. Are the recommendations really going to be for some small single-fan heatsink model, or the huge ones? So I don't see the strong sell on HBM's space savings as being a big deal for this segment.
 