AMD GPU14 Tech Event Sept 25 - AMD Hawaiian Islands


Yarn

Member
Sep 24, 2013
29
0
66
I think a lot of people are overreacting negatively to this Mantle thing. Let me address a few things, based on some assumptions I have about Mantle.

I see it as an addition, supplementary to DX/OGL. If Mantle is designed to enable easy porting of the GCN-specific optimizations devs will be putting into console games, then what's the problem? It should take only marginally more effort to port (if Mantle is designed like that), and devs would have to discard fewer of those optimizations in favour of general ones. Devs would still make DX/OGL-targeted ports. The way I view it, Mantle should enable cross-platform devs to make their optimizations relevant to a larger number of systems.

As for all the people saying that this "standard" will die if it is proprietary: they need to realize that the standard, so to speak, is really GCN, and that Mantle should stay relevant as long as people are optimizing for GCN-based hardware on consoles. Saying Mantle hurts the industry is akin to saying that taking advantage of close-to-the-metal access on consoles hurts the industry, because Mantle is just an extension of that to the PC realm.

I can also see why this needed to be done in this manner and I'd be willing to elaborate on that after some rest.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
For actual gaming experience in e.g. BF3, I think the sound DSP needs the most future development. Getting the angle of shots, explosions and so on, clearly hearing what is in front of you and behind you, with more directional detail than we get now. It's so important.

I really hope this sound development takes off, as it would bring the experience to a new level for headsets and first-person games.

To be frank, I don't care in the slightest whether this new high-end GPU has 2K or 3K shaders. It's boring. But I did go to bed just after the sound announcements feeling they were the most important thing in years. I really hope sound in general gets more attention. Great day for sure.
 
Last edited:

HurleyBird

Platinum Member
Apr 22, 2003
2,756
1,440
136
Which means what?

When marketing doesn't spell it out, they're sidestepping.

Yeah, that's not marketing BS. Everything in the presentation was very high level and broad, to the point where instead of stating something like "6.3 billion transistors" they would say something like "over 6 billion transistors." Marketing BS would be making false, intentionally misleading, or overly exaggerated statements.

Example -- here's what was marketing BS: Raja stating that using Mantle was akin to playing with 2018 hardware today. Maybe it's technically true, but only if developers had been focusing exclusively on that specific fixed hardware the entire time between now and 2018. That definitely hits the mark of being intentionally misleading.

Based on the statements in the conference, AMD and DICE obviously think Mantle will increase GPU performance, as do guys like Carmack and Lottes. You, on the other hand, seem to be in full denial mode, coming up with this theory out of left field without a shred of evidence. The question shouldn't be whether Mantle can increase GPU performance, but by how much, and with how much effort.

If December comes around and it's less than 10%, Nvidia probably doesn't have much to worry about. Between 15-25% and they should start sweating, and anything higher than a 30% speedup will most likely require a miracle for them to stay viable. Remember that, as the first Mantle-enabled project, BF4 is going to be on the low end of what we can expect in the future.
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Seconded! :thumbsup:

No one with an IQ of over 95 would believe that AMD could produce a 28nm card 2x faster than Titan. Most reasonable rumors pointed to performance between a reference 780 and Titan.

Again, just like not many people switched from a 6800U to an X850XT PE, a 4890 to a 285, a GTX 680 to a 7970GE, or from a 680 to a 770, the R9 290X is not meant to get people to sell their 780/Titan, because the performance increase is not enough. It's there to make sure AMD has a competitive high-end offering for the next 9-10 months against NV's finest and whatever else NV comes up with in the near future. Gamers who are going to be upgrading for BF4, for example, may then consider 780 vs. R9 290X. That's the point.

Spot on. With the Mantle update for BF4 in December, AMD seems to have a lot of ammo. If AMD's R9 290X with Mantle is significantly faster than the GTX Titan in BF4, this could be a huge factor in the purchasing decision. In fact, even on DX11, the R9 290X seems to be performing very well on Frostbite 2 / BF3:

http://wccftech.com/amd-hawaii-r929...osed-fastest-gpu-planet-1020-mhz-clock-speed/

In fact, for anyone who bought a GTX 680 or 7970, the second half of the 28nm generation of GPUs is not very exciting either.
If you want excitement and an increase of 75-100% over today's cards, you have to wait for 20nm Maxwell.

I kind of disagree. If AMD can price the R9 290X at USD 549, which I think is realistically possible given the R9 280X is priced at USD 299, then I think it's worth it. In titles like BF3 at 1600p with 4x MSAA, the R9 290X beats the HD 7970 GHz by 40+%. Also, Hawaii XT is clocked lower than the HD 7970 GHz, so the gap on a clock-for-clock basis is even higher. With Mantle we have to wait and see whether the gap between Hawaii and Tahiti widens; that's because Hawaii doesn't have Tahiti's bottlenecks: massive 300+ GB/s bandwidth, 44-48 ROPs, 4 geometry engines (most probably 4 raster engines). The improved efficiency of the Mantle API could help extract more actual performance out of the chips, and Hawaii is better positioned for that than Tahiti. Most importantly, gamers buy graphics cards to play the latest games. With blockbuster titles like BF4, Watch Dogs and Call of Duty: Ghosts launching soon, now is a good time to upgrade.

20nm products are at least 9-12 months away. Nvidia might come out with GM104 first in July 2014, followed by AMD with a 20nm flagship in Oct 2014. 20nm wafer costs are higher than 28nm at the same point in their lifecycle due to double-patterning immersion litho, so these 20nm GPUs are not going to be cheap. Also, 28nm high-k to 20nm high-k is not the big leap that 28nm high-k metal gate was from 40nm polysilicon gate. In my opinion, going for Hawaii XT, enjoying the performance in the latest games this holiday season, and upgrading 2 years later to a GPU produced on TSMC 16FF is not a bad idea. :thumbsup:
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I kind of disagree. If AMD can price the R9 290X at USD 549, which I think is realistically possible given the R9 280X is priced at USD 299, then I think it's worth it. In titles like BF3 at 1600p with 4x MSAA, the R9 290X beats the HD 7970 GHz by 40+%.

The HD 7970 OC is 70-75% faster than a 6970 OC in modern games. The performance increase AMD is bringing now is going to be worse in those terms. Also, I am personally more excited for next gen PC games. Without those, I couldn't care less if the R9 290X were 2x faster. There is nothing on the horizon that excites me to spend $600-1200 on 2 next gen cards. I want to fire up Witcher 3 and get 20 fps on my 7970, then put in a next gen 20nm card and get 50 fps. Then I'll be excited! :thumbsup:

With Mantle we have to wait and see whether the gap between Hawaii and Tahiti widens; that's because Hawaii doesn't have Tahiti's bottlenecks: massive 300+ GB/s bandwidth, 44-48 ROPs, 4 geometry engines (most probably 4 raster engines). The improved efficiency of the Mantle API could help extract more actual performance out of the chips, and Hawaii is better positioned for that than Tahiti.

Will wait for real-world benchmarks. I guess since I upgraded from an unlocked 6950 to a 7970 OC, even 40% faster than my current card isn't making me want to upgrade.

Most importantly, gamers buy graphics cards to play the latest games. With blockbuster titles like BF4, Watch Dogs and Call of Duty: Ghosts launching soon, now is a good time to upgrade.

But none of those have next generation graphics. Call of Duty: Ghosts in particular looks like crap. The game is clearly not using a next generation engine.

20nm products are at least 9-12 months away. Nvidia might come out with GM104 first in July 2014, followed by AMD with a 20nm flagship in Oct 2014. 20nm wafer costs are higher than 28nm at the same point in their lifecycle due to double-patterning immersion litho, so these 20nm GPUs are not going to be cheap.

$599-$649 for the R9 290X/780 is not cheap either. I guess I am getting older, but I am no longer excited about a 40% performance increase that lets me run BF4 with MSAA instead of without it. I want to fire up a next gen game, see my 7970 running it at 20 fps, put in a next gen GPU and get 50-60 fps. In the best looking PC game imo, Crysis 3, the GTX 780/Titan have one of their smallest leads over the 7970GE. I guess I might need to wait 2-3 years before I am wowed again by GPU tech.

In my opinion, going for Hawaii XT, enjoying the performance in the latest games this holiday season, and upgrading 2 years later to a GPU produced on TSMC 16FF is not a bad idea. :thumbsup:

I guess that depends on where you are in the upgrade cycle. You really think we'll have 16nm GPUs by 2015? Look how long it's taking for 20nm to arrive. I wouldn't be surprised if 16nm only showed up in Volta in late 2016.

28nm - Dec 2011 7970Ghz
28nm again - October 2013 R9 290X

Let's presume no 20nm for at least another 6 months; that makes it 2.5 years from 28nm to 20nm.

Another thing: the 780 has been available for a long time now. It seems like waiting for the R9 290X was a waste of time. Therefore, someone who held off buying a 780 up to now is probably not going to buy a 290X either, since it's more or less the same level of price/performance. For me, the 780/290X feel like stop-gap cards: they aren't fast enough for next gen games, and yet there are no next gen games out anyway. You are paying $600-650 to max out today's titles, but if you already beat Crysis 3 and Tomb Raider with a 680/7970GE, there isn't much incentive to upgrade unless you just love new hardware all the time.

---

As a side note, if the 280X is a straight rebadge of the 7970GE, I think there is a possibility of flashing the 280X BIOS onto a 7970/7970GE card. It also appears there will be no reference R9 280X cards, only after-market ones.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
DX definitely has shortcomings in terms of potential performance, but MS is trying to serve the greater good, if you believe their company line. That said, we don't know the details of Mantle yet. We don't know if it is direct hardware access. They did not provide full documentation on it, so we'll have to wait and see.

Honestly, I'm surprised that AMD has gone to such lengths as to try to implement a low level API like Mantle..

DirectX 11 may not be perfect, but it's a big leap over DX9 in terms of increasing CPU performance and reducing overhead. And DirectX 11.1 has further refinements that increase CPU performance even more if DICE is to be believed.

So it definitely looks as though Microsoft is serious about fixing the CPU overhead issue with DirectX.. This whole thing reeks of desperation on AMD's part..
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Definitely some interesting stuff from this tech event, and not just AMD announcing its new lineup. Most interesting to me are TrueAudio and Mantle. If AMD can really pull off hardware-accelerated virtual surround sound, it will be very enticing to audiophiles. It sounds like exactly the kind of feature to give AMD cards an edge over Nvidia cards, if the card needs to be specifically designed to support the feature. I do wonder how the feature will work in practice, though. Haven't there been similar attempts at hardware-accelerated audio before?

And Mantle... color me intrigued. If there's a real performance benefit to Mantle over DirectX 11.2, then I'm all for it. I will note that for this to succeed, I feel it needs to support Nvidia cards. AMD can probably manage developer relations to get TrueAudio into games without Nvidia support, but asking developers to code for a whole other 3D API that not everyone will be able to use? I can't see it being any more successful than PhysX, no matter how hard AMD pushes it. So yeah, Mantle needs to support Nvidia cards if they want to see it adopted widely.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,979
589
126
This whole thing reeks of desperation on AMD's part..
I guess that goes for the devs that are on board as well, huh.

Honestly, this was by far the best tech event AMD has ever done. Not for the production quality or anything like that; that part was painful. But think about past AMD events: here is the hardware, this is when it will come out, this is what it will cost. That's about it. This time, it was all about the software infrastructure to leverage the hardware, a monumental and much needed shift for AMD. With AMD locked into every next gen console and pushing the same tech, including software, onto the PC side, this is a major step for the company.
 

gorobei

Diamond Member
Jan 7, 2007
3,745
1,169
136
Honestly, I'm surprised that AMD has gone to such lengths as to try to implement a low level API like Mantle..

DirectX 11 may not be perfect, but it's a big leap over DX9 in terms of increasing CPU performance and reducing overhead. And DirectX 11.1 has further refinements that increase CPU performance even more if DICE is to be believed.

So it definitely looks as though Microsoft is serious about fixing the CPU overhead issue with DirectX.. This whole thing reeks of desperation on AMD's part..

No, the DX9 to DX11 difference isn't about reducing the overhead; it just raises the limits on the number of draw calls and adds some texture compression formats. The indirect nature of DX and the hardware abstraction layer will always be a longer, more roundabout path than sending calls direct to the metal. The ability to directly place textures/meshes into memory bypasses the DX "valet service" that will get your stuff there 'eventually'.
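
To make that "valet service" overhead concrete, here's a tiny toy benchmark (my own sketch in plain C++, not real D3D or Mantle code; the validation loop is a made-up stand-in for the per-call bookkeeping an abstraction layer does). It contrasts a draw-submission path that re-validates state on every call with one that just appends to a command buffer:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

struct DrawCmd { int mesh; int material; };

// "Thick" path: stand-in for the per-call state validation and hazard
// tracking a hardware abstraction layer performs on every draw.
static void submit_validated(std::vector<DrawCmd>& q, int mesh, int material) {
    volatile int checks = 0;
    for (int i = 0; i < 200; ++i) checks += i;  // fake validation work
    q.push_back({mesh, material});
}

// "Thin" path: the engine guarantees valid state up front, so a draw
// is just an append to a command buffer.
static void submit_direct(std::vector<DrawCmd>& q, int mesh, int material) {
    q.push_back({mesh, material});
}

int main() {
    const int kDraws = 1000000;
    std::vector<DrawCmd> a, b;
    a.reserve(kDraws);
    b.reserve(kDraws);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kDraws; ++i) submit_validated(a, i, i % 32);
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < kDraws; ++i) submit_direct(b, i, i % 32);
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](std::chrono::steady_clock::duration d) {
        return std::chrono::duration<double, std::milli>(d).count();
    };
    std::printf("validated path: %.1f ms\n", ms(t1 - t0));
    std::printf("direct path:    %.1f ms\n", ms(t2 - t1));
}
```

The GPU-side cost of each draw is identical in both cases; the difference is purely CPU time spent per call, which is exactly the overhead a thinner API promises to cut.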
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The HD 7970 OC is 70-75% faster than a 6970 OC in modern games. The performance increase AMD is bringing now is going to be worse in those terms. Also, I am personally more excited for next gen PC games. Without those, I couldn't care less if the R9 290X were 2x faster. There is nothing on the horizon that excites me to spend $600-1200 on 2 next gen cards. I want to fire up Witcher 3 and get 20 fps on my 7970, then put in a next gen 20nm card and get 50 fps. Then I'll be excited! :thumbsup:

20 fps -> 50 fps: that's 250% of the performance (a 150% increase). For that to happen you will need to wait 2 more years for TSMC 16FF GPUs. 20nm is not going to reach even 2x the perf of the HD 7970 GHz; I would say the realistic estimate is 25% faster than Hawaii XT, or around 65-70% faster than the HD 7970 GHz.

But none of those have next generation graphics. Call of Duty: Ghosts in particular looks like crap. The game is clearly not using a next generation engine.

What about BF4 and Watch Dogs? You are saying these games are not next gen? I would disagree on that.

I guess that depends on where you are in the upgrade cycle. You really think we'll have 16nm GPUs by 2015? Look how long it's taking for 20nm to arrive. I wouldn't be surprised if 16nm only showed up in Volta in late 2016.

28nm - Dec 2011 7970Ghz
28nm again - October 2013 R9 290X

Let's presume no 20nm for at least another 6 months; that makes it 2.5 years from 28nm to 20nm.

I think you know that 28nm to 20nm involves double-patterning immersion litho. These are difficult challenges which the semiconductor companies are solving. Intel got to 22nm without double patterning, but that's because they are an IDM that can work with an incredibly restrictive set of design rules; that does not work for foundries. Intel makes the transition to double patterning at 14nm.

Secondly, the foundries are keeping their BEOL (back end of line) the same and changing only the transistor device to 3D FinFETs at TSMC 16FF / GF 14XM. So the step is not a traditional process shrink but more of a performance boost (at the same voltage) or power savings (at the same perf) from a better transistor device. TSMC 16FF is expected a year after TSMC 20SOC, so if 20nm is Q3 2014, then 16FF is Q3 2015.

Another thing: the 780 has been available for a long time now. It seems like waiting for the R9 290X was a waste of time. Therefore, someone who held off buying a 780 up to now is probably not going to buy a 290X either, since it's more or less the same level of price/performance.

The GTX 780 has been available for 4 months at USD 650, with cards like the HOF 780 and Lightning 780 selling at USD 690-770:

http://www.newegg.com/Product/Produc...82E16814127754
http://www.newegg.com/Product/Produc...82E16814130943
http://www.newegg.com/Product/Produc...82E16814127754

If Hawaii XT comes in at USD 550-600, beats a stock GTX 780 (which it seems to per the leaked benchmarks, though we need official reviews to confirm) and matches these 780 OC cards, I consider it the better price/perf.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I guess that goes for the devs that are on board as well, huh.

I can't blame DICE/EA for supporting this, because the game will still run in DX11, so they don't lose anything.

AMD is the one to blame, especially if this causes NVidia to do something similar.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
No, the DX9 to DX11 difference isn't about reducing the overhead; it just raises the limits on the number of draw calls and adds some texture compression formats. The indirect nature of DX and the hardware abstraction layer will always be a longer, more roundabout path than sending calls direct to the metal. The ability to directly place textures/meshes into memory bypasses the DX "valet service" that will get your stuff there 'eventually'.

So instancing doesn't reduce CPU overhead?

And according to Repi, who works for DICE, the reason Battlefield 4 uses DirectX 11.1 is the reduction in CPU overhead. Something to do with constant buffers...
 

HurleyBird

Platinum Member
Apr 22, 2003
2,756
1,440
136
On a different note: listen to this Astound Sound demo with headphones on, guys!

http://www.youtube.com/watch?v=nKnhcsRTNME

The audio processing could actually be pretty awesome. I have some high-end headphone gear and also a Smyth Realiser, so it will be interesting to compare how the virtualization tech works with headphones versus 7.1.

Either way, a nice little bonus.
 
Last edited:

itsmydamnation

Platinum Member
Feb 6, 2011
2,892
3,459
136
Which means what?

When marketing doesn't spell it out, they're sidestepping.

I didn't know Repi, a.k.a. Johan Andersson, DICE's Frostbite lead graphics programmer, was in "marketing".

The 10x to 100x is simply a draw call limitation, and AMD is getting nine times more draw calls from the same CPUs with this new API, which is a far cry from the 10 to 100 times others have stated. Which, again, has nothing to do directly with GPU performance increases unless you're CPU limited.

What Repi said was linear scaling of rendering across threads. Slow draw calls can have a GPU impact because you need more latency hiding than otherwise, since the GPU is waiting longer for all the draw calls.

Repi also said lots of low-level optimizations become possible; now, that doesn't sound like a GPU performance improvement at all!!! Or the parts about how the game engine can often make decisions based on detailed information that the DX API can't, and thus DX takes a slower code path, etc.
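
For what it's worth, the "linear scaling for rendering across threads" idea is easy to picture with a toy model (again my own sketch, not Frostbite or Mantle code): each worker records draws into its own command buffer with no locks, and ordering is imposed once at submit time:

```cpp
#include <cstdio>
#include <thread>
#include <vector>

struct DrawCmd { int mesh; };

int main() {
    const int kThreads = 4;
    const int kDrawsPerThread = 250000;

    // One command buffer per worker: recording needs no locks at all,
    // which is what lets it scale roughly linearly with core count.
    std::vector<std::vector<DrawCmd>> buffers(kThreads);
    std::vector<std::thread> workers;

    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&buffers, t, kDrawsPerThread] {
            buffers[t].reserve(kDrawsPerThread);
            for (int i = 0; i < kDrawsPerThread; ++i)
                buffers[t].push_back({t * kDrawsPerThread + i});  // "record" a draw
        });
    }
    for (auto& w : workers) w.join();

    // A single ordered "submit" afterwards: concatenating buffers 0..N-1
    // preserves frame ordering even though recording ran in parallel.
    std::size_t total = 0;
    for (const auto& b : buffers) total += b.size();
    std::printf("recorded %zu draws on %d threads\n", total, kThreads);
}
```

Contrast that with a single immediate context, where every draw from every thread would have to funnel through one serialized submission point.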
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
I am not sure, but I found the new GPU announcements kind of underwhelming. They are not confident themselves that it can beat Titan out of the box, which is very weird. They took a long time to bring out these parts, so there is no excuse either. Mantle looks sexy, though.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So yeah, Mantle needs to support Nvidia cards if they want to see it adopted widely.

I don't think that's going to happen. In the past, for GPU performance optimizations, GPU makers relied on drivers and on working closely with developers to bring out certain features in the game engine that showcased their own GPU architectures. If there were issues, game developers would also issue patches to fix brand-specific performance if required.

Now AMD is introducing a lower-level API that supposedly takes even greater advantage of GCN hardware. Think of it as a "super driver", if you will. By its very nature it is meant to improve performance for GCN parts. It may or may not improve performance for NV due to lower draw call overhead vs. DX.

I don't understand why some people here call it "artificially locked". It's like asking NV to release a Windows driver update that improves performance in all DX11 titles but also improves performance for AMD GPUs. I would actually want NV and AMD to collaborate on an open standard API that dethrones the inefficient DX API, but I can see why AMD wants to target their own GPUs. They have been unable to compete with NV on the hardware side, since NV has an inherent advantage in building large monolithic-die GPUs. Trying to extract more performance from AMD's hardware through other means (AMD GE titles, a lower-level API) seems to be how AMD compensates for having slower GPU hardware, given its current inability to create a 520-560mm2 die. I don't think this is unfair to NV, since NV can also work with developers to make its own API and promote it if it really wanted to.

I called it 2 years ago, when AMD started the whole AMD GE program, that it would become a game of who throws more $ at developers. We have no one to blame for this but NV, since they started all of this in the first place by throwing $ at developers and optimizing games for their hardware. AMD tried to gain market share with price/performance and game bundles, and it didn't work. The next step is throwing even more $ at developers, which in this case means taking it to the next level by providing them with lower-level access to GCN hardware.
 
Last edited:

Fire&Blood

Platinum Member
Jan 13, 2009
2,331
16
81
I have to admit, I would have been quick to dismiss Mantle as some futile diversion covering for a lack of proper new introductions by AMD.

So this Mantle stuff, will it also work on the 7970 or is it just for the 290X?

All GCN GPUs; look at the slide I'm posting:

[attached: AMD presentation slides]
AMD can gain serious traction for Mantle since they have their hardware in both next gen consoles; PC alone would have been far less successful. There are some bold statements in those 2 graphs; even watered down to discount the wow effect the presentation was going for, it's still a significant move.

From the end user perspective, it can't get much worse, so it will be business as usual if this initiative dies quickly. However, even before today's presentation, AMD had already scheduled a follow-up in San Diego a month from now. Again, as a PC end user I have nothing to lose if this fails, but I am starting to think there is a lot to gain. Either way, this week has seen a lot of pushing back against Microsoft, and it has got me curious.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Wow, lots of FUD in this thread. It's hard to wade through it all. Worse than the presentation itself.
 

gorobei

Diamond Member
Jan 7, 2007
3,745
1,169
136
So instancing doesn't reduce CPU overhead?

And according to Repi, who works for DICE, the reason Battlefield 4 uses DirectX 11.1 is the reduction in CPU overhead. Something to do with constant buffers...

Instancing has been around since DX10. It is part of raising the limits of DX, in that it allows a single mesh to be entered into video memory and used around the scene multiple times. Before, on DX9, if you needed say 20 oil drums, each one would be loaded into the scene as drumMesh 01-20. On DX11 you load drumMesh00 and the scene assembler loads a placeholder reference for drums 01-20; then, when the renderer needs the mesh to calculate a poly, it points back to drumMesh00.

Because of draw call limits in DX, there is a finite number of mesh models that can be loaded for any frame/scene. If you are using up meshes to make the billions of blades of grass that populate a scene, you are in effect taking away polys from the hero or view models. These limits will always be there as part of the poly budget, and each generation of DX will usually try to raise them.
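
To make the oil-drum example concrete, here's a toy C++ sketch (illustrative only; the function names are made up and this is not real D3D11 code) of how instancing collapses 20 submissions of the same mesh into a single draw call:

```cpp
#include <cstdio>
#include <vector>

struct Transform { float x, y, z; };

// Pre-instancing: one draw call per drum, each paying the per-call CPU cost.
static int draw_individually(const std::vector<Transform>& drums) {
    int drawCalls = 0;
    for (std::size_t i = 0; i < drums.size(); ++i)
        ++drawCalls;  // submit drumMesh00 again with drums[i]'s transform
    return drawCalls;
}

// Instanced: upload every transform once as per-instance data, then issue
// a single call meaning "render drumMesh00, N times".
static int draw_instanced(const std::vector<Transform>& drums) {
    // (in a real renderer, this is where the per-instance buffer is bound)
    return drums.empty() ? 0 : 1;
}

int main() {
    std::vector<Transform> drums(20, Transform{0.0f, 0.0f, 0.0f});
    std::printf("individual draws: %d calls\n", draw_individually(drums));
    std::printf("instanced draw:   %d call\n", draw_instanced(drums));
}
```

Same 20 drums on screen either way; the instanced path just spends one draw call's worth of CPU overhead instead of twenty.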

Those limits, however, are not the overhead of the DX abstraction layer. Say your CPU is in San Francisco and your GPU is in Los Angeles. The DX route is an airplane flight carrying the cargo from SF to LA, but with layovers in Chicago, New York, and Miami. The direct-to-metal or Mantle path is a non-stop flight from SF to LA.
 
Last edited:

HurleyBird

Platinum Member
Apr 22, 2003
2,756
1,440
136
On a different note: listen to this Astound Sound demo with headphones on, guys!

http://www.youtube.com/watch?v=nKnhcsRTNME

Actually, I just listened to this a bit further, and the part where he puts the newspaper over the head makes it clear that he's just using binaural microphones to do the recording. A very high-end dummy head like a Neumann KU 100 can actually produce realistic height cues, or maybe he's using a real human being with earbud mics. Now, perhaps some of the sounds are generated using an algorithm, or he's done some processing on the recording after the fact, but technologically it's not that impressive compared to some other binaural demos out there. If Astound Sound can get close to this level of fidelity in a game it will be great, but that's a lot harder than just recording some random sounds with binaural mics.
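
For anyone wondering what those binaural cues actually are, here's a rough sketch of the two simplest ones, interaural time difference (ITD) and interaural level difference (ILD), using a textbook Woodworth-style approximation. This is my own illustration and has nothing to do with Astound Sound's actual algorithm, which would also need HRTF filtering for height and front/back cues:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double kPi = 3.14159265358979323846;
    const double fs = 48000.0;        // sample rate, Hz
    const double headRadius = 0.09;   // rough ear-to-center distance, meters
    const double c = 343.0;           // speed of sound, m/s
    const double azimuth = kPi / 4;   // source 45 degrees to the right

    // Woodworth-style ITD approximation: the far (left) ear hears later.
    const double itd = (headRadius / c) * (azimuth + std::sin(azimuth));
    const int delaySamples = static_cast<int>(itd * fs + 0.5);
    const double farGain = 0.6;       // crude ILD: the head shadows the far ear

    // Render 10 ms of a 1 kHz tone with both cues applied.
    std::vector<float> left(480), right(480);
    for (int n = 0; n < 480; ++n) {
        right[n] = static_cast<float>(std::sin(2.0 * kPi * 1000.0 * n / fs));
        const int d = n - delaySamples;  // delayed index for the far ear
        left[n] = (d >= 0)
            ? static_cast<float>(farGain * std::sin(2.0 * kPi * 1000.0 * d / fs))
            : 0.0f;
    }
    std::printf("ITD = %.0f us (%d samples at 48 kHz)\n", itd * 1e6, delaySamples);
}
```

A dummy-head recording captures these differences (plus all the spectral filtering of the ears) physically; doing it convincingly in real time for arbitrary game sounds is the hard part.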
 

Atreidin

Senior member
Mar 31, 2011
464
27
86
All of the FUD about Mantle is overblown. Games will be written for DirectX, and if developers have the resources, they'll throw in support for Mantle. I don't see what the big deal is or why Nvidia fanboys are getting their panties in a wad (other than the fact that for them "all AMD stuff = bad"). The games will still run on Nvidia cards; it won't be like the old days where you selected IRQs and DMA channels and hoped your game was coded to support your graphics card and sound card.

I also wonder if/how Mantle could work with OpenGL.
 