Nvidia reveals Specifications of GT300


Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Psynaut
Crysis looked better on low settings than most games do on high, but people couldn't just be happy with a game that looked great and played fine on medium settings, with bonus settings for future technology. It would be exactly the same as if Call of Duty 4's highest setting were made into the medium setting and a super-enhanced setting were added for the highest range. The game wouldn't change in any way at all, but human perception is so flawed that everyone would bitch that CoD made a game that couldn't be played on current computers, which wouldn't be true at all. In essence, people would get more, but perceive that they received less, and bitch and whine endlessly about it.

+1 :thumbsup:
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Crysis is just popular to hate, like Microsoft and anything else. Uhhhhh, I can't run Crysis, so I am gonna say it sucks so that it's okay that I can't run Crysis because I don't like it anyway. It's a little ironic. Around the time of the Radeon 9700, people were complaining about how there were no games even remotely able to take advantage of the card. Now they made a game that was ahead of its time and people still complain. Crysis has a really fun single player, definitely more fun than most PC games released around its time and in the last year too. I like it a lot.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Keysplayr
Originally posted by: SunnyD
Originally posted by: OCguy
Wow...that could be an amazing chip. :Q

Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.

You don't know the size, you don't know the heat dissipation, you don't know the power it will draw, you don't know the price. Thanks for crapping by.

But you can venture a guess at the node GT300 will be on: 40nm, or 32nm at the very VERY best. Meaning unless power leakage is off the charts on GT200, you can expect something in the area of a 30% reduction in power consumption per transistor on the new GT300 chip provided it's using 40nm (almost an absolute certainty, IMO).

Actually, switching to higher clocked (1000 MHz+) GDDR5 may keep the power consumption and complexity of GT300 reasonable, since that would let them get away with a 256-bit memory bus with no performance penalty. A 384-bit to 512-bit memory bus makes for hot, power hungry cards with many PCB layers.
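
For a rough sanity check on the "no performance penalty" claim, the bandwidth math works out roughly like this (the data rates below are illustrative assumptions, not GT300 specs):

Code:
# Rough peak-bandwidth comparison. The data rates are assumptions for
# illustration only, not actual GT300/RV870 memory specs.

def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    """Peak bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_width_bits / 8 * data_rate_gt_s

# GDDR3 around a 1.1 GHz clock => ~2.2 GT/s effective (double data rate)
print(f"512-bit GDDR3 @ 2.2 GT/s: {bandwidth_gb_s(512, 2.2):.0f} GB/s")
# GDDR5 around a 1.0 GHz command clock => ~4.0 GT/s effective
print(f"256-bit GDDR5 @ 4.0 GT/s: {bandwidth_gb_s(256, 4.0):.0f} GB/s")

With numbers in that range, a 256-bit GDDR5 bus lands in the same bandwidth ballpark as a much wider GDDR3 bus, which is the whole argument for the simpler PCB.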


Originally posted by: Idontcare

55nm -> 40nm transition involved in there too, which makes most assertions regarding power consumption and die-size a pointless debate until we have data.

Educated guesses based on past die size transitions when power leakage didn't totally ruin things (as it did on Intel's Prescott) are still possible.

-------

What's interesting about GT300 is how Nvidia keeps soldiering on with the monolithic die video cards, even in an economic sinkhole. Nvidia has almost become the new ATI in coming out with a top chip and then cutting it down for the high-volume markets.

It was not too long ago that Nvidia utterly dominated the lower/mid range with cheap-to-manufacture cards like the 6600GT, 7900GS/GT, 9600GT and, relatively speaking, the 8800GT. Now ATI is the one releasing fantastically positioned cards like the 4850 and 4770, while Nvidia is cutting down their top-end card (and cutting into their profit margins) to make attractive cards like the GTX 260 Core 216.

 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: jiffylube1024
Originally posted by: Keysplayr
Originally posted by: SunnyD
Originally posted by: OCguy
Wow...that could be an amazing chip. :Q

Amazingly HUGE and HOT and POWER HUNGRY... yeah. Oh yeah, also amazingly EXPENSIVE too.

You don't know the size, you don't know the heat dissipation, you don't know the power it will draw, you don't know the price. Thanks for crapping by.

But you can venture a guess at the node GT300 will be on: 40nm, or 32nm at the very VERY best. Meaning unless power leakage is off the charts on GT200, you can expect something in the area of a 30% reduction in power consumption per transistor on the new GT300 chip provided it's using 40nm (almost an absolute certainty, IMO).

Actually, switching to higher clocked (1000 MHz+) GDDR5 may keep the power consumption and complexity of GT300 reasonable, since that would let them get away with a 256-bit memory bus with no performance penalty. A 384-bit to 512-bit memory bus makes for hot, power hungry cards with many PCB layers.


Originally posted by: Idontcare

55nm -> 40nm transition involved in there too, which makes most assertions regarding power consumption and die-size a pointless debate until we have data.

Educated guesses based on past die size transitions when power leakage didn't totally ruin things (as it did on Intel's Prescott) are still possible.

-------

What's interesting about GT300 is how Nvidia keeps soldiering on with the monolithic die video cards, even in an economic sinkhole. Nvidia has almost become the new ATI in coming out with a top chip and then cutting it down for the high-volume markets.

It was not too long ago that Nvidia utterly dominated the lower/mid range with cheap-to-manufacture cards like the 6600GT, 7900GS/GT, 9600GT and, relatively speaking, the 8800GT. Now ATI is the one releasing fantastically positioned cards like the 4850 and 4770, while Nvidia is cutting down their top-end card (and cutting into their profit margins) to make attractive cards like the GTX 260 Core 216.

I made two posts one page back that cover most of what you said here. Not sure if you missed them or not.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Keysplayr
Originally posted by: evolucion8
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q

Considering that ATi needs roughly triple the shader count to remain competitive with the nVidia counterpart that has fewer shaders (GTX 260's 216 shaders vs. the HD 4870's 800 shaders), then even if the nVidia GT300 has 512 shaders, which I find unlikely (I believe it will have 480 shader processors), it would still be competitive against an ATi card with 1200, which keeps almost the same difference/ratio that the HD 4870 vs. GTX 2x0 series currently holds.

Yes indeed. But only IF the architecture of GT300 were the same as GT200, and that number would be 1600 on the ATI side, not 1200, if you were to double the shaders on each. Because you'd still have to double the shaders on the ATI card for it to be able to do twice as much as it does now. And besides, look at the GTS250 (128 shaders) vs. the 4850 (800 shaders). Your ratio kind of melts away there.

Keys, NV40 = 304mm², G70 = 342mm², G80 = 484mm², GT200 = 576mm². And they've gone through completely different architectures and smaller process nodes during those generations. Extrapolate that, and GT300 will end up bigger.

Could I be wrong? Sure. But my educated guess, and what little money I have, is on GT300 being >= GT200 in die size, based on history. It could be equal because it is going to be produced on 40nm, but I still doubt it.

Also, it matters very little really; I only really care about price/performance, not how big the die is. It could be 750mm² for all I care (as long as it fits on a regular PCB).
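
For what it's worth, here's what that extrapolation looks like as a quick sketch, using the die areas quoted above and a naive generation-over-generation growth trend (it ignores process shrinks and architecture changes, so treat it as illustration only):

Code:
# Naive extrapolation of NV flagship die sizes (mm^2) from the figures
# quoted above; ignores node shrinks, so treat the output as illustrative.

die_sizes = {"NV40": 304, "G70": 342, "G80": 484, "GT200": 576}

areas = list(die_sizes.values())
ratios = [b / a for a, b in zip(areas, areas[1:])]   # growth per generation
avg_growth = sum(ratios) / len(ratios)

gt300_guess = areas[-1] * avg_growth
print(f"average growth per generation: {avg_growth:.2f}x")
print(f"naive GT300 die size guess:    {gt300_guess:.0f} mm^2")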


 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
Originally posted by: Keysplayr
Originally posted by: evolucion8
Originally posted by: jaredpace
After rereading these links, GT300 could beat RV870 if it really is 6-15x faster than gt200.
:Q

Considering that ATi needs roughly triple the shader count to remain competitive with the nVidia counterpart that has fewer shaders (GTX 260's 216 shaders vs. the HD 4870's 800 shaders), then even if the nVidia GT300 has 512 shaders, which I find unlikely (I believe it will have 480 shader processors), it would still be competitive against an ATi card with 1200, which keeps almost the same difference/ratio that the HD 4870 vs. GTX 2x0 series currently holds.

Yes indeed. But only IF the architecture of GT300 were the same as GT200, and that number would be 1600 on the ATI side, not 1200, if you were to double the shaders on each. Because you'd still have to double the shaders on the ATI card for it to be able to do twice as much as it does now. And besides, look at the GTS250 (128 shaders) vs. the 4850 (800 shaders). Your ratio kind of melts away there.

Keys, NV40 = 304mm², G70 = 342mm², G80 = 484mm², GT200 = 576mm². And they've gone through completely different architectures and smaller process nodes during those generations. Extrapolate that, and GT300 will end up bigger.

Could I be wrong? Sure. But my educated guess, and what little money I have, is on GT300 being >= GT200 in die size, based on history. It could be equal because it is going to be produced on 40nm, but I still doubt it.

Also, it matters very little really; I only really care about price/performance, not how big the die is. It could be 750mm² for all I care (as long as it fits on a regular PCB).

Do you want me to sit here and say, "You're right, Marc. I don't know why I didn't see this"? I can't. All I can say is, we don't know, and that is as true a statement as we're going to get today.

BTW, you left out the G70 to G71 transition: a process shrink AND many unneeded transistors stripped out on top of that, even on the same arch. You see, things like this can happen, or not. GT200 could be looked at with scrutiny, and it could be time to economize transistors just like NV did with G70. Who knows.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Well, when thinking about it, there's the fact that Nvidia can't just continue to increase die size. And AMD can produce chips half the size that come awfully close to, or equal, Nvidia's GPUs in terms of performance. If AMD can do it, I'm pretty sure Nvidia can do it too. But your statement at first was that you can't extrapolate; now it's simply "I don't know". I can tell you I don't know either, but I can extrapolate. This MIMD stuff is also pretty exciting, and not at all that far fetched.

And this image, from AT itself, really explains it all: http://images.anandtech.com/re...video/ati/4800/ilp.png And with MIMD units Nvidia would exceed that kind of efficiency even further. And maybe it won't even need 512 sp's.
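
The efficiency point in that diagram mostly comes down to how many of the five slots in each RV770 unit can actually be filled every clock; a toy sketch of the idea (the ILP figures are made-up assumptions, not measurements):

Code:
# Toy illustration of VLIW-5 slot utilization vs. a scalar/MIMD-style unit.
# RV770's units can issue up to 5 ops per clock, but only if the compiler
# finds 5 *independent* instructions; a scalar unit issues 1 op per clock
# and stays essentially fully busy. The ILP values are made-up assumptions.

VLIW_WIDTH = 5

def vliw_utilization(avg_ilp):
    """Fraction of VLIW slots filled, given average independent ops per clock."""
    return min(avg_ilp, VLIW_WIDTH) / VLIW_WIDTH

for ilp in (1.5, 2.5, 3.5, 5.0):
    print(f"avg ILP {ilp}: VLIW-5 utilization {vliw_utilization(ilp):.0%}, scalar 100%")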
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Keysplayr
BTW, you left out the G70 to G71 transition: a process shrink AND many unneeded transistors stripped out on top of that, even on the same arch. You see, things like this can happen, or not. GT200 could be looked at with scrutiny, and it could be time to economize transistors just like NV did with G70. Who knows.

It would surprise me if some elements of this were not true. For every architecture generation that passes, NV's engineers get more experience, their software tools get more sophisticated, their relationship with the foundry's engineers gets closer, etc.

It would be "odd" if NV's engineers did not economize their transistor usage to some appreciable/tangible degree when implementing support for their ISA in the new architecture.

A 300mm wafer from TSMC for a reasonable-volume customer like NV will run them around $3k USD. With TSMC's fab utilization still below 50%, I would not be surprised if big customers like TI, ATI, and NV (as well as Qualcomm) are getting sweetheart per-wafer pricing for lot-start commits for the next several quarters.

A note about yields: yields are a funny thing when fabs are operating at appreciably reduced utilization. A wafer with 50% yield running through a fab at 20% utilization costs more than a wafer with 50% yield running through a fab at 80% utilization. Yields become problematic and talking points when a fab nears full utilization, because it means that in order to ship more good dice you have to spend capex to expand capacity. When your fab is running at 40% capacity, though, it is a relatively small price adder to start another lot (increasing utilization slightly) to increase net units out in a couple of months.

Basically I am saying that if TSMC were reporting >92% fab utilization then you can bet the large die NV is running would be causing them some pains from the lowered yield aspect from D0, but with TSMC reporting 40% utilization you can count on NV getting all the wafers they need/want and nice and cheap too.
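
To put toy numbers on the utilization point, here's a minimal sketch of how fixed fab costs and die yield interact (all figures are made-up assumptions, not TSMC numbers):

Code:
# Toy model: fixed fab costs get spread over however many wafers are
# actually started, so low utilization raises the effective cost of every
# wafer; die yield then divides that cost over fewer good dice.
# All figures are made-up assumptions, not TSMC numbers.

FIXED_COST_PER_QTR = 1_000_000_000   # assumed fab fixed cost per quarter
CAPACITY_WAFERS    = 500_000         # assumed wafer starts/quarter at 100%
VARIABLE_PER_WAFER = 1_000           # assumed materials/processing per wafer

def cost_per_good_die(utilization, dice_per_wafer, die_yield):
    wafers_started = CAPACITY_WAFERS * utilization
    cost_per_wafer = FIXED_COST_PER_QTR / wafers_started + VARIABLE_PER_WAFER
    return cost_per_wafer / (dice_per_wafer * die_yield)

# Same large die (~100 candidates/wafer), same 50% yield, different loading:
for util in (0.2, 0.4, 0.8):
    print(f"utilization {util:.0%}: ~${cost_per_good_die(util, 100, 0.5):,.0f} per good die")

Same yield, very different effective cost per good die, which is why a big die hurts a lot less when the fab is half empty.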
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
Well, when thinking about it, there's the fact that Nvidia can't just continue to increase die size. And AMD can produce chips half the size that come awfully close to, or equal, Nvidia's GPUs in terms of performance. If AMD can do it, I'm pretty sure Nvidia can do it too. But your statement at first was that you can't extrapolate; now it's simply "I don't know". I can tell you I don't know either, but I can extrapolate. This MIMD stuff is also pretty exciting, and not at all that far fetched.

And this image, from AT itself, really explains it all: http://images.anandtech.com/re...video/ati/4800/ilp.png And with MIMD units Nvidia would exceed that kind of efficiency even further. And maybe it won't even need 512 sp's.

Extrapolate away then. :beer:

 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: MarcVenice
Well, when thinking about it, there's the fact that Nvidia can't just continue to increase die size. And AMD can produce chips half the size that come awfully close to, or equal, Nvidia's GPUs in terms of performance. If AMD can do it, I'm pretty sure Nvidia can do it too. But your statement at first was that you can't extrapolate; now it's simply "I don't know". I can tell you I don't know either, but I can extrapolate. This MIMD stuff is also pretty exciting, and not at all that far fetched.

And this image, from AT itself, really explains it all: http://images.anandtech.com/re...video/ati/4800/ilp.png And with MIMD units Nvidia would exceed that kind of efficiency even further. And maybe it won't even need 512 sp's.

AMD had a process advantage leading to smaller chips... the question is how many TRANSISTORS they used compared to nvidia for similar performance.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: taltamir
Originally posted by: MarcVenice
Well, when thinking about it, there's the fact that Nvidia can't just continue to increase die size. And AMD can produce chips half the size that come awfully close to, or equal, Nvidia's GPUs in terms of performance. If AMD can do it, I'm pretty sure Nvidia can do it too. But your statement at first was that you can't extrapolate; now it's simply "I don't know". I can tell you I don't know either, but I can extrapolate. This MIMD stuff is also pretty exciting, and not at all that far fetched.

And this image, from AT itself, really explains it all: http://images.anandtech.com/re...video/ati/4800/ilp.png And with MIMD units Nvidia would exceed that kind of efficiency even further. And maybe it won't even need 512 sp's.

AMD had a process advantage leading to smaller chips... the question is how many TRANSISTORS they used compared to nvidia for similar performance.

And as others have pointed out, the AMD solution required a more expensive memory subsystem to ensure actual IPC stayed higher than it would have otherwise been had they used the less expensive memory subsystem as NV did.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Oh yes... the more expensive GDDR5 chips allowed them to cut the bus width in half, reducing the number of transistors needed on the GPU.

So saying "AMD managed to get the same performance with half the chip size" is misleading and incorrect.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Idontcare
Originally posted by: taltamir
Originally posted by: MarcVenice
Well, when thinking about it, there's the fact that Nvidia can't just continue to increase die size. And AMD can produce chips half the size that come awfully close to, or equal, Nvidia's GPUs in terms of performance. If AMD can do it, I'm pretty sure Nvidia can do it too. But your statement at first was that you can't extrapolate; now it's simply "I don't know". I can tell you I don't know either, but I can extrapolate. This MIMD stuff is also pretty exciting, and not at all that far fetched.

And this image, from AT itself, really explains it all: http://images.anandtech.com/re...video/ati/4800/ilp.png And with MIMD units Nvidia would exceed that kind of efficiency even further. And maybe it won't even need 512 sp's.

AMD had a process advantage leading to smaller chips... the question is how many TRANSISTORS they used compared to nvidia for similar performance.

And as others have pointed out, the AMD solution required a more expensive memory subsystem to ensure actual IPC stayed higher than it would have otherwise been had they used the less expensive memory subsystem as NV did.

I stand corrected. I'd need a die shot of the rv770 to see how much of the die size is taken up by the memory bus. A wild guess, with 756M vs 1400M, I'd say AMD will still come out ahead in terms of performance per transistor (which indeed would be a better way to measure).
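
One quick way to frame that, using the transistor counts as quoted above and an assumed relative gaming performance (both are rough placeholders, not benchmark data):

Code:
# Rough performance-per-transistor comparison. Transistor counts are the
# figures quoted in this thread; the relative performance number is an
# assumption - substitute real benchmark results to make this meaningful.

chips = {
    "RV770 (HD 4870)": {"transistors_m": 756,  "relative_perf": 0.85},  # assumed perf
    "GT200 (GTX 280)": {"transistors_m": 1400, "relative_perf": 1.00},  # baseline
}

for name, c in chips.items():
    per_billion = c["relative_perf"] / (c["transistors_m"] / 1000.0)
    print(f"{name}: {per_billion:.2f} relative perf per billion transistors")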
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: WaitingForNehalem
Originally posted by: munky
Originally posted by: WaitingForNehalem
I don't understand the hatred towards Crysis. It was a fantastic game with amazing visuals and is the best FPS I've ever played. Who cares if it isn't all maxed out? It still looks better than any other game even though it isn't maxed. Everyone just keep complaining though; we'll have Call of Duty-style games full of invisible walls and scripted events just to make sure you can max out and run at 100fps+.

Visuals don't make a game great. The gameplay was boring, with absolutely no cinematic feel like COD4, or non-linear progression like Stalker or Oblivion.

Are you kidding me? The whole game had a cinematic feel. Linear progression? This is an FPS, not an RPG, as mentioned. BTW, STALKER is a horrible game that is boring and has some of the worst hit detection I've ever seen.

I had the exact opposite impression. The Crysis story line was shallow and predictable, the progression was on rails, and there was no immersion factor at all. Stalker just pulled me right in, and it felt like I was inside the game world. COD4 also felt like I was playing a movie, and I was in it. Crysis was just plain boring.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Just how much more expensive is GDDR5 compared to GDDR3? One of the advantages of using GDDR5 was reduced complexity in PCB design. A 512-bit memory bus with 16 memory chips rated at 0.83ns (the fastest for GDDR3) doesn't sound like a cheap option either.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
Originally posted by: Idontcare
Originally posted by: taltamir
Originally posted by: MarcVenice
Well, when thinking about it, there's the fact that Nvidia can't just continue to increase die size. And AMD can produce chips half the size that come awfully close to, or equal, Nvidia's GPUs in terms of performance. If AMD can do it, I'm pretty sure Nvidia can do it too. But your statement at first was that you can't extrapolate; now it's simply "I don't know". I can tell you I don't know either, but I can extrapolate. This MIMD stuff is also pretty exciting, and not at all that far fetched.

And this image, from AT itself, really explains it all: http://images.anandtech.com/re...video/ati/4800/ilp.png And with MIMD units Nvidia would exceed that kind of efficiency even further. And maybe it won't even need 512 sp's.

AMD had a process advantage leading to smaller chips... the question is how many TRANSISTORS they used compared to nvidia for similar performance.

And as others have pointed out, the AMD solution required a more expensive memory subsystem to ensure actual IPC stayed higher than it would have otherwise been had they used the less expensive memory subsystem as NV did.

I stand corrected. I'd need a die shot of the rv770 to see how much of the die size is taken up by the memory bus. A wild guess, with 756M vs 1400M, I'd say AMD will still come out ahead in terms of performance per transistor (which indeed would be a better way to measure).

You're looking at this transistor count comparison a bit narrowly: gaming only. ATI may come out ahead in terms of transistor count (as if this actually matters) when one compares gaming performance, but the 1400M transistors of GT200 seem to make it a helluva lot better at GPGPU. So saying ATI with 756M transistors can rival NV's 1400M transistors in gaming performance is a bit misleading. GT200 wasn't designed for just gaming, as is apparent when comparing the crunching capabilities of the two.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
I heard there will be a new app for F@H for ATI cards, and that it will boost performance a lot, so much, that ATI cards can Fold as fast as Nvidia cards?
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: MarcVenice
I heard there will be a new app for F@H for ATI cards, and that it will boost performance a lot, so much, that ATI cards can Fold as fast as Nvidia cards?

Yes, we've been hearing this since the 4xxx series launch. There will be this. There will be that. If they can do it, GREAT!!! Let's have it!! Still waiting on improved AVIVO since the last review, which was in 2008. I'm not saying ATI "can't" do it, it's just that they're "not" doing it. At this stage, they might be feeling it's hopeless to compete against GT200 hardware in anything but gaming, for all we know. Why throw scant money at trying to best a competitor you cannot best in those areas? If I were ATI, I'd put most resources into making their GPUs a bit more powerful in the GPGPU dept. Not just a powerful GPU design, but one that is a bit easier to code for and get something out of. Because right now, it looks like it's a real bitch, and I could see why devs aren't bothering.
 

WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
Originally posted by: Keysplayr
Originally posted by: MarcVenice
I heard there will be a new app for F@H for ATI cards, and that it will boost performance a lot, so much, that ATI cards can Fold as fast as Nvidia cards?

Yes, we've been hearing this since the 4xxx series launch. There will be this. There will be that. If they can do it, GREAT!!! Let's have it!! Still waiting on improved AVIVO since the last review, which was in 2008. I'm not saying ATI "can't" do it, it's just that they're "not" doing it. At this stage, they might be feeling it's hopeless to compete against GT200 hardware in anything but gaming, for all we know. Why throw scant money at trying to best a competitor you cannot best in those areas? If I were ATI, I'd put most resources into making their GPUs a bit more powerful in the GPGPU dept. Not just a powerful GPU design, but one that is a bit easier to code for and get something out of. Because right now, it looks like it's a real bitch, and I could see why devs aren't bothering.

Who cares about GPGPU performance, leave that to workstation cards. I want the card with the best gaming performance!
 

lopri

Elite Member
Jul 27, 2002
13,221
612
126
Some do care about GPGPU, obviously. Look at that 23x GTX 295 monster. I tried Badaboom and it is a cute little app that gets the job done. The performance wasn't bad at all, even with an 8500 GT. For people who are less tech-savvy, one-click transcoding for their iPods via GPU is pretty cool.

I agree with AnandTech, however, on its current shortcomings. I do not know why NV can't just include some basic app in its ForceWare suite. There must be a financial/strategic issue with partners, but still, the burden is on NV to show its potential. I'd think NV could include some basic apps in the drivers and let the partners do customization or GUI tweaks, etc., to add value. Look how popular GPUs became among the F@H crowd, and count how many people use Badaboom. The difference? F@H client = free, Badaboom = $$ (I don't know how much, lol)

It looks like NV is going for an all-out swing for its next-gen products with a revolutionary approach. I do like that, and I hope they bring it on without delay.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
I'm hoping for something like CUDA support for x264 encoding. That would make me move over from ATi. ATi's "Stream" thing seemed like a bluff and nothing came of it.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: Keysplayr
Originally posted by: MarcVenice
I heard there will be a new app for F@H for ATI cards, and that it will boost performance a lot, so much, that ATI cards can Fold as fast as Nvidia cards?

Yes, we've been hearing this since the 4xxx series launch. There will be this. There will be that. If they can do it, GREAT!!! Let's have it!! Still waiting on improved AVIVO since the last review, which was in 2008. I'm not saying ATI "can't" do it, it's just that they're "not" doing it. At this stage, they might be feeling it's hopeless to compete against GT200 hardware in anything but gaming, for all we know. Why throw scant money at trying to best a competitor you cannot best in those areas? If I were ATI, I'd put most resources into making their GPUs a bit more powerful in the GPGPU dept. Not just a powerful GPU design, but one that is a bit easier to code for and get something out of. Because right now, it looks like it's a real bitch, and I could see why devs aren't bothering.

It's more accurate to blame the FAH people than ATI. They can't even make a non-beta, properly working multicore client that has a GUI installer yet. They've also been working on an Xbox 360 client for years now with nothing to show for it still. They definitely don't get anything done in a timely manner.

There shouldn't be any doubt in anyone's mind that a client that supports the 4000 series hardware properly will score significantly better than the current one. It's a joke that the vastly inferior 8800 cards get more work done than the 4800 cards; it's obviously a software issue.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The way I look at CUDA, it is a waste of space, but it is a FREE waste of space (to me)... Nvidia is trying to push it, so they aren't actually charging for it; they only charge me for the gaming performance. So what do I care? But yes, that is another factor of die size: what "useless features" do they cram on there? AMD, for example, has more extensive video decoding, which is a total waste of space as well; decreasing CPU usage from 40% to 30% for that last bit of video decoding using a non-intensive codec like VC-1 is not really, well, important. Heck, nowadays x264 decodes fine on most CPUs, since our CPU tech has advanced since it debuted.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
I think ATI wins in GPGPU in terms of raw performance. GPGPU is all about the FLOPs, and ATI has more.
Of course, without a decent programming API, that leaves nvidia as the de facto victor anyway.
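
For reference, the raw-FLOPs argument is just shaders x shader clock x ops per clock; a quick sketch with the commonly cited figures for this generation (treat the clocks and per-clock op counts as assumptions rather than authoritative specs):

Code:
# Peak single-precision throughput = shaders * shader clock * FLOPs per
# shader per clock. The figures below are the commonly cited ones for
# these cards; treat them as assumptions rather than authoritative specs.

def peak_gflops(shaders, clock_mhz, flops_per_clock):
    return shaders * clock_mhz * flops_per_clock / 1000.0

# HD 4870: 800 SPs, 750 MHz core clock, 2 FLOPs/clock (MAD)
# GTX 280: 240 SPs, 1296 MHz shader clock, 3 FLOPs/clock (MAD + MUL)
print(f"HD 4870: {peak_gflops(800, 750, 2):.0f} GFLOPS")
print(f"GTX 280: {peak_gflops(240, 1296, 3):.0f} GFLOPS")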
 