Do you guys REALLY think there's going to be a 7800 Ultra with 32 pipes?


Falloutboy

Diamond Member
Jan 2, 2003
5,916
0
71
Nothing is stopping Nvidia from doing a die shrink in Q1 next year and upping the core speed another 25-30%; that would probably be enough to stay competitive.
 

rise

Diamond Member
Dec 13, 2004
9,116
46
91
What's up with the pricing anyway, every vendor at $599? Even BFG, who charge a premium for their lifetime warranty and slight OC? Is this an NV mandate?

Edit: I realize these are all NV reference boards (that I've seen), but you'd think BFG would charge more just on the warranty and OC.
 

imported_Ged

Member
Mar 24, 2005
135
0
0
Originally posted by: JBT
512bit memory controller? Not a chance in hell for under 2 grand for that sucker.
Originally posted by: Gamingphreek
First of all, there is NO POSSIBLE WAY that they will move to a 512-bit memory bus width. It simply isn't going to happen. Far too advanced, and far too expensive to do so at this point. Next gen, however, wouldn't surprise me.


I submit that if the "Ultra" doesn't have a 512-bit memory bus, it is not because it's impossible, it's just not necessary or cost effective yet. It looks like we can agree on that much.

A 512-bit memory bus would also let NVIDIA use much slower memory chips and achieve higher throughput at the same time, or a 384-bit bus for that matter. The name of the game in computer graphics is parallelism.


Originally posted by: Gamingphreek
Umm, unfortunately that is not the way it works ...

How was my approximation off? If that's not the way it works, feel free to explain it.

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Ged
My Official Ideas/Thoughts/Theories:

NVIDIA is going to release a 90nm G70-based GPU to compete with R520. The reason we don't have an Ultra right now is that NVIDIA is finishing up its work to port the design to 90 nm and making sure that it will be on shelves when ATI paper launches the R520.

Assuming NVIDIA moves to 90 nm for the Ultra and NVIDIA keeps the die area the same, we're going to see a large increase in Vertex and Pixel Shaders.

130 nm to 110 nm was an increase from ~223 million transistors to ~304 million transistors.

110 nm to 90 nm could end up giving them another 100 million transistors to play with in the same area (~400 million transistors).

304 - 223 = 81 million transistors = 2 more Vertex units + 8 more Pixel Pipes (2 more Quads) + Optimizations across the board

100 million more transistors could easily equate to 2 more Vertex units, 8 more Pixel Pipes and another 256-bit memory controller plus extra transistors to make all necessary units aware of the extra memory.
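A quick way to sanity-check that estimate (a rough sketch, assuming the transistor budget scales with the inverse square of the feature size at constant die area; real shrinks fall short of this ideal, so treat the results as optimistic upper bounds):

```python
# Rough die-shrink check: at constant die area, the transistor budget ideally
# scales with (old_node / new_node)^2. Real processes fall short of this
# (I/O pads, analog blocks and routing don't shrink as well).

def ideal_scaled_transistors(transistors, old_node_nm, new_node_nm):
    """Ideal transistor count after a shrink, assuming die area stays constant."""
    return transistors * (old_node_nm / new_node_nm) ** 2

print(ideal_scaled_transistors(223e6, 130, 110) / 1e6)  # ~311M ideal vs. ~304M actual
print(ideal_scaled_transistors(304e6, 110, 90) / 1e6)   # ~454M ideal, so ~400M is a conservative guess
```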


NVIDIA GeForce 7800 Ultra
+450 MHz clock
+90 nm
+32 Pixel Pipes
+10 Vertex Pipes (maybe 12)
+512-bit GDDR3 Memory Controller
+512MB 700 MHz (1.4 GHz effective) GDDR3 Memory (16x512Mbit chips, i.e. 2x(8x512Mbit))
+89.6 GB/s GPU-to-Memory Bandwidth

Well, I can dream at least.
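For reference, the 89.6 GB/s figure is just the usual bus-width times data-rate product. A minimal sketch, assuming the quoted 1.4 GHz is the effective GDDR3 data rate:

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    """Theoretical peak bandwidth: bytes per transfer x transfers per second, in GB/s."""
    return (bus_width_bits / 8) * data_rate_mt_s * 1e6 / 1e9

# Speculated 7800 Ultra: 512-bit bus, 700 MHz GDDR3 (1400 MT/s effective)
print(peak_bandwidth_gb_s(512, 1400))  # 89.6 GB/s

# Shipping 7800 GTX for comparison: 256-bit bus, 600 MHz (1200 MT/s) GDDR3
print(peak_bandwidth_gb_s(256, 1200))  # 38.4 GB/s
```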

NVIDIA might just port G70 to 90 nm, up the clock and utilize faster GDDR3 to compete with R520.

And here's my official thought:
Moving to a 90nm process is not as simple as <insert stupid analogy here...>
The manufacturers would have to retool all their fabs, and the design itself brings about more complications and issues to deal with - remember how much trouble Intel had with the 90nm core. Issues like current leakage and capacitance become more serious with a smaller die, and unless they move to an SOI process, it might actually end up worse than a 110nm die. Even if they did decide to move to 90nm, I suspect it would take at least half a year to implement, and I doubt we will see a 90nm Nvidia card this year.

That being said, adding 8 more pixel pipes would be a possible solution, but the die would be huge, and the costs would go way up because of lower yields. As for vertex shaders and bandwidth, modern cards have plenty of both already. The most demanding games stress pixel shaders heavily, so adding more pixel pipes will offer a more significant boost than more vertex units or more bandwidth.

I still don't know if the R520 will have 24 or 32 pipes, but if they use a 90nm process to begin with, the potential for 32 pipes is certainly there, more so than with a 110nm die. ATI might be late to the game, and they might even have as much trouble as Intel did with the die shrink, but if it comes out with 32 pipes, it will be a serious contender to deal with, and will offer a much bigger jump in performance than the GF7800 did.
 

linkgoron

Platinum Member
Mar 9, 2005
2,336
857
136
I think ATI should release a card with 25 pixel pipelines so they can say they have more pipelines than Nvidia's cards...

 

Regs

Lifer
Aug 9, 2002
16,665
21
81
Originally posted by: linkgoron
I think ATI should release a card with 25 pixel pipelines so they can say they have more pipelines than Nvidia's cards...

I think they come in quads.

 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: Gamingphreek
I don't like the fact that it doesn't have an H.264 decoder. The workload this decoding places on a CPU is quite large, and GPU assistance is very helpful in relieving it, so I would say it's important. I was really hoping we might see something from nVidia, but it's too bad.

They have H.264 acceleration. However, since IIRC the spec isn't finalized nor released, it will come in a later driver revision. It's in the AT article.

130 nm to 110 nm was an increase from ~223 million transistors to ~304 million transistors.

110 nm to 90 nm could end up giving them another 100 million transistors to play with in the same area (~400 million transistors).

Umm, unfortunately that is not the way it works ...

NVIDIA GeForce 7800 Ultra
+450 MHz clock
+90 nm
+32 Pixel Pipes
+10 Vertex Pipes (maybe 12)
+512-bit GDDR3 Memory Controller
+512MB 700 MHz (1.4 GHz effective) GDDR3 Memory (16x512Mbit chips, i.e. 2x(8x512Mbit))
+89.6 GB/s GPU-to-Memory Bandwidth

First of all, there is NO POSSIBLE WAY that they will move to a 512-bit memory bus width. It simply isn't going to happen. Far too advanced, and far too expensive to do so at this point. Next gen, however, wouldn't surprise me.

90nm: There is a decent shot at this.

32 Pixel Pipes: While it could happen, it would not be a mere refresh. It is too hard to simply slap on another 8 pipes.

10-12 Vertex Processors: MAYBE 10, but I HIGHLY doubt 12.

700MHz GDDR3: Eh... maybe looking a bit high, but it is certainly within reach; however, yields probably won't be too good with memory at that speed.

89.6GB/s GPU Bandwidth: Not, under any circumstances, going to happen. Just put it out of your mind now. Mainly because you won't see a 512-bit memory architecture.

Set your sights a lot lower. I wouldn't be surprised to see a 90nm part (not necessarily the Ultra) sometime in the future. I also wouldn't be surprised to see 32 pipelines and a slightly higher clockspeed. After that, you are dreaming.

-Kevin

As long as we're speculating: I think after G70 and R520 we're going to see both companies go to Rambus XDR. Don't you?

Nvidia did work on the PS3, so I kind of see Cell coming from Nvidia sooner than ATI.

I'm too lazy to go look, but doesn't the PS3 use XDR memory? I really see Nvidia coming out with something really powerful within a year.

 

imported_humey

Senior member
Nov 9, 2004
863
0
0
bunnyfubbles, I forgot about the TNT. I'm 33 but didn't get back into PCs till the GeForce 2 Ultra was top dog at £680, around Jan 2001.

Don't be so sure about die size. Nvidia have 90nm, I bet, and will hold onto it; maybe for the Ultra.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: Ged
Originally posted by: JBT
512bit memory controller? Not a chance in hell for under 2 grand for that sucker.
Originally posted by: Gamingphreek
First of all, there is NO POSSIBLE WAY that they will move to a 512-bit memory bus width. It simply isn't going to happen. Far too advanced, and far too expensive to do so at this point. Next gen, however, wouldn't surprise me.


I submit that if the "Ultra" doesn't have a 512-bit memory bus, it is not because it's impossible, it's just not necessary or cost effective yet. It looks like we can agree on that much.

A 512-bit memory bus would also let NVIDIA use much slower memory chips and achieve higher throughput at the same time, or a 384-bit bus for that matter. The name of the game in computer graphics is parallelism.


Originally posted by: Gamingphreek
Umm, unfortunately that is not the way it works ...

How was my approximation off? If that's not the way it works, feel free to explain it.

AFAIK there is no such thing as a 384-bit memory bus. IIRC bus widths increase by powers of 2 (i.e. 4, 8, 16, 32, 64, 128, 256, etc...).

It is because it is impossible. Do you know how many transistors that would take!? Additionally, I don't think you really comprehend the scope of this. You don't simply slap on another 256-bit bus. The process is MUCH MUCH more involved; I can't go any deeper than that simply because I don't know everything else that is required.

Additionally, with a 512-bit architecture, aside from the near-impossible (impossibly fast, even) switch, think about how much this would cost! We are talking >$2000 a video card if you are talking about things that complex.

Furthermore, think about the yields a chip like that would get. It would be HORRIBLE to produce.

As for your approximation, you cannot assume that by simply shrinking the die you can effectively slap on some more transistors. First of all, you have to deal with power leakage. Second of all, yields. Third, cost. And so on. Adding another 100 million transistors is not an easy task.

It seems to me that you simply don't understand the scope of what is involved in creating a GPU.

-Kevin
 

imported_Ged

Member
Mar 24, 2005
135
0
0
AFAIK there is no such thing as a 384-bit memory bus. IIRC bus widths increase by powers of 2 (i.e. 4, 8, 16, 32, 64, 128, 256, etc...).

It doesn't have to. 384 = 128 + 128 + 128.

It's just like ATI using 24-bit FPUs. 32 would be the next step up from 16-bit if you were doubling, but ATI chose 24-bit precision.

It is because it is impossible. Do you know how many transistors that would take!? Additionally, I don't think you really comprehend the scope of this. You don't simply slap on another 256-bit bus. The process is MUCH MUCH more involved; I can't go any deeper than that simply because I don't know everything else that is required.

Off hand, no, I don't know how many transistors it would take to implement two 256-bit memory controllers. My best guess would be two times the current 256-bit memory controller, plus some overhead.

You can simply slap on another 256-bit bus. AMD did the same thing with the Northbridge on the Opterons. Yes, you would need to augment the other support circuitry and use different memory allocation algorithms, but this is hardly a feat for NVIDIA or any such company. They (ATI, NVIDIA, etc) all went through the same process moving from 128-bit to 256-bit.

Additionally, with a 512-bit architecture, aside from the near-impossible (impossibly fast, even) switch, think about how much this would cost! We are talking >$2000 a video card if you are talking about things that complex.

I didn't say 512-bit architecture, I said 512-bit memory controller. The only added cost for a video card that used a 512-bit memory bus would be the extra memory chips in order to have enough memory in parallel. The fact that you could use slower memory chips that cost less and are more abundant might even make the card cost less than having to use cutting-edge memory chips that are harder to come by.

I fail to see where you get this $2,000 cost estimate from. Assuming they could fit two 256-bit memory controllers on their GPU as they moved to 90 nm without needing to increase the total surface area of the GPU, a 512-bit memory controller or 2x256-bit memory controllers wouldn't increase the cost of each GPU at all (unless you are comparing a single 256-bit GPU to the dual 256-bit GPU, obviously a single controller would use less die space).
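The wider-but-slower argument is the same arithmetic viewed from the other side; a small sketch (illustrative only, ignoring latency and PCB routing cost) showing that doubling the bus width while halving the per-pin data rate leaves the theoretical peak unchanged:

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    # bytes per transfer x transfers per second, in GB/s
    return (bus_width_bits / 8) * data_rate_mt_s * 1e6 / 1e9

# Narrow and fast vs. wide and slow: identical theoretical peak bandwidth.
print(peak_bandwidth_gb_s(256, 1400))  # 44.8 GB/s with expensive 1400 MT/s chips
print(peak_bandwidth_gb_s(512, 700))   # 44.8 GB/s with cheaper 700 MT/s chips
```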

As for your approximation, you cannot assume that by simply shrinking the die you can effectively slap on some more transistors. First of all, you have to deal with power leakage. Second of all, yields. Third, cost. And so on. Adding another 100 million transistors is not an easy task.

The shrink from 130 nm to 110 nm is just like the shrink from 110 nm to 90 nm in terms of engineering that needs to take place. NVIDIA was able to add the same % of transistors, use less power, use a single-slot cooler, and increase clock speed in the move from 130 nm to 110 nm. NVIDIA has the expertise to do the same when they move to 90 nm. AMD was also able to lower power usage and increase clock speed with their move to 90 nm.

The cost of a 300mm^2 die on 130 nm is the same as the cost of a 300mm^2 die on 110 nm is the same as the cost of a 300mm^2 die on 90 nm in terms of silicon used.

I imagine that NVIDIA has already spent a great deal of time playing with the 90 nm process and design considerations while working with SONY on the RSX and they have probably already messed around with TSMC's 90 nm process. So, the cost of generating a 90 nm design for NVIDIA is moot, because they would have to take the 90 nm plunge anyway.

It seems to me that you simply don't understand the scope of what is involved in creating a GPU.

I guess if you say so.

Actually, you're right. I have no clue what goes into creating a GPU other than the higher-level concepts. I do know that NVIDIA's engineers know exactly what they are doing, so my lack of expertise when it comes to making GPUs won't affect NVIDIA.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
It doesn't have to. 384 = 128 + 128 + 128.

It's just like ATI using 24-bit FPUs. 32 would be the next step up from 16-bit if you were doubling, but ATI chose 24-bit precision.

No, it doesn't work like that. To achieve 256-bit... do you think they just slapped 2 128-bit controllers together? No.

It's just like ATI using 24-bit FPUs. 32 would be the next step up from 16-bit if you were doubling, but ATI chose 24-bit precision.

No. The 24-bit IIRC refers to the precision of the pixels that are rendered, and the alpha blending. It doesn't work the same way a memory architecture does.

I didn't say 512-bit architecture, I said 512-bit memory controller. The only added cost for a video card that used a 512-bit memory bus would be the extra memory chips in order to have enough memory in parallel. The fact that you could use slower memory chips that cost less and are more abundant might even make the card cost less than having to use cutting-edge memory chips that are harder to come by.

I fail to see where you get this $2,000 cost estimate from. Assuming they could fit two 256-bit memory controllers on their GPU as they moved to 90 nm without needing to increase the total surface area of the GPU, a 512-bit memory controller or 2x256-bit memory controllers wouldn't increase the cost of each GPU at all (unless you are comparing a single 256-bit GPU to the dual 256-bit GPU, obviously a single controller would use less die space).

There is no such thing as dual memory controllers. Once again, to achieve 512-bit they do not simply slap 2 256-bit parts together. If it was that easy, don't you think they would have done it already? As for your second statement, which now holds no value, adding another memory controller would most certainly increase costs by a large margin. Transistors aren't cheap.

The shrink from 130 nm to 110 nm is just like the shrink from 110 nm to 90 nm in terms of engineering that needs to take place. NVIDIA was able to add the same % of transistors, use less power, use a single-slot cooler, and increase clock speed in the move from 130 nm to 110 nm. NVIDIA has the expertise to do the same when they move to 90 nm. AMD was also able to lower power usage and increase clock speed with their move to 90 nm.

The cost of a 300mm^2 die on 130 nm is the same as the cost of a 300mm^2 die on 110 nm is the same as the cost of a 300mm^2 die on 90 nm in terms of silicon used.

I imagine that NVIDIA has already spent a great deal of time playing with the 90 nm process and design considerations while working with SONY on the RSX and they have probably already messed around with TSMC's 90 nm process. So, the cost of generating a 90 nm design for NVIDIA is moot, because they would have to take the 90 nm plunge anyway.

Die shrinks are not linear AFAIK. Just because they were able to lower consumption yet still pack in another ~100 million transistors does not necessarily mean that they can do it on another transition.
Your second statement has one flaw: they haven't had plenty of time working on the RSX. It isn't due out for another year. TSMC would have to refab their plants for 90nm production. Currently, if any are actually producing ANY 90nm chips, they would have to convert all of the existing ones that are producing 110nm to 90nm. A move that doesn't happen overnight.

What AMD and Nvidia and whatnot are able to do are different things. Look at Intel, the largest microprocessor producer in the world (aside from, I guess, IBM): when they went to 90nm it had catastrophic results, with a lot of power leakage, yield problems and whatnot. AMD sat and refined the process, in addition to having a more energy-efficient CPU (not as high a clockspeed), and was able to make everything work very well. Nvidia doesn't even have their own fabs; additionally, AMD and even more so Intel are much, much larger than Nvidia. Hell, look at the problems ATI is supposed to be having. You don't think that has anything to do with 90nm, do you?

I'm not trying to say you know nothing or to flame you. I think you are assuming everything is done very easily when it is not. There is a lot more that has to be done that I don't even know about (I will in college engineering, though). Pete, Mathias99, BenSkywalker, and BFG10K could go into much greater depth than I can. I suggest you PM or post something directed at one of them; you will more than likely get a much better, or should I say more thorough, response from one of them.

-Kevin
 

Kalessian

Senior member
Aug 18, 2004
825
12
81
The way I see it, Nvidia is trying hard as hell to stay on top of this game. They know more about ATI's new cards now than we ever will. They've had a long time to plan this thing out. They must have some sort of plan in store if ATI's cards are much more powerful. Obviously, since neither company has its own fabs, whatever is available to one is available to the other.

They aren't like Intel, which doesn't have to care that much about not having the performance crown.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
It's funny to say this, but the PS2 graphics processor had a 2560-bit memory bus, 16 pixel pipelines, and 48GB/sec of bandwidth. And that was like 1999, IIRC. Of course, it only had fillrate ability (no T&L, no shaders) and only 4MB of VRAM, but you'd think that 6 years later, for $600, you'd have a card that would not leave you wishing for more power. And yet, you STILL can't get AA working with FP buffers at the same time.

If I didn't know better, I'd say this card should have been called a GF7600, but both Nvidia and ATI usually launch their flagship products first, so I doubt we'll have a 7800U with 32 pipes. Unless, of course, the R520 has 32 pipes, and Nvidia scrambles to release a GF7900 with 32 pipes as well and claims this is "what the G70 should have been."
 

Squally Leonharty

Senior member
Oct 5, 2004
237
0
0
http://www.cdrinfo.com/Sections/News/Details.aspx?NewsId=14238

I don't know what will happen in the future, but at least high-frequency GDDR3 memory will be possible, so 1.4GHz will be easy. At that speed and the current 256-bit memory width, theoretically 64GB/s of bandwidth can be achieved at max. So it's not too far off from the speculated 89.6GB/s.

512-bit, however, is highly unlikely, but you never know what they'll be up to. 512MB cards are going to show up pretty soon with this new memory at a cheaper price, too. Not that you'll need that much memory, but I suppose it will be very beneficial for cards used for 3D rendering with Maya, 3dsmax, etc.
 

Patrick Wolf

Platinum Member
Jan 5, 2005
2,443
0
0
I predict ATI's new card(s) will kill Nvidia's new cards, but won't be on shelves until January 2006.
 

obeseotron

Golden Member
Oct 9, 1999
1,910
0
0
I'm pretty sure current Nvidia cards actually use four 64-bit memory controllers, not two 128-bit ones. Even 128-bit memory controllers are prohibitive; that's why we have dual-channel 64-bit for main memory and effectively quad-channel for graphics. Whatever is required to coordinate all of these controllers probably grows at a much greater than linear pace, meaning that doubling the number of memory controllers would probably far more than double the size of the overall memory controller. It is certainly not impossible, and is probably likely in the generation after G70/R520, but when the companies decide what they are going to implement they are working with a pretty hard transistor limit, so they have to prioritize, and obviously they thought things like pixel pipelines and shader capabilities outweighed the benefits of a more complex memory controller.
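A toy way to picture that coordination cost (purely illustrative, not how any real GPU memory crossbar is sized; the client counts are hypothetical): in a full crossbar, every request client can reach every channel, so the path count scales with clients times channels, and a wider bus usually arrives together with more clients.

```python
# Toy model: a full crossbar between request clients (pixel quads, ROPs, vertex
# engines, texture units...) and memory channels needs roughly clients * channels
# paths, each with its own arbitration. Widening the bus usually goes hand in hand
# with adding more clients, so the switch grows faster than the bus width alone.

def crossbar_paths(clients, channels):
    return clients * channels

print(crossbar_paths(12, 4))  # e.g. a 256-bit (4 x 64-bit) part with 12 clients -> 48 paths
print(crossbar_paths(16, 8))  # a hypothetical 512-bit (8 x 64-bit) part with 16 clients -> 128 paths
```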
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
They get the same number of cores/wafer with this as with the 6800(GT? Ultra? I forget. Read Guru3d's review if you care).

I just read through the article quickly and I don't see it mentioned anywhere (although the article is 28 pages long and I was reading quickly). I don't see how anyone could write that they are getting the same number of cores per wafer with the die size increasing so much.
 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: obeseotron
I'm pretty sure current Nvidia cards actually use four 64-bit memory controllers, not two 128-bit ones. Even 128-bit memory controllers are prohibitive; that's why we have dual-channel 64-bit for main memory and effectively quad-channel for graphics. Whatever is required to coordinate all of these controllers probably grows at a much greater than linear pace, meaning that doubling the number of memory controllers would probably far more than double the size of the overall memory controller. It is certainly not impossible, and is probably likely in the generation after G70/R520, but when the companies decide what they are going to implement they are working with a pretty hard transistor limit, so they have to prioritize, and obviously they thought things like pixel pipelines and shader capabilities outweighed the benefits of a more complex memory controller.

I shouldn't post on this, but as I understand it: Rambus, as you all know, has XDR. What I find interesting is that they also have a memory controller for XDR that works not only with XDR but also with DDR2 and DDR3. If I was trying to bring the best tech forward, this would be my pick, because it seems to me to be the most future-proof tech. Rambus looks bad to many because of past claims; well, as it turns out, they weren't crying wolf, there was an actual conspiracy against them. So everyone (Intel, IBM, AMD, ATI and Nvidia), get over it. Give the public the best tech available: XDR memory.

 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
It is too expensive as of right now. We aren't going to flame you or anything for suggesting something such as XDR. You are adding to the thread, that is what we like!

-Kevin
 

Intelia

Banned
May 12, 2005
832
0
0
Originally posted by: Gamingphreek
It is too expensive as of right now. We aren't going to flame you or anything for suggesting something such as XDR. You are adding to the thread, that is what we like!

-Kevin

Thanks. Wouldn't the true cost be cheaper because of the built-in future-proofing? The best example I can use is AMD. AMD is going to a new socket and memory controller in '06. I understand the need to upgrade because of the new socket. But if they use an XDR memory controller in '06, then they have their memory controller on-die for both DDR3 and XDR. I see this as cheaper in the long run (that is, for people like ourselves), and really, I believe, for the whole industry. I mean, really, how expensive can this memory controller be?

 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
If there will be one, it'll be a "refresh" launch with no new features and 32 pipes + faster memory.

All for the low low price of $6999.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: BenSkywalker
They get the same number of cores/wafer with this as with the 6800(GT? Ultra? I forget. Read Guru3d's review if you care).

I just read through the article quickly and I don't see it mentioned anywhere (although the article is 28 pages long and I was reading quickly). I don't see how anyone could write that they are getting the same number of cores per wafer with the die size increasing so much.

Did the die size increase? I know they moved to 110nm instead of 130nm; of course, that was a massive transistor count increase as well... unless they are taking yields into account also.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Rambus looks bad to many because of past claims; well, as it turns out, they weren't crying wolf, there was an actual conspiracy against them.

They attempted to steal an open standard that they were granted access to by lying to the governing body. All of the members of the governing body "conspired" to protect their rights, which RAMBUS shamefully trampled on. You can talk about their technology, but don't even try to pretend their legal 'tactics' are anything but an embarrassment to even ambulance chasers.

Edit-

The die size of the NV40 should be ~275mm^2 (although I haven't seen that nV ever released the official numbers). Anything over 300mm^2 is seriously pushing it for a consumer-level part.
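For a rough feel of why die area dominates the cores-per-wafer question, here's the standard first-order dies-per-wafer approximation (a sketch; the 300 mm wafer size and the candidate die areas are assumptions for illustration, and defect yield would cut these numbers further):

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    d = wafer_diameter_mm
    return math.floor(math.pi * (d / 2) ** 2 / die_area_mm2
                      - math.pi * d / math.sqrt(2 * die_area_mm2))

# NV40-ish area vs. the ~300 mm^2 "comfort limit" vs. a larger hypothetical die
for area in (275, 300, 330):
    print(f"{area} mm^2: ~{gross_dies_per_wafer(area)} candidate dies per 300 mm wafer")
```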
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
Originally posted by: Gamingphreek
It doesn't have to. 384 = 128 + 128 + 128.

It's just like ATI using 24-bit FPUs. 32 would be the next step up from 16-bit if you were doubling, but ATI chose 24-bit precision.

No, it doesn't work like that. To achieve 256-bit... do you think they just slapped 2 128-bit controllers together? No.

-Kevin

Er, obviously they didn't
BUT XGI managed a 192-bit memory bus for one of their cards, because buses are built up 64 bits at a time.
Hence 384-bit may be possible (but at a high cost in PCB complexity), or maybe even a 320-bit memory bus.
Again, we have a 256 bit (4x 64 bit) memory interface to GDDR3 memory
(from the AT 7800 article)
The biggest challenges that exist with a 256-bit wide memory interface are routing related; routing traces to/from the GPU and to/from the individual memory chips themselves
(From the 5950 article)
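To put the 64-bit-channel point in concrete terms, here's a small sketch (assuming the interface is simply a set of independent 64-bit channels, as the quoted AT articles describe) showing how the odd-looking widths fall out of the channel count:

```python
CHANNEL_WIDTH_BITS = 64  # GDDR interfaces are routed 64 bits at a time

def bus_width_bits(channels):
    return channels * CHANNEL_WIDTH_BITS

def peak_bandwidth_gb_s(channels, data_rate_mt_s):
    return (bus_width_bits(channels) / 8) * data_rate_mt_s * 1e6 / 1e9

# 3 channels = the 192-bit XGI part, 4 = today's 256-bit cards,
# 5 and 6 = the speculated 320- and 384-bit buses.
for channels in (3, 4, 5, 6):
    print(f"{bus_width_bits(channels)}-bit: {peak_bandwidth_gb_s(channels, 1200):.1f} GB/s at 1200 MT/s")
```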
 