The Hydra Demo

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
http://www.pcper.com/article.p...=607&type=expert&pid=1. I found some interesting stuff here. I really like this part, because it confused me so much.
60 frames per second in Crysis DX9 with two NV cards: not bad.

What DirectX versions are supported or will be supported, and what about OpenGL? Right now, only DX9 is working, though DX10.1 will be ready by the end of the year. With DX10's and DX11's multi-GPU implementations improving, adding to the HYDRA Engine technology will only get easier for the team compared to the work they had to do on DX9. OpenGL is supported by the HYDRA Engine as well.


Could this technology be applied to GPGPU work as well? Yes, though that is still far into the future. One area the team did say would be easily taken advantage of by their technology is ray tracing, with its incredibly task-oriented nature.
This chip almost looks like Intel's Mitosis chip. Strange, that.


So right now it will support DX9, and DX10.1 by '09. ATI needs the DX9 path for raytracing; Intel, who knows. But DX10 and 11 follow later. If Intel is involved, I bet it takes a long time for DX10/11 support.
 

nonameo

Diamond Member
Mar 13, 2006
5,902
2
76
This sounds like the rumors about the 4870 X2 not needing to duplicate the framebuffer for each core.

Well, it would fit the bill anyway.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Ya, you make a good point. Who knows? This is the strangest chip I have heard of in a very long time.

1) Other than the Mitosis chip, there has been zero hype on this chip until 1 month ago. Mitosis was in development for a long time, and it was hyped a little.

2) 1 month after the Hydra hype, we have a real, finished chip. I would love to know who is fabbing the chip. But from hype to finished chip in 1 month, that's pretty interesting. We know this thing took some time to develop. All very interesting. Now we wait to see.

3) Did ya see who is involved in this chip? Slaps head. Did ya see who's investing in this chip? Falls down.

4) It's also interesting that it's demoed at IDF. Wheels spinning.

I like, at the end, how the author can't figure out why NV or ATI hasn't bought this company. One reason: it's not a listed stock company.
How much did Intel invest in this company? Good luck to either ATI or NV in picking up this company. Not going to happen.

If I were Intel, and I had worked on Mitosis for a long time only to find out it was too hard to do for CPU usage, but would work great with GPUs, I would start a new company and invest in it heavily. That's what I would do. But I'm not Intel.

I really find it strange that it will support DX10.1 by the end of the year, yet there's NO roadmap for DX10/11. This is interesting, since DX10.1 is harder to do than DX10. Plus, DX10.1 came some time after DX10. Who opened that can of whoopass? Who underestimated ATI?
Sure looks like someone is bending over backwards for ATI. A new company can get away with this, ya know.
 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
Very interesting. Best of all, there should be no more microstutter problems with this type of multi-GPU solution. Guess we'll have to wait and see how it will work its way into the market.
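To see what microstutter actually is, here's a toy simulation of AFR frame pacing; the 20 ms render time and 5 ms offset are invented purely for illustration, not figures from the article:

```python
# Toy illustration (my own numbers, not from the article) of why AFR
# can microstutter: two GPUs each take 20 ms per frame, but if GPU 1
# kicks off only 5 ms after GPU 0, frames arrive in uneven pairs.
render_ms, offset_ms = 20.0, 5.0
finish_times = []
for frame in range(8):
    gpu = frame % 2                                   # AFR alternates GPUs
    start = (frame // 2) * render_ms + gpu * offset_ms
    finish_times.append(start + render_ms)
intervals = [b - a for a, b in zip(finish_times, finish_times[1:])]
# Throughput averages one frame per 10 ms, but the gaps between
# displayed frames alternate 5 ms / 15 ms: that uneven cadence is
# the microstutter a non-AFR workload split would avoid.
```

In this toy model the frame counter says a smooth 100 fps, yet half the frame-to-frame gaps are three times longer than the other half.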
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Well, it's a lot more interesting than some other hyped, overplayed features being offered. We are talking about the Hydra chip from Lucid. It's so clear to my mind it's almost lucid. This chip will allow the ATI 4000 series to actually do a raytraced game when it's released. Ray traced games in '09. Anyone saying it can't happen?

But then again, I can't imagine anyone here enjoying both RT and physics on the same card. Ya just found out about this chip, but insiders have known for some time. Games have been announced that use DX10.1. Sure, where's the links, right? Not needed. It's been gone over before.

Imagine particles flying through the air showing real-world lighting effects. That's cool.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Nemesis 1
Well, it's a lot more interesting than some other hyped, overplayed features being offered. We are talking about the Hydra chip from Lucid. It's so clear to my mind it's almost lucid. This chip will allow the ATI 4000 series to actually do a raytraced game when it's released. Ray traced games in '09. Anyone saying it can't happen?

Keys: What other hyped, overplayed features? We know we are talking about the Hydra chip from Lucid. Your definition of "clear" has yet to be determined. Nvidia and ATI have both demonstrated they can do raytracing already. Ray traced games in '09? A possibility, although remote.

But then again, I can't imagine anyone here enjoying both RT and physics on the same card. Ya just found out about this chip, but insiders have known for some time. Games have been announced that use DX10.1. Sure, where's the links, right? Not needed. It's been gone over before.

Keys: I thought you said we were talking about the Hydra chip from Lucid? Where did the Physics and DX10.1 talk come from? If you can have such high hopes and imagination of how terrific this is, you can certainly imagine another scenario where users will enjoy RT and Physics on the same card.
But then again, one card is not the limit.

Imagine particles flying through the air showing real-world lighting effects. That's cool.

Keys: Ray Tracing doesn't automatically ensure things like this. It has to be coded to do so, just like it would under rasterization. You can observe the most bland RT demo if coded as such.

This technology "sounds" good, but in the end, it just adds another level of overhead. Another layer of technology between the hardware and a given graphics driver. And it will probably need developers to adopt it and code for it. The only way I see that happening is if ATI or Nvidia buys them up. Intel is a possibility as well, but we have yet to see what kind of player Intel will be in the graphics world.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Interesting, Keys. So ya see a possible problem with having an extra chip on the M/B. But SLI for X58 requires a bridge chip that we suspect isn't really needed at all. You don't see a problem with this? If you do, say so.

This is cool stuff. If it works, it's beyond cool.

As for NV or ATI buying this company out: if that's going to happen, it would be soon. Every second of delay, the company's price goes up.

No, seriously. Several companies have invested in this company, Intel being one of them. Intel doesn't need to buy; it already has a piece. Lucid is going to become a very popular company if they pull this off.
I would rather talk about the other possibilities this chip brings to the industry, rather than whether ATI or NV are going to buy this company out, or if they can keep up the software. Right now, look at what we know: this thing is aimed directly at RTR. If the GPUs scale as advertised with Hydra, this chip gives ATI the power required to do a RT game right now, today.

I thought it was great how they demoed NV GPUs. These guys have covered all the bases. But it still is alarming that for this year only DX9 and DX10.1 will be supported. This could be due to the fact that NV's arch isn't really suited to do RTR. They can do it, but not very well.


I know I keep going on and on about RTR, but it's coming; we all know it. It's a matter of when. I see it sooner than most. Given what we know, my stance is on pretty firm ground when all known facts are computed. RTR hybrid game in '09: I believe it, but few others do. I am old; being wrong would be nothing new for me. LOL
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
145
106
www.neftastic.com
Originally posted by: keysplayr2003
Originally posted by: Nemesis 1
Well, it's a lot more interesting than some other hyped, overplayed features being offered. We are talking about the Hydra chip from Lucid. It's so clear to my mind it's almost lucid. This chip will allow the ATI 4000 series to actually do a raytraced game when it's released. Ray traced games in '09. Anyone saying it can't happen?

Keys: What other hyped, overplayed features? We know we are talking about the Hydra chip from Lucid. Your definition of "clear" has yet to be determined. Nvidia and ATI have both demonstrated they can do raytracing already. Ray traced games in '09? A possibility, although remote.

But then again, I can't imagine anyone here enjoying both RT and physics on the same card. Ya just found out about this chip, but insiders have known for some time. Games have been announced that use DX10.1. Sure, where's the links, right? Not needed. It's been gone over before.

Keys: I thought you said we were talking about the Hydra chip from Lucid? Where did the Physics and DX10.1 talk come from? If you can have such high hopes and imagination of how terrific this is, you can certainly imagine another scenario where users will enjoy RT and Physics on the same card.
But then again, one card is not the limit.

Imagine particles flying through the air showing real-world lighting effects. That's cool.

Keys: Ray Tracing doesn't automatically ensure things like this. It has to be coded to do so, just like it would under rasterization. You can observe the most bland RT demo if coded as such.

This technology "sounds" good, but in the end, it just adds another level of overhead. Another layer of technology between the hardware and a given graphics driver. And it will probably need developers to adopt it and code for it. The only way I see that happening is if ATI or Nvidia buys them up. Intel is a possibility as well, but we have yet to see what kind of player Intel will be in the graphics world.

So what if it doesn't scale "quite" as equally as SLI does... the point is that it can scale across ANY graphics platform. And from what's been said so far, the expectation is that it can scale better than SLI and more linearly too, and without these god awful profiles that are required for SLI and/or CrossFire to work (need confirmation on this, but it was said to be totally transparent to the system).

My question though - what does it do to peripheral functions of a card, such as video encode/decode, or parallel tasking such as PhysX (which no doubt NVIDIA will be locking into their driver implementation if they haven't done so already)?
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Nemesis 1
Interesting, Keys. So ya see a possible problem with having an extra chip on the M/B. But SLI for X58 requires a bridge chip that we suspect isn't really needed at all. You don't see a problem with this? If you do, say so.

This is cool stuff. If it works, it's beyond cool.

As for NV or ATI buying this company out: if that's going to happen, it would be soon. Every second of delay, the company's price goes up.

No, seriously. Several companies have invested in this company, Intel being one of them. Intel doesn't need to buy; it already has a piece. Lucid is going to become a very popular company if they pull this off.
I would rather talk about the other possibilities this chip brings to the industry, rather than whether ATI or NV are going to buy this company out, or if they can keep up the software. Right now, look at what we know: this thing is aimed directly at RTR. If the GPUs scale as advertised with Hydra, this chip gives ATI the power required to do a RT game right now, today.

I thought it was great how they demoed NV GPUs. These guys have covered all the bases. But it still is alarming that for this year only DX9 and DX10.1 will be supported. This could be due to the fact that NV's arch isn't really suited to do RTR. They can do it, but not very well.


I know I keep going on and on about RTR, but it's coming; we all know it. It's a matter of when. I see it sooner than most. Given what we know, my stance is on pretty firm ground when all known facts are computed. RTR hybrid game in '09: I believe it, but few others do. I am old; being wrong would be nothing new for me. LOL

::sigh:: We are talking about the Hydra from Lucid. Where the heck did X58 come into play here? Can you please, for the love of all that's holy, stop jumping around and show some true focus in a given thread for a nice change of pace? You are putting words in my mouth ("If you do say so") when I never said any such thing, whether or not I have a yet-unsaid opinion on it. I don't know if the X58 will have overhead because of the bridge chip. It is, after all, a simple bridge chip to allow SLI, nothing more. I haven't seen benches yet for this setup compared to a system without it. There may be overhead, of course. Hydra seems to be something a bit more than a simple bridge chip. It may need coding from devs in specific games to support it. Now, what does the Hydra have in common with a bridge chip? And stop forcing the rest of us off topic. Please.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: SunnyD
Originally posted by: keysplayr2003
Originally posted by: Nemesis 1
Well, it's a lot more interesting than some other hyped, overplayed features being offered. We are talking about the Hydra chip from Lucid. It's so clear to my mind it's almost lucid. This chip will allow the ATI 4000 series to actually do a raytraced game when it's released. Ray traced games in '09. Anyone saying it can't happen?

Keys: What other hyped, overplayed features? We know we are talking about the Hydra chip from Lucid. Your definition of "clear" has yet to be determined. Nvidia and ATI have both demonstrated they can do raytracing already. Ray traced games in '09? A possibility, although remote.

But then again, I can't imagine anyone here enjoying both RT and physics on the same card. Ya just found out about this chip, but insiders have known for some time. Games have been announced that use DX10.1. Sure, where's the links, right? Not needed. It's been gone over before.

Keys: I thought you said we were talking about the Hydra chip from Lucid? Where did the Physics and DX10.1 talk come from? If you can have such high hopes and imagination of how terrific this is, you can certainly imagine another scenario where users will enjoy RT and Physics on the same card.
But then again, one card is not the limit.

Imagine particles flying through the air showing real-world lighting effects. That's cool.

Keys: Ray Tracing doesn't automatically ensure things like this. It has to be coded to do so, just like it would under rasterization. You can observe the most bland RT demo if coded as such.

This technology "sounds" good, but in the end, it just adds another level of overhead. Another layer of technology between the hardware and a given graphics driver. And it will probably need developers to adopt it and code for it. The only way I see that happening is if ATI or Nvidia buys them up. Intel is a possibility as well, but we have yet to see what kind of player Intel will be in the graphics world.

So what if it doesn't scale "quite" as equally as SLI does... the point is that it can scale across ANY graphics platform. And from what's been said so far, the expectation is that it can scale better than SLI and more linearly too, and without these god awful profiles that are required for SLI and/or CrossFire to work (need confirmation on this, but it was said to be totally transparent to the system).

My question though - what does it do to peripheral functions of a card, such as video encode/decode, or parallel tasking such as PhysX (which no doubt NVIDIA will be locking into their driver implementation if they haven't done so already)?

Honestly Sunny, how can it scale "better" than current Xfire/SLI implementations? How can it eliminate microstutter? It can only dish out the frames as fast as the video cards do. So far, the only thing I see about this tech, is the ability to run SLI or Xfire on any system with 2x PCI-e slots. That would certainly be a boon, but what is required for this to happen?
So much is unknown at this point.

As for your question, it's a good one. We'll have to wait and see what happens.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: keysplayr2003
...

Honestly Sunny, how can it scale "better" than current Xfire/SLI implementations? How can it eliminate microstutter? It can only dish out the frames as fast as the video cards do. So far, the only thing I see about this tech, is the ability to run SLI or Xfire on any system with 2x PCI-e slots. That would certainly be a boon, but what is required for this to happen?
So much is unknown at this point.

As for your question, it's a good one. We'll have to wait and see what happens.

Hydra appears to the OS as a single GPU.
The distribution engine as it is called is responsible for reading the information passed from the game or application to DirectX before it gets to the NVIDIA or AMD drivers. There the engine breaks up the various blocks of information into "tasks" - a task is a specific job that HYDRA defines that can be passed to any of the 2-4 GPUs in the system.
It appears Hydra will be breaking DirectX calls into smaller pieces and managing their multitasking.
It remains to be seen if the scaling is going to be better.
This is no AFR so there shouldn't be any micro stutter.
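The task splitting described above can be pictured with a toy greedy scheduler. Everything here (the task names, the costs, the function) is made up for illustration; the real engine works on intercepted DirectX calls, not Python objects:

```python
# A toy sketch (hypothetical names and costs, not Lucid's actual code)
# of task distribution: break a frame into tasks and greedily hand
# each one to the least-loaded GPU.

def distribute_tasks(tasks, num_gpus):
    """Assign (name, estimated_cost) tasks to the least-loaded GPU."""
    loads = [0.0] * num_gpus
    assignment = {i: [] for i in range(num_gpus)}
    # Placing the largest tasks first gives a better greedy balance.
    for name, cost in sorted(tasks, key=lambda t: -t[1]):
        gpu = min(range(num_gpus), key=lambda i: loads[i])
        assignment[gpu].append(name)
        loads[gpu] += cost
    return assignment, loads

# Made-up per-frame tasks with rough relative costs.
frame_tasks = [("terrain", 8.0), ("buildings", 6.0),
               ("characters", 4.0), ("particles", 3.0),
               ("sky", 1.0)]
assignment, loads = distribute_tasks(frame_tasks, 2)
# With these numbers, both GPUs end up with 11.0 units of work.
```

The point of the sketch is that each GPU works on pieces of the same frame, so no GPU is ever a whole frame ahead of the other, which is why this scheme sidesteps AFR-style microstutter.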

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Please, Keys, I have been very careful about staying on topic; Virge gave me the word. You brought up overhead. As for your other questions addressed to Sunny: if you read the link, or other articles, this is the best graphics news in years, if it works as advertised.

There are a lot of details, and it did run Crysis with two 9800s at 60 FPS.

This is a big deal: it allows other graphics companies to start up or expand because of the scaling.

 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
@keysplayr2003

Did you even read the article? Even if you didn't, you can still see in the pictures that each GPU splits the DirectX workload, nothing like AFR, which is where microstutter is found.

You ask how scaling will be better. Well, what happens when you mix in a weaker GPU in CF or SLI, if you even can? Again, if you read the article, the Hydra engine determines the workload each GPU can handle and splits it up as needed. Not to mention getting the full frame buffer and memory bandwidth from all GPUs, instead of wasting it like current SLI and CF setups do. If it works right, there should be no reason not to see better scaling in all games.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: SSChevy2001
@keysplayr2003

Did you even read the article? Even if you didn't, you can still see in the pictures that each GPU splits the DirectX workload, nothing like AFR, which is where microstutter is found.

You ask how scaling will be better. Well, what happens when you mix in a weaker GPU in CF or SLI, if you even can? Again, if you read the article, the Hydra engine determines the workload each GPU can handle and splits it up as needed. Not to mention getting the full frame buffer and memory bandwidth from all GPUs, instead of wasting it like current SLI and CF setups do. If it works right, there should be no reason not to see better scaling in all games.

Indeed I did. But you don't still see a need for timing there? Workload changes as scenes change, or even when a player turns around and is suddenly looking at large buildings instead of open land. This Hydra engine has its work cut out for it if it has to adjust GPU load constantly. All we see from those pics is the theory of what Hydra is "supposed" to do. Actually doing it in real-time gaming, successfully, is something else entirely. But again, it's much too early for this type of discussion anyway.
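That constant adjustment could, in principle, be a simple feedback loop on last frame's timings. This is only a sketch under my own assumptions (the `rebalance` function, gain, and GPU speeds are all invented), not anything Lucid has described:

```python
# A toy feedback loop (my own sketch, not Lucid's algorithm) for
# constant re-balancing: after each frame, shift work toward
# whichever GPU finished first.

def rebalance(split, t0, t1, gain=0.25):
    """split: fraction of the frame's work sent to GPU 0.
    t0, t1: the two GPUs' render times for the last frame."""
    error = (t0 - t1) / max(t0 + t1, 1e-9)  # > 0: GPU 0 is the bottleneck
    return min(0.9, max(0.1, split - gain * error))

# Simulate GPU 0 being twice as fast per unit of work as GPU 1.
split = 0.5
for _ in range(50):
    t0, t1 = split / 2.0, (1.0 - split) / 1.0
    split = rebalance(split, t0, t1)
# split settles near 2/3, which equalises the two render times.
```

Even this trivial controller needs a frame or two to react when the scene changes, which is exactly the timing concern raised above.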
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Janooo
Originally posted by: keysplayr2003
...

Honestly Sunny, how can it scale "better" than current Xfire/SLI implementations? How can it eliminate microstutter? It can only dish out the frames as fast as the video cards do. So far, the only thing I see about this tech, is the ability to run SLI or Xfire on any system with 2x PCI-e slots. That would certainly be a boon, but what is required for this to happen?
So much is unknown at this point.

As for your question, it's a good one. We'll have to wait and see what happens.

Hydra appears to the OS as a single GPU.
The distribution engine as it is called is responsible for reading the information passed from the game or application to DirectX before it gets to the NVIDIA or AMD drivers. There the engine breaks up the various blocks of information into "tasks" - a task is a specific job that HYDRA defines that can be passed to any of the 2-4 GPUs in the system.
It appears Hydra will be breaking DirectX calls into smaller pieces and managing their multitasking.
It remains to be seen if the scaling is going to be better.
This is no AFR so there shouldn't be any micro stutter.

Well, the HD3870X2 appeared as a single GPU to the OS as well. Didn't it?

"It remains to be seen if the scaling is going to be better."
Well yeah, it all kinda remains to be seen. So agree there.

"This is no AFR so there shouldn't be any micro stutter."
There will still be a timing requirement. If each GPU is rendering certain parts of the frame, then they still have to be assembled and then rendered. But again, like you said, it remains to be seen.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
How is it too early for this discussion? This thing is right around the corner. It can cause people to reconsider their upgrades. Many who wouldn't buy NV because of the SLI tax can now say, hell ya: scaling on the X58, or on an AMD M/B. This is a win-win for all. You spoke of timing on this chip.

Go back and look at what has happened with the original DX10 specs. This is timing. Then look at when it was that DX10.1 was introduced. It was not AMD pushing for the DX10.1 spec; they didn't have the muscle. It was Intel that TOLD MS to get the spec out. Ya say this is off topic; I say it's right on topic.

ATI's DX10.1 has been pressured hard. Developers ignoring the spec, NV downplaying it. Truth is, it's all about global illumination.

Now ya got the Hydra chip, and it changes everything. I suppose you believe it's an accident that it will support DX9 now, and ATI's DX10.1 near term, with so far no word on when DX10/11 will be supported.

This is Lucid, heavily Intel-invested, saying to the developers and NV: you're not playing nice, and you're trying to delay the development of great software (games).

Now, there is no proof, and probably no facts to support it. But it sure is looking exactly as I am thinking, and ya know what, it's about time, too. You would say this is off topic; I am saying this chip changes everything if it works. And it's not just for Intel; it's for all, with a friendly reminder to play nice if you want support.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: keysplayr2003
Originally posted by: Janooo
Originally posted by: keysplayr2003
...

Honestly Sunny, how can it scale "better" than current Xfire/SLI implementations? How can it eliminate microstutter? It can only dish out the frames as fast as the video cards do. So far, the only thing I see about this tech, is the ability to run SLI or Xfire on any system with 2x PCI-e slots. That would certainly be a boon, but what is required for this to happen?
So much is unknown at this point.

As for your quesiton, it's a good one. We'll have to wait and see what happens.

Hydra appears to the OS as a single GPU.
The distribution engine as it is called is responsible for reading the information passed from the game or application to DirectX before it gets to the NVIDIA or AMD drivers. There the engine breaks up the various blocks of information into "tasks" - a task is a specific job that HYDRA defines that can be passed to any of the 2-4 GPUs in the system.
It appears Hydra will be breaking DirectX calls into smaller pieces and managing their multitasking.
It remains to be seen if the scaling is going to be better.
This is no AFR so there shouldn't be any micro stutter.

Well, the HD3870X2 appeared as a single GPU to the OS as well. Didn't it?

"It remains to be seen if the scaling is going to be better."
Well yeah, it all kinda remains to be seen. So agree there.

"This is no AFR so there shouldn't be any micro stutter."
There will still be a timing requirement. If each GPU is rendering certain parts of the frame, then they still have to be assembled and then rendered. But again, like you said, it remains to be seen.

SLI and CF both don't scale 100%, so clearly it's still possible to scale better. The reason that it's possible that Hydra might scale better is because it *appears* to be primarily a hardware solution that functions regardless of the application, whereas SLI and CF are very software dependent and tailored for specific apps.

Also, Hydra claims to split up the workload 50/50 between two gpus, whereas with AFR each gpu renders every second frame and because of this each gpu duplicates a lot of the same work as the other.

I'm also not sure if there will be a 'timing requirement' per se... Since all the GPUs are working on all the frames, you don't have a situation where one frame will be ready 'too early' the way you would with AFR in SLI/CF.

IMO, the main concerns are:

What happens if the workload gets out of sync? Do you end up with parts of the frame missing?

How much latency does the chip/bus itself introduce into the equation?

All in all, this is very interesting stuff. I definitely can't wait to see how this all works out.
 

dunno99

Member
Jul 15, 2005
145
0
0
I read the article (although I've forgotten most of the details already, since I read it yesterday) and I have to say, the images look pretty suspicious. If those truly ARE the output images of the two devices (as in, they're not illustrative images showing workload differences), then I call BS. The key point is that the pillars being black on one monitor shows that its depth buffer has been generated, and thus has been processed, on the corresponding graphics card. If the whole system were truly parallel like the company claims, there would be no black there. On the other hand, if it was for illustrative purposes only, then yeah, of course.

If it's running on two plain vanilla 9800s right now and getting 60fps in Crysis, then I call BS as well. The reason being that the whole point of the SLI/Xfire bridge is to provide enough bandwidth for buffer synchronization. Sure, the standard PCI-e x16 slot should provide 5GB/s of bandwidth for communications. But if you've ever done any GPGPU programming, you'd know that reading data off of the graphics card moves more along the lines of x1 speeds. The reason is that the communication on these chips is optimized for receiving data, not sending data. So that means the two cards are communicating at much lower speeds, which isn't even close to enough bandwidth for a single buffer synchronization in a 60fps game (check the Wikipedia entry on SLI if you don't believe me). Then again, who knows... maybe they're playing in 640x480.
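The bandwidth argument can be sanity-checked with a little arithmetic; the figures below are my own round estimates (32-bit colour buffer, ~250 MB/s for an x1-class readback path), not measurements:

```python
# Back-of-envelope version of the bandwidth argument, with my own
# illustrative numbers: syncing one full 32-bit colour buffer per
# frame at 60 fps, versus an x1-class effective readback rate.
width, height, bytes_per_pixel, fps = 1600, 1200, 4, 60
sync_bytes_per_sec = width * height * bytes_per_pixel * fps
x1_readback = 250_000_000  # ~250 MB/s, roughly a PCIe 1.x x1 lane
# One buffer per frame already needs ~461 MB/s, above the x1-class
# rate, and real games resolve buffers many times per frame.
```

Under these assumptions, even a single full-screen sync per frame exceeds an x1-class path, which is the crux of the objection above.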

Overall, I think the tech is vaporware and a load of lies. Render buffer resolve will ALWAYS force the cards to synchronize by default; otherwise the output is not guaranteed to be correct. And in PC games, that happens VERY frequently within a frame. Also remember that bandwidth figures represent the OVERALL bandwidth per second/frame, not instantaneous. So that means if the game is EVER serial in its rendering process (such as using the results of the last resolve for the current step), then it will incur the full latency penalty of the sync, which, for a fullscreen buffer (1600x1200), will probably take about 5-10% of the performance out of the cards. This and the double vertex transform are the two main reasons you don't get 100% scaling even with full communication bandwidth. So near-linear? Yeah, right... as if these "third party" people know more about the cards than the original people who designed them. Come on, stop fooling yourself with your fantasy about how "some hyper geniuses at Lucid realized something that the original stupid engineers had no clue about." The reality is that the engineers at nVidia and ATi are very smart people, and they probably got the most out of their cards already.

Furthermore, if Intel is behind this, then its agenda is so that it doesn't have to worry about patent problems with SLI/Xfire. This way, Intel can go ahead and support "Hydra" on their motherboards without ever having to license from nVidia or AMD.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
You may be correct. The nice part is we'll see soon enough. If it works, though, it's :sun: all over! So how does this hurt ATI/AMD? AMD is pretty open. The only ones getting hurt here are people who would lose on a closed arch. Who would that be?
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: Nemesis 1
You may be correct. The nice part is we'll see soon enough. If it works, though, it's :sun: all over! So how does this hurt ATI/AMD? AMD is pretty open. The only ones getting hurt here are people who would lose on a closed arch. Who would that be?

Well, yes and no... You could also look at it in the respect that people who previously were not able to use two NVIDIA cards in SLI may all of a sudden be in a position to purchase two cards instead of one. I've really never understood NVIDIA's stance on this, in that they would rather try to sell you a $250-300 motherboard than let you just run two $400-600 video cards on the motherboard of your choosing.

I'm a perfectly good example of a lost sale because of their tactics... I really like my X38 board much better than the 780i board I previously sold, and I would have probably bought another GTX 280 when they dropped in price to $400... but, well, we all know I can't use two GTX 280s on an X38 board. As it is right now, my 4870 X2 just got delivered and is sitting next to me... If I like it enough, I'll probably eventually buy another... Maybe some of the NV focus group members can report that back to their Overlord.
 