Nvidia reveals Specifications of GT300


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Idontcare
CUDA was useless to me too until the day TMPGEnc decided to incorporate CUDA into their encoders.

http://tmpgenc.pegasys-inc.com...roduct/te4xp.html#tabs

It's pretty sweet actually.

Yeah, except that:
Decoding benefits apply to MPEG-1/2 video only and may produce unexpected video output quality compared to regular CPU decoding.
I want to see GPU-accelerated encoding which looks just as good as cpu encoding, and then I might take it seriously.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Fox5
I think ATI wins in GPGPU in terms of raw performance. GPGPU is all about the FLOPs, and ATI has more.
Of course, without a decent programming API, that leaves nvidia as the defacto victor anyway.

yes, CUDA actually allows nvidia to run C code... ATI has no answer to that.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: taltamir
Originally posted by: Fox5
I think ATI wins in GPGPU in terms of raw performance. GPGPU is all about the FLOPs, and ATI has more.
Of course, without a decent programming API, that leaves nvidia as the defacto victor anyway.

yes, CUDA actually allows nvidia to run C code... ATI has no answer to that.
Sure they do. Brook+. They've had it for ages.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
If you're talking about GPGPU, leave that to Tesla. The primary focus of the GT200 is gaming, and there is no denying the fact that ATi has the upper hand when it comes to performance/mm^2 in this field. Although this means very little for the consumer, of course.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: munky
Originally posted by: Idontcare
CUDA was useless to me too until the day TMPGEnc decided to incorporate CUDA into their encoders.

http://tmpgenc.pegasys-inc.com...roduct/te4xp.html#tabs

It's pretty sweet actually.

Yeah, except that:
Decoding benefits apply to MPEG-1/2 video only and may produce unexpected video output quality compared to regular CPU decoding.
I want to see GPU-accelerated encoding which looks just as good as cpu encoding, and then I might take it seriously.

That caveat may be there for legal reasons, but it certainly is not needed for practical reasons. I am an end user posting about it from experience, and to my perception there is zero IQ difference between my CUDA and non-CUDA-assisted TMPGEnc encodes/decodes.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: ViRGE
Originally posted by: taltamir
Originally posted by: Fox5
I think ATI wins in GPGPU in terms of raw performance. GPGPU is all about the FLOPs, and ATI has more.
Of course, without a decent programming API, that leaves nvidia as the defacto victor anyway.

yes, CUDA actually allows nvidia to run C code... ATI has no answer to that.
Sure they do. Brook+. They've had it for ages.

Which begs the question: why haven't we seen more programming for ATI via Brook if it's been out for ages? It could be that ATI's shader architecture is so difficult to code for, and to get performance equal to an Nvidia counterpart out of, that it isn't worth the bother when it's so much easier to code for Nvidia. I'm just guessing here of course, but I believe it's an educated guess. Other explanations are certainly welcome!!

 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Fox5
I think ATI wins in GPGPU in terms of raw performance. GPGPU is all about the FLOPs, and ATI has more.
Of course, without a decent programming API, that leaves nvidia as the defacto victor anyway.

That's the whole trick, isn't it? Everybody says there is so much untapped potential in the 4xxx series: FLOPs, shader count, etc. But it may not be so easy to code for it. Hopefully ATI will address this issue with the R8xx series. If the R8xx series is just more shaders and more bandwidth, I fear they will be in the same boat they are in right now when it comes to GPGPU performance.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Keysplayr
Originally posted by: Fox5
I think ATI wins in GPGPU in terms of raw performance. GPGPU is all about the FLOPs, and ATI has more.
Of course, without a decent programming API, that leaves nvidia as the defacto victor anyway.

That's the whole trick, isn't it? Everybody says there is so much untapped potential in the 4xxx series: FLOPs, shader count, etc. But it may not be so easy to code for it. Hopefully ATI will address this issue with the R8xx series. If the R8xx series is just more shaders and more bandwidth, I fear they will be in the same boat they are in right now when it comes to GPGPU performance.

Not to mention there is a meaningful difference between theoretical peak/max FLOPS and actual/realizable FLOPS. It's the same with IPC and CPUs. The end user is always living in the shadow of what their rig could do in theory if all bottlenecks were eliminated.
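
As a rough illustration of that gap (my own sketch, not anything from the posters above): the peak number is just clock x ALU count x operations per ALU per clock, while the realized number is whatever your kernel actually sustains. Every figure below is an illustrative placeholder, not a vendor spec.

// Back-of-the-envelope peak vs. realized FLOPS. All numbers are illustrative assumptions.
#include <cstdio>

int main()
{
    // Assumed example part: 800 ALUs at 750 MHz, 2 flops per ALU per clock (one multiply-add).
    double alus = 800.0, clock_hz = 750e6, flops_per_clock = 2.0;
    double peak = alus * clock_hz * flops_per_clock;        // theoretical ceiling

    // Realized throughput = flops the kernel actually retired / wall-clock seconds,
    // e.g. as reported by a profiler; placeholder numbers here.
    double flops_retired = 150e9;
    double seconds = 0.5;
    double realized = flops_retired / seconds;

    printf("peak %.0f GFLOPS, realized %.0f GFLOPS (%.0f%% of peak)\n",
           peak / 1e9, realized / 1e9, 100.0 * realized / peak);
    return 0;
}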
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Brook+ is not really comparable to CUDA. CUDA is mostly a full C implementation. Brook+ is lacking in many ways that limit the types of programming you can do on it, or at least make it a lot more difficult.
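
A minimal sketch of what that C-like model buys you in practice (an illustrative example of mine, not code from any poster): a CUDA kernel is ordinary C with a launch syntax bolted on - pointers, branches, scattered reads and writes - exactly the things a pure streaming model makes awkward.

// Illustrative CUDA example: plain C pointers and control flow inside a kernel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n)
        y[i] = a * x[i] + y[i];                      // arbitrary reads/writes through pointers
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));               // device allocation looks like malloc
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // the launch syntax is the main extension
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    printf("done\n");
    return 0;
}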
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Fox5
Brook+ is not really comparable to CUDA. CUDA is mostly a full C implementation. Brook+ is lacking in many ways that limit the types of programming you can do on it, or at least make it a lot more difficult.

Which would explain a lot. At the speed the computer industry moves, it's kind of hard to devote extra time to a difficult API. Programmers, or their bosses, may have a tough time justifying the extra time for it (if it can even be done on said API) when there are less time-consuming APIs to master.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Keysplayr
Originally posted by: Fox5
Brook+ is not really comparable to CUDA. CUDA is mostly a full C implementation. Brook+ is lacking in many ways that limit the types of programming you can do on it, or at least make it a lot more difficult.

Which would explain a lot. At the speed the computer industry moves, it's kind of hard to devote extra time to a difficult API. Programmers, or their bosses, may have a tough time justifying the extra time for it (if it can even be done on said API) when there are less time-consuming APIs to master.

I imagine the resource/project-management side of deciding whether to program for just CUDA, just Brook+, or both draws a lot of parallels to the decision process behind any game developer deciding which PC/console platforms they intend to release a game on. Mac and PC? 360 and Wii? All of the above? If not, then why not? Etc.

In this environment it is never enough to simply approach the market with your hardware with the mentality "if we build it (the hardware), then they (the programmers) will come". You have to seed the market with incentives to program on your hardware and lower the barrier to adoption for your architecture and ISA.

If F@H has not done a good job programming for ATi, then it really isn't their fault, nor is it their problem; it is ATi's problem, if in fact ATi cares about gaining market traction with their non-CUDA-compliant architecture and ISA. No different than Apple deciding not to go after the gaming world with their Mac Pros.

I very loosely consider myself a programmer and for me the language of choice comes down to compiler support and end-result performance of the compiled code all within the backdrop of the resources/time needed to get there.

If it takes me two years to create code for ATi which is 10% faster than my code on CUDA, but the slightly slower CUDA code only takes me one year to create, then I am going to code for CUDA and never look at Brook+ until I can afford to hire additional programmers to do both projects in parallel.

This is where the difficulty of accessing those max/peak FLOPS gets trumped by the ease of accessing slightly fewer FLOPS and getting my product into the marketplace all the sooner. This is also the same opportunity Intel recognizes with their Larrabee approach: with x86 being the de facto standard for the industry, it is actually the one case where "if you build it, they will come" holds from a GPGPU viewpoint.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
For the general discussion of performance per transistor, take a look at the 4770: it weighs in at a very light 137mm^2 sporting 826 million transistors. The 9800GT, on the other hand, has a mere 754 million transistors but takes up 276mm^2. Better performance per transistor? Hands down the 9800GT. More efficient design? The 4770 by a long shot. The 9800GT itself takes up slightly more die space with 200 million fewer transistors than the 48xx parts at the same build process.

While in the abstract performance per transistor is obviously of some importance, overall design architecture is going to trump it, by a lot. Die size is more important than actual transistor count from a business perspective by a long shot, and performance per watt is the only element which is really going to affect us as end users.
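
Taking the figures quoted above at face value (I haven't re-verified them), the density gap is easy to put a number on, and it is the per-area figure, not the per-transistor one, that feeds into wafer cost:

// Transistor density from the die sizes and counts quoted above (treated as given).
#include <cstdio>

int main()
{
    double hd4770_transistors = 826e6, hd4770_area_mm2 = 137.0;
    double gf9800gt_transistors = 754e6, gf9800gt_area_mm2 = 276.0;

    printf("HD 4770: %.1f Mtransistors/mm^2\n", hd4770_transistors / hd4770_area_mm2 / 1e6);
    printf("9800 GT: %.1f Mtransistors/mm^2\n", gf9800gt_transistors / gf9800gt_area_mm2 / 1e6);
    // Roughly 6.0 vs 2.7 - part of that gap is simply the newer process node, but either way
    // it is die area that sets how many chips come off a wafer, hence the cost argument above.
    return 0;
}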

In design philosophy, I would be curious as to which people think would be better for a high-end part: 4770x4 on one die, or 4770x2-x2? Obviously with an increased memory bus to go along with the rest. I'm not certain either way, other than the fact that ideally I'd rather have single-chip solutions to avoid games not scaling with multi-GPU setups and microstutter - yes, both of these are less frequent now than they used to be, but they still do crop up from both camps.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Keysplayr
Originally posted by: MarcVenice
I heard there will be a new app for F@H for ATI cards, and that it will boost performance a lot, so much, that ATI cards can Fold as fast as Nvidia cards?

Yes, we've been hearing this since 4xxx series launch. There will be this. There will be that. If they can do it, GREAT!!! Lets have it!! Still waiting on improved AVIVO since the last review that was in 2008. I'm not saying ATI "can't" do it, it's just that they're "not" doing it. At this stage, they might be feeling it's hopeless to compete against GT200 hardware in anything but gaming for all we know. Why throw scant money trying to best a competitor you cannot best in those areas? If I were ATI, I'd put most resources into making their GPU's a bit more powerful in the GPGPU dept. Not just a powerful GPU design, but one that is a bit easier to code for and get something out of them. Because right now, it looks like it's a real bitch and I could see why devs arent bothering.

You are just ranting. AVIVO encodes movies much faster than Badaboom and is free. F@H is not currently using the Global Data Share, Local Data Share and Global GPR features found in the HD 4xxx architecture, which don't exist in the HD 3800 architecture. The current F@H client is from November 2008, for Christ's sake, and is optimized for the HD 3800 GPUs, which is why it runs considerably slower on the HD 4800 than on its nVidia counterparts. And games could run faster on ATi hardware if developers under the TWIMTBP program, instead of optimizing shaders to do a lot of texture reads, were focused on improving performance and image quality.

I've seen games like Call of Juarez, which was in the nVidia program and now looks better on ATi. The same happened with Far Cry 1, for which the demo The Project was created; it pushed the boundaries of powerful graphics on the X800 series of cards ( http://ati.amd.com/technology/...ames/crytek/index.html ), or HL2 EP1 and EP2, which look amazing because they have a lot of shaders. Even other games like F.E.A.R. or F.E.A.R. 2, which aren't under any paid program, look great because they optimize their graphics for a standard like DX9.

But I see that most games under the nVidia program have blocky heads, lots of overstretched textures, lots of aliasing. Heck, look at Dead Space, Lost Planet, Quake 4, Call of Duty 4, Gears of War, Assassin's Creed: they look good, but there are very few shader effects and their realism relies on textures, as if we were in the late '90s. Come on, instead of using textures to simulate stuff, they should use more shaders to simulate a much wider variety of materials.

Originally posted by: dguy6789

It's more accurate to blame the FAH people than ATI. They can't even make a non beta properly working multicore client that has a gui installer yet. They've also been working on an Xbox 360 client for years now with nothing to show for it still. They definitely don't get anything done in a timely manner.

There shouldn't be any doubt in anyone's mind that a client that supports the 4000 series hardware properly will score significantly better than the current one. It's a joke that the vastly inferior 8800 cards get more work done than 4800 cards, it's obviously a software issue.

Yeah, look at MilkyWay@Home: its performance is much stronger on ATi hardware than on its nVidia counterparts. ATi's knowledge of compilers and flexible programming is paying off today. Think about this: an HD 4890 GPU, with only 959 million transistors, being able to keep up with and outperform the 1.4-billion-transistor monster aka the GTX 260, and remain competitive with the 1.4B GTX 275, shows how much more work per mm^2 gets done on that die compared to the GT200 architecture, which is less efficient in performance per mm^2.

http://www.brightsideofnews.co...graphics.aspx?pageid=1

BSN*: Are you planning to release a NVIDIA client as well? Why not?

Gipsel: Not at the moment, and there are several reasons for that. First of all, the ATI application still needs some polishing, like multiple-GPU or Linux support. Furthermore, the project itself is working with nVidia on a CUDA-powered version. Apparently, nVidia gives a lot of support to BOINC projects that want to port their applications to CUDA. Together with the mature CUDA SDK, it shouldn't take long until MW@H also gets a GPU application that supports the latest nVidia cards.

The reason I started with ATI in the first instance was the quite massive performance advantage ATI has on current hardware for the kind of calculations done at Milkyway [Dual Precision format - Ed.]. I hope it will increase the interest in getting GPGPU applications ported to ATI hardware as well, which is in a lot of cases at least as capable as comparable nVidia offerings. The fact that I'm a member of Team Planet3DNow!, a BOINC team associated with an AMD-oriented website, has no influence whatsoever.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Wreckage
Originally posted by: evolucion8


You are just ranting. AVIVO encodes movies much faster than Badaboom and is free.

It also spits out garbage results and relies heavily on the CPU. It's free because no one in their right mind would pay for it.

http://www.pcper.com/article.p...=647&type=expert&pid=3

But you will probably pay for the nVidia version right? Nah, it's only in my mind, you are the most unbiased, fairest person of all in here, loll
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: evolucion8
Originally posted by: Keysplayr
Originally posted by: MarcVenice
I heard there will be a new app for F@H for ATI cards, and that it will boost performance a lot, so much, that ATI cards can Fold as fast as Nvidia cards?

Yes, we've been hearing this since 4xxx series launch. There will be this. There will be that. If they can do it, GREAT!!! Lets have it!! Still waiting on improved AVIVO since the last review that was in 2008. I'm not saying ATI "can't" do it, it's just that they're "not" doing it. At this stage, they might be feeling it's hopeless to compete against GT200 hardware in anything but gaming for all we know. Why throw scant money trying to best a competitor you cannot best in those areas? If I were ATI, I'd put most resources into making their GPU's a bit more powerful in the GPGPU dept. Not just a powerful GPU design, but one that is a bit easier to code for and get something out of them. Because right now, it looks like it's a real bitch and I could see why devs arent bothering.

You are just ranting. AVIVO encodes movies much faster than Badaboom and is free. F@H is not currently using the Global Data Share, Local Data Share and Global GPR features found in the HD 4xxx architecture, which don't exist in the HD 3800 architecture. The current F@H client is from November 2008, for Christ's sake, and is optimized for the HD 3800 GPUs, which is why it runs considerably slower on the HD 4800 than on its nVidia counterparts. And games could run faster on ATi hardware if developers under the TWIMTBP program, instead of optimizing shaders to do a lot of texture reads, were focused on improving performance and image quality.

I am just ranting? AVIVO, as I have said, was faster. But the IQ was very poor, and had pixelated blocks in the finished product. Could be the reason it was faster. It didn't do the right job. Sounds very logical to me, how bout you? Am I ranting?

What does F@H not currently using the Global Data Share mean to us? Face it Evolution, if they could code F@H to run faster on ATI hardware, they would have by now. If they could have, they would have. We've just got finished talking about how much more difficult it seems to be to code for ATI architecture. And if the rumored R8xx is still the same 1+4 shader architecture the R7 series is albeit more of them, I don't see anything changing for the next two years at least. Many devs are probably just abandoning the effort because an easier solution is available. Could be? Doesn't sound like I'm ranting.


I've seen games like Call of Juarez, which was in the nVidia program and now looks better on ATi. The same happened with Far Cry 1, for which the demo The Project was created; it pushed the boundaries of powerful graphics on the X800 series of cards ( http://ati.amd.com/technology/...ames/crytek/index.html ), or HL2 EP1 and EP2, which look amazing because they have a lot of shaders. Even other games like F.E.A.R. or F.E.A.R. 2, which aren't under any paid program, look great because they optimize their graphics for a standard like DX9.

Hmm. Is F@H, or BOINC a part of the "TWIMTBP" program?

But I see that most games under the nVidia program have blocky heads, lots of overstretched textures, lots of aliasing. Heck, look at Dead Space, Lost Planet, Quake 4, Call of Duty 4, Gears of War, Assassin's Creed: they look good, but there are very few shader effects and their realism relies on textures, as if we were in the late '90s. Come on, instead of using textures to simulate stuff, they should use more shaders to simulate a much wider variety of materials.

You might as well list the other hundreds of games listed under TWIMTBP. Why stop there? LOL. Blocky heads. That was funny.

Originally posted by: dguy6789

It's more accurate to blame the FAH people than ATI. They can't even make a non beta properly working multicore client that has a gui installer yet. They've also been working on an Xbox 360 client for years now with nothing to show for it still. They definitely don't get anything done in a timely manner.

Hmmm.. They got the NV folding client out in a timely manner. Several versions IIRC.


There shouldn't be any doubt in anyone's mind that a client that supports the 4000 series hardware properly will score significantly better than the current one. It's a joke that the vastly inferior 8800 cards get more work done than 4800 cards, it's obviously a software issue.

Until this is accomplished, of course there will be doubts. If it could have been done, it would have been done. Simple. You guys are hysterical.

Yeah, look at MilkyWay@Home: its performance is much stronger on ATi hardware than on its nVidia counterparts. ATi's knowledge of compilers and flexible programming is paying off today. Think about this: an HD 4890 GPU, with only 959 million transistors, being able to keep up with and outperform the 1.4-billion-transistor monster aka the GTX 260, and remain competitive with the 1.4B GTX 275, shows how much more work per mm^2 gets done on that die compared to the GT200 architecture, which is less efficient in performance per mm^2.

http://www.brightsideofnews.co...graphics.aspx?pageid=1

BSN*: Are you planning to release a NVIDIA client as well? Why not?

Gipsel: Not at the moment, and there are several reasons for that. First of all, the ATI application still needs some polishing, like multiple-GPU or Linux support. Furthermore, the project itself is working with nVidia on a CUDA-powered version. Apparently, nVidia gives a lot of support to BOINC projects that want to port their applications to CUDA. Together with the mature CUDA SDK, it shouldn't take long until MW@H also gets a GPU application that supports the latest nVidia cards.

The reason I started with ATI in the first instance was the quite massive performance advantage ATI has on current hardware for the kind of calculations done at Milkyway [Dual Precision format - Ed.]. I hope it will increase the interest in getting GPGPU applications ported to ATI hardware as well, which is in a lot of cases at least as capable as comparable nVidia offerings. The fact that I'm a member of Team Planet3DNow!, a BOINC team associated with an AMD-oriented website, has no influence whatsoever.

I might be reading the article wrong, but is there the possibility of ATI being the faster GPGPU at MW@H because there is no Nvidia client for it yet? I am not sure. But what do you think would happen if this same dude downloaded the CUDA SDK and created a client for MW@H? I think I know what would happen. Double precision? I thought GT200 supported it? Am I wrong?
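
For what it's worth, GT200 does support double precision - it arrived with compute capability 1.3 - though at a much lower rate than single precision. A minimal sketch (mine, not from the article) of how a CUDA program would check for it:

// Illustrative capability check: compute capability 1.3 (GT200) is the first level with FP64.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("no CUDA device found\n");
        return 1;
    }
    bool has_fp64 = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("%s: compute capability %d.%d, double precision %s\n",
           prop.name, prop.major, prop.minor, has_fp64 ? "supported" : "not supported");
    return 0;
}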

Anyway, if you called my post a rant, what the heck would you call yours? Babble?
Blocky heads. Still making me chuckle. Thank you.
:thumbsup:

 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: BenSkywalker
In design philosophy, I would be curious as to which people think would be better for a high-end part: 4770x4 on one die, or 4770x2-x2?

If you think about it, you are asking the very question folks had to start answering with the invention of the integrated circuit.

Is it better to have discrete components on a printed circuit board, or to have them combined within an integrated circuit?

The answer of course is determined by the other boundary conditions assigned to the specific situation - e.g. "for performance considerations..." or "for cost considerations..." etc.

Moore's law deals directly with the "for cost considerations" aspect of this fundamental question. In fact the cost angle is the basis of Moore's law, see the very first graph of the paper presented on the second page.

If you frame your question as "if cost were not the determining factor...which would be preferred - monolithic 4770x4 or dual-die 4770x2" the answer would be determined in terms of performance.

If you frame your question as "if cost were a consideration..." the answer would be determined by Moore's law and the cost-minimum point for the number of components integrated onto a given IC at a given technology node.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Keysplayr
I am just ranting? AVIVO, as I have said, was faster. But the IQ was very poor, and had pixelated blocks in the finished product. Could be the reason it was faster. It didn't do the right job. Sounds very logical to me, how bout you? Am I ranting?

What does F@H not currently using the Global Data Share mean to us? Face it Evolution, if they could code F@H to run faster on ATI hardware, they would have by now. If they could have, they would have. We've just got finished talking about how much more difficult it seems to be to code for ATI architecture. And if the rumored R8xx is still the same 1+4 shader architecture the R7 series is albeit more of them, I don't see anything changing for the next two years at least. Many devs are probably just abandoning the effort because an easier solution is available. Could be? Doesn't sound like I'm ranting.

Hmm. Is F@H, or BOINC a part of the "TWIMTBP" program?
You might as well list the other hundreds of games listed under TWIMTBP. Why stop there? LOL. Blocky heads. That was funny.

Originally posted by: dguy6789

It's more accurate to blame the FAH people than ATI. They can't even make a non beta properly working multicore client that has a gui installer yet. They've also been working on an Xbox 360 client for years now with nothing to show for it still. They definitely don't get anything done in a timely manner.

Hmmm.. They got the NV folding client out in a timely manner. Several versions IIRC.

Until this is accomplished, of course there will be doubts. If it could have been done, it would have been done. Simple. You guys are hysterical.

I might be reading the article wrong, but is there the possibility of ATI being the faster GPGPU at MW@H because there is no Nvidia client for it yet? I am not sure. But what do you think would happen if this same dude downloaded the CUDA SDK and created a client for MW@H? I think I know what would happen. Double precision? I thought GT200 supported it? Am I wrong?

Anyway, if you called my post a rant, what the heck would you call yours? Babble?
Blocky heads. Still making me chuckle. Thank you.
:thumbsup:


What a load of nonsense; you just ignored all the links and information I posted and replied with cheap nVidia marketing stuff. The AVIVO artifacts issue was AN ISSUE OF CATS 8.12, thank you. They could code the F@H client to run faster on ATi hardware, but they haven't even finished the SMP version, which has been kept in beta for ages. Could it be because it's very hard to code for Intel and AMD CPUs? Well, it could be, if I used the same distorted point of view you applied to the GPU client. Look how bad you look stating that there's an nVidia client; that's completely wrong. The very same GPU client works on nVidia and ATi hardware: it was created ages ago for the X1900 series and then modified to work on the HD 2xxx, HD 3xxx and nVidia GPUs at the same time using OpenCL, got it? And if it were true that it's very hard to code for ATi hardware, which is a lie, then the client would have been released first on nVidia GPUs and only later on ATi hardware, so you are just ranting. And to ease your pain, ATi's HD 4870/4890 Double Precision performance is at least 4 times faster than the GTX 280/GTX285 could ever dream of.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: evolucion8
And to ease your pain, ATi's HD 4870/4890 Double Precision performance is at least 4 times faster than the GTX 280/GTX285 could ever dream of.

Well then it's completely useless. ATI's folding performance sucks, its video transcoding uses the CPU as a crutch and they still don't have GPU physics. Not to mention little to no developer support.

Add to that the fact that they have only the 4th fastest GPU in games, and your point is pretty much lost.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: SickBeast
AMD actually has the 2nd or maybe 3rd fastest GPU when you look at it objectively.

The majority of reviews/benchmarks give a slight edge to the GTX275 over the 4890, which would also place it behind the 280 & 285. So looking at it "objectively" would make it 4th.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Wreckage
Originally posted by: SickBeast
AMD actually has the 2nd or maybe 3rd fastest GPU when you look at it objectively.

The majority of reviews/benchmarks give a slight edge to the GTX275 over the 4890, which would also place it behind the 280 & 285. So looking at it "objectively" would make it 4th.

I don't believe you. The 4890 and 275 are tied by all accounts.
 

Elfear

Diamond Member
May 30, 2004
7,126
738
126
Originally posted by: SickBeast
Originally posted by: Wreckage
Originally posted by: SickBeast
AMD actually has the 2nd or maybe 3rd fastest GPU when you look at it objectively.

The majority of reviews/benchmarks give a slight edge to the GTX275 over the 4890, which would also place it behind the 280 & 285. So looking at it "objectively" would make it 4th.

I don't believe you. The 4890 and 275 are tied by all accounts.

What's not to believe Sickbeast? Wreckage is the most unbiased person I've seen on these forums.
 