40nm Battle Heats Up


chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: MegaWorks
Don't worry BFG10K, I really don't give a shit about his comments.
LOL, shame really, you might actually learn something. How'd that 4850CF set-up treat you btw?
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
Originally posted by: Azn
No need to reply to marketing jargon from a guy who thinks ROP was the biggest factor when it comes to performance. :laugh: You even had a thread about it. ROFL...
Yep, and I was right there as well. G80 to G80 GTS, G80 to G92, G92 to G94, G92 to GT200, GT200 to GT200b. They ALL show TMU and SP are less significant than ROP when it comes to performance.

That's quite funny considering G80 gets hammered by G92 when it comes to raw performance. :brokenheart:


Based on AT's review that's right; however, that wasn't the case with the original 9800GTX and GTS 512MB. Still doesn't change the fact that the GTX 260 always outperforms the 9800GTX+.

Wrong! 9800gtx is 8fps faster in that same benchmark against 8800gtx. Fact is the GTX 260 also has a whole lot more bandwidth, more vram, and more ROPs just to beat the 9800gtx by 5-10% in raw performance.


Ya, it could be anything but at the end of the day GTX 280 SLI distances itself from both and GTX 295 ends up looking like GTX 260 SLI. That's the point.

Anything is right. :laugh:



Yep, and raw performance would be just as important at 1680 as 2560, so if bandwidth isn't an issue as you've repeatedly claimed, you'd certainly have to acknowledge the results at 1680 with 4xAA are completely relevant in proving the point that ROPs are more significant than SP and TMU, given the performance differences between GTX 280 SLI, 260 SLI and 295.

How so when it's not CPU bottlenecked? When it is being CPU bottlenecked the FPS is the same across the board whether it be GTX 280 SLI or GTX 295, even with AA. At 1680x1050 4xAA it is still being bottlenecked by bandwidth. AA uses bandwidth and vram and some pixel fill. If a card has 15% more bandwidth, expect 15% better fps with AA, give or take. Factor in the pixel and texture fillrate to determine the final outcome of FPS.
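A back-of-the-envelope sketch of that rule of thumb (the example bandwidth and FPS figures below are illustrative assumptions, not measured results):

```python
# Heuristic from the post above: with AA enabled, FPS scales roughly
# in proportion to memory bandwidth, before factoring in fillrate.
def estimate_aa_fps(base_fps, base_bw_gbs, new_bw_gbs):
    return base_fps * (new_bw_gbs / base_bw_gbs)

# e.g. a card doing 40 FPS with 4xAA at 70.4 GB/s (a stock 9800gtx's
# bandwidth), re-estimated with 15% more bandwidth:
print(estimate_aa_fps(40.0, 70.4, 70.4 * 1.15))  # ~46 FPS, give or take
```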


Because lower resolutions have fewer pixels to draw per frame, regardless of AA.

Although that statement is right, that's not exactly how it works. So if you have more than enough fillrate you would still be getting the same fps? Of course not. If you have 2x the fillrate than what's required, you would be drawing it 2x as fast or waiting for the CPU to feed the information. In the end, the more fillrate you have, the faster it gets.
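A toy model of the point being argued here, that frame rate is capped by whichever is slower, the CPU feeding frames or the GPU filling pixels (the per-frame pixel workload below is an illustrative guess, not a measured figure):

```python
# Frame rate = min(CPU feed rate, fillrate / pixel work per frame).
def effective_fps(cpu_fps, fill_mpixels_s, mpixels_per_frame):
    gpu_fps = fill_mpixels_s / mpixels_per_frame
    return min(cpu_fps, gpu_fps)

# 1680x1050 is ~1.76 Mpixels on screen; assume ~60 Mpixels of total fill
# work per frame (overdraw, shadow and post passes) -- purely illustrative.
print(effective_fps(120, 10800, 60))  # min(120, 180) = 120 -> CPU-limited
print(effective_fps(120, 5400, 60))   # half the fillrate: min(120, 90) = 90
```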


So is bandwidth an issue at lower resolutions or not? You keep arguing both, either bandwidth is important or only "RAW performance", which is it?

With AA it is in a big way. With raw performance not as much if any at all. Depends on the game and card as well.


I'd love for you to run some benchmarks at 1680 and see which yields a bigger increase: core, shader or memory. I already know the difference, but now that you have a somewhat relevant part you might find out more on your own so you can stop posting nonsense about SPs, TMUs, and bandwidth.

Considering G92 is bottlenecked by bandwidth, memory speed is as big a factor as core speed. It also depends on the game and what settings I'm trying to test.

With AA, mostly bandwidth. With no AA I would emphasize core and shader, even though it's bottlenecked by bandwidth. Crysis is one of those games that needs both.
 

MegaWorks

Diamond Member
Jan 26, 2004
3,819
1
0
Originally posted by: chizow
Originally posted by: MegaWorks
Don't worry BFG10K, I really don't give a shit about his comments.
LOL, shame really, you might actually learn something. How'd that 4850CF set-up treat you btw?

Learn something! You think I would waste precious time acting like a fanboy when there are other more interesting hobbies? Please don't tell me you think you're an expert now!

I fixed it, I just changed the bios of each card to the latest one from HIS, so no more blue screens. My brother is using the 4850CF setup and he loves it.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
That's quite funny considering G80 gets hammered by G92 when it comes to raw performance. :brokenheart:
Only after a ~25% increase to core clock to make up for its reduction in ROPs. Yet it still performs similarly to G80 despite nearly 50% increases to TMU and SP performance. Based on your theoreticals G92 should be beating G80 by 50%, but it clearly does not. Just as all of the other examples show, ROP has a greater impact on performance than the SP and TMU you've claimed matter most.

Wrong! 9800gtx is 8fps faster in that same benchmark against 8800gtx. Fact is the GTX 260 also has a whole lot more bandwidth, more vram, and more ROPs just to beat the 9800gtx by 5-10% in raw performance.
GTX 260 is always 5-10% faster despite similar or lower theoreticals (SP and texture fillrate) than the 9800GTX+, and often much more than that once AA is enabled. I guess it's probably not a coincidence that GTX 260 also has ~40% more ROPs and pixel fillrate....
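A quick check on that figure using the reference clocks (576MHz core with 28 ROPs for the GTX 260, 738MHz with 16 ROPs for the 9800GTX+):

```python
# Pixel fillrate = ROPs x core clock (Mpixels/s)
gtx260      = 28 * 576  # 16128 Mpixels/s
gtx9800plus = 16 * 738  # 11808 Mpixels/s
print(f"{100 * (gtx260 / gtx9800plus - 1):.0f}% more")  # ~37%, close to the ~40% quoted
```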

Anything is right. :laugh:
Yep, leading us to the conclusion ROPs have a greater impact on performance than TMUs and SP on Nvidia parts.

How so when it's not CPU bottlenecked? When it is being CPU bottlenecked the FPS is the same across the board whether it be GTX 280 SLI or GTX 295, even with AA. At 1680x1050 4xAA it is still being bottlenecked by bandwidth. AA uses bandwidth and vram and some pixel fill. If a card has 15% more bandwidth, expect 15% better fps with AA, give or take. Factor in the pixel and texture fillrate to determine the final outcome of FPS.
I'm not talking about clearly CPU limited situations, I'm comparing lower resolutions that would not be bandwidth limited, even with AA, that show differences in performance between the parts. Bandwidth is *NOT* the only limiting factor when it comes to AA performance.

Again, ROPs, particularly at lower resolutions where VRAM and bandwidth are not an issue, are much more important with regard to performance as they handle all the Z/stencil ops, blending, and MSAA resolve. Bandwidth is important as data is constantly written to and read from the Z-buffer and color cache, but again, performance and necessary bandwidth are going to be limited by ROP performance.

Although that statement is right, that's not exactly how it works. So if you have more than enough fillrate you would still be getting the same fps? Of course not. If you have 2x the fillrate than what's required, you would be drawing it 2x as fast or waiting for the CPU to feed the information. In the end, the more fillrate you have, the faster it gets.
Up to the point where you become CPU limited, yes, but in more GPU intensive games or with AA enabled, the bottleneck isn't going to be RAW fillrate or CPU speed, it's going to be compute/processing or ROP/AA performance, and that's before bandwidth or VRAM become an issue.

With AA it is in a big way. With raw performance not as much if any at all. Depends on the game and card as well.
I doubt bandwidth is an issue at 1680 with 4xAA considering much slower GPUs with less bandwidth handled those resolutions just fine and there's no adverse effects from bandwidth all the way up to 2560.

Considering G92 is bottlenecked by bandwidth, memory speed is as big a factor as core speed. It also depends on the game and what settings I'm trying to test.

With AA, mostly bandwidth. With no AA I would emphasize core and shader, even though it's bottlenecked by bandwidth. Crysis is one of those games that needs both.
Again, let's see some non-3DMark benches. Increase each individually by 10% and see what yields the most performance gain. My bet is on the core.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: MegaWorks
Learn something! You think I would waste precious time acting like a fanboy when there are other more interesting hobbies? Please don't tell me you think you're an expert now!

I fixed it, I just changed the bios of each card to the latest one from HIS, so no more blue screens. My brother is using the 4850CF setup and he loves it.
Funny, I remember similar comments from you when I listed potential multi-GPU pitfalls. Only to see this from you some months later:

Originally posted by: MegaWorks
I had 2 4850s in CF for 4 months and they gave me nothing but BSOD problems. This thread should explain it. I was waiting for AMD to fix and acknowledge the problem but nooo! They say it's our computer parts' problem! I gave ATI the finger and I went with nVidia and got myself a nice super overclocked GTX260 core216. It's sad, for the last 7 years I bought nothing but ATI, but when someone screws you like ATI did, I'm sorry but I'm jumping teams. BTW I had 2 3870s for 6 months before this setup and I loved it.

But hey, at least you tried it yourself, even if you learned the hard way :laugh:
 

MegaWorks

Diamond Member
Jan 26, 2004
3,819
1
0
Originally posted by: chizow
Originally posted by: MegaWorks
Learn something! You think I would waste precious time acting like a fanboy when there are other more interesting hobbies? Please don't tell me you think you're an expert now!

I fixed it, I just changed the bios of each card to the latest one from HIS, so no more blue screens. My brother is using the 4850CF setup and he loves it.
Funny, I remember similar comments from you when I listed potential multi-GPU pitfalls. Only to see this from you some months later:

Originally posted by: MegaWorks
I had 2 4850s in CF for 4 months and they gave me nothing but BSOD problems. This thread should explain it. I was waiting for AMD to fix and acknowledge the problem but nooo! They say it's our computer parts' problem! I gave ATI the finger and I went with nVidia and got myself a nice super overclocked GTX260 core216. It's sad, for the last 7 years I bought nothing but ATI, but when someone screws you like ATI did, I'm sorry but I'm jumping teams. BTW I had 2 3870s for 6 months before this setup and I loved it.

But hey, at least you tried it yourself, even if you learned the hard way :laugh:

I guess I overreacted that time, I wasn't patient enough.

Yes, I fixed it without ATI's help, but that's just me. I do have a problem with my factory overclocked XFX card, it crashes every time it reaches 85C. But when I downclock the card back to stock GTX 260 clocks the card works fine. I guess I could call it "a piece of nVidia crap", right? I only had this card for a couple months, talk about nVidia super quality. :roll:
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
Only after a ~25% increase to core clock to make up for its reduction in ROPs. Yet it still performs similarly to G80 despite nearly 50% increases to TMU and SP performance. Based on your theoreticals G92 should be beating G80 by 50%, but it clearly does not. Just as all of the other examples show, ROP has a greater impact on performance than the SP and TMU you've claimed matter most.

In raw performance the 9800gtx beats the 8800gtx pretty badly. Even with a 25% better clock it still doesn't quite catch up to the 8800gtx's pixel fillrate. Actually the 8800gtx has 28% more pixel fillrate, and yet it gets outperformed by the 9800gtx when it comes to raw performance.

8800gtx 13800 Mpixels/s
9800gtx 10800 Mpixels/s
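Those two figures fall straight out of ROP count times core clock (24 ROPs at 575MHz for the 8800gtx, 16 at 675MHz for the 9800gtx):

```python
# Pixel fillrate = ROPs x core clock (Mpixels/s); matches the figures above.
for name, rops, core_mhz in [("8800gtx", 24, 575), ("9800gtx", 16, 675)]:
    print(name, rops * core_mhz, "Mpixels/s")
# 8800gtx 13800, 9800gtx 10800 -> 13800/10800 is the 28% gap mentioned above
```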

Here's the full review.

http://www.anandtech.com/video/showdoc.aspx?i=3340&p=3

1600x1200
Crysis
9800gtx 30.2
8800gtx 26.9
12%

1920x1200
Assassin's Creed
9800gtx 45.7
8800gtx 38.1
20%

1920x1200
Bioshock
9800gtx 66.6
8800gtx 58.9
13%

9800gtx has 50% more bilinear texel fill but only 17% more FP16 fill, not to mention bandwidth limitations are holding that fillrate back. 50% more fillrate does not equate to 50% better performance. That's quite simple-minded if you actually think that.

GTX 260 is always 5-10% faster despite similar or lower theoreticals (SP and texture fillrate) than the 9800GTX+, and often much more than that once AA is enabled. I guess it's probably not a coincidence that GTX 260 also has ~40% more ROPs and pixel fillrate....

GTX 260 is always faster, you are right. Not denying it. Not saying those extra ROPs don't add performance either when it comes to AA or uber high resolutions. Again, the 9800gtx+ has more fillrate, but is that fillrate so starved by bandwidth that it can't close the gap on the GTX260? I've done simple tests where my G92 card's performance improves as I raise only the memory clocks, even without AA. So I wonder, if G92 had the bandwidth, could it beat the GTX260 in performance? Maybe even be neck and neck with the GTX 260 216SP core when it has enough bandwidth.


Yep, leading us to the conclusion ROPs have a greater impact on performance than TMUs and SP on Nvidia parts.

Here's a die shot of the GTX 280 with the ROPs, TMUs, SPs, and frame buffer highlighted... ROP is the smallest part of the chip, while the SP and texture units cover nearly 70% of it.

http://techreport.com/r.x/gefo...0/die-shot-colored.jpg

So I ask you: that small section of the chip makes the most difference in performance? Why doesn't Nvidia add more ROPs and kill off some texture or SP units if ROPs had the biggest impact on performance? Is Nvidia dumb?


I'm not talking about clearly CPU limited situations, I'm comparing lower resolutions that would not be bandwidth limited, even with AA, that show differences in performance between the parts. Bandwidth is *NOT* the only limiting factor when it comes to AA performance.

But at lower resolutions it's still bottlenecked by the CPU, even looking at your bit-tech review @ 1680x1050 4xAA. You can clearly see Fallout, Left 4 Dead, Call of Duty, and Brothers in Arms being bottlenecked there.

At lower resolutions with AA, GTX 280 SLI leads the GTX 295 by more than it does without AA. Without AA that performance lead shrinks, as shown in Anandtech's review. Bit-tech's numbers are all over the place as far as I'm concerned. GTX 295 beats GTX 280 SLI in GRID even with AA, while Anandtech's review shows otherwise. Not to mention 216-core SLI beating the GTX 295 in many of their benches when theoretically it's not possible, unless Nvidia has neutered drivers.

Again, ROPs, particularly at lower resolutions where VRAM and bandwidth are not an issue, are much more important with regard to performance as they handle all the Z/stencil ops, blending, and MSAA resolve. Bandwidth is important as data is constantly written to and read from the Z-buffer and color cache, but again, performance and necessary bandwidth are going to be limited by ROP performance.

But again, bandwidth is the issue even at lower resolutions as long as you add AA to the mix. You can easily figure this out by downclocking your GTX 280 to GTX 260 memory bandwidth, benchmarking @ 1680x1050 4xAA, and comparing the before and after performance differences. You would have to downclock your GTX 280 memory to 872mhz to match the GTX 260/295 memory clocks of 999mhz. Try benchmarking a few games to get a clear indication and post the results here.
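For what it's worth, that 872mhz figure checks out against the bus widths (512-bit on the GTX 280, 448-bit on the GTX 260/295, both GDDR3 at double data rate):

```python
# GDDR3 bandwidth (GB/s) = memory clock (MHz) * 2 (DDR) * bus width (bits) / 8 / 1000
def bandwidth_gbs(mem_mhz, bus_bits):
    return mem_mhz * 2 * bus_bits / 8 / 1000

print(bandwidth_gbs(999, 448))  # GTX 260/295: ~111.9 GB/s
print(bandwidth_gbs(872, 512))  # downclocked GTX 280: ~111.6 GB/s -- a close match
```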


Up to the point where you become CPU limited, yes, but in more GPU intensive games or with AA enabled, the bottleneck isn't going to be RAW fillrate or CPU speed, it's going to be compute/processing or ROP/AA performance, and that's before bandwidth or VRAM become an issue.

Now you are contradicting yourself.

This is what you said: Because lower resolutions have fewer pixels to draw per frame, regardless of AA.





I doubt bandwidth is an issue at 1680 with 4xAA considering much slower GPUs with less bandwidth handled those resolutions just fine and there's no adverse effects from bandwidth all the way up to 2560.

Sure it's an issue. 9800gtx+ vs GTX260. 9800gtx+ vs 8800 ultra. Should I go on?



Again, let's see some non-3DMark benches. Increase each individually by 10% and see what yields the most performance gain. My bet is on the core.

I don't have time to do benches right now. I'm about to get rid of the 8800gts I just bought while waiting for a GTX 280 from buy.com. Hopefully my order goes through. What I can show you are my 8800gs benches I've done in the past.

I lowered my core clocks by 24%, which reduces both my pixel and texel fillrate. In a separate run I lowered my memory clocks by 24% to isolate each factor in this test...

Tested Crysis 1.2 1440x900 no AA dx9 high settings

STOCK OC CLOCKS 729/1728/1040
37.55 fps

CORE REDUCTION 561/1728/1040
34.87 fps -7.2% difference

BANDWIDTH REDUCTION 729/1728/800
33.70 fps -10.1% difference

Memory clocks had the biggest drop in performance. This is the exact same G92 chip as my 8800gts, with 1/4 of the clusters disabled. A full G92 would show exactly the same results.
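A quick sanity check on those deltas, computing each drop relative to the stock run (the results land close to the -7.2%/-10.1% quoted; the small differences come down to rounding of the FPS figures):

```python
# Percent change in FPS relative to the stock overclocked run.
stock = 37.55
for name, fps in [("core -24% (561MHz)", 34.87), ("memory -24% (800MHz)", 33.70)]:
    print(f"{name}: {100 * (fps - stock) / stock:+.1f}%")
# core: -7.1%, memory: -10.3%
```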
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
Originally posted by: chizow

He claimed the Computerbase selections were objective based on game popularity, when in reality there's always going to be subjective influence with a limited testing suite.
I don't care what he claimed. I was quoting you, not him. It's another attempt at deflecting the issue on your part.

You stated "certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.", as if to somehow imply the article has an ATi bias because it includes some titles that ATi is traditionally strong in.

You were also implying that slightly older titles are somehow irrelevant because they don't win any popularity contests. But again, those that play said games are very interested in the scores and don't give a shit about your comments about the issue.

And again remember, these are 2007 titles so even with your laughable backwards compatibility "standards" you have to admit IHVs should still be supporting them and providing performance gains.

Like I said, inserting say, CoH and Dead Space, two popular titles that clearly favor Nvidia parts would significantly change the results.
How would they change the results in Call of Juarez, Jericho or Vegas, the very games you singled out? Stop changing the subject with your irrelevant rhetoric.

Of course I'm going to point out inconsistencies and problems with Derek's reviews, that doesn't invalidate all of his testing, research, insights and opinions.
I'm not saying it does. What I'm saying is that the benchmarks you linked to could also be flawed but you don't consider that possibility because they show nVidia in a good light, especially since they don't include the range of titles Computerbase tested.

His Big Bang results were clearly an outlier and I pointed that out, especially given Nvidia did not list improvements in the titles AT tested.
Utter rubbish. They tested Far Cry 2 and GRID, and both are on nVidia's list. I suggest you do some reading before mouthing off about things you clearly have no idea about.

I would absolutely point out the same if he said or did similarly with Nvidia.
Fine, then why not consider the possibility of the AT review you linked being flawed too?

LOL, looks like someone's still salty about my accurate analysis of your buying habits.
Accurate analysis? Heh. Sure, if by "accurate" you mean "so comically wrong that it's not possible to be any more wrong".

But whatever you say, champ. :roll:

Where am I dismissing the scores? I'm pointing out their performance rating aggregates aren't an accurate gauge of actual performance. Like I said earlier, I certainly enjoy seeing results from a variety of games but to say these results are completely objective or vendor agnostic is laughable.
Again, another attempted back-pedal on your part. Who gives a shit about the averages? They're just that: averages. Forget about them if you can't understand that.

The point is some games are seeing gains of 40% or more which is massive, more than enough to beat the GTX295 where the 4870X2 was previously losing. This is equivalent to a GPU upgrade and is bigger than a lot the gains observed when moving from a G80/G92 to a GT200. All this from a free driver upgrade.

I dislike Crossfire but damn, even I have to admit that is impressive considering the card wasn't a slouch in many of those games to begin with.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,995
126
Originally posted by: MegaWorks

Please don't tell me you think you're an expert now!
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:

Then he dismisses my 4850 vs GTX260+ comparison, despite me having months of gaming experience with both cards in a range of titles and drivers, because according to him "I swapped the cards too early, so it doesn't count".

Yep, that little champ has it all "figured" out. :roll:
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Guys: BFG, Chizow, Azn. You are all knowledgeable guys, but unfortunately, dead set against each other. Everything the other says is wrong, and it's pretty obvious this doesn't have much to do with the 40nm battle. Reading all of your posts, I had to go back several times and re-read the topic title to remind myself what it was about.

You are all knowledgeable. You all have preferences. You are always against each other. And I'll tell you right now, none of it REALLY has to do with video cards.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
Originally posted by: keysplayr2003
Originally posted by: nosfe
charlie is up to no good again
http://www.theinquirer.net/inq...idia-delays-40nm-parts
What has been his hit ratio with nv rumors in the last year anyway? i seem to recall a lot of "i told you so" articles from him lately

HEY!!! How dare you get this thread back on track!!! :frown:



Yes, Charlie is getting a smackdown if you look at the members' posts beneath the "rant".

He says the GT212 is dead? Isn't that supposed to be Nvidia's next cash cow? Of course this is the Inq.

*edit - Reading through that article I'd have to say he's now thrown out any attempt to be a journalist when it comes to Nvidia. Guess they really don't care for each other. But I guess what matters is how much truth is in the article, which remains to be seen.
 

qbfx

Senior member
Dec 26, 2007
240
0
0
Originally posted by: chizow

4870X2 wins 4 games at 1280/1680 8xAA
GTX 295 wins 3 games at 1280/1680 8xAA

GTX 295 also wins in 4 other titles with 4xAA, the highest allowed/tested. Results are similar with 2560, as already discussed.

Ok, if there wasn't the possibility you don't see well, I'd think you're lying. Before typing anything about the tests, check if these are right or wrong because I've no intention of listing the results again:

1280x1024, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies

summary: HD4870X2 wins in 5 titles, GTX295 wins in 3


1680x1050, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies

summary: HD4870X2 wins in 5 titles, GTX295 wins in 3


2560x1600, 4xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, Ass. Creed, CoJ, Crysis, Stalker
GTX295: CoD5, Bioshock, FC2, LP: Colonies, WiC

summary: HD4870X2 wins in 7 titles, GTX295 wins in 5


2560x1600, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis, LP: Colonies
GTX295: CoD5, FC2

summary: HD4870X2 wins in 6 titles, GTX295 wins in 2

Now, from the trend above, we can derive the results for 1920x1200 XxAA/XxAF which is way more important than 1280x1024 no AA/AF anyway :laugh:, and they would be similar to 2560x1600 XxAA/XxAF.


Why would you count LP and FC2 as splits when the GTX 295 wins the majority? I guess you'd also have to count Jericho as a split as well?

Look at the numbers above, they tell stories.

Ah yep, I see it now. Again it proves my point the Performance Rating can be misleading and meaningless.

Maybe it proves it when it's about AMD/ATi winning

Sure there is, you show difference in FPS and % difference on a per title and resolution basis. Not only is it easier to read, its actually meaningful as well. For example, I can look at a 65% difference and 5.2 FPS at 2560 and dismiss the result as meaningless.

You can look at every AA/AF mode for every res. as well, that can't summarize the results from 12 titles, can it?

If by hand-picked you mean Top 10 titles for the last 2-3 months at any given time, I'd be glad to have hand-picked titles for every review. Certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.

No, by handpicked I mean 5 titles that favor GF hardware and that were the only benches nVidia would let reviewers use during these tests.


Anyway, according to these results it's clear that the HD4870X2 wipes the floor with the 20% more expensive GTX295 at the resolutions that actually matter (stop pointing out that the GTX295 wins the "majority" of the tests only because at 1280x1024 noAA/AF the GTX295 actually kicks arse, by 10% :laugh:), and don't forget 1920x1200, that's the sweet spot right now.

 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: qbfx
Originally posted by: chizow

4870X2 wins 4 games at 1280/1680 8xAA
GTX 295 wins 3 games at 1280/1680 8xAA

GTX 295 also wins in 4 other titles with 4xAA, the highest allowed/tested. Results are similar with 2560, as already discussed.

Ok, if there wasn't the possibility you don't see well, I'd think you're lying. Before typing anything about the tests, check if these are right or wrong because I've no intention of listing the results again:

1280x1024, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies

summary: HD4870X2 wins in 5 titles, GTX295 wins in 3


1680x1050, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies

summary: HD4870X2 wins in 5 titles, GTX295 wins in 3


2560x1600, 4xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, Ass. Creed, CoJ, Crysis, Stalker
GTX295: CoD5, Bioshock, FC2, LP: Colonies, WiC

summary: HD4870X2 wins in 7 titles, GTX295 wins in 5


2560x1600, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis, LP: Colonies
GTX295: CoD5, FC2

summary: HD4870X2 wins in 6 titles, GTX295 wins in 2

Now, from the trend above, we can derive the results for 1920x1200 XxAA/XxAF which is way more important than 1280x1024 no AA/AF anyway :laugh:, and they would be similar to 2560x1600 XxAA/XxAF.


Why would you count LP and FC2 as splits when the GTX 295 wins the majority? I guess you'd also have to count Jericho as a split as well?

Look at the numbers above, they tell stories.

Ah yep, I see it now. Again it proves my point the Performance Rating can be misleading and meaningless.

Maybe it proves it when it's about AMD/ATi winning

Sure there is, you show difference in FPS and % difference on a per title and resolution basis. Not only is it easier to read, its actually meaningful as well. For example, I can look at a 65% difference and 5.2 FPS at 2560 and dismiss the result as meaningless.

You can look at every AA/AF mode for every res. as well, that can't summarize the results from 12 titles, can it?

If by hand-picked you mean Top 10 titles for the last 2-3 months at any given time, I'd be glad to have hand-picked titles for every review. Certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.

No, by handpicked I mean 5 titles that favor GF hardware and that were the only benches nVidia would let reviewers use during these tests.


Anyway, according to these results it's clear that the HD4870X2 wipes the floor with the 20% more expensive GTX295 at the resolutions that actually matter (stop pointing out that the GTX295 wins the "majority" of the tests only because at 1280x1024 noAA/AF the GTX295 actually kicks arse, by 10% :laugh:), and don't forget 1920x1200, that's the sweet spot right now.

Well, since 19x12 is the "sweet spot", why don't you list those results.
 

qbfx

Senior member
Dec 26, 2007
240
0
0
Originally posted by: keysplayr2003
Originally posted by: qbfx
Originally posted by: chizow

4870X2 wins 4 games at 1280/1680 8xAA
GTX 295 wins 3 games at 1280/1680 8xAA

GTX 295 also wins in 4 other titles with 4xAA, the highest allowed/tested. Results are similar with 2560, as already discussed.

Ok, if there wasn't the possibility you don't see well, I'd think you're lying. Before typing anything about the tests, check if these are right or wrong because I've no intention of listing the results again:

1280x1024, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies

summary: HD4870X2 wins in 5 titles, GTX295 wins in 3


1680x1050, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis
GTX295: CoD5, FC2, LP: Colonies

summary: HD4870X2 wins in 5 titles, GTX295 wins in 3


2560x1600, 4xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, Ass. Creed, CoJ, Crysis, Stalker
GTX295: CoD5, Bioshock, FC2, LP: Colonies, WiC

summary: HD4870X2 wins in 7 titles, GTX295 wins in 5


2560x1600, 8xAA/16xAF

HD4870X2: Jericho, GRID, RS: Vegas, CoJ, Crysis, LP: Colonies
GTX295: CoD5, FC2

summary: HD4870X2 wins in 6 titles, GTX295 wins in 2

Now, from the trend above, we can derive the results for 1920x1200 XxAA/XxAF which is way more important than 1280x1024 no AA/AF anyway :laugh:, and they would be similar to 2560x1600 XxAA/XxAF.


Why would you count LP and FC2 as splits when the GTX 295 wins the majority? I guess you'd also have to count Jericho as a split as well?

Look at the numbers above, they tell stories.

Ah yep, I see it now. Again it proves my point the Performance Rating can be misleading and meaningless.

Maybe it proves it when it's about AMD/ATi winning

Sure there is, you show difference in FPS and % difference on a per title and resolution basis. Not only is it easier to read, its actually meaningful as well. For example, I can look at a 65% difference and 5.2 FPS at 2560 and dismiss the result as meaningless.

You can look at every AA/AF mode for every res. as well, that can't summarize the results from 12 titles, can it?

If by hand-picked you mean Top 10 titles for the last 2-3 months at any given time, I'd be glad to have hand-picked titles for every review. Certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.

No, by handpicked I mean 5 titles that favor GF hardware and that were the only benches nVidia would let reviewers use during these tests.


Anyway, according to these results it's clear that the HD4870X2 wipes the floor with the 20% more expensive GTX295 at the resolutions that actually matter (stop pointing out that the GTX295 wins the "majority" of the tests only because at 1280x1024 noAA/AF the GTX295 actually kicks arse, by 10% :laugh:), and don't forget 1920x1200, that's the sweet spot right now.

Well, since 19x12 is the "sweet spot", why don't you list those results.

linky

Both cards with latest drivers:


Crysis Warhead, 1920x1200, 4xAA/16xAF

HD4870X2: avg. 26.5/min. 20
GTX295: avg. 18/min. 15


Crysis Warhead, 1920x1200, 8xAA/16xAF

HD4870X2: avg. 21/min. 17
GTX295: avg. 11.9/min. 7


Far Cry 2, 1920x1200, 4xAA/16xAF

HD4870X2: avg. 58.8/min. 41
GTX295: avg. 59.9/min. 41


Far Cry 2, 1920x1200, 8xAA/16xAF

HD4870X2: avg. 44.8/min. 27
GTX295: avg. 55.5/min. 42


STALKER: ClearSky, 1920x1200, 4xAA/16xAF

HD4870X2: avg. 22.9/min. 16
GTX295: avg. 11.5/min. 5


The HD4870X2 beats the GTX295 in Crysis and STALKER. In Far Cry 2 it falls behind at 1920x1200, 8xAA/16xAF and performs basically the same as the GTX295 at 1920x1200, 4xAA/16xAF (I never claimed the opposite).
 

MegaWorks

Diamond Member
Jan 26, 2004
3,819
1
0
Originally posted by: BFG10K
Originally posted by: MegaWorks

Please don't tell me you think you're an expert now!
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:

Then he dismisses my 4850 vs GTX260+ comparison, despite me having months of gaming experience with both cards in a range of titles and drivers, because according to him "I swapped the cards too early, so it doesn't count".

Yep, that little champ has it all "figured" out. :roll:

ATI drivers are excellent! My 4850CF problem wasn't the drivers, I think it was a bios bug. After changing the bios the BSOD error disappeared. Like I said, I overreacted that time, I should have thought of it, oh well.

BFG10K, you are more qualified because you are in a position to give a conclusion based on your experience with both camps. Hell, even I'm more qualified than "Mr. wise old man" here. :laugh:
 

toslat

Senior member
Jul 26, 2007
216
0
76
This is such a nice read - lightens my day. Kudos guys!!!!!!!!!!
I think Chizow needs to call for backup - the deck is being stacked against him.
Need to go grab a bite, be back soon.
Please keep up the back and forth (no loafing while I'm away!) :laugh:

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
In raw performance the 9800gtx beats the 8800gtx pretty badly. Even with a 25% better clock it still doesn't quite catch up to the 8800gtx's pixel fillrate. Actually the 8800gtx has 28% more pixel fillrate, and yet it gets outperformed by the 9800gtx when it comes to raw performance.

8800gtx 13800 Mpixels/s
9800gtx 10800 Mpixels/s

9800gtx has 50% more bilinear texel fill but only 17% more FP16 fill, not to mention bandwidth limitations are holding that fillrate back. 50% more fillrate does not equate to 50% better performance. That's quite simple-minded if you actually think that.
Already gone over all of this before. You claimed SP and TMUs had the greatest impact on performance with NV parts, and while the G92 clearly benefits from the ~50% increases to both over G80, it performs nowhere close to that much faster. Further, G80 saw a very linear increase in performance by simply increasing core clocks and nothing else. This would lead one to believe that the main reductions from G80 to G92 (ROPs, bandwidth and VRAM) are what's holding it back. All areas that were addressed significantly with GT200. What areas weren't addressed as much compared to G92b? SP performance and TMUs. Yet GT200 always outperforms G92b, and often significantly.

GTX 260 is always faster, you are right. Not denying it. Not saying those extra ROPs don't add performance either when it comes to AA or uber high resolutions. Again, the 9800gtx+ has more fillrate, but is that fillrate so starved by bandwidth that it can't close the gap on the GTX260? I've done simple tests where my G92 card's performance improves as I raise only the memory clocks, even without AA. So I wonder, if G92 had the bandwidth, could it beat the GTX260 in performance? Maybe even be neck and neck with the GTX 260 216SP core when it has enough bandwidth.
It's possible G92 with around half the bandwidth is bandwidth limited, but the question is whether it benefits more from memory bandwidth or core frequency. Comparing the various G92 and G92b variants, all tied to a 256-bit bus and ~1000MHz memory frequency, and drawing on my experience with an 8800GT, I'd still come to the conclusion that core clocks (ROP, setup, TMU etc) have a greater impact on performance than shader performance or texture fillrate.

Here's a die shot of the GTX 280 with the ROPs, TMUs, SPs, and frame buffer highlighted... ROP is the smallest part of the chip, while the SP and texture units cover nearly 70% of it.

http://techreport.com/r.x/gefo...0/die-shot-colored.jpg

So I ask you: that small section of the chip makes the most difference in performance? Why doesn't Nvidia add more ROPs and kill off some texture or SP units if ROPs had the biggest impact on performance? Is Nvidia dumb?
First, counting the number of transistors and die size for any given core function is a poor metric. For example, L2 cache accounts for up to 80% of some Core 2 dice, yet a chip with 1/2 or even 1/4 the L2 typically still delivers within 80% of the performance. Secondly, there are always going to be design decisions with regard to functional units, and the goal is balance. Third, it's possible Nvidia missed their target clocks or had other motives for certain design decisions (Tesla, Quadro, GPGPU etc).

As for why Nvidia didn't address ROPs with GT200? They most certainly did... they doubled the ROPs compared to G92, which is also a ~33% increase over G80's 24. They obviously learned from their mistakes with G92 that simply increasing compute and texture units without increasing ROPs and bandwidth wasn't enough. They didn't kill off texture or SP units, but as has been shown already, the theoreticals for TMU and SP performance with GT200 are very close to G92b due to differences in clock speed.

At lower resolutions with AA, GTX 280 SLI leads the GTX 295 by more than it does without AA. Without AA that performance lead shrinks, as shown in Anandtech's review. Bit-tech's numbers are all over the place as far as I'm concerned. GTX 295 beats GTX 280 SLI in GRID even with AA, while Anandtech's review shows otherwise. Not to mention 216-core SLI beating the GTX 295 in many of their benches when theoretically it's not possible, unless Nvidia has neutered drivers.
Yes, that's the point: GTX 280 SLI is always faster even in situations that aren't bandwidth limited, discounting situations where all high-end solutions are CPU bottlenecked. There's more than just the AT and Bit-Tech benches; every other review site that tested the GTX 295 came to the conclusion it performs closer to GTX 260 SLI than to GTX 280 SLI. Certainly bandwidth and VRAM come into play at higher resolutions with AA, but even at lower resolutions, with or without AA, the GTX 280 SLI is significantly faster when CPU bottlenecking isn't an issue.

But again, bandwidth is the issue even at lower resolutions as long as you add AA to the mix. You can easily figure this out by downclocking your GTX 280 to GTX 260 memory bandwidth, benchmarking @ 1680x1050 4xAA, and comparing the before and after performance differences. You would have to downclock your GTX 280 memory to 872mhz to match the GTX 260/295 memory clocks of 999mhz. Try benchmarking a few games to get a clear indication and post the results here.
I actually did exactly that with WiC, CoH, Crysis and FC2 last night at 1680 with 4xAA. I'll put actual numbers up in a bit, but from preliminary results, cutting bandwidth 26% at 602/1296/868 resulted in less than a 5% difference in WiC, CoH and FC2. Crysis showed more of a difference, ~8-10%. I can guarantee you a 26% increase to core clock would result in more than a 5-10% performance gain; unfortunately Precision and RT don't allow me to change the core/shader ratio that much individually.

Now you are contradicting yourself.

This is what you said: Because lower resolutions have fewer pixels to draw per frame, regardless of AA.
How am I contradicting myself? Yes, the number of pixels drawn to the frame buffer is the same for any given frame, but they can only be drawn as fast as they're fed to the ROPs, and in cases with AA, post-processing or heavy shading that rate is going to be reduced.

Sure it's an issue. 9800gtx+ vs GTX260. 9800gtx+ vs 8800 ultra. Should I go on?
It's an issue at higher resolutions with AA, but not nearly as much at 1680 with 4xAA.

I don't have time to do benches right now. I'm about to get rid of the 8800gts I just bought while waiting for a GTX 280 from buy.com. Hopefully my order goes through. What I can show you are my 8800gs benches I've done in the past.

I lowered my core clocks by 24%, which reduces both my pixel and texel fillrate. In a separate run I lowered my memory clocks by 24% to isolate each factor in this test...

Tested Crysis 1.2 1440x900 no AA dx9 high settings

STOCK OC CLOCKS 729/1728/1040
37.55 fps

CORE REDUCTION 561/1728/1040
34.87 fps -7.2% difference

BANDWIDTH REDUCTION 729/1728/800
33.70 fps -10.1% difference

Memory clocks had the biggest drop in performance. This is the exact same G92 chip as my 8800gts, with 1/4 of the clusters disabled. A full G92 would show exactly the same results.
If you're comparing your GSO then you also have 25% less bandwidth than a full G92. This is similar to past comparisons with parts crippled by 64-bit or 128-bit memory buses. And yes, Crysis has always been bandwidth and VRAM sensitive even at lower resolutions, so it's not much of a surprise that reducing bandwidth on a part that was already heavily bandwidth limited would have an adverse effect. It's also one of the few titles that is heavily shader intensive. Curious how you were able to unlink core and shader with that much difference though, I don't recall being able to change the ratio that much with my G80s or G92, and certainly not with my 280.
 

Rusin

Senior member
Jun 25, 2007
573
0
0
Originally posted by: SlowSpyder
He says the GT212 is dead? Isn't that supposed to be Nvidia's next cash cow? Of course this is the Inq.

*edit - Reading through that article I'd have to say he's now thrown out any attempt to be a journalist when it comes to Nvidia. Guess they really don't care for each other. But I guess what matters is how much truth is in the article, which remains to be seen.
Well, should I believe Charlie saying that GT212 is dead, or should I believe CJ, who is telling us the specifics of GT212?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
I don't care what he claimed. I was quoting you, not him. It's another attempt at deflecting the issue on your part.

You stated "certainly more relevant than old mainstays for certain GPU vendors. Jericho, CoJ, RS: Vegas hmm.... lol.", as if to somehow imply the article has an ATi bias because it includes some titles that ATi is traditionally strong in.
Oh noes, you don't care what he claimed; the problem is, what you're quoting is a direct reply to his claim. So really, who gives a shit what you think I claimed when it was a reply to something he claimed? He claimed this review was somehow objective, and not just "12 games on Wolfgang's hard drive", saying they were less biased than hand-picked titles in reviews that favor Nvidia, especially when Nvidia's "hand-picked" criteria is clearly more objective to begin with.

My point is there's always going to be subjectivity when selecting a testing suite, but I'd always prefer a criteria like "Any Top 10 title from the last 3 months" over "The Good, the Bad, and the Ugly from the last 3 years", aka "Whatever is on Wolfgang's Hard Drive Today".

You were also implying that slightly older titles are somehow irrelevant because they don't win any popularity contests. But again, those that play said games are very interested in the scores and don't give a shit about your comments about the issue.
I never claimed they were irrelevant, but they're surely less relevant than recent, popular titles that more people are buying and playing, right now.

And again remember, these are 2007 titles so even with your laughable backwards compatibility "standards" you have to admit IHVs should still be supporting them and providing performance gains.
Certainly, what would make you think IHVs aren't supporting them?

How would they change the results in Call of Juarez, Jericho or Vegas, the very games you singled out? Stop changing the subject with your irrelevant rhetoric.
It's not irrelevant when it clearly proves my point that this and all reviews are far from objective, contrary to what he claimed. Again, if anything Nvidia's criteria of using "Top 10 recent titles" is more objective, yet both of you criticize it for being "hand-picked for marketing purposes". If marketing means better performance in recent, popular titles, I'd have no problem with that criteria, ever.

I'm not saying it does. What I'm saying is that the benchmarks you linked to could also be flawed but you don't consider that possibility because they show nVidia in a good light, especially since they don't include the range of titles Computerbase tested.
So what are you saying? That AT's game selections are subjective? Hmmm. :roll: I've read over the review, nothing really stands out. Nvidia wins titles the majority of reviews show Nvidia winning in, ATI wins in titles that the majority of reviews show them winning.

Utter rubbish. They tested Far Cry 2 and GRID, and both are on nVidia's list. I suggest you do some reading before mouthing off about things you clearly have no idea about.
GRID showed some improvement but considering only 1 resolution was tested and the drivers claim results may only be evident with certain hardware configs or settings, that's not much to go by at all. After retesting with the drivers Derek posted an update saying they did see larger improvements at higher resolutions or with AA, saying the drivers "handled high memory usage situations better".

I've already pointed out the serious flaws with Derek's Far Cry 2 numbers in detail:

  • 1) He was using 180.44, not 180.48 drivers, and archived results that were at least 3 weeks old. Considering he didn't specifically list what he used for the Rel 180 "review", it's probable he used the older driver for that as well.
  • 2) Other sites clearly showed his results were the outlier. I don't think you'll find a single review site that shows the 4870 1GB is faster than the GTX 280 in FC2, outside of AT.
  • 3) Originally there was no testbed/driver information until I and others noted the omission, then he claimed 180.44 was the same as 180.48. I pointed out that wasn't necessarily true; at least one other site found a huge increase in FC2 from 180.43 to 180.48 (43 to 53 FPS, or ~23%).
  • 4) He delayed the review due to ATI driver problems, using the latest ATI hot fix drivers (and still ended up with fabricated/doctored results) but did not do the same for his Nvidia parts, instead using archived results from 3 weeks earlier.
So yes, as a result of these glaring errors, along with similar problems and inconsistencies in his Rel 180 review, I felt he should step down as a reviewer if he was no longer willing or capable of producing quality reviews. I would absolutely do the same if he did similar for ATI, as the results are clearly flawed. I'm sure his recent problems with ATI drivers have made his job less pleasant lately, but that's not an excuse for such glaring mistakes.

Fine, then why not consider the possibility of the AT review you linked being flawed too?
It is possible, but as stated above, there's nothing in there that would indicate such problems based on the results. The main issue with AT reviews is archived results, but we have assurance the GTX 295 results are recent because it was just released. It's possible the 4870X2 could be using older results, but the 8.12 hot fix drivers are specifically mentioned, which reduces the risk.

Accurate analysis? Heh. Sure, if by "accurate" you mean "so comically wrong that it's not possible to be any more wrong".

But whatever you say, champ. :roll:
Heheh ya, what'd you ultimately claim was the reason again? Oh ya, more robust drivers and features from Nvidia. Shame that directly contradicts claims you've made in the past about ATI drivers being more robust.

Again, another attempted back-pedal on your part. Who gives a shit about the averages? They're just that: averages. Forget about them if you can't understand that.

The point is some games are seeing gains of 40% or more which is massive, more than enough to beat the GTX295 where the 4870X2 was previously losing. This is equivalent to a GPU upgrade and is bigger than a lot the gains observed when moving from a G80/G92 to a GT200. All this from a free driver upgrade.

I dislike Crossfire but damn, even I have to admit that is impressive considering the card wasn't a slouch in many of those games to begin with.
I have nothing against driver updates that improve performance for free, and I've acknowledged the huge gains in 2 of the 12 titles as a result. My point is that the PR 160% or 120% aggregates clearly do not reflect actual performance even in the individual settings where things are at their worst, even in high bandwidth situations.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
Originally posted by: MegaWorks

Please don't tell me you think you're an expert now!
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:

Then he dismisses my 4850 vs GTX260+ comparison, despite me having months of gaming experience with both cards in a range of titles and drivers, because according to him "I swapped the cards too early, so it doesn't count".

Yep, that little champ has it all "figured" out. :roll:
Rofl no, I haven't read your 4850 vs GTX 260+ comparison. I'm sure it's fine and all, and will ultimately come down to ugly textures in Thief 2 and Red Faction. But of course that has nothing whatsoever to do with your idiotic claims made over a year ago, where you claimed ATI drivers were better than Nvidia's based on your experience, when you didn't have any relevant experience with an ATI part in over 3 years.

As for ATI driver problems, are you claiming the FC2 issues didn't exist and still don't exist even to this day, despite numerous hot fixes specifically addressing them? Are you claiming the CF/Vista problems didn't exist and still don't exist, despite numerous hot fixes specifically addressing them? Once again, referencing multiple sources, particularly those with concurrent experience with hardware from both camps, is certainly compelling evidence. The difference is, I'm not going to make idiotic claims that I'm basing anything on my experience :laugh:
 

MegaWorks

Diamond Member
Jan 26, 2004
3,819
1
0
Originally posted by: chizow
Originally posted by: BFG10K
Originally posted by: MegaWorks

Please don't tell me you think you're an expert now!
Of course Chizow's an "expert". He feels qualified to talk about ATi drivers despite not having used an ATi part since 2002 because, ya know, he read it on TEH INTARNETTS. :roll:

Then he dismisses my 4850 vs GTX260+ comparison, despite me having months of gaming experience with both cards in a range of titles and drivers, because according to him "I swapped the cards too early, so it doesn't count".

Yep, that little champ has it all "figured" out. :roll:
Rofl no, I haven't read your 4850 vs GTX 260+ comparison. I'm sure it's fine and all, and will ultimately come down to ugly textures in Thief 2 and Red Faction. But of course that has nothing whatsoever to do with your idiotic claims made over a year ago, where you claimed ATI drivers were better than Nvidia's based on your experience, when you didn't have any relevant experience with an ATI part in over 3 years.

As for ATI driver problems, are you claiming the FC2 issues didn't exist and still don't exist even to this day, despite numerous hot fixes specifically addressing them? Are you claiming the CF/Vista problems didn't exist and still don't exist, despite numerous hot fixes specifically addressing them? Once again, referencing multiple sources, particularly those with concurrent experience with hardware from both camps, is certainly compelling evidence. The difference is, I'm not going to make idiotic claims that I'm basing anything on my experience :laugh:

Ok, so with all this blah blah blah said, my question to you is: are you saying that nVidia drivers are more robust than ATI's? Please give us facts, not your personal philosophical opinion.
 

Rusin

Senior member
Jun 25, 2007
573
0
0
GDDR5 prices are going down and GDDR5 speeds are getting ridiculously high: the fastest GDDR5 chips are rated for QDR 1750MHz. Think of it this way: you can get 224GB/s with a 256-bit bus.
---

What they are rumouring about GT212:
1) DX10.1 support
2) 384SP, 96 TMU, 16 ROP
3) GDDR5 [QDR 1250MHz] + 256-bit bus = 160GB/s [GTX 280 = 141.7GB/s]
4) Size would be 289mm^2, only half of GT200 [61% of GT200b], remembering the memory bus change, what we can expect from the 40nm half-node process, and the fact that it isn't just a die shrink.
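Both bandwidth figures in this post follow from the GDDR5 formula (QDR means four data transfers per clock):

```python
# GDDR5 bandwidth (GB/s) = memory clock (MHz) * 4 (QDR) * bus width (bits) / 8 / 1000
def gddr5_bandwidth_gbs(mem_mhz, bus_bits):
    return mem_mhz * 4 * bus_bits / 8 / 1000

print(gddr5_bandwidth_gbs(1750, 256))  # fastest rated chips: 224.0 GB/s
print(gddr5_bandwidth_gbs(1250, 256))  # rumored GT212 setup: 160.0 GB/s
```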

 