AMD goes fermi


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I've presented the same argument multiple times and it seems you still don't understand it.

Here is what you said and I don't agree with it.

To begin, overclocking nullifies the importance of stock settings, and instead looks at performance on a spectrum based on the quality of the part, and not any predefined points on that spectrum. This changes performance comparisons from "this is my GPU speed" to "my GPU can do this speed, at this voltage, under these temperatures." This places a much greater emphasis on the quality of the chip itself. Therefore, those chips that offer the best performance/watt are the most desirable, since they have the most flexibility in the above described spectrum.

The last 2 sentences are not directly related. The best performance/watt chip may not be the best overclocker, and thus may not be the most desirable. For example, HD6850 had excellent performance/watt but GTX560Ti is a far better overclocker and is more desirable if you can afford it. Therefore, your assumption that the best performance/watt graphics card is most desirable is incorrect.

The specific case discussed was GTX580 vs. HD6970 (but of course you went on a tangent about 6950 and 7970 when those cards were not mentioned). The gamer who bought a GTX580 couldn't care less about the fact that GTX580 consumed more power than the HD6970, even if HD6970 had better performance/watt.

This is why a 6950, the king of performance/watt in the enthusiast segment prior to the 7970, was such a steal.

The reason the HD6950 was such a "steal" was because it offered a $350 videocard for the price of $230. It had nothing to do with performance/watt, because at 6970 speeds its performance/watt was nothing special. People who cared about performance/watt would not unlock and overclock the 6950, because the stock 6950 had better performance/watt than the 6970 did.

Your own sentence:

Still though, in the end of the day, I have a GTX 580 I only paid $230 for.

The reason you bought the 6950 was for price/performance, not because of performance/watt.

If HD6950 @ 1000mhz consumed 100W more than a stock GTX580, but it cost you $230 vs. $499 for GTX580, would you have purchased the 6950 still?
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
You also went on and on about HD7970 in this thread when I clearly showed a minimal power consumption difference between HD6970 vs. GTX580 in my post, which was directed at a comment that the GTX500 series was power hungry. See a trend? You keep defending the 7970 when the discussion was never about that card in regard to power consumption.
The title of the thread is "AMD goes fermi" in regards to the 7970. Just because I quote a point to lead into an opinion on the subject doesn't mean I'm directly responding to you. There are another 30+ people replying in this thread.
I didn't read the chart wrong. The comment about going from 17 to 32 fps was not related to AnandTech's Metro 2033 benchmark; it was about going from unplayable to unplayable.

If you want exact #s, that's going from 26.5 fps to 31.3 fps in Metro 2033 @ 1920x1080 4AA.
And that's still over 30FPS, but I digress.
Here is BF3 from [H], your favourite website.

HD7970 = 42.6 fps average, 27 fps min
GTX580 = 37.2 fps average, 24 fps min
HD6970 = 29.6 fps average, 21 fps min

I would love for you to go and play BF3 at 36 fps online since 36 fps is perfectly playable for you.
I do and I do just fine - http://battlelog.battlefield.com/bf3/soldier/DoctorK6/stats/177418661/ . But multiplayer has completely different requirements than single player, I'm surprised you would even attempt that leap.
For someone with a membership status from 2004, you sure have a lot of respect for other forum members. How about you try to address your disagreement with respect to another member as opposed to responding in the manner that you did?

Do you even realize the way you respond to people's arguments sometimes?
There is nothing wrong in any of those replies. Being disagreed with, told you're incorrect, or missing a fundamental point, are all parts of a debate. If you repeat the same argument without replying to my rebuttal, providing new information, or making distinguishable changes, I will assume you don't understand the discussion. If you're sensitive to the way I write, I will try to be gentler in my responses in the future. I mean no ill will by any means.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
You seriously just went there? You are now going to tell me I should care about $20 a year in electricity costs when I am buying a $500 graphics card that will depreciate $200 in 12-15 months? If AMD released an HD7970 with 300W TDP and 50% more performance than the current 7970, you wouldn't buy it because it would have consumed 100W more than the GTX580?

I would even say that running a 2500K @ 5.0Ghz and then caring about perf/watt seems quite hilarious and conflicting...as we are not talking cost-aware lowend...
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
And that's still over 30FPS, but I digress.

30/31 fps average is unplayable to me in a FPS on the PC. I am pretty sure most enthusiasts who play FPS and racing games on the PC would agree with me and not you. 31-36 fps average means dips into low 20s for minimum, which are noticeable. In fact, I'd go as far as to say most of us here consider PC gaming superior to console gaming in part because we can game at 60 and 120 fps (hence the recent popularity of 120 Hz monitors).
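To make the frame-time side of that concrete, here is a quick, purely illustrative conversion (the fps values are just the ones mentioned above):

```python
# Illustrative only: convert fps to per-frame times to show why dips are noticeable.
def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (36, 31, 27, 22):
    print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms per frame")

# 36 fps is ~27.8 ms per frame, but a dip to 22 fps is ~45.5 ms,
# i.e. frames take over 60% longer, which is what reads as stutter.
```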

Then you need to buy a better monitor setup.

A comment that came out of nowhere? 36 fps is the same speed regardless of the size of the screen, and dips to 27 fps min are what console gamers experience. No thanks.

I have no idea what that even means. To me gaming on a 42 inch plasma is superior to a 30 inch LCD that you have. I don't tell you to get a better monitor setup, do I? Again another personal comment.

I would even say that running a 2500K @ 5.0Ghz and then caring about perf/watt seems quite hilarious and conflicting...as we are not talking cost-aware lowend...

Obviously, especially since HD7970 bottlenecks a stock 2500k in modern games at 2560x1600 4AA. It's like throwing 140-150W of extra power into nothing. But when GTX580 consumes 50W more than an HD6970 for 20% more performance, it's power hungry.

There is nothing wrong in any of those replies. Being disagreed with, told you're incorrect, or missing a fundamental point, are all parts of a debate.

Your response was not towards me but towards notty22. When he commented regarding market share, you said:

"One of the golden rules of statistics - correlation does not imply causality."

At this point you could have come up with all kinds of reasons why you don't agree with his market share #s or financial results, etc. Instead you responded with:

"Of course, anyone with the most basic knowledge of statistics would know that, thanks for proving where you stand to the forum."

You are not telling another forum member the data he/she presented is incorrect. You are putting them down.

You think that's appropriate? I am not a mod, but I find that highly disrespectful and it has nothing to do with the discussion. What's next, you are going to ask people for their M. Eng from MIT before they can comment on power consumption?
 
Last edited:

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
Market share only tells part of the story; AMD gained market share but sacrificed profits in doing so. Nvidia, on the other hand, priced their cards at a premium, favoring margins over market share. It's pretty obvious which business model was more successful.
Need to look at the overall picture. When you are down in marketshare, you absolutely need to gain it back, or risk slipping even farther back and being marginalized. When that happens, no one wants your products, game devs don't want to work with your hardware, etc.; there is a snowball effect. Not to mention you do what needs to be done to move product.

Now that AMD is on a much more level playing field in marketshare, they can start to price from a position of strength. Even more so now that AMD is ahead in performance.
Looking at the new CEO quotes about being the aggressor/predator and the premium they're charging for their upcoming GPUs (rightly so), I think AMD wants to maximize profits rather than market share going forward.
It's a smart strategy, but harder to pull off if you're a bit player.
Did they? AMD claims to have launched the 7970 but I just checked on Newegg...
Nope, nothing for AMD 7970... except for reviews.

This is because it was not really launched; AMD lied when they said they launched the 7970, you literally cannot buy it in any store.
Are you for real? AMD didn't lie, they flat out stated when the cards were going to be for sale. Understood? If the 9th of January comes and goes and nothing shows up, and in the days and weeks following the card is still vaporware, then yea AMD was not being truthful. But your accusation is just downright dumb (I'm being kind) at this point.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
This is because it was not really launched; AMD lied when they said they launched the 7970, you literally cannot buy it in any store.
If nVidia launches 3+ months after AMD on this one, I would call it botched.
But if they launch within 1-2 months then that is normal execution. If they launch ahead then it is well executed.

Taltamir, I'm having trouble following what it is, exactly, that you are upset with here, but AMD does not sell 7970s. AMD sells the GPU; their AIBs are the 7970 sellers (and even then you are likely not buying from the AIB but rather from a reseller like Newegg or TigerDirect).

I believe AMD if they say they have launched their GPU, but that doesn't mean the AIBs have launched their video cards or that the resellers have them listed for sale.

There is a whole supply chain here to manage. AMD's part is done, it's launched, but that doesn't mean everyone else in the supply chain has their part done.

Regardless, why you mad bro? What's with all the angst, "AMD lied!", going on here? Were you counting on buying one sometime between the "launch" date and the reseller availability date? I don't get the frustration you are exuding in your post.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
The last 2 sentences are not directly related. The best performance/watt chip may not be the best overclocker, and thus may not be the most desirable. For example, HD6850 had excellent performance/watt but GTX560Ti is a far better overclocker and is more desirable if you can afford it. Therefore, your assumption that the best performance/watt graphics card is most desirable is incorrect.
Ah, I see where there was a misunderstanding. This argument is presented in the case that the chips in question are already good overclockers. If a chip's design is already peaked for frequency so that it has little room to overclock, regardless of its performance/watt, it's already worthless to an overclocker. Therefore arguing the merits of performance/watt would be moot, so why would I do that?
The specific case discussed was GTX580 vs. HD6970 (but of course you went on a tangent about 6950 and 7970 when those cards were not mentioned). The gamer who bought a GTX580 couldn't care less about the fact that GTX580 consumed more power than the HD6970, even if HD6970 had better performance/watt.
This thread wasn't about the HD6970/GTX 580, but to honor that argument, the gamer should care just as much. The 6970 can be overclocked to GTX 580+ speeds and at lower TDP's, so yes, it still matters. All for $120 less I may add. The performance of the GTX 580 is there simply because NVIDIA slammed a ton of power circuitry on the board and went all out. You're simply paying for their poor engineering, no more, no less.
The reason the HD6950 was such a "steal" was because it offered a $350 videocard for the price of $230. It had nothing to do with performance/watt, because at 6970 speeds its performance/watt was nothing special. People who cared about performance/watt would not unlock and overclock the 6950, because the stock 6950 had better performance/watt than the 6970 did.
Not inherently, because most don't understand that concept, which is why I made that long post pointing it out. The reason they were able to unlock and overclock their cards so easily is because the Cayman GPU is an efficient one. If you had to slap a giant aftermarket heatsink on a 6950 to unlock it, or grab a new PSU, far fewer people would have ventured forth.

The reason you bought the 6950 was for price/performance, not because of performance/watt.

If HD6950 @ 1000mhz consumed 100W more than a stock GTX580, but it cost you $230 vs. $499 for GTX580, would you have purchased the 6950 still?
You're mixing too many concepts with that statement to the point that they're wrong and not my argument in any form. A 6950 that consumed 100W more than a stock GTX 580 would be well over PCIe specs and would cause consideration for PSU load, heat load, safety, etc. These extra factors completely warp the argument to the point that it's inane. The savings gained from using a 6950 are a result of the chip being efficient. I bought a $230 6950 because I knew I could get $500 GTX 580 performance from it since the Cayman chip was proven in quality.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
30/31 fps average is unplayable to me in a FPS on the PC. I am pretty sure most enthusiasts who play FPS and racing games on the PC would agree with me and not you. 31-36 fps average means dips into low 20s for minimum, which are noticeable. In fact, I'd go as far as to say most of us here consider PC gaming superior to console gaming in part because we can game at 60 and 120 fps (hence the recent popularity of 120 Hz monitors).
Millions of people play games at 30FPS every day. I know there's a certain crowd that's into 120Hz, that's their thing. I'm into high resolution and eye candy, that's mine. To each his own. Point is, it's more than playable.
A comment that came out of nowhere? 36 fps is the same speed regardless of the size of the screen, and dips to 27 fps min are what console gamers experience. No thanks.

I have no idea what that even means. To me gaming on a 42 inch plasma is superior to a 30 inch LCD that you have. I don't tell you to get a better monitor setup, do I? Again another personal comment.
Why is it all of a sudden not OK that I have an opinion on any comment you make? This is getting childish. I would be disappointed if I paid $500 for a graphics card and was stuck looking at blocky 1080p. To each his own.
Obviously, especially since HD7970 bottlenecks a stock 2500k in modern games at 2560x1600 4AA. It's like throwing 140-150W of extra power into nothing. But when GTX580 consumes 50W more than an HD6970 for 20% more performance, it's power hungry.
Again, I'm not quite sure how much further I can break down my argument. It has nothing to do with the absolute value of power consumption but what you're getting in return for that power consumption. I don't think I can make it any simpler. If you buy a GTX 580 that gives a certain level of performance at a certain power consumption, and I can overclock my 6950 to the same performance at similar or lower power consumption, I just saved $270. And go play modded Skyrim if you think a 5.0GHz is overkill. Furthermore, the CPU is never 100% loaded, so it's not nearly 140-150W. Again, if you understood how the hardware worked, you would understand this. The fact that you're trying to connect my CPU power profile to this argument shows how clearly far off you are in all this.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Therefore arguing the merits of performance/watt would be moot, so why would I do that?

As I understand it, your view is that enthusiasts should care about performance/watt since it impacts power consumption when overclocking. Basically, if a card has excellent overclocking abilities, then all things being equal you'll buy the one with the lowest power consumption, since you'll be able to get similar performance without incurring the added power penalty.

Did you see anyone in the last 10 pages disagreeing with the above? The few individuals who were discussing power did so in the context of the absolute power consumption of a GPU. I then presented information that pretty much all high-end GPUs are power hungry, which makes the power differences between them a small point to care about. You don't have to agree with that viewpoint, but that's my view.

I look at the overall power of a system since I can't just run a game on my 6970. As such, imo the proper context should be the additional power consumption in the context of the overall system power consumption. When a system is already consuming 375-400W of total power, the extra 30-50W differences are unlikely to be a deal breaker for an enthusiast. Again, you don't have to agree.
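As a rough illustration of that "drop in the bucket" point (the system figure is just the 375-400W range mentioned above):

```python
# Rough arithmetic: extra GPU draw as a share of total system power (assumed figures).
system_draw_w = 390           # assumed midpoint of the 375-400W range above
for extra_w in (30, 50):      # the 30-50W deltas under discussion
    share = extra_w / system_draw_w * 100
    print(f"{extra_w}W extra is about {share:.0f}% of a {system_draw_w}W system")
# Roughly 8-13% of total system draw at these assumptions.
```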

The 6970 can be overclocked to GTX 580+ speeds and at lower TDP's, so yes, it still matters. All for $120 less I may add. The performance of the GTX 580 is there simply because NVIDIA slammed a ton of power circuitry on the board and went all out. You're simply paying for their poor engineering, no more, no less.

This argument doesn't make sense since GTX580 is faster than HD6970 and GTX580 overclocks more than HD6970. So you can never overclock HD6970 to GTX580 speeds since GTX580 can overclock further.

Not only that, but in the most demanding games (Crysis 2, BF3, etc.), even the mighty HD7970 is barely faster than GTX580. So how can HD6970 be almost as fast as a GTX580? If HD7970 is only 20-25% faster than a GTX580 but it's 40-45% faster than 6970, how can HD6970 be as fast as a 580 for $120 less? That's inconsistent.
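Working through those percentages as stated (illustrative midpoints only) makes the inconsistency explicit:

```python
# Consistency check using the percentages stated above (illustrative midpoints).
ratio_7970_vs_6970 = 1.45   # "40-45% faster than 6970"
ratio_7970_vs_580 = 1.22    # "20-25% faster than GTX580"

ratio_580_vs_6970 = ratio_7970_vs_6970 / ratio_7970_vs_580
print(f"Implied GTX580 lead over HD6970: ~{(ratio_580_vs_6970 - 1) * 100:.0f}%")
# ~19%, so the HD6970 cannot also be "almost as fast" as a GTX580 in those games.
```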
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
You are not telling another forum member the data he/she presented is incorrect. You are putting them down.

You think that's appropriate? I am not a mod but I find that highly disrespectful and having nothing to do with the discussion. What's next, you are going to ask people for their M. Eng from MIT before they can comment on power consumption?
Because notty22 replies with gems like this:
Agreed and in return, he is throwing up a Fudzilla article, and I don't know if he is seriously believing his own spin, that there was a 40% swing in gpu shipments. Market share or whatever value MrK is stating.
If he wants to be condescending and detract from the discussion, he deserves the refutation.
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
Post #114 of HD6990 vs. HD7970, you said "Aren't most of those games favored by NVIDIA GPUs?"

1) Can you please elaborate how my comparison of HD6990 vs. HD7970 was misleading?

2) Can you please elaborate how someone who plays Shogun 2, Crysis 2, Metro 2033 and BF3 should ignore those benchmarks?

3) Can you also elaborate how the fact that a game is AMD or NV sponsored impacts a gamer who wants to upgrade to play it? Should we just toss out all those games where HD7970 is barely faster because they are "NV sponsored"?

1. The 6990 comparison was misleading because:

A. The 6990 is a $700 part, the 7970 is not.
B. You repeated the GTX 580 over and over with benches from NVIDIA sponsored games.
C. The 7970 still outperformed the 580 in those benches and is priced accordingly.

2. They shouldn't ignore those benchmarks, but I don't see how that applies to the overall performance of a card.



Looks like the 7970 is priced accordingly to me. It's also more efficient to boot.

3. No, we don't toss out those benchmarks. We acknowledge that it performs better in those games and then make a disclaimer as to why. We also acknowledge the games it does perform well in and then look at the price point. What you're doing is looking at just the bad and throwing the rest out.

That's hilarious that you are calling me biased because NV's cards work well with Deferred MSAA and NV spent $ to work closer with developers. I guess in your world BF3, Crysis 2, Metro 2033 count. What's next any game that runs faster on AMD cards shouldn't count because it gives AMD an unfair advantage?

lol, I have not seen a legitimate reviewer that would review GPUs like you would. But if that's how you want to see it, then OK. Certain scenarios perform better/worse; what you're doing is called cherry picking.

In % terms it's much faster, but not in playable terms. Ya, and all 3 of them are unplayable at those settings, which means I'd be upgrading from 17 fps to 32 fps.

Seriously? Playable or not, the 7970 performed better. Cherry picking your arguments again. A top-tier game being playable on a single GPU at 2560x1600 isn't a common thing, and when it is, it doesn't last very long.

The GTX 580 is $500, the 7970 is $550, so what's the problem?
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Why is it all of a sudden not OK that I have an opinion on any comment you make? This is getting childish. I would be disappointed if I paid $500 for a graphics card and was stuck looking at blocky 1080p. To each his own.

My response was that I deemed 36 fps to be 'unplayable'. Your response to that was that I should upgrade my monitor instead. You tell me how your comment made any sense in the context.

Again, I'm not quite sure how much further I can break down my argument. It has nothing to do with the absolute value of power consumption but what you're getting in return for that power consumption.

And what exactly is 2500k @ 5.0ghz getting you for 140-150W of extra power consumption in games?

If you buy a GTX 580 that gives a certain level of performance at a certain power consumption, and I can overclock my 6950 to the same performance at similar or lower power consumption, I just saved $270.

I think you confuse performance/price with performance/watt.

You ignored completely my question: "Would you buy a $230 HD6950 if it consumes 100W more than a GTX580, if GTX580 cost $499?"

And go play modded Skyrim if you think a 5.0GHz is overkill.

So you underclock your 2500k to stock speeds for other games?

Furthermore, the CPU is never 100% loaded, so it's not nearly 140-150W.

If the CPU is never 100% loaded, why do you overclock it to 5.0ghz then? If the CPU is never 100% loaded, you would downclock to the lowest possible clock speed so that it is 99-100% loaded. This way you'd save on power consumption since you wouldn't need to add unnecessary voltage to keep the CPU stable at 5.0ghz at load....

Again, if you understood how the hardware worked, you would understand this. The fact that you're trying to connect my CPU power profile to this argument shows how clearly far off you are in all this.

Another personal attack. Hilarious.

Your CPU power consumption is being discussed because you are making a big fuss about 50W of power consumption. You then proceeded to explain why enthusiasts should care, but in getting there you neither proved that lower TDP directly impacts overclocking ability, nor did you come up with any reason why people should actually care other than $20 more in electricity costs.
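For reference, a back-of-envelope version of that electricity figure, with the rate and hours as assumptions:

```python
# Back-of-envelope cost of extra GPU power draw (rate and hours are assumptions).
extra_watts = 50          # the power delta being argued about
hours_per_day = 4         # assumed gaming time
rate_per_kwh = 0.12       # assumed electricity rate in $/kWh

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"~{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * rate_per_kwh:.0f}/year")
# Roughly $9/year here; around $20 with heavier use or a 100W delta.
```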

Then you finally said that you are OK with the added power consumption because you can get similar performance for $200-270 less, etc. This argument is all over the place, mixing and matching performance/watt and performance/price. HD6950 was great because of performance for the price.

If HD6950 @ 1000mhz consumed 300W and GTX580 consumed 200W, you'd probably still buy the HD6950 because it was $250+ less. Yes or no?
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
As I understand it, your view is that enthusiasts should care about performance/watt since it impacts power consumption when overclocking. Basically, if a card has excellent overclocking abilities, then all things being equal you'll buy the one with the lowest power consumption, since you'll be able to get similar performance without incurring the added power penalty.
Exactly. You understand it perfectly.
Did you see anyone in the last 10 pages disagreeing with the above? The few individuals who were discussing power did so in the context of the absolute power consumption of a GPU. I then presented information that pretty much all high-end GPUs are power hungry, which makes the power differences between them a small point to care about. You don't have to agree with that viewpoint, but that's my view.
This directly contradicts what you wrote above, which is why I'm having a hard time figuring out where your understanding is. The difference in power consumption of high-end GPU's is significant - up to 20%.
I look at the overall power of a system since I can't just run a game on my 6970. As such, imo the proper context should be the additional power consumption in the context of the overall system power consumption. When a system is already consuming 375-400W of total power, the extra 30-50W differences are unlikely to be a deal breaker for an enthusiast. Again, you don't have to agree.
Let me ask you this - would your system change regardless of the graphics card that's in it? No? Then why are you considering it? It's a sunk cost. Furthermore, does the system's power consumption have any great effect on the heat/power/overclocking head room of the graphics card? Not much. Does the graphics card's own heat/power consumption affect its overclocking ability? Most certainly.


This argument doesn't make sense since GTX580 is faster than HD6970 and GTX580 overclocks more than HD6970. So you can never overclock HD6970 to GTX580 speeds since GTX580 can overclock further.
From an absolute performance standpoint, I agree. I also think the 6970 is a poor purchase. Why? It's not the best on the performance/watt or performance/price curves. That's the sweet spot that ties this all together. As I mentioned earlier, overclocking a 6950 to GTX 580 speeds while overclocking a GTX 580 even higher closes the performance gap between the two (30%->20%) without changing the cost gap (~$250). Furthermore, the overclocked 6950 may reach a performance level at which the GTX 580 is no longer needed. You can extend the concept to the 6970, but it isn't as good an example, as I previously mentioned.
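A rough sketch of that gap argument, using made-up but representative index numbers and the street prices discussed in this thread:

```python
# Illustrative sketch of the perf-gap vs. price-gap argument (assumed index numbers).
price_6950, price_580 = 230, 500              # approximate street prices discussed above

scenarios = {
    "stock":       (100, 130),   # assumed: GTX 580 ~30% ahead of a stock 6950
    "overclocked": (130, 156),   # assumed: both overclocked, gap narrows to ~20%
}
for label, (perf_6950, perf_580) in scenarios.items():
    gap = (perf_580 / perf_6950 - 1) * 100
    print(f"{label}: 580 leads by {gap:.0f}%, price gap stays ${price_580 - price_6950}")
# The performance gap shrinks while the ~$270 price gap stays put.
```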

Not only that, but in the most demanding games (Crysis 2, BF3, etc.), even the mighty HD7970 is barely faster than GTX580. So how can HD6970 be almost as fast as a GTX580? If HD7970 is only 20-25% faster than a GTX580 but it's 40-45% faster than 6970, how can HD6970 be as fast as a 580 for $120 less? That's inconsistent.
You're mixing overclocking vs. not overclocking, hence the inconsistency.

This was a fantastic response and I thank you for taking the time to put it together. You specifically summarized your points and pointed out your disagreements, which allowed me to easily agree/disagree and comment. That's all I ask. It makes it incredibly difficult to carry on a discussion/debate when I'm seemingly thrown the same statements/comments over and over despite my best attempts to explore/expand/refute them, with no follow-up response. If I'm guilty of the same, please point out where and I will do my best to try again.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
A. The 6990 is a $700 part, the 7970 is not.
2. They shouldn't ignore those benchmarks, but I don't see how that applies to the overall performance of a card.

Looks like the 7970 is priced accordingly to me. It's also more efficient to boot.

lol, I have not seen a legitimate reviewer that would review GPUs like you would. But if that's how you want to see it, then OK. Certain scenarios perform better/worse; what you're doing is called cherry picking.

The GTX 580 is $500, the 7970 is $550, so what's the problem?

And now that you have spent your time in this thread replying to my post on HD6990 vs. HD7970, I'll let you rethink everything that you said because my post for HD6990 vs. HD7970 was in response to Gaia's comment:

"Slightly more mature drivers and it is as fast as a 6990."

Notice, my reply to GaiaHunter:

1) Has nothing to do with price.
2) Has nothing to do with GTX580
3) Has nothing to do efficiency.
4) Has nothing to do with HD7970 not being a good card.

You added all of those things and went on a tangent when all he and I were discussing was the HD7970's performance in relation to the 6990 and how drivers might influence that. :whistle:
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Does the graphics card's own heat/power consumption affect its overclocking ability? Most certainly.

I think you and I finally came to the main point of our disagreement. It is this.

Not necessarily. This was exactly my point to you. My GTX470 overclocked 25% at 1.087V and it also ran hotter than my 6950 did. My HD6950 is a horrible overclocker and crashes at 910mhz even at 1.2V despite running cooler and having a lower TDP.

HD6950 = 200W TDP
GTX470 = 215W TDP

There are plenty of cards with great performance/watt and low TDP vs. their competitors, but they are dogs at overclocking - HD5870 vs. GTX480/470, HD6870 vs. GTX560, etc.

A card with better performance/watt does not automatically overclock better.

From the perspective of buying my videocard for the purpose of overclocking it, the last thing I care about is performance/watt. The minute you start overclocking, you are throwing performance/watt out since power consumption increases significantly. Also, assuming overclocking is correlated with good performance/watt is not always true for all videocards.

overclocking a 6950 to GTX 580 speeds while overclocking a GTX 580 even higher closes the performance gap between the two (30%->20%)

The HD6900 series is also notorious for poor performance scaling when overclocked. So a 20% overclock (from 880 to 1000) does not net a direct performance boost on average, while 20% overclock on GTX580 makes it fly.

Take a look at this review. A fully overclocked 580 destroys an overclocked 6970.

I still think the reason you got the 6950 in the first place (and much like I did) was because it offered amazing performance for the price. The fact that it did that with similar power consumption to the 580 was just a bonus. I certainly would have purchased the 6950 anyway even if it consumed 100W more than the 580 because it was almost $250 less.
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
My response was that I deemed 36 fps to be 'unplayable'. Your response to that was that I should upgrade my monitor instead. You tell me how your comment made any sense in the context.
The point is I see getting a $500 graphics card and not having the monitor setup to put that power to good use (i.e., only make it run at 36FPS) ridiculous. I wouldn't be spending $500 on a graphics card for 1080p. To each his own, that's just my point of view.
And what exactly is 2500k @ 5.0ghz getting you for 140-150W of extra power consumption in games?
That's not how CPUs perform in games. No game will load a CPU like Prime95 does and that's a fundamental misunderstanding on your part. My CPU at 5.0GHz therefore is not costing me an extra 140-150W, but much, much less. As for what it gets me, uGrids=11, baby.
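A rough way to picture that claim (the delta and utilization numbers are assumptions, and real draw varies by game and chip):

```python
# Rough estimate: extra CPU draw in games vs. a synthetic full load (assumed figures).
full_load_delta_w = 140      # assumed 5.0GHz-vs-stock delta under a Prime95-style load
avg_game_cpu_load = 0.4      # assumed average CPU utilization while GPU-limited

print(f"~{full_load_delta_w * avg_game_cpu_load:.0f}W extra in games, "
      f"not the full {full_load_delta_w}W")
# With the GPU as the bottleneck, the CPU sits well below full load most of the time.
```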
I think you confuse performance/price with performance/watt.

You ignored completely my question: "Would you buy a $230 HD6950 if it consumes 100W more than a GTX580, if GTX580 cost $499?"
In this case is it performing at GTX 580 speeds? I already answered this I think:
MrK6 said:
You're mixing too many concepts with that statement to the point that they're wrong and not my argument in any form. A 6950 that consumed 100W more than a stock GTX 580 would be well over PCIe specs and would cause consideration for PSU load, heat load, safety, etc. These extra factors completely warp the argument to the point that it's inane. The savings gained from using a 6950 are a result of the chip being efficient. I bought a $230 6950 because I knew I could get $500 GTX 580 performance from it since the Cayman chip was proven in quality.
So you underclock your 2500k to stock speeds for other games?
I think Sandy Bridge does that already, at least on cores that aren't being used. But in either case, as I explained earlier, the power consumption difference isn't anywhere near 140-150W.
If the CPU is never 100% loaded, why do you overclock it to 5.0ghz then? If the CPU is never 100% loaded, you would downclock to the lowest possible clock speed so that it is 99-100% loaded. This way you'd save on power consumption since you wouldn't need to add unnecessary voltage to keep the CPU stable at 5.0ghz....
Because the CPU already does this itself, that's the beauty of C-States, Speed Step, and other power saving technologies. The 5.0Ghz clock speed is needed for split second intervals where I look over huge draw distances or some gnarly physics. They keep the minimum frames that much higher. The rest of the time, when my graphics card is maxed, the CPU sips power as it's not under any heavy stress. The chips we have today are incredibly efficient and smart.
Another personal attack. Hilarious.
You just demonstrated twice in this one post to which I'm replying how much you don't understand about how CPUs function in gaming. Telling you that is no more than stating the truth; there's no personal attack. Having a little humility and not being so defensive takes one a long way.

Your CPU power consumption is being discussed because you are making a big fuss about 50W of power consumption. You then proceeded to explain why enthusiasts should care, but in getting there you neither proved that lower TDP directly impacts overclocking ability, nor did you come up with any reason why people should actually care other than $20 more in electricity costs.
Heat, noise, money for higher tier PSU's, cooling equipment, etc. The list goes on, all because of slipshod engineering. And I thought it was commonly understood how lower TDP directly impacts overclocking: lower TDP = less heat = lower resistance = higher frequencies stabilized at lower voltages. That wasn't the point though.

Then you finally said that you are OK with the added power consumption because you can get similar performance for $200-270 less, etc. This argument is all over the place, mixing and matching performance/watt and performance/price.
Because it incorporates both. Pure performance/watt is little without overclockability just like pure performance/price is easily rearranged by overclockability. Informed consumers should look at hardware not for what it is, but of what it is capable.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The point is I see getting a $500 graphics card and not having the monitor setup to put that power to good use (i.e., only make it run at 36FPS) ridiculous. I wouldn't be spending $500 on a graphics card for 1080p. To each his own, that's just my point of view.

?

HD7970 barely gets 30 fps in Metro 2033 at my resolution. Who said anything about 2560x1600? Exactly, I am not going to spend $500 on a graphics card that can't improve performance to playable in the games I play at my resolution.

And I thought it was commonly understood how lower TDP directly impacts overclocking: lower TDP = less heat = lower resistance = higher frequencies stabilized at lower voltages. That wasn't the point though.

Absolutely not. This is totally incorrect. Not only that but AMD and NV measure TDP differently. Lower TDP does not directly correlate with overclocking % in any shape or form. It depends entirely on the videocard.

When most people discuss power consumption, they discuss it in absolute terms or in terms of costs or heat, etc., not in how TDP impacts your ability to overclock in % terms. Your entire argument for why anyone should care about power consumption rests on the assumption that parts with lower power consumption are better overclockers. That premise does not hold true for all AMD or NV cards.
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I think you and I finally came to the main point of our disagreement. It is this.

Not necessarily. This was exactly my point to you. My GTX470 overclocked 25% at 1.087V and it also ran hotter than my 6950 did. My HD6950 is a horrible overclocker and crashes at 910mhz even at 1.2V despite running cooler and having a lower TDP.

HD6950 = 200W TDP
GTX470 = 215W TDP
Heat and power consumption of the card compared to itself affects overclockability, not some other card. If heat and power consumption didn't affect overclockability, we wouldn't waste our time and money on elaborate cooling setups.
There are plenty of cards with great performance/watt and low TDP vs. their competitors, but they are dogs at overclocking - HD5870 vs. GTX480/470, HD6870 vs. GTX560, etc.
The HD5870 was a beastly overclocker. Mine hit 1050Mhz on air and 1.1GHz on water was common. Anyway, this goes back to my point that discussing performance/watt is moot if the chip is a dog at overclocking.

A card with better performance/watt does not automatically overclock better.
I agree, it does not. But when it is a good overclocker, it helps a lot.

From the perspective of buying my videocard for the purpose of overclocking it, the last thing I care about is performance/watt. The minute you start overclocking, you are throwing performance/watt out since power consumption increases significantly. Also, assuming overclocking is correlated with good performance/watt is not always true for all videocards.
Then you've missed a lot of tonight's discussion. Briefly summarized differently, a good overclocking card with a good performance/watt is starting at a lower threshold for cooling and power requirements. They have that much more headroom so that you can add that extra 0.0125V safely or you can run 1GHz even at a lower voltage because it runs so cool. It's just more flexibility due to an efficient, quality chip. The best I can say is look forward to my 6950/7970 comparison, as I'll cover a lot of this and I think it's a point that is better explained with live examples and good data.
The HD6900 series is also notorious for poor performance scaling when overclocked. So a 20% overclock (from 880 to 1000) does not net a direct performance boost on average, while 20% overclock on GTX580 makes it fly.
Percentage-wise it's pretty damn close. I think Fermi has better overclocking scaling, but it's not that different.

Take a look at this review. A fully overclocked 580 destroys an overclocked 6970.
Well yeah, the GTX580 is running a 21% overclock and the 6970 is only running a 6.8%. Run the 6970 at 1050MHz+ and then you'll have a fair comparison.
I still think the reason you got the 6950 in the first place (and much like I did) was because it offered amazing performance for the price. The fact that it did that with similar power consumption to the 580 was just a bonus. I certainly would have purchased the 6950 anyway even if it consumed 100W more than the 580 because it was almost $250 less.
I use all factors in my purchases. Performance/price is a big one, I agree and I think it's the best reason to purchase any piece of hardware, but not the only one. Personally, I was happy that the 6950 also unlocked and was such a great overclocker. You'll see in my review how similarly the 6950 performs to a GTX 580 and I think it'll be a better place to continue the discussion.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
?

HD7970 barely gets 30 fps in Metro 2033 at my resolution. Who said anything about 2560x1600? Exactly, I am not going to spend $500 on a graphics card that can't improve performance to playable in the games I play at my resolution.
Turn off DoF, problem solved.
Absolutely not. This is totally incorrect. Not only that but AMD and NV measure TDP differently. Lower TDP does not directly correlate with overclocking % in any shape or form. It depends entirely on the videocard.
Ah, I see what you mean. TDP as you're discussing it is a rating given by AMD and NVIDIA that can be considered arbitrary because, as you mentioned, it's measured differently with no common basis. It's a useless number and not worth mentioning. TDP in its true definition, being the maximum amount of heat that needs to be dissipated by the cooling solution, is, however. If you lower the heat load on the cooling solution, you reduce temps, which is one of the pillars of overclocking.
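One way to picture that heat-load point is a simple first-order thermal model; the cooler resistance and wattages below are assumptions, not measurements:

```python
# Simple first-order thermal model (illustrative; assumed numbers only):
# GPU temperature ~= ambient + dissipated power * cooler thermal resistance.
ambient_c = 25.0
cooler_c_per_w = 0.25   # assumed thermal resistance of the cooler, in C per watt

for power_w in (150, 200, 250):
    print(f"{power_w}W dissipated -> ~{ambient_c + power_w * cooler_c_per_w:.0f}C")
# Less heat into the same cooler means lower temps, or headroom for extra voltage.
```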

When most people discuss power consumption, they discuss it in absolute terms or in terms of costs or heat, etc., not in how TDP impacts your ability to overclock in % terms. Your entire argument for why anyone should care about power consumption rests on the assumption that parts with lower power consumption are better overclockers. That premise does not hold true for all AMD or NV cards.
Not at all. My argument rests on the fact that good overclocking parts with lower power consumption are even better overclocking parts.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If heat and power consumption didn't affect overclockability, we wouldn't waste our time and money on elaborate cooling setups.

It depends on the videocard. There is no direct rule you can apply, as you seem to think. You can have a card that runs at 60*C and can't overclock more than 10% (Radeon 8500) and a card that runs at 80*C and has 20% headroom (GTX470).

For some cards, there is little point in upgrading the cooling. It depends on the videocard. HD4890 or HD6870 could barely overclock even if you threw $60 Shaman on them. HD4890 was nearly at its limit when it came out. It hardly had any overclocking room. The card only had a TDP of 190W.

GTX470 had an insane overclocking headroom and had a TDP of 215W.

The ability of a card's overclock is not related to its TDP in any shape or form. So it was impossible to say first hand if HD6950 would be a better overclocker vs. GTX580 by comparing their TDPs or how cool they ran. On air cooling, the 6950 usually maxes out at 950-960 mhz (+20%), while GTX580 maxes out at 930-940mhz (+21-22%). If what you are saying is true, GTX580 could never overclock better because it runs hotter, consumes way more power and has far higher TDP.

The reason the HD7970 overclocks well is not because it has a low TDP/power consumption but because (1) AMD left a lot of clock headroom on 28nm on the table to achieve higher yields; (2) GCN may be a better architecture for overclocking than the Cayman architecture was. The lower power consumption / TDP of the 7970 is a benefit that the 28nm process brings. The real reason for the amazing overclocking is the 28nm node and low reference clocks vs. what that node is capable of running using that particular design at a specified voltage.

You are correlating overclocking abilities with lower power consumption and TDP, when those are not the factors that determine how well a graphics card overclocks.
The node, the architectural design, the power circuitry on the card, and how stable the chip is at a certain voltage at specified frequencies on a particular node before electromigration kicks in are far better predictors of overclocking ability. So is how aggressive the brand releasing the card is (in general, NV tends to leave a lot more headroom for overclocking than AMD does).

Using TDP and power consumption to gauge how well a particular card overclocks relative to itself, and comparing TDP and power consumption between AMD and NV to imply a direct causation for a GPU's overclocking ability, is one of the most inaccurate things I have ever read on our forum. Overclocking ability is unique to each individual graphics card and even SKU, and cannot be predicted based on the temperature of the GPU, its power consumption, or its TDP. This is especially true since NV and AMD GPUs are rated to operate at different temperatures and measure TDP completely differently.

The fact that HD7970 goes to 1125mhz on stock voltage is because of the efficiency of the 28nm node, and has nothing to do with how cool the 7970 runs or its power consumption or its TDP. How cool the card runs is actually also a function of the node, as well as the cooler. But you can have a great overclocking videocard even if it runs at 85*C+ at stock speeds to begin with (GTX470/480, etc.).

The HD5870 was a beastly overclocker. Mine hit 1050Mhz on air and 1.1GHz on water was common.

That's nothing. 23% more with high-end cooling? GTX470, GTX560 Ti, GTX460 can do a 23% overclock on reference air cooling. The point is you are assuming that TDP impacts overclocking, when it doesn't. You are also assuming that TDP is comparable between brands, which it isn't.

Briefly summarized differently, a good overclocking card with a good performance/watt is starting at a lower threshold for cooling and power requirements. They have that much more headroom so that you can add that extra 0.0125V safely or you can run 1GHz even at a lower voltage because it runs so cool. It's just more flexibility due to an efficient, quality chip.

You can drop the load temperature of HD6870, HD4890 and HD6990 by 50*C at load and none of them will overclock as well as a reference GTX460.

A cooler card with lower TDP =!= overclock better than a card with higher TDP that runs hotter. It's a case by case basis.

Percentage-wise it's pretty damn close. I think Fermi has better overclocking scaling, but it's not that different.

It's not close at all. When overclocked, the GTX580 is 40-50% faster in the same games that HD7970 destroys the 6970 in.

Well yeah, the GTX580 is running a 21% overclock and the 6970 is only running a 6.8%. Run the 6970 at 1050MHz+ and then you'll have a fair comparison.

You can see from the massive gap the GTX580 has in Starcraft 2, Civilization 5, Metro 2033, Crysis 2, Lost Planet 2 that a 1050 mhz HD6970 would never catch up anyway.
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
For some cards, there is 0 point in upgrading the cooling. It depends on the videocard. HD4890 or HD6870 could barely overclock even if you threw $60 Shaman on them. HD4890 was at its limit when it came out. It hardly had any overclocking room. The card only had a TDP of 190W.

GTX470 had an insane overclocking headroom and had a TDP of 215W.
None of this changes the point you quoted.
The ability of a card's overclock is not related to its TDP in any shape or form. So it was impossible to say first hand if HD6950 would be a better overclocker vs. GTX580 by comparing their TDPs.
Again, you misunderstand what TDP means. You're referring to a "rated TDP." There's a big difference.

The reason HD7970 overclocks well is not because it has a low TDP/power consumption but because (1) AMD left a lot of headroom on 28nm on the table; (2) GCN may be a better architecture for overclocking than Cayman architecture was. The lower power consumption / TDP of the 7970 is a benefit that 28nm process brings. The real reason for amazing overclocking is the 28nm node and low reference clocks vs. what that node is capable of. You are correlating overclocking abilities to power consumption and TDP, when those are not the factors that determine how well a graphics card overclocks. The node, the architecture, the power circuitry on the card, how stable the chip is at a certain voltage at specified frequencies before electromigration kicks in, etc.
They're all factors for how it overclocks, that's the point you're not getting. Glue a penny to your 6950 GPU and see how well it overclocks on just that.

Using TDP and power consumption to gauge how well a particular card overclocks relative to itself is one of the most inaccurate things I have ever read on our forum, especially from someone who has questioned my hardware knowledge.
Because you don't understand what the terms you're using mean. Here's a good summary of TDP: http://en.wikipedia.org/wiki/Thermal_design_power . You keep confusing AMD or NVIDIA's "rated TDP" with what TDP actually means. Thermal design power is how much power your cooling system has to dissipate. It is absolutely essential that this be minimized in order to get the highest overclocks. As I previously said, this is a cornerstone of overclocking.

That's nothing. 23% more with high-end cooling? GTX470, GTX560 Ti, GTX460 can do a 23% overclock on reference air cooling. The point is you are assuming that TDP impacts overclocking, when it doesn't. You are also assuming that TDP is comparable between brands, which it isn't.
That was on reference cooling, actually. And again, you're erroneously referring to "rated TDP."

You can drop the load temperature of HD6870, HD4890 and HD6990 by 50*C at load and none of them will overclock as well as a reference GTX460.

A cooler card with lower TDP =!= overclock better than a card with higher TDP that runs hotter. It's a case by case basis.
Yes it is. But again, you're still confusing "rated TDP" with TDP.

It's not close at all. When overclocked, the GTX580 is 40-50% faster in the same games that HD7970 destroys the 6970 in.
And the 6970 and 7970 overclock too. You're looking at the scalability of the overclock: what fraction of the percentage overclock do you see returned as performance? It'll vary on a card and game basis, and I agree, I think Fermi has better scaling, but overall it's not much different.

You can see from the massive gap the GTX580 has in Starcraft 2, Civilization 5, Metro 2033, Crysis 2, Lost Planet 2 that a 1050 mhz HD6970 would never catch up anyway.
Not without a new tessellation engine or in-game performance enhancing algorithms. Enter the 7970.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Because you don't understand what the terms you're using mean. Here's a good summary of TDP: http://en.wikipedia.org/wiki/Thermal_design_power . You keep confusing AMD or NVIDIA's "rated TDP" with what TDP actually means.

I am not confusing anything. You just come up with red herrings without addressing any of the points being made.

You cannot predict how well a card will overclock based on its power consumption or how cool a GPU runs. You try to twist and turn and come up with "proper definitions" of TDP because you ran out of room to support your view, because it's not correct.

When buying a graphics card with overclocking in mind, the card's TDP, or its total power consumption, has little to no relevance to how well a particular card will overclock. Any particular videocard's ability to overclock is impossible to predict without actually overclocking the card firsthand. You can have 100 HD7970s that will overclock well and 1 that can't do more than 1050mhz.

If I had 10 videocards with load power consumption of 50 to 200W and with GPUs running at 40*C to 80*C, you would never be able to tell me which card will overclock better.

So your argument that you purchased HD6950 because it had lower power consumption than GTX580 and ran cooler than GTX580, and as a result overclocked better falls flat on its face. The main reason you bought it was because it was $250 less.

And the main reason you keep bringing up 50W of extra power consumption is because it's another negative point you can use against NV. That's it. No one who cares about an extra 50W of power would be using an overclocked 2500k @ 5.0ghz.
 
Last edited:

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I am not confusing anything. You just come up with red herrings without addressing any of the points being made.
Yes, you are. Just like you confused how a CPU runs in a game. See a pattern forming?
You cannot predict how well a card will overclock based on its power consumption or how cool a GPU runs.

Therefore, when buying a graphics card with overclocking in mind, the card's TDP, or its total power consumption, has little to no relevance to how well a particular card will overclock.
That's its "rated TDP"; again, there's a difference. You can't predict how well a card will overclock except with raw data on the batch, and even then it's a hopeful correlation. None of this was ever in the argument.

So your argument that you purchased HD6950 because it had lower power consumption than GTX580 and ran cooler than GTX580, and as a result overclocked better falls flat on its face. The main reason you bought it was because it was $250 less.
You don't understand the argument. The 6950 was a proven overclocker before I even bought one in February. The fact that it was also on sale for $230 made it a steal. Where power consumption and TDP (the real TDP) come in is that I knew it would be easy for me to overclock the 6950 to GTX 580 speeds on my current cooling setup without making any changes or big sacrifices to temps or noise. It also had 2GB of RAM, but that's beside the point.

And the main reason you keep bringing up 50W of extra power consumption is because it's another negative point you can use against NV. That's it. No one who cares about an extra 50W of power would be using an overclocked 2500k @ 5.0ghz.
Ahhh, I've finally broken you. Why didn't you just save face and admit your intent from the start instead of having me drag you through the mud for the last three hours?
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I also think it's funny that you go back and edit every single post afterwards to try to save face. Too bad I quoted them.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Yes, you are. Just like you confused how a CPU runs in a game. See a pattern forming?

I am not confused at all. C-states and other power-saving technologies allow the CPU to run below its nominal frequency in idle states. Since none of these technologies affect operating frequencies at load, a 2500k @ 3.4ghz will always consume less power at load in games than a 2500k @ 5.0ghz would, regardless of C-states, EIST, etc. C-states and power-saving technologies do not help a processor when it's being loaded by a game, if the game actually needs the CPU's resources. You can try to minimize power consumption at load by configuring the overclock with Turbo multipliers on a per-core basis, but once the game loads a core, it's going to 5.0ghz when in use, which means that your CPU will always consume more power at load than a stock 2500k.
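For reference, the usual first-order scaling for dynamic CPU power at load is frequency times voltage squared; the voltages and baseline wattage below are assumptions for illustration only:

```python
# First-order dynamic power scaling: P roughly proportional to frequency * voltage^2.
# Baseline power and both voltages are assumed for illustration, not measured values.
base_ghz, base_v, base_load_w = 3.4, 1.10, 60    # assumed stock-ish load figures
oc_ghz, oc_v = 5.0, 1.40                         # assumed 5.0GHz overclock voltage

oc_load_w = base_load_w * (oc_ghz / base_ghz) * (oc_v / base_v) ** 2
print(f"~{oc_load_w:.0f}W at {oc_ghz}GHz vs ~{base_load_w}W stock under the same load")
# Roughly 2.4x the dynamic power at load; C-states only help while cores are idle.
```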

But more so, you already stated that 30-36 fps is perfectly playable for you in an FPS game. You then cited Skyrim as the game where you actually *needed* a 2500k @ 5.0ghz. However, a 2500k/2600k can easily achieve that without overclocking, so why would you overclock your CPU to 5.0ghz and incur the extra power consumption penalty if you care about power consumption? Surely for someone who absolutely cares about 50-55W of power, this would be prohibitive? You said yourself you don't care to game at 60 or even 120 fps, where CPU performance actually makes a huge difference in a game like Skyrim.

Perhaps you believe that your CPU uses 5-10W of power more at load when it's overclocked and running a game at 5.0ghz?

You don't understand the argument. Ahhh, I've finally broken you. Why didn't you just save face and admit your intent from the start instead of having me drag you through the mud for the last three hours?

I didn't have any intent, it's the only remaining logical conclusion left after 3-4 pages of seeing your side. You thought GTX480's 15-20% performance advantage over the 5870 was "meaningless". You thought GTX580's 20% performance advantage over HD6970 was unimportant even though GTX580 had that lead for 14 months. You argued that HD6970 can be overclocked to GTX580 speeds, but when a fully overclocked GTX580 was put head-to-head against an overclocked HD6970 and crushed it, you then claimed that GTX580's performance advantage in its overclocked state didn't add to playability, when in fact it easily cleared 30 fps in games where HD6970 could not.

If all you want is 30 fps in games on the PC, then GTX580 was perfectly fast for that for 14 months. Suddenly, HD7970 is a must buy at $550 with only 20% more performance over GTX580. Really?

I understand the argument clearly. In all situations where you cannot logically defend your view, you attack the poster (like Notty22's statistical knowledge, which has no relevance, or my monitor, again no relevance), or you dismiss any enthusiast's desire to game at 50-60 fps in an FPS or a racing game as not the norm, etc. You dismiss most websites anyone links as not counting since they are running "canned benchmarks", unless they show what you want to see (like [H] and TPU only). Ironic, because you completely ignore the fact that GameGPU.ru and Bit-Tech.net run manual benchmark runs, and, well, in your mini HD6950-to-HD7970 review you yourself plan to use canned/in-game benchmarks. Really?

You make a big deal about 50-55W of power consumption, while ignoring that HD7970 is already a very high power consuming card to begin with.

The best part is that you discuss how important power consumption is while coming from a previous-generation i5 760 @ 4.1ghz and an HD5850 @ 1000mhz+.

I think this graph speaks for itself:


Source

Everyone knows that I am a price/performance guy and I don't hide it. People might not agree with me and I am 100% OK with that. At least I consistently ripped GTX580 for being overpriced vs. GTX570 and HD7970 for costing $550 and bringing so little over 580 after 14 months at 2560x1600 where enthusiasts on 30 inch monitors actually needed it. I also said power consumption wasn't a big deal for high-end cards like GTX480/6970 because in the context of their overall system power consumption (with overclocked CPUs), it's a drop in the bucket. Even if people didn't agree with this view, I was consistent.

In your case, you throw your arms in the air over 50-55W of power in some cases, and somehow try to correlate a videocard's power consumption, its GPU temperature and its "real" TDP with its % overclocking ability (completely ignoring the node, the power circuitry, and how much clock headroom was left on the table at a specified voltage per node), while in other cases (like an i5 @ 4.1ghz and a 5850 @ 1000+ mhz) power consumption isn't really important because the card was a great price/performance candidate. So now price/performance trumps power consumption. You seem all over the place.

In other cases 30 fps is fast enough for enthusiast PC gamers, and you seem to think that HD7970 provides a whole new world of playability when the card can't even break 30 fps in Metro 2033, or Dragon Age 2, or Crysis 2 with AA at 2560x1600, your monitor's resolution. Really?

Notice a pattern of inconsistency?
 
Last edited: