hd4800's AND AA??!!


User5

Senior member
Jul 24, 2004
215
0
0
Hmm, I dunno how we got onto DirectX, lol... but about the CPU bottleneck, could someone elaborate on this?

I thought a bottleneck was when one part wasn't "fast" enough for the other part to work to its full potential. What does this have to do with AA? Or are you just saying that it doesn't handle AA better than any other video card, and that I just have more than enough power to be able to handle 4x?

I just did a quick test and ran Oblivion at 0xAA, getting 47 fps. I saved, reloaded the exact same spot with AA enabled, and the number only dropped to 45. So I really think they did dramatically improve the way it handles AA.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
They did dramatically improve the anti-aliasing performance, since it now relies on hardware resolve, but the Narrow and Wide Tent filters and Edge Detect still run on the shaders, since they're far more flexible there than they would be in fixed hardware (and they still incur the same performance hit, if not worse, for some unknown reason).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: evolucion8
Originally posted by: taltamir
0-20% faster at doing the exact same thing on a few cards is hardly "the future". Most developers are still using DX9 exclusively. Very few actually use DX10.
DX11 hardware will also be required to support DX10.1
Will those DX10.1 games come out before or after DX11 hits the market?

Those DX10 and DX10.1 titles will also work flawlessly on DX9 hardware as long as the developer makes sure not to use DX10 calls on DX9 hardware... oops, that is called writing two different code paths to do the same thing on different hardware! Having a separate executable for DX9 and DX10, or the same executable (or separate ones, their choice) for DX10 and DX10.1, does not change the need to write different code to do the same thing... which is exactly what I said... and it costs extra money (programmers) for a diminishing return.

Wrong, a strictly DX10 game won't run on DX9 hardware; it requires two different datapaths for that, which is completely different from the DX10.1 vs. DX10.0 case. Read what DX10.1 brings to the table, because you look ridiculous. It's not only the performance improvement, it's also more flexibility due to fewer restrictions, which allows effects that weren't possible in real time before, like global illumination thanks to cube map arrays. There are also more vertex shader inputs/outputs; Gather4, which can greatly improve performance when shadows are used (especially on ATi hardware, since nVidia already has excellent shadow-map acceleration); separate blend modes per multiple render target, which greatly improves performance with deferred rendering; additional LOD instructions; multisample reads and writes (nVidia already supports this, but it isn't exposed in DX10.0, only in OpenGL or when the developer codes for it); pixel coverage mask, which incurs a performance hit on G8X/G9X hardware since shader anti-aliasing is heavy on computing power; and increased precision for floating-point operations. So it's hardly a diminishing return; DX10.1 is what DX10 was supposed to be, just like SM3.0 was for DX9.

And if you have a DX10 card, how do they "avoid" using a DX10.1 feature on it?
option 1: visual artifacts
option 2: they write separate DX10 code to do the same task in DX10.

The only one looking ridiculous is you. And just an FYI, there are many ATI cards that only support DX10 and not DX10.1, so it isn't an nVidia issue. Even if nVidia had made G92 and G200 support DX10.1, it would not have made all the G80 and DX10 ATI cards pop out of existence.

Originally posted by: User5
Hmm, I dunno how we got onto DirectX, lol... but about the CPU bottleneck, could someone elaborate on this?

I thought a bottleneck was when one part wasn't "fast" enough for the other part to work to its full potential. What does this have to do with AA? Or are you just saying that it doesn't handle AA better than any other video card, and that I just have more than enough power to be able to handle 4x?

I just did a quick test and ran Oblivion at 0xAA, getting 47 fps. I saved, reloaded the exact same spot with AA enabled, and the number only dropped to 45. So I really think they did dramatically improve the way it handles AA.

I am going to grossly oversimplify, but:
Let's say your CPU limits you to 70 FPS, and your video card to 80 FPS without AA and 40 FPS with AA. Without AA you will be getting about 70 FPS due to the CPU limit; even though the video card can do 80 at those settings, it will not be fully utilized. With AA you will get 40 FPS, and now the CPU is the part that is not fully utilized.

However, if your CPU is limiting you to 47 FPS, then no matter how fast your video card is you will not get more than 47 FPS. You could lower the graphics settings and the resolution and still be stuck at poor FPS (your average will rise, but the minimum will stay low, so it is not always obvious).

Firstly, you are testing AVERAGE FPS, not minimum. Secondly, you might be CPU bound at 47 FPS, with the video card limited to 100 FPS without AA and 45 FPS with AA... when you enable AA you drop from 47 (CPU limit) to 45 (new GPU limit). It looks like almost free AA, but in that example it was a 55% drop in GPU performance; you just didn't SEE it because the CPU limited you first.

Of course in REAL situations the amount of CPU and GPU usage varies all the time; AI, different items on screen, and whatnot each take a different amount from each part of the system. But if in a certain zone your CPU dips into the low 20s, then there is nothing you can do to prevent it. Upgrading your GPU OR lowering your resolution could increase your max FPS in that zone from 30 to 300, but you will still get those same dips into the low twenties.
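To put rough numbers on that picture, here is a toy sketch in Python (the figures are just the ones from the example above, nothing measured; effective_fps is a made-up helper, not anything from a real tool):

def effective_fps(cpu_fps, gpu_fps):
    # The frame rate you actually see is roughly capped by whichever part is slower.
    return min(cpu_fps, gpu_fps)

cpu_limit = 47      # what the CPU can feed per second in this scene
gpu_no_aa = 100     # hypothetical GPU limit without AA
gpu_with_aa = 45    # hypothetical GPU limit with AA enabled

print(effective_fps(cpu_limit, gpu_no_aa))    # 47 -> CPU bound, GPU partly idle
print(effective_fps(cpu_limit, gpu_with_aa))  # 45 -> AA looks "almost free"
# The GPU actually lost 55% of its headroom (100 -> 45); the CPU cap just hid it.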
 

User5

Senior member
Jul 24, 2004
215
0
0
Wow, thanks for the info! I actually didn't know anything about the CPU limiting the FPS, but I made as much sense of that as I could with my bare-minimum knowledge of computers... does an E6750 limit FPS with AA?

The FPS numbers I took were actually not the "average" but I guess the MODE, as the number is almost always fluctuating, but that was the number that kept coming up most often. Basically, my 8800 GTS 320MB was chugging along with 2x AA enabled in a lot of games. I couldn't even consider playing at 4x unless it was a game from about 5 years ago. But now there's no need to even lower it from 4x... I love it.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: taltamir
Originally posted by: evolucion8
Originally posted by: taltamir
0-20% faster at doing the exact same thing on a few cards is hardly "the future". Most developers are still using DX9 exclusively. Very few actually use DX10.
DX11 hardware will also be required to support DX10.1
Will those DX10.1 games come out before or after DX11 hits the market?

Those DX10 and DX10.1 titles will also work flawlessly on DX9 hardware as long as the developer makes sure not to use DX10 calls on DX9 hardware... oops, that is called writing two different code paths to do the same thing on different hardware! Having a separate executable for DX9 and DX10, or the same executable (or separate ones, their choice) for DX10 and DX10.1, does not change the need to write different code to do the same thing... which is exactly what I said... and it costs extra money (programmers) for a diminishing return.

Wrong, a strictly DX10 game won't run on DX9 hardware; it requires two different datapaths for that, which is completely different from the DX10.1 vs. DX10.0 case. Read what DX10.1 brings to the table, because you look ridiculous. It's not only the performance improvement, it's also more flexibility due to fewer restrictions, which allows effects that weren't possible in real time before, like global illumination thanks to cube map arrays. There are also more vertex shader inputs/outputs; Gather4, which can greatly improve performance when shadows are used (especially on ATi hardware, since nVidia already has excellent shadow-map acceleration); separate blend modes per multiple render target, which greatly improves performance with deferred rendering; additional LOD instructions; multisample reads and writes (nVidia already supports this, but it isn't exposed in DX10.0, only in OpenGL or when the developer codes for it); pixel coverage mask, which incurs a performance hit on G8X/G9X hardware since shader anti-aliasing is heavy on computing power; and increased precision for floating-point operations. So it's hardly a diminishing return; DX10.1 is what DX10 was supposed to be, just like SM3.0 was for DX9.

And if you have a DX10 card, how do they "avoid" using a DX10.1 feature on it?
option 1: visual artifacts
option 2: they write separate DX10 code to do the same task in DX10.

The only one looking ridiculous is you. And just an FYI, there are many ATI cards that only support DX10 and not DX10.1, so it isn't an nVidia issue. Even if nVidia had made G92 and G200 support DX10.1, it would not have made all the G80 and DX10 ATI cards pop out of existence.

Heck no, you are the one who is talking senselessly about it. Coding for DX10.1 doesn't need a special code path or anything. You can't stop progress; how many DX9 cards are out there right now? Lots, more than 1.5 times the number of DX10 cards, and that didn't stop developers from using DX10 even though most games are still written for DX9. If a game is written for DX10.1 and the card is DX10, it shouldn't have problems unless the game is done by a lazy or tiny development group. It is an nVidia issue that it currently doesn't support DX10.1; even the new generation of S3 cards supports it. When DX10 cards first appeared on the market, that didn't make the DX9 cards pop out of existence, and neither will DX10.1 or the next DX11... stop being an nVidia fanboy and face the truth: nVidia currently has a very competitive card, the GTX 280 (quite a powerful one), but it's still slower than its rival card (the HD 4870 X2), has a smaller feature set, and is still using technology from 2006. Welcome to 2008!!!
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Different people have different tolerance levels for frame-rate dips. If a game feels choppy, and you lower the resolution and AA a lot and it still feels choppy, then it is probably the CPU being too slow.

As for the CPU you mentioned: it depends on the specific game being played. For most games, no; for a few games (and more in the future), yes, it will be too slow.

Even if the CPU is limiting you, as long as it limits you at a level where gameplay feels smooth you are OK. I recommend upgrading when you actually encounter such a game (since prices drop all the time).

stop being an nVidia fanboy and face the truth: nVidia currently has a very competitive card, the GTX 280 (quite a powerful one), but it's still slower than its rival card (the HD 4870 X2), has a smaller feature set, and is still using technology from 2006. Welcome to 2008!!!

Please check the two links in my signature; you will find both of my computers are running ATI cards, the last of which was purchased less than 3 weeks ago.

You know how I know that I am right and you are wrong? There is not a SINGLE DX10.1 title on the market. Not ONE! That means every single developer out there in the entire world agrees with me.
Will this change in the future? Of course, but my statements are about TODAY, not the future.
Assassin's Creed MIGHT get DX10.1 support back if they can find the time to fix it (it was reported broken by AnandTech and every other review site BEFORE it was removed, for you conspiracy nuts out there).
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
This thread is about anti-aliasing, let's please keep it about that, and not about DX 10 vs 10.1 vs 11 unless you have something more specific and important to argue on the merits thereof.
 

zod96

Platinum Member
May 28, 2007
2,868
68
91
I've been very impressed by the AA/AF performance of the 4850. Far better than my old 8800GT
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
It"s about time for ATi to reclaim the Anti Aliasing performance crown which had since the 9700PRO era until it was lost when the HD 2XXX and HD 3XXX came out.
 

CP5670

Diamond Member
Jun 24, 2004
5,535
613
126
Originally posted by: zod96
Is adaptive AA the same as nVidia's supersampling AA? Because I just tried GRID with my 4850 at 8xAA Adaptive and 16xAF HQ at 1680x1050 and it was still averaging about 60 fps. I tried the same with my old 8800GT and it was getting about 30 fps using supersampling AA.

If you're talking about transparency supersampling AA, then yeah, that's Nvidia's equivalent. The normal supersampling modes are different and are much more intensive as they cover the entire screen.
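As a back-of-the-envelope illustration of why full-scene supersampling is so much heavier than the transparency/adaptive modes (toy arithmetic only; the resolution is just the one mentioned above):

width, height, samples = 1680, 1050, 4

base_pixels = width * height              # 1,764,000 pixels shaded per frame
full_ssaa_pixels = base_pixels * samples  # 7,056,000 -- the whole screen shaded 4x over

print(base_pixels, full_ssaa_pixels)
# Transparency/adaptive AA only re-samples alpha-tested surfaces (fences, foliage),
# so its extra cost scales with how much of the frame those cover, not the whole screen.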
 

ManWithNoName

Senior member
Oct 19, 2007
396
0
0
Originally posted by: zod96
I've been very impressed by the AA/AF performance of the 4850. Far better than my old 8800GT

Same here with my HD 4870. Just set it and forget it. Everything I have runs great using 8xAA/16xAF. I was floored after using it in just a few games, as I wasn't expecting that much of an improvement over my GTS G92. Even very old games such as Land of the Dead were improved visually. IQ totally off the hook. And no, I don't play Crysis and don't have any desire to. So whether you call this "free anti-aliasing" or not, I'm glad I bought it.

Also for the record, the last two cards of mine were an 8800GT and an 8800GTS G92.
 

ManWithNoName

Senior member
Oct 19, 2007
396
0
0
Originally posted by: evolucion8
It"s about time for ATi to reclaim the Anti Aliasing performance crown which had since the 9700PRO era until it was lost when the HD 2XXX and HD 3XXX came out.

:thumbsup: :thumbsup:

 

unr3al

Senior member
Jun 10, 2008
214
1
81
www.link-up.co.za
My monitor is some no-name-brand CRT maxed out at 1024x768... lol. But I'm going for a 1440x900 LCD soon, can't wait for that, let alone having AA and AF turned on as well...
 

AdamK47

Lifer
Oct 9, 1999
15,550
3,254
136
How is the AA compatibility? I stopped using AA in all my games since every time a new game came out there was a big chance the game would need a patch, or the drivers would need to be updated, in order for it to work. Does ATI's method work regardless of the game now? That's the big reason I don't use AA; it's not about performance. If ATI can get AA to work with all of my games that use D3D or OGL, then that's the card I'll be upgrading to.
 

Paratus

Lifer
Jun 4, 2004
17,123
14,491
146
I read that article last night. Very interesting stuff. I wonder what BFG thinks about it
 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
Originally posted by: taltamir
it will be funny if the reviews saying "free AA" just didn't realize they were CPU bound.... funny and sad

it will be funny if all the people claiming "CPU bound" just didn't realize there is a difference between CPU-limited and CPU-bound.... funny and sad
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: hooflung
RV770 can do free 4x AA, R700 can do free 8x AA. The R700 can do 24x CFAA in CrossFire at 1920x1200 with 51 FPS

http://enthusiast.hardocp.com/...w3LCxoZW50aHVzaWFzdA==
While it's true RV770 was redesigned to get "free 4xAA", that's the extent of it. With AoC it's clearly obvious the game is CPU bottlenecked or frame capped, which allows for "free" GPU load increases. In beta I know the game was capped at 65 FPS; it looks to be around 80 FPS in the HardOCP reviews. AT's review confirms it, as they seem to be hitting the same averages as HOCP, even at lower resolutions like 1680. You're the big AoC fan; you can test this yourself by dropping the resolution as low as possible and seeing if it noticeably increases frame rates.

After all these replies I'm surprised no one has mentioned why RV770 allows for free AA. Few sites mentioned it, but it was covered pretty early on. I was surprised AT didn't go into more detail about it either, as this is clearly the biggest coup ATI has pulled off in years. They completely redesigned the ROPs, not only adding hardware AA resolve but also doubling the number of pixels drawn per clock with AA enabled. This table from X-bit shows 2x/4x at "full speed" or "free" and 8x at "half speed" with RV770. These hardware MSAA modes can be further combined with the shader resolve/edge detect/CFAA modes for even more "free" AA. Xbit's coverage in full. I think there was another review floating around that went into even more detail, can't find it atm though.
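A toy model of what "full speed" vs. "half speed" resolve means for ROP throughput (the relative rates follow the X-bit description above; the ROP count and clock are placeholder figures for illustration, not vendor specs):

rops, clock_hz = 16, 750e6                   # placeholder RV770-ish figures
rate = {0: 1.0, 2: 1.0, 4: 1.0, 8: 0.5}      # pixels per ROP per clock relative to no AA

for aa in (0, 2, 4, 8):
    gpix_per_sec = rops * clock_hz * rate[aa] / 1e9
    print(f"{aa}xAA: {gpix_per_sec:.1f} Gpix/s of resolve throughput")
# 2x/4x costing the same as 0x is what makes the AA look "free" whenever the ROPs,
# rather than the shaders or memory bandwidth, are the limiting factor.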

Originally posted by: deadseasquirrel
Originally posted by: taltamir
it will be funny if the reviews saying "free AA" just didn't realize they were CPU bound.... funny and sad

it will be funny if all the people claiming "CPU bound" just didn't realize there is a difference between CPU-limited and CPU-bound.... funny and sad

The only difference is semantics. GPU and CPU bottlenecks aren't mutually exclusive; you can be both simultaneously, you just won't know for sure until you lessen one or the other.
 

User5

Senior member
Jul 24, 2004
215
0
0
Thanks for the info.. I'm surprised that I didn't hear much about this prior to buying the card, but I guess it was pretty awesome finding out on my own that I no longer have to stress about adding AA!

Also nice to know, since this means we 4800 owners (and hopefully all cards in the future will work the same way) get to say goodbye to aliasing forever... you won't be missed!
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
Just read the HardOCP article, good stuff (seems like they are getting back on track)

One thing I noticed right away: 24xAA looks AMAZING! The difference between 24x and 4x is HUGE... It's worth getting a 4870 X2 just for that.

It also seems ATi has a slight IQ edge this round... In the power cable screens the 4870 clearly has a more defined and smoother image :thumbsup:
 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
Originally posted by: chizow
Originally posted by: deadseasquirrel
Originally posted by: taltamir
it will be funny if the reviews saying "free AA" just didn't realize they were CPU bound.... funny and sad

it will be funny if all the people claiming "CPU bound" just didn't realize there is a difference between CPU-limited and CPU-bound.... funny and sad

The only difference is semantics. GPU and CPU bottlenecks aren't mutually exclusive; you can be both simultaneously, you just won't know for sure until you lessen one or the other.

While I agree with you that CPU and GPU bottlenecks don't have to be mutually exclusive (nobody ever mentioned GPU bottlenecks), I disagree that the only difference between CPU-limited and CPU-bottlenecked is semantics. They are two different concepts in my mind:

CPU-limited -- a faster CPU will show an increase in FPS vs. a slower CPU matched with the same card in the same benchmark, i.e. the card's performance has been "limited" by the slower CPU. In my opinion, pretty much every single game and every single GPU is "CPU-limited" to an extent. A faster CPU will almost always result in better FPS, even if just by a small amount. It's that amount that determines whether one would be better off getting a faster CPU or not, based on cost vs. performance increase.

CPU-bottlenecked -- no FPS increase is seen when upgrading to a faster GPU, i.e. a rig playing Oblivion at 1024x768 with an Athlon XP 1700+ and a 6600GT gets the same FPS when the GPU is swapped for an 8800GT. You are now CPU-bottlenecked and a faster GPU won't help you. Again, this is just an example... I don't know whether that game with those components behaves that way or not.

Here is an example. It's not a totally fair comparison because one benchmark is done with 2GB of RAM and the other with 4GB, but right now it's one of the closest examples we can get to -- 4870 at 4GHz, and 4870 at 3GHz.

This is UT3. At 3GHz and 1280x1024, you are CPU-limited in this game: a 4GHz CPU with the same 4870 gives it a 60+% increase. BUT you are not CPU-bottlenecked at all at 3GHz, because a faster GPU than the 4870 still increases FPS a lot.

And in the same benchmark, just changing resolutions, we see the CPU limitation disappear at 25x16 with a 3GHz CPU vs. a 4GHz CPU with a single 4870. BUT slap in a second 4870 for CF and BAM, you have a CPU limitation again, even at 25x16, because FPS jumps up by about 20%.

CPU limitations and bottlenecks can vary across games and platforms. We see a lot of questions such as "will my E6600 hold back a 4870X2?" The answer is, of course, yes: any CPU will be a limitation. But *how much* it will hold it back is the key, and that varies game-to-game and resolution-to-resolution, all matched with varying CPUs. The honest answer is that it's too difficult to answer such a generic question with the narrow-focused reviews we have today.
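The two tests described above boil down to something like this sketch (a hypothetical helper, not anything from a review tool; you would feed it your own benchmark numbers, and the example figures are only rounded approximations of the UT3 results linked above):

def classify(fps_base, fps_faster_cpu, fps_faster_gpu, tol=0.03):
    cpu_limited = fps_faster_cpu > fps_base * (1 + tol)        # a faster CPU helped
    cpu_bottlenecked = fps_faster_gpu <= fps_base * (1 + tol)  # a faster GPU did not
    return cpu_limited, cpu_bottlenecked

print(classify(fps_base=70, fps_faster_cpu=113, fps_faster_gpu=95))
# -> (True, False): CPU-limited (the 4GHz chip gains a lot) but not CPU-bottlenecked
#    (a faster GPU still gains), which is exactly the distinction drawn above.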
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: deadseasquirrel
Originally posted by: chizow
Originally posted by: deadseasquirrel
Originally posted by: taltamir
it will be funny if the reviews saying "free AA" just didn't realize they were CPU bound.... funny and sad

it will be funny if all the people claiming "CPU bound" just didn't realize there is a difference between CPU-limited and CPU-bound.... funny and sad

The only difference is semantics. GPU and CPU bottlenecks aren't mutually exclusive; you can be both simultaneously, you just won't know for sure until you lessen one or the other.

While I agree with you that CPU and GPU bottlenecks don't have to be mutually exclusive (nobody ever mentioned GPU bottlenecks), I disagree that the only difference between CPU-limited and CPU-bottlenecked is semantics. They are two different concepts in my mind:

CPU-limited -- a faster CPU will show an increase in FPS vs. a slower CPU matched with the same card in the same benchmark, i.e. the card's performance has been "limited" by the slower CPU. In my opinion, pretty much every single game and every single GPU is "CPU-limited" to an extent. A faster CPU will almost always result in better FPS, even if just by a small amount. It's that amount that determines whether one would be better off getting a faster CPU or not, based on cost vs. performance increase.

CPU-bottlenecked -- no FPS increase is seen when upgrading to a faster GPU, i.e. a rig playing Oblivion at 1024x768 with an Athlon XP 1700+ and a 6600GT gets the same FPS when the GPU is swapped for an 8800GT. You are now CPU-bottlenecked and a faster GPU won't help you. Again, this is just an example... I don't know whether that game with those components behaves that way or not.

Here is an example. It's not a totally fair comparison because one benchmark is done with 2GB of RAM and the other with 4GB, but right now it's one of the closest examples we can get to -- 4870 at 4GHz, and 4870 at 3GHz.

This is UT3. At 3GHz and 1280x1024, you are CPU-limited in this game: a 4GHz CPU with the same 4870 gives it a 60+% increase. BUT you are not CPU-bottlenecked at all at 3GHz, because a faster GPU than the 4870 still increases FPS a lot.

And in the same benchmark, just changing resolutions, we see the CPU limitation disappear at 25x16 with a 3GHz CPU vs. a 4GHz CPU with a single 4870. BUT slap in a second 4870 for CF and BAM, you have a CPU limitation again, even at 25x16, because FPS jumps up by about 20%.

CPU limitations and bottlenecks can vary across games and platforms. We see a lot of questions such as "will my E6600 hold back a 4870X2?" The answer is, of course, yes: any CPU will be a limitation. But *how much* it will hold it back is the key, and that varies game-to-game and resolution-to-resolution, all matched with varying CPUs. The honest answer is that it's too difficult to answer such a generic question with the narrow-focused reviews we have today.

I'm actually very familiar with those benchmarks; I've linked to them numerous times, and I don't disagree with much of what you're saying because, like I've said, it just comes down to semantics. You're assigning an arbitrary and absolute value to "bottlenecked" and then claiming everything else in between for "limited", when those values are based only on your subjective definitions.

I'll illustrate the point with the examples we're both familiar with. Even in your example, if you look at the 3GHz results, you'll see that every solution, whether a fast single GPU or multi-GPU, tends to cap around the same FPS: 110-115 at 1920 and 115-120 at 1280. This is not by accident... clearly the CPU is no longer able to generate more frames to be rendered, regardless of the GPU solution, even across different resolutions, and even when you cut the pixel count roughly in half by going from 1920 to 1280. Clearly this is bottlenecked even by your definition.

Now look at the 4GHz results and you will once again see separation, even at 1280 where there was none before. The most telling differences come with the RV770 parts, since they're mostly just clockspeed increases of one another and scale almost linearly when they're not CPU bottlenecked. At 3GHz there is almost no difference between the 4850 CF and 4870 CF solutions; at 4GHz the difference is obvious. Even more telling is the fact that a single 4870 at 4GHz is FASTER than 4870 CF at 3GHz. By lessening the bottleneck you again see performance gains where there were none before.

Honestly, I don't see how this isn't super obvious. People are using low resolutions and slower parts and saying they run "fine", or linking to year-old benchmarks with GPUs that are less than half as fast as today's fastest solutions, as justification that there isn't a need for faster CPUs... it really makes no sense.
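A quick way to express that "everything piles up at the same FPS" check as a sketch (hypothetical helper with made-up numbers in the spirit of the 3GHz vs. 4GHz results cited above, not the actual review data):

def looks_cpu_capped(results, spread=0.05):
    # If wildly different GPU setups land within a few percent of each other,
    # the CPU (or an engine frame cap) is the limit, not the graphics cards.
    lo, hi = min(results.values()), max(results.values())
    return (hi - lo) / hi <= spread

at_3ghz = {"4870": 110, "4870 CF": 113, "GTX 280": 112, "GTX 280 SLI": 115}
at_4ghz = {"4870": 128, "4870 CF": 165, "GTX 280": 140, "GTX 280 SLI": 178}

print(looks_cpu_capped(at_3ghz))  # True  -> everything converges around 110-115 fps
print(looks_cpu_capped(at_4ghz))  # False -> the GPU solutions separate again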

 

hooflung

Golden Member
Dec 31, 2004
1,190
1
0
Originally posted by: chizow
You're the big AoC fan; you can test this yourself by dropping the resolution as low as possible and seeing if it noticeably increases frame rates.

Well, on my Opteron 180 at stock, with 8800GS SLI and 4GB of RAM, I can do 8xAA and 16xAF at 1280x1024 all day long and still get about 35-40 FPS (in cities it's usually 22-28).

When I crank it to 1680x1050 I'm seeing single-digit frame rates.

My Core 2 Duo E4300 at 2.7GHz, with 4GB of DDR2 4-4-4-12 and an HD 3850, can match the SLI rig at 12x10 for the most part, although 2x-4x AA runs smoother overall than 8x. At 1680x1050 the C2D can do 0xAA/16xAF and still get about 40 FPS in general (with 20-30 FPS in cities).

Take those numbers with a grain of salt; they are just observations over hours of gameplay, not exactly benchmarked for a review. Of course they were FRAPS numbers, but they match the in-game FPS counter pretty well.

I don't like the way AoC plays at all above 12x10 on the Opteron. Even at 14x9 the Opteron runs about like the C2D/HD3850 does at 16x10 with AA cranked up to 8x.

Make your own conclusions.
 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
Originally posted by: chizow
You're assigning an arbitrary and absolute value to "bottlenecked" and then claiming everything else in between for "limited", when those values are based only on your subjective definitions.
I agree. These are my subjective definitions (as prefaced with IMO when I posted them). However, do you agree that the terms "bottleneck" and "limit" have inherent differences, with bottlenecking implying a much *stronger* limitation? That is the underlying argument for my separating the two classifications.

People are using low resolutions and slower parts and saying they run "fine", or linking to year-old benchmarks with GPUs that are less than half as fast as today's fastest solutions, as justification that there isn't a need for faster CPUs... it really makes no sense.
I agree with you. A single example is NOT good enough to say that XYZ CPU is good enough for ABC GPU. A poster mentioning that his Opty 165 runs everything "fine" on high at 16x10 with a 4870 doesn't mean that the Opty isn't limiting the 4870 by a significant margin (compared to a C2Q at 4GHz). However, it also doesn't mean that the poster should have kept his 7800GT instead. No doubt he is seeing a bigger increase in performance by upgrading his GPU than he would by changing out his platform (dollar for dollar).

My opinion is that there is no easy answer to the generic question we see time and time again in this forum -- will ABC GPU be bottlenecked by my XYZ CPU?

I'm really looking forward to some review site out there testing the current crop of GPUs with *various* CPUs across many games and several resolutions, and not just a C2Q clocked at different speeds with some cores disabled. Show me an A64 single-core at 2.8 too. And I want to see MIN/AVG/MAX as well. I know, I know... I've just asked for a 30-page review. Oh well, maybe it'll happen.
 