ATi 4870 X2 (R700) previews thread


HurleyBird

Platinum Member
Apr 22, 2003
2,759
1,455
136
GTX has better core/shader oc potential, 4870 has more mem oc potential.

EDIT: Heh, thought page 3 was the last page.
 

OCGuy

Lifer
Jul 12, 2000
27,224
36
91
Originally posted by: Compddd
Why are so many people quoting $450 as the new price for GTX280 on these forums, but the articles on the various hardware sites say $500?

EVGA 280 $449 @ Egg and EVGA.com ... I guess it's time to upgrade if I can get any decent price for this 260 :/

Edit: Maybe they got word the X2 will surprise-launch at a lower price??
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
Originally posted by: Stoneburner
What's the expected scaling with Quad-Fire?

expected: good

currently: bad

according to HardOCP's 4870X2 Quad-fire review.

But we should expect something a bit better than the 0% increase we currently see going from 2 GPUs to 4 when the new drivers drop with the card in August.

 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: Rhino2
Originally posted by: nitromullet
Another thing to consider (and I know BFG has already thought of this) is, if micro stutter has been fixed, how did they do it? Did ATI introduce a delay on every second AFR frame to keep the intervals relatively even, or did they come up with another solution?

The reason this is important is that if a delay was added to every second AFR frame, it would result in increased input lag. This is already a deal breaker for some. I imagine that we will find out how fixed micro-stutter is when the 4870X2 is released, and what (if any) effects the remedy has on gameplay.

I know there was talk of some level of GPU-to-GPU communication being possible on the 4870X2, though until the full NDA lift we won't really know how deep this goes or if it actually affects anything.

True. The only reason I mention the concept of introducing a delay to 'even' out frames is because when the whole micro stutter thing became the hot button a few months ago, the only successful proof of concept at that time was to introduce a delay.

Considering that this was just a few months ago, ATI would have had to fix micro stutter pretty quickly. Makes you wonder why it was ever a problem to begin with if it was so easy to solve... Surely they noticed it in their testing of the 3870X2. I know I noticed it on my 3870X2; I just didn't know why I was seeing high fps but jerky/uneven gameplay.

Honestly, the more I hear about the 4870X2, the more I want to try one out... Kinda sucks that you have to drop $500 for the dubious honor of being a test monkey.
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: nitromullet
True. The only reason I mention the concept of introducing a delay to 'even' out frames is because when the whole micro stutter thing became the hot button a few months ago, the only successful proof of concept at that time was to introduce a delay.

Considering that this was just a few months ago, ATI would have had to fix micro stutter pretty quickly. Makes you wonder why it was ever a problem to begin with if it was so easy to solve... Surely they noticed it in their testing of the 3870X2. I know I noticed it on my 3870X2; I just didn't know why I was seeing high fps but jerky/uneven gameplay.

Honestly, the more I hear about the 4870X2, the more I want to try one out... Kinda sucks that you have to drop $500 for the dubious honor of being a test monkey.

Increased lag would depend on where the delay is introduced. If the delay is at the start of processing the new frame (which requires a predictor for processing time), then you wouldn't see any lag, while if the delay is between finished frame and displayed frame, then there would be lag. The latter is the more straightforward to implement. In either case, perceived lag would be much less (if perceptible at all), given that fewer than 50% of frames would suffer the delay.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: toslat
Originally posted by: nitromullet
True. The only reason I mention the concept of introducing a delay to 'even' out frames is because when the whole micro stutter thing became the hot button a few months ago, the only successful proof of concept at that time was to introduce a delay.

Considering that this was just a few months ago, ATI would have had to fix micro stutter pretty quickly. Makes you wonder why it was ever a problem to begin with if it was so easy to solve... Surely they noticed it in their testing of the 3870X2. I know I noticed it on my 3870X2; I just didn't know why I was seeing high fps but jerky/uneven gameplay.

Honestly, the more I hear about the 4870X2, the more I want to try one out... Kinda sucks that you have to drop $500 for the dubious honor of being a test monkey.

Increased lag would depend on where the delay is introduced. If the delay is at the start of processing the new frame (which requires a predictor for processing time), then you wouldn't see any lag, while if the delay is between finished frame and displayed frame, then there would be lag. The latter is the more straightforward to implement. In either case, perceived lag would be much less (if perceptible at all), given that fewer than 50% of frames would suffer the delay.

I would think that the delay would have to be introduced between finished frame and displayed frame. The reason for micro stutter is that one frame follows the one before it 'too closely', leaving a huge gap between it and the next frame; that gap is what causes the stutter.

Since this won't be a constant, it seems to me that the only way it could be fixed by imposing a delay would be to build in a rule that in effect says: if interval <= x, then insert delay. If that is the case, the frame would have to already be rendered, or the interval would be unknown.

If you were to introduce a delay prior to rendering, it seems that the only effect would be to reduce the fps, since you would have to apply the delay to every second frame rendered.

I'm really out of my area of expertise here though, so I could be completely incorrect or misunderstanding what you were saying. Of course, there is also the possibility that ATI solved this issue by some means other than introducing a delay at all.
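
As a toy illustration of that "if interval <= x, then insert delay" idea, here's a small Python sketch of my own (not anything ATI has confirmed; the timestamps and the 25ms threshold are made up):

def pace(ready_times_ms, min_interval_ms):
    """Hold back any finished frame that arrives too soon after the previous one."""
    shown, last = [], None
    for t in ready_times_ms:
        if last is not None and t - last < min_interval_ms:
            t = last + min_interval_ms  # frame is already rendered; delay only its display
        shown.append(t)
        last = t
    return shown

# AFR output with micro-stutter: pairs arrive 10ms apart, then a 40ms gap.
print(pace([0, 10, 50, 60, 100, 110], 25))  # -> [0, 25, 50, 75, 100, 125]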
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: nitromullet
I would think that the delay would have to be introduced between finished frame and displayed frame. The reason for micro stutter is that one frame follows the one before it 'too closely', leaving a huge gap between it and the next frame; that gap is what causes the stutter.

Since this won't be a constant, it seems to me that the only way it could be fixed by imposing a delay would be to build in a rule that in effect says: if interval <= x, then insert delay. If that is the case, the frame would have to already be rendered, or the interval would be unknown.

If you were to introduce a delay prior to rendering, it seems that the only effect would be to reduce the fps, since you would have to apply the delay to every second frame rendered.

I'm really out of my area of expertise here though, so I could be completely incorrect or misunderstanding what you were saying. Of course, there is also the possibility that ATI solved this issue by some means other than introducing a delay at all.

Like you said, the whole aim is to remove the skew in frame times before the display.

The time at which a frame is ready to be displayed depends on when the GPU starts working on that frame and on how long the GPU takes to process it. Hence, to adjust the time at which a frame is finished, I could either finish the frame and hold on to it, or start processing it late. In the former case, each frame is held until the appropriate time, while in the latter, the delay is applied at intervals to correct the skew. Both need to be adaptive, as the right amount of delay depends on the FPS.

(Scenario: GPU1 renders odd-numbered frames, while GPU2 renders even frames.)
The amount of delay to be introduced depends on when the previous frame was finished and when the next frame is expected to be ready, i.e. the delay for #4 depends on when #3 is ready and when #5 is expected. Even if we wait until #3 is ready, the time for #5 has to be predicted. Hence the system will require some form of predictor, whichever option is chosen. For ease of implementation, most likely both values will be predicted. The accuracy of the predictor will improve as the variance in processing time shrinks (at the extreme of constant instantaneous FPS from a single GPU, the predictor is always right). Of course, higher accuracy is required at lower frame rates.

The problem with the 'start delay' option is that, in addition to the earlier variables, we also need to predict the processing time for the current frame, i.e. #4. This prediction is easier with similar GPUs but gets more difficult with dissimilar ones. Predicting the processing time for #4 is of the same order of accuracy as predicting for #5 in the similar-GPU case, so you really don't lose much.

On the upside, with this option each GPU renders the most recent game state, which reduces input lag.
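
To make that concrete, here's a rough Python sketch of the delay bookkeeping; this is my own speculation, not ATI's driver logic, and the render times and smoothing factor are invented:

class Ema:
    """Exponential moving average used as a crude render-time predictor."""
    def __init__(self, alpha=0.2):  # alpha is an invented smoothing factor
        self.value, self.alpha = None, alpha
    def update(self, sample):
        if self.value is None:
            self.value = sample
        else:
            self.value += self.alpha * (sample - self.value)
        return self.value

def delay_for_even_frame(t_prev_odd, gpu1_render_est, t_even_ready_est):
    """Delay #4 so it lands midway between #3 (known) and #5 (predicted)."""
    t_next_odd = t_prev_odd + gpu1_render_est  # predicted finish time of #5
    target = (t_prev_odd + t_next_odd) / 2.0   # midpoint where #4 should land
    return max(0.0, target - t_even_ready_est)

est = Ema()
for sample in (51.0, 49.0, 50.0):  # measured GPU1 frame times in ms (invented)
    gpu1_time = est.update(sample)
# #3 finished at 50ms, #4 expected ready at 60ms -> hold #4 about 15ms.
print(delay_for_even_frame(50.0, gpu1_time, 60.0))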
 

thilanliyan

Lifer
Jun 21, 2005
11,944
2,172
126
Originally posted by: nitromullet
Kinda sucks that you have to drop $500 for the dubious honor of being a test monkey.

Lol, I did that when the 8800GTS 640 came out. I vowed never again.

To you guys talking about adding a delay between frames, wouldn't this reduce the average fps? Or would it just reduce the reported fps but increase the perceived fps (or at least reduce the stutter)?
 

toslat

Senior member
Jul 26, 2007
216
0
76
The delay would affect the instantaneous fps but shouldn't affect the average fps.
Basically, a sequence of 40fps frames with arrival times (ms) [0, 10, 50, 60, 100, 110, ...] will have micro-stutter due to the early arrival of the even-numbered frames (or the lateness of the odd-numbered frames), i.e. some consecutive frames have only a 10ms gap while others have a 40ms gap, instead of a constant 25ms. This can be corrected perfectly by adding a delay of 15ms to the even-numbered frames, resulting in [0, 25, 50, 75, 100, 125, ...], i.e. a perfect 40fps.
Of course, in practice the result will be far from perfect, but it only needs to make the micro-stutter imperceptible.
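
You can check that arithmetic in a couple of lines of Python (frame #1 is index 0, so the even-numbered frames are the odd indices):

arrivals = [0, 10, 50, 60, 100, 110]
fixed = [t + 15 if i % 2 else t for i, t in enumerate(arrivals)]
print(fixed)  # [0, 25, 50, 75, 100, 125] -> constant 25ms gaps, i.e. 40fps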
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
You would end up with framerates being reported as the same; the only thing is that the frames would be rendered at the correct times.

Instead of having: ..1..23..45..67..89 etc.

you would have: .....1.2.3.4.5.6.7.8.9 etc.

Same framerate, only the frames would be where they belong.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Does anybody know what's causing the micro stutter? Driver, CPU, PCIe, multi-GPU interconnect, ...?
If it's the interconnect, then the X2's faster one should take care of that.
 

thilanliyan

Lifer
Jun 21, 2005
11,944
2,172
126
Originally posted by: keysplayr2003
You would end up with framerates being reported as the same; the only thing is that the frames would be rendered at the correct times.

Instead of having: ..1..23..45..67..89 etc.

you would have: .....1.2.3.4.5.6.7.8.9 etc.

Same framerate, only the frames would be where they belong.

From your example... why is there a delay of 2 "dots" between sets of frames, and can you actually reduce THAT delay? Because if you can't, then the fps will go down, won't it?

As I see it, the only way to make sure the delay between frames is the same would be something like this:
1..2..3..4..5..6..7..8..9

Basically, how was the delay of 2 "dots" reduced to 1 "dot" and spread amongst the rest of the frames?

Or is it that the 2nd frame in the set has already been rendered and ITS OWN output is delayed? (Hmm... I think I might have answered my own question.)
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Janooo
Does anybody know what's causing the micro stutter? Driver, CPU, PCIe, multi-GPU interconnect, ...?
If it's the interconnect, then the X2's faster one should take care of that.

If I had to guess, it's the scheduler, as the CPU is producing pre-rendered frames at a set pace; the problem is that with multi-GPU, both GPUs are signaling "ready" and rendering too closely to one another. So basically you get 2 output frames very close together; then, while both are rendering, you get the long delay, followed again by 2 frames output at very close intervals. Perhaps with the GPU interconnect the 1st GPU is telling the 2nd GPU when to signal "ready" and begin rendering the next frame.
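
Here's a toy Python model of that theory (pure speculation on my part; the 40ms render time and 2ms submit gap are invented numbers), and it reproduces exactly that paired-output pattern:

def naive_afr(n_frames, render_ms=40.0, submit_gap_ms=2.0):
    """Both GPUs idle at t=0; the CPU submits the next frame as soon as it can."""
    gpu_free = [0.0, 0.0]  # time at which each GPU becomes free
    out, t = [], 0.0
    for i in range(n_frames):
        g = i % 2                    # alternate-frame rendering
        start = max(t, gpu_free[g])  # wait for the CPU submit and a free GPU
        gpu_free[g] = start + render_ms
        out.append(gpu_free[g])      # frame output time
        t = start + submit_gap_ms    # CPU immediately preps the next frame
    return out

print(naive_afr(6))  # [40.0, 42.0, 80.0, 82.0, 120.0, 122.0] -> gaps of 2, 38, 2, 38, 2 ms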
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: thilan29
Originally posted by: keysplayr2003
You would end up with framerates being reported as the same; the only thing is that the frames would be rendered at the correct times.

Instead of having: ..1..23..45..67..89 etc.

you would have: .....1.2.3.4.5.6.7.8.9 etc.

Same framerate, only the frames would be where they belong.

From your example... why is there a delay of 2 "dots" between sets of frames, and can you actually reduce THAT delay? Because if you can't, then the fps will go down, won't it?

As I see it, the only way to make sure the delay between frames is the same would be something like this:
1..2..3..4..5..6..7..8..9

Basically, how was the delay of 2 "dots" reduced to 1 "dot" and spread amongst the rest of the frames?

Or is it that the 2nd frame in the set has already been rendered and ITS OWN output is delayed? (Hmm... I think I might have answered my own question.)

The dots are irrelevant and only there for the example; don't take them literally. But what you can take literally is that if the above numbers represented 1 second's worth of play, then both the top and the bottom rows still return a result of 9fps: the top one with micro-stutter, the bottom one without. The bottom one would of course appear smoother.
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: chizow
Originally posted by: Janooo
Does anybody know what's causing the micro stutter? Driver, CPU, PCIe, multi-GPU interconnect, ...?
If it's the interconnect, then the X2's faster one should take care of that.

If I had to guess, it's the scheduler, as the CPU is producing pre-rendered frames at a set pace; the problem is that with multi-GPU, both GPUs are signaling "ready" and rendering too closely to one another. So basically you get 2 output frames very close together; then, while both are rendering, you get the long delay, followed again by 2 frames output at very close intervals. Perhaps with the GPU interconnect the 1st GPU is telling the 2nd GPU when to signal "ready" and begin rendering the next frame.

I'm not even sure the GPUs have a scheduler per se. I wouldn't be surprised if the GPUs just send something similar to a CTS (clear-to-send) whenever they're done with the current frame, and since at startup both GPUs are available, they start rendering very close to each other.
 

Martimus

Diamond Member
Apr 24, 2007
4,488
153
106
Originally posted by: toslat
Originally posted by: chizow
Originally posted by: Janooo
Does anybody know what's causing the micro stutter? Driver, CPU, PCIe, multi-GPU interconnect, ...?
If it's the interconnect, then the X2's faster one should take care of that.

If I had to guess, it's the scheduler, as the CPU is producing pre-rendered frames at a set pace; the problem is that with multi-GPU, both GPUs are signaling "ready" and rendering too closely to one another. So basically you get 2 output frames very close together; then, while both are rendering, you get the long delay, followed again by 2 frames output at very close intervals. Perhaps with the GPU interconnect the 1st GPU is telling the 2nd GPU when to signal "ready" and begin rendering the next frame.

I'm not even sure the GPUs have a scheduler per se. I wouldn't be surprised if the GPUs just send something similar to a CTS (clear-to-send) whenever they're done with the current frame, and since at startup both GPUs are available, they start rendering very close to each other.

Yeah. I would figure that they would change that so the second GPU starts rendering only after a set percentage of the first frame is rendered (likely around 50%), and not before that point. That should keep the frame output interval pretty constant no matter the frame rate.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: Martimus
Originally posted by: toslat
Originally posted by: chizow
Originally posted by: Janooo
Does anybody know what's causing the micro stutter? Driver, CPU, PCIe, multi-GPU interconnect, ...?
If it's the interconnect, then the X2's faster one should take care of that.

If I had to guess, it's the scheduler, as the CPU is producing pre-rendered frames at a set pace; the problem is that with multi-GPU, both GPUs are signaling "ready" and rendering too closely to one another. So basically you get 2 output frames very close together; then, while both are rendering, you get the long delay, followed again by 2 frames output at very close intervals. Perhaps with the GPU interconnect the 1st GPU is telling the 2nd GPU when to signal "ready" and begin rendering the next frame.

I'm not even sure the GPUs have a scheduler per se. I wouldn't be surprised if the GPUs just send something similar to a CTS (clear-to-send) whenever they're done with the current frame, and since at startup both GPUs are available, they start rendering very close to each other.

Yeah. I would figure that they would change that so the second GPU starts rendering only after a set percentage of the first frame is rendered (likely around 50%), and not before that point. That should keep the frame output interval pretty constant no matter the frame rate.

It cannot happen. FPS is not constant; it changes based on what's happening on the screen.

How big is a rendered screen? How many bytes?
 

toslat

Senior member
Jul 26, 2007
216
0
76
Originally posted by: Janooo
It cannot happen. FPS is not constant; it changes based on what's happening on the screen.

How big is a rendered screen? How many bytes?

Still, the values are bounded and of low variance; otherwise we would have (perceptible) micro-stutter even on a single GPU.

 

Martimus

Diamond Member
Apr 24, 2007
4,488
153
106
Originally posted by: Janooo
Originally posted by: Martimus
Yeah. I would figure that they would change that so the second GPU starts rendering only after a set percentage of the first frame is rendered (likely around 50%), and not before that point. That should keep the frame output interval pretty constant no matter the frame rate.

It cannot happen. FPS is not constant; it changes based on what's happening on the screen.

How big is a rendered screen? How many bytes?

That is the whole point of setting it to a throughput delay rather than an arbitrary time delay; it would compensate for changes in the frame rate. The idea is to have the output of each processor delivered at a regular interval, but beyond that the data has to be accurate to the time it is output, or else the stutter effect will still be apparent even if the frames arrive at equal intervals (just because the output was displayed at the correct time doesn't mean it will look right; the output is a render of a different time and will seem out of place). So you would need to introduce the timing at the data input stage for the two processors.

Since frame rates change depending on the load, you cannot just assign an arbitrary minimum amount of time between the start of data processing on the two processors, but you should be able to set a minimum amount of processing done by the previous processor before the next processor begins. Doing it the latter way automatically adjusts for framerate changes. The minimum amount of processing would likely be set to less than 50% to account for latencies, but the idea would be to start the second processor when the first one is half done.
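
A quick Python sketch of that hand-off rule (again just a model, nothing confirmed; the per-frame costs are invented) shows how the spacing tracks the load:

def simulate(render_costs_ms, trigger=0.5):
    """The next GPU starts its frame once the current one is `trigger` done."""
    finishes, start = [], 0.0
    for cost in render_costs_ms:
        finishes.append(start + cost)
        start += trigger * cost  # hand off at 50% progress, not on a fixed timer
    return finishes

# Load rises mid-run (50ms frames, then 60ms); gaps settle at half a render time.
print(simulate([50, 50, 50, 50, 60, 60, 60, 60]))
# [50, 75, 100, 125, 160, 190, 220, 250] -> gaps 25, 25, 25, 35, 30, 30, 30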
 

BroadbandGamer

Senior member
Sep 13, 2003
976
0
0
Can someone explain to me exactly what Micro-Stutter is and whether or not I need to worry about this? I haven't been following the hardware scene much so this is the first I've heard of this. I plan on buying a 4870 X2 the day it's released. Now I'm thinking I should just pick up a 4870.

Thanks!
 

Hauk

Platinum Member
Nov 22, 2001
2,806
0
0
Originally posted by: BroadbandGamer
Can someone explain to me exactly what Micro-Stutter is and whether or not I need to worry about this? I haven't been following the hardware scene much so this is the first I've heard of this. I plan on buying a 4870 X2 the day it's released. Now I'm thinking I should just pick up a 4870.

I can tell you what it isn't: a big deal. It's blown way out of proportion. If you have your eyes on an X2, just buy it... :thumbsup:

 

woolfe9999

Diamond Member
Mar 28, 2005
7,153
0
0
Originally posted by: BroadbandGamer
Can someone explain to me exactly what Micro-Stutter is and whether or not I need to worry about this? I haven't been following the hardware scene much so this is the first I've heard of this. I plan on buying a 4870 X2 the day it's released. Now I'm thinking I should just pick up a 4870.

Thanks!

It refers to the phenomenon of frames being rendered at uneven time intervals when you use multiple GPUs. This creates an impression of "stutter" (i.e. a lack of smoothness) in the way you experience the game. In reality, this phenomenon has presumably been around for close to a decade (since the earliest multi-GPU implementations), but it does not seem to have been identified as a problem until some time last year. In other words, no one knew there was a problem until someone proved on an objective level that the phenomenon was occurring and then gave it a name. Since this phenomenon was objectively verified and named, we now seem to have all kinds of people who suddenly find it troublesome, i.e. now that they know about micro-stutter, they have sat down and looked for it, and have discovered that they can detect it, which is no great surprise.
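
To see what "objectively verified" amounts to: log the frame timestamps (FRAPS can dump frame times) and look at the gaps. A Python sketch with invented numbers:

def intervals(timestamps_ms):
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

smooth = [0, 25, 50, 75, 100, 125]   # steady 40fps
stutter = [0, 10, 50, 60, 100, 110]  # same average fps, uneven gaps
print(intervals(smooth))   # [25, 25, 25, 25, 25]
print(intervals(stutter))  # [10, 40, 10, 40, 10] -- the alternation you feel as stutter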

The short answer is what SteelSix said above: if you want to buy an X2, just do it.

- woolfe
 