ATI 4xxx Series Thread

Avalon

Diamond Member
Jul 16, 2001
7,567
156
106
Man, these specs look great. 50% more shaders, 100% more TMUs, higher clocks, faster memory, same thermal envelope as the 3800 series. Pending the GT200 lineup, an HD4850 GDDR5 card sounds really nice for $250.

Here's hoping it performs as well as it looks.
 

thilanliyan

Lifer
Jun 21, 2005
11,944
2,175
126
Originally posted by: Jax Omen
I still stand by the view that anyone who can afford high-end GPUs shouldn't be affected by the power costs associated with said GPUs. It's pocket change by comparison. And if they are? Turn off your damn AC/heat! Those consume more power than everything else combined in the average home. The next most power-hungry thing is the fridge/freezer. PCs are pretty far down the list.

No doubt about that. If you can't afford the slight extra cost, you shouldn't be gaming. However, as I've said, more efficient computers can collectively make a big difference in energy consumption.

And thanks to everyone else who is keeping us on topic.
In the Tom's article it said they were going to keep clocks low in order to hold the 4870 to a 150W power envelope and the 4850 to a 110W power envelope.
 

Jax Omen

Golden Member
Mar 14, 2008
1,654
2
81
Eh, I'm just not enough of a hippy to care about the power consumption of my computer, I guess.

I wish I could afford a more fuel-efficient car... but my 1992 Taurus is about it, and as long as it works, I see no reason to spend tons of money upgrading. Now, my computer... that's a toy. If it works, it means I haven't added enough stuff to it yet.
 

thilanliyan

Lifer
Jun 21, 2005
11,944
2,175
126
Originally posted by: Jax Omen
Eh, I'm just not enough of a hippy to care about the power consumption of my computer, I guess.

Lol, I'm not a hippie...just a bit of a tree hugger...I don't think I smoke enough...green stuff...to qualify as a hippie.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Originally posted by: chizow
After seeing firmer specs on the RV770, I don't think there's much good news here. The only good news for ATI is that they'll have the fastest single-GPU card for a month or two until GT200 releases, at which point they'll get lapped again in terms of performance. Then an X2 version might put them in a competitive position again, at which point NV will respond with a die-shrink or an SLI-on-a-card solution of their own, or both, all while maintaining a comfortable lead at the high end with a $2000 GT200 Tri-SLI solution.

As for the 4870, I don't think it'll be much faster than the 9800GTX/8800GTX/Ultra in terms of performance. Maybe 15-25% faster, max. Going from 16 to 32 TMUs seems to be the biggest gain here, and TMUs were specifically mentioned as a major bottleneck for ATI's R600 parts. Still, that only puts ATI's texture fill-rate on par with a 9600GT's, not counting any advantages from differences in vendor design. The rest of the specs seem rather unspectacular with questionable gains, although the shaders may also scale well, as they seemed to be another weak point of R600. Going from 64 to 96 real shaders (320 to 480 superscalar), along with unlinked shader clocks, should help close any gaps in shader performance in unoptimized games where NV previously held a lead.

This part would've been a great answer to G80/G92 six months ago when RV670 released, or even a year ago when R600 released. But at this point I think it'll be obvious that it's too little, too late: mostly competing with G80/G92, and made obsolete again when NV fires back with GT200 later this quarter.


I will bet you right now that the average performance drop from no AA/AF to 4xAA/AF is going to be around 15%.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: chizow
After seeing firmer specs on the RV770, I don't think there's much good news here. The only good news for ATI is that they'll have the fastest single-GPU card for a month or two until GT200 releases, at which point they'll get lapped again in terms of performance. Then an X2 version might put them in a competitive position again, at which point NV will respond with a die-shrink or an SLI-on-a-card solution of their own, or both, all while maintaining a comfortable lead at the high end with a $2000 GT200 Tri-SLI solution.

As for the 4870, I don't think it'll be much faster than the 9800GTX/8800GTX/Ultra in terms of performance. Maybe 15-25% faster, max. Going from 16 to 32 TMUs seems to be the biggest gain here, and TMUs were specifically mentioned as a major bottleneck for ATI's R600 parts. Still, that only puts ATI's texture fill-rate on par with a 9600GT's, not counting any advantages from differences in vendor design. The rest of the specs seem rather unspectacular with questionable gains, although the shaders may also scale well, as they seemed to be another weak point of R600. Going from 64 to 96 real shaders (320 to 480 superscalar), along with unlinked shader clocks, should help close any gaps in shader performance in unoptimized games where NV previously held a lead.

This part would've been a great answer to G80/G92 six months ago when RV670 released, or even a year ago when R600 released. But at this point I think it'll be obvious that it's too little, too late: mostly competing with G80/G92, and made obsolete again when NV fires back with GT200 later this quarter.

A few points I'd like to add:

1. The individual TMUs in the R6xx series are beefier than the ones found in the G80 and G9x cards. Each one works on FP16 data at full speed, while Nvidia's are built around INT8 data formats.

2. ATI would have a huge die-size advantage if these specs are true, and could roll out an X2 card way before the competition can respond, because Nvidia would likely have to wait until the refresh cycle to make a dual-GPU card viable using their much bigger GPU.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: chizow
After seeing firmer specs on the RV770, I don't think there's much good news here. The only good news for ATI is that they'll have the fastest single-GPU card for a month or two until GT200 releases, at which point they'll get lapped again in terms of performance. Then an X2 version might put them in a competitive position again, at which point NV will respond with a die-shrink or an SLI-on-a-card solution of their own, or both, all while maintaining a comfortable lead at the high end with a $2000 GT200 Tri-SLI solution.

As for the 4870, I don't think it'll be much faster than the 9800GTX/8800GTX/Ultra in terms of performance. Maybe 15-25% faster, max. Going from 16 to 32 TMUs seems to be the biggest gain here, and TMUs were specifically mentioned as a major bottleneck for ATI's R600 parts. Still, that only puts ATI's texture fill-rate on par with a 9600GT's, not counting any advantages from differences in vendor design. The rest of the specs seem rather unspectacular with questionable gains, although the shaders may also scale well, as they seemed to be another weak point of R600. Going from 64 to 96 real shaders (320 to 480 superscalar), along with unlinked shader clocks, should help close any gaps in shader performance in unoptimized games where NV previously held a lead.

This part would've been a great answer to G80/G92 six months ago when RV670 released, or even a year ago when R600 released. But at this point I think it'll be obvious that it's too little, too late: mostly competing with G80/G92, and made obsolete again when NV fires back with GT200 later this quarter.

If you look at the specifications vs. the performance of the 3870, then you are dead wrong.

Looking at pure numbers, the HD 4870 is a solid ~2X improvement in just about every area of the GPU over HD 3870. The only area where performance hasn't been improved much is the ROP area; the ROPs are not much of a bottleneck, and with a faster core speed, ATI already has a significant advantage in that area over nVidia.

Looking at the numbers to back up what I said:

In terms of shader performance:
HD 4870 (480 * 2 * 1.050) = 1008
HD 3870 (320 * 2 * 0.775) = 496
4870 = 2.03X 3870

In terms of texture performance:
HD 4870 (32 * 0.850) = 27.2
HD 3870 (16 * 0.775) = 12.4
4870 = 2.19X 3870

In terms of memory bandwidth:
HD 4870 (3880 * 0.032) = 124.2 GB/s
HD 3870 (2250 * 0.032) = 72.0 GB/s
4870 = 1.725X 3870
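
For anyone who wants to plug in different rumored clocks, here's the same arithmetic as a quick Python sketch (every spec number below is one of the rumored figures above, nothing confirmed):

# RV770 vs RV670 back-of-the-envelope scaling, using the rumored specs.
# SPs count as 2 ops/clock (MADD); bandwidth assumes a 256-bit bus (32 bytes).

hd4870 = dict(sps=480, shader_ghz=1.050, tmus=32, core_ghz=0.850, mem_mhz=3880)
hd3870 = dict(sps=320, shader_ghz=0.775, tmus=16, core_ghz=0.775, mem_mhz=2250)

def shader_gflops(c):   # GFLOPS = SPs * 2 * shader clock (GHz)
    return c["sps"] * 2 * c["shader_ghz"]

def texel_rate(c):      # gigatexels/s = TMUs * core clock (GHz)
    return c["tmus"] * c["core_ghz"]

def bandwidth(c):       # GB/s = effective memory MHz * 32 bytes / 1000
    return c["mem_mhz"] * 0.032

for label, f in [("Shader GFLOPS", shader_gflops),
                 ("Texel rate", texel_rate),
                 ("Bandwidth GB/s", bandwidth)]:
    new, old = f(hd4870), f(hd3870)
    print(f"{label}: {new:.1f} vs {old:.1f} = {new / old:.2f}x")

Running it reproduces the three ratios above: 2.03x shader, 2.19x texture, 1.72x bandwidth.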

The 4870 improves in every aspect significantly, and it is more balanced than the current design. Shader performance remains strong, but now the texture performance is there to back it up. The GPU has plenty of power and also plenty of memory bandwidth to keep it fed.

Looking at 3870 reviews... I cannot find a single situation where doubling the 3870's performance would not beat the 8800 Ultra. In many cases the margin is very significant.

The 4870 X2 should definitely exceed the performance of the 9800GX2 by a wide margin. Obviously GT200 is another story. But how powerful is GT200 really going to be? Considering the current die size of G92, which is huge as it is, how much room does nVidia have to expand on it? I cannot imagine that GT200 would be anything more than 40-50% faster than 9800GX2 if it is a single GPU, and from what I see that would make it slightly faster than the 4870 X2, if that.

The problem with nVidia is of course die size, as I mentioned. From what I have seen, RV770 should be much smaller than nVidia's G92 while exceeding its performance greatly. Even a move to 55nm would likely only bring G92 down to roughly RV770's die size. Considering rumors point to GT200 on 65nm, it would likely be a chip like G80, in the range of ~500mm^2. That's not a GPU that any company wants to produce; it is a lot easier and cheaper to fab two 250mm^2 chips than a single 500mm^2 chip, as the sketch below illustrates. If GT200 is really a single GPU, then that will likely be the situation.
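
To put rough numbers on the two-small-dies-vs-one-big-die point, here's a quick Python sketch using the standard Poisson yield model Y = exp(-D0*A); the defect density and wafer handling are made-up illustrative assumptions, not foundry data:

import math

# Illustrating the die-size/yield argument: with randomly distributed
# defects (Poisson model), yield drops exponentially with die area, so
# one 500mm^2 die yields far worse than two 250mm^2 dies.
# D0 is a made-up illustrative defect density, not TSMC data.

WAFER_AREA = math.pi * (300 / 2) ** 2   # 300mm wafer, ignoring edge loss
D0 = 0.004                              # defects per mm^2 (assumed)

def good_dies_per_wafer(area_mm2):
    candidates = WAFER_AREA / area_mm2      # ignores scribe lines
    die_yield = math.exp(-D0 * area_mm2)    # Poisson yield model
    return candidates * die_yield

small = good_dies_per_wafer(250)   # ~104 good dies per wafer
big = good_dies_per_wafer(500)     # ~19 good dies per wafer
print(f"250mm^2: {small:.0f} good dies/wafer")
print(f"500mm^2: {big:.0f} good dies/wafer")
print(f"good silicon per wafer ratio: {small * 250 / (big * 500):.2f}x")

Under these assumptions the small die delivers roughly 2.7x as much good silicon per wafer, which is the whole economic argument for an X2 of small chips over one monster GPU.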

As for nVidia responding with a die shrink, they are usually way behind in moving to a new process. AMD moved to 55nm in Nov '07, and nVidia does not have a single 55nm GPU out five months later. By the end of this year, RV770 could be shrunk to 45nm if TSMC's process is ready in time.
 

Rusin

Senior member
Jun 25, 2007
573
0
0
If we use simple mathematics:
G92b: should be around 230mm^2, not counting architectural updates
RV770: should be around 250mm^2, not counting architectural updates

They are saying GT200 will have 1,000-1,100 million transistors. If they built it on 55nm, there would be an even chance that GT200 comes out smaller than G92 [305-335mm^2 vs. G92's 324mm^2]. On 65nm it should be around 430-470mm^2. And of course there could be architectural updates that make it smaller.
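
Spelled out, that arithmetic assumes GT200 matches G92's transistor density (754M transistors in 324mm^2 on 65nm) and that an optical shrink to 55nm scales area by (55/65)^2; as noted a couple of posts down, real shrinks rarely scale that cleanly:

# Rusin's "simple mathematics": scale GT200's rumored transistor count
# by G92's known density, then apply an ideal 65nm -> 55nm area shrink.
# Both steps are idealizations; real layouts rarely scale linearly.

G92_MTRANS = 754          # million transistors
G92_AREA = 324            # mm^2 on 65nm
SHRINK = (55 / 65) ** 2   # ~0.716 ideal area scaling factor

density = G92_MTRANS / G92_AREA   # Mtransistors per mm^2 at 65nm

for mtrans in (1000, 1100):
    area_65 = mtrans / density
    area_55 = area_65 * SHRINK
    print(f"{mtrans}M transistors: ~{area_65:.0f}mm^2 @ 65nm, ~{area_55:.0f}mm^2 @ 55nm")

This prints ~430-473mm^2 at 65nm and ~308-338mm^2 at 55nm, which is where the ranges in the post come from.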
-------

If these rumours are true, then AMD would be taking a step in performance-per-watt that has never been seen before, and all this on the same 55nm process, without even a new GPU architecture? The rumours would also indicate that Nvidia is almost going backwards: GT200 would basically be a 9800 GX2 on a single chip, but GT200's TDP would be over 50W higher?

HD4870 X2 would be more than twice as fast as HD3870 X2, with no increase in power consumption?
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: Rusin
If we use simple mathematics:
G92b: should be around 230mm^2, not counting architectural updates
RV770: should be around 250mm^2, not counting architectural updates

They are saying GT200 will have 1,000-1,100 million transistors. If they built it on 55nm, there would be an even chance that GT200 comes out smaller than G92 [305-335mm^2 vs. G92's 324mm^2]. On 65nm it should be around 430-470mm^2. And of course there could be architectural updates that make it smaller.
-------

If these rumours are true, then AMD would be taking a step in performance-per-watt that has never been seen before, and all this on the same 55nm process, without even a new GPU architecture? The rumours would also indicate that Nvidia is almost going backwards: GT200 would basically be a 9800 GX2 on a single chip, but GT200's TDP would be over 50W higher?

HD4870 X2 would be more than twice as fast as HD3870 X2, with no increase in power consumption?

Math doesn't tell the whole story; very few die shrinks see linear scaling, especially as we get into smaller and smaller processes.

Here are rumored die sizes for G92b/RV770: http://www.vr-zone.com/article...s_Comparison/5647.html

Obviously that's not concrete, but that is the best idea right now that we have of the die size of these chips.

Clearly AMD's new GPUs are focused more on power consumption than the previous generation. Likely this is one of the side effects of AMD exerting influence over ATI; prior to the acquisition, ATI was always about having the highest performance and never cared much about power consumption, and now we see AMD coming in and changing that. Personally I couldn't care less about power consumption, so this is a step backwards IMO. But you can't have cards that consume 300W, so some restraint is needed regardless.

The HD 4870 does look to consume more power than RV670; RV670 had a ~130W TDP IIRC, with RV770 at 150W if the rumors are correct. But the performance increase will certainly be larger than the increase in power consumption.

At this point we have no idea what GT200 is; while we now have (almost) concrete details on RV770, GT200 is still a bit farther down the line and we know very little about it. Likely it is an enhanced G80 with more SP, but beyond that everything is speculation.

I would expect GT200 to exceed 9800GX2 performance within the same power envelope, especially if it is a single chip, as the less-than-linear scaling of multi-GPU setups often hurts performance-per-watt.

 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Realistic Tri-SLI / CF power draw: 400+ watts idle, 700+ watts load.
Realistic mid-to-high-end computer power draw: 200 watts idle, 300 watts load.
Realistic power saved by turning off the video card while idling: 100 watts per video card.

Realistic power cost: $0.07/kWh in cheap states (the people who say they get it for 5 cents per kWh probably fall into the 7-cent category once you count all the hidden charges), 14 cents in Texas, 25 cents in some REALLY bad states.

If you are a regular person who runs his computer 24/7 and plays about 2 hours a day on average, then for 22 hours a day your video card sits idle. If you had it off, you would save at least 100 watts for those 22 hours (you are not saving a single watt during the 2 hours a day you actually use the video card).

So:
22 hours/day * 100 watts = 2.2 kWh/day
2.2 kWh/day * 365 days/year = 803 kWh/year
803 kWh/year * 7 cents/kWh = $56.21/year
803 kWh/year * 14 cents/kWh = $112.42/year
803 kWh/year * 25 cents/kWh = $200.75/year

Multiply that by THREE if you have a triple-SLI system... and I am STILL being generous, because I rounded the saving down to 100 watts per video card when I've seen some cards where it's 116 or more... so you could always add 16% or whatever to the end result.

Of course the BIGGEST saving you can make is to TURN OFF THE COMPUTER when not in use. The wonderful thing about Hybrid Power is that it turns off the video card, which saves about HALF the idle power the computer wastes (most of the other half being the CPU, with negligible amounts for everything else; hard drives take 3-5 watts for most models, 10 watts for high-RPM drives).
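
The arithmetic above as a small Python sketch, so you can plug in your own card count, idle hours, and local rate (the 100W-saved and 22-hour figures are the rough assumptions from this post, not measurements):

# Annual cost of leaving a video card idling, per the assumptions above:
# ~100W saved per card for the 22 hours/day the machine sits idle.

def annual_idle_cost(rate_per_kwh, watts_saved=100, idle_hours=22, cards=1):
    kwh_per_year = watts_saved / 1000 * idle_hours * 365 * cards
    return kwh_per_year * rate_per_kwh

for rate in (0.07, 0.14, 0.25):          # cheap state, Texas, bad state
    print(f"1 card  @ ${rate:.2f}/kWh: ${annual_idle_cost(rate):.2f}/year")
print(f"3 cards @ $0.14/kWh: ${annual_idle_cost(0.14, cards=3):.2f}/year")

This reproduces the $56.21/$112.42/$200.75 figures, and shows a Tri-SLI rig at Texas rates idling away about $337 a year.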



Speaking of being a hippy (I admit to being a bit of one myself)... if you REALLY want to do some good, switch to a 100% renewable energy plan. Then the MORE electricity you waste, the BETTER the environment gets: the more electricity you consume, the more money the electric company makes, and if you signed up with a 100% renewable electric company, that extra money lets them fund expanded facilities, which drives costs down AND provides income for the manufacturers of those facilities, putting more money into research and development!

Thus the ideal thing to do is sign up with a 100% wind/solar/hydro power company and waste as much electricity as you can, and you will be contributing to the environment.
 

biostud

Lifer
Feb 27, 2003
18,675
5,400
136
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?
 

dennilfloss

Past Lifer 1957-2014 In Memoriam
Oct 21, 1999
30,509
12
0
dennilfloss.blogspot.com
Originally posted by: Extelleron
Looking at the numbers to back up what I said:

In terms of shader performance:
HD 4870 (480 * 2 * 1.050) = 1008
HD 3870 (320 * 2 * 0.775) = 496
4870 = 2.03X 3870

In terms of texture performance:
HD 4870 (32 * 0.850) = 27.2
HD 3870 (16 * 0.775) = 12.4
4870 = 2.19X 3870

In terms of memory bandwidth:
HD 4870 (3880 * 0.032) = 124.2 GB/s
HD 3870 (2250 * 0.032) = 72.0 GB/s
4870 = 1.725X 3870

Sounds like a good upgrade for my 3870. :thumbsup:
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
@extelleron: more and more rumors point to 850 core and 1050 shader, so...

(480*2*.85) = 816

816/496 = 1.65X. Still a good improvement, but not nearly as impressive.
 

Quiksilver

Diamond Member
Jul 3, 2005
4,725
0
71
Originally posted by: bryanW1995
@extelleron: more and more rumors point to 850 core and 1050 shader, so...

(480*2*.85) = 816

816/496 = 1.65X. Still a good improvement, but not nearly as impressive.

Well, even still... if their prototype cards are at 1.05, what would this mean for overclockers when they get hold of these babies at launch? It really makes you think about the core clock ceilings on these.
 

thilanliyan

Lifer
Jun 21, 2005
11,944
2,175
126
Originally posted by: biostud
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?

Is G92b on the 55nm process or the 65nm process? That would explain the difference. Or maybe they've included some added functionality? (i.e. like the NVIO chip that was separate on the G80 but, I think, part of the die on G92.)
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: bryanW1995
@extelleron: more and more rumors point to 850 core and 1050 shader, so...

(480*2*.85) = 816

816/496 = 1.65X. Still a good improvement, but not nearly as impressive.

If shader clock is 1050MHz, then the shaders will be clocked at 1050MHz.

Not sure what you are getting at. It's (480*2*1.05), which is exactly what I said.

Originally posted by: thilan29
Originally posted by: biostud
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?

Is G92b on the 55nm process or the 65nm process? That would explain the difference. Or maybe they've included some added functionality? (i.e. like the NVIO chip that was separate on the G80 but, I think, part of the die on G92.)

G92b will be 55nm and have (as far as we know) the same 754M transistors that G92 currently has.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: biostud
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?
Not all structures are the same. Some things (e.g. cache) are bigger than other things (e.g. ALUs) when made out of the same number of transistors.
 

Ketherx

Member
Mar 31, 2008
58
0
0
These are going to have CrossFire, right? If these cards are really good, I'll do another AMD build instead of Intel, since CrossFire is a bit cheaper on an AMD system (from what I've found, anyway).
 

Quiksilver

Diamond Member
Jul 3, 2005
4,725
0
71
Originally posted by: Ketherx
These are going to have CrossFire, right? If these cards are really good, I'll do another AMD build instead of Intel, since CrossFire is a bit cheaper on an AMD system (from what I've found, anyway).

Why would CrossFire, or the lack of it, change which system you want to build? CrossFire is supported on both sides of the fence. What should matter is price/performance/features, but who am I to say...
 

Ketherx

Member
Mar 31, 2008
58
0
0
Well, I've looked for a CrossFire board for Intel and it's about $100 more than one for AMD (from what I've found), so it'd be cheaper for me to go with AMD. I've always liked AMD anyway, so it's no problem for me.
 

Mr. Lennon

Diamond Member
Jul 2, 2004
3,492
1
81
I've bought every part for my new rig besides the graphics card. Good thing I waited. How soon are these really going to come out? I can handle another month with my X850 XT PE in the meantime, I guess, lol.
 

Grinja

Member
Jul 31, 2007
168
0
0
Originally posted by: Bakku
Unless they revamp their current inefficient stream shaders, I don't see how the 4K series would be groundbreaking in performance. As long as the shader architecture stays the same, it doesn't matter if it's using GDDR3 or GDDR5. Just my 2c.

My guess for the 'poor' performance is the number of texture mapping units on the HD3xxx series (or perhaps the combination of shaders and TMUs?).

They've doubled the TMUs for the HD4xxx series, but I'm sure Nvidia will be increasing the TMU count on GT200 as well. I would guess that Nvidia will also be shrinking to 55nm soon...

I moved to an X1950 Pro when I upgraded from my 6800GT. I've been waiting to upgrade since the HD series was announced, but I can't seem to get over the fact that the only real way to beat the 8800 Ultra is with a dual chip/card setup... ugh.


 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Extelleron
If you look at the specifications vs. the performance of the 3870, then you are dead wrong.

Looking at pure numbers, the HD 4870 is a solid ~2X improvement in just about every area of the GPU over HD 3870. The only area where performance hasn't been improved much is the ROP area; the ROPs are not much of a bottleneck, and with a faster core speed, ATI already has a significant advantage in that area over nVidia.
Actually I did run the numbers and while I don't think 2x performance is realistic given the released specs of RV770, I do think a 50% increase over RV670 is achievable, which puts me at my 15-25% increase estimate over existing G80/G92 parts. Considering a 3870 in CF or in X2 often fails to beat the 8800/9800 GTX/Ultra in games that don't scale particularly well, I'm not sure why you're so confident RV770 will approach doubling RV670's performance. Personally I think the improvements aside from the TMU additions are unnecessary and that ignoring ROPs is a mistake.

The good news is that we probably won't have to wait long to find out, as rumors suggest the 4870 will launch in only a week or two, in early May. If it doesn't double the 3870's performance, then it'll be doomed from the outset, as GT200 does promise a true doubling of G92's main specs and overall performance.

Originally posted by: munky
A few points I'd like to add:

1. The individual TMUs in the R6xx series are beefier than the ones found in the G80 and G9x cards. Each one works on FP16 data at full speed, while Nvidia's are built around INT8 data formats.

2. ATI would have a huge die-size advantage if these specs are true, and could roll out an X2 card way before the competition can respond, because Nvidia would likely have to wait until the refresh cycle to make a dual-GPU card viable using their much bigger GPU.
Point 1 is true, but doubling the TMUs only brings their FP16 capabilities in line with NV's G80 and G92. As for point 2, GT200 would be the monster on the older process, similar to G80. NV has more closely followed Intel's tick-tock approach as of late, and I think they've seen great success with it. Considering AMD and NV use the same fabs for their chips, I don't see how you consider this an advantage for one over the other; lately they've just used alternating optical shrinks (G80@90nm/G92@65nm and R600@80nm/RV670@55nm). Sure, AMD might have a few months' advantage on an X2 part, but if it doesn't convincingly outperform a single-GPU part and still gets destroyed at the ultra high end by multiple GT200s, it'll still be lost in the overall GPU landscape.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Since I haven't owned an NV card since the 4600, I can't wait. This should be a good card.
I hope this isn't another R500. MS really screwed the pooch on DX10 when they cut out a lot of what DX10 was supposed to be. DX10.1 restores what MS left out of DX10, left out just because NV didn't and still hasn't produced a true DX10.1 card.

But I suppose the whistle-blowers made it so MS had to leave DX10.1 out of the original DX10, because ATI had info that NV didn't have, thanks to the Xbox.

Do you guys really think MS will do a good job with DX10.1? I would say not, because MS is pushing consoles. It's in MS's best interest to do a poor job on DX10; it's money in the bank for MS.
 