AMD 7800 / 7700 reviews


Hans Gruber

Platinum Member
Dec 23, 2006
2,217
1,153
136
@MrPickins didn't fully follow through with the reasoning; he should have stopped at ISO performance.

Consider this: the Asus TUF 7800XT is 7.5% faster than the 4070, while consuming roughly 275W versus 200W. Here's what happens when you're willing to give up 5-6% performance on a 6800XT by lowering clocks, with no undervolt. The workload is a UE4 game.

View attachment 85722

There we go, ~70W delta.
Just think how great a 6800XT would be on 5nm silicon. You could take 20% off those power numbers in your chart.
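
For anyone who wants to check that math, here's the perf/W arithmetic in a few lines of Python (a minimal sketch; the perf index and wattages are this post's rough figures, not measurements):

Code:
# Rough perf-per-watt math on the numbers above (all approximate).
# Asus TUF 7800XT: ~7.5% faster than the 4070, at ~275W vs ~200W.
tuf_7800xt = {"perf": 1.075, "watts": 275}
rtx_4070   = {"perf": 1.000, "watts": 200}

eff_tuf  = tuf_7800xt["perf"] / tuf_7800xt["watts"]
eff_4070 = rtx_4070["perf"] / rtx_4070["watts"]

# -> roughly a 28% perf/W advantage for the 4070 at stock settings
print(f"4070 perf/W advantage: {(eff_4070 / eff_tuf - 1) * 100:.0f}%")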
 

DeathReborn

Platinum Member
Oct 11, 2005
2,755
751
136
The PowerColor 7800XT Hellhound is a beast: lowest temps with the lowest noise (low dB) and 6900XT performance at $520 (Newegg) or €600 (Germany).

It took first place in the Computerbase.de five-card 7800XT custom review.

I'd be concerned about those GPU > Hotspot deltas. Those are some pretty wide gaps.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,541
2,541
146
I suspect that the chiplet design does partly contribute to higher power usage. Keep in mind that RDNA3 is an early product to utilize GPU chiplets. There can be issues with new tech at first, we all know this. But 275W still isn't bad for the performance, and the price is decent too.

Not to say that the RTX 4070 is a bad card (I think it's one of the better GeForce RTX 4000 series cards, considering the decent price, the 8-pin option, and 12GB of memory versus its predecessor's 8GB, all allowing for decent performance and good efficiency), but I personally would be more inclined to go with a 7800XT at current prices and performance. The 7800XT also has 16GB, an additional 4GB.
 
Jul 27, 2020
17,866
11,645
116
My personal feeling is that RDNA3's caching algorithm is poor and leads to a lot of VRAM accesses, increasing the power usage. Begrudgingly I have to admit that the green team engineers have worked out a pretty elegant design that would have worked beautifully in a game console or a handheld. But of course, their leader is too much of an idiot to let their technology be put to good use. Money first. Let the world burn but money first!
 

blckgrffn

Diamond Member
May 1, 2003
9,197
3,183
136
www.teamjuchems.com
My personal feeling is that RDNA3's caching algorithm is poor and leads to a lot of VRAM accesses, increasing the power usage. Begrudgingly I have to admit that the green team engineers have worked out a pretty elegant design that would have worked beautifully in a game console or a handheld. But of course, their leader is too much of an idiot to let their technology be put to good use. Money first. Let the world burn but money first!
Eh, I'm pretty sure it's just poor clock scaling and higher-than-optimal stock clocks, as demonstrated just a few posts ago. Had they been able to scale clocks more efficiently, or stayed competitive at 10% lower clock speeds, we'd be just fine with the efficiency.

I think the PS5 Pro is poised to show us what AMD can do when they are able to fully optimize a platform. I expect it will be quite solid, like the PS5 and Series X.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,217
1,153
136
My personal feeling is that RDNA3's caching algorithm is poor and leads to a lot of VRAM accesses, increasing the power usage. Begrudgingly I have to admit that the green team engineers have worked out a pretty elegant design that would have worked beautifully in a game console or a handheld. But of course, their leader is too much of an idiot to let their technology be put to good use. Money first. Let the world burn but money first!
AMD ordered the Mercedes C-Class of TSMC 5nm silicon, while Nvidia paid for the top-of-the-line S-Class. Nobody wants to answer the question I posed in earlier threads. AMD, Nvidia and Intel GPUs are all now on TSMC silicon, and everybody assumes all 5nm TSMC is the same. For the first time since AMD moved to TSMC silicon, the playing field is no longer level. You have to pay for performance on the same node. Nvidia went with the high-end 5nm silicon because crypto was flying high and they wanted to move up in the TSMC customer ranks. Apple is #1. AMD was high up in the favorable-customer rankings. You move up the list with large wafer purchases.

We can argue this until the end of time. A standard Corvette C8 Stingray is not a Corvette C8 Z06. The same goes for the silicon used by AMD and Nvidia. On a side note, Intel is using N6, the latest and greatest 7nm silicon; that was the ultra-cheap solution in 2022/2023.

Of course you guys can disagree with me. The flip side is that AMD makes terribly inefficient GPUs in comparison to Nvidia. As I have said, it's the silicon being used that provides the much greater power efficiency and modest performance increases if you are on team green.

Now, on a side note: if Intel releases Alchemist+ on N4 silicon, the Arc A750 would drop from 190W under load to 150W or less just because of the advanced 5nm-class silicon. Forget customization, it's all about the silicon process.

The reason AMD doesn't release the 6800XT on N5 silicon is that it would ruin the RDNA3 product stack. The same could be said about a special-edition 7900XTX on N4. AMD went cheap and nobody wants to admit it.

Trolling is not allowed.


esquared
Anandtech Forum Director
 
Last edited by a moderator:
Jul 27, 2020
17,866
11,645
116
I dunno. I feel like the green engineers designed Ada to be power efficient in every way because they realize the threat from AMD, and now Intel, in the lucrative gaming laptop market, so they wanted their silicon to provide the longest-lasting battery life and hence a better consumer experience.


I think my hunch was right. Ada really is power efficient due to very large L1 and L2 caches that keep the various units busy instead of waiting for data and burning useless energy; coupled with fewer VRAM accesses, this leads to vastly better power efficiency.

Memory Subsystem
The Ada SM contains 128 KB of Level 1 cache. This cache features a unified architecture that can be configured to function as an L1 data cache or shared memory depending on the workload. The full AD102 GPU contains 18432 KB of L1 cache (compared to 10752 KB in GA102).
Compared to Ampere, Ada's Level 2 cache has been completely revamped. AD102 has been outfitted with 98304 KB of L2 cache, an improvement of 16x over the 6144 KB that shipped in GA102. All applications will benefit from having such a large pool of fast cache memory available, and complex operations such as ray tracing (particularly path tracing) will yield the greatest benefit.
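
Those quoted totals are easy to sanity-check (a quick sketch; the 144-SM count for the full AD102 comes from Nvidia's published specs, not the excerpt above):

Code:
# Verifying the whitepaper arithmetic quoted above.
sms, l1_per_sm_kb = 144, 128        # full AD102: 144 SMs x 128 KB L1
print(sms * l1_per_sm_kb)           # 18432 KB of L1, as quoted

l2_ad102_kb, l2_ga102_kb = 98304, 6144
print(l2_ad102_kb // l2_ga102_kb)   # 16x larger L2, as quoted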
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
I suspect that the chiplet design does partly contribute to higher power usage. Keep in mind that RDNA3 is an early product to utilize GPU chiplets. There can be issues with new tech at first, we all know this. But 275W still isn't bad for the performance, and the price is decent too.
260-270W power consumption is pretty poor for N32 when the N21/6800XT on a last-gen process performs similarly at 300-310W, and the original RTX 3080, an arguably faster card if you include RT, is c. 330-340W and was criticized for its power consumption at launch. 20-25% better efficiency than a GPU fabbed on Samsung 8nm is nothing to write home about.

As a comparison to Ada: AD104 is similar in size to, or slightly smaller than, N32 plus its MCDs, and it allocates die space to more specialized hardware (Tensor cores, more thorough dedicated RT accelerators), yet a heavily binned version of AD104 in the RTX 4070 performs similarly to N32 at ~200-210W.

AMD is clearly behind this generation from an architectural perspective, decent pricing or not.
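
For reference, the perf/W arithmetic behind that claim, as a rough sketch: the wattages are this post's approximate figures, and the performance indices are assumptions (the 3080 treated as ~5% faster, per the "arguably faster" caveat).

Code:
# Approximate perf/W comparison, raster perf indexed to the 7800XT = 100.
cards = {
    "RX 7800 XT (N5+N6)":     {"perf": 100, "watts": 265},
    "RX 6800 XT (N7)":        {"perf": 100, "watts": 305},
    "RTX 3080 (Samsung 8nm)": {"perf": 105, "watts": 335},
}
base = cards["RX 7800 XT (N5+N6)"]
base_eff = base["perf"] / base["watts"]
for name, c in cards.items():
    gain = base_eff / (c["perf"] / c["watts"]) - 1
    print(f"7800XT perf/W vs {name}: {gain * 100:+.0f}%")
# -> roughly +15% vs the 6800XT and +20% vs the 3080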

That's total nonsense: if you reduce the 7800XT's perf down to the 4070 level and cut 4GB out of 16, it will consume about the same amount of power, ±10W.

AMD already made a 12GB 7800XT, more or less: it's called the 7700XT, and it manages to perform c. 10% worse than the 4070 while still consuming 15-20% more power. But I'm sure AMD's engineers know less than you about how to tune their GPUs.
 
Last edited:
Reactions: psolord

coercitiv

Diamond Member
Jan 24, 2014
6,390
12,814
136
I think my hunch was right. Ada really is power efficient due to very large L1 and L2 caches that keep the various units busy instead of waiting for data and burning useless energy; coupled with fewer VRAM accesses, this leads to vastly better power efficiency.
They did it for ray tracing first; the power efficiency is an added benefit. They probably wouldn't have done it for power efficiency alone, as the large cache has a high cost: it either forces larger and more expensive dies, or it limits the amount of compute or I/O one can cram into the same die space. You can look at the large L2 as the feature that made Ada very power efficient, but you can also look at it as the reason why the 4060 Ti had to go with a 128-bit bus and barely managed to present itself as a performance upgrade over last gen. The L2 scales nicely with the bigger Ada dies; the smaller ones feel the pain of compromise.

You can see the same problem with AMD's RDNA3, as they are trying to balance the L3 against the previous gen. If cache were that beneficial, they would have kept the same cache ratio as RDNA2; instead they dialed it down. No free lunch.
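
To put numbers on that dialed-down ratio (cache sizes and CU counts are from AMD's published specs; normalizing per CU is just one reasonable yardstick):

Code:
# Infinity Cache per CU, RDNA2 vs RDNA3 (published specs).
cards = {
    "RX 6800 XT (Navi 21)": {"ic_mb": 128, "cus": 72},
    "RX 7800 XT (Navi 32)": {"ic_mb": 64,  "cus": 60},
}
for name, c in cards.items():
    print(f"{name}: {c['ic_mb']} MB IC, {c['ic_mb'] / c['cus']:.2f} MB/CU")
# Navi 21: ~1.78 MB/CU; Navi 32: ~1.07 MB/CU -- the ratio was dialed down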
 

Abwx

Lifer
Apr 2, 2011
11,166
3,862
136
260-270W power consumption is pretty poor for N32 when the N21/6800XT on a last-gen process performs similarly at 300-310W, and the original RTX 3080, an arguably faster card if you include RT, is c. 330-340W and was criticized for its power consumption at launch. 20-25% better efficiency than a GPU fabbed on Samsung 8nm is nothing to write home about.

It's 249W, not 275W, and 32% better efficiency, not 20-25%...

Better to look at the real numbers rather than more or less doctored numbers meant to downplay a product.

 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,423
2,914
136
It's 249W, not 275W, and 32% better efficiency, not 20-25%...

Better to look at the real numbers rather than more or less doctored numbers meant to downplay a product.

He wrote 260-270W, not 275W. That could just as well be considered doctoring numbers to downplay another user.

The 7800XT is not bad for the price, but that doesn't mean it's great either.
RDNA3 didn't meet expectations, that's all. It has problems achieving high clocks within reasonable power consumption.
Ada is better, but it also has its own disadvantages in low VRAM and higher prices.
Not a good time to upgrade, but who knows if the future won't be worse.
 
Last edited:

insertcarehere

Senior member
Jan 17, 2013
639
607
136
It's 249W, not 275W, and 32% better efficiency, not 20-25%...

Better to look at the real numbers rather than more or less doctored numbers meant to downplay a product.

So, since you do use Computerbase as a reference...
That's total nonsense: if you reduce the 7800XT's perf down to the 4070 level and cut 4GB out of 16, it will consume about the same amount of power, ±10W.



Where's this 7800XT that performs like a 4070 at 4070 power draw again? Because AMD more or less made a 12GB 7800XT in the 7700XT, and that sure ain't it.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,423
2,914
136
It's not a good idea to cut one MCD from N32, as shown with the 7700XT. You lose 1/4 of the bandwidth and 1/4 of the Infinity Cache.
It cripples performance too much, despite only a 6% difference in TFLOPS.
AD104 has lower bandwidth requirements in comparison.
Also, let's not forget that the 7700XT has higher clock speeds than the 7800XT, which negatively affects its efficiency.

TPU shows pretty good OC potential for both the 7700XT and 7800XT; a shame they don't test it further, for example in an actual game, along with the power consumption while OC'd.

@insertcarehere you can keep 4 MCDs, but instead of 8×16Gbit GDDR6 chips you'd use 4×16Gbit and 4×8Gbit ones for 12GB in total. This will still incur a performance penalty once you use more than 8GB of that VRAM, because anything above 8GB has only 1/2 the bandwidth.
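
Rough numbers for that hypothetical mixed-density layout (a sketch; the 19.5 Gbps pin speed is an assumption matching the stock 7800XT):

Code:
# Hypothetical 12GB N32: 4x 16Gbit + 4x 8Gbit GDDR6 on the full 256-bit bus.
# The first 8GB interleaves across all 8 chips; the last 4GB lives only on
# the four 2GB chips, i.e. half the bus.
GBPS_PER_PIN = 19.5   # assumed, matches the stock 7800XT
BUS_BITS = 256

full_bw  = GBPS_PER_PIN * BUS_BITS / 8        # GB/s across all 8 channels
upper_bw = GBPS_PER_PIN * (BUS_BITS / 2) / 8  # GB/s in the 128-bit region

print(f"first 8 GB: {full_bw:.0f} GB/s")   # 624 GB/s
print(f"last 4 GB:  {upper_bw:.0f} GB/s")  # 312 GB/s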
 
Last edited:

insertcarehere

Senior member
Jan 17, 2013
639
607
136
It's not a good idea to cut one MCD from N32, as shown with the 7700XT. You lose 1/4 of the bandwidth and 1/4 of the Infinity Cache.
It cripples performance too much, despite only a 6% difference in TFLOPS.
AD104 has lower bandwidth requirements in comparison.
Yup, I think it's pretty clear that Ada's caches allow it to get away with far lower VRAM bandwidth than RDNA3 can. But I sure don't know how one realistically gets a 12GB N32 without cutting an MCD 😉
 

Abwx

Lifer
Apr 2, 2011
11,166
3,862
136
So, since you do use Computerbase as a reference...

View attachment 85753
View attachment 85754

Where's this 7800XT that performs like a 4070 at 4070 power draw again? Because AMD more or less made a 12GB 7800XT in the 7700XT and that sure ain't it..

You said that the 7800XT has only 20-25% better perf/watt than the 3080 while consuming 260-270W, which are both wrong numbers; you should reread your own post. So what does the 4070 have to do with my answer?

May I remind you of what you said?

260-270W power consumption is pretty poor for N32 when the N21/6800XT on a last-gen process performs similarly at 300-310W, and the original RTX 3080, an arguably faster card if you include RT, is c. 330-340W and was criticized for its power consumption at launch. 20-25% better efficiency than a GPU fabbed on Samsung 8nm is nothing to write home about.
 

coercitiv

Diamond Member
Jan 24, 2014
6,390
12,814
136
Hold my beer... (though it seems 170W would be enough)
We were specifically talking about NOT undervolting, since undervolting can be applied to Nvidia cards too. Underclocking shows all 7800XT cards can be ~200W cards when aiming for roughly the same raster performance as the 4070.

Opting for the 4070 for power reasons is pursuing a false narrative, especially considering that people who want to save on power will use frame caps and, as some call them around here, "correct settings" to optimize for lower power. Anyone who opts for the 4070 over the 7800XT should do so for the difference in feature sets and the emphasis on RT. At least then it's an honest choice, even if based on personal preference.

Personally, I also undervolt my card, and I play most games at ~150W GPU power or lower. Last night, when I took those 270/200W screenshots, was the first time I had heard my card's fans in many months.
 

PJVol

Senior member
May 25, 2020
622
551
136
Very nice, but you also need to be lucky with your AMD GPU; you can end up with worse silicon.
From that picture, OC-ing the memory actually helped performance despite downclocking the GPU.
Mine is a usual reference design of modest quality; memory OC does help, though not as much as other "optimizations" do. It's just that the outcome is heavily dependent on the type of workload typical of a particular game. Here's another example, just for illustrative purposes (>40% reduction in power):
 

Attachments

  • hzd-150W.png
    3.6 MB · Views: 18
  • hzd-255W-stock.png
    3.6 MB · Views: 14
Last edited:
Reactions: TESKATLIPOKA

TESKATLIPOKA

Platinum Member
May 1, 2020
2,423
2,914
136
Gaming overclocking performance vs. the reference 7800XT:

+12% in Cyberpunk 2077

+15% in God of War

+13% in Horizon Zero Dawn

Not bad.

Great findings, thanks.
The Hellhound OC'd manages a 20% higher average clock speed than the reference card in Cyberpunk, yet the performance increase is only 11.1% (1% lows) and 13.7% (average).
Is bandwidth holding it back?
That 20% increase in clock speed results in 32% higher consumption. Not as bad as I feared, but the performance gain is low compared to the frequency gain.
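
That clock/power relationship is roughly what the dynamic-power rule of thumb predicts (a sketch assuming P ∝ f·V² and ignoring static/leakage power):

Code:
# Dynamic power scales roughly as P ~ f * V^2, so +32% power at +20%
# clocks implies only a small voltage bump on the OC'd Hellhound.
clock_gain = 1.20   # +20% average clock speed
power_gain = 1.32   # +32% power consumption

v_gain = (power_gain / clock_gain) ** 0.5
print(f"implied voltage increase: {(v_gain - 1) * 100:.1f}%")   # ~4.9%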
 
Last edited:

PJVol

Senior member
May 25, 2020
622
551
136
We were specifically talking about NOT undervolting, since undervolting can be applied to Nvidia cards too. Underclocking shows all 7800XT cards can be ~200W cards when aiming for roughly the same raster performance as the 4070.
Ah... I see. Although it's not clear why you chose to limit the clocks (not a good idea for real-world scenarios) instead of just limiting GPU power.
The clock slider in WattMan is for raising the limit, not lowering it.
 

blckgrffn

Diamond Member
May 1, 2003
9,197
3,183
136
www.teamjuchems.com
Ah... I see. Although it's not clear why you chose to limit the clocks (not a good idea for real-world scenarios) instead of just limiting GPU power.
I think it illustrates well (and did with Ampere too) just how silly the stock clock settings are. You also don't need to limit power if your "boost clocks" aren't set to stupid levels on the chip's efficiency curve.

Clearly both vendors know that we prize our UserBenchmark scores over actual quality full-system engineering.
 

coercitiv

Diamond Member
Jan 24, 2014
6,390
12,814
136
Ah... I see. Although it's not clear why you chose to limit the clocks (not a good idea for real-world scenarios) instead of just limiting GPU power.
The clock slider in WattMan is for raising the limit, not lowering it.
Because the power slider on my card allows only a 15% adjustment; the engineers evidently thought the same about the power slider, that its purpose is to expand power.
 
Reactions: Tlh97 and SamMaster