Nvidia RTX 2080 Ti, 2080 (2070 review is now live!) information thread. Reviews and prices


Pandamonia

Senior member
Jun 13, 2013
433
49
91
BS economics. Loss of crypto revenue or not has nothing to do with the price of RTX. Larger die sizes, billions more transistors = higher cost, higher price. Crypto revenue is not part of sales targets, so the company is under no obligation 'to make up for it'.

That said, are the products worth it? Certainly not for me; even if I could afford $800-1200 GPUs, I still have a sense of value and would not buy an RTX card.
Maintaining shareholder value is the CEO's number 1 job.

The black hole has to be filled somewhere. The current node is very mature, which can support these die sizes, especially since the chips are probably factory-second Teslas that would otherwise go in the bin.

Wasn't Turing just invented out of thin air a short while back, having never previously shown up on any roadmap?

Considering ray tracing doesn't even work properly and can't support even 1080p, it doesn't fit into any roadmap that has been shown recently. Until Turing we were after 4K, high fps and the G-Sync benefits of gaming. They have now abandoned 4K and high fps in favour of ray tracing at 1080p 30 fps.

It's unicorn technology that no game has, and it will never be adopted by mainstream gaming GPUs, because the laws of physics will not allow transistors small enough to build a chip that is priced at 1060 levels yet performs twice as well in ray tracing as the 2080 Ti does today, which is what it would take to even allow 1080p 60 fps, the bare minimum for mainstream PC gaming.

None of this fits. None of this makes sense. This is a cash grab to keep share prices high.

Sent from my SM-N960F using Tapatalk
 
Reactions: Feld and psolord

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
It seems like GDDR6 has waaaay higher power consumption than HBM2.

With a +150 memory offset on a 2080 Ti Founders Edition (7100 MHz vs 7000 MHz stock, a ~1.5% overclock from 616 GB/s to 625 GB/s) at the 100% power limit (250 W), a short run of Time Spy already caused the core to throttle by ~50 MHz to keep it under the 250 W TDP, resulting in a lower score.

Whereas I found with a buddy's Titan V that you could jack the HBM up to 1050 MHz vs 850 MHz stock (a 24% overclock, from 652.8 GB/s to 806 GB/s) and still stay within the 100% power limit with no core throttling at all.

I think this explains why Turing cards seem to be consuming so much power. GDDR6 uses a lot of power, and small memory overclocks push power consumption through the roof, whereas HBM2 uses very little power and can be overclocked without eating much into the available TDP.
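For reference, those bandwidth figures follow directly from bus width × effective data rate. A quick sketch (assuming the 2080 Ti's standard 352-bit GDDR6 bus and the Titan V's 3072-bit HBM2 interface):

```python
# Peak bandwidth from memory clock and bus width (double data rate: 2 transfers per clock).
def peak_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """Returns peak bandwidth in GB/s."""
    return mem_clock_mhz * 2 * bus_width_bits / 8 / 1000

# RTX 2080 Ti, 352-bit GDDR6
print(peak_bandwidth_gbs(7000, 352))   # ~616 GB/s stock
print(peak_bandwidth_gbs(7100, 352))   # ~625 GB/s with the offset

# Titan V, 3072-bit HBM2
print(peak_bandwidth_gbs(850, 3072))   # ~652.8 GB/s stock
print(peak_bandwidth_gbs(1050, 3072))  # ~806.4 GB/s overclocked
```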
 
Last edited:
Reactions: crisium and IEC

amenx

Diamond Member
Dec 17, 2004
4,012
2,280
136
Maintaining shareholder value is the CEO's number 1 job.
This is not necessarily achieved by jacking up prices. Any bozo running any business can just jack up prices, but products will not sell in the same volume. It's not like Sony can say, hey, let's raise the price of the PS4 from $400 to $700 and we'll make more money for our shareholders. Nvidia will likely take a big hit on sales of RTX cards due to the high cost/price and will end up making much less money than before.
 

Pandamonia

Senior member
Jun 13, 2013
433
49
91
This is not necessarily achieved by jacking up prices. Any bozo running any business can just jack up prices, but products will not sell in the same volume. It's not like Sony can say, hey, let's raise the price of the PS4 from $400 to $700 and we'll make more money for our shareholders. Nvidia will likely take a big hit on sales of RTX cards due to the high cost/price and will end up making much less money than before.
Nvidia's CEO has been doing this for years. They test the market to see how it reacts.

Look at Apple and Samsung.

We are not talking about jacking up prices of old tech like the PS4. We are talking about new products at new, unheard-of price points. Every company seems to be at it currently.

The early adopters are fueling this, and this generation the performance gains aren't there; it needs to flop to teach them a lesson.

Sent from my SM-N960F using Tapatalk
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Nvidia's CEO has been doing this for years. They test the market to see how it reacts.

Look at Apple and Samsung

As he should, that's his job, and Samsung and Apple are also the best at what they do; they are the most successful companies out there. Hard to argue with that.
 

Pandamonia

Senior member
Jun 13, 2013
433
49
91
As he should, that's his job, and Samsung and Apple are also the best at what they do; they are the most successful companies out there. Hard to argue with that.
As a consumer, it's not good. I'd hope everyone on this forum is a consumer and not a spiv corporate stooge.

Lack of competition hurts consumers, and this is why RTX sucks. Performance sucks. Pricing sucks. Ray tracing sucks.

What we all wanted was a 1080 Ti replacement with 50% more frames for the same cost, so we can run 4K at 100 fps, which is the holy grail.



Sent from my SM-N960F using Tapatalk
 
Reactions: akyp11 and psolord

PrincessFrosty

Platinum Member
Feb 13, 2008
2,301
68
91
www.frostyhacks.blogspot.com
What we all wanted was a 1080 Ti replacement with 50% more frames for the same cost, so we can run 4K at 100 fps, which is the holy grail.

Most 4K monitors aren't capable of more than 60 Hz, in part because many of the connectors out there just don't have the bandwidth for it, and quite frankly people interested in more than 60 fps to begin with are a small market segment and so aren't worth targeting, especially considering that 4K is already a small market segment itself.

The RTX cards make 4K 60 fps playable, so throwing yet more die space at traditional ops didn't really make a lot of sense; it's just a waste. That die space is better spent on specialized ops that offer improved image quality/fidelity. I mean, honestly, the 4K 60 fps target was kinda met by the last gen anyway: I ran a single 1080 and played at 4K in almost all my games just fine, and just added a second card in SLI for a few of the newer AAA games that struggled.

The only trick Nvidia really missed here is making a smaller and cheaper chip that's all rasterization focused and can't do RTX but makes use of the new, smaller node, but I genuinely don't believe they'd have made a product there that would be a significant competitor to the 1080 Ti. If you want a single 4K-capable card, then a 1080 Ti is just fine. Having to design two versions of the same architecture would have been expensive, probably prohibitively so.
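On the connector point, a rough back-of-the-envelope check of why 4K tops out around 60 Hz on most links (the ~20% blanking overhead is an assumption, and the effective link rates after 8b/10b encoding are approximate):

```python
# Approximate uncompressed video data rate vs. common link capacities.
def video_gbit_s(width, height, refresh_hz, bits_per_pixel=24, blanking_overhead=1.2):
    """Rough video data rate in Gbit/s, padding ~20% for blanking intervals."""
    return width * height * refresh_hz * bits_per_pixel * blanking_overhead / 1e9

print(video_gbit_s(3840, 2160, 60))   # ~14.3 Gbit/s: fits HDMI 2.0 (~14.4 Gbit/s effective)
print(video_gbit_s(3840, 2160, 120))  # ~28.7 Gbit/s: beyond HDMI 2.0; DP 1.4 (~25.9 Gbit/s
                                      # effective) only manages it with reduced blanking
```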
 

Pandamonia

Senior member
Jun 13, 2013
433
49
91
Most 4K monitors aren't capable of more than 60 Hz, in part because many of the connectors out there just don't have the bandwidth for it, and quite frankly people interested in more than 60 fps to begin with are a small market segment and so aren't worth targeting, especially considering that 4K is already a small market segment itself.

The RTX cards make 4K 60 fps playable, so throwing yet more die space at traditional ops didn't really make a lot of sense; it's just a waste. That die space is better spent on specialized ops that offer improved image quality/fidelity. I mean, honestly, the 4K 60 fps target was kinda met by the last gen anyway: I ran a single 1080 and played at 4K in almost all my games just fine, and just added a second card in SLI for a few of the newer AAA games that struggled.

The only trick Nvidia really missed here is making a smaller and cheaper chip that's all rasterization focused and can't do RTX but makes use of the new, smaller node, but I genuinely don't believe they'd have made a product there that would be a significant competitor to the 1080 Ti. If you want a single 4K-capable card, then a 1080 Ti is just fine. Having to design two versions of the same architecture would have been expensive, probably prohibitively so.
The market for ray tracing is non-existent, since no games really have it and nobody wants to play at 1080p. So that argument holds no ground.

4K 120 fps is about as good as it gets; it just needs more power, which would be possible with a smaller node and more transistors, since it's just scaling current designs up.

RTX is a joke we waited 18 months for.

Sent from my SM-N960F using Tapatalk
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
The proper, much faster replacement for the previous generation's xx Ti has been on a two-year cycle for a few generations now. The 2080 Ti will either get a lot cheaper in a year or just conceivably be replaced, if 7nm has made it by then.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
The market for ray tracing is non-existent, since no games really have it and nobody wants to play at 1080p. So that argument holds no ground.

4K 120 fps is about as good as it gets; it just needs more power, which would be possible with a smaller node and more transistors, since it's just scaling current designs up.

RTX is a joke we waited 18 months for.

Sent from my SM-N960F using Tapatalk
We don't know that RT will be limited to 1080p, and as time goes on, it's looking more and more like it won't be.
RT will most likely be an adjustable effect, just like any other effect. It probably won't be either/or.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
quite frankly people interested in more than 60 fps to begin with are a small market segment and so aren't worth targeting

I wonder about the veracity of this statement, and whether there is a disconnect between the tech community and the broader gaming market. I've noticed (even years ago, when high-refresh monitors first started appearing) that at least in this circle the sentiment seemed to lean more towards high resolution than higher refresh rates. But I do question whether that actually applies to the broader PC gaming market.

I'm aware of market research data from Digitimes which seems to indicate that the growth rate of annual high-refresh monitor sales has doubled in each of the last two years. Also, manufacturers in general are now heavily targeting this segment, and if you look at their new releases, they are predominantly high-refresh options. Based on Steam survey data, 1080p monitor market share isn't just the highest, it's also growing far faster than other resolutions (4K and 1440p both went down slightly in the last survey, even). Anecdotally, in the other communities I interact with (outside of tech circles), there seems to be much higher interest in high refresh rates than in high resolution.

In general, I've always had the feeling that tech forums might lean more towards high resolution over high refresh than the broader PC gaming market does. I actually wonder whether a high-refresh vs. high-resolution survey done in this sub-forum, the CPU sub-forum, and the gaming sub-forum would show a significant difference in results.
 
Last edited:

Ottonomous

Senior member
May 15, 2014
559
292
136
As he should, that's his job, and Samsung and Apple are also the best at what they do; they are the most successful companies out there. Hard to argue with that.
Why would corporate profit be consumer success? Edit: Nvidia capitalizing on their dominance and compute tech is the main reason why you're considering a 2070 over a 2080 this gen
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Why would corporate profit be consumer success?

For the same reason AMD has low-end, high-power-usage cards: it's called R&D.

Nvidia capitalizing on their dominance and compute tech is the main reason why you're considering a 2070 over a 2080 this gen

I was gaming at 1080p with my GTX 960 4GB and upgraded to a GTX 1070, also for 1080p. I'm currently looking at a 1440p monitor, so an RTX 2070 will be good for quite a while at 100 Hz.
If I'm going to spend $500, I might as well get the latest tech and the performance I need.
Would I need a 2080? No, that's way too much card.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
It seems like GDDR6 has waaaay higher power consumption than HBM2.

HBM/HBM2 is 3-4x more efficient per GB/s of bandwidth than GDDR technology. That's achieved by a combination of lower frequency and lower voltage on a much wider bus, coupled with traces that are extremely short due to the memory's close proximity to the GPU chip.
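A rough sketch of that wide-and-slow vs. narrow-and-fast trade-off, using typical published per-stack and per-chip figures purely for illustration:

```python
# Similar bandwidth reached two different ways:
# HBM2 = very wide bus at a low per-pin rate, GDDR6 = narrow bus at a high per-pin rate.
def bandwidth_gbs(bus_width_bits, data_rate_gbit_per_pin):
    """Peak bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbit_per_pin / 8

print(bandwidth_gbs(1024, 2.0))   # one HBM2 stack (1024-bit, ~2 Gbit/s per pin): ~256 GB/s
print(bandwidth_gbs(8 * 32, 14))  # eight GDDR6 chips (256-bit total, 14 Gbit/s per pin): ~448 GB/s
```

The wide HBM2 interface hits its bandwidth at a fraction of the clock and voltage, which is where the per-GB/s efficiency advantage comes from.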
 
Reactions: ozzy702

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
HBM/HBM2 is 3-4x more efficient per GB/s of bandwidth than GDDR technology. That's achieved by a combination of lower frequency and lower voltage on a much wider bus, coupled with traces that are extremely short due to the memory's close proximity to the GPU chip.

Makes you wonder how far behind Vega would be in perf/watt if it weren't using HBM2... it might be a 400 W card.
 
Reactions: ozzy702

RichUK

Lifer
Feb 14, 2005
10,334
677
126
The temptation is kicking in. Possibly a 2080.

Currently running a GTX 1060 6GB with a 144 Hz 1440p monitor.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Do you have a link for that, specifically for GDDR6 vs HBM?
Thanks

This is based on AMD's claim when they launched Fury. GDDR6 should be more efficient than GDDR5, but HBM2 is likewise more efficient than HBM.

You won't see a big difference at the chip level because, considering how much HBM2 costs, they're going to use the efficiency to significantly improve bandwidth so it can be justified on expensive products. So with Vega, you might have seen 10-20 W extra power, but only 300-350 GB/s of memory bandwidth, if they had opted for GDDR5.

Some manufacturers claim even less of a gain for HBM. They are saying 2-2.5x difference at the same bandwidth.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
580
126
Do you have a link for that, specifically for GDDR6 vs HBM?
Thanks

This is based on AMD's claim when they launched Fury. GDDR6 should be more efficient than GDDR5, but HBM2 is likewise more efficient than HBM.

You won't see a big difference at the chip level because, considering how much HBM2 costs, they're going to use the efficiency to significantly improve bandwidth so it can be justified on expensive products. So with Vega, you might have seen 10-20 W extra power, but only 300-350 GB/s of memory bandwidth, if they had opted for GDDR5.

Some manufacturers claim even less of a gain for HBM. They are saying 2-2.5x difference at the same bandwidth.

There was an interesting paper I read a while back discussing the possibilities of "Fine-Grained DRAM" as an alternative to both HBM and GDDR. That's neither here nor there, really; new memory architectures and technologies are announced by the handful every year. Link: https://www.cs.utexas.edu/users/skeckler/pubs/MICRO_2017_Fine_Grained_DRAM.pdf

One of the things I do remember is that it provided numbers for the energy cost of data access. According to that paper, the energy per bit of data accessed looks like this:

GDDR5: 14.0 pJ/bit
HBM2: 3.9 pJ/bit

In those terms, HBM2 does indeed provide a very significant power saving per bit moved. Those savings are converted into effective bandwidth, of course, if you simply use the gains to increase bandwidth within the same power envelope.
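To put those pJ/bit figures in card-level terms, here's a quick sketch; the 616 GB/s bandwidth is just an illustrative number (2080 Ti-class), and it ignores controller/PHY overhead and assumes full utilization:

```python
# Memory access power implied by energy-per-bit at a given sustained bandwidth.
def dram_access_watts(bandwidth_gb_s, pj_per_bit):
    """Bandwidth in GB/s and access energy in pJ/bit -> watts."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12

print(dram_access_watts(616, 14.0))  # GDDR5-class access energy: ~69 W
print(dram_access_watts(616, 3.9))   # HBM2-class access energy:  ~19 W
```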

Samsung claims 75% more bandwidth from GDDR6 and 35% less power compared to GDDR5. While that is a substantial gain, I do not believe it will catch up to its own Aquabolt Gen 2 HBM, but that's only speculation on my part. I have not seen any hard numbers published yet on the efficiency of GDDR6.
 
Reactions: psolord

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Samsung claims 75% more bandwidth from GDDR6 and 35% less power compared to GDDR5. While that is a substantial gain, I do not believe it will catch up to its own Aquabolt Gen 2 HBM, but that's only speculation on my part. I have not seen any hard numbers published yet on the efficiency of GDDR6.

75% more bandwidth at the same power, or 35% less power at the same performance. It always works like that.

Also, other data shows the later iterations of GDDR5 (not GDDR5X) are more efficient than earlier ones.

It's still a trade-off, as HBM technology is a lot more expensive.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I wonder about the veracity of this statement, and whether there is a disconnect between the tech community and the broader gaming market.

The broader gaming market doesn't want super-expensive stuff either.

Their overall volume is going to decrease with RTX, but it'll be more than made up by the price increase.

7nm stuff isn't going to be cheaper either. Maybe they'll keep the price the same and actually increase perf/$ that gen.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Early review over at [H] of a 2070 they got their hands on... an MSI AIB card. The conclusion is 15 to 20% faster than a GTX 1080, but it focuses on the fact that the MSI is clocked among the highest out of the box. Still decent:

https://www.hardocp.com/article/2018/10/14/msi_geforce_rtx_2070_gaming_z_performance_review/
That's a good review; the 2070 is 10 to 20% faster than a GTX 1080 at the same 1920 MHz core clock and consumes the same amount of power. A good apples-to-apples comparison.
It just blows the more expensive Vega 64 OC away.
I guessed the 2070 would be about 10% faster than the GTX 1080, but it's dead smack in the middle of a GTX 1080 and a 1080 Ti, and so is its price.
 
Last edited:

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
580
126
75% more bandwidth at the same power, or 35% less power at the same performance. It always works like that.

Also, other data shows the later iterations of GDDR5 (not GDDR5X) are more efficient than earlier ones.

It's still a trade-off, as HBM technology is a lot more expensive.

While I agree that's how it typically works, I have not read any particular language indicating that it's an either/or this time. GDDR5 was rated at 8 Gbps and 1.5 V; GDDR6 is rated at 14 Gbps and 1.35 V. That's where Samsung is getting its figures from. GDDR5X increased bandwidth to 11 Gbps at 1.5 V, so it slots somewhere in the upper middle of the curve.

I believe most of the savings are from moving to a smaller node.

No argument there, it is a cost balance, same as with GDDR6. Otherwise everything would already be using GDDR6 or HBM2.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I believe most of the savings are from moving to a smaller node.

Whatever changes it has cumulatively result in the claimed gain, but it's never an AND. It's always an OR.

It's because any power savings are used to increase frequency. GDDR6 clocks 75% higher, so it needs more than a 10% voltage reduction to keep power usage the same as a chip that clocks far lower. It takes the voltage reduction plus a capacitance reduction from the new process.
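A directional sketch of that argument, treating the interface as P ∝ f·C·V² (which ignores static power and I/O termination; the ~30% capacitance reduction is an illustrative assumption, not a published figure):

```python
# Does a 10% voltage drop alone offset a 75% clock increase? Dynamic power ~ f * C * V^2.
def relative_dynamic_power(freq_ratio, voltage_ratio, capacitance_ratio=1.0):
    return freq_ratio * capacitance_ratio * voltage_ratio ** 2

# GDDR6 vs GDDR5: 14/8 Gbps at 1.35 V vs 1.5 V, with no process change
print(relative_dynamic_power(14 / 8, 1.35 / 1.5))        # ~1.42x -> more power, not less
# Add an assumed ~30% capacitance reduction from the newer process
print(relative_dynamic_power(14 / 8, 1.35 / 1.5, 0.7))   # ~0.99x -> roughly iso-power at 75% more bandwidth
```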

Do you believe a 2.7x efficiency gain is more realistic?

No argument there, it is a cost balance, same as with GDDR6. Otherwise everything would already be using GDDR6 or HBM2.

GDDR6 is still cheaper than HBM2.
 