GeForce GTX 1180, 1170 and 1160 coming in August. Prices inside

Status
Not open for further replies.

SirDinadan

Member
Jul 11, 2016
108
64
71
boostclock.com
It's pretty simple, really: the Titan V had 42% more CUDA cores than the 1080 Ti, yet HardOCP testing showed it was typically only around 30-35% faster than the 1080 Ti FE. There's a utilisation issue with having all those cores, so Turing drops some cores and perhaps has some underlying tweaks at the SM level to make up some of the deficit. This probably results in the ~30% gains over the 1080 Ti that we're talking about.
I have recently benchmarked TITAN V against the Pascal (and Maxwell) cards with different GPU rendering sw and the difference can range from 20%-50% between the GTX1080 Ti and the TITAN V.
GPU rendering - Maxwell vs Pascal vs Volta performance scaling - V-Ray, Redshift, Indigo Renderer, LuxMark, Blender Cycles @ BoostClock.com
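For reference, here is a rough sketch of the scaling arithmetic in the quoted post (core counts are the public specs; the 30-35% speedup is the HardOCP figure quoted above; clock differences are ignored, so the efficiency number is only illustrative):

# Rough utilisation arithmetic for the quoted Titan V vs 1080 Ti comparison.
# Core counts are public specs; the 30-35% speedup is the figure quoted above.
titan_v_cores = 5120
gtx_1080_ti_cores = 3584

core_advantage = titan_v_cores / gtx_1080_ti_cores - 1   # ~0.43, i.e. the ~42% quoted
measured_speedup = 0.32                                   # midpoint of the 30-35% range

# Fraction of the extra-core advantage that shows up as measured performance
# (clock differences are ignored here, which would shift the number a bit).
scaling_efficiency = measured_speedup / core_advantage
print(f"Core advantage:     {core_advantage:.0%}")        # ~43%
print(f"Measured speedup:   {measured_speedup:.0%}")      # 32%
print(f"Scaling efficiency: {scaling_efficiency:.0%}")    # ~75%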
 
Reactions: moonbogg

happy medium

Lifer
Jun 8, 2003
14,387
480
126
I am running a 1060 GTX 6GB - how much faster will the RTX 2080 Ti be?

I recently bought an Asus 144hz 1440p G-Sync monitor and get about 50fps at max setting in PUBG at 1440p (OW is typically 110 FPS).
You would get about 150fps.
Especially if you have an overclocked modern Intel CPU.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
I have recently benchmarked TITAN V against the Pascal (and Maxwell) cards with different GPU rendering sw and the difference can range from 20%-50% between the GTX1080 Ti and the TITAN V.
GPU rendering - Maxwell vs Pascal vs Volta performance scaling - V-Ray, Redshift, Indigo Renderer, LuxMark, Blender Cycles @ BoostClock.com
First of all, I love the work you're doing.

As for the results themselves, am I correct in assuming that the straight lines are mean curves passing through the highest-performing card in each generation? If so, it is quite surprising to see Maxwell doing much better than Pascal in normalized perf/GFLOP, and except for Redshift and Blender, which are scene-dependent, Volta trumps them all pretty consistently by a wide margin. I would therefore expect Turing to behave similarly, if not even better, in these benchmarks.
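For anyone skimming, here is a minimal sketch of what "normalized perf/GFLOP" means when the top performer is taken as 100% (the numbers below are placeholders, not BoostClock's actual data):

# Hypothetical perf-per-GFLOP normalization (placeholder numbers, not real data):
# divide each card's score by its peak FP32 GFLOPS, then scale so the best reads 100%.
cards = {
    # name: (benchmark score, peak FP32 GFLOPS)
    "card_A": (100.0, 5000.0),
    "card_B": (160.0, 9000.0),
    "card_C": (230.0, 14000.0),
}

per_gflop = {name: score / gflops for name, (score, gflops) in cards.items()}
best = max(per_gflop.values())

for name, value in per_gflop.items():
    print(f"{name}: {100.0 * value / best:.1f}% of the best perf/GFLOP")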
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
I have recently benchmarked TITAN V against the Pascal (and Maxwell) cards with different GPU rendering sw and the difference can range from 20%-50% between the GTX1080 Ti and the TITAN V.
GPU rendering - Maxwell vs Pascal vs Volta performance scaling - V-Ray, Redshift, Indigo Renderer, LuxMark, Blender Cycles @ BoostClock.com
There is so much wrong with this test methodology it's not even funny.

You are testing on a 2016 OS update, on Linux, on OpenCL, on an R5 1600, while using pseudo-scientific "normalized" numbers compared against GFLOPS, when almost all your tests are memory bandwidth/latency limited.

Trying to extract any useful information out of them with this testing methodology, outside of the memory bandwidth and latency of the specific cards, is just a complete misunderstanding of how any of this stuff works.
 
Last edited:

sze5003

Lifer
Aug 18, 2012
14,184
626
126
I am running a 1060 GTX 6GB - how much faster will the RTX 2080 Ti be?

I recently bought an Asus 144hz 1440p G-Sync monitor and get about 50fps at max setting in PUBG at 1440p (OW is typically 110 FPS).
Coming from a 1060 it's a good upgrade if you don't mind spending the money on it. I'd say get a 1080 Ti instead. But better to wait until Monday and then look out for some benchmarks whenever those come.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
There is so much wrong with this test methodology it's not even funny.

You are testing on a 2016 OS update, on Linux, on OpenCL, on an R5 1600, while using pseudo-scientific "normalized" numbers compared against GFLOPS, when almost all your tests are memory bandwidth/latency limited.

Trying to extract any useful information out of them with this testing methodology, outside of the memory bandwidth and latency of the specific cards, is just a complete misunderstanding of how any of this stuff works.
TIL that memory latency is an important factor in GPU rendering, and that performance metrics reported in terms of wall time or Msamples/Mrays per second are "pseudo-scientific". Can I have what you're smoking?
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
TIL that memory latency is an important factor in GPU rendering, and that performance metrics reported in terms of wall time or Msamples/Mrays per second are "pseudo-scientific". Can I have what you're smoking?

Trying to mask your lack of knowledge by addressing zero of my points but trying to build a straw man instead. Typical.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Trying to mask your lack of knowledge by addressing zero of my points but trying to build a straw man instead. Typical.
Typical response trying to sound smart without knowing that many of the commercially available renderers are OpenCL-based, that GPU compute is driven by the CPU issuing compute kernels, and that as long as you don't measure the time it takes to prepare those kernels, GPU compute is pretty much independent of the CPU as far as rendering is concerned.
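To make the point concrete, here is a minimal sketch (assuming pyopencl and numpy are available; this is not code from any of the renderers discussed) of reading a GPU-only time from OpenCL profiling events, so CPU-side setup time never enters the measurement:

# Minimal OpenCL example: time only the kernel execution via profiling events.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

src = """
__kernel void scale(__global float *buf, const float k) {
    int gid = get_global_id(0);
    buf[gid] *= k;
}
"""
prg = cl.Program(ctx, src).build()

host = np.random.rand(1 << 20).astype(np.float32)
dev = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=host)

# Context setup, kernel compilation and the buffer upload above are CPU-side work;
# the profiling event below only covers the kernel's execution on the GPU.
evt = prg.scale(queue, host.shape, None, dev, np.float32(2.0))
evt.wait()
gpu_ms = (evt.profile.end - evt.profile.start) * 1e-6   # profile times are in ns
print(f"GPU-only kernel time: {gpu_ms:.3f} ms")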
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
Typical response trying to sound smart without knowing that many of the commercially available renderers are OpenCL-based, that GPU compute is driven by the CPU issuing compute kernels, and that as long as you don't measure the time it takes to prepare those kernels, GPU compute is pretty much independent of the CPU as far as rendering is concerned.

Keep going.

Make sure to link to your website of choice which shows how a 1080 is the same speed as a 980, because it couldn't possibly be because of a purposely CPU-limited test, right?

Oh, and good job making another straw man while also ignoring my original post yet again.

Let's see if you can top the number of strawmen you produced the last time you word-vomited at me in a thread.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Keep going.

Make sure to link to your website of choice which shows how a 1080 is the same speed as a 980, because it couldn't possibly be because of a purposely CPU-limited test, right?

Oh, and good job making another straw man while also ignoring my original post yet again.

Let's see if you can top the number of strawmen you produced the last time you word-vomited at me in a thread.
Here you go: an 8086K at 5 GHz running the LuxMark Microphone render, with the 1070 and 1070 Ti performing nearly identically. CPU bottleneck, right? Here, let me give you a hint: rendering needs a compute API, not a graphics API.
 

SirDinadan

Member
Jul 11, 2016
108
64
71
boostclock.com
@tamz_msc
Thx! The lines are just simple straight lines to the top card of each architecture.
In my reasoning, if the dots from a uarch sit on the top card's line, that means great scaling: adding more CUDA cores / frequency is not wasted on the top GPU.

Moreover, I wanted to highlight that even the same render engine can produce very different results depending on the scene. My next aim is to introduce more scenes for V-Ray, Redshift etc.
Additionally, some of the benchmark scenes are a bit old or don't represent "real-world" problem complexity, except for the last Blender scene, which is from an actual production (and the scaling is quite good).

@24601
- Latest CUDA on Ubuntu is only supported on 17.10 and 16.04. I went for 16.04 as it is an LTS release, and of course everything was patched up with the latest updates.
- Tested on Win10 as well; you can check out the results. The performance is the very same in OpenCL-based renderers. Win10 can be slower with CUDA sometimes (depending on software / scene).
- On the OpenCL thing: these are GPU render engines; some of them are exclusive to NV and use CUDA, while Indigo and LuxMark are OpenCL-based.
- Pseudo-scientific "normalized" numbers - what? Everything is elaborated in the "test methodology" section: some render engines provide render times, some MSamples/sec. Normalization is just taking the top performer as 100%.
- R5 1600 - the next thing for the channel is to get a HEDT platform. These are GPU render engines; if everything is fine, the R5 1600 should get the job done. As I mentioned, I will test with a HEDT platform and compare the results.
- I compared our results with Phoronix and PugetSystems and everything agreed - and they used top-of-the-line CPUs.

I'm always open to constructive suggestions on what I have got wrong or how to present the data, although I'm a bit baffled by the outright negativity. Measuring, processing and presenting this kind of data takes a lot of time and effort, and I do my best to be as transparent as possible about system details and how I collect the scores.

I'm sorry for derailing the thread. If you want to contact me about my wrongdoing on the benchmark, I guess you can find my email on the site, or just create another topic. No more comments from me on the issue as this is off-topic. Sorry again!
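For what it's worth, one way to picture the scaling line described above: draw a line from the origin through the top card of the architecture in a score-vs-GFLOPS plot, and a card "sits on the line" when its measured score is close to what the line predicts (placeholder numbers below, not the actual measurements):

# Sketch of the "dots sit on the top card's line" check (placeholder numbers).
top_card = {"score": 230.0, "gflops": 14000.0}      # top card of the uarch
slope = top_card["score"] / top_card["gflops"]       # score per GFLOP along the line

other_cards = {
    "mid_card":   {"score": 150.0, "gflops": 9500.0},
    "small_card": {"score": 80.0,  "gflops": 5200.0},
}

for name, card in other_cards.items():
    expected = slope * card["gflops"]
    ratio = card["score"] / expected
    print(f"{name}: measured/expected = {ratio:.2f}")   # ~1.00 means it scales with GFLOPS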
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
@tamz_msc
Thx! The lines are just simple straight lines to the top card of each architecture.
In my reasoning, if the dots from a uarch sit on the top card's line, that means great scaling: adding more CUDA cores / frequency is not wasted on the top GPU.

Moreover, I wanted to highlight that even the same render engine can produce very different results depending on the scene. My next aim is to introduce more scenes for V-Ray, Redshift etc.
Additionally, some of the benchmark scenes are a bit old or don't represent "real-world" problem complexity, except for the last Blender scene, which is from an actual production (and the scaling is quite good).

@24601
- Latest CUDA on Ubuntu is only supported on 17.10 and 16.04. I went for 16.04 as it is an LTS release, and of course everything was patched up with the latest updates.
- Tested on Win10 as well; you can check out the results. The performance is the very same in OpenCL-based renderers. Win10 can be slower with CUDA sometimes (depending on software / scene).
- On the OpenCL thing: these are GPU render engines; some of them are exclusive to NV and use CUDA, while Indigo and LuxMark are OpenCL-based.
- Pseudo-scientific "normalized" numbers - what? Everything is elaborated in the "test methodology" section: some render engines provide render times, some MSamples/sec. Normalization is just taking the top performer as 100%.
- R5 1600 - the next thing for the channel is to get a HEDT platform. These are GPU render engines; if everything is fine, the R5 1600 should get the job done. As I mentioned, I will test with a HEDT platform and compare the results.
- I compared our results with Phoronix and PugetSystems and everything agreed - and they used top-of-the-line CPUs.

I'm always open to constructive suggestions on what I have got wrong or how to present the data, although I'm a bit baffled by the outright negativity. Measuring, processing and presenting this kind of data takes a lot of time and effort, and I do my best to be as transparent as possible about system details and how I collect the scores.

I'm sorry for derailing the thread. If you want to contact me about my wrongdoing on the benchmark, I guess you can find my email on the site, or just create another topic. No more comments from me on the issue as this is off-topic. Sorry again!

Using "normalized" numbers to imply that results should be linear regardless of memory bandwidth and latency of the cards.

Many compute workloads are explicitly memory bandwidth and/or memory latency bottlenecked.

Those happen to be most of the ones you chose to test.

What this misses is that this thread is about gaming cards, and therefore the implied purpose of this thread is about gaming performance.

The reason this is relevant is that each subsequent version of Nvidia's gaming cards has improved texture compression (the compression used both for storing the textures for intermediate steps and for the original streaming from VRAM to L2/L1/L0).

This allows Nvidia to lower the memory bandwidth to GFLOPS ratio with each architectural advance from GF1xx to GP1xx/GV1xx and perhaps TU1xx.

An objective observer would see your "testing" as purposefully implying that gaming performance per GFLOP has regressed with each subsequent architectural advance, when the opposite is actually the case. (Either you successfully got tamz_msc to think that, or they simply want any possibly negative thing about Nvidia shouted from the rooftops.)
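As a rough illustration of the bandwidth-to-GFLOPS point, using approximate reference specs for the x80 Ti flagships (boost clocks and effective bandwidth vary by card, so treat these as ballpark figures):

# Ballpark memory bandwidth per TFLOP for the x80 Ti flagships (approximate
# reference specs; actual boost clocks vary by card and cooling).
specs = {
    # name: (memory bandwidth GB/s, CUDA cores, reference boost MHz)
    "GTX 780 Ti (Kepler)":  (336.0, 2880, 928),
    "GTX 980 Ti (Maxwell)": (336.5, 2816, 1075),
    "GTX 1080 Ti (Pascal)": (484.0, 3584, 1582),
}

for name, (bw_gbs, cores, boost_mhz) in specs.items():
    tflops = cores * 2 * boost_mhz * 1e6 / 1e12   # FP32 FMA: 2 flops per core per clock
    print(f"{name}: {bw_gbs / tflops:.0f} GB/s per TFLOP")   # roughly 63, 56, 43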
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,359
5,017
136
You would get about 150fps.
Especially if you have an overclocked modern Intel CPU.

150 FPS when nothing is going on (even with some settings at low).

With an overclocked i7-8700K and a GTX 1080 Ti that boosts to ~2000MHz I can't maintain 144 FPS at 1440p, much less 100 FPS when the action gets intense. PUBG is poorly optimized.
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
150 FPS when nothing is going on (even with some settings at low).

With an overclocked i7-8700K and a GTX 1080 Ti that boosts to ~2000MHz I can't maintain 144 FPS at 1440p, much less 100 FPS when the action gets intense. PUBG is poorly optimized.

PUBG simply has more actual things happening at once within player view range than the bevy of console ports neutered by having to support smooth play on 4+2-4 Jaguar cores clocked at 1.6 GHz.

It's simply a matter of design target.

If PUBG had been designed with strict console playability in mind, it would simply have fewer things going on (i.e. Destiny 2) and then people would think it was "better optimized".

It's the classic PC first vs Console first design philosophy clash.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Strawman post #n, not even 1 post pause between your strawman word vomit.

Keep ignoring my original post.
Using "normalized" numbers to imply that results should be linear regardless of memory bandwidth and latency of the cards.

Many compute workloads are explicitly memory bandwidth and/or memory latency bottlenecked.

Those happen to be most of the ones you chose to test.

What this misses is that this thread is about gaming cards, and therefore the implied purpose of this thread is about gaming performance.

The reason this is relevant is that each subsequent version of Nvidia's gaming cards has improved texture compression (the compression used both for storing the textures for intermediate steps and for the original streaming from VRAM to L2/L1/L0).

This allows Nvidia to lower the memory bandwidth to GFLOPS ratio with each architectural advance from GF1xx to GP1xx/GV1xx and perhaps TU1xx.

An objective observer would see your "testing" as purposefully implying that gaming performance per GFLOP has regressed with each subsequent architectural advance, when the opposite is actually the case. (Either you successfully got tamz_msc to think that, or they simply want any possibly negative thing about Nvidia shouted from the rooftops.)
I was replying to another guy saying that the average gaming performance increase of the Titan V compared to the 1080 Ti was 30-35%. This other guy did some comparisons in GPU rendering, which is a compute workload, NOT a graphics workload, where the Titan V can be utilized even more effectively and thus give higher performance. That led me to comment that Turing would probably bring improvements that help better utilize the CUDA cores, just like how Volta is able to better utilize its CUDA cores for compute.

And you with all your 'knowledge' and radiating intelligence all around come along and call these compute benchmarks pseudo-scientific and harp on like a broken tape-recorder saying things that are completely irrelevant to the original discussion I was trying to have.
 
Reactions: SirDinadan

24601

Golden Member
Jun 10, 2007
1,683
39
86
as expected, trying to ret-con your mistakes instead of admitting to them.

Classic.

As predictable as one of those new-fangled AI algos.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
as expected, trying to ret-con your mistakes instead of admitting to them.

Classic.

As predictable as one of those new-fangled AI algos.
I'm still waiting for your explanation as to why factoring out the time the CPU takes to issue compute kernels to the GPU when reporting benchmark scores makes the benchmarking irrelevant because it was done with a Ryzen CPU.
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
I'm still waiting for your explanation as to why factoring out the time the CPU takes to issue compute kernels to the GPU when reporting benchmark scores makes the benchmarking irrelevant because it was done with a Ryzen CPU.

Literally still making straw-men arguments after the argument is already over.

Fix your AI posting algo bro, it's busted.

break;
return = 0;
#ChrisHookYouForgotToTurnOffYourBurnerWhenYouLeftTheHouse
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Literally still making straw-men arguments after the argument is already over.

Fix your AI posting algo bro, it's busted.
Thankfully this forum has the ignore button for differently-abled folks.
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
STFU you 2 - you’re ruining the thread with your purse swinging.
And predictably, incoming false equivalency.

Such sore losers, lol.

Posting bullcrap at me and then blaming me that you posted bullcrap at me.

Classic.
 

24601

Golden Member
Jun 10, 2007
1,683
39
86
So ~$899 for an EVGA 2080 Ti. Paper launch still, or shipping in the next week or two?
AIBs are already taking pre-orders. The one on Nvidia's site could be day 1 or pre-order; won't know until Monday.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
True, the math seems off. 2944 cores would require a 1925 MHz boost just to match a 1080 Ti. It may be that Nvidia is getting ready with much better gaming drivers for the Turing launch.
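That 1925 MHz figure is just the cores-times-clock arithmetic, assuming performance scales linearly with both and taking the 1080 Ti's reference boost of ~1582 MHz as the baseline:

# Back-of-the-envelope check: the boost clock a 2944-core part would need to
# match a 1080 Ti on raw cores x clock (1080 Ti reference: 3584 cores, ~1582 MHz).
gtx_1080_ti_cores, gtx_1080_ti_boost_mhz = 3584, 1582
rtx_2080_cores = 2944

required_boost_mhz = gtx_1080_ti_cores * gtx_1080_ti_boost_mhz / rtx_2080_cores
print(f"Required boost: {required_boost_mhz:.0f} MHz")   # ~1926 MHz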

Gotta remember the Titan V handles Async/DX12 loads more efficiently than Pascal. In Sniper Elite, for example, it was 50-60% faster than the 1080 Ti despite only having 42.8% more shaders. So it's possible we'll see a slight IPC increase from drivers.

There doesn't seem to be any super high clocking going on.
https://videocardz.com/77439/pny-geforce-rtx-2080-ti-xlr8-series-leaked-by-pny

The overclocked 2080 has a 1710 MHz boost speed, while the 2080 Ti has a 1545 MHz boost speed.
 