Anybody else unimpressed with new midrange Nvidia GPUs, and much higher MSRP?

Page 34 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Is it ordinary to get such a linear boost in performance with OC?

2.5/1.7=1.47
31286/21828=1.43
15280/10367=1.47
7369/4998=1.47

Also, is he using a reference card? Using the base 180 watts TDP, wouldn't the card need something like 270 watts for such an OC, which I assume needs an extra power connector (someone please correct me if I'm wrong)?

No, it's as if memory bandwidth doesn't bottleneck the card at all, which is strange, because even on my Titan I see an improvement in fps when I OC the memory. This card has almost the same bandwidth as the original Titan, yet the core is 3 or 4 times faster, and the memory doesn't bottleneck it one bit? Well, there are bound to be games and modes where it will, but I expected a bit of a memory bottleneck across the board.
Double the stock Titan X with the same memory bandwidth (lower, even). How effective is memory compression on the newest cards? Could dual-channel DDR4-2400 be enough for an APU whose GPU has similarly effective memory compression to surpass current APUs by 2-3x?
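A rough back-of-the-envelope sketch of the compression question; the raw-bandwidth figures and the compression ratio below are illustrative guesses, not published specs:

```python
# Effective bandwidth estimate: raw memory bandwidth scaled by an
# assumed average compression factor for frame-buffer traffic.
# All numbers here are illustrative, not measured.
def effective_bandwidth(raw_gbs, compression_ratio):
    """Raw bandwidth (GB/s) times an assumed average compression ratio."""
    return raw_gbs * compression_ratio

titan_classic = effective_bandwidth(288.0, 1.0)   # older card, no DCC assumed
pascal_card = effective_bandwidth(320.0, 1.25)    # assumed ~25% benefit from DCC

print(f"older card (no compression): {titan_classic:.0f} GB/s effective")
print(f"newer card (assumed 25% DCC): {pascal_card:.0f} GB/s effective")
```

The point of the sketch: compression multiplies *effective* bandwidth, so two cards with the same raw GB/s can behave very differently under load.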
 
Last edited:

Buttercream

Member
Sep 25, 2013
39
3
71
Someone on Chiphell benched a 1080 at 2.5GHz on air and says these are the scores; it's also 46% faster than a 980 Ti @ 1500/8000.

With this OC, it scores almost double the stock Titan X.


2.5GHz was achieved with watercooling, according to the title.
 

renderstate

Senior member
Apr 23, 2016
237
0
0
Is it ordinary to get such a linear boost in performance with OC?

2.5/1.7=1.47
31286/21828=1.43
15280/10367=1.47
7369/4998=1.47

Also, is he using a reference card? Using the base 180 watts TDP, wouldn't the card need something like 270 watts for such an OC, which I assume needs an extra power connector (someone please correct me if I'm wrong)?

It's not ordinary, especially when the overclocking factor is this large. It smells fake.
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
Your PM to me mentioned nothing of the 1070... Here was my response to you, also no mention of the 1070...



It's sad we have to deal with people like you on this forum... :\


Personal attacks are not allowed.
Markfw900

That's not what I meant. You said the GTX 770 is not powerful enough to utilize more than 2GB of VRAM. I disagree with that statement, but that's what you said.
Now the upcoming GTX 1070 is supposed to be close to double the performance of the GTX 770. So by that logic of yours, it's fairly easy to conclude that the 1070 will not be fast enough to utilize more than 4GB of VRAM if the GTX 770 can't utilize more than 2GB.
Again, I believe you're wrong. But if I do assume you're right, as you're a more experienced forum member than me, then shouldn't Nvidia also release a cheaper 4GB 1070, since according to your theory the 1070 can't utilize more than 4GB anyway?
You said "For how powerful a single 770 is, most games/settings that push VRAM over 2GB are going to get poor performance."
So by your logic it should also be "For how powerful a single 1070 will be, most games/settings that push VRAM over 4GB are going to get poor performance."
 
Last edited:

Freddy1765

Senior member
May 3, 2011
389
1
81
You said "For how powerful a single 770 is, most games/settings that push VRAM over 2GB are going to get poor performance."
So by your logic it should also be "For how powerful a single 1070 will be, most games/settings that push VRAM over 4GB are going to get poor performance."

What?
That's not how this works. That's not how any of this works.
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
What?
That's not how this works. That's not how any of this works.

Yeah, probably not. But from what I've learned so far, I assumed that if card A is twice as powerful as card B, then card A should be able to utilize twice as much VRAM as card B.
So if you could be kind enough to explain how it actually works, that would be great. Do bus width, memory bandwidth, number of SPs, ROPs, shaders, manufacturing process, architecture, etc. also play a part? I would love to know how a person can accurately determine how much VRAM a card can or cannot use.
 

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
Yeah, probably not. But from what I've learned so far, I assumed that if card A is twice as powerful as card B, then card A should be able to utilize twice as much VRAM as card B.
So if you could be kind enough to explain how it actually works, that would be great. Do bus width, memory bandwidth, number of SPs, ROPs, shaders, manufacturing process, architecture, etc. also play a part? I would love to know how a person can accurately determine how much VRAM a card can or cannot use.
It depends only on what the games want. If a game has super-high-resolution textures but isn't very demanding otherwise, even slow GPUs can make use of lots of VRAM.

Some games don't allow "ultra" textures on cards without enough VRAM; others use a dynamic system, looking at how much VRAM is available and streaming in the appropriate texture detail for each object depending on where the player is in the game world.
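The dynamic approach described above can be sketched roughly like this; the mip sizes, distance cutoff, and function name are all invented for illustration:

```python
# Toy sketch of VRAM-aware texture streaming: pick a texture mip level
# per object based on the remaining VRAM budget and object distance.
MIP_SIZES_MB = [256, 64, 16, 4]   # mip 0 (full res) down to mip 3 (smallest)

def pick_mip(vram_free_mb, distance_to_player):
    """Prefer the highest-detail mip that fits the remaining budget,
    and skip the top mips entirely for distant objects."""
    start = 0 if distance_to_player < 50 else 2  # far objects never need mip 0/1
    for mip in range(start, len(MIP_SIZES_MB)):
        if MIP_SIZES_MB[mip] <= vram_free_mb:
            return mip
    return len(MIP_SIZES_MB) - 1  # fall back to the smallest mip

print(pick_mip(vram_free_mb=512, distance_to_player=10))  # plenty of VRAM: mip 0
print(pick_mip(vram_free_mb=32, distance_to_player=10))   # tight budget: mip 2
```

This is why "how much VRAM can this card use" has no fixed answer: the engine itself decides how much to use based on what's available.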
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
It depends only on what the games want. If a game has super-high-resolution textures but isn't very demanding otherwise, even slow GPUs can make use of lots of VRAM.

Some games don't allow "ultra" textures on cards without enough VRAM; others use a dynamic system, looking at how much VRAM is available and streaming in the appropriate texture detail for each object depending on where the player is in the game world.

Well, if it depends on particular games, then why have I been seeing such extremely generalized statements like "this card isn't fast enough to utilize this much VRAM" or "this card will run out of horsepower before it can utilize all the VRAM" over the past many years?
It's just that I'm annoyed by people throwing around meaningless statements that a certain card isn't powerful enough to utilize a certain amount of VRAM, leading a lot of people to buy cards with less VRAM when the higher-capacity one was just $20-50 more. And then I see a lot of statements like "If only I had got the model with double the VRAM, I wouldn't have had to upgrade so soon."
Forum members pushing people to buy the lower-VRAM card just to save a few dollars is one of the things that annoys me.
Thankfully the 1070 will come only in 8GB models, so this debate can rest for now. However, when Nvidia announces a GTX 1060 4GB for $229 and a GTX 1060 8GB for $279, I bet we will see many "the GTX 1060 isn't powerful enough to take advantage of more than 4GB of VRAM" statements.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Well, if it depends on particular games, then why have I been seeing such extremely generalized statements like "this card isn't fast enough to utilize this much VRAM" or "this card will run out of horsepower before it can utilize all the VRAM" over the past many years?
It's just that I'm annoyed by people throwing around meaningless statements that a certain card isn't powerful enough to utilize a certain amount of VRAM, leading a lot of people to buy cards with less VRAM when the higher-capacity one was just $20-50 more. And then I see a lot of statements like "If only I had got the model with double the VRAM, I wouldn't have had to upgrade so soon."
Forum members pushing people to buy the lower-VRAM card just to save a few dollars is one of the things that annoys me.
Thankfully the 1070 will come only in 8GB models, so this debate can rest for now. However, when Nvidia announces a GTX 1060 4GB for $229 and a GTX 1060 8GB for $279, I bet we will see many "the GTX 1060 isn't powerful enough to take advantage of more than 4GB of VRAM" statements.

Because people make blanket statements all the time instead of being honest and telling people the truth. They're trying to stand up for their favorite GPU company, as well as justify their own purchasing decisions.
 

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
Well, if it depends on particular games, then why have I been seeing such extremely generalized statements like "this card isn't fast enough to utilize this much VRAM" or "this card will run out of horsepower before it can utilize all the VRAM" over the past many years?
It's just that I'm annoyed by people throwing around meaningless statements that a certain card isn't powerful enough to utilize a certain amount of VRAM, leading a lot of people to buy cards with less VRAM when the higher-capacity one was just $20-50 more. And then I see a lot of statements like "If only I had got the model with double the VRAM, I wouldn't have had to upgrade so soon."
Forum members pushing people to buy the lower-VRAM card just to save a few dollars is one of the things that annoys me.
4GB models used to be significantly more expensive; currently the premium is only ~10% on something like a 960, so it's a no-brainer.

The argument that slow cards don't need lots of VRAM is that high-resolution textures might not make a big difference to image quality if you're running 720p without antialiasing. But people don't like nuance; they want a yes/no answer.
 

Timmah!

Golden Member
Jul 24, 2010
1,463
729
136
AMD GPUs superior at OpenCL....nothing new here. Show me some CUDA results, how it compares to GM200.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
AMD GPUs superior at OpenCL....nothing new here. Show me some CUDA results, how it compares to GM200.

CUDA is a closed standard. OpenCL on the other hand is open to both vendors. And apparently nVidia hasn't figured out a way to inject OpenCLworks into it yet.
 

renderstate

Senior member
Apr 23, 2016
237
0
0
CUDA is the de facto standard compute language. OpenCL is open but crappy and completely irrelevant. When Google prefers writing their own CUDA compiler instead of using OpenCL, you already know who is king of the hill.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
CUDA is the de facto compute language standard. OpenCL is open but crappy and completely irrelevant. When Google prefers writing their own CUDA compiler instead of using OpenCL you already know who is the king of the hill.
Is this based on your experience with it?

OpenCL and CUDA are exactly the same type of compute API. They are different, of course; what differentiates them most is ease of programming. Because OpenCL's documentation is severely lacking and you need to optimize for every piece of hardware out there, it is not easy to write applications for OCL. OTOH, CUDA is very well documented and has very good Nvidia driver support, because it is Nvidia's own proprietary API.

It is beyond me that people believe that, thanks to CUDA, a 4 TFLOPS GPU from Nvidia is faster than a 6 TFLOPS GPU from AMD. That is how strong the mindshare is. AMD, however, is playing big with the Boltzmann Initiative and its CUDA-to-OpenCL compilers.
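For reference, TFLOPS figures like the 4-vs-6 comparison above come straight from shader count × clock × 2 FLOPs per ALU per cycle (one fused multiply-add). The shader counts and clocks below are approximate examples, not exact specs of any particular card:

```python
# Theoretical single-precision throughput in TFLOPS:
# shaders * clock (GHz) * 2 FLOPs per cycle (one FMA), divided by 1000.
def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

# Approximate examples: a ~1920-shader part at 1.6 GHz vs a
# ~3584-shader part at 1.0 GHz.
print(f"{tflops(1920, 1.6):.1f} TFLOPS")
print(f"{tflops(3584, 1.0):.1f} TFLOPS")
```

The caveat being argued in the thread: theoretical TFLOPS says nothing about how much of that throughput real software (CUDA or OpenCL) actually extracts.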

The only proper implementation of OpenCL I have seen in any application so far is Final Cut Pro X.

And there are rumors that it will get a GIGANTIC update with VR features at the upcoming WWDC. Guess which vendor will provide the drivers and hardware for that platform?

Vulkan will also use OpenCL for compute in gaming. Keep that in mind.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
CUDA is the de facto compute language standard. OpenCL is open but crappy and completely irrelevant. When Google prefers writing their own CUDA compiler instead of using OpenCL you already know who is the king of the hill.

Irrelevant for the particular post you made. We can't have the bench you requested because CUDA is closed.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
CUDA is the de facto compute language standard. OpenCL is open but crappy and completely irrelevant. When Google prefers writing their own CUDA compiler instead of using OpenCL you already know who is the king of the hill.
CUDA is good for certain things, and its performance is matched only by the next best Nvidia card.
OpenCL, on the other hand, isn't really bound to specific hardware, and that makes it quite the best option...

I agree CUDA is to GPGPU what Steam is to the digital market. The problem is that Nvidia never really cared to understand the market for those kinds of things, and that is reflected in the recent lttiam decision to enable OpenCL on most smartphones and tablets out there.

So tell me, how long do you think Nvidia will keep CUDA as a closed-source ecosystem, given that OpenCL practically supports everything out there that matters: GPUs, GPPs, FPGAs, and DSPs?
 

renderstate

Senior member
Apr 23, 2016
237
0
0
It's irrelevant; OpenCL is already on its way out. Even Apple, which started the whole OpenCL effort, is no longer interested in it. It's a crappy API surrounded by even crappier implementations. CUDA not being open hasn't stopped Google from implementing their own CUDA compiler. CUDA is simply the standard, and OpenCL is a bad, subpar imitation.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
CUDA is good for certain things, and its performance is matched only by the next best Nvidia card.
OpenCL, on the other hand, isn't really bound to specific hardware, and that makes it quite the best option...

I agree CUDA is to GPGPU what Steam is to the digital market. The problem is that Nvidia never really cared to understand the market for those kinds of things, and that is reflected in the recent lttiam decision to enable OpenCL on most smartphones and tablets out there.

So tell me, how long do you think Nvidia will keep CUDA as a closed-source ecosystem, given that OpenCL practically supports everything out there that matters: GPUs, GPPs, FPGAs, and DSPs?

It is absolutely irrelevant to corporations whether it is closed or open; the only thing that matters to them is support and timeliness in implementing CRs. I work in medical imaging, where we do some of our heavy computational work on GPUs, and we evaluated quite a few options before taking the plunge with CUDA. In short, OpenCL is a hot mess. The scalability was very inconsistent, and getting some things done was a nightmare.
 

Glo.

Diamond Member
Apr 25, 2015
5,763
4,667
136
It's irrelevant, OpenCL is already on its way out. Even Apple, that started the whole OpenCL effort, is not interested in it anymore. It's a crappy API surrounded by even crappier implementations. CUDA not being open hasn't stopped Google from implementing their own CUDA compiler. CUDA is simply the standard and OpenCL is a bad and subpar imitation.

Well, in some form OpenCL is on its way out. Why? Because AMD is working on its own CUDA-like API. It will be open source. It will be quite similar to Metal, from Apple, and it revolves around the HSA 2.0 initiative. But in essence it is just another compute API. Oh, and it is based on Mantle 2.0.

And no, OpenCL in its current form isn't going anywhere. People claiming that OCL is irrelevant or bad are completely clueless about what they are talking about. It will be used for Metal, Vulkan, and other general computing uses.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
It's irrelevant, OpenCL is already on its way out. Even Apple, that started the whole OpenCL effort, is not interested in it anymore. It's a crappy API surrounded by even crappier implementations. CUDA not being open hasn't stopped Google from implementing their own CUDA compiler. CUDA is simply the standard and OpenCL is a bad and subpar imitation.
Please give me any link stating that... As far as I can see, more and more companies and programs are actually supporting OpenCL.
CUDA is not a standard; CUDA is just ONE of many ways. You just can't see what is going on between GPUOpen, OpenCL, HSA, and "Mantle". (If you can't see how certain companies (7, to be precise) want a totally open-source package for everything, and that this is what they are pushing for, well, I don't know.)
 
Last edited: