Idea: should nvidia offer a way to run the 970 as just 3.5 GB?

Feb 15, 2014
119
0
76
Help me out for a second here:

So it writes to 7,1 (the first bus section), but then to write to 7,2 (the second split via the buddy interface), does it have to repeat all the steps?

On the 980: 12345678
On the 970: 123456(7,1)(1234567)(7,2)

Where the repeated run in the middle is not actually being used

Instead of jumping to 8, since it needs to access 7 again, it has to start all the way from the beginning. Is that why the speed drops to 1/8?

I'm no expert, but why not find a way to make use of the 123456 buses while doing its second round for the 7,2 channel? Then it would be able to use either 7,1 or 7,2.
Like:
1234567[1or2]

It seems smarter to design it this way, but deep down there's probably a reason they did it the way they have.
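Roughly what I'm picturing, as a toy model in Python (just my mental picture of the striping, not how the real crossbar works):

Code:
# Toy model of the striping patterns I'm describing above.
# This is only my mental picture, not Nvidia's actual crossbar logic.

def stripe_980(chunk_index):
    # 980: every chunk rotates across all 8 channels (1..8)
    return chunk_index % 8 + 1

def stripe_970(chunk_index, use_slow_segment=False):
    # 970: the 3.5GB portion rotates across 7 channels; the extra
    # 0.5GB segment "7,2" hangs off the same port as "7,1"
    if use_slow_segment:
        return "7,2"
    channels = [1, 2, 3, 4, 5, 6, "7,1"]
    return channels[chunk_index % 7]

for i in range(8):
    print(i, stripe_980(i), stripe_970(i))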
 
Last edited:

Pneumothorax

Golden Member
Nov 4, 2002
1,182
23
81
3.5GB 970 / 4GB 980
2GB 770 / 3GB 780
1.2GB 570 / 1.5GB 580
896MB 275 / 1GB 280
512MB 8800 GT / 768MB 8800 GTX

Looking at the above, the whole 3.5GB 970 issue suddenly really isn't surprising me much, honestly. Expecting a true 4GB 970 would have broken a trend Nvidia has been following since at least the 8800 series on their top-tier and second-best cards.

It still sucks, but I wouldn't be surprised if this trend continues in the future.

Then that's all the more reason why 'cut down' Nvidia chips should be avoided in the future. I don't understand why they're so stinkin' stingy when it comes to VRAM. Planned obsolescence? How is AMD able to provide the same RAM on their 'cut-down' parts? Nvidia is just like Apple when it comes to regular system RAM.
 
Feb 15, 2014
119
0
76
3.5GB 970 / 4GB 980
2GB 770 / 3GB 780
1.2GB 570 / 1.5GB 580
896MB 275 / 1GB 280
512MB 8800 GT / 768MB 8800 GTX

Looking at the above, the whole 3.5GB 970 issue suddenly really isn't surprising me much, honestly. Expecting a true 4GB 970 would have broken a trend Nvidia has been following since at least the 8800 series on their top-tier and second-best cards.

It still sucks, but I wouldn't be surprised if this trend continues in the future.

770 was GK104
780 was GK110
entirely different chips!!!

Compare 670 vs 680=770

They probably realized they were losing chips because of memory defects, so they decided to add provisions in their architecture to take care of these problems.

There are probably 970 chips out there with perfectly good memory interfaces that have been laser-cut to narrow the bus width.

They should have released this as a 965 (3.5GB, 224-bit) and a 970 (4GB, 256-bit),
or
a 970 (3.5GB, 224-bit) and a 970 Ti (4GB, 256-bit); no doubt the 3.5GB figure looks bad.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
770 was GK104
780 was GK110
entirely different chips!!!

Compare 670 vs 680=770

They probably realized they were losing chips because of memory defects, so they decided to add provisions in their architecture to take care of these problems.

There are probably 970 chips out there with perfectly good memory interfaces that have been laser-cut to narrow the bus width.

They should have released this as a 965 (3.5GB, 224-bit) and a 970 (4GB, 256-bit),
or
a 970 (3.5GB, 224-bit) and a 970 Ti (4GB, 256-bit); no doubt the 3.5GB figure looks bad.

Valid points, but I wonder if they would have had to artificially cut down too many intact GM204s (intact 980s) to meet the SKU demand.

It appears the 'design' intent of the 970 was to allow for a variety of fab failures to be included in the 970 SKU.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
770 was GK104
780 was GK110
entirely different chips!!!

When I got my 770, the 680 was already phased out, so the 780 was pretty much the fair comparison. It was still the same 700 series. GPU Boost 2.0 and a higher bin on the 680 made the 770 nearly a whole new card in itself.

I believe voltage and TDP were also raised; I need to double-check that.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Well....

A 3.5GB stride is all that can happen at once; that's 196GB/s. The segmented 500MB has the same bandwidth as every one of the other seven memory blocks. They all have the same bandwidth capability, 28GB/s. Every last one of them.

That's eight 500MB memory blocks at 28GB/s each.

28 GB/s x 8 = 224 GB/s

That's where we get the 224.

The segmented RAM has the same speed as every other 500MB block. The AnandTech article actually states this clearly, but soon afterwards they start making things really muddy. So, to keep things clear, let me restate:
The segmented RAM is no slower than any other 500MB block when it is accessed.

Soon after the AnandTech article explains that, they start lumping all the other 500MB blocks together as one large 3.5GB memory block. Everyone is calling this the fast RAM, but it's actually seven 500MB blocks at 28GB/s each; that's where you get the 196GB/s figure.

But that's where this entire thing is getting confused. If you were pushing every bit of the RAM through at one time, it could only come from 7 of the 8 RAM blocks in one large, organized stride: 7 blocks pushing 28GB/s all at once, for a maximum of 196GB/s. If you were steadily pushing 3.5GB worth of data non-stop over a span of time, the segmented 500MB would only hurt, because it cannot be read while its neighboring block is being read at the same time...
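To put the arithmetic in one place (assuming the 28GB/s-per-block figure above, i.e. 7Gbps GDDR5 on a 32-bit channel):

Code:
# Bandwidth arithmetic for the figures above (28 GB/s per 500MB block,
# i.e. 7 Gbps GDDR5 on a 32-bit channel).
per_block = 7.0 * 32 / 8            # 28.0 GB/s per block

all_eight  = 8 * per_block          # 224 GB/s, the advertised total
seven_wide = 7 * per_block          # 196 GB/s, the widest single stride
slow_alone = 1 * per_block          #  28 GB/s, the segmented 500MB by itself

print(all_eight, seven_wide, slow_alone)    # 224.0 196.0 28.0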

BUT-

That's completely unlike any gaming load. The only way that scenario would exist is in a synthetic benchmark created with the sole purpose of testing the entire bandwidth all at once. In real life, in a real gaming scenario, it's just not a steady 3.5GB data push, non-stop, over a long span of time. Your GPU would be completely overwhelmed, completely saturated. It's totally unrealistic and absolutely not how the GPU works in a normal gaming environment.

The truth is that there is no 3.5GB block of RAM. It's 7 blocks of 500MB. And the data seldom flows in a massive 3.5GB stride, unless your benchmark is written to do that.

So then we have the matter of the "slow" RAM, the "slow" 500MB...

Well, it doesn't really work like that.
It's actually two 500MB blocks that share the same path; in effect, 1GB of RAM that shares resources. The other 3GB can carry on all day long without any problems. But those two blocks that share the one path, that's the compromise. Either data flows through one segment, or data flows through the other, but not at the same time.

Putting the entire 3.5GB together and calling it the fast RAM is just totally wrong. In a scenario that is pushing strictly for bandwidth capacity, the data can flow all together as if it were one large stride, but in the real world games aren't 3.5GB chunks that just pass through non-stop. Games have all sorts of stuff loaded into VRAM, and at the moment no single item stored in VRAM is 3.5GB in size. It's crazy to talk like that.

But the truth is, as far as gaming goes, it's not the 500MB interfering with the 3.5GB. That will never be the case. It's a 500MB block sharing the same path as its neighboring 500MB block. It's the last 1GB that is in a struggle. This is why Nvidia set the drivers to avoid stuffing anything in there, because then you have the XOR situation that can choke out the SMMs.

The segmented RAM can be read along with all the other 3GB. It can be read as fast as all the other 500MB blocks. But when it is pushing data, it blocks data flow from the 500MB block beside it. This is the issue, and why it is so interesting.
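A little sketch of that compromise, assuming a dead-simple one-access-per-turn model for the shared port (a made-up model, just to show the effect):

Code:
# Toy model of the shared crossbar port behind the last two 500MB blocks.
# Each turn the port serves exactly one of them, so each block's effective
# bandwidth is its share of the port times 28 GB/s.  Made-up model.
PORT_BW = 28.0  # GB/s, same as every other block

def effective_bw(accesses):
    # accesses: list of block names ("7,1" or "7,2") contending for the port
    total = len(accesses)
    return {blk: PORT_BW * accesses.count(blk) / total for blk in set(accesses)}

print(effective_bw(["7,1"] * 8))         # {'7,1': 28.0} -> no contention
print(effective_bw(["7,1", "7,2"] * 4))  # both drop to 14.0 -> sharing hurts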


Sooner or later there needs to be a better explanation of this.

AnandTech spoke of all the other 7 memory blocks as if they were one 3.5GB block, right after explaining that the segmented 500MB had the same transfer rate as all the other RAM blocks. They went from talking about how it works to describing a specific scenario, one that is very unlikely in the real world.

It's probably not much better when you consider that two 500MB blocks are competing for the same path, so it's really an entire 1GB segment swapping priority to be read. But that's how it will be when you are gaming over 3500MB. Nvidia might find some magic way to deal with it, or they might not. I expect they are working on a driver to better manage this, but have pulled any reference to it to try to save their butts legally when the lawsuits come in. If the driver comes out, it will be carefully worded so as not to be a "fix" for anything, because they will stand by their stance that nothing is broken and it was intentional design. Their lawyers surely have been advising them, and that is why they are scared to make any moves right now.
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,403
12,864
136
The truth is that there is no 3.5GB block of RAM. It's 7 blocks of 500MB. And the data seldom flows in a massive 3.5GB stride, unless your benchmark is written to do that.
In one case a 1KB stride is being striped over 7 crossbar ports; in the other it is being transferred through 1 crossbar port only. You still think there's no 3.5GB block?
 
Last edited:

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
In one case a 1KB stride is being striped over 7 crossbar ports; in the other it is being transferred through 1 crossbar port only. You still think there's no 3.5GB block?

Exactly, you don't need to fill it for the bandwidth to be available. Every time something is read or written to the 3.5 GB section, it is being done simultaneously on all 7 memory chips. That last 0.5 GB is just one lonely chip, and because it shares a controller with one of the chips in the 3.5 GB section, even using that lonely chip will impact the use of the stripe of 7.
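As a back-of-the-envelope illustration (the request size and per-chip rate here are just placeholder numbers, only the ratio matters):

Code:
# Back-of-the-envelope: the same request served by the 7-chip stripe
# versus the single chip behind the 0.5 GB segment.  Placeholder numbers.
REQUEST_BYTES = 1024          # a 1KB stride, as in the post above
CHIP_BW = 28e9                # bytes/s per memory chip

def service_time(nbytes, chips):
    # the request is split evenly across chips working in parallel
    return (nbytes / chips) / CHIP_BW

t_striped = service_time(REQUEST_BYTES, 7)   # 3.5 GB section: 7 chips at once
t_single  = service_time(REQUEST_BYTES, 1)   # 0.5 GB section: 1 chip

print(t_single / t_striped)                  # ~7x slower for the same request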
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
And if they turn off access to that 512 MB, there is no chance of a stall.

If my 680 died, I'd still be interested in the 970 if I didn't need to worry about some future game or OS change crippling the RAM speed because the driver fixups can't cope with it. Without the peace of mind of knowing I can make it work as a normal 3.5 GB card, it's off my list. Just my opinion, of course.
 
Feb 15, 2014
119
0
76
According to them, it's still better to have the 0.5GB slow portion, thanks to the way they segmented it: if the data is less than 3.5GB, they don't allow it to slow down your card, but if it's more, they'd rather have the card access the slow part than system RAM, which is even slower.
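Roughly, the priority order they describe would be something like this (just my sketch of the idea, not Nvidia's actual driver logic):

Code:
# Sketch of the allocation priority described above; my reading of it,
# not Nvidia's actual driver logic.
FAST_VRAM_MB = 3584   # the 3.5GB striped segment
SLOW_VRAM_MB = 512    # the 0.5GB segment behind the shared port

def place_allocation(size_mb, used_fast_mb, used_slow_mb):
    if used_fast_mb + size_mb <= FAST_VRAM_MB:
        return "fast VRAM"    # preferred: keeps the full-speed stride
    if used_slow_mb + size_mb <= SLOW_VRAM_MB:
        return "slow VRAM"    # slower, but still beats going over PCIe
    return "system RAM"       # last resort, slowest of all

print(place_allocation(256, used_fast_mb=3400, used_slow_mb=0))  # slow VRAM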

They should still have a 3.5GB software option; it might be useful for troubleshooting a bad game.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,980
126
According to them, it's still better to have the 0.5GB slow portion, thanks to the way they segmented it: if the data is less than 3.5GB, they don't allow it to slow down your card, but if it's more, they'd rather have the card access the slow part than system RAM, which is even slower.
That was exactly my understanding of the situation.
 

Eymar

Golden Member
Aug 30, 2001
1,646
14
91
That's one way to read why they didn't disable the .5GB, and it is true. However, it is more likely (based on Ryan Smith's article) that they kept the 4GB total memory for marketing and maximized bandwidth for most situations by not disabling a whole ROP/MC partition. Maxwell doesn't allow just an MC to be disabled, so if the whole ROP/MC partition were disabled, overall bandwidth would drop to 168 GB/s (6 chips * 28 GB/s) and 3 GB total memory for everything, versus the current 196 GB/s and 3.5 GB for practically everything, plus the red-headed stepchild MC with its .5GB at 28 GB/s. 224 GB/s of total bandwidth and 4GB definitely looks better in advertising specs.
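The trade-off in numbers, using the 28 GB/s-per-chip figure:

Code:
# The configurations being compared, at 28 GB/s per chip.
per_chip = 28

partition_disabled = {"vram_gb": 3.0, "bw_gbs": 6 * per_chip}      # 168 GB/s
segmented_970      = {"vram_gb": 4.0,
                      "fast_bw_gbs": 7 * per_chip,                 # 196 GB/s
                      "slow_bw_gbs": 1 * per_chip}                 #  28 GB/s
full_gm204         = {"vram_gb": 4.0, "bw_gbs": 8 * per_chip}      # 224 GB/s

print(partition_disabled)
print(segmented_970)
print(full_gm204)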
 
Last edited:

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
If marketing is that important, it can ship as a 4 GB card but with an "advanced" option in the settings for a 3.5 GB mode, where that is what it reports to the OS.

The bad ROP/MC stays enabled, but the OS and games are told that there is only 3.5 GB.

That way games and the OS treat it correctly as a 3.5 GB card if necessary. Any game code that tries to guess at the best memory use for the card will then see 3.5 GB and load its 2 GB or 3 GB textures, etc. instead of trying to really use all 4 GB.
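Something like this hypothetical budgeting heuristic is all it would take on the game side (made-up numbers and function, not any real engine's code):

Code:
# Hypothetical texture-budget heuristic keyed off whatever VRAM size
# the card reports.  Made-up numbers, not any real engine's code.
def pick_texture_pool_mb(reported_vram_mb):
    headroom_mb = 1024        # framebuffers, geometry, driver overhead
    return max(512, reported_vram_mb - headroom_mb)

print(pick_texture_pool_mb(4096))  # 3072 -> can spill past 3.5 GB on a 970
print(pick_texture_pool_mb(3584))  # 2560 -> stays inside the fast segment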
 

darkfalz

Member
Jul 29, 2007
181
0
76
There is no advantage to it at all. Using 500 MB of slower VRAM for textures would still be much better than using 500 MB of (comparatively) very slow system RAM. If you don't want it to use over 3.5 GB, you'll probably need to drop to a lower texture level in new games (though those are probably aimed at 2 GB rather than 3 GB boards).
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
There is no advantage to it at all. Using 500 MB of slower VRAM for textures would still be much better than using 500 MB of (comparatively) very slow system RAM. If you don't want it to use over 3.5 GB, you'll probably need to drop to a lower texture level in new games (though those are probably aimed at 2 GB rather than 3 GB boards).

An option for CUDA or professional apps that perform better if they think it only has 3.5GB?
 

NTMBK

Lifer
Nov 14, 2011
10,269
5,134
136
I think that they should definitely enable it as an option, somewhere deep in the settings menu. It's unrealistic to expect NVidia to make tuned drivers just for the 970 for every single game that comes out in the next 5 years (let alone the entire PC catalogue), so there will be cases where the funky memory setup causes performance issues. Giving power users the option to try disabling the last 512MB seems pretty sensible, and I'd applaud NVidia if they gave that option. :thumbsup:
 

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
This advanced "buffer memory" of 512MB doesn't even SEEM TO WORK, people reporting pagefile and system memory access once 3.5GB memory are exceeded. It's like the 512MB chunk is not even there.

And of course things wouldn't get "better" by limiting the card to 3.5GB (if they're not already limited). For sure not in those scenarios where we reach (or want to allocate) 3.7, 3.8, 3.9 GB.

Remember: The entire fiasco came to light because people observed that the cards only allocate 3.5GB as opposed to 4GB on the 980. Obviously, the magic 512MB which are "still better than system memory" don't work as intended.

Latest rumours say that when the card is allocating more than 3.5GB it's in fact using system memory.
 

UaVaj

Golden Member
Nov 16, 2012
1,546
0
76
This would be an admission of guilt.

Even so, Nvidia should just disable that slower 0.5GB and make the 970 into a 3.5GB card:
ideally via driver; if that's not possible, then via firmware; as a last resort, de-solder that slower DRAM.

That would keep anything from being allocated into that slower RAM and causing stutter when it's accessed.

3.5GB of VRAM is more than enough. (Does anyone see 680/770 owners complaining that 2GB of VRAM isn't enough?)
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
3.5GB of VRAM is more than enough. (Does anyone see 680/770 owners complaining that 2GB of VRAM isn't enough?)

Exactly. My 680 works great with 2 GB. I'd consider buying a 970 as a replacement if I had the peace of mind that games in 2016+ won't see "4 GB RAM" when they query the card and then try to use all 4 GB.

If games see 3.5 GB and use 3.5 GB, great. I don't want to have to mess with settings to try to force the card below 3.5 GB, and maybe only get the image quality or FPS of a 2 GB card because I had to dial settings down too much.
 

UaVaj

Golden Member
Nov 16, 2012
1,546
0
76
Future proofing is an epic fallacy.

When 2016+ comes, you simply upgrade to whatever video card is necessary to get the job done.



Rather than future proofing (aka investing) in an Nvidia GPU, you are much, much better off investing in Nvidia stock.
 
Last edited:

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
This advanced "buffer memory" of 512MB doesn't even SEEM TO WORK, people reporting pagefile and system memory access once 3.5GB memory are exceeded. It's like the 512MB chunk is not even there.


Saw this when I messed with COD:AW during the free weekend on Steam. Anything above Normal textures would max out my 970 at 3.5GB, and without a pagefile I was pretty much crashing so hard I could only hard-reset my tower. A pagefile helped with stability, but I saw constant disk access during gameplay and slight stutter.

Titanfall with Insane textures causes crashes on certain maps. I haven't tried the game with a pagefile, but that game is such a travesty I simply run Very High + 4x MSAA without issues.

Both crap examples, I know, but I am sure if I got into other major titles I would see even more.
 
Feb 19, 2009
10,457
10
76
Future proofing is an epic fallacy.

When 2016+ comes, you simply upgrade to whatever video card is necessary to get the job done.

Not for multi-GPU users. For example, 4GB 770s in SLI still pack plenty of firepower, enough to run SoM maxed with ultra textures, and they won't suffer terrible stutters like a 970 or 970 SLI.

I'm also pretty sure a 980 SLI is future proof for 1080/1440p in 2016, as in, it'll run most games very well at high settings.

The term future proofing has many meanings, but for me, it simply means extending the useful life of your hardware rather than updating to the latest every year.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Future proofing is an epic fallacy.

When 2016+ comes, you simply upgrade to whatever video card is necessary to get the job done.



Rather than future proofing (aka investing) in an Nvidia GPU, you are much, much better off investing in Nvidia stock.

I don't buy a new video card every year.

And I shouldn't need to buy a new video card just because I'm tired of messing with settings to get the best quality while keeping VRAM use under 3.5 GB. The card and game engine should negotiate that for me, but with the 970 it might lie and say "I'm a 4 GB card" even though it isn't. If the 970 says "I'm a 3.5 GB card" then my gaming life will be unicorns and rainbows.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The biggest problem with the idea is that nVidia is claiming there are no performance issues because of the memory partitioning. So what's the point of having the option if it's not going to improve anything? From their public point of view, all it would do is lose you .5GB of RAM.

Assuming there is any merit to making it a 3.5GB card, then what about those who would not have upgraded from a 2GB or 3GB card if this one had been known to be effectively a 3.5GB card?

nVidia's whole defense on this has been that it's a complete non-issue. They either need to come out and take it on the chin about this card's memory configuration not performing the same as a true 4GB/256-bit card, or keep up the line that there is absolutely nothing wrong, which would preclude them from doing anything about it.
 