Yet another sneaky Nvidia trick


lopri

Elite Member
Jul 27, 2002
13,310
687
126
Originally posted by: apoppin
How can nvidia - using their own PCIe first developed for AMD platforms (LinkBoost was introduced on nForce 590 for *AMD CPUs*) - possibly be "unfairly crippling" Intel's own chipsets? Unless you want to take his argument to the extreme that Intel's chipset engineers are also incompetent regarding PCIe ... i think not
The reason I was thinking about it is because I know that DMI is, as mentioned earlier, a PCIe x4 lane. (DMI = NB-to-SB interconnect on Intel chipsets) On the other hand, AMD uses HyperTransport for just about every interconnect (core to core, chip to chip) and it's an open standard - which is why NV has been using it on their Intel chipsets.

With this information, the instability caused by high PCIe clocks is easily explained if the DMI clock generator is shared with the PCIe clock generator on Intel boards, because that would basically mess up the NB-SB interconnect. But I need someone's confirmation on this. I don't know for sure.
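Here's a minimal Python sketch of the shared-clock-generator hypothesis above. The numbers are hypothetical, and the premise (that DMI shares the PCIe reference on Intel boards) is exactly the unconfirmed assumption lopri flags:

```python
# Sketch of the shared-reference hypothesis (an assumption, not a confirmed
# fact): if the DMI x4 link is driven from the same clock generator as the
# PCIe slots, raising the PCIe base clock pushes the NB-SB interconnect out
# of spec by the same percentage.

PCIE_SPEC_MHZ = 100.0  # PCIe base clock per the specification

def dmi_drift_pct(pcie_mhz: float) -> float:
    """Percentage a shared-reference DMI link would run out of spec."""
    return (pcie_mhz / PCIE_SPEC_MHZ - 1.0) * 100.0

for pcie in (100, 110, 115, 125):
    print(f"PCIe {pcie} MHz -> DMI {dmi_drift_pct(pcie):+.0f}% out of spec")
```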
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
are you guys *stuck* on the clock frequency being reported wrongly?

have you perhaps considered that the product is too new for these drivers to have been written correctly yet?

i know a launch is prioritized .. not everything is completed at launch

and i *notice* that things are changing a little bit ... there is some "doubt" as to nvidia's Purpose in this

I don't know for sure.

Good, we are making progress. my mind is also open



believe-it-or-not

Oh, my friend reminded me that some review sites did test the 9600GT in Intel MBs, where LinkBoost doesn't work, and saw similar improvement as in the nForce MB reviews with the 9600GT.

--so it is still a mystery ... Link Boost only appears to play some part .. but NOWHERE do i see evidence that it is "shady" .. actually i'd call it pretty 'slick' if i had one


But ... undocumented and not fully explained, yes
--i love a mystery
:heart:
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: lopri
Originally posted by: apoppin
How can nvidia - using their own PCIe first developed for AMD platforms (LinkBoost was introduced on nForce 590 for *AMD CPUs*) - possibly be "unfairly crippling" Intel's own chipsets? Unless you want to take his argument to the extreme that Intel's chipset engineers are also incompetent regarding PCIe ... i think not
The reason I was thinking about it is because I know that DMI is, as mentioned earlier, a PCIe x4 lane. (DMI = NB-to-SB interconnect on Intel chipsets) On the other hand, AMD uses HyperTransport for just about every interconnect (core to core, chip to chip) and it's an open standard - which is why NV has been using it on their Intel chipsets.

With this information, the instability caused by high PCIe clocks is easily explained if the DMI clock generator is shared with the PCIe clock generator on Intel boards, because that would basically mess up the NB-SB interconnect. But I need someone's confirmation on this. I don't know for sure.

i am not 100% sure what you are saying


NVIDIA has a license from Intel to develop chipsets. Are you saying they cannot *improve* on the standard? ... Clearly, if nvidia did something to nForce so that AMD's cards ran worse on nForce than the PCIe standard allows, there would be a lawsuit, imo.

Don't you think it is OK for nvidia to go above the standard for their OWN products? That is not "shady"

 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
Well, after reading more - I'd rather think it could be just a compromise that NV had to make. The 780i isn't a native PCIe 2.0 part, but rather a patched PCIe 2.0. The upcoming 790i will be native PCIe 2.0, and we don't know how that chipset will handle PCIe 2.0 yet. The remaining questions (other than technical ones) are:

1. Why did NV hide (or even lie about) this? The NV we know will not pass up a single chance for PR.
2. If a card is capable of clocking up to xxxMHz - the limit of overclocking, be it via PCIe frequency or core frequency - what's the point of tying it to the PCIe clock instead of releasing the card at that xxxMHz? PCIe overclocking is, indeed, not recommended on NV chipsets either. (Except that the NV chipset's PCIe is already overclocked as it is, because they didn't have time for native PCIe 2.0 with DDR2. That is reserved for the 790i and DDR3.)
3. And so on.

But this is not the 'LinkBoost' they championed in 2006, and I honestly don't see anything slick about it, either. If they want to claim this as 'LinkBoost', they need a new definition for LinkBoost. "LinkBoost" was/is literally about boosting the 'link'. The core theory of LinkBoost is that the improved PCIe bandwidth will yield better throughput between GPU-CPU or GPU-GPU. This was/is not something easily mistaken.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: lopri
Well, after reading more - I'd rather think it could be just a compromise that NV had to make. The 780i isn't a native PCIe 2.0 part, but rather a patched PCIe 2.0. The upcoming 790i will be native PCIe 2.0, and we don't know how that chipset will handle PCIe 2.0 yet. The remaining questions (other than technical ones) are:

1. Why did NV hide (or even lie about) this? The NV we know will not pass up a single chance for PR.
2. If a card is capable of clocking up to xxxMHz - the limit of overclocking, be it via PCIe frequency or core frequency - what's the point of tying it to the PCIe clock instead of releasing the card at that xxxMHz? PCIe overclocking is, indeed, not recommended on NV chipsets either. (Except that the NV chipset's PCIe is already overclocked as it is, because they didn't have time for native PCIe 2.0 with DDR2. That is reserved for the 790i and DDR3.)
3. And so on.

But this is not the 'LinkBoost' they championed in 2006, and I honestly don't see anything slick about it, either. If they want to claim this as 'LinkBoost', they need a new definition for LinkBoost. "LinkBoost" was/is literally about boosting the 'link'. The core theory of LinkBoost is that the improved PCIe bandwidth will yield better throughput between GPU-CPU or GPU-GPU. This was/is not something easily mistaken.

i keep seeing this from you and i don't get it:
1. Why did NV hide (or even lie about) this? The NV we know will not pass up a single chance for PR.

You keep insisting that nvidia did something *deliberate* .. to "conceal" so as to get an unfair advantage somehow. And that they are *lying* .. imo that IS a pretty *heavy* accusation and the REASON i responded so strongly.

We don't know the motive. And we don't know *for sure* even IF it is Link Boost - that was just MY theory - i don't know if it was updated ... i have no friends at nvidia ... and there is no reason to hold me up to ridicule for my opinion, just as you should not be *ridiculed* for yours!

i think we both want to get to the *bottom* of this. NVIDIA *has* to answer imo ... and i will withhold judgment - until i have more facts.

i will tell you what HAS surprised me . . . that i ended up "defending" NVIDIA completely alone
... what do you think of that .. they have no more fans here?
:Q
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
The information about which crystal is used is stored inside the GPU's strap registers, which are initialized from a resistor configuration on the PCB and the BIOS. In the case of the GeForce 9600 GT the strap says "27 MHz" crystal frequency and RivaTuner Monitoring applies that to its clock reading code, resulting frequency: 783 MHz = 27 MHz * 29 / 1. The NVIDIA driver however uses 25 MHz for its calculation: 725 MHz = 25 * 29 / 1. This explains the clock difference, and it can only be seen on the core frequency (the memory PLL is running at 27 MHz). Now the big question is, who is wrong? When I asked NVIDIA about this phenomenon they replied: "The crystal frequency is 25MHz on 9600GT. Clock is 650MHz". So far so good. But why would you want to use a 25 MHz crystal for the core and a 27 MHz one for memory? And why is the only crystal I see on the PCB a 27 MHz one? (Bolded by me)
http://www.techpowerup.com/rev...GT_Amp_Edition/24.html
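A quick Python sketch of the arithmetic in the quote above; the 27 MHz strap, the driver's 25 MHz assumption, and the 29/1 multiplier/divider are all taken straight from it:

```python
# The readback discrepancy TechPowerUp describes: the GPU strap reports a
# 27 MHz crystal, the NVIDIA driver assumes 25 MHz, so the same PLL settings
# (multiplier 29, divider 1) decode to two different core clocks.

PLL_MUL, PLL_DIV = 29, 1

def core_clock_mhz(ref_mhz: float) -> float:
    """Core clock implied by a reference frequency and the PLL settings."""
    return ref_mhz * PLL_MUL / PLL_DIV

print(core_clock_mhz(27))  # 783.0 -- RivaTuner, trusting the 27 MHz strap
print(core_clock_mhz(25))  # 725.0 -- the NVIDIA driver's 25 MHz assumption
```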

Edit: With regards to your last comment - I don't think some are accusing NV because they're siding with AMD or vice versa. Some folks can and will criticize whoever is doing what, and others can say otherwise. I don't think this always has to be tied to a person's loyalty to a corporation.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: lopri
With regards to your last comment - I don't think some are accusing NV because they're siding with AMD or vice versa. Some folks can and will criticize whoever is doing what, and others can say otherwise. I don't think this always has to be tied to a person's loyalty to a corporation.

no, it appears really one-sided ... it appears that i am the only one saying "hold on", we don't know enough YET to "accuse" nvidia of anything

there appears to be a very large bandwagon of posters who are already getting the bonfires ready for the execution ... i see a certain prejudice ... a pre-judgment without enough facts to convict anyone ... and i am NOT asking for "beyond a shadow of a doubt" - that would be mistaking me for a 'fan' ... i am asking for "reasonable doubt" and a suspension of final judgment until it becomes clear.

 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: lopri
The information about which crystal is used is stored inside the GPU's strap registers, which are initialized from a resistor configuration on the PCB and the BIOS. In the case of the GeForce 9600 GT the strap says "27 MHz" crystal frequency and RivaTuner Monitoring applies that to its clock reading code, resulting frequency: 783 MHz = 27 MHz * 29 / 1. The NVIDIA driver however uses 25 MHz for its calculation: 725 MHz = 25 * 29 / 1. This explains the clock difference, and it can only be seen on the core frequency (the memory PLL is running at 27 MHz). Now the big question is, who is wrong? When I asked NVIDIA about this phenomenon they replied: "The crystal frequency is 25MHz on 9600GT. Clock is 650MHz". So far so good. But why would you want to use a 25 MHz crystal for the core and a 27 MHz one for memory? And why is the only crystal I see on the PCB a 27 MHz one? (Bolded by me)

and now i might be able to answer the first part bolded by you

You might be seeing a crystal for MPEG. A 27MHz frequency is used with video because you can derive your NTSC/PAL timings from it - especially if you are outputting to video in addition to a monitor.


Here is an MS link that ties it to the MPEG standard for PC video cards.
http://www.microsoft.com/whdc/archive/VidSynch.mspx

this also ties it to 27MHz
http://books.google.com.au/boo...w6DNMyZA&hl=en#PPP1,M1

you probably want the GPU and bus timings to match


i am really tired .. i will take another look in the morning
:moon:
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
http://www.nordichardware.com/news,7538.html

NVIDIA responds to the GeForce 9600GT PCIe irregularity
Written by Andreas G 21 March 2008 16:35

There has been a lot of buzz around the web regarding an article published by W1zzard over at TechPowerUp. He found an irregularity when testing the just-launched GeForce 9600GT: the card would perform considerably better when it was running with an overclocked PCIe bus, which isn't normal under these circumstances. He investigated further and found that the card seemed to use the PCIe frequency as its reference, instead of the on-board physical crystal. A follow-up investigating the oddity was also posted.

The problem isn't so much that the card overclocks with the PCIe bus (that's actually quite nifty) but that the increased frequency wasn't reported by the drivers. The card seemed to operate at its default frequency when it was not. People have been wondering why NVIDIA didn't reveal this to people reviewing the card, as they may have been lured into making the card look better than it was. That would be the paranoid angle of it, but right now it's the one dominating the discussions.

We still don't know why this information was omitted. It might just have been some sort of miscommunication at NVIDIA, because it has now made an official response saying that the card does indeed have two reference clocks: an on-board 27MHz crystal and one derived from the PCIe bus. The thing is, the GeForce 9600GT isn't the only card that behaves this way. TechPowerUp discovered that the GeForce 9800GX2 behaves the same way, and chances are that the rest of the GeForce 9 series cards do too.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
We still don't know why this information was omitted. It might just have been some sort of miscommunication at NVIDIA
LMFAO!

nVidia "whoops, we accidently made a card that overclocks itself but didn't find out until TechPowerUp told us".

Uh-huh, "miscommunication" :roll:

TechPowerUp discovered that the GeForce 9800GX2 behaves the same way, and chances are that the rest of the GeForce 9 series cards do too.
I'm not the least bit surprised. In fact I expected this to be the case, and one then wonders how many 9800 GX2 tests are out there where the boards were overclocking themselves without reviewers knowing about it.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Firstly, how many boards auto OC their PCI-e clocks?

Secondly, why would one (a manufacturer implementing an auto-OC) OC their PCI-e slots knowing that it can cause all sorts of stability problems? Increasing the PCI-e frequency hardly improves performance, because few (if any) cards can saturate the increased bandwidth on the bus, which is one reason the initial incarnation of the LinkBoost feature was dropped.

Thirdly, why wouldn't nVIDIA just release higher stock clocks for the 9600GT if the increase in PCI-e frequency, i.e. core clock frequency, doesn't affect stability? (i.e. big headroom on the G94). nVIDIA's marketing department missing out on something like this is unheard of. Obviously they implemented this knowing that PCI-e clocks stay at 100MHz on almost ALL motherboards out there currently. (Basically i agree with Lopri that they did hide this; however, i wouldn't accuse nVIDIA of a shady/dirty trick just yet, which i believe is the single biggest reason W1zzard at TPU is being flamed/harassed: because he concluded that it was a "shady trick".)

For reference the feature Link Boost was disabled by nVIDIA 10 months ago.

My current theory behind nVIDIA's choice to do such a thing is that they are going to re-release "link boost" (they still keep it as a checkbox feature on their own website even though none of the current chipsets support it), which in the past did not really affect performance. Since by then the entire GeForce 9 series will be out, owners of the new high-end nForce boards that have the new "link boost" can use this feature to gain performance. In the meantime, you still get the same performance across the board whether you're using an X38, P35, 790i, 680i etc., because the PCI-e frequency is at 100MHz (default) on almost all boards.

I still think this is being blown way out of proportion though. Maybe we miss the days when grabbing our pitchforks/torches was something along the lines of a daily duty.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
so, BFG, do you really think that the performance increase that we see by O/C'ing the PCIe bus is NOT related to Link Boost?
No, because link-boost only raises bus speeds. The performance increase is coming from the 9xxx series' clocks being tied to the PCIe speed, which just happens to be raised when link-boost is active.

If it was really link-boost doing it then any graphics card would be subjected to the same overclock.

... read the description of Link Boost again
http://www.nvidia.com/content/...-001_v01_LinkBoost.pdf

When the GeForce 7900 GTX graphics card is present on NVIDIA nForce® 590 SLI MCP, the system automatically recognizes its presence and dynamically boosts PCI Express and MCP HyperTransport bus speeds by 25 percent. Figure 2 demonstrates how LinkBoost technology works.
Nowhere does it say the GPU overclocks itself because that was never what link-boost was about.
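A Python sketch of the distinction BFG10K is drawing, using the 650 MHz stock clock and the 25% LinkBoost figure quoted in this thread; the function and flag names are illustrative, not NVIDIA's:

```python
# LinkBoost only raises the bus. A card whose core PLL references the PCIe
# (PEX) clock gets dragged up with the bus; a crystal-referenced card (like
# the 7900 GTX that LinkBoost originally targeted) is untouched.

STOCK_PCIE_MHZ = 100.0

def effective_core_mhz(stock_core: float, pcie_mhz: float,
                       pex_referenced: bool) -> float:
    """Core clock after a PCIe bus change, per TPU's observed behavior."""
    if pex_referenced:
        return stock_core * (pcie_mhz / STOCK_PCIE_MHZ)
    return stock_core  # crystal reference: bus speed is irrelevant

boosted_bus = STOCK_PCIE_MHZ * 1.25  # LinkBoost's 25% bus increase
print(effective_core_mhz(650, boosted_bus, pex_referenced=True))   # 812.5
print(effective_core_mhz(650, boosted_bus, pex_referenced=False))  # 650.0
```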
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
http://en.expreview.com/2008/0...9600gt-following-up-2/

NVIDIA shady trick to 9600GT following-up #2
March 20th, 2008 by Jeff

Related: Follow-up to NVIDIA's shady trick on 9600GT

Some days after TPU's "NVIDIA's shady trick" article and our follow-up, NVIDIA replied to our questions. This is the first time NVIDIA has officially answered our questions on the 9600GT. But as you might expect, they are still hiding some of the truth. However, we can already guess what's on their minds.

1. Is the 9600GT using LinkBoost to cheat?

NVIDIA said: "NVIDIA nForce boards don't provide LinkBoost any more."

TPU said: "This feature was pioneered with the NVIDIA 590i chipset and is present in the NVIDIA 680i chipset too, but has recently been disabled as far as I know. Also some motherboards from ASUS and other companies increase the PCI-Express bus frequency beyond 100 MHz when the BIOS option is set to 'auto'.

...

The automatic increase of 25 MHz on the PCI-Express bus frequency yields an increase of 25% or 162.5 MHz over the stock clock (assuming a 650 MHz clock board design). With a final clock of 812.5 MHz you can bet this card will perform much better, when used by an unsuspecting user, on an NVIDIA chipset motherboard with LinkBoost."

We said: Our test shows both nForce 680i and 780i have disabled LinkBoost. Please note all our boards are from Asus.

2. Do GeForce cards run better on nForce boards?

NVIDIA said: "NVIDIA cards perform well with all mainboards."

TPU said: "It is certainly nice for NVIDIA to see their GeForce 9600 GT reviewed on NVIDIA chipsets with LinkBoost enabled, where their card leaves the competition behind in the dust (even more). Also it could send a message to customers that the card performs considerably better when used on an NVIDIA chipset... Actually this is not the case; the PCI-Express frequency can be adjusted on most motherboards, and you will see these gains independent of Intel/AMD CPU architecture or Intel/NVIDIA/AMD/VIA chipset."

We said: when we adjust the PCIe frequency, the scaling is similar. nForce boards perform only a little better than Intel boards.

3. Is the 9600GT's core clock derived from the PCIe clock, and is the scaling 1:1?

NVIDIA said: "The core and SP clocks on the 9600GT are derived from the PCIe (PEX) clock. There are two reference clocks available on the chip: the 100 MHz PEX clock and the 27 MHz crystal clock. Either can be chosen, but using a higher reference clock provides better clock stability (less jitter).

If users were to inadvertently set the PCIe clock really high, it could cause an excessive GPU clock speed increase, but no chip damage would ever occur because the GPU's thermal protection circuitry would be triggered, and the chip would slow down."

TPU said: "On 'normal' VGA cards, when you increase the PCI-Express bus frequency you increase the theoretical bandwidth available between the card and the rest of the system, but do not affect the speed the card is running at. On the GeForce 9600 GT, a 10% increase in PCI-Express frequency will make the card's core clock run 10% faster!"

We said: our test in follow-up #1 shows the scaling has an unusual shape and cannot easily be concluded to be 1:1. NVIDIA at least let us know that the PCIe clock and the core/SP clocks are linked.

4. Will future GeForce cards also use the same clocking method?

NVIDIA said: We cannot provide information indicating whether the same clocking method will be used in future chips.

We said: We have found that one future product also uses the same method.

Final thoughts:

Though NVIDIA did not give us a 100% satisfying answer, they at least let us know that TPU's speculation is right: the 9600GT uses a new clocking method. We still don't know what else changes in the 9600GT besides the core/SP clocks when the PCIe clock changes.

(According to our tests, both reference and non-reference 9600GTs use the new clocking method.)

Because the new clocking method exists and NVIDIA did not want us to know about it, calling it cheating is not wrong; NVIDIA's PR has itself to blame.

So here is our speculation: with this new series of GeForce cards, users can easily overclock the graphics card via the BIOS by adjusting the PCIe frequency. Instead of cheating, I'd like to call it an innovation, because now users can OC a graphics card the way they OC a CPU.

Do you still remember NVIDIA saying multiple times that the GPU is more important than the CPU? It seems they are going to challenge someone big...
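A Python sketch of the clocking scheme NVIDIA describes in item 3 above. The reference-select flag and the PLL ratio are illustrative assumptions, and the idealized 1:1 scaling is exactly what expreview's follow-up says real hardware does not quite match:

```python
# Per NVIDIA's statement, two reference clocks exist on the chip: the
# 100 MHz PEX (PCIe) clock and the 27 MHz crystal. The 9600GT's core/SP
# PLLs use the PEX clock, so the core tracks whatever the bus runs at.
# (Idealized 1:1 model; expreview measured a less regular curve.)

CRYSTAL_MHZ = 27.0
STOCK_PEX_MHZ = 100.0

def core_clock_mhz(pcie_mhz: float, pll_ratio: float,
                   use_pex_reference: bool = True) -> float:
    ref = pcie_mhz if use_pex_reference else CRYSTAL_MHZ
    return ref * pll_ratio

ratio = 650.0 / STOCK_PEX_MHZ   # ratio that yields 650 MHz at a stock bus
print(core_clock_mhz(100, ratio))  # 650.0 at the stock 100 MHz bus
print(core_clock_mhz(110, ratio))  # 715.0 -- 10% bus OC -> 10% core OC
```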
 

funboy6942

Lifer
Nov 13, 2001
15,324
401
126
They did this to make the card look "good" vs many other cards. It's a shady thing to do, and not to tell anyone about it. It's all done for sales, and what I just read now is that they are using the same thing on the 9800 series cards as well. WTF chuck, they are forcing a performance upgrade that you wouldn't even know was going on, and you bought the card because it kicks an 8800GT's and 3870's ass, but does it really, or did Nvidia cheat with this "trick" to make it seem faster than it is?

Some say who cares, but I care: they lied, didn't say it did this, and hid the fact that they had to OC the card, secretly, to gain performance and make it seem like it kills all other cards hands down. We were lied to, and some don't care. You should, because it would mean the card was inferior on its own two legs against the competition, and they did this to make you think it's so much better and make more sales.

It's just plain old WRONG. They should have disclosed this, and given ATI the chance to do the same type of deal with their cards to be competitive. But they didn't; you would never have known, so you would assume it's holding its own and better than other cards in its price range. That's the bad part of it all. What if it had been made known, and ATI did the same type of stuff to the 3870: would their card then kill the 8800GT and 9600, because they had the same exact advantage? I just cannot tell you how outraged I am over this. It's kinda like back in the 5900 days, where they cheated with drivers to secretly show a performance advantage over all other makers, to make more sales, because you thought you were getting a card that was built almost the same but kicked much more ass, only to find out it didn't render everything, making it seem faster when it wasn't at all.
 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
omg, talk about going over the top. Nvidia haven't done anything wrong; go play some games, you idiots, and stop getting upset over nonsense that makes no difference whatsoever.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
So.... to summarize the more insane posters...

It is WRONG to make faster, cheaper hardware using "inelegant" solutions and undocumented improvements. If a card is faster, it should only be through architectural improvements I personally think are neat, and the method must be thoroughly documented so it can be copied by competitors easily!

Get a clue, people: nvidia perfected a little trick to make their cards faster, and it is gonna appear in every new nvidia card from now till forever.
 

NYHoustonman

Platinum Member
Dec 8, 2002
2,642
0
0
Originally posted by: lopri
Very interesting. It does have a point to call what NV's doing a shady tactic, since I don't remember reading any 9600 GT reviews with this thing pointed out. Also to be noted is that this will only take place on NV chipsets, if true. If you guys remember the 780i debut articles, the 780i SPP is not a native PCIe 2.0 part, but rather a massively overclocked PCIe 1.0 part that communicates with a native PCIe 2.0 part (NF200). On Intel chipsets the PCIe base clock maxes out around 115MHz, and going above that has a great chance of instability. Benchmarking the 9600 GT on a 780i board and on P35/X38 and comparing the numbers might give us better insight into this.

I expect some backlash to NV from reviewers if this is true in that NV essentially rendered reviewers clueless dummies.

I think the chipset difference is the issue here. On the surface this is an okay feature, nothing wrong with it, but the fact that they were hiding it... I felt like the point the article was trying to make was that NV did this to artificially enhance the performance of the 9600GT on their own chipsets, given those are automatically overclocked.

As I see it, all the driver bullshit over the past month boosts their 9-series graphics performance, and this little tidbit makes their 7-series chipsets look better. Neither is particularly honest... This is the kind of stuff that's hammered into my skull constantly in my engineering education, that you're supposed to be ethical -.- .

EDIT- didn't see there were four pages since this post XD. My opinion stands, though.
 

SniperDaws

Senior member
Aug 14, 2007
762
0
0
i don't have a supporting chipset that auto-overclocks my 9600GT, but my 9600GT isn't any slower, so this little cheat basically isn't a cheat at all; it's an auto-overclock feature for people that don't know how to overclock.


My motherboard has an auto-overclock feature; it auto-overclocks my CPU from 2.4GHz to 3.0GHz when needed and it works very well, as i still get the power-saving options. So is this a cheat because Gigabyte don't advertise it?

i will say it again, THE ONLY PEOPLE THAT WILL BENEFIT FROM THE AUTO OVERCLOCK ARE PEOPLE THAT

1 HAVE THE nVIDIA CHIPSET TO START WITH AND

2 HAVEN'T GOT A CLUE ABOUT OVERCLOCKING.

It gives amateurs a nice little boost, so who cares; i'll still get a better overclock from doing it manually.

it's got nothing to do with making the 9 series look faster by deceit, so get over it and stop trying to make out you have found a conspiracy, because you really haven't.
 

NYHoustonman

Platinum Member
Dec 8, 2002
2,642
0
0
Originally posted by: SniperDaws
i don't have a supporting chipset that auto-overclocks my 9600GT, but my 9600GT isn't any slower, so this little cheat basically isn't a cheat at all; it's an auto-overclock feature for people that don't know how to overclock.


My motherboard has an auto-overclock feature; it auto-overclocks my CPU from 2.4GHz to 3.0GHz when needed and it works very well, as i still get the power-saving options. So is this a cheat because Gigabyte don't advertise it?

i will say it again, THE ONLY PEOPLE THAT WILL BENEFIT FROM THE AUTO OVERCLOCK ARE PEOPLE THAT

1 HAVE THE nVIDIA CHIPSET TO START WITH AND

2 HAVEN'T GOT A CLUE ABOUT OVERCLOCKING.

It gives amateurs a nice little boost, so who cares; i'll still get a better overclock from doing it manually.

it's got nothing to do with making the 9 series look faster by deceit, so get over it and stop trying to make out you have found a conspiracy, because you really haven't.

When a product is benchmarked, it's supposed to be done with all other variables as constant as possible.

As I see it, in the case of NV chipset reviews, this artificially makes NV's product look faster than Intel's with respect to gaming. All things taken equal, with everything stock, if a 9600GT is used, the NV chipset is going to look faster, when in reality the only reason for this is an overclock to the graphics card that's not reported in the drivers and not even acknowledged (yet) by NVidia. It may not matter *that much*, but it's dishonest engineering. The end user is going to look at these benchmarks and think, all things taken equal, that the NV chipset is the better choice for gaming, when in reality this isn't necessarily true.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
Originally posted by: SniperDaws
i dont have a supporting chipset that auto overclocks my 9600GT, but my 9600GT isnt any slower so this little cheat basically isnt a cheat at all, its an auto overclock feature for people that dont know how to overclock.


My motherboard has an auto overclock feature, it auto overclocks my cpu from 2.4Ghz to 3.0Ghz when needed and it works very well as i still get the powersaving options, so is this a cheat because Gigabyte dont advertise it ?

i will say it again, THE ONLY PEOPLE THAT WILL BENEFIT FROM THE AUTO OVERCLOCK ARE PEOPLE THAT

1 HAVE THE nVIDIA CHIPSET TO START WITH AND

2 HAVENT GOT A CLUE ABOUT OVERCLOCKING.

It gives ametuers a nice little boost so who cares, ill still get a better overclock from doing it manually.

its got nothing to do with making the 9 series look faster by deceit so get over it and stop trying to make out you have found a conspiricy becasue you really havent.

Overclocking is OK. That's fine. What is not right, though, is that the nv driver hides the overclock.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
People do realise that nV chipsets DON'T OC their PCI-e frequency, since the feature that was available (only on the 680i) was dropped 10 months ago? So how on earth is nVIDIA gaining performance, according to some of the above posters? If there are no OCs being done, then there's no hiding.

Not to mention that almost all the reviews used an Intel chipset, and came to the same conclusion: that this card gives the HD3870 a run for its money. That's that. There are no underhanded tricks or vile schemes going on here like some people are making it out to be.

nVIDIA can decide not to share it; that's their choice. Why? Then why not just have them give us the blueprints of the architecture that they are hiding? See the logic? Almost all mobos have their PCI-e frequency at the default (100MHz). The new 9600GT cards don't auto-OC because there is no PCI-e OC in the first place (there are absolutely no plans for any chipset to increase its PCI-e frequency out of spec). So the card stays at stock clocks even though the way it derives its clocks is different; where's the problem in this? They decided to implement this in the new G94, but it doesn't affect anyone or any results, because nobody is going to touch their PCI-e frequency or auto-OC it. (If you do, you are risking stability issues.)

This would be a problem IF nV mobos did in fact auto-OC their PCI-e frequency.

I also find this amusing, since nVIDIA's cards already perform faster overall WITHOUT this feature compared to the competition. It's the same as when people were talking about driver cheats for the G80, when in fact it was the fastest thing out there compared to its competition.
 

Cuular

Senior member
Aug 2, 2001
804
18
81
As a quick rebuttal to all the people saying that OCing the PCI-e bus only boosts the 9600's speed: no, it does exactly the same for my 8800GTX. When I was messing around with nTune after I got my 8800GTX and 680i-based motherboard, nTune actually pumped the PCI-e bus up from 2500 to 3600. At the time I couldn't figure out why it would bump that so much.

Well, as it turns out, it was overclocking the 8800GTX. There is a huge performance difference between the standard 2500 and the 3600 speed (see below for the conversion to 100-based numbers). And I've been running the system at that setting for 1.5 years, and it's still running strong, so I haven't seen it reduce the life of my hardware yet. For people used to a PCI-e bus of 100, I'm running it at 144.

The chipset I have has 3 separate PCI-e buses, so I'm only running the one the 8800GTX is on at that speed.
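A quick sketch of the conversion Cuular mentions; the divide-by-25 rule is an assumption inferred from his own numbers (2500 at stock, 144 MHz when set to 3600), not a documented nTune scale:

```python
# Convert nTune's raw PCI-e figure to the usual 100 MHz-based number.
# Assumption: nTune reports the bus in units where 2500 = 100 MHz, the only
# scale consistent with the stock 2500 and the stated 144 MHz at 3600.

def ntune_to_mhz(raw: int) -> float:
    return raw / 25.0

print(ntune_to_mhz(2500))  # 100.0 MHz -- stock PCI-e clock
print(ntune_to_mhz(3600))  # 144.0 MHz -- the overclocked setting
```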
 

NYHoustonman

Platinum Member
Dec 8, 2002
2,642
0
0
Originally posted by: Cookie Monster
People do realise that nV chipsets DON'T OC their PCI-e frequency, since the feature that was available (only on the 680i) was dropped 10 months ago? So how on earth is nVIDIA gaining performance, according to some of the above posters? If there are no OCs being done, then there's no hiding.

Not to mention that almost all the reviews used an Intel chipset, and came to the same conclusion: that this card gives the HD3870 a run for its money. That's that. There are no underhanded tricks or vile schemes going on here like some people are making it out to be.

nVIDIA can decide not to share it; that's their choice. Why? Then why not just have them give us the blueprints of the architecture that they are hiding? See the logic? Almost all mobos have their PCI-e frequency at the default (100MHz). The new 9600GT cards don't auto-OC because there is no PCI-e OC in the first place (there are absolutely no plans for any chipset to increase its PCI-e frequency out of spec). So the card stays at stock clocks even though the way it derives its clocks is different; where's the problem in this? They decided to implement this in the new G94, but it doesn't affect anyone or any results, because nobody is going to touch their PCI-e frequency or auto-OC it. (If you do, you are risking stability issues.)

This would be a problem IF nV mobos did in fact auto-OC their PCI-e frequency.

I also find this amusing, since nVIDIA's cards already perform faster overall WITHOUT this feature compared to the competition. It's the same as when people were talking about driver cheats for the G80, when in fact it was the fastest thing out there compared to its competition.

Hrm, good point, didn't realize they didn't implement that feature in successive chipsets... It's still weird, and their response doesn't speak well for their credibility, but I suppose it's not quite so bad in that case XD.
 