So you've heard about the 174 ForceWare drivers boosting the 9600 GT? Well, check this out...
http://www.techpowerup.com/rev...A/Shady_9600_GT/1.html
Originally posted by: krnmastersgt
Well personally I don't care that they didn't advertise this little tidbit, and I suspect a lot of the users with 9600 GTs are going to be upping that PCI-E bus speed when they overclock, or if they have an nVidia chipset they can rest easy knowing it was designed to automatically do it.
Originally posted by: ttnuagadam
Originally posted by: krnmastersgt
Well personally I don't care that they didn't advertise this little tidbit, and I suspect a lot of the users with 9600 GTs are going to be upping that PCI-E bus speed when they overclock, or if they have an nVidia chipset they can rest easy knowing it was designed to automatically do it.
Upping the PCI-E bus is the best way to fry your card.
Originally posted by: krnmastersgt
Did you read the article? The card was automatically raising the PCI-E bus, and it actually showed performance gains with it. It only does this automatically on nVidia chipset boards, but manually overclocking the PCI-E bus on a non-nVidia board would yield the same or similar results.
In case of the GeForce 9600 GT the strap says "27 MHz" crystal frequency and Rivatuner Monitoring applies that to its clock reading code, resulting frequency: 783 MHz = 27 MHz * 29 / 1. The NVIDIA driver however uses 25 MHz for its calculation: 725 MHz = 25 * 29 / 1
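The arithmetic in that excerpt can be sketched quickly; a minimal illustration, using only the crystal frequencies and the 29/1 multiplier/divider values quoted above (the function name is mine, not RivaTuner's or NVIDIA's):

```python
def core_clock_mhz(crystal_mhz, multiplier, divider=1):
    """Reported core clock = crystal frequency * PLL multiplier / divider."""
    return crystal_mhz * multiplier / divider

# RivaTuner applies the strap's "27 MHz" crystal value to its clock reading:
rivatuner_reading = core_clock_mhz(27, 29)  # 783.0 MHz
# The NVIDIA driver uses 25 MHz for the same calculation:
driver_reading = core_clock_mhz(25, 29)     # 725.0 MHz
print(rivatuner_reading, driver_reading)
```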
Originally posted by: Narse
Originally posted by: krnmastersgt
Did you read the article? The card was automatically raising the PCI-E bus, and it actually showed performance gains with it. It only does this automatically on nVidia chipset boards, but manually overclocking the PCI-E bus on a non-nVidia board would yield the same or similar results.
Where in the article does it talk about the PCI-E speed at all? Looks to me like they are talking about the crystal multiplier, which has nothing at all to do with the PCI-E speed.
Originally posted by: Narse
Where in the article does it talk about the PCI-E speed at all? Looks to me like they are talking about the crystal multiplier, which has nothing at all to do with the PCI-E speed.
As many of you know, the PCI-Express bus clock frequency which connects the card to the rest of the system is running at 100 MHz by default. What NVIDIA did is to take this frequency and divide it by four to get their reference frequency for the core clock.
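If the article is right that the reference clock is the PCI-E bus frequency divided by four, the link between bus speed and core clock can be sketched like this (the multiplier of 29 is taken from the excerpt above; the 108 MHz bus figure is just the value that would explain a 27 MHz reference, not a number from the article):

```python
def core_clock_mhz(pcie_bus_mhz, multiplier=29):
    # Per the article: reference clock = PCI-E bus clock / 4
    ref_mhz = pcie_bus_mhz / 4
    return ref_mhz * multiplier

print(core_clock_mhz(100))  # 725.0 MHz at the stock 100 MHz bus
print(core_clock_mhz(108))  # 783.0 MHz if the board raises the bus to 108 MHz
```

Which would explain why an nVidia chipset board that auto-raises the PCI-E bus also raises the card's core clock.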
Originally posted by: krnmastersgt
As to whether this is good or bad ... the fact that they didn't market this info is rather strange.
Originally posted by: Narse
missed the 2nd page
heh
Originally posted by: krnmastersgt
I like Zap's reason, and it's doubtful they would release the cards knowing that it would kill boards out there that they built, otherwise they're going to be spending an awful lot of money replacing systems.
Originally posted by: vanvock
Originally posted by: krnmastersgt
I like Zap's reason, and it's doubtful they would release the cards knowing that it would kill boards out there that they built, otherwise they're going to be spending an awful lot of money replacing systems.
But what about sound, network, RAID, etc. in a slot that they didn't build? It OCs the whole bus, not just one slot, right?