Going to a 770 from a 570 is freaking gigantic with current games at 1080p and beyond.
Second best would be going from a 64GB SSD to say a 250GB Samsung 840 Evo ($170ish IIRC). SSDs really start flying at the 250GB size.
I sincerely doubt you'd notice the difference between a 2600K @ 4.7 and a 4770K @ 4.4 (typical air OC). If you were running iGPU yes, but dGPU? Not a chance.
Perhaps a caveat: it depends on HOW you're running the iGPU. Standalone, it is little better than the NVidia onboard graphics of my Mom's Gigabyte 610i mATX board. Hourglasses, hesitation -- all because of the graphics processor. That was half the equation for upgrading her system in 2010-2011: for what Mom does, we spent maybe $30 on a budget GeForce card, an XFX GT 460 or something like that. "Free at last! Free at last!" Then we added a 128GB Elm Crest SSD (SATA-III) on her SATA-II port and replaced the HDD -- "Thank God Almighty, I'm free at last!"
And actually, with 512MB of RAM allocated to it, you discover that the iGPU isn't a "slug" for most things: you could certainly use it for HTPC duty, it WILL run games, and for everyday computing sessions with your desktop monitor connected to it, it is fine.
But if you're limited to two monitors on the dGPU when you want to deploy three or four, the iGPU becomes a more attractive resource. You can dedicate a GeForce-connected monitor to an HDTV, and you can also use the iGPU alongside the dGPU card and its VRAM to get performance similar to what a third graphics card would provide.
NVidia had prepared its SLI software to do roughly what the Intel HD does with a single card. [And you can't use more than a single card with the Intel HD with VIRTU.]
It boils down to a need to use more monitors for specialized purposes. And -- the convenience of simply switching Lucid on and off in software with two mouse-clicks.
ADDENDUM/AFTERTHOUGHT: You're right: there was little difference between Cinebench results for my 2600K @ 4.6 with the 570 GTX [dGPU mode] and an i7-4770K @ 4.4. In fact, I came out a smidgen better at 4.7 GHz. But that's only one particular benchmark. Even so -- probably indicative of "future-proof" for the near future.
Z15CAM said:
. . . .
This is the way my i7 2700K scales with EIST ENABLED :
4600 Mhz:
With a 46 Multiplier and - 0.005v offset the system Idles at 1600Mhz @ 0.992v/34C, at 4.6G under heavy application load like x264 encoding voltage ramps to 1.336v/47C then at 4.6G under Prime95 large FFT Stress Test voltage ramps to 1.376v/68C. NICE.
. . . .
Just a thought you may be inclined to file away. The reason it came up: I thought I'd "vet" my OCs again with LinX and Prime95. I had mentioned that there is a second voltage setting in the P8Z68-[ . . V, Pro, Deluxe] boards that seems to work mV-for-mV with the Offset Voltage: "Add'l Voltage for Turbo." Also -- the observation from various sources that too much LLC will cause overshoots, or even a VCORE higher than VID. But set the LLC issue aside for a moment.
Looking at your setting for 4.6, you're using an offset just 10 mV lower than mine, if I understand correctly. So if you left "Add'l Voltage for Turbo" ["AI Tweaker," "CPU Power Mgmt" -> . . ] at its default "Auto" setting, how much extra voltage is it providing on top of the Offset Voltage? The only way to know would be to compare load-voltage minimums from similar stress tests -- between "Auto" and some fixed setting.
I say this because, with Offset +0.005V and Add'l Voltage . . (+ only) 0.008V, my Prime95 large-FFT load-voltage minimum is 1.322V -- while running Media Center "Live TV" in the background. I'm unfamiliar with x264 encoding, but how is it that such a load shows 1.336V while Prime95 gives 1.376V? Or did you read the VCORE from the monitor at the wrong time -- maybe as a "Maximum" value that would show the unloaded Turbo voltage?
The only way to know for sure whether "Auto" Add'l Turbo Voltage gives the same 0.008V extra, or something higher, would be to recheck the Prime95 large-FFT load voltage at its minimum. Either the two programs simply don't load the CPU the same way, or you misread your Prime95 load voltage.
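To make the comparison concrete, here's a rough back-of-the-envelope sketch in Python. The readings are made up, not anyone's actual measurements -- only the method matters: hold the offset and LLC constant between runs, and the difference in load-voltage minimums between an "Auto" run and a fixed-setting run tells you roughly what "Auto" is adding.

```python
# Sketch (hypothetical numbers): infer how much extra voltage the
# "Auto" Additional Turbo Voltage setting contributes by comparing
# Prime95 large-FFT load-voltage minimums against a run with a known
# fixed setting.  All values in volts.

def implied_extra_turbo(vmin_auto, vmin_fixed, fixed_extra):
    """Assuming offset and LLC are unchanged between runs, the Auto
    setting's contribution differs from the fixed one by the same
    amount the load-voltage minimums differ."""
    return fixed_extra + (vmin_auto - vmin_fixed)

# Made-up readings: the fixed run used +0.008 V extra and bottomed out
# at 1.322 V; suppose the Auto run bottomed out at 1.338 V instead.
print(round(implied_extra_turbo(1.338, 1.322, 0.008), 3))
```

If both runs bottom out at the same minimum, the function just returns the known fixed extra -- i.e., "Auto" was giving the same thing.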
With MC running in the background, I DID discover an instability @ 4.7 with LinX after 30+ iterations -- an error that late isn't so likely if you assume something like a Poisson distribution of errors. So I added 5mV to the load turbo-voltage to set things straight. That gave it a clean 50 iterations while watching TV.
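For what it's worth, that "unlikely after so many runs" intuition can be sketched in a few lines of Python. The 2% per-iteration error rate is purely an assumed number for illustration; the point is how slowly confidence accumulates if each iteration fails independently:

```python
# Sketch (assumed error rate): if each LinX iteration independently
# fails with probability p_err, the chance of n consecutive clean
# iterations is (1 - p_err)**n.

def p_clean_run(p_err, n):
    """Probability of n consecutive error-free iterations when each
    iteration independently fails with probability p_err."""
    return (1.0 - p_err) ** n

# With a hypothetical 2% per-iteration error rate, surviving 30
# iterations happens a bit more than half the time, and 50 iterations
# a bit more than a third of the time -- so a marginal OC slipping
# past 30 clean runs isn't that surprising.
for n in (30, 50):
    print(n, round(p_clean_run(0.02, n), 3))
```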
What impressed me more -- whether it was the 1.65+ PLL Voltage setting or some other tweak I made -- is that my LinX load-temperature "average-of-maximums" has dropped to 72.5C at 4.7 GHz, prevailing closer to 69C, and the 4.6 GHz equivalent has dropped to 68C, prevailing around 65C. Room ambient is 77F. I think I can find another 2C drop by plugging an air leak with some carefully cut art-board.
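In case "average-of-maximums" reads ambiguously: I mean the mean of the per-core maximum temperatures over a run. A trivial sketch -- these readings are invented, not my actual logs:

```python
# Hypothetical per-core maximum temperatures (C) from one stress run;
# the "average-of-maximums" is just their mean.
per_core_max_c = [74, 73, 71, 72]
avg_of_max = sum(per_core_max_c) / len(per_core_max_c)
print(avg_of_max)  # -> 72.5
```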
I'm thinking you have water-cooling, and I have the old D14 . . .
ADDENDUM/CORRECTION: For 4.6, I'd revised my VCORE settings from +0.005V/0.008V to either +0.005V/0.016V or +0.010V/0.012V. It passed stress testing at the lower settings, but I didn't like the spread of reported GFLOPS, so I upped it a tad. These were the settings that gave me 1.322V minimums under the heaviest Prime95 loading.