Interestingly enough, the more money is involved, the less your statement holds true. When time = $, SB is the only thing worth buying.
I built a Linux box recently: $160 for a 1090T, $100 for an 880G AM3+ mobo, and $70 for 8GB of RAM. Around $330 for 6 real cores, and it runs my virtual machines like a champ.
You also seem to forget the 45% difference in price. That, and the difference between finishing an editing job in 26 minutes versus 30 minutes isn't really significant.
Using your numbers, if you do editing 8 hours a day, you would save over 5 hours a week, or over 260 hours in a year (assuming 2 weeks off for vacation). Seems worth an extra $70 to me.
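As a sanity check on that arithmetic, here's a quick sketch. The 30-minute job length, 5-day week, and 50 working weeks are my assumptions, matching the numbers in the posts above:

```python
# Back-of-the-envelope check: 26-minute vs 30-minute editing jobs,
# 8 hours of editing per day, 5-day weeks, 50 working weeks (all assumed).
minutes_per_day = 8 * 60
jobs_per_day = minutes_per_day / 30          # 16 jobs at the slower pace
saved_per_day = jobs_per_day * (30 - 26)     # 64 minutes saved daily
saved_per_week_h = saved_per_day * 5 / 60    # a bit over 5 hours per week
saved_per_year_h = saved_per_week_h * 50     # a bit over 260 hours per year

print(round(saved_per_week_h, 1), round(saved_per_year_h, 1))
```

The "over 5 hours a week, over 260 hours in a year" claim holds up under these assumptions.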
People have a budget. You seem to forget this.
Granted, and as a guy who spent a fair amount of his youth trying desperately to figure out how to make the rent payment, I am not going to argue that every computer user should throw cubic money at their next CPU. It's a matter of value for use. If a computer is used just to check email and mess around on Facebook (as many people do), then blowing a wad on a powerful CPU is a total waste of cash.

However, in my personal case, I spend anywhere between 10 and 12 hours a day in front of my PC, seven days a week. A bare minority of that time is goofing off; the rest is work to make a living. I put in an absolute minimum of 50 hours a week of PC work, and I've had my i7 940 for about 2.5 years, so I can confidently say I have spent well over 6,500 hours making money on this CPU.

If I had spent, say, $250 more on the CPU, it would amortize to less than 4 cents per hour. Is that 4 cents worth spending if it makes some complex functions execute faster? With the rate I charge my clients, if I save one second per hour I've broken even on that 4 cents. That's why I'm fully expecting to pay around $500 for my next CPU, which should last me another three years or so. The extra cost over a less capable processor is likely one of the best investments I'll ever make.
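To make the amortization concrete, here's a rough sketch of the math. The $250 premium and 6,500 hours are from the post; the $150/hr billing rate is a made-up placeholder, not the poster's actual rate:

```python
# Amortizing a CPU price premium over working hours.
# $250 and 6,500 hours are from the post; $150/hr is a hypothetical rate.
premium = 250.0          # extra dollars spent on the faster CPU
hours_used = 6500.0      # hours of paid work on it so far
cost_per_hour = premium / hours_used          # under 4 cents per hour

rate = 150.0                                  # assumed billing rate, $/hr
seconds_to_break_even = cost_per_hour / (rate / 3600)
print(f"{cost_per_hour * 100:.1f} cents/hr; break even by saving "
      f"{seconds_to_break_even:.2f} s per hour")
```

At that assumed rate, saving under one second of billed time per hour covers the premium, which matches the post's break-even claim.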
Hmm, I'm not quite sure I understand what you're saying. Didn't the Core i7 920 and the 940 OC the same? I mean, considering the 920 was half as expensive, it made more sense since if you were OCing you'd get the same performance from both.
Fortunately or unfortunately, I'm in the vast majority of computer users who do not OC their CPUs. My primary reason is to keep the noise level way down in my system. When you're working the hours I do, the last thing you want is to put up with the drone coming from various fans. I have the side cover off my midtower, with a "reasonably quiet" fan on my Arctic Freezer 7 Pro Rev. 2 as the only airflow device. As it is, I'm topping out at around 75C under load. I certainly don't want to do anything that adds more heat and thus requires more airflow, although for my next system I'm definitely eyeing the Noctua NH-D14 or a similar top-ranked HSF. But I'm still not going to OC it.
The Phenom II X6 1055T is great bang-for-buck for these applications at its current $140 price, especially since it can be OCed to 4GHz.
At $180, an 1100T is 22% more expensive than the 1055T, but its frequency is about 20% higher: a rare case where the extra performance is roughly proportional to the price difference. It's still 20% cheaper than a 2500K and better in multithreaded apps.
If you want something cheap and effective nowadays you can just get something like a Corsair A50. Better than the Hyper 212+ and cheaper. For something higher-end I'd recommend the Scythe Mugen 3. It's the same heatsink as the older Mugen 2 except for a bit more depth and one extra heatpipe. It's only $40 at Directron and only 3C off the NH-D14. The fan is even quieter than those of the Noctua.
You could've gotten a 920 and done a stock-voltage OC to 2.93GHz. That would probably only raise temps around 1-2C over stock, thus requiring no additional cooling.
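For context, Bloomfield chips like the 920 have a locked upper multiplier (20x for the 920, 22x for the 940), so the stock-voltage route to 2.93GHz is raising the base clock (BCLK). A rough sketch of the arithmetic:

```python
# Taking an i7 920 (20 x 133MHz = ~2.67GHz) to i7 940 speeds (~2.93GHz)
# by raising BCLK, since the 920's upper multiplier is locked at 20x.
bclk_stock = 133.33          # MHz, stock base clock
mult_920, mult_940 = 20, 22

stock_920 = mult_920 * bclk_stock / 1000     # ~2.67 GHz
target = mult_940 * bclk_stock / 1000        # ~2.93 GHz, the 940's clock
bclk_needed = target * 1000 / mult_920       # ~147 MHz

print(f"raise BCLK from {bclk_stock:.0f} to ~{bclk_needed:.0f} MHz")
```

A ~13MHz BCLK bump at stock voltage is why the temperature impact is so small.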
Except for your OS X virtual machines.

You are right. Assuming I run OS X, none of my OS X virtual machines will run on it. Good thing is, I have exactly zero OS X virtual machines and I am planning to run exactly zero OS X virtual machines on my server.
Feature-wise, K10.5 is in the stone age.

Runs Linux? Check.
Believe me, I'm well aware of the benefits of OCing, as I've been lectured by dyed-in-the-wool enthusiasts for several years. Still... not for me.
Hmm...
Efficiency improves with stock-voltage overclocks. I OC the CPU on my LAPTOP from 2.8GHz to 3.2GHz with no ill effects (well, aside from a rise of 2C in temps) and I can get things done quicker.
But hey, whatever floats your boat.
Business use, dude. Not worth the risk. Ever.
Hmm...? As far as I know there's no risk on stock voltage OCs as long as they're stable. What risks do you mean?
The risk of a BSOD or subtle data corruption.
Then why don't AMD/Intel just sell the chips at those frequencies?
LOL_Wut_Axel said:
That can't happen if the OC is stable, a lot less so on a moderate stock voltage OC.

And do you retest your CPU every few months to avoid any regression? If you do, that's fine - but many people would not, and in that case OCing and sensitive data don't mix.
LOL_Wut_Axel said:
That can't happen if the OC is stable, a lot less so on a moderate stock voltage OC.

Intel17 said:
The risk of a BSOD or subtle data corruption.

Enlighten yourself to the world of RAS.
LOL_Wut_Axel said:
That can't happen if the OC is stable, a lot less so on a moderate stock voltage OC.

When talking about enterprise-class reliability (i.e., for business use), there is no way for us to achieve a "stable OC". Only Intel and AMD have the tools to do a real CPU test (one that achieves significant coverage) to determine, beyond just usual use cases, that something is truly stable for a particular clock speed, voltage, temperature, and duration (years).
All our "enthusiast" stress-testing tools like OCCT, Prime95, LinX, etc., don't really cut it. They do not get significant coverage, and are little more than "max operating temp testers". This is why you hear over and over again stories of overclocks being stable in [Prime | OCCT | LinX | whatever] for hours, but then crashing in another tool, or in a couple of games.
I try to delude myself by relying on AMD's OverDrive utility, convincing myself that it seems to do a wide range of tests and must have been designed by AMD's CPU engineers, so it's possible that it might actually be getting enough coverage. While it has not bitten me yet, I have no illusions that it is a substitute for the real tests done in AMD/Intel labs, so it's still no good for any business/enterprise reliability needs.