I've used Linpack, so probably the same as your HPL. I found that I would hit a point where the system would lock up; it seemed fairly stable (a couple of Linpack runs) with a lesser OC.
I can't really compare my results to Windows as I have AVX on Linux and not Windows, but I know they...
Actually, I was reading recently (on VR-Zone, I think) that NVIDIA was planning to directly compete with Xeon and such – definitely with a high-power core, but I'm thinking also definitely with a lot of cores.
VT-d lets a VM use a device (graphics card, network card) directly, which also means that device can then only be used by that one VM. So it wouldn't provide any real benefit in that situation, as far as I know.
While you bash the i7 3770 for high temps in that preview, please remember the i5 had very low temps – and please explain the difference at the same time. I think I can explain the difference myself, but I'm going to be interested to see what others come up with on that.
www.tweaktown.com/articles/4621/intel_ivy_bridge_overclocking_with_the_core_i7_3770k_and_core_i5_3570k_cpus/index.html
Interesting to see how the 3770K and 3570K temps compare – very different for a similar overclock.
The only downside could be that they start the GUI automatically, and you would have to disable it or it will keep using resources. But that's a very minor issue.
Why does the power usage only decrease roughly proportionally to the square root of the area (I think that is what it is doing)? I would have expected it to decrease linearly with area (halve the area, double the resistance and halve the capacitance, surely?).
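Here's a quick back-of-envelope check of the "linear in area" expectation, using the standard dynamic-power relation P = αCV²f. The capacitance and clock figures below are made-up illustrative numbers, not real CPU values:

```python
# Simple dynamic-power model: P = activity * C * V^2 * f.
# If capacitance scales with die area (and V and f are held fixed),
# halving the area should halve the power - i.e. a linear relationship.

def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    return activity * c_farads * v_volts ** 2 * f_hz

full = dynamic_power(1e-9, 1.2, 3.4e9)    # illustrative numbers only
half = dynamic_power(0.5e-9, 1.2, 3.4e9)  # half the capacitance
print(half / full)                        # 0.5: power halves with area
```

Of course a real chip won't behave this simply: leakage current, the uncore, and voltage changes between process nodes all muddy the picture, which may be where the square-root-ish behaviour comes from.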
I won't presume to try to answer for him, but I would suggest that any degradation would most likely cause gradual increases in power usage which would translate to gradual increases in temperature.
No one in their right mind will use a tablet or cellphone for any major content creation, ever. So PCs will always be available. Not to mention that tablets and cellphones are all essentially dependent on the internet, which requires servers. So I think the PC market is fairly safe, although the...
The IGP in the 2500K will probably consume a small amount of power even when the graphics is disabled. It is possible that the 2550K has the IGP physically disabled, preventing it from using any power whatsoever. That said, the difference will probably be minimal and won't affect anything.
Plain and simple, you just proved that H67 does not overclock (not even limited overclocking). It does use turbo though, as evidenced by the 34x multiplier – the four-core turbo – which is used almost all the time.
Can't say they are bad when you're talking about what some distro provides. What they themselves provide is what matters in such cases, which is the source – and that's pretty good, as you said yourself.
See below.
Did you read what you said? You said "I dont think there has ever been a 57x multy chip ever seen with liquid nitrogen cooling and 1.7 volts", and that was what I was replying to.
You know, I read a test that compared the stock Intel TIM and others, and the Intel stuff beat all the others – http://www.hardwaresecrets.com/article/Intel-Stock-Thermal-Compound-Review/1273
Dust. The cooler is mounted fine. The case has less airflow than some, but does have a duct on the side. I've seen low 70's when it is clean. I'm using the Intel Linpack binaries for Linux, which support AVX, and as I run it at runlevel 3 (no GUI) there is very little else going on to "distract"...
My i5 (on the stock cooler) will pass one run of Linpack (on Linux, 64-bit, AVX, runlevel 3 so no GUI, wouldn't try more runs) when overclocked slightly (can't remember, but around 3.7 – 4.0), but hit 90 C (slightly over 100 GFLOPS, if I remember). It gets low/mid-70's to upper-80's at...
But still note the "for 99% of the folks out there" – it is an issue for some. In both ways, depending on the PSU and the situation. A high wattage single-rail PSU would almost make a reasonable arc welder (especially if it was used as the power source for a constant current supply). A...
Then why bother with GNOME/Unity? Try a different desktop environment such as KDE or Xfce. Even a different distro such as openSUSE or Fedora. As for software, are you stuck with particular programs, or do you just need anything to do a particular job? And the loading times for me are much...
Did you notice that they give the standard error? That is, each benchmark was run multiple times and the mean and SE calculated – the mean being the length of the bar, and the SE being shown below the name and also with error bars (which are too small to see). So it's not a "margin of error".
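For anyone unfamiliar with the distinction: the SE is just the standard deviation of the runs divided by the square root of the run count. A quick sketch with made-up benchmark scores (the numbers are purely illustrative):

```python
import math
import statistics

runs = [102.1, 101.8, 102.4, 101.9, 102.0]  # hypothetical scores from 5 runs

mean = statistics.mean(runs)                         # the bar length
se = statistics.stdev(runs) / math.sqrt(len(runs))   # standard error of the mean

print(f"{mean:.2f} +/- {se:.2f}")  # prints "102.04 +/- 0.10"
```

Note how small the SE is compared to the spread of the individual runs – that's why the error bars on the charts are too small to see.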