monstercameron
Diamond Member
- Feb 12, 2013
Haswell is all... well... and good, but what about Crystal Well? How did it do that? I'm still so impressed!
I'm still unclear on what gaming advantages Haswell has over the 3930K. I guess I'll have to keep digging, or accept that the benefit must be so small that no one has posted a huge banner announcing it.
Optimized software has quite a biased opinion.
But since you're suddenly numbers-agnostic, these ones should suit you...
Here's what happens when all CPUs are treated more or less equally
with respect to their ISAs and architectures:
http://www.guru3d.com/articles_pages/core_i7_4770k_review,19.html
I'm not sure that's a fair assessment; the AMD A10-5800K (DDR3-2133) still beats the HD 4600 (with DDR3-2400). Crystal Well, however, is an equalizer that does win. Hypothetically, if the A10-5800K had as much bandwidth, do you think GT3e would still beat it? In any case it does, and AMD needs to step up their game...

2011: Intel's crappy IGPs will never beat AMD APUs.
2013: But, but, they cost much more!
Quite a change in 2 years. If Broadwell brings its rumoured 50% graphics boost things will heat up once again in 2014.
I think Intel's solution would still come out ahead. I wonder what the respective GPU die sizes are. Pretty sure Intel's 40 EU configuration is larger, and it's on a more advanced process as well.
If a ~2-3x bandwidth advantage only nets 6 frames more, a 90+ GB/s A10 would seemingly be higher still... how well does performance really scale with bandwidth?
Well, if you express the frame-rate improvement as a percentage, you can get your answer.

However, bandwidth scaling is a sliding scale. At some point you have "enough" and more bandwidth buys relatively little performance. Conversely, at some point the graphics core is starved and scaling is nearly 1:1, so it's not possible to put a fixed number on it. IGPs tend to operate toward the starved end, since it's easier to add GPU cores to the CPU than it is to add bandwidth.
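The "sliding scale" above can be sketched as a simple roofline-style model: achieved frame rate is capped by whichever resource runs out first. All figures below are made up for illustration, not taken from any review.

```python
# Hypothetical roofline-style sketch: FPS is limited by whichever of
# compute throughput or memory bandwidth is exhausted first.
# All numbers are illustrative, not measured.

def fps(compute_limit_fps, bandwidth_gbs, gb_per_frame):
    """Achieved FPS is the lesser of the compute-bound and
    bandwidth-bound rates."""
    bandwidth_limit_fps = bandwidth_gbs / gb_per_frame
    return min(compute_limit_fps, bandwidth_limit_fps)

# A starved IGP: doubling bandwidth scales nearly 1:1...
print(fps(60, 25, 1.0))   # bandwidth-bound
print(fps(60, 50, 1.0))   # still bandwidth-bound

# ...until the core has "enough", after which extra bandwidth is wasted.
print(fps(60, 100, 1.0))  # now compute-bound
print(fps(60, 200, 1.0))  # no further gain
```

This is why no fixed scaling number exists: the answer depends entirely on which side of the knee the part sits.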
Not sure if serious. Cost is always a factor when you try to sell products to the masses.
Iris Pro is $468... and a $90 premium to make it competitive means it's approaching $100 discrete-GPU territory, which for desktop users would blow away Intel's iGPU regardless.
Let's be honest, it's an awesome mobile product from Intel, with amazing perf/W, and it's to be commended. But once you move to desktops, its pricing puts it in the fail category. Desktops aren't limited to ultra-low TDPs. A 3770K + discrete card destroys Iris Pro for less money.
Just like AMD's secret ingredient will outshine Intel chips in some fringe tasks? Or how about my 1045T crushing my 2500K when I run six encoding jobs concurrently? That kind of relevance?
Your attempt to defend anything and everything Intel is pathetic, just like these new chips.
Is it just me or do motherboards have a huge effect on power consumption?
It's not just you.
If it performs faster in one application, it performs faster in that application. There's no room for opinion there, regardless of whether or not the application in question is actually relevant to assessing the real-world performance of a microarchitecture.

Benchmarks are not neutral numbers... When Intel agreed to help MS optimize Excel, we can be sure the Intel developers put their opinion about the "best way" to "optimize" this software into it, so ultimately the results are influenced by the developers' opinions. We are not in pure mathematics, where results are independent of human will...
It looks like a serious problem to me. According to AnandTech there is an 11.8% increase in power consumption for only a 13% performance increase. You'd get almost the same result by simply overclocking an Ivy Bridge. What happened to the super savings from the on-chip voltage regulator and the improved 22nm process?
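As a rough sanity check, and assuming both of the figures quoted above are relative to Ivy Bridge at stock, those two numbers imply that perf/W barely moved:

```python
# Back-of-the-envelope check of the figures quoted above
# (+13% performance for +11.8% load power): perf/W is nearly flat.

perf_gain = 1.13    # relative performance vs Ivy Bridge (quoted figure)
power_gain = 1.118  # relative load power vs Ivy Bridge (quoted figure)

perf_per_watt = perf_gain / power_gain
print(f"perf/W change: {(perf_per_watt - 1) * 100:.1f}%")  # ~1.1%
```

About a 1% perf/W gain generation over generation, which is why the overclocked-Ivy-Bridge comparison stings.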
Ah yes. Higher power consumption equals technological regression.
I don't think Intel ever had any hopes of pulling away a significant number of desktop Sandy Bridge users to Haswell.

The review seems kind of short. Load power consumption is pretty bad. Will stay with my 2600K.
Nothing; Intel had announced higher TDPs, therefore we already expected higher consumption from this 'improved' technology :sneaky:.