And they're very confident about delivery too, since they have an established fulfillment schedule: "Usually ships within 2 to 4 weeks."
Not many user reviews - are you sure this is a trustworthy product? ;-)
It wouldn't be, unless the leakage current is significantly lower.
http://www.computerbase.de/2015-06/intel-core-i7-5775c-test/
There is a big test from Computerbase.de.
That was to be expected with small(er) nodes and Intel no longer using solder TIM on desktop parts (non-HEDT). I expect Skylake to run even hotter but with better overclocking potential, though not by much, as TDP has also gone up to 95W. Hotter than Devil's Canyon, and max OC seems to be 4.1-4.2GHz.
http://pc.watch.impress.co.jp/docs/topic/review/20150613_706785.html
Broadwell GT3e does much better in games. 3DMark is pretty much on par with Kaveri, but in all gaming benchmarks (not only in this review) it's easily faster than Kaveri.
Perhaps the question should be "why does it need more voltage?"
Base clocks: i7-4770 - 3.4GHz, i7-5775R - 3.3GHz, i7-6700 - 3.4GHz. TDP has been reduced, but remember this is a thermal specification, not a cap on how much power the CPU can use. The lower TDP would seem to indicate the chip runs hotter per Watt, i.e. under Intel's worst-case scenario at the base frequency, only 65W of dissipation is required to stay within thermal specs.
It also seems there might be problems producing chips with good voltage scaling at the higher frequencies across these newer generations (Skylake's real performance is yet to be seen).
What do you think?
Now here's the interesting part for dGPU users. It's faster than Core i7 4770K and a mere 1% slower than Core i7 4790K @ 1080p gaming tests using a dGPU, despite much lower clocks. That's Broadwell's IPC and eDRAM working in its favour.
If I recall correctly, the production cost for the eDRAM is $3. So even with a 100% margin on that part, it's not a financial problem to integrate it everywhere. There are other challenges involved, though. But I think we will fairly soon see eDRAM become the de facto standard, until stacked memory takes over for good in the consumer space.
I would love to see eDRAM and at least GT3e or GT4e standard across the entire mobile big-core lineup. On the desktop, I don't really care. I know you disagree Shintai, but I think it will be many years before iGPUs compete with dGPUs on the desktop. Yes, Skylake GT4e may be a 50% improvement (yet to be seen, BTW), but there are 14nm dGPUs coming as well, which should give a huge boost to both performance and efficiency. So yes, it is improving, but it's shooting at a moving target.
If dGPU share keeps shrinking at the current rate, then 14nm dGPUs will be the last dGPUs you ever see.
That's why I keep saying the IGP doesn't need to beat the dGPU to win. It just needs to remove the ROI from dGPUs.
Perhaps, but the prices of workstation GPUs are astronomical, and those users will pay even more if they have to, because their work depends on it. So there will still be funds and motivation to pay for development of dGPUs.
So the iGPU is about twice as fast as GT2, with 2.4 times the shaders. I wonder how the performance would scale without the eDRAM.
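A quick back-of-the-envelope check of that scaling; the EU counts here (20 vs 48) are my assumption, chosen to match the 2.4x shader figure above:

```python
# Rough per-shader scaling: GT3e is observed ~2x as fast as GT2
# while carrying 2.4x the shaders (assumed 20 vs 48 EUs).
gt2_eus, gt3e_eus = 20, 48   # assumed EU counts
speedup = 2.0                # observed ~2x performance

shader_ratio = gt3e_eus / gt2_eus        # 2.4x the execution units
efficiency = speedup / shader_ratio      # performance gained per added shader

print(f"{efficiency:.2f}")  # ~0.83: sub-linear scaling with EU count
```

If the extra EUs only deliver ~83% of linear scaling *with* the eDRAM, bandwidth is plausibly already a limiter, so without the eDRAM the scaling would likely be worse still.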
Board number two behaved much more reasonably, particularly at stock settings. All of the benchmarks completed, and while performance in some cases was lower than board one, that was thanks to the CPU using the correct clock speeds. Then we tried overclocking and ran into problems again. We’ve seen reports of people hitting anywhere from 4.4–4.8GHz with Broadwell samples, but our CPU doesn’t want to go much beyond 4.2GHz. We were able to hit that on all four cores at 1.36V, which represents a respectable 14–27 percent overclock, but 1.4V at 4.3GHz proved unstable, so we stopped there. 4.2GHz also matches nicely with our 4790K clocks, which range from 4.0–4.4 GHz but generally run around 4.2GHz.
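The 14-27 percent range quoted above can be reproduced by measuring the 4.2GHz result against both of the chip's stock clocks; the 3.3GHz base and 3.7GHz max turbo used here are assumed stock values for the Broadwell sample:

```python
def oc_percent(oc_ghz: float, stock_ghz: float) -> float:
    """Overclock headroom as a percentage above a stock clock."""
    return (oc_ghz / stock_ghz - 1) * 100

# Assumed stock clocks for the Broadwell sample: 3.3GHz base, 3.7GHz max turbo.
base, turbo, oc = 3.3, 3.7, 4.2

# Measured against turbo it's the low end of the range; against base, the high end.
print(f"{oc_percent(oc, turbo):.0f}-{oc_percent(oc, base):.0f} percent")  # → 14-27 percent
```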