The "DDR" in GDDR5 probably refers to the forwarded write clock rather than the command clock in a GDDR5 DRAM. So for a DRAM running its command clock at 1.2GHz, the forwarded clock is physically running at 2.4GHz.
Also, what you mention about the DRAM core frequency is not quite right. DDR3...
See: http://blogs.technet.com/b/markrussinovich/archive/2008/11/17/3155406.aspx
Admittedly, the measurable impact of the page file size after your physical memory size exceeds the peak working set size is probably extremely small.
I don't think this is quite the reason; mission-critical CPUs such as Itanium/POWER are the only processors which have soft-error detection in the core datapath itself and can retry instructions when an error is detected. (source, page 13)
If it's Vcc correlated, maybe a more plausible...
Some percentage of that system power consumption could also be attributed to power delivery losses in the VR. It certainly wouldn't be enough to bring the CPU-only power to be below its rated TDP though.
If I remember correctly, you will generally want to hook into the form's submit event (as opposed to a button's click event), because if the user manages to submit the form without the button (through the keyboard, for example), then returning false from the submit handler would be the only...
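A minimal sketch of that pattern, just to illustrate the idea. The `#signup-form` id, the `email` field, and the emptiness check are all placeholders of my own, not from the original post:

```javascript
// Hypothetical validation helper; the emptiness check stands in for real rules.
function validateForm(fields) {
  return typeof fields.email === "string" && fields.email.trim() !== "";
}

// Hooking the form's submit event catches every submission path:
// the button click, Enter pressed in a text field, etc.
// (The guard just lets this snippet load outside a browser.)
if (typeof document !== "undefined") {
  document.querySelector("#signup-form").addEventListener("submit", function (e) {
    if (!validateForm({ email: this.elements.email.value })) {
      e.preventDefault(); // the modern equivalent of "returning false"
    }
  });
}
```

Calling preventDefault() in the submit handler blocks the submission no matter how it was triggered, which is exactly why the submit event is the right hook rather than the button's click.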
I don't think this is true. A lot of subtle multiprogramming bugs (such as deadlocks, livelocks, and unexpected memory orderings) won't show up at all when the threads are forced to serialize on a single-core CPU. So not only do you lose on performance, but you will also not be able to reproduce these...
Densest kit on NewEgg that I could find was this one (your platform has to support registered DIMMs):
http://www.newegg.com/Product/Produc...%20x%2016GB%29
4x 16GB DIMMs: $679.99 USD
With an 8 slot LGA2011 mobo you can get 8 sticks for 128GB for ~$1300 USD.
A Samsung 830 128GB SSD is...
Propagation velocity in typical PCBs (FR-4) is about 6 inches/nanosecond, compared to the speed of light in a vacuum at about 12 inches/nanosecond. Consider a DDR3 bus transferring 1600 Mbit/s per pin: a bit only gets to travel about 3.75 inches before the next bit is launched onto the bus. Considering...
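The 3.75-inch figure falls straight out of the numbers above; a quick sanity check (per-pin data rate and propagation speed taken from the post itself):

```javascript
// DDR3-1600: 1600 Mbit/s per pin, so one bit time is 1/1.6 ns.
const bitRateGbps = 1.6;
const bitTimeNs = 1 / bitRateGbps;               // 0.625 ns between bits
const propagationInPerNs = 6;                    // ~6 in/ns in FR-4
const flightDistanceIn = propagationInPerNs * bitTimeNs;
console.log(flightDistanceIn);                   // 3.75 inches
```

In other words, at DDR3-1600 rates a few inches of trace already holds most of a bit time in flight, which is why trace-length matching matters so much.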
QPI/HyperTransport are primarily interconnects used in multi-socket systems, allowing processors to access the main memory attached to other processors while maintaining cache coherency. This is needed because, as you pointed out, memory is now connected directly to the memory controller...
Your CPU supports Intel Dynamic Acceleration, which lets it "turbo" up by one multiplier when it is running a single-threaded workload. It is basically a simpler version of the Turbo Boost seen on newer Core iX CPUs, but it isn't a heavily advertised feature.
I don't own an iPhone, but it sounds like it is not mounting as a USB Mass Storage device, which is preventing Windows from assigning a drive letter to it. I know on Android you can choose to connect as Mass Storage when plugging in via USB; is there something similar on iOS?
Actually, I can't seem to find such a thing as a Xeon E5-1240. I originally assumed it was an LGA1155 processor, none of which support dual sockets. Some LGA2011 CPUs do of course support dual sockets (and I believe parts with quad-socket support are planned).
Yes, I bet the heat density delta would be much greater if only the core/cache area were considered without the IGP, especially as the IGP-to-core area ratio increased in IVB.
I know VMware Player will recognize HT in the sense that I can allocate 8 cores to a VM on a quad-core machine w/ HT. However, I've generally only allocated 1-2 cores to a VMware Player VM, so I can't really comment on how well HT works in the VM. Overall though, a Sandy Bridge 2500K+ is a good boost...
Moore's Law doesn't have to be fueled solely by node shrinks. There are plenty of other tricks to play to continue performance improvements for some time, I think: 3D stacking/advanced packaging, new materials (III-V/graphene), optical interconnects, logic representations beyond binary...
Yeah, my main point was that the extra DIMMs affect stock performance and not just the potential overclockability of the RAM. I'm not aware of other chipsets which change the stock RAM speed based on the number of populated slots, though. For X79 it's something to keep in mind if you add RAM but don't...
X79 mobos with 8 slots will drop the standard memory speed when 8 sticks are installed vs. 4:
http://www.anandtech.com/Show/Index/5089?cPage=2&all=False&sort=0&page=1&slug=sandy-bridgee-and-x79-the-asus-p9x79-pro-review
In that case you could see a drop in performance if you don't require...
Even SRAM gets slower when you make the memory array larger (think of the increasing hit latencies going from L1->L2->L3: from ~3-4 cycles at L1 to dozens of cycles at L3). The same can be said about other solid-state memory technologies. As long as this is true there will always be a need...
Cache type can be set as a page attribute on x86, so it has a minimum granularity of 4KB. Privileged code can change this on the fly, although I am skeptical that it can be done while maintaining correctness and still having acceptable performance.
The other issue is atomic memory...
Support for I/O devices is typically handled by marking those memory regions as uncacheable. That would be a horrible idea to use for shared memory between two CPUs, though.
You might be able to build a system based around message passing, the problem will be that legacy...
Other than the IGP improvements, the drop in system power consumption under load compared to the 2600K was probably the most impressive result. IB will be great for notebooks.
Maybe you mean computationally only, but from a platform standpoint you will need the help of the real CPU if you intend to perform any I/O with the other parts of your system.
This is some disappointing news indeed.
EETimes is generally quite a respectable site. Or do you want another person from Intel to confirm it? It would be unusual for them to change their stated reasoning for the delay, but I guess anything is possible o_O
In the case of Hyper-Threading (which implements simultaneous multithreading, the form of MT where two threads can both execute at the same instant), it is actually not necessary for one thread to be stalled for the other thread to issue instructions and use some of the execution resources...
Hi and welcome to the forums!
No, as you mention, if the software is not designed for multiple cores then it will not use all of the available cores in the CPU.
The other cores will be unused at a given instant. However, the OS may decide to schedule a program on a different core on a...
Server-grade chips need to support features that can end up as unnecessary overhead in client-type applications. For example, multi-socket support, extra error checking/RAS, and scalability to 4+ cores on a chip all don't come for free in terms of performance. In this case the...
Not if Intel's MIC counts, supposedly 50+ cores on 22nm, and 64+ would not be a stretch on the next node. :P
This reminds me of the "10 GHz" prediction that we were supposed to have reached by around now... that turned out wrong as well.