Wasn't there a news post with Jim Keller saying K12 would have a bigger engine than Zen?
Yeah, K12 was wider (better IPC) and much better perf/W but was limited in clock speed. Zen was higher clock speeds and x86 compatibility but bad perf/W.
The 40nm X-Gene can compete with the 22nm Atom C2000 performance-wise, and that is definitely an accomplishment on its own. But the 40nm process technology and the current "untuned" state of ARMv8 software do not allow it to compete in performance per watt.
So it may be that AMD is treading the ARM vs. x86 server waters lightly to see how aggressive Intel is with the cores. I would presume that more aggressive core-count increases by Intel might discourage software development for ARM servers (indirectly hurting performance per watt).
Given what Intel is doing with Xeon D, it's not going to be easy, and probably not worth it for AMD to try right now given their lack of money.
In an attempt to strengthen the entry of ARM processors into the server market, British chip designer ARM has put together the Server Base System Architecture (SBSA), a definition of a standard platform for ARM-based servers. This move should reduce the abundant variation and complexity that has hitherto been a feature of ARM systems. SBSA was assembled by ARM along with its partners, including HP, Dell, AMD, Citrix, and Microsoft.
Even as ARM processors have proliferated in smartphones and tablets and are starting to make their first tentative steps into the server room, ARM has not been a platform in the way that the x86 PC is a platform.
Way back in the early 1980s, the IBM PC defined the way the computer booted, initialized its hardware, laid out its memory, and provided access to standard features like graphics and the keyboard. This enabled an ecosystem of PC software to develop. The PC platform was cloned by Compaq and others, and these clones were functionally equivalent to IBM machines. Operating system software that worked on one clone would work on any other, and it would work on the PC itself.
Over the years, the PC platform has changed, but this compatibility has remained as a core feature.
To the chagrin of operating system developers, ARM has lacked a comparable platform. Linux creator Linus Torvalds once described the proliferation of inconsistent, incompatible ARM systems as a "fucking pain in the ass," and implored the ARM community to "push back on the people sending you crap" and devise a common platform. Intel, likewise, has used this diversity to criticize ARM.
Since that statement in 2012, there has been some progress. Microsoft essentially defined an ARM tablet platform for Windows RT, enabling its kernel to work on both Qualcomm Snapdragon and Nvidia Tegra 2- and Tegra 3-based systems. Linux developers have also managed to consolidate their support for some of the diverse ARM platforms.
Without any clear market leader in the nascent ARM server market, this diversity and lack of platform could be deeply problematic. It would prevent easy software compatibility, with each different kind of system needing its own customized kernel.
The SBSA is ARM's effort to address that very problem. An operating system that targets SBSA will be able to run on any SBSA system: it will have the same basic platform components, put together in the same way, with the same kind of firmware, boot process, interrupt and I/O handling, hypervisor, and more. For example, SBSA will require all USB 2 controllers to conform with the EHCI 1.1 specification, all USB 3 controllers to conform with XHCI 1.0, and all SATA controllers to conform with AHCI 1.3.
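As a concrete example of what that kind of standardization buys, an OS can probe a SATA controller's AHCI version register and know exactly what behavior to expect. Here is a minimal sketch (per the AHCI spec, the 32-bit VS register holds the major version in the upper 16 bits and a BCD-style minor version below it, so AHCI 1.3 reads as 0x00010300; the helper names are my own):

```python
AHCI_1_3 = 0x00010300  # value of the VS register for an AHCI 1.3 HBA


def decode_ahci_version(vs: int) -> str:
    """Decode the AHCI HBA VS (version) register into a dotted string."""
    major = vs >> 16          # bits 31:16 hold the major version
    minor = (vs >> 8) & 0xFF  # next byte holds the minor version
    sub = vs & 0xFF           # lowest byte holds a sub-minor, if any
    return f"{major}.{minor}" + (f".{sub}" if sub else "")


def meets_sbsa_sata_requirement(vs: int) -> bool:
    """SBSA requires SATA controllers to conform to AHCI 1.3 or later."""
    return vs >= AHCI_1_3


print(decode_ahci_version(0x00010301))        # an AHCI 1.3.1 controller
print(meets_sbsa_sata_requirement(0x00010200))  # AHCI 1.2: not compliant
```

The point is that an SBSA kernel can rely on this register layout and these minimum versions on every compliant machine, instead of needing per-board knowledge.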
ARM's server ambitions have faced setbacks recently with the collapse of ARM pioneer Calxeda and competition from Intel's server-oriented Atom Avoton platform. SBSA could prove an important step toward making these plans come to fruition.
Cost shouldn't be much of an issue in this case. The smallest BDW-EP die is probably 10c (the smallest HSW-EP die is 8c). With the 14nm shrink, the BDW 10c should be quite a bit smaller than the HSW 8c. Most sales will be for 6c/8c, so Intel won't need too many fully functional 10c dies. Even with the 14nm yield problems, this shouldn't be a problem from the manufacturing side. Depending on the die size of BDW-EP 10c, the cost structure may actually improve for Intel. This isn't mainstream desktop, where most of a process shrink's advantage is invested into the iGPU.

Still think the 6950X is going to be more than $999. While the yields are getting better, there's still been talk of the die cost being high. Milking more money out might be one way of making that up.
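The die-cost argument can be sketched with a toy Poisson yield model. All numbers below are illustrative assumptions, not Intel's actual wafer costs, die sizes, or defect densities:

```python
import math


def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-area_cm2 * defects_per_cm2)


def cost_per_good_die(wafer_cost: float, dies_per_wafer: int,
                      area_cm2: float, d0: float) -> float:
    """Wafer cost divided across fully working dies."""
    good_dies = dies_per_wafer * poisson_yield(area_cm2, d0)
    return wafer_cost / good_dies


# Illustrative inputs: a ~20% smaller 14nm BDW-EP die on a pricier,
# higher-defect wafer vs. a HSW-EP die on the mature 22nm process.
hsw = cost_per_good_die(wafer_cost=5000, dies_per_wafer=140,
                        area_cm2=3.5, d0=0.10)
bdw = cost_per_good_die(wafer_cost=7000, dies_per_wafer=175,
                        area_cm2=2.8, d0=0.20)
```

Note that a zero-defect model understates effective yield here: a 10c die with one bad core can still be sold as an 8c or 6c part, which is exactly why "most sales will be for 6c/8c" helps the cost structure.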
How easy would it be to push 6900K 4GHz+ on air? Any guesses?
Probably super easy. 4GHz is pretty much no problem for any of the current HSW-E chips.
It might need specialized cooling, removing the heat from that many cores in such a tiny space starts to be a real challenge.
Heat density is probably becoming a real issue at this point, unless Intel decides to use something very conductive like a silver solder (which they really should use anyway, especially considering the price, but I digress). Perhaps to push up performance, research should be done into more heat-resilient materials (think 250°C+).
I thought we were talking about eight cores, with nearly double the heat emanating from a not much larger area.
You are ever the optimist, but there is a reason why these higher core count CPUs generally don't run quite as fast as quads, and that is partly because the heat can't get out of the die fast enough. So a good AIO or custom loop becomes more necessary on average, but those who can afford one of Intel's latest octacores probably don't need to worry about struggling along with a Hyper 212 Evo anyway.
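The "heat can't get out of the die fast enough" point is really about power density. A back-of-envelope sketch, where the die areas and power draws are rough assumptions rather than measured figures:

```python
def heat_flux_w_per_cm2(power_w: float, die_area_mm2: float) -> float:
    """Average heat flux across the die in W/cm^2 (100 mm^2 = 1 cm^2)."""
    return power_w / (die_area_mm2 / 100.0)


# Assumed figures for an overclocked quad vs. an overclocked octacore.
quad = heat_flux_w_per_cm2(140, 177)  # ~79 W/cm^2
octa = heat_flux_w_per_cm2(250, 356)  # ~70 W/cm^2
```

Even though the average flux is similar, the octacore's total heat output is far higher, so the cooler has to move many more watts, and in practice local hotspots under each core plus the thermal interface material are the real bottleneck.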
FUGGER over at XS, who dropped info a few months ago about his BDW-E being a nice surprise, had this to say after the 10c info leaked:
"Broadwell-E will have dual mode FIVR LVR
non linear droop control
lower vcore -10%
more efficient than HW-E
3DL modules moved under die
10%~15% improvement in performance
14nm = smaller die
very aggressive power management
SL-E hopefully will not have FIVR
I hope we get to 5Ghz on retail BW-E chips "
Granted, they are soldered, but anything close to 5GHz on BW-E seems quite optimistic. If I recall, the quads usually only managed around 4.2GHz, lower than both Haswell and Skylake. Of course, they did have a big iGPU and eDRAM, which the E-series chips don't.
The quads were mobile-focused chips with eDRAM. These are completely different designs.
Even quads put out a tremendous amount of heat when going for max clocks. An octacore that overclocks well with a 212 Evo is probably a golden chip being wasted on a skinflint fool.
How so? It is the same process and basic architecture, is it not? I have no idea how BW-E will overclock, but I am basing my estimate on the fact that since Sandy Bridge, pretty much everything has required an extremely golden chip and heroic measures to reach 5GHz. And even despite all the golden memories of SB, I don't think the majority of those chips actually hit 5GHz either.