Eh, there are plenty of different models for software engineering compute resources. For any serious work, though, shared server resources reign supreme: you don't have to provide an 8+ core workstation for every engineer that would sit idle much of the time. Instead you just give your engineers 'light' client machines, typically laptops, which have the additional advantage of letting them work outside the office with basically the same capabilities.
Of course, I still prefer to have an actual workstation-class (i.e. quad-core) laptop, because the dual-core alternatives don't handle all the non-work stuff anywhere near as well.
The cost of proper desktop PCs for the developers is nothing compared to the increased productivity you gain.
Also, you often need the build output locally anyway, e.g. when debugging in MS Visual Studio, Trace32/Lauterbach, gdb, or whatever you use.
And having builds run on servers with more than 8 cores, rather than desktop-workstation equivalents, represents a further productivity gain. I'm not disagreeing with that in the least, just saying that a local workstation isn't the absolute best option out there.
Sure, if you have really heavy builds, it might be worth assigning them to a build server even during development (i.e. not just for continuous integration and the like). But often I find it's not worth the extra hassle: having to pull the build output from the build server to your PC whenever you need to test and debug it, etc. In addition, developers often do incremental re-builds (i.e. they do not compile everything from scratch each time), which usually don't take that long if you have a good workstation PC.
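For anyone unfamiliar with why incremental re-builds are so much cheaper: build tools like make simply compare file modification times and skip any target that is already newer than its source. A minimal sketch of that check (illustrative only, not any real build tool's code):

```python
import os
import tempfile

def needs_rebuild(source: str, target: str) -> bool:
    """make-style staleness check: rebuild if the target is missing
    or older than its source (compares modification times)."""
    if not os.path.exists(target):
        return True
    return os.path.getmtime(source) > os.path.getmtime(target)

# Demo with explicit mtimes so the result is deterministic.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "main.c")
    obj = os.path.join(d, "main.o")
    open(src, "w").close()
    assert needs_rebuild(src, obj)         # no object file yet -> rebuild
    open(obj, "w").close()                 # pretend we compiled it
    os.utime(src, (1000, 1000))            # source older than target
    os.utime(obj, (2000, 2000))
    assert not needs_rebuild(src, obj)     # up to date -> skipped
    os.utime(src, (3000, 3000))            # "edit" the source
    assert needs_rebuild(src, obj)         # only this file recompiles
```

In a project with thousands of files, only the handful you just edited fail this check, which is why an incremental build on a decent workstation finishes in seconds while a from-scratch build can take many minutes.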
AnandTech said: "3DMark Sky Diver and Cloud Gate: 3DMark is a staple of our reviews. Both run through several graphics and physics tests which work both the CPU and GPU. Sky Diver is the more difficult of the tests. Sky Diver is about five minutes, and Cloud Gate is about three minutes."
So Core m7 6Y75 is 40% faster than Core M-5Y71 @ 3DMark Sky Diver Overall.
This test is 5+ minutes long, and Broadwell-Y can only run it at ~600-700 MHz. If true, this suggests Skylake-Y can sustain higher clocks.
Good news in that case! But where did you find the 6Y75 benchmarks to compare with the Broadwell benchmarks above?
3x battery life, 2.5x performance, 30x graphics performance. Really? No wonder they need the disclaimer footnotes.
"Amazing experiences", "Stunning visual performance", "Ready for Business". The marketeers sure have gone overboard with meaningless buzzwords...
Also, "Intel's best processors ever". Well, what else should we expect? If they weren't any better than the old ones, why bother releasing them?
Sorry, I couldn't help myself. But this was simply too much marketing BS for me!
They are comparing a 5-year-old 1st Gen i5 ultramobile vs. a current ultramobile. Yes, with 5 generations of improvements and 2 die shrinks you can get 3x battery life, 2.5x performance, and a night-and-day difference from that crappy Intel integrated graphics of 2010.
Intel’s 6th Generation of its Core product line, Skylake, is officially launching today. We previously saw the performance of the two high-end Skylake-K 91W processors, but that coverage was limited in detail as well as in product range. So it is today that Intel lifts the lid on the other parts: from 4.5 W in mobile through Core M, to 15W/28W in Skylake-U, 45W in Skylake-H, and then the 35W/65W mêlée of socketed Skylake-S parts. For today's formal launch we will be taking a look at the underlying Skylake architecture, which was unveiled by Intel at their recent Intel Developer Forum this August.
As noted in the previous pages, the eDRAM from Intel will come in two flavors – 64 MB and 128 MB – which differs from the 128 MB-only policy for Haswell and Broadwell. Back when Intel first discussed eDRAM, it was noted that Intel considered 32 MB ‘enough’, but doubled it and doubled it again just to make sure the system truly saw some benefit. It seems that for some circumstances (or some price points, for that matter) 64 MB is felt to be a better fit in that regard, given that Intel believes its initial design had plenty of headroom. As far as we can tell, eDRAM will be available in 64MB for GT3e and 128MB for GT4e configurations (48 EUs and 72 EUs respectively), although we might see some variation as time goes on. We have confirmed with Intel that the 64 MB implementation is a half-silicon implementation (rather than a disabled full-silicon one), but the bandwidth to the system agent is the same in both cases.
It is worth noting that the Skylake-Y package is crazy small. At IDF Intel showed off a finished Skylake-Y package to demonstrate the CPU die size in relation to the package size, and to compare the overall package size to the already small Broadwell-Y package. While both packages are still 16.5mm wide, Intel has compacted the overall layout and squared off the CPU die, bringing the height of the Skylake-Y package down from 30mm to 20mm, a 33% savings in package size.
Pentiums have the same HD Graphics 530 as the Core i3s?? Interesting.....
Of course this means "mini ITX system", and it's the first thing that popped into my mind :biggrin: :biggrin:
All that's left now is for someone to fork over/leak the release date and the MSRP of these various chips. I checked over on WCCFtech but they didn't have anything. I'm sure the release dates and prices will be leaked at some point very soon.