Yep, the graphics performance increase in 3DMark is impressive. It's also quite good, but not as good, in GFXBench. So far, it seems as if Apollo Lake is a pretty good upgrade over Cherry Trail in both CPU and GPU.
I just started the built-in benchmark via the menu. I assumed no other settings were needed. Would be interesting to see (Windows) results from someone with a Bay Trail or Cherry Trail system.
I didn't mean to say your result was wrong. But I guess many sites don't necessarily use the default settings. For instance, I found a site with a Bay Trail POV-Ray result of 240 seconds, which obviously wasn't run with the default settings.
No worries! I did another run, specifically loading the benchmark ini file and running benchmark.pov manually. The result? 530 PPS / 495 secs, so pretty much the same.
Here are the final benches:
Cinebench R11.5 (64-bit)
OpenGL: 12.78 FPS
ST: 0.68
MT: 2.47
This MT score basically matches a Core i5-4200U (Haswell-U) and the Haswell-based desktop Celeron/Pentium! ST is on par with Core 2 Quad Q6600, and 36-48% faster than a similarly clocked Pentium N3700/N3710.
Apollo Lake is basically 2.4x as fast as Braswell in Fire Strike Graphics, that's impressive. Relative to AMD it's faster than Carrizo-L and ahead of Stoney Ridge in most GFXBench subtests. It's actually close to HD Graphics 515 performance.
Fritz Chess Benchmark 12
Pentium J4205:
Relative speed: 10.29
Kilo nodes/sec: 4937
POV-Ray 3.7
Pentium J4205:
527 PPS / 498 secs
Very nice IPC gain relative to Pentium J2900!
Fairly close to Core 2 Quad Q9400:
PCMark 8 Home Conventional
Pentium J4205: 1640
That's not the end though, expect me to add more comparisons over time.
The thing is, when you question results without even knowing how they were run, and don't provide any other data to compare (while others spent time running benchmarks or searching for test results), you're not adding anything to the discussion. The way you post makes it sound like anything remotely positive about Intel should be questioned. Just look at the other tests: it's a significant upgrade from Silvermont/Airmont.
Edit: The website doesn't mention anything about settings, so I will assume it's default like Brunnis ran. Feel free to prove otherwise.
Oh please, I have posted numerous posts positive about Intel, even in this very thread, because I think Goldmont is at last a solid Atom CPU. The thing is that I'm not an Intel marketing parrot. You should learn that when you keep on being positive about a brand, you suddenly look suspiciously biased (and the same goes for being consistently negative).
As far as other benchmarks go, here's Bay Trail running POV-Ray in 240 seconds, which I previously mentioned. Contrary to your accusations, I do search for information; instead of stockpiling benchmark results in a forum, I'd rather spend time trying to understand why some results look odd.
Again, if something is not in line with other results, that raises questions. Do you want me to dig out old AnTuTu scores that showed Intel Atom faster than the competition despite all other benchmarks showing the contrary? The cheating was demonstrated by some people in this very forum (and I showed how Intel's ICC was cheating to get that result). And note I don't claim there's cheating here; I'm just asking a legitimate question about the validity of your comparison.
Nice personal attack here. I suggest you read forum rules if you're not familiar with them.
Pentium J2900 completed the test in 824.61 seconds, while Brunnis's Pentium J4205 did it in 498 seconds. Now you're making up excuses for why the comparison is invalid based on the fact that it looks like an outlier to you. I beg to differ:
Let's take a look at Cinebench R11.5: notice how the J4205's MT score lags a bit behind an Athlon II X3 445, exactly as XtremeHardware's POV-Ray 3.7 results indicate.
2.47 (J4205) vs 2.61 (X3 445) / 498 secs (J4205) vs 475 secs (X3 445)
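The claim that the two benchmarks agree can be cross-checked with a few lines of arithmetic; here is a quick sketch using only the figures quoted in this thread:

```python
# Cross-check: the J4205's deficit vs the Athlon II X3 445 in
# Cinebench R11.5 MT (higher score = better) and in POV-Ray 3.7
# (lower time = better). Figures quoted from this thread.
cb_j4205, cb_x3_445 = 2.47, 2.61        # Cinebench R11.5 MT scores
pov_j4205, pov_x3_445 = 498.0, 475.0    # POV-Ray 3.7 times (seconds)

cb_deficit = 1 - cb_j4205 / cb_x3_445     # ~5.4% behind in Cinebench
pov_deficit = 1 - pov_x3_445 / pov_j4205  # ~4.6% behind in POV-Ray

print(f"Cinebench deficit: {cb_deficit:.1%}, POV-Ray deficit: {pov_deficit:.1%}")
```

Both deficits land at roughly 5%, which is what makes the two results look consistent.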
You're asking it without providing anything useful to support your claim, and the AnTuTu reference is just laughable. The Goldmont results here are about what you would expect looking at Cinebench and other tests. Come back when you have something substantial; otherwise the POV-Ray comparison stays in my post.
Good and bad news. I got my 2x4GB DDR3L-1866 CL11 yesterday. It's Kingston HyperX Impact with article number HX318LS11IBK2/8. The good news is that it works with this board, in dual channel and at 1866 MHz. The bad news is that it doesn't really seem to improve performance. There are marginal increases of 1-2% in several benchmarks (56 ST and 198 MT in CB R15, for example), but a few graphical tests (3DMark, GFXBench) actually run ever so slightly slower. We're talking 1-2% there as well, so not exactly noteworthy. The big takeaway here is that 1866 MHz memory does not seem worthwhile over 1600 MHz. The one caveat here is that I'm comparing 2x8GB 1600 MHz vs 2x4GB 1866 MHz and that the modules have a different memory chip layout (2 rows per side on the 1600 MHz modules vs 1 row per side on the 1866 MHz modules) and probably different subtimings (which can't be set manually). I don't know, but perhaps this might have a minor effect.
I'd like to point out that they say in the review that they're running a rendering of the Chess2 image (chess2.pov). That is not the default benchmark, which is called benchmark.pov. So, at least that result is out.
ASRock J3355B-ITX Intel Dual-Core Processor J3355 (up to 2.5 GHz) Mini ITX Motherboard/CPU Combo
Supports HDMI with max. resolution up to 4K x 2K (3840x2160) @ 30Hz or 2560x1600 @ 60Hz
Still NO "4K@60" support, even from the Gen9 iGPU in Apollo Lake? Or is this one of those "product differentiation" things, and the Pentium J4205 has it?
I would think, what with the proliferation of both inexpensive 4K HDTVs and monitors, as well as competing ARM-based "TV Boxes" that support 4K@60, that this would be standard fare, at this point in time. WTF is Intel thinking?
IMHO, lack of 4K@60 makes these parts a non-starter for HTPC.
Wow, is Intel really behind then. We keep getting told, by the "usual suspects", that Intel is SO far ahead in process technologies, and that even if TSMC and Samsung (and GF, LOL) catch up, that Intel has the differentiating lead due to their co-development of architecture with process, that will keep their products in the lead.
Well, where I stand, Intel is just falling farther and farther behind. HDMI2.0 / 4K@60 should be STANDARD, on any newly-released products today. This is pitiful, Intel. WTF, get your collective head out of your ***.
Edit: I was really looking forward to an Apollo Lake Brix unit, with a quad-core Atom, similar IPC to a Q6600, and HDMI2.0 / 4K@60, to pair up with a cheap $250 Walmart-special Sceptre 43" 4K LCD HDTV with three HDMI2.0 / 4K@60 ports on it, for my new everyday rig. I thought it would be glorious. Now my dreams are shattered, once again, by Intel. Thanks guys.
Excuse my ignorance, but isn't the actual output identical to what it would be with native HDMI 2.0 from the SoC? The Apollo Lake BRIX units have HDMI 2.0 4K@60Hz as well, so it's probably the same or a similar solution to what ASRock uses.
Yep, definitely. My point was just that some people on the Web are using non-standard settings. At least that page made it clear.
To get a better picture, I made a quick comparison of J2900 vs J4205 using Geekbench 4 floating-point benchmarks. In MT, results range from -10% (J4205 slower) up to +62%; the geomean is 35% better (in ST the speedup is 45%).
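The geometric mean is the usual way to aggregate per-subtest speedup ratios like these. A minimal sketch in Python; note the subtest ratios below are made-up placeholders spanning roughly the -10% to +62% range mentioned above, not the actual Geekbench numbers:

```python
import math

# Hypothetical per-subtest speedup ratios (J4205 / J2900) -- placeholders
# spanning roughly the 0.90x..1.62x range quoted above, NOT real data.
ratios = [0.90, 1.20, 1.35, 1.50, 1.62]

# Geometric mean: the n-th root of the product of the n ratios.
geomean = math.prod(ratios) ** (1 / len(ratios))
print(f"geomean speedup: {geomean:.2f}x")
```

The geomean is preferred over the arithmetic mean here because it treats a 2x speedup and a 0.5x slowdown as cancelling out.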
Sweepr quoted a result for CB11.5 J2900 of ST 0.47 / MT 1.88 vs yours of ST 0.68 / MT 2.47. That means +45% for ST and +31% for MT.
So POV-Ray being 65% faster is odd. Not necessarily wrong, but some more study would be needed, for instance finding another J2900 result (which I could not find).
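The percentages above fall straight out of the raw scores; here is a quick sketch using only the numbers quoted in this thread:

```python
# J2900 vs J4205 speedups from the scores quoted earlier in the thread.
cb_st = 0.68 / 0.47 - 1    # Cinebench R11.5 ST: ~+45%
cb_mt = 2.47 / 1.88 - 1    # Cinebench R11.5 MT: ~+31%
povray = 824.61 / 498 - 1  # POV-Ray 3.7 time ratio: ~+66%, the outlier in question

print(f"CB ST: +{cb_st:.0%}, CB MT: +{cb_mt:.0%}, POV-Ray: +{povray:.0%}")
```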
That's no big deal in the end; the J4205 looks good, but that's not a reason to swallow results without giving them some thought.
Not really. It was my understanding from people using those Club3D active DP-to-HDMI2.0 adapters for 4K@60 support, that they still suffered from the "shrinking / disappearing desktop (icons)" in Windows, when the monitor went to sleep, due to how DP versus native HDMI signalling works in Windows.
Interesting. It's weird that this isn't better implemented by the OS. It seems the problem is that when the display is disconnected (which apparently often happens when a DP monitor is turned off), Windows defaults to some simulated/virtual screen with a different resolution. And all windows and possibly icons get resized/rearranged. It's puzzling that they don't just use the same resolution for the virtual display as was used on the display that was just disconnected...