Since new games require modern quad cores rather than HT dual cores or modules, the i3 is great, but going i5 is the real deal.
Looks like a great part for budget gaming PCs.
This is amazing.
Their FX-6300 platform seems to have some problems. It is losing half of its performance in Ryse and Crysis 3 compared to the 8350.
It is just as fast using 2133MHz DDR4, so why even bother with the faster stuff? At any rate, there has to be something wrong here; most DDR4 benchmarks show relatively modest gains over DDR3. What we are seeing here is an aberration so large it can't be trusted. I mean, look at the Ryse numbers. They show DDR3 2133 being way faster than DDR4 2133. That, to me, screams broken data.
3.5GHz vs 4GHz, 3 modules vs. 4 modules. I can easily see the 6300 bogging down when the 8350 would handle it fine.
Remember, CAS latencies on DDR4 are about twice what they were on DDR3. I wonder what low-latency DDR4 would look like? :hmm: And I assume the cheaper boards will allow low latencies?
And expensive. I considered it, but decided it was too close to i5 territory price-wise to make sense.
Not that I'm arguing with that statement, but I think the final decision depends on what you're going to use it for.
The closest Skylake i5 price-wise is the 6400, but that only has a base clock of 2.7GHz (3.3GHz turbo). The 6320 runs at 3.9GHz, so for older software or pure single-thread performance, the 6320 potentially has a 1.2GHz(!) frequency advantage. Even against turbo it's still a 600MHz advantage. And let's face it, most software is still single-thread bound. In those cases the 6320 is very hard to beat; in effect you'd need a 6600 minimum to match it with an i5.
For a casual non-gamer, the Skylake i3s look really attractive. Intel has a potential gem here, and that's not even considering the very nice Gen9 graphics.
This is why RAM latency should be measured in nanoseconds.
Core i3 6100 only has 3MB of L3 cache.
Core i5 6400 has 6MB of L3 cache.
Single-thread performance will be very close between the two even if the i3 has a 600MHz advantage. Multi-thread and gaming will be way faster on the Core i5 due to the larger L3 cache and four real cores.
I would rather take a Core i5 with a cheap mobo than a Core i3 with an expensive Z170 mobo + more expensive RAM.
Yeah, I should have mentioned that looking the other way, you only give up 200MHz by going with the 6100, but there is also the issue of the L3 cache being 1MB less. Traditionally that deficit wouldn't make much difference in the kinds of things an i3 will be expected to run, but that might be changing. It would probably be hard to isolate the effect of cache size.
The i3 6320 has 4MB of L3 cache. That's actually more per-core cache than the i5...
Who said anything about a Z170 mainboard + more expensive RAM? If you're on a budget the i3 will run fine on a cheap H110/B150 board with the stock cooler, and regular DDR4-2133 or DDR3-1866.
I'm not advocating i3s for enthusiast use, unless one has specific needs, but for regular non-enthusiast consumers the Skylake i3s are unbelievable value.
L3 cache is shared across all cores. So under a single-threaded load, a single core of the Core i5 has the full 6MB of L3 cache available.
So let's get this straight: if you pair an i3-6100 with a DDR4-2666 kit running at DDR4-2133 with tightened timings - which is something that is more-or-less an old Deneb/Thuban trick thanks to their cranky IMCs - you can achieve the same performance as if you had fully adjustable memory ratios?
Has anyone tested this to confirm? And is all that performance available on a budget chipset?
Also, what happens to the HD530 performance when you run it with DDR4-2133 CL13 vs DDR4-2666 CL15?
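For what it's worth, the bandwidth half of that comparison is easy to put numbers on, since theoretical peak bandwidth scales directly with transfer rate regardless of timings. A rough sketch (function name is just for illustration; assumes dual-channel, 64-bit channels):

```python
# Theoretical peak memory bandwidth: transfer rate (MT/s) times
# 8 bytes per transfer on a 64-bit channel, times channel count.
def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int = 2) -> float:
    return transfer_rate_mts * 8 * channels / 1000  # MB/s -> GB/s

print(f"DDR4-2133 dual channel: {peak_bandwidth_gbs(2133):.1f} GB/s")  # ~34.1
print(f"DDR4-2666 dual channel: {peak_bandwidth_gbs(2666):.1f} GB/s")  # ~42.7
```

So the 2666 kit has roughly 25% more raw bandwidth to feed the iGPU, independent of whatever the CL13 vs CL15 timings do to latency.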
Something is definitely not right with the Ryse results - 92.8FPS vs 38.3FPS!
Makes sense that with smaller L3, this chip is more sensitive to RAM speeds.
Quite impressive for the i3. I just got an HP Envy to rip the 6700K from and will pair it with an ASRock Z170 Extreme7 and some 3000MHz RAM combo, and was going to put a Pentium G4550 ($86) in there to make a nice HTPC upgrade. But I might as well spend the $30 for the lowest-bin i3 ($117) to give it some gaming viability someday. Now I gotta debate if 1MB more cache is worth $20...
The timings in conjunction with the clock speed determine actual latency which you would measure in nanoseconds. The timings aren't an absolute measurement of latency.
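To put that in concrete terms: absolute latency in nanoseconds is just CAS cycles divided by the actual clock, which is half the advertised MT/s figure since DDR transfers twice per clock. A quick sketch (function name is illustrative):

```python
# Absolute CAS latency in nanoseconds: CAS cycles divided by the real
# I/O clock. DDR transfers twice per clock, so the clock in MHz is
# half the advertised MT/s figure.
def cas_latency_ns(cas_cycles: float, transfer_rate_mts: float) -> float:
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000  # cycles per MHz -> nanoseconds

print(f"DDR3-2133 CL11: {cas_latency_ns(11, 2133):.1f} ns")  # ~10.3 ns
print(f"DDR4-2133 CL15: {cas_latency_ns(15, 2133):.1f} ns")  # ~14.1 ns
```

Which is why DDR4's "doubled" CAS numbers only partly translate into worse real latency once higher clocks are factored in.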
I'm not talking about just bandwidth/latency numbers. Has anyone actually done head-to-head application/game benchmarks with these different settings?
Also, I never take for granted what a GPU is going to do until I've seen the benchmarks. I know GCN iGPUs like memory clockspeed over low timings, but I haven't seen any benches of HD530 showing this effect. It makes sense, but still.
One thing I've noticed with Ryse is that this game is VERY sensitive to bandwidth. I remember reading a TPU article a while ago measuring the effects of PCIe bandwidth on a 980, and the only game to show a large performance difference was Ryse. I think it has something to do with it being an Xbox One technological showcase title (purely speculation on my part, though).
It's Haswell, but you can extrapolate.