Spoken like a true enthusiast! Waiting for Ryzen to officially launch feels like being a child on Christmas Eve!
PLEASE SANTA, when are you going to get here?
Most people, after all the news, were like
Instead, they should be more like
I bought a 3820 on socket 2011 for work to have quad-channel memory (I perform heavy, memory-hungry calculations)... Newer notebooks with newer Intel architectures are quite comparable in speed... With a good prefetcher, only simple calculations over lots of data are memory bound... I also wrote CUDA software (image filtering) that is L1 limited, not memory limited... So with enough L1/L2/L3 bandwidth, memory performance will matter only for the iGPU. But socket 2011 chips and Zen don't have an iGPU...

It's pretty much a given it will be cheaper. AMD needs to break the inertia of people buying Intel. There is also plenty of room for AMD to price Ryzen for decent margins. There will probably be a top-bin golden-sample part that's going to be expensive, but I am convinced the main SKUs will be significantly cheaper than Intel's.
If AMD were going after the $1k CPU market, they would have included quad-channel memory. Their intentions are clear: 8C/16T will be sold at mainstream prices, around $500 or less.
So when did your Zen sample arrive?
Most of what you posted were GPU-limited tests. We don't even know the details of how the games were tested by CanardPC, or which were CPU limited, etc., except to say there's clear scaling between the i5s at different frequencies, so on average there is at least some CPU limitation.
Given that most of the titles tested are quite old, we know by default they won't scale well with higher thread counts, and it's plainly obvious from the fact that Skylake is soundly beating BW-E, something that rarely happens in modern titles.
Kinda off topic, but I just want to add: if we're talking about games, IMO Starcraft 2 would be the most revealing title if you want to test a CPU's single-core gaming performance. Starcraft 2 should be in every game-based CPU benchmark suite. Not to mention, since the game is absolutely CPU limited, memory speed also scales well with overclocks.
Remind me where I was talking about compiler developers being the central issue here. Rather, it is that the Blender devs didn't bother to use a compiler that uses modern instructions. But nice red herring.

It's not about just setting some flags; you have to write/refactor your code so that unrelated variables can be packed together and executed with the same operation. It's so HARD that compiler developers spend a massive amount of time and money making their compilers do auto-vectorisation, and guess what, it still sucks and doesn't work well. Those lazy-ass compiler developers, who do they think they are with their fancy Comp Sci degrees and PhDs, if they just stopped being so lazy.
And now we come to the false dilemma fallacy.

I guess all computer game devs are lazy for not using AVX either. You realize you now expect the developer to maintain two completely different sets of code for one application: one factored in a way that allows the dev to take advantage of AVX/AVX2/FMA operations, and another to keep all the Pentiums, Celerons, K10s, Westmeres, etc. usable.
The Stilt's SIMD build runs faster on Lynnfield by a very small amount, vastly faster on Piledriver, and vastly faster on Haswell/Skylake.

You obviously don't understand. Compilers have many, many flags that can be set, and there are always trade-offs to be made. Then there is the version of the compiler, which can have a big impact, and the COST of the compiler can also be a factor. Choose the wrong optimization and things can go horribly wrong depending on how your code operates.
Another red herring.

All the things that try to just be a benchmark get gamed to hell and back (SPEC, for example).
Another red herring. Your argument and mine are not the same. Mine is that if you're going to use a benchmark, especially basically by itself, to claim something, then you should be certain it's up to the task. The Stilt's builds show that the stock Blender builds aren't.

All applications are benchmarks, even if they are completely crap internally. They all tell you something if you take the time to understand.
No. The stock Blender builds are telling us that they're not using modern instructions and are therefore anachronistic.

Blender is telling us that Bulldozer has internal bottlenecks that other cores don't have.
Unless it took The Stilt a lot of effort to make those builds, your point is invalid. And Blender has been in development for a long time; it makes perfect sense to assume that the effort put into coding the program is far beyond the effort it takes to recompile it with a decent compiler.

No it isn't; that's the complete point that you choose to ignore while dreaming up other crap, like it being the app and the devs' fault. Those lazy devs (shakes fist).
If you want to harp about Bulldozer (which is particularly amusing since I benchmarked Piledriver, NOT an 8150), then do as I ask and post the data progression with The Stilt's builds. The SIMD build would suffice.

Bulldozer has serious technical issues. Why do you think its performance per clock is so bad? Go look at an ARM A73 core: it's a narrower design, yet it smokes Bulldozer in performance per clock. Why?
The vast speed increase on a processor design from 2012 (has anyone tested the 8150 yet?) that is also seen on current Intel processors is clearly worthwhile. Fixing any bugs related to using a clearly superior compiler is clearly worthwhile.

The point is it's not just setting a flag; you have to make your code fit the model the execution units use. If you took the 2.75 code base and compiled it with AVX/AVX2/FMA etc., you would see that. And then, what does refactoring your code like that do to products that don't support that optional instruction set?
Another red herring. The stock Blender builds are super slow unless you're still running Lynnfield.

You realize Blender is already using 128-bit SSE operations; it's not like those lazy developers didn't know how to use SIMD.
Great!

The FACT is Bulldozer has issues with large amounts of FP operations in flight.
The FACT is Bulldozer has issues with large amounts of FP stores in flight.
The FACT is Bulldozer/Piledriver has to round-robin its instruction decode between the two cores of a module.
The FACT is if you vectorise something (pack four 32-bit ops into one 128-bit op) you reduce:
the amount of instruction decode from 4 to 1
the amount of scheduled ops from 4 to 1
the amount of stored data from 4 to 1
Bulldozer, Bulldozer, Bulldozer! Repeat it three times and maybe Winona Ryder will pop out of a cake.

There is no conspiracy here: if Bulldozer sees a larger gain from the REFACTORING of code than other products do, then Bulldozer has a bottleneck in a place where no other core does. Now, given that one of the Zen architects explicitly called out this issue with Bulldozer...
I appreciate that.

This is going to be my last post on the matter; I'm not wasting any more time.
The better question is: what actually changes between the SSE build and the AVX/AVX2 build?
Is the compiler creating friendlier code thanks to the 3-operand form?
Are some of the new optimizations in the code base that allow such a big uplift removing a bottleneck?
Is the improvement from using a different/newer compiler that handles Bulldozer better?
He also ignores the fact that the FX8370 and the A12-9800 are tied in the gaming tests. The A12-9800 has core IPC improvements but is hampered by only 2MB of L2/L3 cache in total, and it also cannot hit maximum boost speeds due to TDP limitations. That hints at most of the games using 4 threads or fewer and being more limited by single-core performance.
It's literally staring people in the face. They are so desperate to prove Ryzen has not improved per-core performance that they ignore the bottom of the gaming graph.
You claim that games don't scale beyond 4 threads
So, sorry but games scale beyond 4 threads
Wow, that is a highly interesting and impressive piece of work. Thanks, looking forward to it! And since you now have a 1070, if you wouldn't mind, try adding Arkham Knight into the mix. That game is more CPU intensive than most people think.

You are quite correct in thinking a Starcraft 2 benchmark is a good gaming CPU benchmark, but during my 2500K/860/Q9550 testing on the GTX 970, I found some others that are even worse.
http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...rks-bonus-i7-860.2428699/page-2#post-37391692
Actually, Diablo 3 seems to be the leader, since it maintained a very high delta for both the Q9550 and the 860. Thief is also a good candidate, but it seems that newer CPUs are mostly OK-ish. Unreal Engine 4, although multithreaded, is very demanding on the CPU too.
Mafia 2's benchmark with hardware-accelerated PhysX does a number on CPUs too. And of course let's not forget GTA IV. I recently tested it on my GTX 1070 and only got 59fps, up from 53fps with the 970, with very low GPU usage of course, indicating a huge CPU limit. This game is a timeless CPU test. I haven't tested any of the other top CPU limiters yet, however, to see what's what.
If Ryzen is any good, however, I will be testing them myself anyway, because I will be having one, thank you very much!
No, you are claiming that I claim that, by lying through your teeth.
I said MOST (not ALL) of those games tested, i.e. 4 of the 6 games tested, don't scale well with more cores and threads. They are far more sensitive to single-core performance and cache, and don't really make any real use of more than 4 cores.
But you would know that if you knew anyone who played those games, and conveniently editing out those other charts is not helping you.
You are trying to help your mate out by trying to sound clever. The damage limitation is getting serious now!! Basically: ignore all evidence about the games being tested and the AMD CPUs being tested.
but also think this review points at Zen reaching an IPC just like Broadwell or a bit behind (rather behind in games, unless Zen scales much better with frequency)
by lying through your teeth
But you would know that if you knew anyone who played those games and conveniently editing out those other charts is not helping you
You are trying to help your mate out by trying to sound clever
So at this point - you and your mates have not really proved anything
I think some of you younger enthusiasts have forgotten how things were back 15 years ago
8C/16T Ryzen chip can get within 10% of a Core i7 6900K whilst being 10% lower clocked
(and gain very little, allow me to add). So pure linear frequency scaling is NOT the norm. Also, I have not used those benchmarks you pasted because I sincerely doubt they use exactly the same settings as that review of Zen, so I don't think their results can be extrapolated as-is to those in the review. Feel free to prove me wrong by extrapolating one of the results in the Zen review (not Zen itself, any other known CPU there) with your bunch of benchmarks; I promise I would be really impressed and would acknowledge so in public. But until then, please allow me to be a bit sceptical about perfect clock scaling of the Zen architecture (a 10% gain in that average of games for 10% more frequency, as you claim).

The FX9590 is clocked 1GHz higher than the FX6300
I expect you will ignore all this and make something else up, but I have been lurking on forums long enough to remember the same response when CPC leaked the Athlon 64 results, etc.
This is going to be my last post on the matter; I'm not wasting any more time.
Next time, either mean what you say or don't post flaming, posturing things like promises that you're not going to continue a debate you chose to turn into flaming.

There is no false dilemma; that's just you failing to see the forest for the trees.
Let's use your logic with the data shown in the review. You claim that games don't scale beyond 4 threads (ignoring for a bit that the FX8370 does 73.6% in games vs 69.8% for the A12-9800, which you take as "tied"). Very well; here we have 3 datapoints: Core i5-6400 (3.1GHz all-core turbo), Core i5-6500 (3.3GHz all-core turbo) and Core i5-6600. All of them have exactly the same architecture, probably the same mainboard and so on, so they differ only in frequency. Presto, some Excel (Calc, btw) and you get a beautiful linear regression correlating frequency with the score in games (it doesn't fit well, so here you have your first hint that games don't scale linearly with frequency).
1- First, plain linear regression: [Freq in MHz] = 66.72972x[Score] - 2994.1337. Now put 98.6 in as the score and you get 3598MHz as the frequency needed to reach that score. Ignoring for a bit that zero score points to -2994MHz ;-), let's input 118.2, which is the score of the Core i7-6700K. You get 4892MHz, so, assuming games don't scale beyond 4 threads, this really "proves" that the Skylake core in the Core i7-6700K has 16% higher IPC (4892/4200) than the Skylake core in the Core i5 family. Right, sure.
2- Since that linear regression doesn't cross (0,0), let's use a second-order polynomial regression: [Freq in MHz] = 0.33466x[Score]^2 + 3.3795x[Score]. Again, checking with 98.6 this time gives 3587MHz as the frequency needed for that score. And using 118.2 we get 5075MHz; this time, with a regression that goes to zero frequency when the score reaches zero, the Skylake core in the Core i7-6700K has, amazingly, 21% higher IPC than the Skylake core in the Core i5 series.
So, sorry, but games do scale beyond 4 threads, as you could verify using Skylake cores alone. Now a small disclaimer: just because the Skylake core scales more or less with those regressions doesn't mean that OTHER cores scale the same. We don't have more datapoints beyond Skylake (the A12 and FX8370 use different cores and differ in core count, the i7-6800K and i7-6900K differ in core count, and there is just one entry each for Zen and a Haswell quad-core), so we can't know how every other core scales in those games. And yes, I do think Zen has made a huge leap in IPC vs the Construction cores (just compare with the FX8370), but I also think this review points at Zen reaching an IPC just like Broadwell or a bit behind (rather behind in games, unless Zen scales much better with frequency).
Byes