I guess tential was right: by getting a Ryzen 5 1600 6C/12T CPU, I re-defined entry-level.

I think what is happening is great. Soon 6C will be the new 2C Pentium/i3 class.
Well, now they have to fight a 32-core part for the same market with an 18-core die. Don't know if it wouldn't have been better with 10C vs. 16C. They've brought a knife to a gunfight.
It shares the same socket as Naples, so AMD can just release Naples as "ThreadRipper 2" or whatever. It looks like Intel won't be able to win the marketing game, since they only have up to 28 cores. After all, this farce isn't about performance or value or anything else. Both companies just want to be able to claim to have the most cores for PR reasons.

AMD has said Threadripper is a 16C/32T part, not 32C/64T.
The price leaks cemented the HEDT series as dead to me. Just like the previous generation, where you could get a 14-core Xeon for the same price as the i7-6950X or a 10-core Xeon for the same price as the i7-6900K, the smart shopper is going back to Xeons again. That said, the R7-1700 at 4 GHz is a hard proposition to turn down, and it looks like Intel continues to have no answer to it in terms of value.
I agree, if these leaked prices are accurate they make no sense when you can get an R7 1700 8-core CPU for close to $300. There's no way I'm paying $650-700 for a 6900K replacement. I'm hoping AMD's Threadripper CPUs are aggressively priced, as I will likely pick one up.
Well, with the leaks over the past few days, I think I've decided to go SKL-X. The 8- and 10-core models are probably the best combination of ST speed and MT performance, but the 8-core will likely have more OC headroom, so that will probably be my choice.
It is literally stated on Intel's slide deck, Skylake-X has AVX512, Kaby Lake-Rip Off Extreme does not.

Probably the most frustrating thing is the AVX-512 status, which is still unknown.
Why would it? For all I know, full-speed AVX-512 could easily end up running at the same speed as AVX2 currently does on Haswell/Skylake, at 1 AVX-512 FMA per clock.

but a full speed AVX-512 implementation should be reaching 1.7 TFLOP/s double-precision and 3.4 TFLOP/s single precision at 3 GHz.
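For what it's worth, that 1.7/3.4 TFLOP/s figure is just peak-rate arithmetic. Here's a rough back-of-envelope sketch, assuming an 18-core part at 3 GHz with two 512-bit FMA ports per core (the "full speed" case; a 1-FMA-per-clock implementation would halve it):

```cpp
// Back-of-envelope peak throughput, assuming 18 cores at 3 GHz with two
// 512-bit FMA ports per core (the "full speed" AVX-512 case).
#include <cstdio>

int main() {
    const double cores       = 18;
    const double ghz         = 3.0;
    const double fma_ports   = 2;  // full-speed implementation; a gimped one has 1
    const double dp_lanes    = 8;  // 512-bit vector / 64-bit doubles
    const double ops_per_fma = 2;  // an FMA counts as a multiply plus an add

    const double dp_gflops = cores * ghz * fma_ports * dp_lanes * ops_per_fma;
    std::printf("DP peak: %.2f TFLOP/s, SP peak: %.2f TFLOP/s\n",
                dp_gflops / 1000.0, 2.0 * dp_gflops / 1000.0);
    // prints roughly 1.73 TFLOP/s double precision, 3.46 TFLOP/s single precision
    return 0;
}
```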
Of course the slide says it has AVX-512. What people actually want to know is if it supports AVX-512 at 2 IPC. Sure, Intel can make a gimped version that only does 1 IPC like AMD with Zen. It'll get the exact same (actually worse, because of latency) performance as regular AVX, so it'll be useless.

It is literally stated on Intel's slide deck, Skylake-X has AVX512, Kaby Lake-Rip Off Extreme does not.
I'll let a certain guy running AVX512 software on Skylake-SP chime in.

It'll get the exact same (actually worse, because of latency) performance as regular AVX, so it'll be useless.
Depending on the application, much more than that. I'm running a few Skylake-EP instances on GCP as we speak and they are magical.
Intel CPUs were always cache-starved, and AVX-512 with CDI is a game changer since it allows you to vectorize code that otherwise would not be possible to vectorize at all due to memory conflicts.
ImageMagick recompiled with AVX-512 support gets a 5-fold increase in histogramming, blur/deblur and many other image-processing functions. FFTW, which is used in a metric ton of things, also gets a similar performance boost.
I need to take a deeper look at HandBrake and a few other things in the future.
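To make the histogramming point concrete, here's a minimal sketch (mine, not ImageMagick's actual code) of the kind of loop AVX-512 CD is meant for: the repeated bin index is exactly the memory conflict that normally blocks auto-vectorization, and vpconflictd lets the compiler vectorize it anyway. The build flags are an assumption about your toolchain:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Plain histogram loop: two vector lanes may hit the same bin, which is the
// memory conflict that normally keeps auto-vectorizers away. With AVX-512CD
// (vpconflictd) the compiler can detect colliding lanes at runtime and still
// vectorize the loop. Build with something like:
//   g++ -O3 -march=skylake-avx512 histogram.cpp
void histogram(const std::uint8_t* pixels, std::size_t n, std::uint32_t bins[256]) {
    for (std::size_t i = 0; i < n; ++i)
        bins[pixels[i]]++;  // repeated indices are the conflict CDI resolves
}

int main() {
    std::uint8_t img[] = {0, 7, 7, 255, 7};
    std::uint32_t bins[256] = {};
    histogram(img, sizeof img, bins);
    std::printf("bin 7 = %u\n", bins[7]);  // 3
    return 0;
}
```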
As a programmer (for part of my job), it is extremely difficult to properly sub-divide a problem into exactly 2, 4, 6, 8, 10, etc. parts, especially when you have to consider the many different types of computers your software will be used on. So, when you are already working 60+ hour weeks just to meet deadlines, you don't have the time to sit back and optimize everything for every end use. The default for me, unless there is a known terrible bottleneck, is to divide the software into chunks that are logical to program, not into chunks that are ideal for every possible CPU.
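To put the "logical chunks" approach in code, here's a hedged sketch (the names and the chunking policy are mine, not anyone's production code): split the work into however many pieces the problem naturally suggests, and let a pool sized from the hardware soak them up, rather than hand-tuning for exactly 2/4/6/8-core machines:

```cpp
// Sketch: divide the work into a "logical" number of chunks and let a worker
// pool sized from hardware_concurrency() pull chunks off a shared counter.
// Extra cores only help while chunks remain. Build e.g.: g++ -O3 -pthread chunks.cpp
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const int jobs = 37;  // chunk count chosen by the problem, not by the core count
    std::atomic<int> next{0};

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            // each worker grabs the next unclaimed chunk until none are left
            for (int j; (j = next.fetch_add(1)) < jobs; ) {
                // ... process logical chunk j ...
            }
        });
    }
    for (auto& t : pool) t.join();
    std::printf("ran %d chunks on %u worker threads\n", jobs, workers);
    return 0;
}
```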
Now that Intel has gone Core Crazy too, perhaps now we can have honest discussions about how few desktop applications actually benefit meaningfully from more than 4 cores.
Why cry foul when the extreme edition level chips have been priced EXACTLY like that for years and years?

No one is crying foul that their 8c is $599 and then... a straight jump to $1000 for 2 more cores? How is anyone ok with that?
Kinda sucks that they are still going to be using TIM for their $2000 18C chips. Same old Intel, ha.
$400+ CPUs will not bring about a multi-core/multi-threaded software revolution.

For today you are mostly right. Tomorrow, however... Besides, now with a plethora of many-core CPUs, devs may be pushed in that direction as well.
Why cry foul when the extreme edition level chips have been priced EXACTLY like that for years and years?
Skylake: 7820X: $599, 7900X: $999 (post-Ryzen, Intel went back to the old pricing)
Broadwell: 6850K: $617, 6900K $1089 (pre-Ryzen was a bit higher than their normal pricing)
Haswell: 5930K: $583, 5960X: $999
Ivy Bridge: 4930K: $583, 4960X: $999
Sandy Bridge: 3930K: $583, 3960X: $999
Westmere: 980: $583, 990X: $999
Nehalem (2009 version): 950: $562, 975: $999
Nehalem (2008 version): 940: $562, 965: $999
Same old Intel, you are correct.
I do get that you have no idea what you're talking about. Thanks for letting me know that I'm making "burn-in tests" instead of useful software. I'm sure my management will find that information useful.

I'll let a certain guy running AVX512 software on Skylake-SP chime in.
Did you get the hint? If you did not, let me spell it out: who cares about throughput when you can't use it? AVX-512, by its very existence, allows you to use the throughput that Skylake/Haswell often left underutilized outside of burn-in tests.
So you're saying that since we've had gouged-margin products historically, it's actually supposed to be OK that Intel charges $400 more for 2 additional cores on the same socket?
History has nothing to do with this. The 10-core chip is an absolutely awful value proposition, and since we aren't talking about gaming, core count per dollar is everything.