Crumpet
Senior member
- Jan 15, 2017
- 745
- 539
- 96
I wonder what will happen on review day, and what kinds of workloads certain websites will use.
I'm expecting networks to crash from the number of people trying to read them....
No need for that, they'll just force ST in any workload that allows it, to protect Kaby Lake.
And ofc test Titan X at 1080p to show how Kaby is best for gaming.
The HEDT platform can lose but Intel needs to protect Kaby Lake.
What interest do review sites have in "protecting" Kaby Lake?
Bristol Ridge looks really good from a mainstream perspective, but the real issue is that it is currently only available through OEMs. AMD has given no timeframe for when these chips will reach the DIY market, but it's plausible that a couple of them will show up later this year when Zen launches on AM4.
What interest do review sites have in "protecting" Kaby Lake?
You are correct, of course, but the conclusion of a review can be skewed by results in some workloads. I can already see something like "Ryzen is overall very good, BUT performance in (prime/physics/linpack/insert whatever) means we cannot recommend it over the XYZ part".
I don't think anything in particular happens. If reviewers dropped every 256-bit workload, there wouldn't be many workloads left in their test suite.
I don't see any reason to avoid 256-bit workloads either, since they are becoming more and more common (which they already would be, if Intel hadn't been sandbagging for years). For example, it is hard if not impossible to find a modern video encoder which doesn't support 256-bit AVX/AVX2 (X264, X265, VPX).
For example, in X265 the gain from AVX2 is >20% on Haswell and newer. Many of the heavier workloads, e.g. rendering (Blender, Embree, etc.), also support AVX/AVX2, as do many scientific workloads and libraries. Since Ryzen in its 8C/16T configuration is a HEDT-oriented part, I see no reason to exclude those workloads.
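A quick sanity check on that >20% figure: if you assume AVX2 roughly doubles the throughput of the vectorized kernels (my assumption here, not a measured x265 number), you can invert Amdahl's law to estimate what fraction of the encoder's runtime those kernels must occupy. A minimal sketch:

```python
def overall_speedup(f, k):
    """Amdahl's law: overall speedup when fraction f of runtime is sped up by factor k."""
    return 1.0 / ((1.0 - f) + f / k)

def required_fraction(target, k):
    """Invert Amdahl's law: fraction f needed to reach a given overall speedup."""
    return (1.0 - 1.0 / target) / (1.0 - 1.0 / k)

# A 20% overall gain with 2x-faster SIMD kernels implies roughly a third
# of the total encode time is spent in those kernels.
f = required_fraction(1.20, 2.0)
print(round(f, 3))                         # → 0.333
print(round(overall_speedup(f, 2.0), 2))   # → 1.2
```

So even a modest-sounding 20% end-to-end gain means the SIMD code paths dominate a big slice of the runtime, which is why excluding them from a test suite would misrepresent these workloads.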
Nothing not known to anyone who has ever had any dealings with other humanoids on this planet. You don't bite the hand that feeds you. Simple as that; nothing specifically Intel or AMD about it.
Either way the majority will surely paint a believable picture
That's an APU. Personally, I don't believe that many enthusiasts, especially the kind who like to discuss things on a forum, are even that interested in APUs...
Well it already does get a lot of protection considering what it is..
A Skylake chip with a light overclock and distinctly average TIM, which means people need to delid it or disable Hyper-Threading to get reasonable temperatures in certain gaming titles...
And yet it's still the best thing since sliced bread.
If the 7700k had been an AMD chip, the world would have been in an uproar.. "not a new chip at all", "AMD space heater", "OMG you have to delid it, AMD should just give up" etcetc...
That is not the narrative I've been reading on Kaby Lake; we must read different sites.
At the end of the day, Zen day, AMD will not bring anything to the table that isn't already there, just more expensive. If successful, we will get 6-8 cores mainstream; that's progress, while software still struggles to exercise more than 4. In my opinion, what Ryzen brings to the table is competition in a stagnant field, which means that in ~5-10 years we will reap the real benefits of this comeback... But that's just me.
If AMD brings something that doesn't exist on the market right now, then we are left with two monopolies that can charge whatever they want. We wouldn't have true competition, and it wouldn't bring the high end back down in price. The best way to bring prices down is for AMD to offer something very similar to Intel performance-wise. Having the high end reset back to 500-700 USD, just like in the glory days of Kentsfield etc., would be very nice. If any of the OC rumors are true, then it will bring something that doesn't exist on the market right now.
IMO, review sites are much more likely to be biased in favor of AMD than Intel. There is very little love for Intel among enthusiasts but there is a lot of love for AMD, and I bet you AMD articles/reviews are substantially more profitable than anything to do with Intel.
Can you list what software struggles with more than 4 cores? Ofc, software that requires the horsepower and isn't offloading most of it to the GPU.
I'm really asking, as I struggle to find much.
Ofc how well something scales is debatable, but if the scaling above 4 cores doesn't get worse, it's not an issue in this context.
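To make "scaling above 4 cores doesn't get worse" concrete: under Amdahl's law it's the serial fraction of a workload, not the core count, that decides where scaling flattens. A small sketch (the serial fractions here are illustrative, not measurements of any real program):

```python
def amdahl(serial_fraction, cores):
    """Speedup over 1 core when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.05, 0.25):
    four = amdahl(s, 4)
    eight = amdahl(s, 8)
    # "step" is how much of the ideal 2x gain from 4 -> 8 cores actually remains.
    print(f"serial={s:.0%}  4C={four:.2f}x  8C={eight:.2f}x  step={eight / four:.2f}x")
```

With a 5% serial fraction the 4-to-8-core step still delivers most of the ideal 2x; at 25% serial it collapses well below it. That is the shape reviewers would need to measure per application before declaring 6-8 cores useful or useless.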
@The Stilt
Power consumption numbers in all or many tasks would be nice, as opposed to just "max" load. Heavy and light workloads, as light matters too.
Even in gaming it would be interesting to see power, and whether any games can use less power by spreading the load over more cores.
And ofc multitasking perf and power; gaming plus a light workload is likely something many would see in real use.
If hexa and octa are going mainstream, we need more data to fully understand them.
Oh, really?
http://wccftech.com/amd-bristol-ridge-a12-9800-am4-platform-performance/
Bristol Ridge was to Carrizo what Kaby Lake is to Skylake.
The interesting part here is that I don't think many of the workloads you listed actually benefit that much from the increased vector width; they benefit from the better register/data-manipulation instructions found in AVX2. Are you going to force-compile to 128-bit vs. 256-bit vectors in your testing?