Interesting results. But as someone who does 3D, I'd rather make 2 Ryzen Systems for the price of one 6900k system.
You're missing the point. The point is that Intel's margin of victory increases with heavier workloads and with greater AVX2 optimization. I've never done rendering myself, but I've heard that professional rendering/encoding jobs can take many hours to complete. With that in mind, the real-world gap will likely be much larger than what we see in the small rendering and encoding jobs used in the Ryzen reviews, including even Computerbase.de's review, which used the biggest jobs I could find.
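To put rough numbers on that (a toy sketch; the 15% figure and job lengths are assumptions for illustration, not from any review): a fixed relative gap means the absolute time saved grows with job length.

```c
/* Toy arithmetic: if the faster chip finishes in 15% less wall-clock
   time, the absolute savings scale with job length. The 15% figure
   and the job lengths are assumptions for illustration. */
#include <stdio.h>

int main(void) {
    double gap = 0.15;                        /* assumed fraction of time saved */
    double jobs_min[] = { 5.0, 60.0, 600.0 }; /* a 5 min, 1 h, and 10 h job */
    for (int i = 0; i < 3; i++)
        printf("%6.0f min job -> %5.1f min saved\n",
               jobs_min[i], jobs_min[i] * gap);
    return 0; /* 0.8 min, 9 min, 90 min saved: same ratio, very different stakes */
}
```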
Not so bad really: maybe 15-20% slower for less than half the cost of the equivalent Intel 8-core HEDT CPU.
It all comes down to whether you have the $$$, I guess. Still a huge win on price/perf.
That, and with AM4 you should be good for the next two Zen generations, while Intel HEDT is a dead-end platform that's obsolete in a few months.
Use your imagination and just 10x the shown results. It doesn't look like an earth-shattering sway in favor of Intel. People should really be happy that Ryzen is so competitive.
I'm curious: how much time does the average render take for a professional job?
I'd hardly say it's obsolete. I just ordered a 6900K after owning a 5930K. I hope to overclock the 6900K to at least 4.2 GHz, and it will be paired with a 32GB DDR4-3200 quad-channel kit. A setup like this will be viable for years, although I upgrade my platform faster than most, and I'll probably upgrade again once the die shrink for Skylake-E becomes available or PCI-E 4.0 is released.
Completely depends on what you're trying to do. Some of the most complex shots in films can take days for a single frame, or it could take a couple of minutes. It's an unanswerable question, and benchmark results can't be taken in a vacuum or reduced to averages. Scale is the real metric, so price/perf is king.
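A quick sanity check on the price/perf point, using figures from up-thread (the CPU prices here are assumptions for illustration, not quotes):

```c
/* Toy price/perf check combining "2 Ryzen systems for one 6900K"
   with the "15-20% slower" estimate from up-thread.
   Prices and the perf ratio are assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    double intel_price = 1000.0, intel_perf = 1.000;  /* assumed price, baseline perf */
    double ryzen_price =  500.0, ryzen_perf = 0.825;  /* midpoint of "15-20% slower" */
    printf("Intel: %.5f perf per dollar\n", intel_perf / intel_price);
    printf("Ryzen: %.5f perf per dollar\n", ryzen_perf / ryzen_price);
    /* ~0.00100 vs ~0.00165: roughly 65% more performance per dollar for Ryzen */
    return 0;
}
```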
AVX2's relevance is being seriously overstated. For these tasks, GPUs are completely blindsiding any CPU in 3D; AVX2 adoption isn't even a blip.
Those GPUs will murder any AVX-based design for FP throughput. AMD's design philosophy is very simple: maximum compute density for servers, and the best compute engine for each job. For traditional servers it's Naples, and for HPC it's Naples servers with Vega GPUs and Zen/Vega APUs.
Increasing the width of the AVX data path isn't without cost. It takes up die space, and affects thermals - note that Intel had to create a second, lower base clock rate specifically for AVX on its newer chips. IMO, the vast majority of laptops don't need AVX. Nor do gaming systems. This is pretty much purely a workstation/server feature.
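For anyone wondering what these wider units actually buy you: the win is in dense FP loops. Below is a minimal AVX2/FMA dot-product sketch (a made-up demo, not from any benchmark in this thread). Each `_mm256_fmadd_ps` performs 8 single-precision multiply-adds at once, which is exactly the kind of throughput renderers and encoders chase.

```c
/* Hypothetical micro-kernel sketch of the kind of dense FP loop where
   256-bit AVX2/FMA pays off. Compile with: gcc -O2 -mavx2 -mfma demo.c */
#include <immintrin.h>
#include <stdio.h>

/* Sum of a[i]*b[i] using 256-bit FMA: 8 floats per instruction. */
static float dot_avx2(const float *a, const float *b, int n) {
    __m256 acc = _mm256_setzero_ps();
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        acc = _mm256_fmadd_ps(va, vb, acc);   /* acc += va * vb, 8 lanes at once */
    }
    float tmp[8];                             /* horizontal sum of the 8 lanes */
    _mm256_storeu_ps(tmp, acc);
    float sum = 0.0f;
    for (int k = 0; k < 8; k++) sum += tmp[k];
    for (; i < n; i++) sum += a[i] * b[i];    /* scalar tail */
    return sum;
}

int main(void) {
    float a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; }
    printf("%f\n", dot_avx2(a, b, 16));       /* expect 240.0 */
    return 0;
}
```

On a chip with full 256-bit data paths, that FMA retires as a single op per pipe; Zen reportedly cracks 256-bit AVX2 ops into two 128-bit ops internally, which is where the throughput argument below comes from.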
Why would AVX2 not work with light workloads?
As you can see, the lighter the workload, the more favorable Ryzen appears; but as the workload lengthens or uses higher-quality settings, Intel's full-width 256-bit SIMD throughput begins to gain steam.
It's half the throughput of its competitor's. Also, the number of AVX2-enabled programs continues to increase, along with the level of optimization. Just look at the benchmarks and how non-AVX2 CPUs compare to AVX2-enabled processors. This 2x128-bit approach is a half measure; it might suffice for now, but it won't cut it in the future, as Intel seems hell-bent on widening the SIMD vectors even more. AVX-512 will eventually make it to consumer chips one day, and if AMD is still on 2x128-bit because of "efficiency", they're going to get a rude awakening.
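That "half the throughput" claim is easy to sanity-check on paper, assuming the commonly cited pipe configurations (2 x 256-bit FMA pipes on Broadwell-E, 2 x 128-bit FMA pipes on Zen):

```c
/* Back-of-envelope peak single-precision FLOPs per core per cycle,
   assuming 2 x 256-bit FMA pipes (Broadwell-E) vs 2 x 128-bit FMA
   pipes (Zen). Pipe counts are the commonly cited figures. */
#include <stdio.h>

int main(void) {
    int lanes_bdw = 256 / 32;            /* fp32 lanes per 256-bit pipe */
    int lanes_zen = 128 / 32;            /* fp32 lanes per 128-bit pipe */
    int flops_bdw = 2 * lanes_bdw * 2;   /* pipes x lanes x (mul+add) = 32 */
    int flops_zen = 2 * lanes_zen * 2;   /* pipes x lanes x (mul+add) = 16 */
    printf("Broadwell-E: %d FLOPs/cycle, Zen: %d FLOPs/cycle\n",
           flops_bdw, flops_zen);
    return 0;
}
```

Real workloads won't hit these peaks, but the 2:1 ratio is the structural gap the AVX2-heavy benchmarks keep surfacing.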
The problem for Intel and its optimizations is that 3D rendering and video encoding are moving quickly to the GPU, using less and less CPU for these tasks. Every year more filters and programs offload more calculations to the GPU. Today you can buy a $75 GPU and encode video faster than a $4,000 build, and in a few years 3D rendering will be done by a GPU, or by a stack of GPUs, instead of a farm of x86 processors.
For example, Maxon (Cinema 4D) has signed with AMD to use their GPU renderer.
We've had GPU encoding/transcoding for years now, but it's still inferior to encoding/transcoding on a CPU when it comes to actual quality. What's the point of encoding a video at breakneck speed on a GPU if it's going to end up looking like VHS?
Now, rendering is another matter, as rendering is naturally more suited to the GPU. But even that has limitations, as GPUs don't have as much RAM to play with as CPUs do. I tried looking, but I couldn't find any examples of a big rendering studio that uses GPUs for rendering.
Most of the big 3D CGI companies appear to be using CPUs to render rather than GPUs, and if they're rendering on CPUs, then any type of SIMD is useful.
That said, I don't expect CPUs to equal GPUs when it comes to rendering, as GPUs are explicitly made for that purpose.
My understanding is that most (all?) large rendering farms are only using CPUs: easier integration, fewer software issues, more RAM, etc.