PotatoWithEarsOnSide
Senior member
- Feb 23, 2017
- 664
- 701
- 106
^^^^Proof, if ever it was needed, that you can fall victim to an April Fools' joke even if you aren't stupid.
The nVidia driver doesn't handle the scheduling. There is no difference to AMD; "inter-warp scheduling" has nothing to do with it.
And this is wrong, too. nVidia creates helper threads under DX11/OpenGL to support the main render thread, i.e. it offloads workload to other threads.
Instead of listening to people on YouTube, you could watch the GDC talks from nVidia about DX11 and DX12: http://www.gdcvault.com/play/1023517/Advanced-Rendering-with-DirectX-11
And here is one for OpenGL and Vulkan: http://www.gdcvault.com/play/1023516/High-performance-Low-Overhead-Rendering
Maybe you guys can now go back to Ryzen. That is so much more interesting than this nonsense about nVidia, AMD, DX11 and DX12. The last two pages are really off-topic.
I'd need to ask him about TB3, but the AdoredTV thing is not what we set out to test. It was more in general to see if AMD GPUs play nicer with Ryzen. The threaded-optimization tests later in the review were done to find the cause of NVIDIA GPUs not liking Ryzen as much, but they didn't answer the question.
New hardware.fr article about DDR4 scaling on the 1800X and 6900K is up:
http://www.hardware.fr/articles/958-1/influence-ddr4-ryzen-7-1800x-core-i7-6900k.html
Would be an amusing test: somehow artificially limit bandwidth to be the same as the 2133 CL16 RAM, and then compare gaming performance of the 1800X with Ryzen having 3200 CL14 and the 6900K having 2133 CL16.
Latency would be the same, and with artificially limited bandwidth the bandwidth would be the same, so it would be apples to apples, kinda sorta. (Not very fair to Broadwell in the grand scheme of things.)
You are so wrong and so clueless it's not even funny.
Intel's AVX/AVX2 implementation is not significantly better than AMD's.
1. Intel takes a big hit executing 256-bit instructions while still in 128-bit mode (versus a traditional 128-bit SIMD unit), until the top half of the 256-bit unit becomes active.
2. They both have around the same instruction latency/throughput, with some instructions better on AMD and some better on Intel.
3. They both decode similarly (this was an actual problem for Bulldozer).
Now on to the cache: I don't know what you are smoking, but it is so wrong.
The L2 in the core is not for "support"; it is where the streaming prefetchers sit, and the L1 and L2 are neither inclusive nor exclusive of each other, but both are inclusive in the L3.
In Zen, the L1 is write-back, so I assume it is exclusive of the L2 (it might be inclusive); the L3 holds L2+L1 tag data and maybe some inclusive lines, but is largely exclusive.
What's the difference between the two? Really it's about multi-core scaling and the handling of cache coherency. In terms of general performance, you can treat them largely as equal.
Now, one obvious difference is the width of the read/write ports. Intel has end-to-end 256-bit datapaths (execution, load/store, cache); AMD has end-to-end 128-bit datapaths (execution, load/store, cache).
Intel can most definitely hit higher throughput, but instruction throughput and latency aren't any better.
What this all actually means is that Intel's design only offers an advantage in workloads where 128-bit load and store becomes a bottleneck. Go look at The Stilt's data to see how many apps, across a large suite of apps, that actually is. If high-ILP 256-bit code were actually that common, they wouldn't be shutting off half of their SIMD units by default, would they?
So no, AMD's AVX/AVX2 is not greatly subpar compared to Intel's; that's just FUD from someone who has no clue what they are talking about!
I have a quick question (planning my Ryzen build) but not sure where to post, so figured I'd put it here.
Is there a low-level hardware difference between the 1700X and the 1800X or is it just a matter of binning and higher clock speed?
The latter. The Ryzen 7 series are all identical hardware.