> EPYC has more bandwidth, also in real-world workloads. It also has capability for more memory, which is a concern for some niche workloads.

EPYC has only 42 GB/s of bandwidth per NUMA domain. Having "more" bandwidth through 8-way NUMA is meaningless if that bandwidth is unavailable when it is needed. The most important server workload, the database, performs exceedingly poorly on EPYC.
> In sparse matrix FP workloads (where you can't pack stuff neatly to use the AVX-512 units) EPYC outperforms Xeons significantly.

The Anandtech benchmarks are full of obsolete code and basically a joke. Furthermore, AVX-512 offers great performance gains for sparse matrices, since it features improved load/store instructions like embedded broadcast, compress/expand, wide permutes, etc. Sparse matrix operations are bound by instruction latency, not memory bandwidth.
> But yeah, of course DELL will go through all of this trouble, validating and announcing their EPYC line, just to produce servers that are slower in every possible real-world workload.

Everybody and their mother can get their chip announced by the big names and some PR published. What matters is actual shipments and actual datacenter usage. You may notice that all the cloud announcements of Intel "alternatives" are for non-performance sectors like ultra-cheap instances or storage.
Basically nobody is investing seriously in first-generation EPYC processors. All the major players are waiting for Zen2, with its 16-core dies and possibly reworked interconnect. Given the Icelake delays and its expected lower core count, the first opportunity for AMD to move real volume in the datacenter will be later this year, with Zen2.