Older versions of Folding@Home's CPU-only application (i.e. the non-GPU-accelerated one) scaled easily to, for example, all 128 threads of a dual-socket 32c/64t EPYC computer of mine. That is, the application's performance was hurt little, if at all, by cross-socket communication. It used, and I believe still uses, an inter-process communication mechanism for this (MPI) which is also used in HPC clusters, not only across sockets but also over InfiniBand, Ethernet, or other node-to-node/cabinet-to-cabinet interconnects.
Last time I checked though, Folding@Home could no longer scale to that many threads. Past a certain thread count, which I don't remember anymore, it would either fail outright or run extremely slowly. You can test this yourself quite easily: just configure the thread count of the "CPU slot" to different values and watch F@H's log.
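For reference, here is a sketch of what such a CPU slot looks like in FAHClient's config.xml (the slot id and the thread count of 16 are just examples; your file will differ, and you can also change this through FAHControl instead of editing the file by hand):

```xml
<config>
  <!-- example CPU slot pinned to 16 threads; raise/lower v and watch the log -->
  <slot id="0" type="CPU">
    <cpus v="16"/>
  </slot>
</config>
```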
Whether cross-socket communication still performs as well as it did in the past, I don't know.
So in short, F@H on high-core-count CPUs (or on CPUs in general) is a drag nowadays. But as @TennesseeTony pointed out, the F@H consortium runs different experiments on CPUs than on GPUs, so CPU contributions are not obsolete. They do, however, give poor points per watt-hour compared to most (discrete) GPUs.
The alternative which several folks here already pointed out, i.e. various BOINC-based projects, tends to scale to huge core counts easily, because most of these projects run n copies of single-threaded tasks on such hosts, with n being as many of the host's hardware threads as you want. Since each task is an independent process with its own working set, there is no inter-process communication to bottleneck on, and NUMA/cross-socket effects largely disappear.
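If you want to cap that n below the full thread count, one way (a sketch, assuming the BOINC client's cc_config.xml `<ncpus>` option, which overrides the detected CPU count) is:

```xml
<cc_config>
  <options>
    <!-- make the client schedule at most 96 tasks instead of all 128 threads -->
    <ncpus>96</ncpus>
  </options>
</cc_config>
```

The same effect can usually be achieved through the computing preferences ("use at most X% of the CPUs") in the BOINC Manager, which is the more common route.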