72 cores and 64GB RAM is measly?
64 GB RAM for 72 cores is measly. (Due to high RAM prices, my own dual-processor hosts have only 64 GB too, incidentally.)
According to the old formula (1.4 GB + n*0.8 GB; n = number of allocated cores) your client has more than enough RAM.
Yes, but this is just one of several formulae you can find at the LHC@home site. And like so much other information at that site, it is outdated, wrong, or misleading. Keep this in mind whenever you look anything up there. (Also note that
n should not be set too high: scaling is limited, and the application has single- and lowly-threaded periods, IIRC.)
Another formula, posted more recently by a user whose insight into LHC@home I trust to some extent, is 3.0 GB + 0.9 GB/thread for each vbox-based ATLAS@home task.
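To make the difference between the two estimates concrete, here is a small Python sketch of my own (not anything from the LHC@home docs) that plugs a 72-core host into both rules of thumb; the 9-tasks-of-8-threads split for the per-task formula is just an assumed example.

# Rough RAM estimates for ATLAS (vbox) on a 72-core host -- illustration only.
CORES = 72

# Old site formula: 1.4 GB + 0.8 GB per allocated core (whole client).
old_estimate = 1.4 + 0.8 * CORES                          # ~59 GB -> "fits" into 64 GB

# Newer per-task formula: 3.0 GB + 0.9 GB per thread, per vbox ATLAS task.
# Assumed split: 9 concurrent tasks x 8 threads each = 72 cores.
tasks, threads_per_task = 9, 8
new_estimate = tasks * (3.0 + 0.9 * threads_per_task)     # ~91.8 GB -> exceeds 64 GB

print(f"old formula: {old_estimate:.1f} GB total")
print(f"new formula: {new_estimate:.1f} GB for {tasks} tasks x {threads_per_task} threads")

Under that assumed split, the newer estimate puts the same 72 cores well above 64 GB, whereas the old one suggests the host is fine, which illustrates why I would not rely on the figures at the site without checking.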
While I am not saying here whether or not I will participate in this sprint, everybody can easily find that I never run LHC@home outside of competitions, like Pentathlon 2017. SixTrack tasks are nonsense, the vbox based applications are nonsense, and (at least) the external dependencies of the native Linux ATLAS application are nonsense.