It never fully utilizes the cores,
I agree. I suspect this could be mostly fixed by running ATLAS native with a low thread count per task. But I am not going to install and operate LHC's distributed filesystem (natively, that is; the VM jobs are most likely running that same filesystem internally anyway), so I will never find out for myself.
gives me various error messages,
Yes. Just now I had all vbox-based jobs go into the "unmanageable" state on each of my computers. This incident was likely caused by my downloading and/or installing system updates, which I hadn't done on these OSes for a while. Maybe it was the way the package manager installed the updates, or the high internet bandwidth use during the download.
On the first computer, I restarted the boinc client, which caused the previously postponed vbox jobs to fail with a computation error. I rebooted the other computers, and on those the jobs resumed where they had stopped and appear to be running properly.
and/or waits for memory when plenty is available.
Did you remember to set "When computer is in use | is not in use, use at most [ . . . ] %" to something like 90 %?
Still, it seems to me that the vbox jobs allocate less memory than what the boinc client is told their (peak) RAM usage will be. This surprises me; I thought vbox allocated the whole RAM a VM is meant to get right away at VM startup.
For me that means 90 % RAM granted to boinc = 60 % actually used. (Running the native application, with its lower RAM requirement, would solve this AFAIK. But again: no, thank you very much, I'm not going to manage a distributed filesystem on my computers.) Running two boinc clients could work around this, but then extra care is needed not to exceed the available RAM.
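For anyone wanting to try the two-client workaround: a minimal sketch, assuming a Linux host with the boinc client installed. The data directory path and RPC port here are examples I chose, not defaults; the flags themselves (`--allow_multiple_clients`, `--dir`, `--gui_rpc_port`) are standard client options.

```shell
# Second BOINC client instance in its own data directory
# (path and port are illustrative, pick your own).
mkdir -p /var/lib/boinc2

# --allow_multiple_clients permits a second instance on the same host;
# --gui_rpc_port must differ from the first client's default 31416.
boinc --dir /var/lib/boinc2 --allow_multiple_clients --gui_rpc_port 31417 --daemon

# Manage the second instance by pointing boinccmd at its RPC port:
boinccmd --host localhost:31417 --get_state
```

Each client then gets its own "use at most [...] % of memory" preference, so the sum of the two limits is what must stay below the physically available RAM.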
LHC@home is certainly the worst-integrated of the wrapper projects in the boinc world.