@crashtech, in short, the configured RAM limit works perfectly for me so far. But I only need it on Linux, not on Windows.
Long story,
on my Linux boxes:
- I looked at RAM usage only during the first ≈2.5 hours, and once more ≈ten hours later.
- I started with 14e, 15e, and 16e tasks, i.e. a mixture of ≤0.5 GB and ≤1.0 GB tasks. By now I have only 16e tasks (≤1.0 GB per task). Both situations worked out right.
- So far, the percentage of RAM which I configured controls very well how many tasks are allowed to run at any time. I can see it from the amount of used vs. free RAM, and from tasks occasionally being blocked as "Waiting for memory" on those machines which are tight on RAM.
- On dual-socket machines with 64 GB RAM, I chose 88 % RAM both "while computer is in use" and "while computer is not in use". I.e. ≈7.7 GB RAM remain for the system and everything else.
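(For reference, those same percentages can also be set locally in a global_prefs_override.xml in the BOINC data directory, which overrides the web preferences. A minimal sketch matching the 88 % setting above; element names as in the BOINC client configuration docs:)

```xml
<global_preferences>
   <!-- Use at most 88 % of RAM, whether the machine is in use or idle -->
   <ram_max_used_busy_pct>88</ram_max_used_busy_pct>
   <ram_max_used_idle_pct>88</ram_max_used_idle_pct>
</global_preferences>
```

(After editing, the file can be re-read without restarting the client, e.g. via boincmgr's "Read local prefs file" or `boinccmd --read_global_prefs_override`.)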
- I currently don't have anything else running on these machines. Granted, the odd cron job will run at night; I'm keeping my fingers crossed that those don't cut too much into the remaining RAM.
- Swap partitions are very small on these machines; I should do something about that eventually. In their "real life", these machines are not supposed to swap. Swap space is currently unused, as it should be.
- Earlier this year during Yoyo races with ECM tasks, the Linux kernel's out-of-memory killer sometimes had to take out a few ECM tasks, sadly after they had already run for quite some time. Those tasks then showed up in boincmgr and in the results tables on the Yoyo web site as computation errors. The host's system log of course showed that the OOM killer had been engaged.
This sort of thing has not happened with NFS yet.
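(In case it's useful to anyone: such OOM kills can be spotted after the fact by scanning the kernel log, e.g. the output of `dmesg` or `journalctl -k`, for the kill message. A small sketch below; the sample log lines and the exact message format are assumptions for recent kernels, and the process name/PID are made up:)

```python
import re

# Hypothetical sample of kernel log lines; real input would come
# from `dmesg` or `journalctl -k`. Exact wording varies by kernel version.
log = """\
[12345.6] Out of memory: Killed process 4242 (ecm) total-vm:1048576kB, anon-rss:900000kB
[12346.7] oom_reaper: reaped process 4242 (ecm), now anon-rss:0kB
"""

# Match the kernel's OOM-kill line and pull out the PID and process name.
pattern = re.compile(r"Out of memory: Killed process (\d+) \((\S+)\)")

kills = pattern.findall(log)
print(kills)  # -> [('4242', 'ecm')]
```

(A BOINC task killed this way ends up as a compute error on the project side, so matching the process name here against the project's application names tells you which tasks were hit.)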