8th Annual BOINC Pentathlon


StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
Should be easy to do on machines with multiple drives. Just gotta wonder what it does to compute times

If you let two boinc instances run, both with 100% CPU allowed, task runtimes will of course roughly double. But that is not the typical use of multiple instances. Some better examples:
  1. I have a PC with a CPU cooler which gets loud when under some load, but with a GPU cooler which stays tolerably quiet even under high GPU load. So I would like to run a GPU project all day, and a CPU project only when I am away.

    • I could set up one boinc instance for the GPU project, set it to run 24/7, and allow it as many CPU-% (or better yet, appropriate <max_concurrent> in app_config.xml) as needed to get the GPU utilization that I want.

    • Then I set up a second boinc instance for the CPU project. In this one I configure a daily schedule, so that computing is suspended during the hours when fan noise isn't tolerated. I configure the allowed CPU-% such that enough capacity remains for the first boinc instance, which runs the GPU feeder.
  2. I have a dual-processor server, i.e. lots of CPU cores in a single machine, and want to load it up with several days worth of WCG tasks. But a while after downloading tasks, it says: "Not requesting tasks: too many runnable tasks". This is because boinc has a built-in limit of how many tasks to enqueue. I have read that it doesn't queue up more than 1000 tasks.

    • So I let this boinc instance sit there and crunch at 100%, with networking disabled until the official start of the race. But a while before the race, this instance has completed all of the (1000?) tasks that I downloaded but don't want to upload yet. The computer is now idle.

    • I create a second boinc instance, connect it with the project, request tasks, let it start crunching, disable networking until the start of the race.

    • Since the 1st instance is already idle, the 2nd instance can make use of the CPUs fully.

    • Actually the 2nd instance can be set up even before the 1st instance runs idle. You can optionally suspend the project on the 1st instance temporarily while you get the 2nd instance up and running. (In order for the 2nd instance to download tasks, it must have the project active, not suspended, and all tasks of the project need to be active too, not suspended.)
A variation of the first example is when you want one project to have network access 24/7, and a second project to have network access disabled for a period (i.e. bunkering only in that second project, while the first project keeps going undisturbed). This can be achieved by manipulating the etc/hosts file (downside: your web browser can't access the project web site either), presumably with firewall rules (downside: you need to know your way around firewall configuration), or by dedicating a separate boinc instance to each project. You could then disable networking of the 2nd project = 2nd instance on a calendar/clock schedule, while networking remains up in the 1st project = 1st instance. (A sketch of how to launch a second instance follows below.)
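For reference, launching a second instance looks roughly like this on Linux (an untested sketch; the data directory and RPC port here are arbitrary examples, not a recommendation):
Code:
# the second client gets its own data directory and its own GUI-RPC port
mkdir -p /var/lib/boinc2
boinc --allow_multiple_clients --dir /var/lib/boinc2 --gui_rpc_port 31418 --daemon

# manage the second instance by pointing boinccmd (or BOINC Manager) at that port
boinccmd --host localhost:31418 --project_attach http://www.cosmologyathome.org/ YOUR_ACCOUNT_KEY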
 

Orange Kid

Elite Member
Oct 9, 1999
4,355
2,154
146
Oh yea, I get it now. For some reason I was thinking to run them at the same time. (Not enough coffee.) Run one instance till the work is complete, then the next, repeat as necessary.
I'm going to try... one for CPU and one for GPU, as I can't get enough tasks to keep the GPU busy.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
[Marathon, Cosmology@Home]

Contrary to all the horror stories that I have read in various places, getting the virtualized applications of Cosmology@Home up and running was pretty much painless on the first 2 of the machines that I want to use for this. I fetched the current installer from virtualbox.org. I also had to enable the virtualization features in the BIOS of both machines. I don't recall whether these options were off from the factory, or whether I had disabled them myself at some point.

The 1st machine is loaded up, and I "disabled" networking by simply pulling the Ethernet cable. The 2nd machine is still downloading tasks. I switched my preferences to fetch Planck only for the time being.

The project admin Marius is super responsive and has created more tasks.

Thanks to @TennesseeTony for all the early testing; based on your info I have surely avoided several pitfalls. Also thanks for the app_config examples. (I added sections for the Planck application, named "lsplitsims", for myself.)
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,280
3,903
75
Cosmology refuses to give me any non-legacy work, even though I got VirtualBox set up. I'm bunkering WCG instead.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
building one bunker after another

A variation of this would be with multiple boinc data directories, but just one boinc instance running at a time:
  • Download many tasks, disable networking, crunch.
  • When all tasks are completed, shut down the boinc client.
  • Rename the boinc data directory and create a new one.
  • Start boinc, connect with project.
  • Repeat.
(Edit) And once the race starts, work your way backwards: let the client upload its work, shut it down, swap directories by renaming, start it again, repeat.

(Edit 2) Caveat: I haven't tried it myself yet. A cosmetic flaw could be that your host will appear multiple times on the project's web site where you look at stats, task results and such.
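A rough sketch of that directory dance on Linux (untested; the service name and paths are assumptions, adjust them to your distro):
Code:
# bunker is full and crunched: stop the client, stash the data directory
sudo systemctl stop boinc-client
sudo mv /var/lib/boinc /var/lib/boinc.bunker1
sudo mkdir /var/lib/boinc && sudo chown boinc:boinc /var/lib/boinc
sudo systemctl start boinc-client    # attach to the project again, fetch new tasks

# once the race starts, work backwards, one bunker at a time
sudo systemctl stop boinc-client
sudo mv /var/lib/boinc /var/lib/boinc.bunker2
sudo mv /var/lib/boinc.bunker1 /var/lib/boinc
sudo systemctl start boinc-client    # now let it upload and report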
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
I'm bunkering WCG instead.

That's what I do on my Linux machines (which have far more cores than my Windows machines). It's painfully slow, and plagued by the server deferring download requests due to high load.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
[Marathon, Cosmology@Home]

Regarding HT on or off, and ideal thread count per task:

On my i7-6950X (10C/20T, quad-channel RAM), I am running with hyperthreading enabled, 100 % CPU usage allowed to BOINC, and 5 threads per task configured in app_config.xml, therefore 4 tasks running simultaneously. Tasks typically take about 4...6 minutes to run. (Some take up to 15 minutes.)
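The corresponding app_config.xml fragment would look roughly like this (a sketch reconstructed from the full file posted later in this thread; adjust avg_ncpus to taste):
Code:
<app_config>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>5</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>5</avg_ncpus>
    </app_version>
</app_config>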

While I haven't actually measured whether this config gives the best throughput, it is probably fairly close to the optimum: During at least the first 30 seconds, a task will not use any CPU at all. From what I read, the docker container attempts to download incremental updates during that time. With four tasks running at the same time, this results in ~70 % CPU usage by all tasks combined for almost half of the time (when one out of four tasks is dragging its feet, the remaining three occupy 3 × 5 = 15 of the 20 logical CPUs), and ~95 % CPU usage for the other half of the time (when all tasks do real work).

Sometimes two tasks are dipping down to low CPU usage in parallel, giving occasional periods with merely ~60 % CPU usage.

Therefore I think that at least on this high-clocked CPU with quadchannel RAM, it is appropriate to have hyperthreading enabled and to use all logical cores during times of peak utilization.


On my laptop with an i7-4900MQ and dual-channel RAM, I decided on a whim to switch off hyperthreading (hence 4C/4T) and run only 1 task at a time with 4 threads. Task duration is on the order of 10...20 minutes. (I am surprised that performance is so bad compared to the i7-6950X.)

That means those initial 30 s periods, when a task is not computing but only looking for docker updates, are unfortunately not covered by another task computing in parallel. On the other hand, these dips to ~0 % CPU usage happen only once every ~15 minutes, for half a minute, which is not much.

Perhaps a better utilization of this machine would be with HT on, running 1 C@H task with 4 threads and 1 WCG task in parallel. Maybe I'll try this if the bunker completes before the race begins, or during the race.


On an i7-4960X (6C/12T, i.e. HT on, quad-channel RAM), running a single task with 4 threads, plus a last few NumberFields stragglers at lowest scheduler priority, gives a C@H task duration of 16 minutes. (Taken from merely two WUs so far.)

A single C@H task with 6 threads with almost no other background load takes 6.5...11 minutes. (Taken from 4 WUs so far.)

So, I will keep HT on, run merely 1 C@H task at a time (with 6 threads per task), and 2 WCG tasks in addition to increase utilization.

(Edit: There are some longer running tasks on the 6950X and 4900MQ.)
(Edit 2: added 4960X)
 

crashtech

Lifer
Jan 4, 2013
10,546
2,138
146
I hope you guys will let me know if any of these races look like a close thing and you need a hand. Otherwise I think I'm going to stick with the Formula BOINC marathon; it's about all I can handle for now. I should be looking into better (faster) ways to manage all my PCs and to allocate resources to the various projects.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
@crashtech, to make matters worse, there is even another Formula BOINC sprint during the Pentathlon. (Sprint project TBA May 10, running May 11-14.)

OTOH, choice is good.
 

crashtech

Lifer
Jan 4, 2013
10,546
2,138
146
I'm still figuring out how thinly I can spread things and still make a difference.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
Hmm, a downside of the Pentathlon is that we are probably one of the smaller teams there.

An upside is that every WU we crunch for the Pentathlon will also benefit our Formula BOINC marathon standing.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,280
3,903
75
I hope you guys will let me know if any of these races look like a close thing and you need a hand. Otherwise I think I'm going to stick with the Formula BOINC marathon; it's about all I can handle for now. I should be looking into better (faster) ways to manage all my PCs and to allocate resources to the various projects.
Which projects do you plan to work on? So we can let you know when one might be a more helpful choice over the others.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
@Jondi,
trying cosmology now with Vbox, and on both camb_docker and planck my CPU usage is a big fat zero.

I saw this happen when VT-x is disabled in the BIOS (or not implemented in the processor in the first place). These WUs will show up with an error in the results list on the C@H page, and the result details as well as the local boinc log should contain some more or less helpful notes about the error source.
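If in doubt, you can at least check whether the CPU implements the feature at all; on Linux, for example (on Windows, Task Manager's CPU tab or Sysinternals coreinfo reports the same):
Code:
# any output means the CPU advertises VT-x (vmx) or AMD-V (svm);
# if the flag is present but VirtualBox still fails, suspect the BIOS setting
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u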
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
@crashtech,
how about the WCG/OpenZika "City Run" then? It starts in ~3d4h and lasts for 5 days. Bunkering early is a PITA, especially with the WCG servers having load issues. But building a ~2 day bunker should work alright, if not even a 3 day bunker. (I had 3 days bunkered with ease on my 28C/56T boxes during the last sprint, but that was with MCM tasks, which are larger than the OpenZika tasks needed now.)
 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
@crashtech I second the motion, to (start as soon as possible and) bunker about 4 days of WCG OpenZika, as that will meet your goal of assisting both Formula BOINC standings, and it is also the first 'short' race of the Pentathlon (City Run, for 5 days).
 

crashtech

Lifer
Jan 4, 2013
10,546
2,138
146
@StefanR5R and @TennesseeTony , thanks, this is exactly the kind of help I was hoping for! I know I ought to be able to figure it out on my own, if I wasn't being pulled in so many directions. I'll work on getting this done asap.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
[Marathon, Cosmology@Home]
[City Run, WCG OpenZika]

Regarding HT on or off, and ideal thread count per task:

I now have the 3rd Windows box running with C@H, virtualized Planck tasks. The very first task was slowly increasing its estimated remaining time into several days, so I aborted it. All subsequent tasks ran just fine.

So I went a bit overboard and set up a mixed bunker of C@H/Planck and WCG/OpenZika on that box. It is now filled up to its allowed number of runnable tasks, and I disabled networking by adding the following lines to C:\Windows\System32\drivers\etc\hosts:
Code:
127.0.0.1 localhost
::1 localhost

127.0.0.1         www.cosmologyathome.org

127.0.0.1         www.worldcommunitygrid.org
127.0.0.1   scheduler.worldcommunitygrid.org
127.0.0.1       swift.worldcommunitygrid.org
127.0.0.1        grid.worldcommunitygrid.org

I picked up the WCG server names from C:\ProgramData\BOINC\client_state.xml. I don't know whether they are all needed in there; neither do I know yet whether they will suffice to prevent uploads. The first WCG task will complete in an hour, when I need to be sound asleep. Edit: They successfully prevent upload of finished OpenZika tasks.
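A quick way to verify the redirection is in effect on the Windows box (sketch; note that nslookup bypasses the hosts file, so use ping):
Code:
ping -n 1 www.worldcommunitygrid.org
REM should show "Reply from 127.0.0.1", not the real server address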

So far the whole VirtualBox affair has gone smoothly here. When the City Run is over, I will attempt to get VirtualBox installed on one of my 2P Linux machines, which runs OpenSuse. All my other Linux machines run Gentoo, and on those I am not keen on getting VirtualBox installed (and later uninstalled).
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,280
3,903
75
I avoid using the hosts file on Linux. It tends to get cached for me, and I have to reboot to fix it.
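If that caching comes from nscd or systemd-resolved (a guess at the setup; both consult /etc/hosts), flushing their caches may spare the reboot:
Code:
sudo nscd -i hosts                      # invalidate nscd's hosts cache, if nscd runs
sudo systemd-resolve --flush-caches     # if systemd-resolved is in use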
 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
.....A word of caution. We will be competing against the world. No leagues, like Formula BOINC. We typically place in the mid teens. However, we do have several new power users this year, so who knows, maybe we can get closer to 10th, as opposed to 14th-16th? ....

I'm trying to set some expectations for the City Run (WCG-OpenZika). Based on the Formula BOINC results for WCG, our TeAm finished 11th across all three leagues. IBM presumably will not be registered for the Pentathlon, so make that 10th.
But those results were based on a full onslaught of all machines, focused on the one project. With the Marathon running, and with a third project also likely to overlap, who knows what the result will be. But we at least know we can be top ten there if we play our cards right.

We certainly will have a much improved presence this year!

EDIT/side note: ThunderStrike is now up and running, partially. Over 1000 Zika tasks to crunch through with his 56 threads; I will have to add some Cosmology to keep him busy until the race starts.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
You can duplicate the camb_boinc2docker sections and replace camb_boinc2docker with lsplitsims in <name> and <app_name>.

Edit: Here are the app_configs which I use on the 6C/12T i7-4960X:
C:\ProgramData\BOINC\projects\www.cosmologyathome.org\app_config.xml
Code:
<app_config>
    <project_max_concurrent>1</project_max_concurrent>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>6</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>6</avg_ncpus>
    </app_version>
</app_config>


C:\ProgramData\BOINC\projects\www.worldcommunitygrid.org\app_config.xml
Code:
<app_config>
    <project_max_concurrent>2</project_max_concurrent>
</app_config>

I leave the global computing preferences at "Use at most [ 100 ] % of the CPUs".
I.e. boinc sees 12 logical CPUs, but doesn't use all of them in the end, because the two files above enforce a maximum of 1 C@H job and 2 WCG jobs at any time.

Note, I did not download any legacy application tasks from C@H. AFAIK these are always single-threaded, which would require a slightly different setup to keep the desired balance between C@H and WCG.

With 6 + 2 worker threads, I have the 6 physical cores of this hyperthreaded CPU slightly oversubscribed. But since the instruction mixes of C@H and WCG differ, since the C@H jobs have those 30 second long idle periods at their beginning, and with the caches and memory controller of this Ivy Bridge-E behind the hyperthreaded cores, I figure it's better to run slightly more threads than physical cores.
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
Argh... My 2P boxes have already finished all their OpenZika work. Off to create additional bunkers.

Edit: The guide from OCN about multiple instances basically works. But at WCG it has the downside that the client identity is not copied to the new instance, and therefore WCG wants to see some valid results returned from the new instance before handing out bigger numbers of WUs. -- For the future, I need to figure out how to copy the client identity without copying the full client state.

Edit 2: On the other hand, if the task limit which I encounter is not imposed by the client but by WCG, then there is no way around creating new clients with new identity.
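One thing that might help, though I haven't tried it (an untested assumption; Linux paths as examples): copy only the project's account file into the fresh data directory before the first start, so the new instance attaches with the same account key. The host ID would still be newly assigned, which may be exactly what triggers WCG's ramp-up.
Code:
# run while the new client is stopped; it attaches on its next start
cp /var/lib/boinc/account_www.worldcommunitygrid.org.xml /var/lib/boinc2/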
 

StefanR5R

Elite Member
Dec 10, 2016
5,680
8,226
136
Added app_config.xml to post #98.

This is from a machine on which I first downloaded as many C@H tasks as I could, and then downloaded as many WCG jobs as I could, and then configured a fixed ratio of cores for C@H + cores for WCG, both running at the same time.

On my other two machines which I treated with VirtualBox, I had a number of WCG OpenZika tasks left over from the last Formula BOINC sprint, and downloaded only new C@H tasks. On these machines I only have www.cosmologyathome.org/app_config.xml specifying how many threads per task shall be used for the virtualized jobs. But I did not specify a <project_max_concurrent> limit, neither for C@H nor for WCG. I simply let boinc's scheduler decide when to run C@H and when to run WCG.
 