ao_ika_red
Golden Member
- Aug 11, 2016
- 1,679
- 715
- 136
I'm planning to reconnect to the S@H server at 11 p.m. (ET) to avoid the initial data surge.
Back online, but out of GPU tasks at the moment.
@Thebobo, check out the link from Tony's post:
http://www.overclock.net/t/1628924/guide-setting-up-multiple-boinc-instances
Running more than one client on a single host (one after another, or several at the same time) is especially useful in cases like these:
- You want to download a large number of tasks so that you can run the machine for an extended time without a network connection, or without having to rely on a steady network connection and a steadily working project server. But the project server may allow only a few tasks in progress per client. (And the client limits itself to 1,000 tasks in progress.)
- A variation of the theme: You want your host to issue project updates more frequently than the server allows per client.
- Another potential use case is improving utilization of a large GPU by running more than one job on the GPU at a time. Actually, this can also improve utilization of medium-size GPUs: most GPU applications have a setup and teardown phase in which they use CPU but not GPU. Rather than leaving the GPU idle during that time, it is better to run another GPU job with a suitable shift in time. However, you don't need multiple clients in order to run more than one job per GPU at a time; you just need to provide a suitable app_config.xml file in the project directory.
- You want to run two or more projects at the same time, but you need different local preferences for each project. E.g. you want one project to have a network connection, but want to disable networking for the other project temporarily. Another example: you want fine control over what percentage of CPU time each project gets, i.e. you don't want the client to sort this out by itself via the coarse method of "resource share" percentages per project.
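As a sketch of how a second client instance can be started on Windows: the directory C:\BOINC2 and the port number below are my own arbitrary choices, and you should verify the flags against your client version with boinc.exe --help before relying on them.

```shell
rem Create a separate data directory for the second client (path is an example).
mkdir C:\BOINC2
rem Launch a second client instance pointed at that directory.
rem --allow_multiple_clients permits more than one client on this host;
rem --gui_rpc_port gives the second instance its own manager/RPC port.
"C:\Program Files\BOINC\boinc.exe" --allow_multiple_clients --dir C:\BOINC2 --gui_rpc_port 31417
```

Each instance then keeps its own projects, preferences, and task cache in its own data directory.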
Here is a C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\app_config.xml which contains entries for current NVidia GPU applications:
In this example, "<avg_ncpus>0.01</avg_ncpus>" makes the client believe that this application uses almost no CPU at all. This ensures that the client continues to launch GPU jobs even if it is loaded with CPU jobs at the same time. In reality, the SETI Nvidia GPU tasks will need one CPU thread for some short periods of time, so take care that you don't overwhelm your CPU with CPU tasks + GPU tasks.
Code:
<app_config>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>cuda50</plan_class>
        <avg_ncpus>0.01</avg_ncpus>
        <ngpus>0.5</ngpus>
    </app_version>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>cuda42</plan_class>
        <avg_ncpus>0.01</avg_ncpus>
        <ngpus>0.5</ngpus>
    </app_version>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>opencl_nvidia_SoG</plan_class>
        <avg_ncpus>0.01</avg_ncpus>
        <ngpus>0.5</ngpus>
    </app_version>
</app_config>
And "<ngpus>0.5</ngpus>" makes the client believe that the application will only use half of the computational resources of the GPU; therefore the client will always launch two GPU jobs in parallel. In reality, the application will use varying amounts of the GPU over time if you run a single job. You can watch GPU utilization (cores and memory) with e.g. the GPU-Z application or other sensor tools. If you run two jobs, and especially if you use boincmgr to defer the start of the 2nd job a little after the start of the 1st, the jobs will increase GPU utilization while fighting each other a bit for resources.
Store this as a plain text file (with the .xml file extension, of course) in the directory mentioned above, then use the "Options / Read config files" menu item in the advanced view of boincmgr, and the file becomes active.
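If you prefer the command line, the boinccmd tool has a related option; as far as I know it tells the running client to re-read its configuration, but check boinccmd --help on your version (and note that simply restarting the client also picks the file up):

```shell
rem Ask the running client to re-read its configuration files.
boinccmd --read_cc_config
```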
Note, I am a SETI@Home newbie, and I suspect there are more and possibly better ways to configure SETI@Home as best as possible for a given GPU model.
Back to the topic of multiple clients per host: Another potential use case is if you want to launch more CPU tasks than you normally can with 100 % allowed CPU usage. You may want to do so in rare cases of applications which have poor CPU utilization. However, this can also be solved with a single client per host by means of the <ncpus> option in C:\ProgramData\BOINC\cc_config.xml. (Documentation: https://boinc.berkeley.edu/wiki/Client_configuration)
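For illustration, a minimal cc_config.xml using that option might look like this; the value 16 is an arbitrary example and tells the client to behave as if the machine had 16 logical CPUs:

```xml
<cc_config>
    <options>
        <!-- Pretend the host has 16 logical CPUs, regardless of hardware. -->
        <ncpus>16</ncpus>
    </options>
</cc_config>
```

As with app_config.xml, the file takes effect after "Options / Read config files" or a client restart.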
It seems there will be a two-horse race between Virgo and Taurus. But there are 12 days to go; anything can happen.
Woohoo! Taurus leads the pack! Go Taurus! (and TeAm)!
Murphy is at work...............
I am on vacation, and apparently a thunderstorm has knocked out some of my rigs (or network equipment), and I'm nowhere near home. It will be a couple of days before I can get them back up and running.
Great! Thanks Stefan. It'll take me a while to digest all that. : )
Had two of my rigs go down today. One's back up, and it's time for a nap before the next. May have lost a GPU, but oh, well.
I don't think a place filled with multiple TFLOPS of GPUs can be called a junkyard.
No worries. I have a junkyard of computer pieces taking up a full garage bay. I'll get it up.
I partially fixed my problem by enabling 2 tasks per GPU (usually it's 1 task per GPU, because its utilization already hits 100%). Now it needs 18 minutes to do a single GPU task. Not an optimal solution, but it's better than nothing.
I only use a single client; usually a GPU task is done in 12-14 minutes, but today it needs 20+ minutes.
The top user brought 113 hosts.
Woohoo! Taurus leads the pack!
I have seen such variations too. Could this just be the difference between
I've had some problems with GPU tasks from 2nd/3rd BOINC clients on the same computer, taking far more than an hour when it should be only 12-20 minutes (4 at a time per GPU).