8th Annual BOINC Pentathlon


TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
This will give you 4 WUs, each using one core/thread. This is just for the docker application, not the Planck one.

<app_config>
<app>
<name>camb_boinc2docker</name>
<max_concurrent>4</max_concurrent>
</app>
<app_version>
<app_name>camb_boinc2docker</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>1</avg_ncpus> <!-- this line specifies the number of threads per VM -->
</app_version>
</app_config>

Stefan posted this one to control both apps, but it didn't work for me the first time; trying it again now. I've modified it to 4 WUs, each with one core/thread:

<app_config>
<project_max_concurrent>4</project_max_concurrent>
<app_version>
<app_name>camb_boinc2docker</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>1</avg_ncpus>
</app_version>
<app_version>
<app_name>lsplitsims</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>1</avg_ncpus>
</app_version>
</app_config>
 
Reactions: Ken g6

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
Regarding LHC: if you already had it installed and it shows as LHC 1.0, you may need to remove that project in the manager and reinstall it. The current version just says LHC.

EDIT: Not sure why, but one machine will only load up LHC@home 1.0, even though it runs the most recent BOINC manager just like the rest. And I can't get any work from that.

The address should be http://lhcathome.cern.ch/ when adding the project, as compared to http://lhcathomeclassic.cern.ch/sixpack for the 1.0 version.

And holy moly, 1.8GB for the first VM download.
Oh wait, there are two more .vdi's, and perhaps a library, for another 1.45GB total.

 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,904
75
This will give you 4 WUs, each using one core/thread. This is just for the docker application, not the Planck one.

<app_config>
<app>
<name>camb_boinc2docker</name>
<max_concurrent>4</max_concurrent>
</app>
<app_version>
<app_name>camb_boinc2docker</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>1</avg_ncpus> <!-- this line specifies the number of threads per VM -->
</app_version>
</app_config>

Stefan posted this one to control both apps, but it didn't work for me the first time; trying it again now. I've modified it to 4 WUs, each with one core/thread:

<app_config>
<project_max_concurrent>4</project_max_concurrent>
<app_version>
<app_name>camb_boinc2docker</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>1</avg_ncpus>
</app_version>
<app_version>
<app_name>lsplitsims</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>1</avg_ncpus>
</app_version>
</app_config>
Thanks, but single-threading turned out not to be effective for me. A camb given 6 HT cores, plus 2 Einsteins, completes in under 5 minutes; 4 cambs plus 2 Einsteins complete each camb in over 20.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
@Ken g6, what Tony said. The critical part for you is avg_ncpus within app_version. Whether or not to configure any limit of concurrent tasks is then of course up to the host utilization that you want to achieve, and there are many ways to do that.

On my 4-core laptop (HT off for now), I am running Cosmo alone. I have configured planck and camb for avg_ncpus=4 and did nothing else.
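As an app_config.xml in the Cosmology@Home project directory, that setup would look roughly like this (just a sketch; it assumes the Planck app name is planck_param_sims and that it uses the same vbox64_mt plan class as camb):

<app_config>
<app_version>
<app_name>camb_boinc2docker</app_name>
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>4</avg_ncpus> <!-- threads per VM -->
</app_version>
<app_version>
<app_name>planck_param_sims</app_name> <!-- assumed name of the Planck app -->
<plan_class>vbox64_mt</plan_class>
<avg_ncpus>4</avg_ncpus>
</app_version>
</app_config>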

On the 6C/12T Ivy Bridge-E, I run one 6-threaded Cosmo task, 2 WCG tasks, and 2 Einstein tasks, all in a single client. I configured Cosmo for avg_ncpus=6 but no other limit. I configured WCG for project_max_concurrent=2. Einstein is configured for 0.5 GPUs and 0.01 CPUs per task. Then I set BOINC's global usage to 67% CPU = 8 logical CPUs, giving the mix of 1x6 + 2x1 + 2x"0.01".

On the 10C/20T Broadwell-E, I run two 5-threaded Cosmo tasks when available or 10 WCG tasks when not. For this purpose, I configured Cosmo to avg_ncpus=5, and set a global processor usage of 50 %. Then I have a second client instance on this PC which drives 3 GPUs with two Einstein tasks per GPU.

BTW, in this second instance with Einstein, I had set <cc_config><options><process_priority_special>2</...></...></...>, so that the Einstein feeders run at "normal" priority just like the Cosmo VMs. But now I lowered this to 1 = "below normal", and I still get the same GPU utilization. I'll keep it at 1.
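For reference, here is that option spelled out in cc_config.xml, a minimal sketch with the lowered value (1 = below normal, 2 = normal, as above):

<cc_config>
<options>
<process_priority_special>1</process_priority_special> <!-- priority of GPU/wrapper apps: 1 = below normal -->
</options>
</cc_config>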

Regarding LHC:
I completed a multithreaded Atlas on the BDW-E just fine, as a first smoke test. I also tried the IVB-E (which is able to run multithreaded Cosmo, as mentioned), but the Atlas task ran its first 5 minutes without CPU utilization. I aborted one such task and suspended another. Need to test more when there is time.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
Anyone know how to change the network settings for BOINC, to transfer more than 2 files at a time?

In cc_config.xml, between <cc_config><options> and </options></cc_config>:
<max_file_xfers>N</max_file_xfers>
Maximum number of simultaneous file transfers (default 8).​
<max_file_xfers_per_project>N</max_file_xfers_per_project>
Maximum number of simultaneous file transfers per project (default 2).​

[https://boinc.berkeley.edu/wiki/client_configuration]
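Put together, a minimal cc_config.xml raising the per-project limit could look like this (the 4 is just an example value); the client picks it up after a restart or after re-reading the config files:

<cc_config>
<options>
<max_file_xfers>8</max_file_xfers> <!-- overall limit, default 8 -->
<max_file_xfers_per_project>4</max_file_xfers_per_project> <!-- per-project limit, default 2 -->
</options>
</cc_config>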
 
Reactions: TennesseeTony

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
With all these VMs running, it's time to shut some stuff down and start handing out handfuls of RAM to the kiddies. I stole 48 GB of ECC DDR3 off eBay for less than one dollar per GB. Can't believe no one else bid on it.

Thanks Stefan! But if I remember correctly, the internet itself is designed to max out at 8 connections? So don't go past 8? I think SystemMechanic used to allow you to change the entire computer to 10, but it wasn't advised.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
The bar graph is averaged over all days, hence showing only a little more than a third of the peak.

With all these VMs running, it's time to shut some stuff down and start handing out handfuls of RAM to the kiddies. I stole 48 GB of ECC DDR3 off eBay for less than one dollar per GB. Can't believe no one else bid on it.
What, with current prices for new RAM being as high as they are now? I trust that you will run a memory test over the sticks as a first order of business.

But if I remember correctly, the internet itself is designed to max out at 8 connections?
I don't know; 8 sounds far too low as a global system limit. If you take a look at Resource Monitor, networking tab, TCP connections, you might already see a lot more active connections at any given time.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,904
75
I finally found where to enable virtualization on my laptop. So I'll be able to switch it over to Cosmo from WCG.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
[Marathon, Cosmology@Home]

Marius of C@H wrote on May 7:
"The in-progress standings of the Planck contest can now be seen here! (note: those standings include only Planck jobs, the Pentathlon counts any Cosmology@Home job)."

"Additionally, validation had been lagging a bit, but that is now fixed. It should take about 48 hours for all the jobs that had piled up waiting on validation to finish and for credit to be granted. There is also a shortage of work for the camb_boinc2docker and planck_param_sims applications, which I'm currently working on now and I expect to be fixed by tonight. Apologies for this!"​

Right now at C@H's own Planck app 2017 Pentathlon contest:
team rank 7 = TeAm AnandTech
user rank 7 = xii5ku​
I guess that's due to a little bit of patience/experimenting/micromanaging, and a boatload of pure luck in getting the right tasks.
 

crashtech

Lifer
Jan 4, 2013
10,554
2,138
146
Two of my machines chewed all the way through their Einstein bunkers and are waiting.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
My triple-GPU box is now halfway through its third Einsteinian bunker.

Yesterday I decided to give another try to a second box which had been unstable the whole time I had two GPUs in it: a GTX 1070 with a triple-slot cooler and a W7000 with a triple-slot cooler, crammed onto a µ-ATX board. One card had to sit in an 8-lane slot. I removed the W7000 and have been running 6-thread Cosmo + 2x WCG + 2x Einstein on the 1070 without issue now. Apparently, power distribution was not up to snuff for two cards.

[Swimming, LHC@Home]

pschoefer of SETI.Germany wrote on May 7 (in German):
(responding to a discussion of high barrier of entry due to VirtualBox)
"OK, WU supply of LHC@Home's [non-vbox] standard subproject SixTrack is difficult at the moment. But I know that somebody is present at CERN today, taking care of SixTrack especially."​
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
My triple-GPU box is now halfway through its third Einsteinian bunker.
I love how E@H is still reporting credits, even though I haven't done a single task in over 1.5 months
It can't hurt then that I even have a 4th bunker waiting to be processed. That way I have another ~1.5 days worth of old WUs with a somewhat higher chance of getting validated before the end of the race, compared to freshly downloaded WUs.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
On the 10C/20T Broadwell-E, I run two 5-threaded Cosmo tasks when available or 10 WCG tasks when not. For this purpose, I configured Cosmo to avg_ncpus=5, and set a global processor usage of 50 %. Then I have a second client instance on this PC which drives 3 GPUs with two Einstein tasks per GPU.

BTW, in this second instance with Einstein, I had set <cc_config><options><process_priority_special>2</...></...></...>, so that the Einstein feeders run at "normal" priority just like the Cosmo VMs. But now I lowered this to 1 = "below normal", and I still get the same GPU utilization. I'll keep it at 1.
I shut down WCG on this machine, and it is now running
3x 5-threaded Cosmo VMs at "normal" Windows scheduler priority,
6 Einstein GPU feeders at "below normal" Windows scheduler priority.​
The Einstein GPU feeders do their work in a staggered timeline, so that their peak CPU activities do not coincide. I suspect the rest of their CPU usage is dumb polling without actual computation.
With these 21 (!) active threads on the 10C/20T CPU, Cosmo task durations still look good, and the GPUs are still well fed.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
Czech National Team emptied a bunker of ~1.1 million WCG points a few hours ago, and almost, but not quite, overtook us in the City Run. Their normal hourly output is a little bit lower than ours. So the question remains how much dark WCG production will be uploaded when Cross Country starts (6 hours from now).
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
Oops - CNT did it again an hour ago, and this time overtook not only us but OcUK too by means of a ~2.5 M bunker.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
Congrats Stefan on the Planck standings! Well done!

Most of my Einsteins will run out before the dumping begins; only ThunderStrike, with the quad 1080s, has already finished.

I was wondering if CNT was toying with the competition. Worse yet, I wonder how many toys they have left.

@Markfw Will you be joining us with your GPUs for Einstein? If so, you'll need to 'finish' your tasks soon and then pause the client; it starts in less than 4 hours and runs for 5 days. I'm sure others here on the TeAm will be glad to promise to run Folding for a while after the Pentathlon is over, to make up the points to the TeAm in Folding.

If so you will want to leave 2 threads available per GPU, and use this app_config.xml:

EDIT: Actually, thanks to @iwajabitw you can forgo creating an app_config. See the Edit at the bottom.

Code:
    <app_config>
    <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
    <gpu_usage>.5</gpu_usage>
    <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
    </app>
    <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
    <gpu_usage>.5</gpu_usage>
    <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
    </app>
    </app_config>

But you will need to set up your account at Einstein too.
Once you join, choose the TeAm, etc, make sure you change the projects too, disabling the CPU apps for example.
Once logged onto Einstein's webpage, go here to change stuff.
EDIT 2: Once you are on your preferences page, in order to change the GPU's to run two tasks at a time, instead of an app_config, just change these to 0.5 instead of 1.0:
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
only so much we can do, there's only around 7 people in our team taking part lol
The stats.free-dc.org WCG tables, filtered by people with output yesterday, show the following output during the last 7 days:
CNT ....... 1 user >500 k, 10 users >100 k, 20 users >50 k, 101 users >10 k, 218 users >1 k
OcUK ..... 2 users >500 k, 8 users >100 k, 11 users >50 k, 18 users >10 k, 23 users >1 k
TeAm ..... 3 users >500 k, 9 users >100 k, 11 users >50 k, 23 users >10 k, 34 users >1 k

That's counted over all WCG subprojects of course; there is no way to tell how many of those are on OpenZika exclusively right now. Count is cumulative; i.e. a 500 k user is also a 100 k user.
Edit: The count of CNT's top users may be off because CNT's 500 k bunker drop isn't in these stats yet.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
CNT now passed L'Alliance and Gridcoin in the City Run as well, and OCN is taking notice.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,741
14,773
136
Congrats Stefan on the Planck standings! Well done!

Most of my Einsteins will run out before the dumping begins; only ThunderStrike, with the quad 1080s, has already finished.

I was wondering if CNT was toying with the competition. Worse yet, I wonder how many toys they have left.

@Markfw Will you be joining us with your GPUs for Einstein? If so, you'll need to 'finish' your tasks soon and then pause the client; it starts in less than 4 hours and runs for 5 days. I'm sure others here on the TeAm will be glad to promise to run Folding for a while after the Pentathlon is over, to make up the points to the TeAm in Folding.

If so you will want to leave 2 threads available per GPU, and use this app_config.xml:

EDIT: Actually, thanks to @iwajabitw you can forgo creating an app_config. See the Edit at the bottom.

Code:
    <app_config>
    <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
    <gpu_usage>.5</gpu_usage>
    <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
    </app>
    <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
    <gpu_usage>.5</gpu_usage>
    <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
    </app>
    </app_config>

But you will need to set up your account at Einstein too.
Once you join, choose the TeAm, etc, make sure you change the projects too, disabling the CPU apps for example.
Once logged onto Einstein's webpage, go here to change stuff.
EDIT 2: Once you are on your preferences page, in order to change the GPU's to run two tasks at a time, instead of an app_config, just change these to 0.5 instead of 1.0:
When my 1080 Ti comes in on Wednesday, I might. How long is it for? That one monster alone could help.
 