PrimeGrid Races 2017


StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
A 10-core CPU, which is currently carrying 6 GPU feeder tasks for Einstein, is now running PSP-LLR on the remaining 4 cores like so:
Code:
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>2</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 2</cmdline>
      <avg_ncpus>2</avg_ncpus>
      <max_ncpus>2</max_ncpus>
   </app_version>
</app_config>

There are three aspects here:
  • cmdline tells the BOINC client to start the application with the multithreading option (in the quoted example: dual-threaded).
  • avg_ncpus and max_ncpus tell the scheduler how many logical CPUs this application is going to occupy (here: 2), so that the scheduler can calculate how many concurrent instances of it can be started (besides other applications, if any) before as many CPUs are saturated as the general computing preferences allow.
  • max_concurrent in the above example says that I want at most two llrPSP processes (hence, 2x2 llrPSP worker threads) running at any time, because I wanted the other CPU cores for Einstein. I omitted this line on my PCs that are running PrimeGrid exclusively.
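For comparison, on my PCs that run nothing but PrimeGrid, the same file minus the max_concurrent line lets the scheduler fill all allowed CPUs with llrPSP instances on its own. A sketch (same app name and thread count as above):

```xml
<app_config>
   <app>
      <name>llrPSP</name>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 2</cmdline>
      <avg_ncpus>2</avg_ncpus>
      <max_ncpus>2</max_ncpus>
   </app_version>
</app_config>
```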
 
Reactions: Orange Kid

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
avg_ncpus and max_ncpus take effect immediately when "Options"/ "Read config files" is used, whereas cmdline takes effect
  • when new tasks are started,
  • if previously running tasks are restarted when the BOINC client was completely shut down and restarted,
  • if previously running tasks are suspended, then resumed, provided that "Options"/ "Computing preferences..."/ "Disk and memory"/ "[ ] Leave non-GPU tasks in memory while suspended" has been switched off before suspending the tasks.
Conversely, if tasks are suspended, cmdline is changed, configs are re-read, and the tasks are then resumed, the new cmdline does not take effect if "[ x ] Leave non-GPU tasks in memory while suspended" is on.

So, IME you can switch to a different set of cmdline, avg_ncpus, max_ncpus anytime you want, if you follow one of the three points above.

Furthermore, if you encounter what @Ken g6 reported (fewer tasks running than expected - which I cannot reproduce right now), then setting avg_ncpus and max_ncpus to slightly less than the true value (e.g. 1.9 instead of 2) may convince the scheduler to launch enough concurrent tasks.
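Such a workaround could look like this (a sketch with hypothetical values: 1.9 instead of the true 2, as suggested, while cmdline keeps the real thread count):

```xml
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>2</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 2</cmdline>
      <avg_ncpus>1.9</avg_ncpus>
      <max_ncpus>1.9</max_ncpus>
   </app_version>
</app_config>
```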
 
Reactions: Ken g6

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
PS:
If you remove a tag such as cmdline from app_config.xml, or simply delete the data within the tag (the "-t n" or whatever) leaving the tag empty, then force re-reading the config files, the boinc client will happily continue to apply the former tag data. Any tags that you want to change, you must leave in the file and explicitly provide with new data (e.g. "-t 1" in cmdline if you want to go from multithreaded back to singlethreaded for some reason).
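So, to go back to single-threaded, keep the tags in the file and change their data, along these lines (a sketch, not tested on every client version):

```xml
<app_config>
   <app>
      <name>llrPSP</name>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 1</cmdline>
      <avg_ncpus>1</avg_ncpus>
      <max_ncpus>1</max_ncpus>
   </app_version>
</app_config>
```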
 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
So if I wanted to run four tasks, using 4 threads each...??

Code:
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>4</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 4</cmdline>
      <avg_ncpus>4</avg_ncpus>
      <max_ncpus>4</max_ncpus>
   </app_version>
</app_config>


And five tasks with 4 threads each would be...

Code:
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>5</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 4</cmdline>
      <avg_ncpus>4</avg_ncpus>
      <max_ncpus>5</max_ncpus>
   </app_version>
</app_config>
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
The second one needs <max_ncpus>4</max_ncpus> too. It states the maximum number of logical CPUs that one instance of this application is going to use, from what I understood from the documentation.

Actually, max_ncpus may be unnecessary, and just stating avg_ncpus could be sufficient. But I haven't tried that; I originally took the partly redundant avg_ncpus + max_ncpus specification from a primegrid forum post.

max_concurrent is really an upper bound, among other potential upper bounds. The scheduler will fire up fewer llrPSP tasks than that if the number of logical CPUs times "Use at most xy % of the CPUs" is less than max_concurrent times max_ncpus, or if the scheduler determined that CPU time should be spent on tasks of other active projects of course.
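So the corrected second example would read as follows (max_ncpus now matching the per-task thread count, not the task count):

```xml
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>5</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 4</cmdline>
      <avg_ncpus>4</avg_ncpus>
      <max_ncpus>4</max_ncpus>
   </app_version>
</app_config>
```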
 

Orange Kid

Elite Member
Oct 9, 1999
4,356
2,154
146
So this gets me one instance running with 2 CPUs. The problem is that Task Manager shows CPU utilization of only 57%. If I change max_concurrent to 2, then HT kicks in with two instances of 2 CPUs each, and the CPU goes to 100%.
So for my laptop i5 I'll just let the four run, as the difference doesn't seem that great. I'll play with the AMD antiques tomorrow.

Code:
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>1</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 2</cmdline>
      <avg_ncpus>2</avg_ncpus>
      <max_ncpus>2</max_ncpus>
   </app_version>
</app_config>
 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
Code:
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>6</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 4</cmdline>
      <avg_ncpus>6</avg_ncpus>
      <max_ncpus>6</max_ncpus>
   </app_version>
</app_config>

EDIT: This results in each task running on 6 CPUs/threads. (See updated post below.) Currently only one task, as I'm finishing up the tasks from the previous project. Will update what happens when Leiden Classical finishes up in 40 minutes or so.
 
Last edited:

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
cmdline does indeed control the number of threads (to a degree?), but the BOINC manager was saying it was going to use 6, shown to the right side of each task (in parentheses). I've eliminated some of the entries that didn't apply to my situation, and only have the cmdline, currently set for 26 threads. BUT, it looks like it's only using 23-24 threads, as the CPU usage should show 93% (plus 1-2% for background tasks), but it's fluctuating between 80-90% usage. Never mind Task Manager saying it's at 99%; it seems to base that on non-Turbo speeds.

EDIT: This config looks like it will finish one task in about 5 and a half hours. Est time remaining is so frustrating on PG tasks, lol, always INCREASING, then dropping down again. The only sure way to know how long it takes is to check the log after it completes and uploads.

 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
So this gets me one instance running with 2 CPUs. The problem is that Task Manager shows CPU utilization of only 57%. If I change max_concurrent to 2, then HT kicks in with two instances of 2 CPUs each, and the CPU goes to 100%.

In terms of total number of tasks finished during a week, LLR is said to perform best when hyperthreading is switched off in the BIOS, second best if HT is on but only half of the logical CPUs is occupied, worst if HT is on and all logical CPUs are occupied.

But it is also said to vary somewhat between CPU types, RAM config, and possibly OS. Since measuring this takes a lot of time, I have checked it so far only with single-threaded SoB-LLR v7.06 on mobile Haswell (HT off is 5 % better than HT on) and Broadwell-E (HT off is 38 % better than HT on), and with single-threaded PSP-LLR v8.00 on Broadwell-EP (HT off is 5 % better than HT on). I suspect multi-threaded PSP-LLR reacts similarly to HT.

So for my laptop i5 I'll just let the four run, as the difference doesn't seem that great. I'll play with the AMD antiques tomorrow.

The AMDs will probably be slow despite more cores and no HT, because they have less vector processing power.

<cmdline>-t 4</cmdline>
<avg_ncpus>6</avg_ncpus>
<max_ncpus>6</max_ncpus>

<cmdline>-t 4</cmdline> will result in the application actually generating a load amounting to 4 logical CPUs. <{avg,max}_ncpus>6</{avg,max}_ncpus> results in the BOINC scheduler believing that the application is using 6 CPUs, and spawning correspondingly fewer application instances.

AFAIU the BOINC scheduler does not measure CPU occupation itself; it relies on what the task description says or on the hints in app_config.xml.
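In other words, to actually occupy 6 threads per task, cmdline and the ncpus hints need to agree. A sketch with the other limits kept as in the quoted config:

```xml
<app_config>
   <app>
      <name>llrPSP</name>
      <max_concurrent>6</max_concurrent>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPSP</app_name>
      <cmdline>-t 6</cmdline>
      <avg_ncpus>6</avg_ncpus>
      <max_ncpus>6</max_ncpus>
   </app_version>
</app_config>
```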
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,903
75
Day 3.1 stats:

Rank___Credits____Username
9______1228956____xii5ku
109____84887______Ken_g6
136____68569___10esseeTony
156____55909______SlangNRox

Rank__Credits____Team
5_____2465726____BOINC@Poland
6_____1833078____BOINC@MIXI
7_____1636711____Crunching@EVGA
8_____1438322____TeAm AnandTech
9_____1067946____Team 2ch
10____1044461____Rechenkraft.net
11____972460_____The Knights Who Say Ni!

Ah, I have company. And Stefan's really going to town!
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
I've eliminated some of the entries that didn't apply to my situation, and only have the cmdline, currently set for 26 threads. BUT, looks like it's only using 23-24 threads, as the cpu usage should show 93% (plus 1-2% for background tasks), but it's fluctuating between 80-90% usage.

On one of my two 2x14 core Linux boxes, on which I set 14 dual-threaded tasks, "top" shows 200...190 % CPU utilization per task. (Optimum would be 200 %.) On the other box with 4 seven-threaded tasks, it's 680...640 % per task, with occasional dips below 500 %. (Optimum would be 700 %.)

Box 1 with 14 dual-threaded tasks, on early April 9:
14 tasks completed
run time 100,000...110,000 s each (105,000 s average)
CPU time 200,000...220,000 s each (210,000 s average)
ratio of CPU time to run time 1.989...1.990 (1.990 average, theoretical upper bound is 2.0)
5 validated tasks, 13,545...14,664 credits/task (14,200 average)
162,000 PPD in total​

Still box 1 with 14 dual-threaded tasks, on April 10 noon:
14 tasks completed
run time 200,000...220,000 s each (210,000 s average)
CPU time 200,000...220,000 s each (210,000 s average)
ratio of CPU time to run time 1.000...1.053 (1.015 average, theoretical upper bound is 2.0)
4 validated tasks, 13,535...14,665 credits/task (14,100 average)
82,000 PPD in total ​

The tasks which completed on April 9, as well as the ones which completed on April 10, were all downloaded on April 7, 22:26 UTC. I looked at the system log, sensors, etc., and could not spot anything that would explain the sudden drop in performance.

Box 2 with 4 seven-threaded tasks, between April 8 and April 10:
47 tasks completed
run time 19,000...134,000 s each (62,000 s average)
CPU time 100,000...140,000 s each (130,000 s average)
ratio of CPU time to run time 1.00...6.77 (2.83 average, theoretical upper bound is 7.0)
9 validated tasks, 13,545...14,664 credits/task (14,200 average)
110,000 PPD in total​

Still box 2,
April 8: 16 tasks completed, run time avg. 49,000 s, 131,000 PPD
April 9: 19 tasks completed, run time avg. 87,000 s, 93,000 PPD
April 10: 12 tasks completed (4 more to come), run time avg. 39,000 s, 123,000 PPD​

Note, all PPD values given above were calculated based on credits per task, number of simultaneous tasks, and average run time of a task. This means, if reported run times are wrong, calculated PPD are wrong as well.
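For illustration, the box 1 figures from April 9 work out like this (a back-of-the-envelope sketch of that calculation; the variable names are mine, the values are the ones quoted above):

```python
# Back-of-the-envelope PPD estimate for box 1, April 9
simultaneous_tasks = 14   # dual-threaded tasks running at once
credits_per_task = 14200  # average credits of the validated tasks
avg_run_time = 105000     # average run time per task, in seconds

seconds_per_day = 86400
ppd = simultaneous_tasks * credits_per_task * seconds_per_day / avg_run_time
print(round(ppd))  # about 164,000, in the same ballpark as the 162,000 PPD above
```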

I am confused. I will give it one more day to see how these two boxes compare with each other. I am beginning to believe that the run times and CPU times are misreported at the web site. Since all tasks give roughly the same credit per task, I will simply look at the total number of tasks completed and copy the app_config.xml of the better box to the worse box.

Edit:
Actually, looking at the times when the tasks were reported as completed, box 1 completed its first batch of 14 tasks in about 1d6h = 110,000 s, and the second batch of 14 tasks again in 1d6h = 110,000 s = 155,000 PPD for the box. IOW the runtimes at the web site are definitely wrong.
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,903
75
I'm a little late! Day 4.2 stats:

Rank___Credits____Username
8______1719868____xii5ku
84_____194454___10esseeTony
134____126987_____Ken_g6
181____77887______Orange Kid
191____67140______GLeeM
205____55909______SlangNRox
244____41598______zzuupp

Rank__Credits____Team
5_____3549606____BOINC@Poland
6_____3000213____Crunching@EVGA
7_____2479217____BOINC@MIXI
8_____2283847____TeAm AnandTech
9_____1894909____Rechenkraft.net
10____1737479____US Navy
11____1722342____Team 2ch

Glad to see more people are in this race after all.

P.S. I was late because I was buying a better CPU.
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
Our distance from ranks 9, 10, 11 looks decent. But "day 4.2" in this race is really like "hour 4.2" in a more normal race.

And the upcoming Formula Boinc sprint could stir up the ranks.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
www.google.com
Nice price on the cpu!

Yes, the next FB race will decrease our output, short term... I've reached my goal of 100,000,000 on PG, so I am only here to stay ahead of Ken so I don't get fussed at.
 
Reactions: Ken g6

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
Box 1 with 14 dual-threaded tasks:
41 tasks completed in 3d22h
= mean time between task completions: 2h18m
~150,000 PPD
mean task runtime 1d8h​

Box 2 with 4 seven-threaded tasks:
68 tasks completed in 3d22h
= mean time between task completions: 1h23m
~240,000 PPD
mean task runtime 5h32m​

Switching box 1 from impulse to warp now.
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,903
75
Day 5.06 stats:

Rank___Credits____Username
8______2364108____xii5ku
67_____341330___10esseeTony
125____178806_____Ken_g6
165____126618_____Orange Kid
229____67140______GLeeM
249____56253______zzuupp
250____55909______SlangNRox

Rank__Credits____Team
4_____11910046___Czech National Team
5_____4711176____BOINC@Poland
6_____4321376____Crunching@EVGA
7_____3190166____TeAm AnandTech
8_____3058648____BOINC@MIXI
9_____2386189____Rechenkraft.net
10____2362561____Team 2ch

Wow, we're up to 7th!
 
Reactions: TennesseeTony

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,903
75
Day 6.17 stats:

Rank___Credits____Username
9______3057884____xii5ku
85_____356365___10esseeTony
130____222974_____Ken_g6
179____152271_____Orange Kid
235____94000______GLeeM
257____70930______zzuupp
280____55909______SlangNRox

Rank__Credits____Team
4_____14881374___Sicituradastra.
5_____5844038____Crunching@EVGA
6_____5696457____BOINC@Poland
7_____4010336____TeAm AnandTech
8_____3791746____BOINC@MIXI
9_____3342910____Rechenkraft.net
10____3269895____Team 2ch

Still about the same. Looks like any impact from Formula BOINC hasn't hit yet.
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
Thanks for the stats. I slipped a notch down in individual ranks, but the user who passed me, like all other users before me, is from one of the top 6 teams, not from any of the teams that chase us.

Looks like any impact from Formula BOINC hasn't hit yet.

I for one took only an Ivy Bridge-E from part-time PG to full-time NF@H. 11 NF@H threads run quite a bit cooler and quieter than the mere 4 PSP-LLR threads which I had on it while acoustically permissible. Edit: IOW, my Haswells and Broadwells with their beefed-up vector units stay here.
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,903
75
Day 7.1 stats:

Rank___Credits____Username
9______3630799____xii5ku
101____356365___10esseeTony
161____222974_____Ken_g6
196____152271_____Orange Kid
247____99559______zzuupp
255____94000______GLeeM
304____55909______SlangNRox

Rank__Credits____Team
4_____17469441___Sicituradastra.
5_____7016831____Crunching@EVGA
6_____6780149____BOINC@Poland
7_____4611878____TeAm AnandTech
8_____4352579____BOINC@MIXI
9_____3803808____Team 2ch
10____3757514____Rechenkraft.net

About the same as yesterday.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,282
3,903
75
Day 8.14 stats:

Rank___Credits____Username
9______4157125____xii5ku
116____356365___10esseeTony
160____265028_____Ken_g6
220____152271_____Orange Kid
247____114464_____zzuupp
272____94000______GLeeM
330____55909______SlangNRox

Rank__Credits____Team
4_____20225477___Sicituradastra.
5_____8259633____Crunching@EVGA
6_____7808780____BOINC@Poland
7_____5195165____TeAm AnandTech
8_____5070265____BOINC@MIXI
9_____4635906____Team 2ch
10____4588185____Rechenkraft.net

I think our progress is taking a hit from the NF@H race. I know mine is.
 
Reactions: TennesseeTony

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,252
136
Thanks for the stats.

Daily production:
TeAm: 647 k yesterday, 561 k today (13 % less than yesterday)
Mixi: 603 k yesterday, 690 k today (129 k more than TeAm today)​
Distance:
from TeAm to Mixi: -260 k yesterday, -125 k today​

So the TeAm might easily slip to rank 8 tomorrow. But if we get back to normal after the Formula Boinc sprint, we should be able to straighten this out again.

I think our progress is taking a hit from the NF@H race. I know mine is.

I allowed some NF@H tasks to slip onto some of my nodes. Those are done now.

 
Reactions: TennesseeTony