Formula Boinc Sprints 2018


crashtech

Lifer
Jan 4, 2013
10,554
2,138
146
I had to take my only R9 290 out of the Sprint because it was producing a lot of errors and invalid WUs. Not sure why; it was actually slightly underclocked and showed no signs of problems.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
Currently, free-dc lists the following teams with more points yesterday than on the day before yesterday:

TeAm AnandTech (x 1.4, FB league 1)
XtremeSystems (x 8.6, FB league 2)
[H]ard|OCP (x 4.5, FB league 2)
Anguillan Pirates (x 38.7, FB league 3)
Overclock.net (x 3.9, FB league 1)
Planet 3DNow! (x 1.5, FB league 1)
Rechenkraft.net (x 1.4, FB league 1)
Crunching@EVGA (x 7.1, FB league 2)​

However, free-dc's numbers are only a rough indicator of whether or not teams are sprinting. LAF (x 0.8) and CNT (x 0.9) are definitely taking part in the sprint too, despite their flat output.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
I had to take my only R9 290 out of the Sprint because it was producing a lot of errors and invalid WUs. Not sure why; it was actually slightly underclocked and showed no signs of problems.
Power delivery, probably. How about putting it in another machine to see if the problem persists?
LOL. Torpedo and broken windmill. They know how to keep the party rolling.
On another note: I finally encouraged myself to try Lunatics v0.45b6 yesterday, and the results exceeded my expectations. CPU app performance shows the most significant improvement, while GPU app performance is very close to, or just a wee bit faster than, the stock app.
I regret not using it earlier.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
From what I read, the Lunatics GPU applications are no longer up to date and should generally be avoided. But since it does not regress on your hardware, this rule does not apply universally, I suppose.

Regarding performance tracking: CPU tasks have wide variability; it is absolutely necessary to take the total runtime and total credit of a lot of validated tasks in order to make valid comparisons. I haven't looked into whether that's true to the same extent for GPU tasks.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
From what I read, the Lunatics GPU applications are no longer up to date and should generally be avoided. But since it does not regress on your hardware, this rule does not apply universally, I suppose.
This also surprises me. My last endeavour with the Lunatics GPU app on an R7 360 was quite dismal, but it works better with newer hardware. The duration is pretty consistent at 20-30 minutes per 2 tasks, just like the stock GPU app.
Regarding performance tracking: CPU tasks have wide variability; it is absolutely necessary to take the total runtime and total credit of a lot of validated tasks in order to make valid comparisons.
I'm well aware of this. But so far, with the custom app, it rarely even reaches the 2-hour mark, while the stock app usually hit 3 hours for the longer tasks.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
We overtook KWSN and are now 4th again, behind the three big groups. LAF are still stuck at 6th, and CNT inched their way from 12th to 11th. OCN are still 15th, sandwiched between various Asian groups.

In league 2, [H] elbowed themselves onto rank 1. Crunching@EVGA, still at 5th, are gaining on the US Navy. League 3 still appears to have a single sprinter, the Pirates of the British overseas territory at rank 1.

Here is the change of daily output from Thursday to Friday, according to free-dc, filtered for teams with >10 % increase, ordered by points on Friday.
Code:
team              change Fri:Thu  league
----------------------------------------
SETI.USA                  1.13       1
TeAm AnandTech            1.42       1
L'Alliance Francophone    1.37       1
[H]ard|OCP                2.09       2
XtremeSystems             1.45       2
Czech National Team       1.31       1
Crunching@EVGA            3.83       2
Anguillan Pirates         1.16       3
BOINC Synergy             1.11       2
Overclock.net             1.58       1
Carl Sagan                1.15       3
SETI Sverige [Sweden]     1.31       -
Wyoming Team              1.11       -
BOINC RUSSIA              1.16       3
Universe Examiners        1.11       3
The Scottish Boinc Team   1.69       2
Planet 3DNow!             1.21       1
Team MacAddict            1.10       -
Rechenkraft.net           1.40       1
SETI@Deutschland          1.11       -
OcUK - Overclockers UK    1.34       1
wcnews                    1.11       -
LinusTechTips_Team        1.10       2
 

biodoc

Diamond Member
Dec 29, 2005
6,271
2,238
136
I'm impressed by the TeAm's output for this sprint.

I have my 4 Nvidia GPUs in the race and am surprised how slowly the 8.01 (cuda60) WUs are processed vs the 8.22 (opencl_nvidia_sah) & 8.22 (opencl_nvidia_SoG) WUs. The points awarded for completing all three types of WUs are similar, so I'm assuming, perhaps incorrectly, that the same amount of work is incorporated into each WU. I searched for optimized apps for Linux and found it too confusing, and since it was such a short race, I stuck with the standard apps.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
My GPU-less Broadwell-EPs run the Lunatics application about twice as fast as my single-socket Broadwell-EP with three GPUs. I haven't seen such a large performance difference when I ran PrimeGrid multithreaded tasks on the CPUs and SETI on the GPUs. All on Linux.
 

Orange Kid

Elite Member
Oct 9, 1999
4,356
2,154
146
Day two and we have moved to fourth.
Seems the Knights have stopped to admire one too many shrubberies. They may yet get back on their horses and pass us.....so search on.

1 Gridcoin ..............................25 .........20,818,658
2 SETI.USA ...........................18 ..........7,755,690
3 SETI.Germany ....................15 ..........6,680,912
4 TeAm AnandTech ...............12 ..........3,377,751

5 The Knights Who Say Ni! ....10 ..........3,246,317
6 L'Alliance Francophone ........8 ...........3,013,384
7 USA .......................................6 ...........2,577,346
8 Canada ..................................4 ...........2,419,345
9 The Planetary Society ............2 ...........2,322,856
10 BOINC@AUSTRALIA ..........1 ...........2,317,302
 

Howdy

Senior member
Nov 12, 2017
572
480
136
Dialed in and finally running smoothly.......all systems go.....hopefully I do not end up like Major Tom.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
I have my 4 Nvidia GPUs in the race and am surprised how slowly the 8.01 (cuda60) WUs are processed vs the 8.22 (opencl_nvidia_sah) & 8.22 (opencl_nvidia_SoG) WUs. The points awarded for completing all three types of WUs are similar, so I'm assuming, perhaps incorrectly, that the same amount of work is incorporated into each WU.
Certainly. Also, wingmen may run an entirely different application on a different OS for the same WU.

I searched for optimized apps for Linux and found it too confusing, and since it was such a short race, I stuck with the standard apps.
I was lucky to get the right pointers already last year for CPU applications that are best on my CPUs, from @Kiska. And this time around another tip of his pointed me towards the best GPU application, but due to lack of spare time for experimenting I waited and watched what others would get out of it. Notably, I have to thank @Ken g6 for testing the waters.

So, earlier today I put the optimum GPU application on hosts with dual GPUs but small CPUs, and didn't bother to put the optimum CPU application on them; I conveniently let them run TN-Grid instead. But now I also went through the motions to put the optimum CPU + GPU applications, both MultiBeam and AstroPulse, on my only host which has a big CPU besides GPUs. While doing so, I even discovered a mistake in my previous CPU-only app_info which prevented me from receiving AP CPU tasks.

Now there remains one thing for me to do: Figure out a good <cmdline> for the GPU application for running multiple tasks in parallel. If combined with multiple client instances, that would allow me to run deeper task queues which (a) are comparably deep to typical wingmen's queues and (b) make me survive "maintenance Tuesday" during the next big SETI@Home competition without having to set up tricky daily schedules.
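As an illustration (not a recommendation), per-application command-line options can be expressed in app_config.xml via an <app_version> element; with an anonymous-platform setup like mine, the equivalent <cmdline> would instead go into app_info.xml. The plan_class and the flags shown here (-sbs and -period_iterations_num are tuning options of the OpenCL SoG application, as I recall) are assumptions to verify against the application's ReadMe, and the values are placeholders to experiment with:

```xml
<!-- Sketch only: the plan_class and <cmdline> flags are assumptions;
     check them against the GPU application's ReadMe before use. -->
<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>  <!-- run 2 tasks per GPU -->
            <cpu_usage>1.0</cpu_usage>  <!-- budget a full CPU thread per task -->
        </gpu_versions>
    </app>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>opencl_nvidia_SoG</plan_class>
        <cmdline>-sbs 1024 -period_iterations_num 1</cmdline>
    </app_version>
</app_config>
```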
 

crashtech

Lifer
Jan 4, 2013
10,554
2,138
146
Power delivery probably. How about putting it on other machine and see if the problem persist.
Do you think so? I might have to wait until after the Sprint; I don't want to take a different, proven combo offline for troubleshooting. I hope it's not the card; market conditions being what they are, there are likely no more GPU purchases in my future for 2018 anyway.
 

bbhaag

Diamond Member
Jul 2, 2011
6,755
2,130
146
Thought I would jump in and help out if I can. I signed up over at SETI@Home and joined the team. I've got BOINC running now, and this is what it looks like. Does this look correct to you guys?
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
Do you think so? I might have to wait until after the Sprint; I don't want to take a different, proven combo offline for troubleshooting. I hope it's not the card; market conditions being what they are, there are likely no more GPU purchases in my future for 2018 anyway.
I had experience with a bad PSU in the past. It was on my friend's PC, though not DC-related; the machine was used for gaming. He suffered a stuttering issue on his 750 Ti that, according to him, had never happened before. After some hours of trial and error, we ended up borrowing someone else's PSU and voila, the problem was gone. It was a pretty old PSU, though; IIRC, an Antec Signature from the early-2010s era.


Thought I would jump in and help out if I can. I signed up over at SETI@Home and joined the team. I've got BOINC running now, and this is what it looks like. Does this look correct to you guys?
Use the advanced view (Ctrl+Shift+A) so you can monitor each task's progress.
 
Reactions: Ken g6 and bbhaag

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
That 100 task limit applies to both CPU and GPU tasks, combined, correct?

********************************

Apologies for using this thread as a dumping ground, so I can transfer my app_config to my Linux box with a 10-series GPU. Yes, I'm too lazy to look for a thumb drive.

Code:
<app_config>
    <app>
        <name>astropulse_v7</name>
        <gpu_versions>
            <gpu_usage>.25</gpu_usage>
            <cpu_usage>.01</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>.25</gpu_usage>
            <cpu_usage>.01</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

Cool! Didn't know I could do that. Thanks!
Oh, if you have a beefy GPU (GTX 1070 or above; R9 Fury or above), you may use Tony's app_config. It will force the BOINC client to crunch 4 GPU tasks simultaneously. Be sure to leave a vacant thread for each GPU task. For example:
My system has 4 available threads, but because I also run 2 GPU tasks simultaneously, I have to spare 2 threads to feed the GPU, leaving the other 2 threads to crunch CPU tasks.
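As a sketch, that thread reservation can be written into app_config.xml directly by raising <cpu_usage> to a whole core, so the client itself budgets one free thread per running GPU task. The values below are illustrative (a gpu_usage of .5 means 2 tasks share each GPU), not Tony's actual settings:

```xml
<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <gpu_usage>.5</gpu_usage>  <!-- 2 tasks share each GPU -->
            <cpu_usage>1</cpu_usage>   <!-- reserve a full CPU thread per GPU task -->
        </gpu_versions>
    </app>
</app_config>
```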
 

bbhaag

Diamond Member
Jul 2, 2011
6,755
2,130
146
Oh, if you have a beefy GPU (GTX 1070 or above; R9 Fury or above), you may use Tony's app_config. It will force the BOINC client to crunch 4 GPU tasks simultaneously. Be sure to leave a vacant thread for each GPU task. For example:
My system has 4 available threads, but because I also run 2 GPU tasks simultaneously, I have to spare 2 threads to feed the GPU, leaving the other 2 threads to crunch CPU tasks.
Unfortunately I don't have a beefy GPU, just an RX 480 with 4 GB of RAM. I am also REALLY new to all this, so I think sticking with the stock configs for now is my best bet. Thanks for the advice though, I really appreciate it, and as I get into this a little more I'm sure I'll try to squeeze every ounce of compute power from my system, haha.
 

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
Here is a look at free-dc for changes in team outputs again. This time there are 39 out of the first 100 teams with an increase of output by more than 10 % (because of the weekend?), so I am listing only teams with more than 20 % increase:
Code:
team              change Sat:Fri  league
----------------------------------------
TeAm AnandTech            1.42       1
[H]ard|OCP                1.69       2
L'Alliance Francophone    1.34       1
XtremeSystems             1.33       2
Czech National Team       1.37       1
Crunching@EVGA            1.68       2
Rechenkraft.net           1.60       1
OcUK - Overclockers UK    1.26       1
Gay USA                   1.29       3
Michigan Tech             1.21       -
Team x86                  1.71       -

Rechenkraft are actively sprinting too; I'll add their forum thread to post #50. Edit: SETI.USA may be sprinting as well; it's impossible to tell from their output curve or from the public sections of their forum.
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
Unfortunately I don't have a beefy GPU, just an RX 480 with 4 GB of RAM.
In setiathome's own ranking, the RX 480 compares quite well to other AMD GPUs:
https://setiathome.berkeley.edu/gpu_list.php

This page does not show a comparison across GPU vendors though, and these automated rankings are unreliable because they cannot take into account whether people run one or more tasks at a time on the same GPU, as far as I understand.

I am also REALLY new to all this, so I think sticking with the stock configs for now is my best bet. Thanks for the advice though, I really appreciate it, and as I get into this a little more I'm sure I'll try to squeeze every ounce of compute power from my system, haha.
Concerning performance tracking and optimization, SETI@Home is a bit special among DC projects because
  • its GPU applications are not orders of magnitude faster than the CPU application,
  • app_config tweaks don't have a particularly big impact on performance (IME with Nvidia cards at least), compared to some other projects where app_config can matter quite a bit,
  • SETI@Home has several GPU application versions, and the server sends all of these to a new client in waves, tracks how each seems to perform, and then tends to send the one which seems best for that client,
  • there exist recompiled or majorly reworked application versions from external developers which may do wonders on fitting hardware,
  • yet these 3rd-party applications are partially obsoleted by newer stock applications, and there are still misleading (or at least overly general) recommendations on the web to use the outdated ones.
But all this shouldn't worry you in the context of a quick 3(+1)-day sprint like this, because there is one other factor which has a huge impact on the sprint result, affecting stock setups and tweaked setups alike (or tweaked setups even more unfavorably):

Only those results count which are validated within the duration of the sprint. SETI@Home, like many other projects, works with a quorum of 2, meaning that each work unit must be computed by two clients independently and must have the same outcome (perhaps within a small range due to allowable rounding errors). In case of SETI@Home, this validation may take a long while, which means that many of the tasks that the contestants complete during this sprint will not be validated before the sprint is over.

Or in short, a big factor in this sprint is the luck (or statistical probability) of how many WUs everybody will get validated before the finish line.

There are strategies for improving this probability, but they typically require micro-management.

Just now I went looking at my task list at setiathome.berkeley.edu, and my ratio of (Validation pending + Validation inconclusive) : (Valid) is 0.6 : 1. The ratio should be lower the slower the hosts are.
A pure CPU host of mine: 0.51 : 1,
a pure GPU host: 0.66 : 1


Edit,
The "setiathome_v8 8.01 (cuda90)" application produces about the most melodic coil whine on my GPUs that I heard from any DC application so far.
 
Last edited:
Reactions: ao_ika_red

StefanR5R

Elite Member
Dec 10, 2016
5,687
8,258
136
now I also went through the motions to put the optimum CPU + GPU applications, both MultiBeam and AstroPulse, on my only host which has a big CPU besides GPUs. While doing so, I even discovered a mistake in my previous CPU-only app_info which prevented me from receiving AP CPU tasks.
So I fixed my app_info for AstroPulse CPU tasks, and this in turn revealed that the Lunatics application always fails with an error on my hosts (illegal instruction). I need to splice the stock application back in, but until then I'm satisfied with running AstroPulse on the GPU.
 

Orange Kid

Elite Member
Oct 9, 1999
4,356
2,154
146
The first sprint of the year is complete. A 4th place in our league, and 4th place overall across all leagues!!
Thanks and congrats to all.
A week to rest, and then next week another sprint.

1 Gridcoin ..................................25 .........31,538,663
2 SETI.USA ...............................18 .........12,104,450
3 SETI.Germany ........................15 .........10,272,164
4 TeAm AnandTech ...................12 ..........5,953,914

5 L'Alliance Francophone ..........10 ...........5,172,934
6 The Knights Who Say Ni! .........8 ...........5,002,199
7 USA ..........................................6 ...........3,928,115
8 Czech National Team ...............4 ...........3,839,654
9 Canada .....................................2 ...........3,692,783
10 The Planetary Society .............1 ...........3,599,978

EDIT: for those interested, the F1 results
March 25: Australian Grand Prix
1. Sebastian Vettel, Ferrari
2. Lewis Hamilton, Mercedes
3. Kimi Raikkonen, Ferrari
 
Last edited: