Info 11th BOINC Pentathlon 2020


StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
Remember folks,
if you have built bunkers, verify that stats tables appear and the sentence "Credits that are granted now will not be counted" disappeared here...
...before you release the results.
While it is probable that the stats will be initialized on time, they might not if the project servers are very busy.

On the other hand, don't be shocked if the Rosetta server is very busy and maybe even unresponsive during the first hour or so after the start.

I on the other hand hope to find everything in best working order tomorrow morning. My computers are set to remain busy until well after sunrise when I'll begin to become functional and can tend to them. :-)
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,736
14,767
136
Remember folks,
if you have built bunkers, verify that stats tables appear and the sentence "Credits that are granted now will not be counted" disappeared here...
...before you release the results.
While it is probable that the stats will be initialized on time, they might not if the project servers are very busy.

On the other hand, don't be shocked if the Rosetta server is very busy and maybe even unresponsive during the first hour or so after the start.
With over 5,000 tasks to upload, just that part alone may take a while. I have 50/50 Mbit internet, but 11 boxes and over 5,000 tasks ??? I will reply with status.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,736
14,767
136
Make sure the BOINC client under //Options: disk/memory usage// does NOT have 'leave non-GPU tasks in memory' selected.
Remember, it's an ES, and only 7 memory channels are working, so it's really only 110 gig for 125 tasks. Not enough for 4.2. I did check that.

64 gig coming Thursday to replace 32 gig; should be enough to fix the problem (32 gig more).
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,736
14,767
136
Well, that's odd... We were up by 950k points an hour ago, and now it's 460k ????
 

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
P3D and DPC have far higher hourly output than we have.
Scroll down at one of the team stats pages, e.g. P3D's, to see a time history graph with hourly updates.
 

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
PS,
also check Rosetta's top_teams to see the activity of some teams during the time before Pentathlon.

DPC for example has been, and still is, faster than even Gridcoin, due to an internal event of their own at Rosetta in April (their "stampede") and due to participation of the datacenter of a Dutch physics research institute in their team (having a RAC of 3.3 M, compared with a RAC of 7.8 M of DPC in total, or of 1.1 M of TeAm AnandTech).

And P3D should have a huge army of Ryzens nowadays.

PPS,
typically, the teams with a small user count and frequent participation in contests (like ourselves) bring bunkers, while teams with a huge user count (such as P3D) rarely have any bunkers to speak of.

There is also an in-between class of teams, with SETI.Germany as an example, who have a large number of users who don't bunker, and less than a handful of users with many computers who sometimes bunker.
 
Reactions: biodoc

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
Marathon at Rosetta@home — about task duration:

[...] when the watchdog was implemented in Rosetta, the developers and admins decided to expose the target CPU time to contributors as a configuration option, allowing them to deviate from the default if they use their computers in a different manner than the project developers anticipated.

The setting is located on the user page at the Rosetta@home web site, in "Rosetta@home preferences". You can choose non-default target CPU times anywhere between 2 hours and 36 hours.

[...] While the tasks themselves are affected as described above, the runtime estimation of boinc-client is not influenced by a change of this setting. Boinc-client merely recalls how long Rosetta tasks used to take in the past, and assumes that new tasks will take just as long. The client will require a very long period of observing further tasks in order to gradually adapt its runtime estimation.

Consequently, not only does boincmgr show estimations which may be way off, but boinc-client is also highly confused when it is meant to fetch work for a buffer of n days duration. It will fetch far too few tasks, or far too many tasks, depending on whether you decreased or increased the target CPU time, and by how much you changed it.
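To put rough numbers on that over/under-fetching, here is a small back-of-the-envelope sketch in Python (my own illustration with made-up numbers, not actual boinc-client code; the 8-hour figure is simply an assumed previous runtime that the client remembers):

# Rough illustration of the work-fetch mismatch (made-up numbers,
# not actual boinc-client code).
def tasks_for_buffer(buffer_days, n_threads, assumed_task_hours):
    # The client requests enough tasks to keep every thread busy for
    # buffer_days, based on what it thinks a single task takes.
    return buffer_days * 24 * n_threads / assumed_task_hours

old_estimate_h = 8    # what the client remembers from past Rosetta tasks (assumed)
new_target_h = 36     # target CPU time raised in the web preferences
buffer_days, threads = 2, 16

requested = tasks_for_buffer(buffer_days, threads, old_estimate_h)
needed = tasks_for_buffer(buffer_days, threads, new_target_h)
print(f"requested ~{requested:.0f} tasks, actually needed ~{needed:.0f}")
# -> requested ~96 tasks, actually needed ~21: about 4.5x too much work;
#    lowering the target instead would make the client fetch too few.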
On May 2, a change to the project scheduler was put to the test at Ralph@home: the scheduler now recognizes the target CPU time preference and sends a corresponding number of tasks in response to work requests.
(Ralph@home forum thread)

Either the same scheduler update, or a simpler one (I am not sure), has been implemented at Rosetta@home now too.
(Rosetta@home forum post by Admin on May 5)
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,736
14,767
136
Now our lead is 50k....

And I have done 1,400,739 of our almost 4 mil
 
Reactions: Assimilator1

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
Going by the latest "last hour" figures, TAAT is 7ᵗʰ in the Rosetta Marathon if I am counting right.
 

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
1st to 3rd in the last 3 hours... dang
So far it looks like none of the other teams built Rosetta bunkers up front as large as ours. Of course, sustained output will be what decides this 14-day Marathon.

--------
Jeeper's next bulletin is out.

Marathon at Rosetta@home:
TeAm AnandTech (TAAT) had the best start, but apparently they are running out of steam now. Newcomer Dutch Power Cows (DPC) has taken the lead, followed by Planet 3DNow! (P3D). A small gap has formed behind them.

Javelin Throw at NumberFields@home:
Do the teams show their utmost on the first day or do they first look at the competition? This consideration is not so easy, because it will take some time before all dates for the Javelin Throw are clear. [...]
What a view! SG at #1! The blue and yellow ones had to wait a long time for it. But nothing is decided yet. CNT (#2) stays close to SG and wants to have a say in the victory. TAAT at #3 is no longer such a big surprise; last year's performances certainly foreshadowed it.
(This was based on 12:00 UTC stats = 3 hours ago.)
 

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
Our bunker uploads in Rosetta@home are now more than 12 hours ago.
Ranked by last 12h output, we are 8ᵗʰ in the Rosetta Marathon.

(Still, our bunkers gave us 2...3 days advantage over teams with similar speed who didn't bunker.)
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,736
14,767
136
Our bunker uploads in Rosetta@home are now more than 12 hours ago.
Ranked by last 12h output, we are 8ᵗʰ in the Rosetta Marathon.
Well, I got almost everything in, over 600 threads, and all my EPYC boxes. Based on my bunkers, about 400k ppd.
 
Reactions: mopardude87

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Is BOINC better than Folding@Home for that thing that shall not be named when it comes to running on the CPU? I heard it got updated for it and I am genuinely curious about this.

I tried to bring a close friend of mine, who is pretty set on using BOINC, over to the Folding@Home team, and perhaps I have been a bit biased just running Folding@Home. I have a 3900X arriving today and want to make sure that when I go to bed and fully work the CPU, I am picking the most optimized program that will take full advantage of it. Well, as full advantage as I can get with a GPU also rendering and using a thread in Folding@Home.

Do keep in mind my build KRONOS will be getting dual 3080 Tis before the end of the year, and I need a client that plays nicely with this, but given I may want to do CPU work, this may not even matter? I know Folding@Home requires a single thread per fah_21/22 exe, which works fine. I am currently sitting on a single 1080 Ti, so I don't know how this client will interact with the Folding@Home one, which will be using a single thread to execute.
 

Endgame124

Senior member
Feb 11, 2008
956
669
136
Is BOINC better than Folding@Home for that thing that shall not be named when it comes to running on the CPU? I heard it got updated for it and I am genuinely curious about this.

I tried to bring a close friend of mine, who is pretty set on using BOINC, over to the Folding@Home team, and perhaps I have been a bit biased just running Folding@Home. I have a 3900X arriving today and want to make sure that when I go to bed and fully work the CPU, I am picking the most optimized program that will take full advantage of it. Well, as full advantage as I can get with a GPU also rendering and using a thread in Folding@Home.

Do keep in mind my build KRONOS will be getting dual 3080 Tis before the end of the year, and I need a client that plays nicely with this, but given I may want to do CPU work, this may not even matter? I know Folding@Home requires a single thread per fah_21/22 exe, which works fine. I am currently sitting on a single 1080 Ti, so I don't know how this client will interact with the Folding@Home one, which will be using a single thread to execute.
I'm running Folding at home on my GPUs and Rosetta at Home (BOINC) on my CPUs. They work fine together, and if you look at how F@H awards points, GPUs get 10x the points that CPUs get within a similar generation (1080ti vs Ryzen 2700x). They are, essentially, telling you that your CPUs are best used to drive GPUs for F@H.

Personally, with so much support for F@H right now, even if points were even between GPU and CPU, I would still probably have my CPUs on Rosetta. Contributors that want work at F@H are still occasionally having problems with WU assignment (though the situation is better than 3 weeks ago) while Rosetta isn't having an issue at all - so if I move a CPU from F@H to Rosetta, I'm giving someone else that is only running F@H a chance to process WUs while I can have every CPU in my house still running a Protein Folding Project.
 
Reactions: mopardude87

StefanR5R

Elite Member
Dec 10, 2016
5,681
8,227
136
@mopardude87, if you already used a F@h CPU slot alongside the F@h GPU slot, and figured out how many threads you can put towards the CPU slot without affecting the GPU slot's performance — then you could replace that CPU slot by a BOINC client and set it to the same number of logical CPUs.

There is only one extra consideration when you run Rosetta@home specifically: You need about 0.8...0.9 GB of spare RAM for each CPU thread that you enable in BOINC – on average. Peak usage of these threads can be 1.4 GB.

(On my computers with few cores, I frequently had situations in which almost all, or even all, simultaneous tasks took the maximum of 1.3...1.4 GB each. On my computers with more cores, I always had a good mix of tasks such that the RAM usage remained at the average of 0.8...0.9 GB.)
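If you want to sanity-check a box before enabling threads, here is a tiny sketch (my own illustration; the 0.9/1.4 GB per-task figures are just the rough estimates from above, nothing the project guarantees):

# Estimate how many Rosetta@home threads fit into the RAM you can spare.
# The per-task figures are just the rough averages/peaks mentioned above.
def max_rosetta_threads(spare_ram_gb, avg_gb=0.9, peak_gb=1.4):
    by_average = int(spare_ram_gb // avg_gb)   # optimistic: good mix of tasks
    by_peak = int(spare_ram_gb // peak_gb)     # pessimistic: every task peaks at once
    return by_average, by_peak

avg, worst = max_rosetta_threads(spare_ram_gb=24)
print(f"~{avg} threads if usage averages out, ~{worst} if every task peaks")
# e.g. 24 GB spare -> ~26 threads on average, but only ~17 worst case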

As for the science that gets done: What @Endgame124 said. Also, the kind of simulations which R@h and F@h do are kind of complementary in general. Perhaps somebody here who has deeper understanding of it can chime in and explain.

And of course, for TeAm AnandTech as a team, the BOINC Pentathlon is one of the most (arguably, the most) prominent competitions we take part in throughout the year. :-)
 
Reactions: mopardude87

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
@mopardude87, if you already used a F@h CPU slot alongside the F@h GPU slot, and figured out how many threads you can put towards the CPU slot without affecting the GPU slot's performance — then you could replace that CPU slot by a BOINC client and set it to the same number of logical CPUs.

There is only one extra consideration when you run Rosetta@home specifically: You need about 0.8...0.9 GB of spare RAM for each CPU thread that you enable in BOINC – on average. Peak usage of these threads can be 1.4 GB.

I tried to fold on the CPU with F@H; I found the rewards very little to none, as you have probably seen me mention before in the F@H thread. I lost PPD on the GPU, so the 7700K has been idle. I even set 2 threads aside for the GPU and still was losing PPD on the GPU. I got more points with the CPU not folding at all, so yeah, it's been kind of abandoned, but I figured maybe it's the 7700 paired with "slow" 2666 MHz memory and running the onboard HD 630 causing the issues more than anything. Without the HD 630 the desktop just lags too much, but it's silky smooth when enabled and used as the primary driver. I had the same experience with CPU folding prior to enabling the HD 630 as well. I want to maybe get back into it again with the 3900X, which will have 16 GB of 3600 MHz memory.

About the RAM usage and threads, it may be an issue, as my build will eventually be gaming on one GPU with 12 threads set aside for that and OS use, while the other GPU will be crunching F@H at the same time with possibly 11 threads dedicated to folding and 1 to the GPU client, assuming such a setup does not choke first. The games will be mostly older ones, like BF4 for example, and if I get in the mood for serious gaming then I will probably pause all folding and resume when done. I game very little, but I get my moods at times. Both GPUs and the CPU will fold late evenings when I crash for the night, or if I'm AFK and gone.

With my mentioned use case, how many threads could I actually devote to something like BOINC before it's an issue? Assuming I could even set a certain number of threads, of course; hopefully it has such options to tinker with. I may build a dedicated gamer/secondary folder in 2021 if this setup proves problematic, but I want to have a bit of fun with it first and just see how it handles things. Yeah, it's a radical idea, but one I have not toyed with since running 2 games on the same CPU on a Q6600 back in 2007.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,736
14,767
136
@mopardude87, I had to upgrade my RAM to 32 gig on all my 3900Xs.

As for folding and Rosetta, yes, GPUs on folding and whatever CPU is left on Rosetta is the way to go; they work fine together, and all my rigs are configured that way. I tell BOINC to use all of the CPU time, but only 90 or 95% of the CPUs. That leaves about 2 threads for the video card, which is fine. So you end up with 22 threads on the CPU doing Rosetta and 2 working the video cards.
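If you want to see what a given "use at most N% of the CPUs" setting works out to on a particular chip, here is a rough sketch (my own approximation of the arithmetic, not BOINC's actual code):

# Rough translation of BOINC's "use at most N% of the CPUs" preference
# into thread counts. Floor rounding is my assumption; BOINC's exact
# behaviour may differ slightly.
import math

def boinc_threads(total_threads, cpu_pct):
    used = math.floor(total_threads * cpu_pct / 100)
    return used, total_threads - used

for pct in (90, 95, 100):
    used, free = boinc_threads(24, pct)   # 24 threads on a 3900X
    print(f"{pct}% -> {used} threads for Rosetta, {free} left over for the GPU")
# 95% of 24 threads leaves 2 free, which matches the 22 + 2 split above.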
 
Reactions: mopardude87

Endgame124

Senior member
Feb 11, 2008
956
669
136
@mopardude87, I had to upgrade my RAM to 32 gig on all my 3900Xs.

As for folding and Rosetta, yes, GPUs on folding and whatever CPU is left on Rosetta is the way to go; they work fine together, and all my rigs are configured that way. I tell BOINC to use all of the CPU time, but only 90 or 95% of the CPUs. That leaves about 2 threads for the video card, which is fine. So you end up with 22 threads on the CPU doing Rosetta and 2 working the video cards.
On older hardware, like my old AMD APUs, I only have 4 threads. If I tell BOINC to use only 75% of the CPUs (leaving 1 CPU for folding), my PPD drops by 50k-100k on my A10-5800K. If I only use 50% of the CPUs (2 CPUs free), I pick up that 50-100k PPD, but I gain no extra PPD by having all 4 CPUs dedicated to folding. I settled on 75% of CPUs - I'll take the slightly longer turnaround times when I do get folding WUs, so that I have 3 threads running Rosetta when I fail to get F@H WUs.

On my Ryzen 2700X, I've set BOINC to use 15 of the 16 CPUs and my 1080 Ti folds all the time. There appears to be no gain in PPD by reserving 2 CPUs vs 1 CPU, BUT I can't say it gets consistent enough work from F@H to really verify that.
 
Reactions: mopardude87