Recent Changes in projects


StefanR5R

Elite Member
Dec 10, 2016
6,056
9,106
136
Prediction has been restarted.

Nice, a chemistry project!

Thanks for the pointer. I added their news feed to teamanandtech.org's Distributed Computing News.

https://uspex-at-home.ru/prediction/stats/ is still empty. Once files show up there, expect coverage in our Weekly Stats.

Edit:
Windows only atm, 1 task per host, VirtualBox only.
I wonder how hard it could be to make it run on Linux via app_info.xml. Also…
At least it's not another Russian math project.

From the basic info, it looks like it's attempting to predict the Ion Conductivity of Lithium Phosphides, at least in part to try to improve battery technology. Could be interesting, but probably won't be added to the stats sites for a while due to the .ru site domain.
…some sanity checks that the application is doing what it is specified to be doing would be good. I have no idea how to approach this though, beyond looking at what files are in the VM image. Looks like the code was free and open in older versions but is proprietary in the current version (download site).
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
6,056
9,106
136
MilkyWay@home still hasn't finished its server migration:

A while ago they changed the project URL to http://milkyway-new.cs.rpi.edu/milkyway/. In DNS, the host names milkyway.cs.rpi.edu and milkyway-new.cs.rpi.edu resolved to the same address. On the server, HTTP requests to milkyway.cs.rpi.edu were redirected to milkyway-new.cs.rpi.edu.

Last night they changed the project URL back to http://milkyway.cs.rpi.edu/milkyway/. The host name milkyway-new.cs.rpi.edu was removed from DNS. #-(

If you followed the earlier boinc-client notices that the project is at -new now, and changed to the -new project URL, you need to change it back to the old one now (by removing and re-adding the project, as usual in such cases). If you currently have work in progress on your client which you would like to complete (or abort) and report before you remove the now broken -new project URL, you first need to add the line
128.113.126.54 milkyway-new.cs.rpi.edu
to /etc/hosts or C:\WINDOWS\system32\drivers\etc\hosts (after excluding this file from Windows Defender's file monitoring, on some Windows versions).
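
A quick way to check that the hosts entry is actually being picked up by the resolver, if you want it (this little snippet is mine, not part of any BOINC tooling):
Code:
import socket

# Should print 128.113.126.54 once the hosts entry is in place.
# An exception or a different address means the client will still fail
# to reach milkyway-new.cs.rpi.edu.
print(socket.gethostbyname("milkyway-new.cs.rpi.edu"))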
 
Reactions: Orange Kid

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,389
15,513
136
Something has happened at Rosetta. After months of not being able to get work while I have WCG and Rosetta set equally, I now have work for both, but for some reason all my boxes are running Rosetta, and Rosetta beta tasks!

Edit: what does this mean?

"7950x main 45801 Rosetta@home 12/6/2023 9:53:17 PM Tasks won't finish in time: BOINC runs 99.9% of the time; computation is enabled 100.0% of that "
 
Last edited:

Fardringle

Diamond Member
Oct 23, 2000
9,199
765
126
I have been getting the same message about the tasks possibly not completing on time as well. I think the tasks in this batch had shorter run time estimates than the actual run times so my computers downloaded more work than they should have. But they still seem to be (mostly) getting done anyway.
 

Fardringle

Diamond Member
Oct 23, 2000
9,199
765
126
what is my queue? and how do I fix that?
In BoincTasks, click on the computer name in the list on the left, then click the Extra menu option, then click BOINC Preference.
In the BOINC Settings window, click on the Network tab, then change the two work buffer lines. I have mine set to 0.50 days, but you can pick whatever works best for you. The task deadlines on these Rosetta tasks are four days, so just make sure the total is less than that. And especially in this case where the task estimated run times are too low, probably even set it a bit lower than you think you want it to be for now.

Unfortunately, I don't know of any way to set it on all computers at the same time as BoincTasks can only set it on one computer at a time, so it will be a bit tedious to change the setting on all of your computers.

Or, you can just leave them the way they are and acknowledge that some of your tasks might time out before they are completed. As far as I am aware, that doesn't really have any negative impact on anything unless you just don't want to have the task stats on your account on the project page show any "errors"...
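
For what it's worth: as far as I know, these local preferences end up in global_prefs_override.xml in each client's data directory, so if you can reach the hosts' data directories (network share, ssh, or the like) the change can be scripted rather than clicked through per machine. A rough sketch of the idea, with example paths and values (not a BoincTasks feature, adjust everything to your setup):
Code:
# Rough sketch, not a BoincTasks feature: write the work buffer settings
# into global_prefs_override.xml and ask the client to re-read the file.
# The path and the values are examples; for remote hosts you would need
# access to their data directories plus boinccmd's --host/--passwd options.
import subprocess
from pathlib import Path

OVERRIDE = """<global_preferences>
   <work_buf_min_days>0.5</work_buf_min_days>
   <work_buf_additional_days>0.1</work_buf_additional_days>
</global_preferences>
"""

data_dir = Path("/var/lib/boinc-client")   # e.g. C:\ProgramData\BOINC on Windows
(data_dir / "global_prefs_override.xml").write_text(OVERRIDE)
subprocess.run(["boinccmd", "--read_global_prefs_override"], check=True)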
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,389
15,513
136
In BoincTasks, click on the computer name in the list on the left, then click the Extra menu option, then click BOINC Preference.
In the BOINC Settings window, click on the Network tab, then change the two work buffer lines. I have mine set to 0.50 days, but you can pick whatever works best for you. The task deadlines on these Rosetta tasks are four days, so just make sure the total is less than that. And especially in this case where the task estimated run times are too low, probably even set it a bit lower than you think you want it to be for now.

Unfortunately, I don't know of any way to set it on all computers at the same time as BoincTasks can only set it on one computer at a time, so it will be a bit tedious to change the setting on all of your computers.

Or, you can just leave them the way they are and acknowledge that some of your tasks might time out before they are completed. As far as I am aware, that doesn't really have any negative impact on anything unless you just don't want to have the task stats on your account on the project page show any "errors"...
I am at 3.0 and 0.5
 

mmonnin03

Senior member
Nov 7, 2006
248
233
116
Deadline is 3 days for these Rosetta tasks so your queue/work buffer is asking for more than you can complete before the deadlines assuming ETAs match actual run times.
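
To put rough numbers on it (just a back-of-the-envelope illustration, not BOINC's actual work-fetch logic):
Code:
# Back-of-the-envelope only, not BOINC's real scheduler logic: with the
# settings mentioned above, the client tries to hold about 3.0 + 0.5 days
# of estimated work, but these tasks expire after roughly 3 days.
min_buffer_days = 3.0      # "Store at least X days of work"
extra_buffer_days = 0.5    # "Store up to an additional X days of work"
deadline_days = 3.0        # reported Rosetta deadline

requested = min_buffer_days + extra_buffer_days
print(f"queue target ~{requested} days vs. a {deadline_days} day deadline")
if requested > deadline_days:
    print("tasks at the back of the queue cannot finish before their deadline")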
 
Reactions: Fardringle

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,389
15,513
136
Deadline is 3 days for these Rosetta tasks so your queue/work buffer is asking for more than you can complete before the deadlines assuming ETAs match actual run times.
Thank you. I changed it to 1.0 for all the computers currently running.
 
Reactions: cellarnoise

StefanR5R

Elite Member
Dec 10, 2016
6,056
9,106
136
PrimeGrid Genefer on GPUs:
Noticed this on the PG forums today.

Seems we have moved to a new level on GFN 20.
The same slower transform called "ocl4(high)" is already in use by GPUs in GFN-16…-19.

GFN-21 and up still have a ways to go before they switch likewise.
GFN-21: current leading edge¹ b = 1,261,310 | b limit² = 2,019,124 (ocl5), 2,097,152 (ocl3)
GFN-22: current leading edge b = 320,598 | b limit = 1,427,736 (ocl5), 1,482,910 (ocl3)
DYFL a.k.a. GFN Extreme: current leading edge b = 1,121,882 | b limits like GFN-22

________
¹) https://www.primegrid.com/stats_genefer.php
²) https://www.primegrid.com/forum_thread.php?id=4152&nowrap=true#51084
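
For a sense of scale of those leading-edge b values (my own back-of-the-envelope, not PrimeGrid's numbers): a GFN-n candidate has the form b^(2^n)+1, so its length in decimal digits is roughly 2^n·log10(b).
Code:
from math import log10

# Rough digit counts for the leading-edge b values quoted above.
# A GFN-n candidate is b**(2**n) + 1, so its decimal length is about
# 2**n * log10(b).
leading_edge = {21: 1_261_310, 22: 320_598}   # n -> current leading-edge b
for n, b in leading_edge.items():
    digits = int(2**n * log10(b)) + 1
    print(f"GFN-{n}: b = {b:,} -> roughly {digits:,} decimal digits")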
 

gsrcrxsi

Member
Aug 27, 2022
55
28
61
PrimeGrid Genefer on GPUs:

The same slower transform called "ocl4(high)" is already in use by GPUs in GFN-16…-19.

GFN-21 and up still have a ways to go before they switch likewise.
GFN-21: current leading edge¹ b = 1,261,310 | b limit² = 2,019,124 (ocl5), 2,097,152 (ocl3)
GFN-22: current leading edge b = 320,598 | b limit = 1,427,736 (ocl5), 1,482,910 (ocl3)
DYFL a.k.a. GFN Extreme: current leading edge b = 1,121,882 | b limits like GFN-22

________
¹) https://www.primegrid.com/stats_genefer.php
²) https://www.primegrid.com/forum_thread.php?id=4152&nowrap=true#51084
Stefan,

What is meant by the terms "ocl[n]"? Does the number n hold any significance other than as an identifier?

Does o-c-l mean anything in particular? Does this reference "OpenCL", or are these just internal names given to the different algorithms in use in these various apps? Can you expand on this?
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,410
4,175
75
Stefan,

What is meant by the terms "ocl[n]"? Does the number n hold any significance other than as an identifier?

Does o-c-l mean anything in particular? Does this reference "OpenCL", or are these just internal names given to the different algorithms in use in these various apps? Can you expand on this?
ocl[n] refers to particular algorithms using, yes, OpenCL, to test the primality of generalized Fermat numbers ("Gene-Fer" or GFN). Generally, the low-numbered algorithms are faster than the high-numbered ones, but the high-numbered ones can process larger numbers. That's not a hard-and-fast rule, either - I think the largest numbers are tested by one of the lowest-numbered algorithms, rather slowly.
 

gsrcrxsi

Member
Aug 27, 2022
55
28
61
I know that the apps are OpenCL, but I wasn't sure if the algorithm name "ocl[n]" was supposed to reference OpenCL directly or if it was coincidentally named ocl in reference to something else. Like if there was some standard algorithm naming convention in the maths world that I was ignorant of. Like how computational complexity or the limiting behavior of a function is often described in terms of big O notation.

Thanks for the clarification.
 

StefanR5R

Elite Member
Dec 10, 2016
6,056
9,106
136
@gsrcrxsi, the names of the transforms were given to them by their inventor Yves Gallot. As @Ken g6 said, "ocl" refers to the fact that they are using the OpenCL programming interface. AFAIU, the numbers in these names were mostly arbitrarily given to these transforms as Yves created them.

Here is a mapping of names of the GPU transforms to data formats used in these transforms:
https://www.seti-germany.de/forum/t...kussion/page24?p=338921&viewfull=1#post338921 (in German)
Of these, the "ocl" transform is the only one which uses floating point numbers. This transform is no longer in use at PrimeGrid because all of the Genefer subprojects are past the b limit of it by now.

Like the "ocl" GPU transform, and in contrast to the "ocl2"…"ocl5" GPU transforms, the CPU transforms are all floating point based. There is a very slow but very precise "x87" transform which uses classic floating point math. And there are several (AFAIU) FP64 based CPU transforms which use vector arithmetic. (Yves Gallot called them "sse2", "sse4", "avx", "fma3", and "512".) They are naturally much faster and more energy efficient, but happen to be less precise and therefore have much lower b limits than the "x87" transform.

Which transform is used on a given host for a given workunit is auto-detected by Genefer at the start of a task by default, or can be specified by a command line parameter. If a Genefer task starts out using one of the FP64 transforms (the "ocl" GPU transform back when it was still usable at PrimeGrid, or the vector arithmetic transforms on CPUs), it accumulates a numeric error during the run time of the task. The current error is checked several times over the task's duration. If it reaches a certain limit, Genefer switches over to one of the "ocl2"…"ocl5" transforms on GPUs or the "x87" transform on CPUs. Once the numeric error is back at a low level, the task might switch up to the faster FP64 transform again.
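
The mechanism, as I understand it, is roughly the following (my own toy illustration in Python, not code from genefer22; the names and the threshold are invented):
Code:
# Toy illustration of the downgrade-on-error behaviour described above.
# NOT actual genefer22 code; the names and the threshold are made up.
ERROR_LIMIT = 0.25  # hypothetical round-off error threshold

def next_transform(current, round_off_error):
    """Decide which transform to continue with after an error check."""
    fast_fp64 = {"sse2", "sse4", "avx", "fma3", "512"}  # vectorized FP64 CPU transforms
    if current in fast_fp64 and round_off_error > ERROR_LIMIT:
        return "x87"   # slow but precise; the task keeps its progress so far
    return current

# toy run: the error grows as the task proceeds, so the transform is downgraded
transform = "fma3"
for checkpoint, err in enumerate([0.05, 0.12, 0.31, 0.09]):
    transform = next_transform(transform, err)
    print(f"check {checkpoint}: error={err:.2f} -> continue with {transform}")

# (On GPUs, the role of "x87" in this sketch would be played by the
#  integer "ocl2"…"ocl5" transforms, as described above.)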

The "ocl" FP64 transform was only used on GPUs if Genefer auto-detected that the respective GPU had fast FP64 support.

Here is the source code: https://github.com/galloty/genefer22
edit: typo
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
6,056
9,106
136
PS,
Once the numeric error is back at a low level, the task might switch up to the faster FP64 transform again.
on second thought, I am not sure anymore if this kick back into high gear happens automagically. It is at least possible to trigger a switch back into the fastest transform by means of a suspend-to-disk/resume cycle of a task.

The automatic downgrading from a faster to a slower transform during the run time of a task pertains only to the FP64 based transforms (nowadays, the vectorized CPU transforms). In the case of the integer based GPU transforms "ocl2"…"ocl5", Genefer picks a transform with a sufficiently high b limit right from the start and should never switch to a different transform after the fact, at least as long as there aren't any hardware induced errors (bad overclocks/undervolts…).
 
Last edited:
Reactions: gsrcrxsi

StefanR5R

Elite Member
Dec 10, 2016
6,056
9,106
136
GPUGrid
is testing a new application. It requires Linux and an Nvidia GPU (Pascal or later, with more than 2 GB VRAM per task).
Message boards : News : PYSCFbeta: Quantum chemistry calculations on GPU
In parts it might benefit from FP64 hardware. An initial download of ~2 GB is involved. An option to select/deselect this particular application in the project preferences was added just a few days ago. Right now there is no unsent work, but they plan to send more sometime soon.
 
Reactions: gsrcrxsi

gsrcrxsi

Member
Aug 27, 2022
55
28
61
GPUGrid
is testing a new application. It requires Linux and an Nvidia GPU (Pascal or later, with more than 2 GB VRAM per task).
Message boards : News : PYSCFbeta: Quantum chemistry calculations on GPU
In parts it might benefit from FP64 hardware. An initial download of ~2 GB is involved. An option to select/deselect this particular application in the project preferences was added just a few days ago. Right now there is no unsent work, but they plan to send more sometime soon.
This app definitely uses FP64, and to some extent even Tensor cores (if available). I was seeing very fast throughput on my Titan V, easily 3-6x faster than high-end RTX cards like 3090s or 4090s.
 
Reactions: StefanR5R

pututu

Member
Jul 1, 2017
151
229
116
Since the MW GPU app ended, I finally found a good use for the Tesla P100 card that was sitting in storage. It is like 5x faster than my 3060 Ti.
 
Reactions: gsrcrxsi

Skillz

Senior member
Feb 14, 2014
970
999
136
Apparently WEP pulled the plug without telling anyone. PDW reached out to them to ask questions and this was the response.

Hi, WEP has been down for quite a while so I emailed James to ask about it last night, his final response said...


I do actually have reserve servers all setup and ready to go - some even significantly faster than the old server! However I wanted to stop mainly for two reasons - 1) I’m now in my late fifties, and my wife and I have both been planning retirement preferment this year anyway.
2) I think the algorithm I was using will work significantly better on bigger numbers, and I don’t think I’ve quite got the energy to make a fairly big change to the setup.
I’m probably going to be continuing manually (ie not BOINCified) with my own farm crunching on the Mersenneplustwo numbers tho, as that’s very easy to do…

It’s been a wild ride (at times) for 17 years, and I’ve enjoyed every minute of it!!!

>
> If it's dead I'll let others know, there's been no other communication to advise what's happening.

Thanks very much - feel free to quote anything I’ve written


...So if you want to post [all or some of that] on your forum, others can find out through TAAT!

Regards,
PDW
 

mmonnin03

Senior member
Nov 7, 2006
248
233
116
The time has come to inform you about what will happen next with the Universe@Home project.

As you know, Krzysztof Belczyński was the initiator and lead scientist of the project - he was the head of our small team. His sudden passing caused a bit of chaos, which we are gradually trying to manage. Unfortunately, the latest changes in the application code that were being implemented also disappeared with him. For this reason, as well as the necessity to introduce changes to the application code that had been planned for some time, we have decided to temporarily suspend the generation of new WUs.

As of today, we already know that Universe@Home will not end - it is being taken under the wing of Prof. Tomasz Bulik (https://nauka-polska.pl/#/profile/scientist?id=23855&lang=en&_k=ta6rhj). Nevertheless, we must ask for your patience; at this moment, the most important thing is to manage the organizational and formal matters and to work on the changes in the code. This should take us two to three months.

I will keep you updated.

 