lane42 (Diamond Member) said:
Raising the CPU count on my laptop to 400 CPUs (http://setiathome.berkeley.edu/hosts_user.php?userid=1281) only got me 100 workunits.
Once Seti@home gets its servers back up, I'll be in.
The userid required for registration can also be found locally on your host, in the BOINC data directory, in a file called sched_reply_setiathome.berkeley.edu.xml.
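For anyone who wants to script that lookup, here is a minimal sketch; the /var/lib/boinc-client path and the presence of a <userid> element in the reply file are assumptions, so adjust for your install:
Code:
# Sketch: pull the numeric userid out of the scheduler reply file.
# Path assumes a typical Linux BOINC data directory; the <userid> tag
# is assumed to be present in sched_reply_setiathome.berkeley.edu.xml.
import re
from pathlib import Path

reply = Path("/var/lib/boinc-client/sched_reply_setiathome.berkeley.edu.xml")
match = re.search(r"<userid>(\d+)</userid>", reply.read_text())
print(match.group(1) if match else "no <userid> tag found")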
I could have switched over to PrimeGrid and crunched with them... But normally I try to line up my own downtime, cleaning fans and dusting out the radiators or doing upgrades, with the SETI downtime, and often I'm not really around to check when the rig is thirsty. I feel a bit guilty about starting and stopping work on different projects when one goes down for a while, unless it were somehow automated and I didn't really have to do anything. I suppose I could split the difference, and I know some people do that. Though if you did do it 50/50 and SETI goes down, does that pool the resources to 100% of the other project?
The resource splits by percentages virtually never work as expected. The BOINC client has a complicated and less than intuitive algorithm for deciding which of the enabled projects to run, involving the resource percentages, the "recent" history of per-project credits, task deadlines, and whatnot.
Sorry, I'm getting off topic here for this WOW event. Hopefully during this WOW event they don't pull 2.5 days of downtime.
IMO it's perfectly on-topic. I am still undecided myself how I will manage Maintenance Tuesday (is it still on Tuesdays?) during the Wow!-Event. Let the hosts sit idle? Probably not, but maybe if the weather is hot. Set up multiple clients per host, as Tony suggested? If so, do I run them in parallel or one after the other? I am not sure, but I believe I had several clients running in parallel during maintenance downtime at last year's Wow!-Event. Or do I take it easier with SETI this time and run a 0 % backup project instead? ... Decisions, decisions.
Because the scheduler sees them as 1 device, no matter how many CPU cores there are. With GPUs, however, you can say you have 3 and you'll get 300 WUs.
Oh right, this would be another alternative: patch the client to have a GPU equivalent of cc_config.options.ncpus.
You may specify this in cc_config using:
<coproc>
<type>some_name</type>
<count>1</count>
<device_nums>0 2</device_nums>
[ <peak_flops>1e10</peak_flops> ]
[ <non_gpu/> ]
</coproc>
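For comparison, the CPU-side option mentioned above (cc_config.options.ncpus) already exists; a sketch of a cc_config.xml that makes the client act as if the host had 8 CPUs (8 is just an example value) would be:
Code:
<cc_config>
  <options>
    <ncpus>8</ncpus>
  </options>
</cc_config>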
Do other projects have these issues? Or is this just a SETI thing?
Of the big projects, World Community Grid never seems to have an outage. (I run it only occasionally, so I may have missed some.) Folding@home has work servers at several sites, and when some of them are down, the F@H client usually manages to switch to another working one.
PrimeGrid's servers get free hosting from Rackspace, which is pretty reliable.
They have their own server now; Rackspace ended their partnership.
They were given another year with Rackspace and thus are saving money in 2018 - https://www.primegrid.com/forum_thread.php?id=7592&nowrap=true#113021
You may specify this in cc_config using the <coproc> block above.
I need to learn using this.
Now it says:
Code:
01/08/2018 21:05:26 | | Unrecognized tag in cc_config.xml: <coproc>
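Presumably the placement was the problem: in cc_config.xml, <coproc> (like the other options) has to sit inside the <options> element, with everything wrapped in the usual <cc_config> root, and the client has to be new enough to know the tag at all. Nested that way, the relevant section looks like this: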
<options>
<allow_multiple_clients>1</allow_multiple_clients>
<allow_remote_gui_rpc>1</allow_remote_gui_rpc>
<coproc>
<type>some_name</type>
<count>1</count>
<device_nums>0 2</device_nums>
[ <peak_flops>1e10</peak_flops> ]
[ <non_gpu/> ]
</coproc>
</options>
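One follow-up note: after editing cc_config.xml the client has to re-read it, either via the Manager's "Read config files" menu item or with boinccmd --read_cc_config; otherwise the change only takes effect at the next client restart.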
Thanks. It seems there's quite a lot of development on the GPU app (I've already used it since SETI.WOW last year). I'll try it later.
http://mikesworld.eu/download.html - I don't have ATI, but you could try this and download the SETI v8 ATI config and see if it works. It's optimized for newer GPUs...
One question: do we have to uninstall the old Lunatics app?
Source, found via the user's list of posts, which I looked up after coming across another post. petri33 at the setiathome forum said:
If and only if the current surge of GBT VLAR (blc162bit guppi) WUs keeps coming in, I'll be hitting more than 400 000 credits a day.
{
Titan V: 33 seconds
1080Ti : 58 seconds
1080 (*2): 77 seconds
}
A reduction of memory writes and subsequent reads in the first phase of pulse calculations has yielded a 25% speedup in all VLAR tasks.
To differentiate the good work from some development stages I have renamed the latest results to x41p_V0.93.
And yes. I know I have 2800 inconclusives. It will drop.
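For anyone wondering what "a reduction of memory writes and subsequent reads" looks like in practice, here is a generic, heavily simplified sketch of the idea - not petri33's code, and plain Python rather than the CUDA his special app uses. The two-pass version writes intermediate folded sums to a buffer and reads them back; the fused version keeps the running value in a local variable and skips the intermediate traffic entirely:
Code:
# Generic illustration only: hypothetical data and names, not the SETI
# pulse-finding code. Both functions return the largest prefix sum.
def peak_two_pass(power):
    sums = [0.0] * len(power)      # intermediate buffer: extra writes
    running = 0.0
    for i, p in enumerate(power):
        running += p
        sums[i] = running          # write pass
    return max(sums)               # read pass over the whole buffer

def peak_fused(power):
    best = float("-inf")
    running = 0.0
    for p in power:                # single pass, no intermediate buffer
        running += p
        best = max(best, running)
    return best

assert peak_two_pass([0.5, 1.0, 0.25]) == peak_fused([0.5, 1.0, 0.25])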