Formula Boinc Sprints 2018


StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
The reason why it's better is that it doesn't use much (if any) RAM. I'm able to run all 72 cores of one machine with a measly 64 GB of RAM.
OK, further reading is telling me that (a) several tasks share libraries, and (b) brief spikes in memory consumption at the beginning of a task can be absorbed by paging out to swap.
 

TennesseeTony

Elite Member
Aug 2, 2003
4,235
3,668
136
www.google.com
72 cores and 64GB RAM is measly?

I'll go to LHC eventually; I would hate to have to chase down Marathon projects for the next 35 days because we skipped some Sprint points.
 
Last edited:

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
Instructions are here. I have it set up on Mint 18.3.

The reason why it's better is that it doesn't use much (if any) RAM. I'm able to run all 72 cores of one machine with a measly 64 GB of RAM.
According to the old formula (1.4 GB + n × 0.8 GB; n = number of allocated cores), your client has more than enough RAM.
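(For illustration: plugging n = 72 into that formula gives 1.4 GB + 72 × 0.8 GB ≈ 59 GB, which indeed fits, just barely, into the 64 GB of that host.)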
 

zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
HA HA!!!! Yep, wearing Alabama colors is considered a capital offense during 'Uhmurikan' college Football season.

Yeah, LHC.
I'm glad you caught the joke

You aughta hear what the jawja folks say. Let's not think about south cackliacky


LHC it is
 

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
72 cores and 64GB RAM is measly?
64 GB RAM for 72 cores is measly. (Due to high RAM prices, my own dual-processor hosts have only 64 GB too, incidentally.)

According to the old formula (1.4 GB + n × 0.8 GB; n = number of allocated cores), your client has more than enough RAM.
Yes, but this is just one formula of several that you can find at the LHC@home site. And like so much other info at the site, it is outdated/wrong/misleading. Keep this in mind whenever you look anything up at that site. (Also take into account that n should not be set too high. Scaling is limited, and the application has single- and low-threaded periods, IIRC.)

Another, more recently posted formula, from a user whose insight into LHC@home I trust to some extent, is 3.0 GB + 0.9 GB/thread for each vbox-based ATLAS@home task.
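
As a quick sanity check, here is a minimal sketch (plain Python, nothing LHC-specific; the constants are simply the two estimates quoted in this thread, so treat the numbers as rough guidance only):

Code:
def ram_old_formula(n_cores):
    """Old LHC@home estimate: 1.4 GB base + 0.8 GB per allocated core."""
    return 1.4 + 0.8 * n_cores

def ram_atlas_vbox(n_threads):
    """Newer estimate for one vbox ATLAS task: 3.0 GB + 0.9 GB per thread."""
    return 3.0 + 0.9 * n_threads

print(ram_old_formula(72))   # 59.0 GB -> just fits into a 64 GB host
print(ram_atlas_vbox(8))     # 10.2 GB for a single 8-thread ATLAS task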

LHC it is
While I am not saying here whether or not I will participate in this sprint, everybody can easily find that I never run LHC@home outside of competitions, like Pentathlon 2017. SixTrack tasks are nonsense, the vbox-based applications are nonsense, and (at least) the external dependencies of the native Linux ATLAS application are nonsense.
 
Last edited:

crashtech

Lifer
Jan 4, 2013
10,578
2,146
146
Can clients get blacklisted by the LHC servers? I have at least one that can't get anything at all, not one little task.
 

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
If the host doesn't have VirtualBox (and can therefore only receive SixTrack work), then the most likely explanation is that others snagged all the tasks before you the last time the work generator created some. Its output rate is far lower than contributor capacity, obviously.

(edit, PS:
CERN itself is able to consume SixTrack work a lot faster than the work generator is producing it. -> top_teams.php)
 
Last edited:
Reactions: ao_ika_red

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
LHC is much more frustrating in a sprint than when I ran it on my own. What's the point of having a reliable server without sufficient tasks? And the VBox-powered subprojects don't help ease the demand on SixTrack, because their hardware requirements are quite ridiculous and the tasks sometimes get stuck for hours.

Oh, Happy Thanksgiving to all who celebrate it. I can't wait to see your hardware upgrades the following day.
 
Reactions: TennesseeTony

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
I took the total of the RACs of the top 300 teams and found:
69 % ........ CERN
16 % ........ Gridcoin
15 % ........ all other teams

IOW, at this time the public side of LHC@home is just a gimmick. CERN could run all the current workload internally just fine.
 
Last edited:

crashtech

Lifer
Jan 4, 2013
10,578
2,146
146
I've lost many hours of compute time due to various inscrutable VM errors. LHC is a terrible Sprint project, imo.
 

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
Looks like the multi-core Theory tasks finally made it to the main site since I came back (I've been crunching them on the Dev site for a while).
As with SixTrack, Theory work is no longer available. Even most of the Theory tasks which could be fetched last night may be idle now, because they can't side-load any payload. (The payload goes directly into the VM guests, invisible to the BOINC client; a starved task is only visible through its near-zero CPU utilization and by looking at network interface utilization.)

We are back to the usual FB sprint mode, where Sébastien failed to notice that the project he selected does not have enough work. From what I understand, non-vbox work (SixTrack) was nearly unavailable even outside the sprint.
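
For anyone who wants to spot such starved tasks, a rough approach is to sample the CPU use of the VirtualBox guest processes. A minimal sketch, assuming psutil is installed and that the BOINC-started guests show up as VBox* processes (that part is an assumption; adjust the name filter for your setup):

Code:
import psutil

# List VirtualBox guest processes and their CPU usage; a Theory VM that sits
# near 0 % for a long time has probably failed to pull any payload.
for proc in psutil.process_iter(['pid', 'name']):
    name = proc.info['name'] or ''
    if 'VBox' in name:                         # name filter is an assumption
        try:
            cpu = proc.cpu_percent(interval=1.0)   # sample over 1 second
            print(f"{proc.info['pid']:>7}  {name:<20}  {cpu:5.1f} % CPU")
        except psutil.NoSuchProcess:
            pass                               # the VM exited while sampling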
 
Reactions: Orange Kid

biodoc

Diamond Member
Dec 29, 2005
6,284
2,238
136
My caches are full of SixTrack and ATLAS (vbox) tasks. I was seeing extremely slow progress on Theory tasks, so I gave up on those. I set my CPU limit in the LHC preferences to 8, so the ATLAS tasks are running with 8 threads per task.
 
Reactions: Orange Kid

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
Yes, server_status.php is showing SixTrack available now, and a small number of Theory tasks available. (But the Theory tasks which I have locally are still largely idling.)
 
Reactions: Orange Kid

Orange Kid

Elite Member
Oct 9, 1999
4,375
2,164
146
Day one of LHC finds us in second.
This is one tough project to run, but we seem to be succeeding.

Pos. ___ Team ___ FB points ___ Credit
1 Gridcoin ___25___2,444,700
2 TeAm AnandTech ____18____815,717
3 Rechenkraft.net ___15___386,881
4 Planet 3DNow! ____12___338,250
5 Czech National Team ___10___207,998
6 L'Alliance Francophone ___8___177,974
7 Overclock.net ___6___111,338
8 SETI.Germany ___4___96,901
9 UK BOINC Team ___2___86,720
10 BOINC@Poland ___1___78,626

From FreeDC
 

TennesseeTony

Elite Member
Aug 2, 2003
4,235
3,668
136
www.google.com
Uhm....I'm getting left in the dust here, but have EVERYTHING running LHC.....hmmm.

  • No VMs for me
  • Checked some machines and they either have tasks lasting more than one day,
  • or tasks lasting 10 seconds to 2 minutes,
  • and a few normal ones in the 1 to 6 hour range.
I might just bow out of this one, rather than waste cycles that can be better used elsewhere.
 

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
Sunday 19:59 UTC is approaching fast!
I am beginning to doubt that even my downloads will have finished by then.
Yet the next problem is already showing up: Upload bandwidth, or lack thereof.

My lack of RAM may turn out to be a trivial problem compared to the networking bottleneck.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
This is the first time my C2D laptop has run the LHC project, and it's horrendous. SixTrack SSE2 app runtime is about 8-14 hours. Such a needy project.
 

zzuupp

Lifer
Jul 6, 2008
14,863
2,319
126
I finally got a second round of SixTrack on the non-VM box. The other one is on VM task #3 and trying to d/l more.
 

crashtech

Lifer
Jan 4, 2013
10,578
2,146
146
It looks like on one of my PCs, each ATLAS task uses a VM that allocates itself 10.2 GB. I want to learn how to trim that down a bit, because many other tasks are waiting for memory; the VMs allocate it to themselves but don't use half of it.
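(For what it's worth, 10.2 GB is exactly what the 3.0 GB + 0.9 GB/thread estimate quoted earlier in the thread predicts for an 8-thread task, so lowering the per-task CPU count in the LHC@home preferences should shrink each VM's allocation roughly in proportion.)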
 

StefanR5R

Elite Member
Dec 10, 2016
5,905
8,800
136
We are back to the usual FB sprint mode, where Sébastien failed to notice that the project he selected does not have enough work. From what I understand, non-vbox work (SixTrack) was nearly unavailable even outside the sprint.
I need to take this back partially... Whatever checks Sébastien is applying before he selects a project, they apparently involve a crystal ball.



[graph omitted] (graph courtesy of Kiska; "rts" = number of tasks ready to send)
 

biodoc

Diamond Member
Dec 29, 2005
6,284
2,238
136
My 2P ran out of disk space, so all tasks were in "waiting to run" mode. I deleted a bunch of stuff I didn't need and it's back up and running again. The LHC directory contains nearly 10 GB of files. I also reduced the swap partition size and increased the size of my main Linux partition. It seems like there's always something that needs attending!
 