"Kernel Power = PSU is likely bad."
Good point, I'll check. I can't find an event manager, but I did find an Event Viewer, is that what you meant?
Critical - Event ID 41, Kernel-Power - The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
After rebooting there is a small window open saying Windows has encountered an error. Not much help there.
I can't remember the number of rails it has, I'll check later, along with getting my DMM out to check the 12v rail readings. I do remember it's an old Corsair 650w PSU, more info later....[update] It's a Corsair TX650w, and it's got a single 12v rail. Apparently its MTBF is ~11.4 yrs; I wonder when I got it? I think maybe I got it for when I 1st built my C2 rig (2007)! lol, although it might have been a bit later.
DMM readings from my PSU whilst running SETI on CPU (10 threads) & GPU :-
12v line - wire 1 11.95v, wire 2 11.96v, wire 3 11.97v (small plug to mbrd), also seen at the main ATX plug (different WU maybe?). And a steady 12v (11.97-12.03v) to the GPU plug; I watched it on & off over several minutes whilst also looking at GPU-Z to see GPU load.
5v line - 4.86v (main ATX plug), 3.3v line - 3.32v (main ATX plug).
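For reference, those readings can be sanity-checked against the ATX spec's ±5% regulation tolerance for the 12v, 5v, and 3.3v rails. A minimal sketch (the tolerance figure is the standard ATX value, not something from this thread):

```python
# Check the DMM readings quoted above against the ATX +/-5% tolerance.
ATX_TOLERANCE = 0.05  # standard ATX regulation band for these rails

readings = {
    12.0: [11.95, 11.96, 11.97, 12.03],  # 12v rail (mobo + GPU plugs)
    5.0: [4.86],                          # 5v rail (main ATX plug)
    3.3: [3.32],                          # 3.3v rail (main ATX plug)
}

def within_spec(nominal, measured, tol=ATX_TOLERANCE):
    """True if a reading sits inside nominal +/- tol."""
    return abs(measured - nominal) <= nominal * tol

for nominal, values in readings.items():
    for v in values:
        status = "OK" if within_spec(nominal, v) else "OUT OF SPEC"
        print(f"{nominal:>4} V rail: {v:5.2f} V -> {status}")
```

Every reading above lands inside the band (e.g. the 5v rail may sit anywhere from 4.75v to 5.25v), though as noted later in the thread, a DMM only shows the average, not ripple.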
That good ole PSU is chugging along nicely
I wonder if it's some sort of problem relating to when the monitors go to sleep.....
"...or Wow signal was detected and they don't want the earthlings to find out, lol"
And now the servers are down. That must have been some dump!
Obviously, @biodoc at least doesn't have any troubles in this regard (or not anymore).
"...or Wow signal was detected and they don't want the earthlings to find out, lol"
I feel a great disturbance in the Force (seti's server).
[graph snipped]
[zoomed-in graph snipped]
Happened since 1112 UTC (approx)
/usr/share/munin/plugins# time ./seti
multigraph results_setiathomev8_in_progress
inProgress.value 5812407
multigraph results_setiathomev8_rts
rts.value 662272
multigraph results_setiathome_AP
rts.value 0
inProgress.value 6888
multigraph workunits_setiathomev8
validation.value 4769138
assimilation.value 59
deletion.value 67
deletion_result.value 1
multigraph workunits_setiathome_AP
validation.value 8391
assimilation.value 0
deletion.value 0
deletion_result.value 0
multigraph transitioner_setiathome
backlog.value 0.000833333333333333
real 0m0.277s
user 0m0.168s
sys 0m0.044s
"Mine is still having problems in uploading, but it's slowly improving, as expected."
At least the server recovered.
"Kernel Power = PSU is likely bad."
Good point, we have an oscilloscope at work (I'm a mechanic). I wonder if our kit has enough fidelity to test 12v PSU ripple....
A multimeter doesn't really test a PSU; ideally you need a load tester and a scope to see what's actually happening. If the circuitry is wearing out, excessive ripple in the output will cause instability and harm your components over time.
I had this exact issue with a PSU about 6 months ago. Oddly, it was a Corsair HX750. Random reboots with Kernel Power errors as well. Corsair is amazing, and they RMA'd the PSU. All has been well since, with no reboots or Kernel Power errors.
That is, for the same hardware + application version, PPD have regressed proportionally from 2017 through 2018 to 2019.
"I see my client is running an Astropulse unit, the points from this still count for SETI right?"
Sure, AstroPulse counts too.
"That is, for the same hardware + application version, PPD have regressed proportionally from 2017 through 2018 to 2019."
I believe this is due to the continued improvement of GPU application versions, and their increasing adoption. I failed to find a discussion of it in the setiathome forums.
As far as I know, this is not for performance, but for deeper work buffers. They certainly have cards and application versions which take less than a minute to complete one task. Hence, SETI@home's server-side enforced limit of 100 tasks in progress per GPU makes for very shallow work queues.
Hence they configure their clients to tell the server that there are a lot more GPUs in it than there are in reality.
In addition, there is a client-side limit of 1000 runnable tasks, after which the client would no longer request more new work. With this limit in mind, it makes sense to tell the server that there are e.g. 11 GPUs in the host. People who want more than this need to recompile the client from patched sources.
I listened to Crosby's Mele Kalikimaka earlier today. What a coincidence!