16th Annual Folding@Home Holiday Season Race: The race is over and the Moonshots win!


Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,740
14,772
136
OK, Cooler Master finally replied to my query about an adapter for the new 612-2 cooler I have sitting around. And I still have 16 GB more of 3200 CL14 RAM sitting around, and a PSU, and an NVMe drive. Now Amazon has the 5950X for $690!

Tempting!!!!! I need it like a hole in the head.
 

voodoo5_6k

Senior member
Jan 14, 2021
395
443
116
Thank you for your double stats update, @biodoc!

Yeah, the Moonshot Gang is nibbling away at our lead... How much higher can they push their production? The 2020 race points have already been exceeded!!!

What can we do to stop them? Or at least hang in there long enough to push it over the finish line? @Assimilator1's beloved cheering gerbils must already be cheering their lungs out. It's a good thing we have them on our side, a big plus for us! We should get some extra points for style. And then we have @Skillz ramping up production. Maybe we now have an anvil to counter the Moonshot Gang's hammer blows!
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,740
14,772
136
Yeah, don't count on me, @voodoo5_6k. I don't have all my GPUs in this, and of the ones I do, most are rocking x1 slots.
F@H does not care much about bandwidth; it's the CUDA cores. Even my lowly 1070 Ti does 1.1M. Turn them all on, we need you!!!

The bottom-of-the-line 2060 (of the 2000 series) does 2 million PPD!
 

Skillz

Senior member
Feb 14, 2014
946
979
136
F@H does not care much about bandwidth; it's the CUDA cores. Even my lowly 1070 Ti does 1.1M. Turn them all on, we need you!!!

The bottom-of-the-line 2060 (of the 2000 series) does 2 million PPD!

Just got home and noticed my main crunching rig was down. It had rebooted for some unknown reason last night or this morning. I rebooted the rig and it's back on F@H. Hopefully it doesn't give me any more issues, but if it does crash and reboot again I'll be forced to drop F@H.
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,251
136
F@H does not care much about bandwidth,
F@h does like host bus bandwidth, at least on big and medium-sized GPUs, much more so than most other DC GPGPU applications. More so on Windows than on Linux, but it wants a good amount of bandwidth on Linux too.

E.g., nvidia-smi dmon -d 3 -s pmuct shows what's going on, for Nvidia GPUs. (On Windows, the path to nvidia-smi needs to be prepended.)

For example, a single-lane PCIe v2 connection has 500 MB/s of nominal bandwidth per direction, and this doesn't get you far even with a middle-of-the-road Pascal GPU.
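
If you'd rather capture those numbers over time than watch the terminal, here is a minimal Python sketch. It assumes nvidia-smi is on the PATH and the usual "# gpu rxpci txpci" column layout of -s t output, which may differ across driver versions:

import subprocess

# Sample PCIe throughput (-s t) every 3 seconds, 10 samples (-c 10).
# On Windows, replace "nvidia-smi" with its full path.
result = subprocess.run(
    ["nvidia-smi", "dmon", "-d", "3", "-s", "t", "-c", "10"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    if line.startswith("#"):  # skip the header rows
        continue
    fields = line.split()
    if len(fields) >= 3:
        gpu, rx, tx = fields[0], fields[1], fields[2]
        print(f"GPU {gpu}: rx {rx} MB/s, tx {tx} MB/s")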
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,740
14,772
136
F@h does like host bus bandwidth, at least on big and medium-sized GPUs, much more so than most other DC GPGPU applications. More so on Windows than on Linux, but it wants a good amount of bandwidth on Linux too.

E.g., nvidia-smi dmon -d 3 -s pmuct shows what's going on, for Nvidia GPUs. (On Windows, the path to nvidia-smi needs to be prepended.)

For example, a single-lane PCIe v2 connection has 500 MB/s of nominal bandwidth per direction, and this doesn't get you far even with a middle-of-the-road Pascal GPU.
Hmmm, I was basing this on an older post saying that x4, x8, and x16 slots made no difference in PPD.
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,251
136
It's possible (a) that AMD cards generally need less bandwidth for F@h, or (b) that the change from OpenCL to CUDA on Nvidia cards with recent drivers reduced the bandwidth needs somewhat. But I am not sure about either.
 

crashtech

Lifer
Jan 4, 2013
10,547
2,138
146
F@h does like host bus bandwidth, at least on big and medium-sized GPUs, much more so than most other DC GPGPU applications. More so on Windows than on Linux, but it wants a good amount of bandwidth on Linux too.

E.g., nvidia-smi dmon -d 3 -s pmuct shows what's going on, for Nvidia GPUs. (On Windows, the path to nvidia-smi needs to be prepended.)

For example, a single-lane PCIe v2 connection has 500 MB/s of nominal bandwidth per direction, and this doesn't get you far even with a middle-of-the-road Pascal GPU.
This might explain the lack of GPU utilization I was seeing in the cloud when adjacent instances were hitting the CPU hard. Even though F@H doesn't really need a lot of CPU, it needs the other resources, control of which in a shared environment is a big question mark.

Edit:

I'm seeing between 1500 and 2200 MB/s on a big Nvidia GPU running the CUDA core; that's well beyond the PCIe 3.0 x1 spec of 985 MB/s.
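
For what it's worth, those nominal per-lane figures fall straight out of the link rate and encoding. A quick sanity check in Python (the constants are just the published PCIe line rates, nothing from this thread):

# Nominal per-lane, per-direction PCIe bandwidth. Real-world
# throughput is lower due to protocol overhead.
GENS = {
    # generation: (gigatransfers per second, encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),     # 8b/10b encoding
    3: (8.0, 128 / 130),  # 128b/130b encoding
}

for gen, (gts, eff) in GENS.items():
    mb_per_s = gts * 1e9 * eff / 8 / 1e6  # bits -> bytes -> MB/s
    print(f"PCIe {gen}.0 x1: ~{mb_per_s:.0f} MB/s per direction")

# Prints ~250, ~500, and ~985 MB/s, matching the v2 and v3 figures
# quoted above.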
 
Reactions: voodoo5_6k

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,740
14,772
136
So we lost 17 million points of our lead in 12 hours. At that rate we lose in 3 days, let alone 10.5 more days.
 
Reactions: Assimilator1

cellarnoise

Senior member
Mar 22, 2017
729
399
136
So we lost 17 million points of our lead in 12 hours. At that rate we lose in 3 days, let alone 10.5 more days.
I have been trying my hardest not to "Trash Talk", hoping it would not add fuel to the Moonboots' fire.

So starting tomorrow the mouth may come out in an attempt at reverse DCology?

Back to LOTR! (I don't even know how to properly insert memes...)

 

TennesseeTony

Elite Member
Aug 2, 2003
4,221
3,649
136
My 2080 Ti has been seriously underperforming the last few days, 1.5M PPD under Windows. I finally took note of the task: 16491 (cancer). Anyone else notice a problem with this series of tasks? All I have done is reboot to "fix" the problem, but to no avail.
 

crashtech

Lifer
Jan 4, 2013
10,547
2,138
146
My 2080 Ti has been seriously underperforming the last few days, 1.5M PPD under Windows. I finally took note of the task: 16491 (cancer). Anyone else notice a problem with this series of tasks? All I have done is reboot to "fix" the problem, but to no avail.
Hey, Tony! Nice to see you around these parts! I'd have to check; honestly, I have been so busy I've just been doing the "set it and forget it" routine... Unfortunately I don't think F@H even has a detailed way of looking at past WUs and how long they took... do they?
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,740
14,772
136
My 2080 Ti has been seriously underperforming the last few days, 1.5M PPD under Windows. I finally took note of the task: 16491 (cancer). Anyone else notice a problem with this series of tasks? All I have done is reboot to "fix" the problem, but to no avail.
Just set your preferences to Alzheimer's. That might fix it; it should do 5.5M. I have 3 of 6 that are doing 3.3M. The other 3 are doing 5.5M.

Have you checked your driver version? Or the log file? It should be using CUDA if it's driver 4.6 or higher. The log file will look like this (the platform and "Using CUDA" lines are the most important part):
02:22:36:WU00:FS00:0x22: There are 4 platforms available.
02:22:36:WU00:FS00:0x22: platform 0: Reference
02:22:36:WU00:FS00:0x22: platform 1: CPU
02:22:36:WU00:FS00:0x22: platform 2: OpenCL
02:22:36:WU00:FS00:0x22: opencl-device 0 specified
02:22:36:WU00:FS00:0x22: platform 3: CUDA
02:22:36:WU00:FS00:0x22: cuda-device 0 specified
02:22:38:WU00:FS00:0x22: Attempting to create CUDA context:
02:22:38:WU00:FS00:0x22: Configuring platform CUDA
02:22:41:WU01:FS00:Upload 83.84%
02:22:42:WU01:FS00:Upload complete
02:22:43:WU01:FS00:Server responded WORK_ACK (400)
02:22:43:WU01:FS00:Final credit estimate, 116885.00 points
02:22:43:WU01:FS00:Cleaning up
02:22:50:WU00:FS00:0x22: Using CUDA and gpu 0
02:22:50:WU00:FS00:0x22:Completed 0 out of 2500000 steps (0%)
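
If you have several machines to check, here is a quick sketch that pulls just those lines out of the log (assuming the default log.txt file name; adjust the path to your client data directory):

from pathlib import Path

# Print only the platform-selection lines from the F@H client log.
for line in Path("log.txt").read_text(errors="replace").splitlines():
    if "platform" in line or "Using CUDA" in line:
        print(line)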
 

StefanR5R

Elite Member
Dec 10, 2016
5,684
8,251
136
I don't think F@H even has a detailed way of looking at past WUs and how long they took... do they?
It's possible with some text processing of log.txt and logs/log-*.txt in the client data directory. Filtering for a few keywords with grep may already go a long way toward the relevant data, but on top of that you might want to calculate credits/duration for the tasks, along the lines of the sketch below.
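
A rough pass at that, assuming the timestamp and WU-slot format in the log excerpt above (the stamps are time-of-day only, hence the midnight wrap-around fix-up):

import re
from datetime import datetime, timedelta
from pathlib import Path

# Pair each work unit's first "Completed 0 out of ..." line with its
# "Final credit estimate" line and print credit plus wall-clock duration.
LINE = re.compile(
    r"^(\d\d:\d\d:\d\d):(WU\d+):.*?"
    r"(Completed 0 out of|Final credit estimate, ([\d.]+) points)"
)

starts = {}
for path in sorted(Path(".").glob("log*.txt")):
    for line in path.read_text(errors="replace").splitlines():
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%H:%M:%S")
        wu = m.group(2)
        if m.group(3).startswith("Completed"):
            starts[wu] = ts
        elif wu in starts:
            duration = ts - starts.pop(wu)
            if duration < timedelta(0):  # WU crossed midnight
                duration += timedelta(days=1)
            print(f"{wu}: {float(m.group(4)):.0f} points in {duration}")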
 

biodoc

Diamond Member
Dec 29, 2005
6,271
2,238
136
My 2080Ti has been seriously under performing the last few days, 1.5M ppd under Windows. I finally took note of the task, 16491 (cancer). Anyone else note a problem with this series of tasks? I have rebooted is all, to 'fix' the problem, but to no avail.
16491 & 16492 were released to the beta team on Dec. 16th, and one member reported TPF (time per frame) on 3 GPUs. Using the bonus point calculator, I was able to calculate PPD for 16491.

RTX 2080 TPF 41.00 secs : 1.34 M PPD
RTX 2070 TPF 48.00 secs : 1.06 M PPD
GTX 1070 TPF 60.00 secs : 759 K PPD

Do you have client-type set to beta? You may want to delete that, since these tasks seem to have exceptionally low PPD. Also, check to make sure you are using recent Nvidia drivers, which are required to use core 22 v.18. This tidbit was posted by @voodoo5_6k here.
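
For anyone curious what the calculator is doing under the hood, here is a minimal sketch of the commonly cited quick-return bonus formula. The constants below are made-up illustration values; the real base credit, k, and deadline come from the project summary page:

import math

def fah_ppd(base_credit, k, deadline_days, tpf_seconds, frames=100):
    # final = base * max(1, sqrt(k * deadline / elapsed)); a WU is
    # typically 100 frames, so elapsed ~= TPF * 100.
    elapsed_days = tpf_seconds * frames / 86400
    bonus = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base_credit * bonus / elapsed_days  # points per day

# Hypothetical project constants, for illustration only.
print(f"{fah_ppd(base_credit=10000, k=0.75, deadline_days=3, tpf_seconds=41):,.0f} PPD")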
 