Markfw (Moderator Emeritus, Elite Member)
Day 19 stats: The Marauders are fighting back.
Don't struggle, Marauders; it's no use.
Hmm, seems the Marauders either missed, or dismissed, my informative and strictly factual post. :-(
Day 19.5 stats: The Marauders have increased their lead a bit.
> F@H does not care much about bandwidth, it's the CUDA cores. Even my lowly 1070 Ti does 1.1M. Turn them all on, we need you!!!
Yeah, don't count on me, @voodoo5_6k. I don't have all my GPUs in this, and of the ones I do, most of them are rocking x1 slots.
> F@H does not care much about bandwidth, it's the CUDA cores. [...]
The bottom of the line 2060 (of the 2000 series) does 2 million PPD!
> F@H does not care much about bandwidth, [...]
F@H does like host bus bandwidth, at least on big and medium-sized GPUs, much more so than most other DC GPGPU applications. More so on Windows than on Linux, but it wants a good amount of bandwidth on Linux too. E.g.,
nvidia-smi dmon -d 3 -s pmuct
shows what's going on, for Nvidia GPUs. (On Windows, the path to nvidia-smi needs to be prepended.) For example, a single-lane PCIe v2 connection has 500 MByte/s of nominal bandwidth per direction, and that doesn't get you far even with a middle-of-the-road Pascal GPU.
> F@H does like host bus bandwidth, at least on big and medium-sized GPUs. [...]
Hmmm, I was basing this on an older post saying that x4 vs. x8 vs. x16 slots made no difference in PPD.
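For anyone who wants to check whether the slot really is the bottleneck rather than eyeballing dmon: a small Python sketch, assuming the "-s t" throughput columns are named rxpci/txpci (MB/s) as on recent drivers. It parses the header line dmon prints rather than hard-coding column positions, and flags samples near the link's nominal limit; verify the column names against your own driver's output.

    #!/usr/bin/env python3
    import subprocess

    # PCIe 2.0 is 5 GT/s per lane with 8b/10b encoding, i.e. ~500 MB/s
    # per lane per direction; set this to your slot's nominal limit.
    LINK_LIMIT_MB = 500  # PCIe 2.0 x1

    def num(field):
        try:
            return int(field)
        except ValueError:  # dmon prints '-' for unsupported fields
            return 0

    proc = subprocess.Popen(
        ["nvidia-smi", "dmon", "-d", "3", "-s", "ut"],
        stdout=subprocess.PIPE, text=True)
    cols = None
    for line in proc.stdout:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "#" and "rxpci" in parts:
            # Map column names to data-line indices; '#' occupies slot 0.
            cols = {name: i - 1 for i, name in enumerate(parts)}
            continue
        if parts[0] == "#" or cols is None:
            continue  # units line, or header not seen yet
        rx, tx = num(parts[cols["rxpci"]]), num(parts[cols["txpci"]])
        sm = parts[cols["sm"]]
        note = "  <-- near the link's limit" if max(rx, tx) > 0.8 * LINK_LIMIT_MB else ""
        print(f"sm {sm}%  rx {rx} MB/s  tx {tx} MB/s{note}")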
> F@H does like host bus bandwidth, at least on big and medium-sized GPUs. [...]
This might explain the lack of GPU utilization I was seeing on cloud when adjacent instances were hitting the CPU hard. Even though F@H doesn't really need a lot of CPU, it needs the other resources, and control of those in a shared environment is a big question mark.
I have been trying my hardest not to "Trash Talk", hoping it would not add fuel to the Moonboots' fire.
So we lost 17 million points of our lead in 12 hours. At that rate we lose in 3 days, let alone 10.5 more days.
> My 2080Ti has been seriously underperforming the last few days, 1.5M PPD under Windows. I finally took note of the task, 16491 (cancer). Anyone else note a problem with this series of tasks? I have rebooted, is all, to 'fix' the problem, but to no avail.
Hey, Tony! Nice to see you around these parts! I'd have to check; honestly, I have been so busy I've just been doing the "set it and forget it" routine... Unfortunately, I don't think F@H even has a detailed way of looking at past WUs and how long they took... do they?
> My 2080Ti has been seriously underperforming the last few days, [...]
Just set your preferences to Alzheimer's. That might fix it. It should do 5.5M. I have 3 of 6 that are doing 3.3M; the other 3 are doing 5.5.
> I don't think F@H even has a detailed way of looking at past WUs and how long they took... do they?
It's possible with some text processing of log.txt and logs/log-*.txt in the client data directory. Filtering for a few keywords with 'grep' may already go a long way toward the relevant data, but on top of that you might want to calculate credits/duration for the tasks.
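Something like this Python sketch could do the credits/duration math. The "Completed 0 out of" and "Final credit estimate" phrases are my assumptions from v7-style logs, slot labels like WU01 get reused between work units, and timestamps roll over at midnight, so treat the output as a rough cut and adjust the regexes to what your own log.txt actually says:

    import glob
    import re

    TS = r"(\d{2}):(\d{2}):(\d{2})"
    # First frame of a WU, and its final credit line (assumed phrasing).
    START = re.compile(TS + r":(WU\d+).*Completed 0 out of")
    CREDIT = re.compile(TS + r":(WU\d+).*Final credit estimate, ([0-9.]+) points")

    def secs(h, m, s):
        return int(h) * 3600 + int(m) * 60 + int(s)

    started = {}
    for path in sorted(glob.glob("log.txt") + glob.glob("logs/log-*.txt")):
        for line in open(path, errors="replace"):
            m = START.search(line)
            if m:
                started[m.group(4)] = secs(*m.groups()[:3])
                continue
            m = CREDIT.search(line)
            if m and m.group(4) in started:
                dur = secs(*m.groups()[:3]) - started.pop(m.group(4))
                if dur > 0:  # naive: misses WUs that run past midnight
                    pts = float(m.group(5))
                    print(f"{m.group(4)}: {pts:.0f} pts in {dur} s "
                          f"-> {pts / dur * 86400:,.0f} PPD")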
If this doesn't earn us bonus points for style, then I've lost all faith in humanity.
Thanks for the stats, biodoc.
My boys are ready for cheering on!
> My 2080Ti has been seriously underperforming the last few days, [...]
16491 & 16492 were released to the beta team on Dec. 16th, and one member reported TPF on 3 GPUs. Using the bonus point calculator, I was able to calculate PPD for 16491.
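For reference, the arithmetic behind such a calculator is straightforward; here's a sketch using the commonly cited quick-return-bonus formula, credit = base * max(1, sqrt(k * timeout / elapsed)). The constants below are made up for illustration only (they are not project 16491's actual values; base credit, k factor, and timeout come from the project description):

    import math

    def fah_ppd(tpf_seconds, base_credit, k_factor, timeout_days, frames=100):
        # Wall time for one WU, in days (WUs normally have 100 frames).
        elapsed_days = tpf_seconds * frames / 86400.0
        # Quick-return bonus: credit = base * max(1, sqrt(k * timeout / elapsed)).
        credit = base_credit * max(1.0, math.sqrt(k_factor * timeout_days / elapsed_days))
        wus_per_day = 1.0 / elapsed_days
        return credit * wus_per_day

    # Hypothetical numbers: 70 s TPF, 9405 base points, k=0.75, 3-day timeout.
    print(f"{fah_ppd(tpf_seconds=70, base_credit=9405, k_factor=0.75, timeout_days=3):,.0f} PPD")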