My mix of GTX 1080 and 1080 Ti cards is hosted by a 2.8 GHz Xeon, which seems to be the bottleneck in getting GPU utilization up for genefer17low:
WCG load + 1 GFN17low per card: ~87 % GPU core utilization
WCG load + 2 GFN17low per card: ~88 % GPU core utilization
no CPU load + 2 GFN17low per card: ~87 % GPU core utilization
The drop in the last case is probably because I have EIST enabled on Windows 7 Pro, and the GPU feeder task is presumably hopping between cores whose clocks are being wiggled between 1.2 and 2.8 GHz.
Furthermore, genefer17low ignores the <process_priority_special> tag in cc_config.xml and runs its feeder processes at the lowest priority. I am now running a PowerShell script which switches the feeder processes to normal priority, but this doesn't improve GPU core utilization either.
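For reference, the script is nothing fancy, just a loop along these lines (a minimal sketch; the process-name pattern is a placeholder, not the actual executable name on my host):

$pattern = 'primegrid*genefer*'   # hypothetical name pattern -- check Get-Process / Task Manager for the real one

while ($true) {
    Get-Process -Name $pattern -ErrorAction SilentlyContinue | ForEach-Object {
        if ($_.PriorityClass -ne [System.Diagnostics.ProcessPriorityClass]::Normal) {
            # bump the GPU feeder process from Idle/BelowNormal to Normal
            $_.PriorityClass = [System.Diagnostics.ProcessPriorityClass]::Normal
        }
    }
    Start-Sleep -Seconds 30   # re-check periodically, since freshly started tasks come up at low priority again
}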
I have long been thinking I should clean up my DC equipment and divide it cleanly into separate hosts for GPU projects vs. CPU projects. (The former with inefficient high-frequency CPUs, the latter with efficient CPUs.)
PS: my GPUs run with a voltage cap chosen for reliability. Power consumption stays below 80 % on the 1080 Ti and slightly above 80 % on the 1080, with a moderate GPU core overclock applied (against
Roger's recommendation), and GPU temperatures of 52...57 °C (under water, with moderate fan speeds).
Edit:
I have yet to compare actual throughput of 1 task/card vs. 2 tasks/card. No time for this right now.