580 GTX hovering around 25-50% load

Bradtech519

Senior member
Jul 6, 2010
520
47
91
Does this sound normal? I'm using TechPowerUp to monitor the load. My GPU temps are really low, so I know it's not doing all it can. I've switched over to only crunch PrimeGrid at the moment; I had been doing Einstein and SETI yesterday. I hadn't done any crunching with the 580 GTX until yesterday. Prior to that I just did MilkyWay@home with my old Radeon 5850, which would hit 100% load.
 

Bradtech519

Senior member
Jul 6, 2010
520
47
91
Looks like it might be specific to the Einstein@Home WUs. PrimeGrid sr2sieve tasks are going 99-100%.
 

Kiska

Golden Member
Apr 4, 2012
1,025
291
136
It is specific to the WU, but Einstein@Home requires a bit more CPU power to keep the GPU fed. A BRP4 requires 2% per second whereas SR2sieve requires 0.5% per second - see the difference? Likewise, a BRP5 requires 5.2% per second whereas GFNcuda requires just 1%.
 

Bradtech519

Senior member
Jul 6, 2010
520
47
91
That makes the most sense. I've had to throttle my CPU down due to thermal issues with cooling at the moment. I've got a new mobo, CPU, Zalman aftermarket cooler, and DDR3-2133 coming via UPS right now. So by Thursday night, after I get it all installed, I should be able to throw an entire FX-8350 at projects instead of two cores of a Phenom II 965.

 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
the phenomenon is specific to Einstein@Home...you need to run multiple WUs simultaneously in order to max out GPU load. i used to have 4 GTX 560 Ti's on Einstein@Home, and their sweet spot for maximum GPU utilization seemed to be 3 simultaneous BRP tasks. now i have 3 GTX 580's (would like to add a 4th real soon), and their sweet spot on the E@H project seems to be 4 simultaneous BRP tasks.

now you may have noticed that up until recently, an E@H BRP task typically consumed approx. 250MB of VRAM, and occasionally VRAM consumption would spike at just over 300MB, on an nVidia GPU in Windows. this may have changed a bit since the new Perseus Arm Survey began...in fact, as i reference my GTX 580 crunchers in MSI Afterburner, it appears that the new Perseus Arm Survey BRP tasks consume closer to ~180MB VRAM each. nevertheless, i experimented with both the 1.5GB and 3GB versions of the GTX 580 to confirm this, and i did it before the new Perseus Arm Survey tasks showed up. in other words, i experimented w/ GPU tasks that consumed ~250MB VRAM on average, and occasionally spiked at 300+MB VRAM.

anyways, the reason i bring this up is b/c you'll most likely max out GPU utilization before you max out VRAM usage. you see, while the GTX 580 3GB mathematically had enough VRAM to manage up to 12 simultaneous BRP tasks (perhaps only 8 or 9 if you take VRAM spiking into consideration), GPU utilization would max out with only 4 simultaneous BRP tasks running. at that point i knew that a GTX 580 3GB was of no benefit over a GTX 580 1.5GB when it comes to maximizing GPU utilization on the Einstein@Home project.
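for anyone who wants to redo that back-of-the-envelope VRAM arithmetic, here's a rough sketch. the per-task MB figures are just my observed numbers from MSI Afterburner, not official specs, and real cards reserve some VRAM for the driver and display, so treat the result as an upper bound:

```python
def max_tasks_by_vram(vram_mb: int, per_task_mb: int, reserve_mb: int = 0) -> int:
    """Upper bound on simultaneous BRP tasks that fit in a card's VRAM.

    vram_mb     - total VRAM on the card (e.g. 3072 for a GTX 580 3GB)
    per_task_mb - observed VRAM per BRP task (~250 average, ~300+ at spikes)
    reserve_mb  - optional headroom for driver/display overhead
    """
    return (vram_mb - reserve_mb) // per_task_mb

# GTX 580 3GB at ~250MB per task: 12 tasks fit on paper...
print(max_tasks_by_vram(3072, 250))   # 12
# ...but budgeting for 300MB spikes drops the ceiling:
print(max_tasks_by_vram(3072, 300))   # 10
# GTX 580 1.5GB at ~250MB per task:
print(max_tasks_by_vram(1536, 250))   # 6
```

either way, the math confirms the point above: GPU utilization saturates at ~4 simultaneous tasks long before VRAM becomes the bottleneck.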

sounds like you need an app_info for those WUs!
keep in mind that this is no longer necessary with Einstein@Home. if you go to your Einstein@Home account and click on "Einstein@Home preferences" in the preferences section, you'll see something called "GPU utilization factor of BRP apps." this is the changeable parameter that allows you to run more than one task at a time (you no longer need to create an app_info.xml file to run multiple simultaneous tasks on the E@H project). it works just like it did in the app_info.xml file - that is, it uses the reciprocal method (a factor of 1 equates to running a single task, a factor of 0.5 equates to running 2 simultaneous tasks, a factor of 0.33 equates to running 3 simultaneous tasks, and so on and so forth).
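the reciprocal method described above boils down to two one-line conversions. a quick sketch (the function names are just for illustration - the actual setting is the single "GPU utilization factor of BRP apps" field in the web preferences):

```python
def gpu_utilization_factor(n_tasks: int) -> float:
    """Factor to enter in the E@H web preferences to run n simultaneous tasks."""
    return round(1.0 / n_tasks, 2)

def tasks_for_factor(factor: float) -> int:
    """How many simultaneous BRP tasks a given factor corresponds to."""
    return int(round(1.0 / factor))

print(gpu_utilization_factor(2))   # 0.5  -> 2 tasks at a time
print(gpu_utilization_factor(3))   # 0.33 -> 3 tasks at a time
print(tasks_for_factor(0.25))     # 4
```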
 

Bradtech519

Senior member
Jul 6, 2010
520
47
91
0.33 seems to be the BRP setting that maxes out my GPU. It gives me three GPU tasks. Do you recommend any other tweaks to the settings in there?

GPU load is hovering between 85-90% with my fan speed going at 85%. Running around 60C under load. I did some SETI and MilkyWay; now I'm back over to Einstein for a while. Might leave it running on it the next couple of days.

 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
nope, that's pretty much it.

just a heads up though - if you ever change your GPU utilization factor (via your Einstein@Home web preferences), the change won't necessarily take effect the very next time your host contacts the Einstein@Home server, as intuitive as that might seem. rather, it takes effect the next time your host contacts the E@H server to request new work. so if your host's first server contact after the change is only to report completed work, and it doesn't need any new work at that moment, the new factor won't kick in at that particular contact.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,448
10,117
126
I'm running Einstein@Home on my Llano A6-3670K's IGP. Should I run multiple WUs at the same time on that?
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
i have no idea...i've never had an APU, let alone used one for crunching. on my mobos that have integrated graphics, i've always used the IGP as a dedicated display GPU so that my discrete GPU(s) don't have to share display duties with crunching, and can thus avoid GUI lag.

i guess it really comes down to how well a single E@H task utilizes your APU...although i would imagine that a single E@H task just about maxes out a small GPU like that. what does your GPU (APU) usage look like while crunching an E@H task?
 