(Continuing this here rather than in the stats thread…)
CPDN's upload server (upload7.cpdn.org) is presumably crowded, and by now I've got two trickle files per task sitting in the client's upload retry queue. (But there are also two successful trickles listed for each of my two tasks on the web site, so it's not completely hopeless.)
Uhm no, it is hopeless for me: While the trickles are listed on the web site and were granted credit normally, the files of these trickles haven't been uploaded yet (one ~95 MB file per trickle). Meanwhile, I've got no such problems with PrimeGrid (never had). One big difference: PrimeGrid's server is located here in Germany.
traceroute www.primegrid.com
shows me 13 hops, and ~41 ms roundtrip times of the TTL probes to the very last one.
traceroute -m 200 upload7.cpdn.org
shows me 200 hops (obviously bogus; the destination apparently doesn't answer the default UDP probes, so traceroute never detects the endpoint and simply runs into the hop limit), and ~270…350 ms roundtrip times to the last visible hop, which is the 15th at 141.223.253.61 (upload7.cpdn.org resolves to 141.223.16.156).
sudo traceroute -I upload7.cpdn.org
shows me 18 hops, and ~330 ms ICMP roundtrip time to the very last one, which is eawah.postech.ac.kr (141.223.16.156).
That is, I've got issues shipping large files to either the US (Folding@Home) or to South Korea (CPDN, current WAH2 batch).
I'll have to look into whether client-side settings can reduce the connection losses.
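Before touching any settings, I can at least get a rough picture of actual packet loss and latency toward both servers with plain ping (20 probes is just an arbitrary number I picked, nothing CPDN-specific):
ping -c 20 www.primegrid.com
ping -c 20 upload7.cpdn.org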
Edit:
I took a traceroute to two work servers and one collection server of F@H while no transfer was going on. The traceroutes showed ~16…20 ms once the route reached Frankfurt, and jumped to ~105…110 ms on the very next hop, at either Boston or Philadelphia. From there, latency practically didn't increase any further up to the last responding hosts in the routes.
I repeated this with
traceroute -I
now:
work server 131.239.113.97: 15 hops, ~110 ms
work server 158.130.118.24: 16 hops, ~135…150 ms
collection server 158.130.118.26: 16 hops, ~110 ms
However, neither 110 nor 150 nor even 330 ms seems that bad to me when all that needs to be accomplished is a large file transfer without quality-of-service requirements.
Edit 2:
I'll have to look into whether client-side settings can reduce the connection losses.
Hmm. The default values of the most relevant transfer-related options should already be quite resistant to high latencies during transfers:
<http_transfer_timeout>300</http_transfer_timeout>
(Abort HTTP transfers if idle for 300 seconds. 300 is the default according to the documentation.)
<http_transfer_timeout_bps>10</http_transfer_timeout_bps>
(An HTTP transfer is considered idle if its transfer rate is below 10 bits per second. The default value is not documented, but 10 seems to be what the client initially writes into cc_config.xml.)
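Just for reference, this is roughly where those two options sit within cc_config.xml's <options> block if one wanted to relax them even further; the 600/5 values below are only an arbitrary illustration on my part, not a recommendation:
<cc_config>
  <options>
    <http_transfer_timeout>600</http_transfer_timeout>
    <http_transfer_timeout_bps>5</http_transfer_timeout_bps>
  </options>
</cc_config>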