The size of the one "large" file varies a lot, perhaps similarly to how the task duration varies. (U@h puts out batches of workunits, and within one batch the task duration gradually creeps up from the first WUs to the last.) Looking at one random computer of mine, it has "large" files of 13–40 kB, plus two outliers of 114 and 160 kB.
Still, you are of course right: these are not big file sizes compared with some other projects (e.g. protein folding projects). It really is the unlucky split into six individual files, and thus the need for roughly six times as many transactions as the technically feasible minimum, which is causing the congestion. Six times during normal operation, that is. Right now, because so many transactions are failing and being retried, the traffic jam is even more severe.
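To put rough numbers on that last point, here is a minimal back-of-the-envelope sketch (my own illustration, not project code): assuming a failed transfer is simply retried until it succeeds and attempts fail independently with probability p, each transfer needs on average 1/(1−p) attempts, so the server sees about n_files/(1−p) transactions per result relative to the one-file, no-failure minimum. The failure rates below are made up purely for illustration.

```python
def transaction_multiplier(n_files: int, failure_rate: float) -> float:
    """Expected transactions per result, relative to a single
    reliable upload. Assumes retry-until-success with independent
    per-attempt failure probability (geometric mean = 1/(1-p))."""
    return n_files / (1.0 - failure_rate)

# Normal operation: 6 files, (assumed) negligible failures -> ~6x load
print(transaction_multiplier(6, 0.0))   # 6.0
# Congestion: if, hypothetically, half of all attempts fail -> ~12x load
print(transaction_multiplier(6, 0.5))   # 12.0
```

So even a moderate failure rate compounds the six-fold overhead further, which matches the "even more severe" traffic jam described above.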