The reason for ECC memory, usually, is to support critical computer applications, such as monitoring the vital signs of a hundred hospital patients, running a nuclear power plant, or supporting air-traffic control -- that sort of thing. I think ECC is both more expensive and slightly slower, because of the error-correction code that has to be computed over the extra check bits of memory. Other kinds of work, in which the system owners rerun any number of complex simulations, or even hedge funds gaming out margin calls, would not suffer much from an occasional alpha particle tripping a bit.
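For a sense of what those extra check bits actually buy, here's a minimal Python sketch of a Hamming(7,4) code, the same family of single-error-correcting codes that ECC DIMMs build on. Real modules run a wider SECDED variant (typically 8 check bits over 64 data bits), so treat this as a toy illustration of the idea, not what a memory controller literally executes:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, so any single
# flipped bit in the 7-bit codeword can be located and corrected.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """c: 7-bit codeword, possibly with one flipped bit -> data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean; else the bad position
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]   # extract d1..d4

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate an alpha-particle bit flip
assert correct(word) == [1, 0, 1, 1]  # original data comes back intact
```

Computing those parity checks on every read and write is exactly the overhead behind the cost and speed penalty mentioned above.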
I have a set of 4x4GB=16GB Corsair XMS 1600 DDR3 RAM at 9-9-9-24 timings. It came used in a motherboard-and-processor bundle, and the original owner had used his Sabertooth Z77 rig to run Folding@Home or SETI for hours daily. He didn't overclock it, and the i5-3570K processor, grinding through its daily toil, would reach 55 °C. It was never abused, but it was certainly used. You could still run HCI MemTest on the sticks to 1,000% coverage -- the grueling all-nighter-into-the-next-day test. There are patterns to failures over a product's lifetime; there is a phenomenon called "infant mortality": parts that don't fail in the first 72 hours become more and more likely to last what seems like forever.
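To put a rough shape on that "infant mortality" pattern, here's a toy Python sketch using a Weibull hazard curve, the usual textbook model for the early, declining part of the reliability "bathtub"; the shape and scale parameters are invented for illustration, not measured from any real DIMM data:

```python
# Weibull hazard rate with shape < 1: the instantaneous failure rate
# falls as hours accrue, so a part that survives burn-in is
# statistically ever more likely to keep on going.

def weibull_hazard(t, shape=0.5, scale=10_000.0):
    """Instantaneous failure rate at age t (hours); parameters are made up."""
    return (shape / scale) * (t / scale) ** (shape - 1)

for hours in (1, 72, 1_000, 10_000):
    print(f"{hours:>6} h: hazard = {weibull_hazard(hours):.2e} / hour")
```

Run it and the printed rate drops by an order of magnitude or so between hour 1 and the 72-hour mark, which is the intuition behind burn-in testing.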
I also remember an AnandTech member I encountered in the "Cases and Cooling" forum. He had a couple of fans that might have been 18" to 24" in diameter, mounted on a roomy cube case. The man had a dual-socket Xeon motherboard with two hexa-core processors running a total of 24 threads (12 cores with Hyper-Threading).
More than a decade ago, I remember an article about a project for the NSA, with parallel processing and something like a hundred Xeons or other Intel CPUs all wired together. I think some of the people involved were MIT students, but I can no longer be sure.