IF you happened to get a data error in the middle of a PCM audio stream, you're just going to mess up one of the samples (16-bit samples at 44.1 kHz), so we're not talking major damage here. If the flipped bit happened to be the most significant bit of a sample, the level could end up 32,768 steps away from what it should have been. If the error is at the least significant bit, it causes only a +/- 1 level change. And note that this will affect ONE of the 44,100 samples that occur in a second.
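A quick toy sketch (not from the original post, just an illustration) of the point above: flip a single bit in a signed 16-bit sample and see how far the level moves depending on which bit got hit.

```python
# Flip one bit in a signed 16-bit PCM sample and observe the damage.
def flip_bit(sample: int, bit: int) -> int:
    """Flip the given bit (0 = LSB, 15 = MSB) of a signed 16-bit sample."""
    raw = sample & 0xFFFF                  # view as raw two's-complement bits
    raw ^= 1 << bit                        # the "cosmic ray" hits here
    return raw - 0x10000 if raw & 0x8000 else raw  # back to signed range

sample = 1000                              # some arbitrary audio level
print(flip_bit(sample, 0))                 # LSB flip: 1001, off by just 1
print(flip_bit(sample, 15))                # MSB flip: -31768, off by 32768
```

An LSB hit would be completely inaudible; an MSB hit slams the sample across most of the dynamic range, but still for only one sample out of 44,100.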
I'm not sure what header information is stored with PCM audio streams, but if that happened to get corrupted, the problem would be significantly worse; on the other hand, the header is tiny relative to the size of the audio stream, so it's an unlikely target.
I have ECC memory in my system because I do scientific computing calculations and cache simulations in an academic environment, and I don't want ANYTHING corrupting our data. But I have to admit that the likelihood of a random bit error occurring on these chips is pretty rare.
IBM has done years of studies on these "soft errors," which you can read about here. Basically, radioactivity and cosmic rays are the two culprits that cause these random errors. Cosmic rays bombard Earth from space and can penetrate multi-story buildings. Higher altitudes are more susceptible to cosmic-ray-induced errors than sites near sea level. In all cases, with appropriate shielding (and we're talking LOTS of shielding), the error rate could be reduced to near zero. But regardless, IBM found that as chip densities increase, error rates have gone up. I can't remember where I read this, but one estimate suggested that at higher altitudes a typical 128 MB SDRAM chip might see 1 or 2 one-bit errors a month; it's just blind luck whether the BIT actually ends up affecting anything important. At lower altitudes and in buildings with extensive shielding, the rate drops to around 1 error a year.
ECC memory can fix these single-bit errors on the fly without a hiccup. In addition, it can detect double-bit errors (and many larger multi-bit errors) and let the operating system know that an error has occurred (assuming you're running a professional OS such as Linux or Windows NT/2k).
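To make the correct-one/detect-two behavior concrete, here's a toy SECDED (single-error-correct, double-error-detect) Hamming code over 4 data bits. Real ECC DIMMs apply the same idea to 64-bit words with 8 check bits; everything below is an illustrative sketch, not any actual memory controller's logic.

```python
# Toy Hamming(7,4) code plus an overall parity bit = SECDED over 4 data bits.
def encode(data: int) -> int:
    """Encode 4 data bits into an 8-bit SECDED codeword."""
    d = [(data >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]                # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                # parity over positions 4,5,6,7
    # Standard Hamming layout, positions 1..7: p1 p2 d0 p3 d1 d2 d3
    word = (p1 << 0) | (p2 << 1) | (d[0] << 2) | (p3 << 3) \
         | (d[1] << 4) | (d[2] << 5) | (d[3] << 6)
    overall = bin(word).count("1") & 1     # extra parity bit enables double-detect
    return word | (overall << 7)

def decode(word: int):
    """Return (data, status): 'ok', 'corrected', or 'double-error'."""
    syndrome = 0
    for pos in range(1, 8):                # XOR of positions of set bits
        if (word >> (pos - 1)) & 1:
            syndrome ^= pos                # nonzero syndrome = error position
    overall = bin(word & 0xFF).count("1") & 1  # parity over all 8 bits
    if syndrome and overall == 0:
        return None, "double-error"        # uncorrectable: report to the OS
    if syndrome:                           # single-bit error: fix it silently
        word ^= 1 << (syndrome - 1)
    data = ((word >> 2) & 1) | (((word >> 4) & 1) << 1) \
         | (((word >> 5) & 1) << 2) | (((word >> 6) & 1) << 3)
    return data, "corrected" if syndrome else "ok"
```

Flip any one bit of a codeword and `decode` returns the original data with status `"corrected"`; flip two and it flags `"double-error"` instead of silently handing back garbage, which is exactly the behavior the hardware reports up to the OS.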
It is true that ECC memory is slightly slower; most people have suggested that memory performance takes a 1-2% hit.
As for motherboard support, that's a tough one. Because few people understand ECC, and many more lose interest when they hear about the small performance hit, some motherboard manufacturers no longer implement ECC support in their BIOSes, even when the chipset has an ECC-capable memory controller. For instance, VIA's KT133 chipset DOES have an ECC memory controller onboard, but the Iwill KK series boards do not support ECC SDRAM. MSI's K7T Turbo uses the same chipset and DOES support ECC memory.
As for stability, that's a different concern. I've never had a problem with my VIA Apollo Pro 133A-based board (made by Gigabyte) under Win2k using 512 MB of CAS2 ECC PC133 Crucial SDRAM. YMMV, of course.