Hey Carbonyl! Nothing inherently wrong with spending $2k on a turntable + cartridge (vs. spending it on wires!). Mine was over $700. However, saying CD's sound awful and refusing to listen to them is absurd. Granted, some CD's do sound awful, but so do a lot of records! And no matter what, records still have all the limitations of a "physical contact" medium. Clicks, pops, wear, distortion from mistracking, etc. surely don't contribute to high fidelity. And a 65dB dynamic range is pretty lame these days.
I'm not sure I completely understand your "binning" analogy for digital audio, so I'll explain it in a way that makes sense to me, and you can see if that matches up with what you are thinking.
Any sound is made up of changes in air pressure, which can be represented as waves. Many simultaneous sounds make many simultaneous waves. But at any given instant, all those individual waves can be summed into one equivalent wave. If the measurement time can be made short enough, for all intents and purposes the wave can be represented as a point in time with a specific amplitude - and if we repeat the measurements quickly enough, we can get enough points to make a fair representation of the original wave. Each of the points is called a sample, and the rate at which the points are measured is called the sampling frequency. In addition, the amplitude has to be represented as a number. It follows that to uniquely describe a point on a wave we need 2 things - when it occurs and how big it is. So how do we know how many samples we need, and how accurately must their amplitude be measured, to correctly reproduce a wave?
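If it helps, here's that idea as a tiny sketch (plain Python, with made-up toy numbers - an 8Hz sampling rate on a 1Hz + 2Hz composite, nothing like real audio rates):

```python
import math

SAMPLE_RATE = 8  # samples per second - absurdly low, just to keep the list short
DURATION = 1.0   # seconds

def composite_wave(t):
    # Two simultaneous "sounds" (a 1Hz tone plus a quieter 2Hz tone),
    # summed into one equivalent wave.
    return math.sin(2 * math.pi * 1 * t) + 0.5 * math.sin(2 * math.pi * 2 * t)

# Each (time, amplitude) pair is one sample.
samples = [(n / SAMPLE_RATE, composite_wave(n / SAMPLE_RATE))
           for n in range(int(DURATION * SAMPLE_RATE))]

for t, amp in samples:
    print(f"t={t:.3f}s  amplitude={amp:+.3f}")
```

Each printed pair is one sample: when it occurs and how big it is, exactly the 2 things described above.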
There are a few handy coincidences that help out. The first is that some fancy math PROVES that sampling can PERFECTLY reproduce (not approximately reproduce, not almost reproduce, PERFECTLY reproduce) any wave made up only of frequencies below 1/2 the sampling frequency. So how high of frequencies do we need to reproduce? Only a tiny fraction of people can hear sounds at 20,000Hz, so maybe that's a good place to cut off (there's one more factor, coming soon, that helped pick that cut-off frequency). Which means we need a sampling frequency of at least 40,000Hz to make it work. Picking 44,100Hz as the sampling frequency leaves a little headroom above 40,000Hz (the filter that has to strip out everything above 1/2 the sampling frequency before conversion needs some room to roll off), and means we can record frequencies up to 20,000Hz.
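Here's a little sketch of why everything above 1/2 the sampling frequency has to be filtered out first (plain Python; the 24,100Hz tone is just an arbitrary example I picked): a tone above that limit produces exactly the same sample values as a tone below it, so once sampled, the two are indistinguishable.

```python
import math

FS = 44_100            # sampling frequency, Hz
F_HIGH = 24_100        # a tone ABOVE the 22,050Hz limit (FS / 2)
F_ALIAS = FS - F_HIGH  # 20,000Hz - the tone it masquerades as when sampled

for n in range(5):
    t = n / FS
    high = math.sin(2 * math.pi * F_HIGH * t)
    alias = -math.sin(2 * math.pi * F_ALIAS * t)
    print(f"sample {n}: {high:+.6f} vs {alias:+.6f}")  # identical sample values
```

That's why the filtering happens before the samples are taken - afterward, there's no way to tell the two tones apart.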
How accurately must the amplitude be represented? Obviously converting an analog quantity to a digital value needs to have fine enough resolution to cover a wide range of values without being too "step-like", or the wave will not be perfectly reproduced. And we only have binary numbers to work with. So what power of 2 will give us enough steps so that it doesn't seem like we have steps? 2^8 only gives us 256 steps. That ain't gonna cut it. 2^16 gives us 65,536 possible values, that should do it. So we have 16 bits to work with to describe the amplitude. 16 bits is called the "word size".
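A quick sketch of what 16-bit quantization looks like (plain Python; the function name and clamping details are my own, not from any particular converter):

```python
import math

def quantize_16bit(x):
    # Round an amplitude in [-1.0, 1.0] to the nearest of 65,536 levels
    # (signed 16-bit range; clamped to +/-32767 here for symmetry).
    x = max(-1.0, min(1.0, x))
    return round(x * 32767)

step = 2.0 / 65536  # size of one quantization step, in full-scale units
print(quantize_16bit(0.5))             # half of full scale
print(quantize_16bit(0.5 + step / 4))  # a change smaller than one step: same value
print(f"{20 * math.log10(65536):.1f} dB")  # theoretical 16-bit dynamic range
```

That last line is why 16 bits buys roughly a 96dB dynamic range - a big step up from the 65dB or so you get from vinyl.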
OK, now we know we need 44,100 samples per second, and each sample must be 16 bits. That means 1 second of sound will occupy 705,600 bits. But we have 2 channels for stereo sound, so we need double the space. Make that 1,411,200 bits per second of stereo sound. That means 1 minute occupies 84,672,000 bits, or 10,584,000 bytes, or about 10.09 megabytes. But how much raw data can a standard CD hold? About 750MB, that's how much. So we can fit about 74 minutes of music on a standard CD. So it's a nice coincidence that 44,100Hz and 16 bits work out.
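The arithmetic above, spelled out in plain Python in case anyone wants to check it:

```python
SAMPLE_RATE = 44_100   # samples per second
BITS_PER_SAMPLE = 16
CHANNELS = 2           # stereo

bits_per_second = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
bytes_per_minute = bits_per_second * 60 // 8
megabytes_per_minute = bytes_per_minute / (1024 * 1024)

print(bits_per_second)                   # bits per second of stereo sound
print(bytes_per_minute)                  # bytes per minute
print(f"{megabytes_per_minute:.2f} MB")  # megabytes per minute
print(f"{74 * megabytes_per_minute:.0f} MB for 74 minutes")
```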
So if there were any "bins", each would only be 1/44100 of a second wide and 1/65536 of the full-scale amplitude deep. If your friend can discern that, he is a rare specimen indeed.
"But in 'real' music there are transient frequencies far above 20kHz; CD audio just chops those off, so the music all sounds unnatural" is a common anti-CD statement. Precious few people can hear 17kHz, let alone 20kHz. If you can't hear it, how will you miss it? And if those transients really do affect the sound in the range you CAN hear, that effect isn't lost - it happens in the air, before the microphone, so it gets captured in the audible part of the recording anyway. Chopping off the part you can't hear has no effect on the part you can. Besides that, the first time you play a record, and every time after that, more and more of the high frequency content is erased. It's inevitable. A CD will sound the same every time you play it.
Some people have also grown so accustomed to the relatively high distortion that can't be avoided in vinyl playback (on the order of 0.1% THD for even the best cartridges), which often adds a "warmth" to the sound, that to them CD's (with an order of magnitude less THD) sound "harsh".
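For the curious, those distortion figures translate to dB like this (plain Python; the 0.01% figure is just the "order of magnitude less" from above, not a measured spec for any particular player):

```python
import math

def thd_percent_to_db(percent):
    # Express total harmonic distortion as dB relative to the fundamental.
    return 20 * math.log10(percent / 100)

print(f"{thd_percent_to_db(0.1):.0f} dB")   # a top cartridge: 60dB below the signal
print(f"{thd_percent_to_db(0.01):.0f} dB")  # ten times less THD: 80dB below
```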
There's also snob appeal - playing vinyl records "properly" involves a degree of ritual and expense that is missing from CD's. LP's must be handled lovingly, cleaned carefully, and high quality playback equipment is expensive. But any schlub can throw a CD in a $50 player and have technically superior sound.
Sorry for the little veer off-topic, but you can probably get a better idea of what's involved once you know how it works.