And if you run the encoder on a file twice, is the output file the same both times?
If you run an already-encoded file through the encoder again, the output can actually increase in size.
For lossless algorithms, you will get deterministic results - every time you run the encoder on the WAV file, you'll get exactly the same output. Decoding the output will get you exactly the same data as the input. Think ZIP - you want exactly the same thing out that you put in.
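A quick sketch of that ZIP analogy using Python's zlib (DEFLATE, the same algorithm family ZIP uses) - encoding twice gives byte-identical output, and decoding recovers the input exactly:

```python
import zlib

data = b"some audio-like payload " * 1000

# Same input, same parameters -> byte-identical compressed output.
a = zlib.compress(data, level=9)
b = zlib.compress(data, level=9)
assert a == b

# Lossless round-trip: decoding gives back exactly what went in.
assert zlib.decompress(a) == data
```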
For lossy algorithms, it's pretty much the same - every time you run the SAME ENCODER with the same parameters, you'll get the same output file. However, if you took a WAV file and ran it through three different MP3 encoders with the same parameters (256 Kbps, etc), you'll get three completely different output files. Decoding the three files, none of them will give the same data as the input file, but they will all sound the same to your ears.
Lossless algorithms suffer from very poor compression ratios - you're lucky if the file is half the size of the uncompressed file. Lossy can give you a file 1/10 the size of the input file, and sound perfectly fine.
as far as I know, lossless algorithms don't quantize the data. they don't alter the data such that after you encode/decode you get something different from what you put in. quantization is the lossy part in all the algorithms I'm aware of.
Quantization noise is a product of the A/D conversion, trying to fit an analog signal into discrete digital values.
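As a sketch of that kind of quantization noise: quantize a continuous sine wave to 8-bit samples and measure the error, which is bounded by half a quantization step (the 440 Hz / 44100 Hz numbers are just illustrative choices):

```python
import math

LEVELS = 256  # an 8-bit A/D converter
STEP = 2.0 / (LEVELS - 1)

def quantize(x):
    # Snap a value in [-1, 1] to the nearest of LEVELS discrete steps.
    return round(x / STEP) * STEP

# One second of a 440 Hz sine at a 44100 Hz sample rate.
samples = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
quantized = [quantize(x) for x in samples]
noise = [q - x for q, x in zip(quantized, samples)]

# Quantization noise never exceeds half a step per sample.
assert all(abs(e) <= STEP / 2 + 1e-12 for e in noise)
```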
that's not what people are talking about when they talk about lossy encoding. the loss comes from quantizing the signal spectrum based on some perceived model of human hearing.
The music I listen to doesn't sound perfectly fine to me if it's lossy. Lossy compression needs to die.
You've got the wrong understanding:
https://en.wikipedia.org/wiki/Jpeg#Quantization
The general scheme for a lossy encoding scheme (image, video, auditory) is as follows:
Linear transformation
Quantisation (sometimes referred to as truncation) of transformation coefficients
Rasterisation followed by Run-Length encoding and entropy coding
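The three stages above can be sketched in miniature - this is a toy 1-D version (real codecs use 2-D block transforms and proper entropy coding): a DCT-II as the linear transform, a single quantisation factor, and run-length encoding of the resulting coefficients:

```python
import math

def dct(block):
    # Stage 1: linear transformation (unscaled 1-D DCT-II).
    N = len(block)
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(block))
            for k in range(N)]

def quantize(coeffs, q=10):
    # Stage 2: truncating division by a quantisation factor.
    # Small high-frequency coefficients collapse to zero here.
    return [int(c / q) for c in coeffs]

def run_length(values):
    # Stage 3: collapse runs (mostly zeros after quantisation)
    # into [value, count] pairs, ready for entropy coding.
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

block = [52, 55, 61, 66, 70, 61, 64, 73]
encoded = run_length(quantize(dct(block)))
```

Stage 2 is where the loss happens; stages 1 and 3 are reversible on their own.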
Quantisation is referred to as such because when you have, for example, a coefficient of 27, and your quantisation factor for a coefficient in that position is 10, you will have 2 after you perform the (truncating) division. During reconstruction you will multiply by the quantisation factor again and get 20 ≠ 27, which is why it is named quantisation: just like continuous -> discrete quantisation, some information becomes unrecoverable in the process.
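That 27 -> 2 -> 20 arithmetic, written out as a quantize/dequantize pair:

```python
def quantize(coeff, factor):
    # Truncating division: 27 / 10 -> 2.7 -> 2
    return int(coeff / factor)

def dequantize(value, factor):
    # Reconstruction: 2 * 10 -> 20, not the original 27.
    return value * factor

q = quantize(27, 10)
assert q == 2
assert dequantize(q, 10) == 20  # information lost: 20 != 27
```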
Basically, values that are members of a discrete set, rather than continuous, are not precluded from quantisation just because they are already discrete.