Compression and correlation function

Status
Not open for further replies.

Qacer

Platinum Member
Apr 5, 2001
2,722
1
86
Regarding this: imagemodel.jpg

Why are natural images difficult to compress if they have high correlation between neighboring image samples? I thought high correlation translated to highly similar samples, meaning it could be exploited to minimize the redundancies in the image.

Anyone care to explain this in layman terms? Thanks!
 

TecHNooB

Diamond Member
Sep 10, 2005
7,460
1
76
Doesn't make sense to me either -_-; May pertain specifically to moving images.
 

toslat

Senior member
Jul 26, 2007
216
0
76
Lossless compression is typically performed in two stages:
1. Decorrelation: to remove redundancy. In your example, this is achieved by motion compensation and using the image model.
2. Entropy encoding: to encode the symbols using the smallest possible number of bits needed, with often occurring symbols being represented with smaller codewords.

The entropy encoder works on a symbol-by-symbol basis and does not utilize inter-symbol correlation. Without removing the correlation (which is evidence of redundant information), you will be sending the encoder a bloated, redundancy-laden input, resulting in poor compression.
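
If it helps to see it concretely, here is a rough Python sketch of those two stages on a single row of pixels. The toy data and function names are mine, not from the article: differencing stands in for the decorrelation step, and the zeroth-order entropy stands in for the best average codeword length an ideal entropy coder could reach.

Code:
import math
from collections import Counter

def entropy_bits_per_symbol(symbols):
    # Shannon entropy of the symbol stream: a lower bound on the average
    # number of bits per symbol an ideal entropy coder could achieve.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A smooth "natural" row: neighboring samples are highly correlated.
row = [100, 101, 101, 102, 103, 103, 104, 105, 105, 106]

# Stage 1 (decorrelation): keep the first sample, then store differences.
residual = [row[0]] + [b - a for a, b in zip(row, row[1:])]

# Stage 2 (entropy coding) works symbol by symbol, so it only sees these
# per-symbol statistics; the residual stream needs far fewer bits per symbol.
print("raw entropy     :", entropy_bits_per_symbol(row))
print("residual entropy:", entropy_bits_per_symbol(residual))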
 
Jul 18, 2009
122
0
0
I know I probably shouldn't, but I want to take a stab at this question:

Originally posted by: Qacer
Why are natural images difficult to compress if they have high correlation between neighboring image samples? I thought high correlation translated to highly similar samples, meaning it could be exploited to minimize the redundancies in the image.

Correct. But according to your source, video encoding is a three-step process, and only the first step uses intercorrelation:

Transformation: Looks for similarities between nearby image graphs, and then builds a new graph which better exploits these similarities.
Quantization: Looks at the new graph generated by the first step, and removes redundant nodes from this graph (for instance, nodes that overdetermine the value of a pixel).
Reordering: I believe this is basically just a fancy name for token-based compression.

The paragraph you're asking about is talking about residual correlations, which are correlations that are still present after you finish the first step. If you have high residual correlation, that means you didn't do a good job of transforming your graph, because it still has a bunch of leftover similarities you didn't exploit.

Anyone care to explain this in layman terms?

I don't know. Is "residual correlation" a layman term?
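
Probably not, so here is a toy way to see it (a sketch of my own, not from the source): measure how correlated each sample is with its neighbor before and after a simple previous-pixel prediction. A good transform leaves almost none of that neighbor correlation behind; whatever is left over is the residual correlation.

Code:
def lag1_correlation(xs):
    # Pearson correlation between the sequence and itself shifted by one sample.
    a, b = xs[:-1], xs[1:]
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

row = [10, 12, 13, 15, 18, 20, 23, 25, 28, 30]      # smooth ramp of pixel values
residual = [b - a for a, b in zip(row, row[1:])]    # after previous-pixel prediction

print("neighbor correlation, raw     :", lag1_correlation(row))
print("neighbor correlation, residual:", lag1_correlation(residual))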
 

toslat

Senior member
Jul 26, 2007
216
0
76
Let's take a second stab at this:

The phrase quoted is a bit misleading. Natural images are not 'difficult to compress'; more precisely, they do not compress well if entropy encoded directly. As you rightly noted, correlation is a sign of redundancy, and thus hints at possible compression gains if that knowledge of the correlation is utilized.

The first thing to note is that entropy encoding is not a compression system per se, but rather a form of minimal representation i.e. optimal encoding. An entropy encoder aims to represent a data source (in this case an image) using the minimum average number of bits possible without loss of information.

The entropy encoder would map each symbol (likely each pixel, in the case of images) to a codeword. It does not use information about the correlation between successive symbols (i.e. neighboring pixels in images), and thus we end up optimally encoding the redundant information along with the actual information. This results in poor compression.

For example, suppose we have an entropy encoder which represents (based on probability of occurrence) a pixel value of 0 by the codeword 0 (1 bit), 1 by 10 (2 bits), and 2 by 110 (3 bits). We then use it to encode three plain images, i.e. images in which all the pixels have the same value:

- If you send in a 10 x 10 (100 pixels) image, all with a pixel value of 0, the output would be 100 bits (1 bit per pixel)
- If you send in a 10 x 10 (100 pixels) image, all with a pixel value of 1, the output would be 200 bits (2 bits per pixel)
- If you send in a 10 x 10 (100 pixels) image, all with a pixel value of 2, the output would be 300 bits (3 bits per pixel)

And we can see that the compression is poor/suboptimal except for the first case.

Compare that to when we utilize the fact that they are plain images (i.e. perfect pixel correlation) to construct a secondary image in which we store the value of the first pixel and then the difference between each pixel and its predecessor. For each image, the first pixel value would be 0, 1 or 2 (depending on the color) and all other pixel values would be 0 (since they were initially equal). Encoding this with our entropy encoder gives 100 bits for the first image, 101 for the second, and 102 for the third. The results are much better than our first approach.
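
In code, that worked example looks something like this. The codeword table and the 10 x 10 single-color images are exactly as described above; the function names are just mine.

Code:
# Fixed codeword table from the example: shorter codes for likelier values.
CODEWORDS = {0: "0", 1: "10", 2: "110"}

def encoded_bits(pixels):
    # Total number of bits the entropy coder emits for a pixel stream.
    return sum(len(CODEWORDS[p]) for p in pixels)

def decorrelate(pixels):
    # Keep the first pixel, then store each pixel minus its predecessor.
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

for value in (0, 1, 2):
    flat = [value] * 100   # 10 x 10 image, every pixel has the same value
    print(f"all-{value} image: direct = {encoded_bits(flat):3d} bits, "
          f"decorrelated = {encoded_bits(decorrelate(flat)):3d} bits")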
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
If we're talking about lossless compression: natural images, while continuous, have extremely random least-significant-bit information (e.g. 166 blue vs. 167 blue). This is due to noise, mostly thermal noise generated by the sensor that captures the image (assuming the lens is "far away" from the object). Lossy techniques such as JPEG typically have a quantization step which discards a lot of that least-significant-bit information, so while a JPEG is "changed" a lot compared to the original image, it is not perceived as such. Humans are extremely good at detecting small changes in brightness over large areas (gradients), or large changes in brightness over small areas (edges), but comparatively poor at noticing tiny per-pixel variations, so this suits us fine.
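
A quick toy illustration of that (my own made-up numbers, not measurements from a real sensor): take a smooth gradient, add plus or minus one count of noise in the least significant bit, and compare how much entropy the neighbor differences carry. The last step coarsens the values as a crude stand-in for quantization; real JPEG quantizes transform coefficients rather than raw pixels, but the effect on the low-order bits is the same idea.

Code:
import math, random
from collections import Counter

def entropy(xs):
    # Shannon entropy in bits per symbol.
    n, counts = len(xs), Counter(xs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def diffs(xs):
    # Differences between neighboring samples (simple decorrelation).
    return [b - a for a, b in zip(xs, xs[1:])]

random.seed(0)
smooth = [100 + i // 4 for i in range(1000)]               # clean gradient
noisy = [p + random.choice((-1, 0, 1)) for p in smooth]    # +/- 1 LSB noise

# Crude stand-in for a quantization step: coarsen values to a step of 4,
# throwing away the low-order bits (and the noise that lives in them).
quantized = [round(p / 4) * 4 for p in noisy]

print("residual entropy, clean    :", round(entropy(diffs(smooth)), 2), "bits")
print("residual entropy, noisy    :", round(entropy(diffs(noisy)), 2), "bits")
print("residual entropy, quantized:", round(entropy(diffs(quantized)), 2), "bits")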
 