It's not hard to see that there's a difference in the Hitman textures. I'd even argue the difference would still be visible in heavily compressed images.
There is no way compression alone could cause such a large difference in texture quality. It's not like the side-by-side was run through needsmorejpeg.com.
You guys are implying that video compression is random, or at least random enough to create the texture difference we see in the carpet still. As far as I can see, there's no way. Explain to me how it could compress three near-identical images differently, other than "encoder voodoo magic" of course.
As far as I know, video compression is mostly DCT (rewriting the data as a sum of cosine functions so it's easier to compress), quantization plus entropy coding (rounding the DCT'ed block's coefficients and packing them compactly), frame prediction, and psychovisual hacks that cut image data even further: reducing colors, deciding which parts of the image get more or less compression, weighting luminance, and so on.
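
To make that concrete, here's a toy, JPEG-style sketch of the DCT + quantization step. This is my own illustration, not any particular codec: the quantization table is the standard JPEG luma table, and real video codecs (H.264 and friends) use integer transforms and rate control instead, so treat the exact numbers as illustrative.

```python
# Toy sketch of intra-frame DCT + quantization (JPEG-style).
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (~quality 50).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # one 8x8 pixel block

coeffs = dctn(block, norm='ortho')            # rewrite the block as cosine terms
quantized = np.round(coeffs / Q)              # the lossy step: round away fine detail
decoded = idctn(quantized * Q, norm='ortho')  # what the decoder reconstructs

print(np.abs(block - decoded).max())  # bounded, deterministic error
```

Note there's no randomness anywhere in that pipeline: the same block goes in, the same rounded coefficients come out, every time.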
I mean, I can't see how all of that could produce the difference with the carpets in this scene. In fact, these techniques are deterministic and formulaic; if anything, they should make near-identical inputs come out near-identical, not push them apart. I can't imagine codec designers wanting small differences in the source video to manifest as larger differences in the output.
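
Here's a toy check of that "formulaic" intuition, using the same kind of DCT + quantization as above (again my own sketch; the flat quantizer step is something I picked for simplicity, where real tables vary per coefficient): feed in two blocks that differ by tiny per-pixel noise, and the quantized outputs differ just as little.

```python
# Two near-identical blocks through the same deterministic pipeline.
import numpy as np
from scipy.fft import dctn

Q = 16.0  # flat quantizer step for simplicity
rng = np.random.default_rng(1)
a = rng.integers(0, 256, (8, 8)).astype(float) - 128
b = a + rng.normal(0, 1, (8, 8))  # a near-identical copy

qa = np.round(dctn(a, norm='ortho') / Q)
qb = np.round(dctn(b, norm='ortho') / Q)
print(np.abs(qa - qb).max())  # typically 0 or 1: small differences stay small
```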
I mean, I might be completely wrong (there's a good chance, since I don't know the nitty-gritty of encoding), so feel free to show me. But if you need more proof than a YouTube video, I need more proof than "oh no, you don't know what the encoder can do to the image".