Here's how it works: The image quality depends on two things -
1. Bitrate
2. Resolution
Each frame in a movie is composed of a fixed number of pixels, and the bitrate determines how many bits are available to store that information. With compression, each pixel in a frame only gets so many bits to store its color information, and if you lower the bitrate too much, a good portion of that information is lost. To fix this, you can either
1. increase the bitrate, or
2. decrease the resolution.
Increasing the resolution will only make things worse.
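To make the trade-off concrete, here's a small sketch of the bits-per-pixel arithmetic. The numbers (900 kbps, 23.976 fps, the two resolutions) are just examples I picked for illustration, not anything from a specific encoder:

```python
# Bits per pixel (bpp) = bitrate / (width * height * framerate).
# Example figures only; 23.976 fps is typical for film-sourced video.
def bits_per_pixel(bitrate_kbps, width, height, fps=23.976):
    return (bitrate_kbps * 1000) / (width * height * fps)

# Same bitrate, two resolutions: the smaller frame leaves more bits per pixel.
print(bits_per_pixel(900, 640, 480))  # ~0.122 bpp - too low, expect artifacts
print(bits_per_pixel(900, 512, 384))  # ~0.191 bpp - much closer to safe
```

Notice that dropping from 640x480 to 512x384 raises the bpp by more than half, because the pixel count falls with both dimensions at once.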
Obviously you want as high a resolution as you can get, and that means keeping the bitrate high enough as well. So, when I encode DivX movies, I use a bitrate calculator (there are many available for free; I even wrote one myself) to find the highest bitrate that will make a movie of a certain length fit a certain file size. If the movie is longer, you have to increase either the bitrate or the size, depending on your priorities. Then you find the maximum resolution you can reasonably use, and encode it like that. I don't remember the exact math, but it basically comes out to at least 0.20 bits per pixel, where bits per pixel = bitrate / (width × height × framerate). If you have less than that, the image quality will suffer.
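The calculator workflow above can be sketched in a few lines. This is a minimal version, assuming a fixed-bitrate audio track and ignoring container overhead (a real calculator accounts for that), and using 1 MB = 8192 kbit; the 700 MB / 100 min figures are just an example:

```python
# Step 1: how much bitrate is left for video once the file-size budget
# is spread over the movie's length and the audio track is subtracted.
def max_video_bitrate_kbps(target_size_mb, length_min, audio_kbps=128):
    total_kbits = target_size_mb * 8 * 1024   # MB -> kbit (1 MB = 8192 kbit)
    seconds = length_min * 60
    return total_kbits / seconds - audio_kbps

# Step 2: the largest frame (in pixels) that still keeps bpp >= 0.20.
def max_pixels_per_frame(video_kbps, fps=23.976, min_bpp=0.20):
    return (video_kbps * 1000) / (fps * min_bpp)

# Fit a 100-minute movie on a 700 MB CD:
v = max_video_bitrate_kbps(700, 100)   # ~828 kbps left for video
print(int(max_pixels_per_frame(v)))    # ~172,000 pixels, e.g. roughly 528x320
```

From there you'd round down to a resolution whose dimensions the codec accepts (DivX wants multiples of 16) and whose pixel count stays under that ceiling.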
Here's a link to my old college website (can't believe it's still up); I have a bitrate calculator there you can download, and that's the one I use.
Linky (in the freebies section)