zlib uses two encoding schemes: copying (LZ77) and entropy encoding (Huffman). Entropy encoding assumes that some bytes (00, 14, and 15 in your example) occur more often than others (73, for instance). It then gives the more common bytes shorter representations and the less common ones longer representations. (Yes, longer than the originals.) There is a better method of entropy encoding, called "arithmetic coding". It has had patent issues in the USA, but many of those patents have expired.
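To make the "shorter codes for common bytes" idea concrete, here is a small sketch of how Huffman code lengths fall out of byte frequencies. This is not zlib's actual implementation; the function name and the sample data are made up for illustration:

```python
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    """Compute Huffman code lengths (in bits) per byte value.
    Common bytes end up with short codes, rare bytes with long ones."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate case: one symbol
        return {next(iter(freq)): 1}
    # Heap entries: (weight, tiebreak, {symbol: depth so far}).
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)      # two lightest subtrees
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Made-up sample: 00 is common, 73 is rare.
sample = b"\x00" * 8 + b"\x14" * 4 + b"\x15" * 2 + b"\x73"
print(huffman_code_lengths(sample))
```

Running this on the sample gives 00 a 1-bit code, 14 a 2-bit code, and the rare bytes 3-bit codes, which is exactly the trade-off described above: the rare byte's code is longer than a byte would be in a fixed scheme of that size.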
Looking at your example without knowing its meaning, a 3- or 4-bit run-length encoding of zeroes looks like it would help, followed by Huffman coding for the rest. You could put the run lengths at the beginning or the end so that the Huffman stage only has to deal with the other bytes.
The zero RLE for the first line could look like:
15040e14 15151614 1c1b111c 1c141214 14160b1c 1400 3c00282800 0c1c00 0163
To decode, every time you encounter a 00 byte, repeat it the number of times given by the corresponding four-bit count stored at the end. Note that a lone 00 now needs a count of zero to say it is not repeated, so this can expand the data; any compression scheme can produce a longer file in the worst case.
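One way to realize this zero-RLE idea in code. For clarity the body and the four-bit counts are returned separately; how you pack the counts into the actual file is up to you, so treat this as a sketch rather than a finished format:

```python
def rle_zero_encode(data: bytes):
    """Collapse each run of 00 bytes to a single 00 in the body, and
    record (run length - 1) as a four-bit count in a separate list.
    Runs are capped at 16 so each count fits in four bits."""
    body, counts, i = bytearray(), [], 0
    while i < len(data):
        if data[i] == 0:
            run = 1
            while i + run < len(data) and data[i + run] == 0 and run < 16:
                run += 1
            body.append(0)
            counts.append(run - 1)   # 0 means "a lone 00, do not repeat"
            i += run
        else:
            body.append(data[i])
            i += 1
    return bytes(body), counts

def rle_zero_decode(body: bytes, counts) -> bytes:
    """Expand each 00 in the body using the next four-bit count."""
    out, it = bytearray(), iter(counts)
    for b in body:
        if b == 0:
            out.extend(b"\x00" * (next(it) + 1))
        else:
            out.append(b)
    return bytes(out)
```

For example, `rle_zero_encode(b"\x15\x04\x00\x00\x00\x14\x00\x73")` yields the body `15 04 00 14 00 73` plus the counts `[2, 0]`: the run of three zeroes costs one byte plus a nibble, while the lone 00 costs the extra zero nibble mentioned above.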
But if you know the reasons why certain bytes are written to this file, you might be able to compress it more. Is the beginning a "magic number", for example? Are some of these integers that never exceed a certain value?
From the department of other stupid ideas: Could you write some of the file's data into the filename? Base64-encoded, of course.
Conversely, if your filesystem uses a minimum number of bytes for each file (and most do, usually at least 512 bytes), why do you need to compress these files further?
Edit: I see you asked about arranging the files better. Putting more of the zeroes at the beginning may help zlib. This may result in better compression than my simple RLE idea.
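To see why gathering the zeroes together can help zlib, here is a toy comparison: the same bytes in two layouts, with zeroes scattered between values versus grouped into one long run. The data is synthetic, so real records may behave differently:

```python
import random
import zlib

random.seed(0)
# 2048 pseudo-random nonzero bytes: essentially incompressible on their own.
payload = bytes(random.randrange(1, 256) for _ in range(2048))

# Same byte counts, two layouts.
scattered = b"".join(bytes([b, 0]) for b in payload)   # zero after every value
grouped = payload + b"\x00" * len(payload)             # all zeroes in one run

c_scattered = zlib.compress(scattered, 9)
c_grouped = zlib.compress(grouped, 9)
print(len(c_scattered), len(c_grouped))
```

In the scattered layout, Huffman can only give the frequent 00 byte a short code; in the grouped layout, LZ77 collapses the whole zero run into a few back-references, which should compress noticeably smaller here.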
Also, Train's idea seems unlikely to result in compression: the position in pi's digits where your data appears could easily take more digits to write down than the original value itself.