Hello,
This question is in the context of
16KB ROM game compo - technical thread, but can be useful in any CPC programming context.
In such a context, it may be interesting to minimize the memory footprint of a program. If you load a compressed package (program+data) and decompress it all before running, you have saved space before decompression but not after. If your program runs from ROM, you have entirely lost the benefit of saving RAM.
A solution would be a stream-based decompressor. It would be a routine that does not decompress the whole data at once, but only processes a small fraction of it, writes the result and returns control to the caller. The caller then uses the data, and when it needs more, it calls a function "decompress_some_more" again, until there is nothing left to decompress. In cases where decompressed data from the beginning of the stream is no longer used, that memory can be reclaimed, and this is where we win.
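To make the idea concrete, here is a minimal sketch of that interface in Python (not Z80, just to show the shape of it). The `RleStream` class and its token format are hypothetical; run-length encoding stands in for whatever real compression scheme would be used:

```python
class RleStream:
    """Hypothetical streaming decompressor over RLE data.

    The compressed input is alternating (count, value) byte pairs.
    Each call to decompress_some_more() decodes exactly one token,
    so the caller decides when CPU time and buffer space are spent.
    """

    def __init__(self, compressed):
        self.data = compressed
        self.pos = 0

    def decompress_some_more(self):
        """Decode one (count, value) token; return b'' when exhausted."""
        if self.pos >= len(self.data):
            return b''
        count = self.data[self.pos]
        value = self.data[self.pos + 1]
        self.pos += 2
        return bytes([value]) * count

# Caller drives the loop and may reuse/discard old chunks at will.
stream = RleStream(bytes([3, 0x41, 2, 0x42]))  # encodes "AAABB"
out = bytearray()
while True:
    chunk = stream.decompress_some_more()
    if not chunk:
        break
    out += chunk
```

The important part is not the codec but the calling convention: all decoder state lives in the object (on Z80 it would live in a few fixed RAM locations), so the routine can be re-entered at any time.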
People who have piped gzip output to another program on Linux know what I mean. But here we've got no underlying OS for such a generic tool.
Another example is video decoding: a movie in any MPEG standard (fortunately) does not need to keep all the old frames in memory to be fully played. In this context, stream-based decompression is the rule.
Another good example actually useful for the CPC is music. Music data often compresses very well because it is highly redundant.
One could imagine, for example, decompressing music data just as needed. If you decompress it from a ROM, you can for example decompress one pattern (see
http://en.wikipedia.org/wiki/MOD_%28file_format%29), play it while decompressing the next one, and while playing the second, overwrite the memory used by the first with the third, etc. It takes CPU time *while* playing instead of *before* (but no more in total than fully decompressing up front), and it definitely saves memory because at any moment you only hold two patterns.
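That ping-pong scheme can be sketched like this (hypothetical names; `decompress` and `play` stand in for the real routines, and in a real player the decompression of the next pattern would be interleaved with playback of the current one rather than done in one call):

```python
def play_song(compressed_patterns, decompress, play):
    """Play a song keeping at most two decompressed patterns alive.

    buffers[0] and buffers[1] alternate roles: while one is being
    played, the other is filled with the next pattern, then the
    just-played buffer is overwritten with the one after that.
    """
    buffers = [None, None]
    buffers[0] = decompress(compressed_patterns[0])
    for i in range(len(compressed_patterns)):
        cur = i % 2
        nxt = (i + 1) % 2
        if i + 1 < len(compressed_patterns):
            # Interleaved with playback in a real player.
            buffers[nxt] = decompress(compressed_patterns[i + 1])
        play(buffers[cur])  # after this, buffers[cur] is reusable

# Toy usage: "decompression" is just upper-casing, "playing" is logging.
log = []
play_song(["p0", "p1", "p2"], lambda p: p.upper(), log.append)
```

Whatever the codec, peak memory is two pattern buffers, never the whole song.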
Of course, any data can be artificially split into small independently compressed blocks, but you lose a lot of compression efficiency because redundancy between blocks can no longer be exploited, so a decompressor that is genuinely stream-aware is definitely a win.
Intuitively, any decompression routine already implements a loop around a "decompress_some_more" step internally. Calling that step only when needed gives us what we want, but there are constraints to resolve, such as: does the decompressor need to access recently decompressed data (and how far back), or only data from the compressed stream? Only someone who has written a decompressor for a particular algorithm can answer that.
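That "how far back" constraint is the crux for LZ-style codecs, where a back-reference copies from already decompressed output. A toy sketch (token format and `WINDOW` size are hypothetical) shows why: as long as references never reach further back than some fixed window, a streaming decoder only has to keep the last `WINDOW` bytes alive, not everything it has produced:

```python
WINDOW = 8  # toy value; real schemes use windows of a few KB

def decode(tokens):
    """Toy LZ77-style decoder.

    Tokens are either ('lit', byte) or ('copy', distance, length).
    A 'copy' re-reads bytes from the output produced so far; the
    assert enforces the bounded-lookback property that makes a
    ring buffer of WINDOW bytes sufficient in a streaming version.
    """
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, dist, length = tok
            assert dist <= WINDOW, "reference beyond retained window"
            for _ in range(length):
                out.append(out[-dist])
    return bytes(out)

result = decode([('lit', 0x41), ('lit', 0x42), ('copy', 2, 4)])
```

On a Z80 this is the question to ask of any existing depacker: if its matches are bounded to N bytes back, a ring buffer of N bytes plus the decoder state is all the RAM a streaming version needs.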
Does anyone know of a stream-based decompressor written for Z80, or have hints on adapting an existing one?