
Image Compression

Author: eilemann@gmail.com
State:

Overview

Equalizer implements a fast image compression algorithm used for network transfers. The algorithm is a modified run-length encoding (RLE), which exploits characteristics of the image data for better performance.

Implementation

The compression algorithm uses eight-byte values as the token size. The input data typically consists of four-byte elements (one pixel each), and using eight-byte tokens is slightly faster than four-byte tokens, especially on 64-bit machines. Comparing whole pixels instead of individual bytes allows runs of identical pixels in the input data to be compressed. Furthermore, we choose a marker value which is not present in the input data, and can therefore guarantee that the compressed data is not bigger than the input data (plus an additional eight bytes for the marker). The compressed data has the following grammar:

  input: marker data
  data: compressedData | symbol
  compressedData: marker symbol count
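
As an illustration, a compressor producing this grammar could be sketched as follows. The function name, the use of std::vector and the linear search for an unused marker value are assumptions made for the sketch, not Equalizer's actual implementation; the run-length threshold of three symbols is the one described below.

  #include <cstdint>
  #include <vector>
  #include <unordered_set>

  // Sketch: RLE compression of 64-bit tokens following the grammar above.
  std::vector<uint64_t> rleCompress(const std::vector<uint64_t>& in)
  {
      // Pick a marker value that does not occur in the input, so a literal
      // symbol can never be mistaken for the start of a compressed run.
      std::unordered_set<uint64_t> seen(in.begin(), in.end());
      uint64_t marker = 0xFFFFFFFFFFFFFFFFull;
      while (seen.count(marker))
          --marker; // linear probing for an unused value (sketch only)

      std::vector<uint64_t> out;
      out.reserve(in.size() + 1);
      out.push_back(marker); // the output stream always starts with the marker

      for (size_t i = 0; i < in.size(); )
      {
          // Measure the run of equal symbols starting at position i.
          size_t run = 1;
          while (i + run < in.size() && in[i + run] == in[i])
              ++run;

          if (run > 3) // long runs become 'marker symbol count'
          {
              out.push_back(marker);
              out.push_back(in[i]);
              out.push_back(static_cast<uint64_t>(run));
          }
          else         // short runs are copied verbatim
              out.insert(out.end(), run, in[i]);

          i += run;
      }
      return out;
  }

In the worst case, when no run is longer than three symbols, the output contains every input token plus the leading marker, which matches the size guarantee of input size plus eight bytes given above.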

The compression algorithm compresses runs of more than three equal input symbols into 'marker symbol count', and can therefore guarantee a maximum size for the output stream. The decompression algorithm simply copies each literal symbol from the input to the output stream, and expands each compressedData sequence into the corresponding number of symbols.
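
A matching decompressor could be sketched as follows; again, the names and container types are illustrative assumptions rather than Equalizer's API. The sketch assumes a well-formed input stream as produced by the compressor above.

  #include <cstdint>
  #include <vector>

  // Sketch: expand an RLE stream whose first token is the marker.
  std::vector<uint64_t> rleDecompress(const std::vector<uint64_t>& in)
  {
      std::vector<uint64_t> out;
      if (in.empty())
          return out;

      const uint64_t marker = in[0];
      for (size_t i = 1; i < in.size(); )
      {
          if (in[i] == marker) // 'marker symbol count': expand the run
          {
              const uint64_t symbol = in[i + 1];
              const uint64_t count  = in[i + 2];
              out.insert(out.end(), count, symbol);
              i += 3;
          }
          else                 // literal symbol: copy it through
          {
              out.push_back(in[i]);
              ++i;
          }
      }
      return out;
  }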

Open Issues

Handling of non-four-byte pixels, which are not currently used in Equalizer.

The eight-byte token size can cause uninitialized memory reads at the end of the pixel data. This is not an error condition, since the memory buffer is big enough.