version 2008-02-06 : trying phasing-out and bit flipping/counting

version 2010-07-06 : bias visualisation

version 2015-08-30 : added links

Around 2001, I found an interesting algorithm that could be useful for several applications. I called it "Recursive Range Reduction" ("3R" for short) and it is described in some depth here (EN). It is an encoder, not a transform, which means that it requires correctly pre-processed input data to be efficient (so, like most encoders, 3R is expected to expand "random" data).
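The core idea can be sketched like this (my reading of the algorithm, not the original C code : the function names and the fixed 16-bit header are arbitrary choices for the example). A binary tree of partial sums is built over the block, and each left-child sum is emitted with just enough bits for the range [0, parent sum] ; the right sum is recovered by subtraction, and an all-zero subtree costs nothing :

```javascript
function bitsFor(n) {            // bits needed to represent a value in [0, n]
  let b = 0;
  while ((1 << b) <= n) b++;     // smallest b with 2^b > n
  return b;                      // bitsFor(0) === 0: a zero sum costs nothing
}

function encode3R(block) {       // block length must be a power of two
  const bits = [];
  const emit = (v, width) => {
    for (let i = width - 1; i >= 0; i--) bits.push((v >> i) & 1);
  };
  const total = block.reduce((a, b) => a + b, 0);
  emit(total, 16);               // header: total sum, fixed 16 bits for simplicity
  const recurse = (lo, hi, sum) => {
    if (hi - lo === 1 || sum === 0) return;  // leaf value or zero subtree: implicit
    const mid = (lo + hi) >> 1;
    let left = 0;
    for (let i = lo; i < mid; i++) left += block[i];
    emit(left, bitsFor(sum));    // left sum fits in [0, sum]; right = sum - left
    recurse(lo, mid, left);
    recurse(mid, hi, sum - left);
  };
  recurse(0, block.length, total);
  return bits;
}

function decode3R(bits, length) {
  let pos = 0;
  const read = (width) => {
    let v = 0;
    for (let i = 0; i < width; i++) v = (v << 1) | bits[pos++];
    return v;
  };
  const out = new Array(length).fill(0);
  const total = read(16);
  const recurse = (lo, hi, sum) => {
    if (hi - lo === 1) { out[lo] = sum; return; }  // a leaf's sum is its value
    if (sum === 0) return;       // whole subtree is zero, nothing was sent
    const mid = (lo + hi) >> 1;
    const left = read(bitsFor(sum));
    recurse(lo, mid, left);
    recurse(mid, hi, sum - left);
  };
  recurse(0, length, total);
  return out;
}
```

Note how the leaves transmit nothing at all : once the recursion reaches a single element, its value *is* the remaining sum, which the decoder already knows.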

You can find a dedicated page on Hackaday and a few pages in the 4th edition of David Salomon's book "Data Compression: The Complete Reference", described at http://www.davidsalomon.name/DC4advertis/DComp4Ad.html (Google Books).

This page is a Javascript port of the 3R algorithm that was developed in C in 2003. The first version ("3R") shows the original idea. "3Rpi" is an enhanced version, thanks to Steven Pigeon's suggestion to use "Phase-In" codes : a modest increase in computational complexity gains several bits per block on average. Later (February 2008), I modified the "Phase-In" coder to take the "Big Numbers Law" into account : there are more large numbers than small numbers, so I concluded that the large numbers should use fewer bits :-) The new encoder is called "3Rpo" (for "Phase-Out", the reverse of "Phase-In" :-D) and a single subtraction was added to the source, resulting in an average gain of 1 bit (and more with "interesting" data sets).
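Phase-In codes are also known as truncated binary codes. A minimal sketch (the function names are mine) : for a value in [0, n-1], the first `u = 2^(k+1) - n` codes use k bits and the rest use k+1 bits. The "Phase-Out" trick is literally the single subtraction mentioned above : mirror the range so the large values land on the short codes instead :

```javascript
// Phase-In (truncated binary) code for v in [0, n-1]: returns an array of bits.
function phaseIn(v, n) {
  if (n <= 1) return [];         // only one possible value: zero bits needed
  let k = 0;
  while ((1 << (k + 1)) <= n) k++;      // k = floor(log2(n))
  const u = (1 << (k + 1)) - n;         // number of short (k-bit) codes
  const emit = (x, width) => {
    const bits = [];
    for (let i = width - 1; i >= 0; i--) bits.push((x >> i) & 1);
    return bits;
  };
  return v < u ? emit(v, k) : emit(v + u, k + 1);
}

// Phase-Out: the single added subtraction mirrors the range,
// so the LARGEST values get the short codes.
function phaseOut(v, n) {
  return phaseIn(n - 1 - v, n);
}
```

For example, with n = 6 (k = 2, u = 2), Phase-In gives values 0 and 1 two-bit codes and 2..5 three-bit codes ; Phase-Out hands the two-bit codes to 5 and 4 instead.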

Sometimes, 3Rpi is shorter than 3Rpo, so this page implements a combined version called "3Rpio". The results of 3Rpi and 3Rpo are compared, the shorter result is kept, and a bit is added (to signal to the decoder which encoder was chosen). You can keep hitting the "random" button tens or hundreds of times : the difference is very small but 3Rpo wins anyway ! Now, "real data" are not random, so check for yourself.
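The selection step costs exactly one bit. A sketch of the idea (`piBits` / `poBits` stand for the outputs of the 3Rpi and 3Rpo coders ; the flag convention is my assumption, not the page's actual code) :

```javascript
// "3Rpio" selection: keep the shorter of the two bit streams and prepend
// one flag bit so the decoder knows which coder produced what follows.
function pickShorter(piBits, poBits) {
  return piBits.length <= poBits.length
    ? [0, ...piBits]             // flag 0: phase-in stream follows
    : [1, ...poBits];            // flag 1: phase-out stream follows
}
```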

In July 2010, I finally implemented yet another variation of the compressor : I noticed that in some specific cases, the algorithm generates long strings of consecutive 1s or 0s. The new "-x" variants of the algorithms simply XOR consecutive bits of the output stream (this is computationally insignificant and easily reversed by the decoder) and count the number of 1s. In some cases (many "spikes" and 0s), there is enough bias for an arithmetic or range coder to exploit. But again, YMMV with your data, and I have yet to find an arithmetic encoder that does not expand the output when given equiprobable (random) bits.
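The "-x" pass can be sketched in a few lines (function names are mine) : each bit is XORed with its predecessor, so a long run of identical bits becomes a run of 0s with a single 1 at each transition, and the decoder undoes it with a running prefix-XOR :

```javascript
// Forward pass: bit[i] ^ bit[i-1], with an implicit leading 0.
// A run of identical bits becomes 0s; each transition becomes a 1.
function xorPass(bits) {
  const out = [];
  let prev = 0;
  for (const b of bits) { out.push(b ^ prev); prev = b; }
  return out;
}

// Inverse pass: running XOR of the transformed stream recovers the original.
function unXorPass(xored) {
  const out = [];
  let acc = 0;
  for (const b of xored) { acc ^= b; out.push(acc); }
  return out;
}
```

For example, `xorPass([1,1,1,1,0,0,0])` yields `[1,0,0,0,1,0,0]` : two 1s instead of four, which is the kind of bias a downstream arithmetic or range coder could exploit.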

Todo : design an efficient offset-removal algorithm (low-order linear approximation ?)

Input data : *(move/drag the cursors to change them)*

Output data (normal) :

Output with phasing-in :

Output with phasing-out :

Legend : Size prefix, Total sum without MSB, Partial sum, Some of the terminal values

Accumulate bias (**1** vs **0**) over runs :

| algo | total | gain | 1s | ratio |
|------|-------|------|----|-------|