BCH decoder for S-NET

In my previous post, I talked about the coding used by the S-NET cubesats and the implementation of my decoder in gr-satellites. That decoder was still missing the BCH decoding step. I have now implemented a BCH decoder and included it in gr-satellites. Here I describe the details of this decoder.

Although I already had a BCH decoder in C that Walter Frese kindly sent me, I have decided to implement my own in Python. The problem with several FEC decoders is that their licensing is not very clear. They originally come from textbook examples that get passed on and modified. For instance, several of the decoders available online for BCH and other codes are based on Robert Morelos-Zaragoza’s implementations, whose licence prohibits commercial use. Thus, to keep licensing clean, I implemented my own decoder. In doing so, I have also learned the details of the decoding algorithm.

My BCH decoder has been made with simplicity in mind. The code is brief and simple to understand. Its performance (in terms of decoding speed) could be optimized, but performance is not very important for low bitrate telemetry decoding. The decoder implementation can be found here.

S-NET uses three related BCH codes: BCH(15,11,3), BCH(15,7,5) and BCH(15,5,7). These are cyclic codes over \(GF(2)\) that are constructed with the help of the finite field \(GF(16)\). The only difference between the three codes is the choice of the generator polynomial.
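These parameters correspond to the standard narrow-sense BCH codes of length 15. Assuming (as I do in the sketches below) that \(GF(16)\) is built with the usual primitive polynomial \(x^4 + x + 1\) and that \(\alpha\) is a root of it, the generator polynomial of BCH(15,11,3) is the minimal polynomial of \(\alpha\), while the other two codes also multiply in the minimal polynomials of \(\alpha^3\) and \(\alpha^5\):

\(g_{15,11}(x) = x^4 + x + 1,\)

\(g_{15,7}(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1,\)

\(g_{15,5}(x) = g_{15,7}(x)\,(x^2 + x + 1) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1.\)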

BCH decoding is explained very well on the Wikipedia page about BCH codes, so I've followed that closely. The decoding procedure uses arithmetic in \(GF(16)\). I have implemented multiplication and inversion by using exponential and logarithm tables, which have been calculated in Sage.
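To give an idea of how this works, here is a small Python sketch of the \(GF(16)\) arithmetic. This is only an illustration, not the exact code in gr-satellites, and it assumes the primitive polynomial \(x^4 + x + 1\):

# exp_table[j] = alpha^j, built by repeatedly multiplying by alpha (i.e., by x)
# and reducing modulo the primitive polynomial x^4 + x + 1 (0x13)
exp_table = [1]
for _ in range(14):
    a = exp_table[-1] << 1
    if a & 0x10:
        a ^= 0x13
    exp_table.append(a)

# log_table[a] = j such that alpha^j = a (zero has no logarithm)
log_table = {a: j for j, a in enumerate(exp_table)}

def gf_mul(a, b):
    """Multiply two GF(16) elements using the exp/log tables."""
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]

def gf_inv(a):
    """Invert a nonzero GF(16) element: the inverse of alpha^j is alpha^(15-j)."""
    return exp_table[(15 - log_table[a]) % 15]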

The received word \((c_0,\ldots,c_{14})\) is represented as the polynomial \(R(x) = \sum_j c_j x^j\). Note that here the first element of the codeword is the constant term of the polynomial. This is the convention used in S-NET, but the mapping between codewords and polynomials can also be done in the opposite order.

Syndrome calculation is easy to do by using the exponential table. Since the syndromes are \(s_k = R(\alpha^k) = \sum_j c_j \alpha^{jk}\), they can be computed by summing up some powers of \(\alpha\). Here \(\alpha\) is the primitive element of \(GF(16)\) that we have used to construct our exponential and logarithm tables. Note that, from the point of view of the decoder, the only difference between the three codes BCH(15,11,3), BCH(15,7,5) and BCH(15,5,7) is the number of syndromes that are calculated (2, 4 and 6 respectively).
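As an illustration, here is a sketch of the syndrome computation, using exp_table from the sketch above (again, not the literal gr-satellites code):

def syndromes(c, num_syndromes):
    """Compute the syndromes s_1, ..., s_k (k = num_syndromes = 2, 4 or 6)
    for the 15-bit list c, where c[j] is the coefficient of x^j."""
    s = []
    for k in range(1, num_syndromes + 1):
        sk = 0
        for j, bit in enumerate(c):
            if bit:
                sk ^= exp_table[(j * k) % 15]   # addition in GF(16) is XOR
        s.append(sk)
    return s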

The next step in the decoding algorithm is to calculate the coefficients of the error locator polynomial. I have used the Peterson–Gorenstein–Zierler algorithm, as described in Wikipedia. This involves solving a linear system with coefficients in \(GF(16)\). The matrix of this system is a square matrix of size 1, 2 or 3, depending on the number of syndromes.
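Concretely, writing \(t\) for the number of correctable errors, the unknowns are the coefficients \(\lambda_1,\ldots,\lambda_t\) of the error locator polynomial, and the matrix and right-hand side are built directly from the syndromes. A sketch of this (illustration only; note that over \(GF(2^m)\) the minus signs of the textbook formulation disappear):

def pgz_system(s):
    """Build the Peterson-Gorenstein-Zierler system from s = [s_1, ..., s_2t].
    Returns the Hankel matrix A with A[i][j] = s_{i+j+1} and the right-hand
    side (s_{t+1}, ..., s_{2t}); the unknown vector is (lambda_t, ..., lambda_1)."""
    t = len(s) // 2
    A = [[s[i + j] for j in range(t)] for i in range(t)]
    b = [s[t + i] for i in range(t)]
    return A, b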

To solve this system, I use Cramer’s rule. The determinants are computed by Sarrus’ rule. Since the matrix is small, it is not difficult to write all the calculations explicitly by hand. This step could be optimized, but it is easy and clear to do it in this way.
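As an example, this is how Cramer’s rule looks for the \(2 \times 2\) system that appears for BCH(15,7,5); the \(1 \times 1\) and \(3 \times 3\) cases are handled in the same spirit. The sketch uses gf_mul and gf_inv from above and is only an illustration of the idea:

def solve_2x2(A, b):
    """Solve the 2x2 GF(16) system A x = b by Cramer's rule.
    For the PGZ system above the solution comes out as (lambda_2, lambda_1)."""
    det = gf_mul(A[0][0], A[1][1]) ^ gf_mul(A[0][1], A[1][0])
    if det == 0:
        return None   # singular matrix: fewer errors than assumed (or too many)
    det_inv = gf_inv(det)
    x0 = gf_mul(gf_mul(b[0], A[1][1]) ^ gf_mul(b[1], A[0][1]), det_inv)
    x1 = gf_mul(gf_mul(A[0][0], b[1]) ^ gf_mul(A[1][0], b[0]), det_inv)
    return x0, x1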

After having computed the coefficients of the error locator polynomial \(\Lambda(x)\), I calculate its roots by brute-force search, by computing \(\Lambda(\alpha^{-j})\) for \(j = 0,\ldots,14\). This is done again by using the exponential table. This step can be optimized by using Chien search, but this is not so important, because we only have to try 15 possible roots and \(\Lambda(x)\) has degree at most 3, so even the naïve computation is fast.
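A sketch of this brute-force search, with lam holding the coefficients \(\lambda_1,\ldots,\lambda_t\) of \(\Lambda(x) = 1 + \lambda_1 x + \cdots + \lambda_t x^t\) (that is, the PGZ solution vector reversed); again this is an illustration rather than the exact decoder code:

def error_positions(lam):
    """Return the positions j such that Lambda(alpha^(-j)) = 0."""
    positions = []
    for j in range(15):
        x = exp_table[(-j) % 15]         # alpha^(-j)
        val = 1                          # constant term of Lambda
        xpow = 1
        for coef in lam:                 # lam = [lambda_1, ..., lambda_t]
            xpow = gf_mul(xpow, x)       # x^i
            val ^= gf_mul(coef, xpow)    # add lambda_i * x^i
        if val == 0:
            positions.append(j)
    return positions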

Since the roots of \(\Lambda(x)\) indicate the positions of the errors, the final decoding step is to flip the erroneous bits.
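In terms of the sketches above, this last step amounts to something like the following:

def correct(c, lam):
    """Flip the bits of the received word c at the positions flagged as errors."""
    corrected = list(c)
    for j in error_positions(lam):
        corrected[j] ^= 1
    return corrected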
