Using a Golay(24,12) decoder for Golay(23,12)

Yesterday I explained an algebraic decoding algorithm for Golay(24,12) and commented that it was not easy to adapt it to decode Golay(23,12). Today I’ve thought of a simple way to use any Golay(24,12) decoder to decode Golay(23,12).

Recall that a systematic Golay(23,12) code is obtained from a systematic Golay(24,12) code by omitting the last component of each codeword (i.e., the codeword \((c_1,\ldots,c_{24})\) from the Golay(24,12) code gives the codeword \((c_1,\ldots,c_{23})\) from the Golay(23,12) code). Conversely, one can obtain a systematic Golay(24,12) code from a systematic Golay(23,12) code by adding a parity bit at the end. This means that \(c_{24} = \sum_{j=1}^{23} c_j\), where the sums are taken modulo 2, since \(\sum_{j=1}^{24} c_j = 0\) for all words in a Golay(24,12) code.
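
For example, extending a Golay(23,12) codeword to its Golay(24,12) counterpart is just a matter of appending the overall parity (a trivial sketch in Python):

def extend_with_parity(codeword23):
    # Append c24 = c1 + ... + c23 (mod 2), so that all 24 bits sum to 0 mod 2
    return codeword23 + [sum(codeword23) % 2]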

The idea to decode a Golay(23,12) code with a Golay(24,12) decoder is first to restore the parity bit \(c_{24}\) and then apply the Golay(24,12) decoder. However, if there are errors in the received codeword, the restored parity bit can also be in error, increasing the number of errors by one.

The key remark is that both Golay(23,12) and Golay(24,12) are able to correct up to 3 errors. Therefore, we only care about restoring the parity bit correctly in the case when there are exactly 3 errors. If there are 2 or fewer errors, adding another error still gives a word decodable by the Golay(24,12) decoder.

Now note that if there are exactly 3 errors in \((c_1,\ldots,c_{23})\), then \(\sum_{j=1}^{23} c_j\) gives the opposite of the parity of the original codeword, since an odd number of bit flips changes the parity. Therefore, we should restore \(c_{24}\) as\[c_{24} = 1 + \sum_{j=1}^{23} c_j\]and then apply the Golay(24,12) decoder.
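
In code, the trick looks as follows (a minimal sketch: decode_golay24() stands for any Golay(24,12) decoder that returns the corrected 24 bit codeword, or None on failure; it is a hypothetical placeholder, not a particular implementation):

def decode_golay23(received):
    # received: list of 23 bits, possibly containing up to 3 errors
    parity = sum(received) % 2
    # Restore the parity bit for the worst case of exactly 3 errors: an odd
    # number of errors flips the parity, so we append the complement. With
    # 2 or fewer errors this adds at most one more error, which is still
    # within the 3 error correction capability of Golay(24,12).
    word = received + [1 - parity]
    corrected = decode_golay24(word)  # hypothetical Golay(24,12) decoder
    return None if corrected is None else corrected[:23]  # drop the parity bit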

Algebraic decoding of Golay(24,12)

A couple of years ago, I implemented a Golay(24,12) decoder to be used in the GOMX-1 decoder in gr-satellites. The implementation can be seen here. I followed the algorithm in the book The Art of Error Correcting Coding, Section 2.2.3, without taking much care to understand why the algorithm worked. Now I am doing some experiments with Golay(24,12) and Golay(23,12) codes, so I have needed to revisit that algorithm and understand it well in order to adapt it to my needs. Here I explain how this algebraic decoder works.

Continue reading “Algebraic decoding of Golay(24,12)”

Testing LDPC code erasure decoding performance

In my previous post I talked about the RFC5170 LDPC codes used in Outernet. There I explained in some detail the pseudorandom construction of the LDPC codes and the simple erasure decoding algorithm used both in free-outernet and in the official closed-source receiver.

The Outernet LDPC codes follow what I call the “identity scheme”. This is different from the staircase and triangle schemes introduced in the RFC. The identity scheme already appeared in the literature, but it did not make it into the RFC. See, for instance, the report by Roca and Neumann Design, Evaluation and Comparison of Four Large Block FEC Codecs, LDPC, LDGM, LDGM Staircase and LDGM Triangle, plus a Reed-Solomon Small Block FEC Codec, especially Section 2, where it is called “LDGM”.

I also commented that erasure decoding for an LDPC code (or any other linear code) amounts to solving a linear system. This can be done using any algebraic method, such as Gaussian elimination. However, the simple decoding algorithm used in Outernet is as follows: try to find an equation with only one unknown, solve for that unknown, and repeat until the system is solved. Clearly this algorithm can fail even if the system can be solved (see my previous post for some examples and formal results). I will refer to this algorithm as iterative decoding, as is done in the RFC.
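
To make the algorithm concrete, here is a minimal sketch in Python (my own toy implementation over GF(2), not the code used in free-outernet): each parity check equation states that the XOR of a set of symbols is zero, and we repeatedly look for an equation with a single erased symbol:

def iterative_erasure_decode(equations, symbols):
    # equations: lists of indices whose symbols must XOR to zero
    # symbols: received values (ints) or None for erasures; solved in place
    progress = True
    while progress:
        progress = False
        for eq in equations:
            unknown = [i for i in eq if symbols[i] is None]
            if len(unknown) == 1:
                # the single unknown equals the XOR of the known symbols
                value = 0
                for i in eq:
                    if i != unknown[0]:
                        value ^= symbols[i]
                symbols[unknown[0]] = value
                progress = True
    return all(s is not None for s in symbols)  # True iff fully decoded

# Toy example: two erasures, each recoverable from one equation
symbols = [1, None, 0, None]
equations = [[0, 1, 2], [1, 2, 3]]
print(iterative_erasure_decode(equations, symbols), symbols)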

With these two things in mind, I wondered about the performance of the LDPC codes used in Outernet and the iterative decoding algorithm. I’ve done some simulations and here I present my results.

Continue reading “Testing LDPC code erasure decoding performance”

Outernet LDPC code revisited

I have been preparing the slides for my future talk about reverse-engineering Outernet at FAQin 2018. While doing this, I have been re-reading some of the material about the work done on the LDPC code and FEC in Outernet by George Hopkins in January 2017. One of the things I didn’t do back then was to read carefully the LDPC decoding function implemented by George.

In my post I explained that the LDPC code used in Outernet followed RFC5170, and I wondered whether it used the staircase scheme or the triangle scheme. I also commented that erasure decoding with an LDPC code (or any other linear code, actually) was mathematically equivalent to solving a linear system where the unknowns correspond to the erased symbols. I observed that the decoding function looked very different from this mathematical procedure, but that it should do more or less the same thing. Now I have read the decoder implementation carefully and I have the answer to both questions.

Continue reading “Outernet LDPC code revisited”

About critical damping

Having to deal with DSP texts written by engineers, I sometimes have to work a bit to get a good grasp of the concepts, which many times are not explained clearly from their mathematical foundations. Often, a formula is just used without much motivation. Lately, I’ve been trying to understand critically damped systems in the context of PLL loop filters.

The issue is as follows. In a second order filter there is a damping parameter \(\zeta > 0\). The impulse response of the filter is an exponentially decaying sinusoid if \(\zeta < 1\) (underdamped system), a difference of two decaying exponentials if \(\zeta > 1\) (overdamped system) and something of the form \(C t e^{-\lambda t}\) if \(\zeta = 1\) (critically damped system). Critical damping is desirable in many cases because it maximizes the exponential decay rate of the impulse response. However, many engineering texts just go and choose \(\zeta = \sqrt{2}/2\) without any justification and even call this critical damping. Here I give some motivation starting with the basics and explain what is special about \(\zeta = \sqrt{2}/2\) and why one may want to choose this value in applications.
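
For reference, the standard normalized form of such a second order system (the convention I assume here) has transfer function\[H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}.\]Its impulse response is\[h(t) = \frac{\omega_n}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_n t}\sin\left(\omega_n\sqrt{1-\zeta^2}\,t\right)\]for \(\zeta < 1\), it degenerates to \(h(t) = \omega_n^2 t e^{-\omega_n t}\) for \(\zeta = 1\), and it becomes a difference of two decaying exponentials with decay rates \(\omega_n(\zeta \pm \sqrt{\zeta^2-1})\) for \(\zeta > 1\).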

Continue reading “About critical damping”

An erasure code based on Vandermonde matrices

I’ve been looking at an erasure code by Luigi Rizzo which is based on Vandermonde matrices, since this code is used in Outernet. In fact, it is the code implemented by the zfec library. Luigi Rizzo describes his code in a paper from 1997, but the paper can be very confusing and misleading because it describes the mathematics in very little detail. I needed to go to the source code to understand how it works. Actually, the idea behind this code is very simple. Here I give a mathematical description of the code and show that it is the same as a Reed-Solomon code. This is rather weird, because Luigi Rizzo makes no mention of Reed-Solomon codes, which were first described in 1960.
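
To illustrate the idea, here is a toy encoder in Python over a prime field (my own sketch; Rizzo’s code and zfec actually work over \(GF(2^8)\), and the evaluation points below are an arbitrary choice). Each output symbol is the inner product of the data with a row of a Vandermonde matrix, i.e., the evaluation of a polynomial at a distinct point, so any \(k\) received symbols determine the data by inverting a \(k \times k\) Vandermonde submatrix:

P = 257  # toy prime field GF(257); real implementations use GF(2^8)

def vandermonde_row(x, k):
    # row (1, x, x^2, ..., x^(k-1)) of the Vandermonde matrix
    return [pow(x, j, P) for j in range(k)]

def encode(data, n):
    # n output symbols: evaluate the polynomial with coefficients 'data'
    # at the distinct points x = 0, 1, ..., n-1
    k = len(data)
    return [sum(c * d for c, d in zip(vandermonde_row(x, k), data)) % P
            for x in range(n)]

print(encode([5, 17, 42], 6))  # 3 data symbols encoded into 6

Decoding from any 3 of the 6 symbols amounts to inverting the corresponding 3x3 Vandermonde submatrix (Gaussian elimination mod P), which I omit for brevity.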

Continue reading “An erasure code based on Vandermonde matrices”

How hard is it to decode 3CAT-2?

In a previous post, I looked at the telemetry packets transmitted by the satellite 3CAT-2. This satellite transmits 9600bps AX.25 BPSK packets in the Amateur 2m band. As far as I know, it is the only satellite that transmits fast BPSK without any form of forward error correction. LilacSat-2, by contrast, uses a concatenated code with a (7, 1/2) convolutional inner code and a (255, 223) Reed-Solomon outer code. The remaining BPSK satellites transmit at 1200bps, either using AX.25 without FEC (the QB50p satellites, for instance), or with strong FEC (Funcube, for example). Therefore, one expects that 3CAT-2’s packets will be a bit difficult to decode without errors. But how difficult? Here I look at how to use the theory to calculate this, without resorting to simulations.
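
As a teaser, here is a back-of-the-envelope computation in Python (my own sketch; the packet length and Eb/N0 below are illustrative, not 3CAT-2 measurements). For coherent BPSK over AWGN the bit error probability is \(Q(\sqrt{2E_b/N_0})\), and without FEC a packet decodes correctly only if every bit is correct:

import math

def q_function(x):
    # Gaussian tail probability Q(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def packet_error_rate(ebn0_db, n_bits):
    ebn0 = 10 ** (ebn0_db / 10)
    ber = q_function(math.sqrt(2 * ebn0))  # BER of coherent BPSK over AWGN
    return 1 - (1 - ber) ** n_bits  # the packet fails if any bit is wrong

# Illustrative numbers: a 2000 bit packet at 8dB Eb/N0
print(packet_error_rate(8.0, 2000))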

Continue reading “How hard is it to decode 3CAT-2?”

Scramblers and their implementation in GNUradio

A scrambler is a function that is applied to a sequence of data before transmission with the goal of making this data more “random-like”. For instance, scrambling avoids long runs of only 0s or only 1s, which may make the receiver lose synchronization or cause spectral concentration of the signal. The receiver applies the inverse function, which is called a descrambler, to recover the original data. The documentation for the scrambler blocks in GNUradio is not very good and one may need to take a look at the implementation of these blocks to get their parameters right. Here, I’ll try to explain a bit of the mathematics behind scramblers and the peculiarities of their implementation in GNUradio.
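
To fix ideas, here is a minimal additive (synchronous) scrambler in Python (my own sketch; the LFSR taps and seed below are arbitrary illustrative choices, not the defaults of the GNUradio blocks). The keystream comes from an LFSR, and the descrambler is the identical operation with the LFSR in the same initial state, so applying the function twice recovers the data:

def lfsr_bits(taps, state, reg_len, n):
    # Fibonacci LFSR: output the lowest bit, feed back the parity of the taps
    out = []
    for _ in range(n):
        out.append(state & 1)
        feedback = bin(state & taps).count('1') & 1
        state = (state >> 1) | (feedback << (reg_len - 1))
    return out

def additive_scramble(bits, taps=0b1001, seed=0b1111, reg_len=4):
    # XOR the data with the LFSR keystream (scrambling == descrambling)
    keystream = lfsr_bits(taps, seed, reg_len, len(bits))
    return [b ^ k for b, k in zip(bits, keystream)]

data = [1, 0, 1, 1, 0, 0, 0, 1]
assert additive_scramble(additive_scramble(data)) == data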

Continue reading “Scramblers and their implementation in GNUradio”

Estimation of the contribution of the frontend to the total noise figure

In a radio receiver composed of two stages, the total noise factor \(F\) can be computed using Friis’s formula as\[F = F_1 + \frac{F_2 - 1}{G_1},\]where \(F_1\) is the noise factor of the first stage, \(G_1\) is the gain of the first stage and \(F_2\) is the noise factor of the second stage. If \(G_1\) is large enough, then the contribution of the second term is small and the total noise factor of the whole system is essentially the same as the noise factor of the first stage. This is the reason why a low noise amplifier is useful as a frontend, because it has a low noise factor \(F_1\) and high gain \(G_1\).
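
As a quick numerical illustration (example figures of my own choosing, not from any particular hardware), here is Friis’s formula in Python for an LNA with 0.8dB noise figure and 20dB gain in front of a receiver with a 7dB noise figure:

import math

def friis_total_nf(nf1_db, gain1_db, nf2_db):
    f1 = 10 ** (nf1_db / 10)    # noise factor of the first stage
    g1 = 10 ** (gain1_db / 10)  # linear gain of the first stage
    f2 = 10 ** (nf2_db / 10)    # noise factor of the second stage
    f = f1 + (f2 - 1) / g1      # Friis's formula
    return 10 * math.log10(f)   # total noise figure back in dB

print(friis_total_nf(0.8, 20, 7))  # about 0.94dB: dominated by the LNA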

If \(F_2\) and \(G_1\) are known (perhaps only approximately), then it is easy to check whether the contribution of the frontend to the total noise figure is large enough so that the total noise figure is determined by the noise figure of the frontend alone. However, it may happen that one or both of \(F_2\) and \(G_1\) are not known. In email communication, Leif Åsbrink mentioned that there is an easy way of checking the contribution of the frontend without knowing these parameters. The method is to switch off the frontend and note the drop in the noise floor. He gave the following estimates: if the noise floor drops by more than 10dB, then the total noise figure is the same as the noise figure of the frontend up to 1dB; if the noise floor drops by more than 17dB, then the total noise figure is the same as the noise figure of the frontend up to 0.1dB. Here I present the maths behind this kind of estimate.
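
As a preview, here is a back-of-the-envelope version of the estimate (my own sketch, under the simplifying assumption that the noise floor with the frontend on is proportional to \(F G_1\) and with the frontend off to \(F_2\), everything else being equal). If \(R\) is the linear ratio by which the noise floor drops, then \(R = F G_1 / F_2\), and Friis’s formula gives\[F - F_1 = \frac{F_2 - 1}{G_1} < \frac{F_2}{G_1} = \frac{F}{R},\]so that\[\frac{F}{F_1} < \frac{R}{R-1}.\]For \(R\) equal to 10dB this bound is about 0.46dB, and for 17dB it is about 0.09dB, in line with the estimates quoted above.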

Continue reading “Estimation of the contribution of the frontend to the total noise figure”