DSLWP-B Moonbounce

If you have been following my latest posts, you will know that a series of observations with the DSLWP-B Inory eye camera has been scheduled over the last few days to try to take and download images of the Moon and Earth (see my last post). In a future post I will write a chronicle of these observations.

On October 6 an image of the Moon was taken to calibrate the exposure of the camera. This image was downlinked on the UTC morning of October 7. The download was commanded by Reinhard Kuehn DK5LA and received by the Dwingeloo radiotelescope.

Cees Bassa observed that, in the waterfalls of the recordings made at Dwingeloo, a weak Doppler-shifted copy of the DSLWP-B GMSK signal could be seen. This signal was a reflection off the Moon.

As far as I know, this is the first reported case of satellite-Moon-Earth (or SME) propagation, at least in Amateur radio. Here I do a Doppler analysis confirming that the signal is indeed reflected off the Moon's surface and make some general remarks about the possibility of receiving the SME signal from DSLWP-B. Further analysis will be done in future posts.

Continue reading “DSLWP-B Moonbounce”

Lunar orbit perturbations and DSLWP-B

Ever since I simulated DSLWP-B's long term orbit with GMAT, I have wanted to understand the cause of the periodic perturbations that occur in some Keplerian elements, such as the eccentricity. As a reminder from that post, the eccentricity of DSLWP-B's orbit shows two periodic perturbations (see the figure below). One of them has a period of half a sidereal lunar month, so it should be possible to explain this effect from the rotation of the Moon around the Earth. The other has a period on the order of 8 or 9 months, so explaining it could be more difficult.

DSLWP-B eccentricity

In this post I look at how to model the perturbations of the orbit of a satellite in lunar orbit, explaining the behaviour of the long term orbit of DSLWP-B.

Continue reading “Lunar orbit perturbations and DSLWP-B”

Using a Golay(24,12) decoder for Golay(23,12)

Yesterday I explained an algebraic decoding algorithm for Golay(24,12) and commented that it was not easy to adapt it to decode Golay(23,12). Today I’ve thought of a simple way to use any Golay(24,12) decoder to decode Golay(23,12).

Recall that a systematic Golay(23,12) code is obtained from a systematic Golay(24,12) code by omitting the last component of each codeword (i.e., the codeword \((c_1,\ldots,c_{24})\) from the Golay(24,12) code gives the codeword \((c_1,\ldots,c_{23})\) from the Golay(23,12) code). Conversely, one can obtain a systematic Golay(24,12) code from a systematic Golay(23,12) code by adding a parity bit at the end. This means that \(c_{24} = \sum_{j=1}^{23} c_j\), where the sum is taken modulo 2, since \(\sum_{j=1}^{24} c_j = 0\) for all words in a Golay(24,12) code.

The idea for decoding a Golay(23,12) code with a Golay(24,12) decoder is first to restore the parity bit \(c_{24}\) and then apply the Golay(24,12) decoder. However, if there are errors in the received codeword, the restored parity bit can also be in error, increasing the number of errors by one.

The key remark is that both Golay(23,12) and Golay(24,12) are able to correct up to 3 errors. Therefore, we only care about restoring the parity bit correctly in the case when there are exactly 3 errors. If there are 2 or fewer errors, adding another error still gives a word decodable by the Golay(24,12) decoder.

Now note that if there are exactly 3 errors in \((c_1,\ldots,c_{23})\), then \(\sum_{j=1}^{23} c_j\) gives the opposite of the original parity bit \(c_{24}\). Therefore, we should restore \(c_{24}\) as \[c_{24} = 1 + \sum_{j=1}^{23} c_j\] and then apply the Golay(24,12) decoder.
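To make the procedure concrete, here is a minimal sketch in Python. It assumes a hypothetical function `golay24_decode` that takes a list of 24 bits and returns the corrected codeword, or `None` if decoding fails; any Golay(24,12) decoder with an equivalent interface would do.

```python
def golay23_decode(r23, golay24_decode):
    """Decode a received 23-bit word using any Golay(24,12) decoder.

    r23: list of 23 received bits (0/1).
    golay24_decode: assumed interface; takes a list of 24 bits and
        returns the corrected 24-bit codeword, or None if it fails.
    """
    # Restore the parity bit for the case of exactly 3 errors in r23.
    # If there are 2 or fewer errors, this guess may be wrong, but the
    # total number of errors stays at most 3, which is still correctable.
    c24 = (1 + sum(r23)) % 2
    decoded = golay24_decode(r23 + [c24])
    if decoded is None:
        return None
    # Drop the parity bit to recover the Golay(23,12) codeword.
    return decoded[:23]
```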

Algebraic decoding of Golay(24,12)

A couple of years ago, I implemented a Golay(24,12) decoder to be used in the GOMX-1 decoder in gr-satellites. The implementation can be seen here. I followed the algorithm in the book The Art of Error Correction Coding, Section 2.2.3, without taking much care to understand why the algorithm worked. Now I am doing some experiments with the Golay(24,12) and Golay(23,12) codes, so I have needed to revisit that algorithm and understand it well enough to adapt it to my needs. Here I explain how this algebraic decoder works.

Continue reading “Algebraic decoding of Golay(24,12)”

Testing LDPC code erasure decoding performance

In my previous post I talked about the RFC5170 LDPC codes used in Outernet. There I explained in some detail the pseudorandom construction of the LDPC codes and the simple erasure decoding algorithm used both in free-outernet and in the official closed-source receiver.

The Outernet LDPC codes follow what I call the “identity scheme”. This is different from the staircase and triangle schemes introduced in the RFC. The identity scheme already appeared in the literature, but it did not make it into the RFC. See, for instance, the report by Roca and Neumann Design, Evaluation and Comparison of Four Large Block FEC Codecs, LDPC, LDGM, LDGM Staircase and LDGM Triangle, plus a Reed-Solomon Small Block FEC Codec, especially Section 2, where it is called “LDGM”.

I also commented that erasure decoding for an LDPC code (or any other linear code) amounts to solving a linear system. This can be done using any algebraic method, such as Gaussian elimination. However, the simple decoding algorithm used in Outernet is as follows: try to find an equation with only one unknown, solve for that unknown, and repeat until the system is solved. Clearly this algorithm can fail even when the system is solvable (see my previous post for some examples and formal results). I will refer to this algorithm as iterative decoding, as is done in the RFC.
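As an illustration, the iterative algorithm can be written in a few lines of Python. This is a toy sketch of my own, not the free-outernet or official decoder: each parity check is a list of symbol indices whose XOR is assumed to be zero, and erased symbols are simply missing from the `symbols` dictionary.

```python
def iterative_erasure_decode(checks, symbols):
    """Toy iterative erasure decoder (illustration only).

    checks: list of parity equations, each a list of symbol indices
        whose XOR is zero.
    symbols: dict mapping symbol index -> value; erased symbols are
        missing from the dict. Modified in place and returned.
    """
    progress = True
    while progress:
        progress = False
        for eq in checks:
            unknown = [j for j in eq if j not in symbols]
            if len(unknown) == 1:
                # This equation has a single unknown: solve for it by
                # XORing all the known symbols in the equation.
                value = 0
                for j in eq:
                    if j != unknown[0]:
                        value ^= symbols[j]
                symbols[unknown[0]] = value
                progress = True
    return symbols
```

Decoding succeeds when no erased symbols remain. If the loop stops with unknowns left, the algorithm has failed, even though Gaussian elimination might still have solved the system.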

With these two things in mind, I wondered about the performance of the LDPC codes used in Outernet and the iterative decoding algorithm. I’ve done some simulations and here I present my results.

Continue reading “Testing LDPC code erasure decoding performance”

Outernet LDPC code revisited

I have been preparing the slides for my future talk about reverse-engineering Outernet at FAQin 2018. While doing this, I have been re-reading some of the material about the work on the Outernet LDPC code and FEC done by George Hopkins in January 2017. One of the things I didn't do back then was to read carefully the LDPC decoding function that George implemented.

In my post I explained that the LDPC code used in Outernet followed RFC5170, and I wondered whether it used the staircase scheme or the triangle scheme. I also commented that erasure decoding with an LDPC code (or any other linear code, actually) was mathematically equivalent to solving a linear system where the unknowns correspond to the erased symbols. I observed that the decoding function looked very different from this mathematical procedure, but should do more or less the same thing. Now I have read the decoder implementation carefully and I have the answer to both questions.
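To make the linear-system view concrete, here is a sketch of my own of that mathematical procedure (not George's decoder), using single bits over GF(2) for simplicity instead of whole packets: the parity check equations \(Hc = 0\) are rearranged so that the erased symbols become the unknowns, and the resulting system is solved by Gaussian elimination.

```python
import numpy as np

def erasure_decode_gf2(H, r):
    """Erasure decoding as a linear system over GF(2) (illustration only).

    H: parity check matrix as a 0/1 array (each row is one equation).
    r: received word as a list, with None marking erased symbols.
    """
    H = np.array(H, dtype=np.int64) % 2
    r = list(r)
    erased = [j for j, x in enumerate(r) if x is None]
    known = [j for j in range(len(r)) if r[j] is not None]
    # H c = 0 splits into H[:, erased] x = H[:, known] r_known (mod 2),
    # a linear system whose unknowns x are the erased symbols.
    A = H[:, erased]
    b = H[:, known].dot(np.array([r[j] for j in known], dtype=np.int64)) % 2
    # Gaussian elimination over GF(2) on the augmented matrix [A | b].
    M = np.concatenate((A, b.reshape(-1, 1)), axis=1)
    for col in range(len(erased)):
        pivots = np.nonzero(M[col:, col])[0]
        if len(pivots) == 0:
            raise ValueError('not enough independent equations')
        piv = pivots[0] + col
        M[[col, piv]] = M[[piv, col]]
        for row in range(M.shape[0]):
            if row != col and M[row, col]:
                M[row] = (M[row] + M[col]) % 2
    # Read off the solution and fill in the erased symbols.
    for idx, j in enumerate(erased):
        r[j] = int(M[idx, len(erased)])
    return r
```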

Continue reading “Outernet LDPC code revisited”

About critical damping

Having to deal with DSP texts written by engineers, I sometimes have to work a bit to get a good grasp of the concepts, which many times are not explained clearly from their mathematical foundations. Often, a formula is just used without much motivation. Lately, I've been trying to understand critically damped systems in the context of PLL loop filters.

The issue is as follows. In a second order filter there is a damping parameter \(\zeta > 0\). The impulse response of the filter is an exponentially decaying sinusoid if \(\zeta < 1\) (underdamped system), a decaying exponential if \(\zeta > 1\) (overdamped system) and something of the form \(C t e^{-\lambda t}\) if \(\zeta = 1\) (critically damped system). Critical damping is desirable in many cases because it maximizes the exponential decay rate of the impulse response. However, many engineering texts just go and choose \(\zeta = \sqrt{2}/2\) without any justification and even call this critical damping. Here I give some motivation starting with the basics and explain what is special about \(\zeta = \sqrt{2}/2\) and why one may want to choose this value in applications.
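As a quick illustration (a sketch of mine, not taken from the post), one can plot the impulse response of the standard second-order prototype \(H(s) = \omega_n^2/(s^2 + 2\zeta\omega_n s + \omega_n^2)\) for several values of \(\zeta\), including \(\zeta = \sqrt{2}/2\), to see the underdamped, critically damped and overdamped behaviours side by side.

```python
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

# Standard second-order prototype H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2).
wn = 1.0                       # natural frequency (arbitrary units)
t = np.linspace(0, 15, 1000)

for zeta in [0.5, np.sqrt(2) / 2, 1.0, 2.0]:
    system = scipy.signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
    _, h = scipy.signal.impulse(system, T=t)
    plt.plot(t, h, label=f'zeta = {zeta:.3f}')

plt.xlabel('t')
plt.ylabel('impulse response')
plt.legend()
plt.show()
```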

Continue reading “About critical damping”

An erasure code based on Vandermonde matrices

I’ve been looking at an erasure code by Luigi Rizzo which is based on Vandermonde matrices, since this code is used in Outernet. In fact, it is the code implemented by the zfec library. Luigi Rizzo describes his code in a paper from 1997, but the paper can be very confusing and misleading because it describes the mathematics in very little detail. I needed to go to the source code to understand how it works. Actually, the idea behind this code is very simple. Here I give a mathematical description of the code and show that it is the same as a Reed-Solomon code. This is rather weird, because Luigi Rizzo makes no mention of Reed-Solomon codes, which were first described in 1960.
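To illustrate the idea, here is a toy sketch of my own over the prime field GF(257), not the GF(2^8) arithmetic that zfec actually uses, and non-systematic, unlike Rizzo's code. The encoder multiplies the data by a Vandermonde matrix, and the decoder solves a linear system built from any k surviving rows, which always works because any k rows of a Vandermonde matrix with distinct nodes are linearly independent.

```python
P = 257  # a prime, so arithmetic mod P is a field (toy stand-in for GF(2^8))

def vandermonde(nodes, k):
    """Rows [x^0, x^1, ..., x^(k-1)] mod P for each node x."""
    return [[pow(x, j, P) for j in range(k)] for x in nodes]

def encode(data, n):
    """Encode the k data symbols into n coded symbols (node x_i = i + 1)."""
    k = len(data)
    V = vandermonde(range(1, n + 1), k)
    return [sum(V[i][j] * data[j] for j in range(k)) % P for i in range(n)]

def solve_mod_p(A, b):
    """Solve A x = b mod P by Gaussian elimination (A is k x k, invertible)."""
    k = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(k):
        # Find a pivot row and normalize it.
        piv = next(r for r in range(col, k) if M[r][col] % P != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)  # modular inverse (Fermat)
        M[col] = [(v * inv) % P for v in M[col]]
        # Eliminate this column from all other rows.
        for r in range(k):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % P for j in range(k + 1)]
    return [M[r][k] for r in range(k)]

def decode(received, surviving, k):
    """Recover the k data symbols from any k surviving coded symbols.

    surviving: 0-based indices of the coded symbols in 'received'.
    """
    A = vandermonde([i + 1 for i in surviving], k)
    return solve_mod_p(A, received)

data = [10, 20, 30]                    # k = 3 data symbols
coded = encode(data, 5)                # n = 5 coded symbols
# Lose coded symbols 0 and 3; any 3 of the remaining ones recover the data.
assert decode([coded[i] for i in (1, 2, 4)], [1, 2, 4], 3) == data
```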

Continue reading “An erasure code based on Vandermonde matrices”

How hard is it to decode 3CAT-2?

In a previous post, I looked at the telemetry packets transmitted by the satellite 3CAT-2. This satellite transmits 9600bps AX.25 BPSK packets in the Amateur 2m band. As far as I know, it is the only satellite that transmits fast BPSK without any form of forward error correction. For comparison, LilacSat-2 uses a concatenated code with a (7, 1/2) convolutional inner code and a (255, 223) Reed-Solomon outer code. The remaining BPSK satellites transmit at 1200bps, either using AX.25 without FEC (the QB50p satellites, for instance) or with strong FEC (Funcube, for example). Therefore, one can expect 3CAT-2's packets to be a bit difficult to decode without errors. But how difficult? Here I look at how to use the theory to calculate this, without resorting to simulations.
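The flavour of such a calculation is simple (this is a sketch of mine with made-up numbers, not the analysis in the post): without FEC, every bit of the frame must be received correctly, so the packet error rate follows directly from the uncoded BPSK bit error rate over the AWGN channel.

```python
import numpy as np
from scipy.special import erfc

def bpsk_ber(ebn0_db):
    """Bit error rate of coherent BPSK over AWGN: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(np.sqrt(ebn0))

def packet_error_rate(ebn0_db, nbits):
    """Probability that at least one of nbits is received in error."""
    return 1 - (1 - bpsk_ber(ebn0_db)) ** nbits

# Hypothetical example: packet error rate of a 2000-bit AX.25 frame
# for a few values of Eb/N0.
for ebn0_db in [7, 8, 9, 10]:
    print(ebn0_db, packet_error_rate(ebn0_db, 2000))
```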

Continue reading “How hard is it to decode 3CAT-2?”