ldpc-toolbox gets LDPC decoding

Recently I have implemented an FPGA LDPC decoder for a commercial project. The belief propagation LDPC decoding algorithm admits many different approximations in its arithmetic, as well as other tricks that can be used to trade off decoding sensitivity (BER versus Eb/N0 performance) against computational complexity. To help me benchmark the different belief propagation algorithms, I have extended my ldpc-toolbox project to implement many different LDPC decoding algorithms and perform BER simulations.
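As an illustration of the kind of approximation involved, here is a Python sketch (not the toolbox's Rust code) of the check node update in the min-sum algorithm, which replaces the hyperbolic-tangent arithmetic of exact belief propagation by signs and minima:

```python
import numpy as np

def min_sum_check_update(llrs):
    """Min-sum check node update.

    For each edge, the outgoing message is the product of the signs of all
    the other incoming LLRs times the minimum of their magnitudes. Only the
    two smallest magnitudes are needed, which is what makes min-sum cheap
    in hardware.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(llrs)
    for j in range(len(llrs)):
        # Excluding edge j: the magnitude is min1 unless j itself is the minimum
        m = min2 if j == order[0] else min1
        out[j] = total_sign * signs[j] * m
    return out
```

Variants such as normalized or offset min-sum scale or shift these magnitudes to recover part of the sensitivity lost to the approximation.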

ldpc-toolbox is a Rust library and command-line tool for the design of LDPC codes. I initially created this project when I was trying to design a suitable LDPC code for a narrowband 32APSK modem to be used over the QO-100 amateur GEO transponder. Until now, the tool supported some classical pseudorandom constructions of LDPC codes, computed Tanner graph girths, and could construct the alists for all the DVB-S2 and CCSDS LDPC codes. Extending it to support LDPC encoding, decoding and BER simulation is a natural next step.

Decoding Euclid

Euclid is an ESA near-infrared space telescope that was launched to the Sun-Earth Lagrange L2 point on July 1, using a Falcon 9 from Cape Canaveral. The spacecraft uses K-band to transmit science data, and X-band with a downlink frequency of 8455 MHz for TT&C. On July 2 at 07:00 UTC, 16 hours after launch and with the spacecraft at a distance of 167000 km from Earth, I recorded the X-band telemetry signal using antennas 1a and 1f from the Allen Telescope Array. The recording lasted approximately 4 hours and 30 minutes, until the spacecraft set.

Even though the telemetry signal fits in about 300 kHz, I recorded at 4.096 Msps, since I wanted to see if ranging signals appeared at some point. Since the IQ recordings for two antennas at 4.096 Msps 16-bit are quite large, I have done some data reduction before publishing to Zenodo. I have decided to publish only the data for antenna 1a, since the SNR in a single antenna is already very good, so there is no point in using the data for the second antenna unless someone wants to do interferometry. Since I saw no signals outside the central 2 MHz in the waterfall, I decimated to 2.048 Msps 8-bit. I also synthesized the spacecraft's circular polarization from the two linear polarizations. To fit within Zenodo's constraint of 50 GB per dataset, I split the recording into two parts of 32 GiB each.
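The data reduction steps can be sketched with NumPy. The circular polarization synthesis below uses one common convention (which sign gives RHCP depends on the antenna feed, and the two channels must already be phase calibrated), and the decimator is a simple windowed-sinc filter; this is only an illustration, not necessarily the filtering that my actual pipeline used:

```python
import numpy as np

def synthesize_circular(h, v):
    """Combine two coherent linear polarization channels into one circular
    polarization channel (the sign convention here is an assumption)."""
    return (h - 1j * v) / np.sqrt(2)

def decimate2(x, ntaps=129):
    """Halve the sample rate (e.g. 4.096 Msps -> 2.048 Msps) with a
    windowed-sinc lowpass filter cutting off at a quarter of the rate."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    taps = np.sinc(n / 2) * np.hamming(ntaps)
    taps /= taps.sum()  # unit DC gain
    return np.convolve(x, taps, mode='same')[::2]
```

The 8-bit requantization is then just a scaling followed by `np.clip(...).astype(np.int8)`.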

The recording is published in these two datasets:

In this post I will look at the signal modulation and coding and present GNU Radio decoders. I will look at the contents of the telemetry frames in a future post.

Mars Express 20th anniversary livestream

On June 2, ESA celebrated the 20th anniversary of the launch of Mars Express (MEX) by livestreaming images of Mars from the VMC camera on YouTube. They set things up so that an image was taken by the camera approximately every 50 seconds, downlinked in the X-band telemetry to the Cebreros groundstation, which was tracking the spacecraft, and then sent to YouTube. The total latency, according to the image timestamps shown on YouTube, was around 17 minutes, which is quite good, since most of that latency was the 16 minutes and 45 seconds of one-way light time from Mars to Earth.

The livestream was accompanied by commentary from Simon Wood and Jorge Hernández Bernal. One of the things that got my attention during the livestream was the mention that to make the livestream work, the VMC camera needed to be pointing at Mars while the high-gain antenna pointed at Earth. This could only be done during part of Mars Express's orbit, and in fact reduced the amount of sunlight hitting the solar panels, so it could not be sustained for too long.

This gave me the idea of using the Mars Express SPICE kernels to understand better what the geometry looked like. This is also a good excuse to show how to use SPICE.
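As a preview, the basic quantity to look at is the angle between the spacecraft-to-Mars and spacecraft-to-Earth directions. The helper below computes it with NumPy; the commented spiceypy calls show how the position vectors would be obtained once the kernels are loaded (the metakernel file name and the date are placeholders):

```python
import numpy as np

def angular_separation(u, v):
    """Angle in radians between two 3D vectors (what SPICE's vsep computes)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

# With the Mars Express SPICE kernels loaded, the pointing geometry reduces
# to a few spiceypy calls:
#
#   import spiceypy as spice
#   spice.furnsh('mex_metakernel.tm')              # load kernels (placeholder name)
#   et = spice.str2et('2023-06-02T17:00:00')       # UTC -> ephemeris time
#   mars, _ = spice.spkpos('MARS', et, 'J2000', 'LT', 'MEX')
#   earth, _ = spice.spkpos('EARTH', et, 'J2000', 'LT', 'MEX')
#   angle = angular_separation(mars, earth)        # or spice.vsep(mars, earth)
```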

An erasure FEC for SSDV

SSDV is an amateur radio protocol that is used to transmit images in packets, in a way that is tolerant to packet loss. It is based on JPEG, but unlike a regular JPEG file, where losing even a small part of the file has catastrophic results, in SSDV different blocks of the image are compressed independently. This means that packet loss affects only the corresponding blocks, and the image can still be decoded and displayed, albeit with some missing blocks.

SSDV was originally designed for transmission from high-altitude balloons (see this reference for more information), but it has also been used for some satellite missions, including Longjiang-2, a Chinese lunar orbiting satellite.

Even though SSDV is tolerant to packet loss, to obtain the full image it is necessary to receive all the packets that form the image. If some packets are lost, then it is necessary to retransmit them. Here I present an erasure FEC scheme that is backwards-compatible with SSDV, in the sense that the first \(k\) packets transmitted by this scheme are identical to the usual packets of standard SSDV, and that augments the transmission with FEC packets in such a way that the complete image can be recovered from any set of \(k\) packets (so there is no encoding overhead). The FEC packets work almost like a fountain code, since it is possible to generate up to \(2^{16}\) packets, a limit unlikely to be reached in practice.
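To illustrate the "any \(k\) of \(n\)" (MDS) property, here is a toy systematic erasure code based on polynomial interpolation. For arithmetic simplicity it works over the prime field of \(65537\) elements rather than over \(GF(2^{16})\), which would be the natural field for 16-bit symbols, so it is only a sketch of the idea, not the scheme described in the post:

```python
P = 65537  # prime; symbols are integers in [0, P)

def eval_at(points, x):
    """Lagrange-interpolate the polynomial through `points` (mod P) and
    evaluate it at x."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        # Modular inverse of den via Fermat's little theorem
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Systematic encode: packets 0..k-1 carry the data unchanged,
    packets k..n-1 are FEC (further evaluations of the polynomial)."""
    k = len(data)
    pts = list(enumerate(data))
    return list(data) + [eval_at(pts, x) for x in range(k, n)]

def decode(received, k):
    """Recover the k data symbols from any k received (index, symbol) pairs."""
    return [eval_at(received[:k], x) for x in range(k)]
```

In a real scheme each "symbol" would be a whole SSDV packet (the same arithmetic applied word by word), and a field with \(2^{16}\) elements is what would give the \(2^{16}\) packet limit mentioned above.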

Analysis of JUICE frames

In the previous post, I showed how to use GNU Radio to decode a 3-hour recording of the ESA spacecraft JUICE that I made with the Allen Telescope Array. In this post I will analyse the contents of the telemetry frames.

As I mentioned, the decoder I used was quite slow because the Turbo decoder was rather inefficient. In fact, the 3-hour recording took a total of 70.82 hours to process using the gnuradio1 machine at the GR-ATA testbed (a dual-socket Xeon Silver 4216). This means that the decoder runs 23.6 times slower than real time on this machine. Here I have used the decoder that beamforms two ATA antennas, as I described in the previous post. In total, 152 MiB worth of frames have been decoded.

Decoding JUICE

JUICE, the Jupiter Icy Moons Explorer, is ESA’s first mission to Jupiter. It will arrive at Jupiter in 2031 and study Ganymede, Callisto and Europa until 2035. The spacecraft was launched on an Ariane 5 from Kourou on April 14. On April 15, between 05:30 and 08:30 UTC, I recorded JUICE’s X-band telemetry signal at 8436 MHz using two of the 6.1 m dishes from the Allen Telescope Array. The spacecraft was at a distance between 227000 and 261000 km.

The recording I made used 16-bit IQ at 6.144 Msps. Since there are 4 channels (2 antennas and 2 linear polarizations), the total data size is huge (966 GiB). To publish the data to Zenodo, I have combined the two linear polarizations of each antenna to form the spacecraft’s circular polarization, and downsampled to 8-bit IQ at 2.048 Msps. This reduces the data for each antenna to 41 GiB. The sample rate is still enough to contain the main lobes of the telemetry modulation. As we will see below, some ranging signals are too wide for this sample rate, so perhaps I’ll also publish some shorter excerpts at the higher sample rate.

The downsampled IQ recordings are in the following Zenodo datasets:

In this post I will look at the signal modulation and coding, and some of its radiometric properties. I’ll show how to decode the telemetry frames with GNU Radio. The analysis of the decoded telemetry frames will be done in a future post.

More about the QO-100 WB transponder power budget

Last week I wrote a post with a study about the QO-100 WB transponder power budget. After writing this post, I have been talking with Dave Crump G8GKQ. He says that the main conclusions of my study don’t match his practical experience using the transponder well. In particular, he mentions that he has often seen a relatively large number of stations, such as 8, using the transponder at the same time. In this situation, they “rob” much more power from the beacon than what I stated in my post.

I have looked more carefully at my data, especially at situations in which the transponder is very busy, to understand better what happens. In this post I publish some corrections to my previous study. As we will see below, the main correction is that the operating point of 73 dB·Hz output power that I had chosen to compute the power budget is not very representative. When the transponder is quite busy, the output power can go up to 73.8 dB·Hz. While a difference of 0.8 dB might not seem like much, it makes a huge difference in practice, because it drives the transponder further towards saturation, decreasing its gain and robbing more output power from the beacon to be used by other stations.
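In linear terms the 0.8 dB difference is easy to quantify:

```python
# 0.8 dB expressed as a linear power ratio
delta_db = 73.8 - 73.0
ratio = 10 ** (delta_db / 10)
print(f"{ratio:.3f}")  # about 20% more output power
```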

I want to thank Dave for an interesting discussion about all these topics.

Measuring the QO-100 WB transponder power budget

The QO-100 WB transponder is an S-band to X-band amateur radio transponder on the Es’hail 2 GEO satellite. It has about 9 MHz of bandwidth and is routinely used for transmitting DVB-S2 signals, though other uses are possible. In the lowermost part of the transponder there is a 1.5 Msym/s QPSK 4/5 DVB-S2 beacon that is transmitted continuously from a groundstation in Qatar. The remaining bandwidth is free for all amateurs to use on a “use whatever bandwidth is free and don’t interfere with others” basis (there is a channelized bandplan to impose some order).

From the communications theory perspective, one of the fundamental aspects of a transponder like this is how much output power is available. This sets the downlink SNR and determines whether the transponder is in the power-constrained regime or in the bandwidth-constrained regime. It indicates the optimal spectral efficiency (bits per second per Hz), which helps choose appropriate modulation and FEC parameters.

However, some of the values required to do these calculations are not publicly available. I hear that the typical values which would appear in a link budget (maximum TWTA output power, output power back-off, antenna gain, etc.) are under NDA from MELCO, who built the satellite and transponders.

I have been monitoring the WB transponder and recording waterfall data of the downlink with my groundstation for three weeks already. With this data we can obtain a good understanding of the transponder behaviour. For example, we can measure the input power to output power transfer function, taking advantage of the fact that different stations with different powers appear and disappear, which effectively sweeps the transponder input power (though in a rather chaotic and uncontrollable manner). In this post I share the methods I have used to study this data and my findings.

About channel capacity and sub-channels

The Shannon-Hartley theorem describes the maximum rate at which information can be sent over a bandwidth-limited AWGN channel. This rate is called the channel capacity. If \(B\) is the bandwidth of the channel in Hz, \(S\) is the signal power (in units of W), and \(N_0\) is the noise power spectral density (in units of W/Hz), then the channel capacity \(C\) in units of bits per second is\[C = B \log_2\left(1 + \frac{S}{N_0B}\right).\]
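In code, the formula is a direct transcription:

```python
import math

def capacity(B, S, N0):
    """Shannon capacity in bit/s of an AWGN channel with bandwidth B (Hz),
    signal power S (W), and noise power spectral density N0 (W/Hz)."""
    return B * math.log2(1 + S / (N0 * B))
```

For instance, `capacity(1.0, 1.0, 1.0)` gives 1 bit/s: a 1 Hz channel at 0 dB SNR.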

Let us now consider that we make \(n\) “sub-channels”, by selecting \(n\) disjoint bandwidth intervals contained in the total bandwidth of the channel. We denote the bandwidth of these sub-channels by \(B_j\), \(j = 1,\ldots,n\). Clearly, we have the constraint \(\sum_{j=1}^n B_j \leq B\). Likewise, we divide our total transmit power \(S\) into the \(n\) sub-channels, allocating power \(S_j\) to the signal in the sub-channel \(j\). We have \(\sum_{j=1}^n S_j = S\). Under these conditions, each sub-channel will have capacity \(C_j\), given by the formula above with \(B_j\) and \(S_j\) in place of \(B\) and \(S\).

The natural question is about using the \(n\) sub-channels in parallel to transmit data: what is the maximum of the sum \(\sum_{j=1}^n C_j\) under these conditions, and how can it be achieved? It is probably clear from the definition of channel capacity that this sum is always less than or equal to \(C\). After all, by dividing the channel into sub-channels we cannot do any better than by considering it as a whole.

People used to communications theory might find it intuitive that we can achieve \(\sum_{j=1}^n C_j = C\), and that this happens if and only if we use all the bandwidth (\(\sum_{j=1}^n B_j = B\)) and the SNRs of the sub-channels, defined by \(S_j/(N_0B_j)\), are all equal, so that \(S_j = SB_j/B\). After all, this is pretty much how OFDM and other channelized communication methods work. In this post I give an easy proof of this result.
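The claim is easy to check numerically. The sketch below (with arbitrary example values, and \(N_0\) normalized to 1) allocates power proportionally to bandwidth, so that every sub-channel has the same SNR, and confirms that the sub-channel capacities add up exactly to \(C\), while an allocation with unequal SNRs falls short:

```python
import math

def cap(B, S, N0=1.0):
    """AWGN channel capacity in bit/s (N0 normalized to 1 W/Hz)."""
    return B * math.log2(1 + S / (N0 * B))

B, S = 1.0, 10.0          # total bandwidth and power (example values)
C = cap(B, S)
Bs = [0.2, 0.3, 0.5]      # sub-channel bandwidths, summing to B

# Power proportional to bandwidth: every sub-channel has SNR S/B,
# and the sub-channel capacities add up exactly to C.
assert abs(sum(cap(Bj, S * Bj / B) for Bj in Bs) - C) < 1e-12

# An allocation with unequal SNRs achieves strictly less than C.
assert sum(cap(Bj, Sj) for Bj, Sj in zip(Bs, [5.0, 3.0, 2.0])) < C
```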


Monitoring the QO-100 WB transponder usage with Maia SDR

I am interested in monitoring the usage of the QO-100 WB transponder over several weeks or months, to obtain statistics about how full the transponder is, what bandwidths are used, which channels are occupied more often, etc., as well as statistics about the power of the signals and the DVB-S2 beacon. For this, we need to compute and record to disk waterfall data for later analysis. Maia SDR is ideal for this task, because it is easy to write a Python script that configures the spectrometer to a low rate, connects to the WebSocket to fetch spectrometer data, performs some integrations to lower the rate even more, and records data to disk.

For this project I’ve settled on using a sample rate of 20 Msps, which covers the whole transponder plus a few MHz of receiver noise floor on each side (this will be used to calibrate the receiver gain) and gives a frequency resolution of 4.9 kHz with Maia SDR’s 4096-point FFT. At this sample rate, I can set the Maia SDR spectrometer to 5 Hz and then perform 50 integrations in the Python script to obtain one spectrum average every 10 seconds.
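The averaging step of such a script is simple; the sketch below only illustrates that logic, with the WebSocket client and Maia SDR's actual message format omitted (the generator interface and the raw float32 output format are assumptions for the example):

```python
import numpy as np

FFT_SIZE = 4096
INTEGRATIONS = 50  # 50 spectra at 5 Hz -> one average every 10 seconds

def integrate_spectra(spectra, out_file):
    """Average groups of INTEGRATIONS power spectra and append each average
    to out_file as raw float32 (FFT_SIZE values per average).

    `spectra` yields one power spectrum (FFT_SIZE floats) at a time; in the
    real script these would arrive over Maia SDR's WebSocket.
    """
    acc = np.zeros(FFT_SIZE)
    count = 0
    with open(out_file, 'ab') as f:
        for spec in spectra:
            acc += spec
            count += 1
            if count == INTEGRATIONS:
                (acc / INTEGRATIONS).astype('float32').tofile(f)
                acc[:] = 0.0
                count = 0
```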

Part of the interest of setting up this project is that the Python script can serve as an example of how to interface Maia SDR with other applications and scripts. In this post I will show how the system works and give an initial evaluation of the data that I have recorded over a week. More detailed analysis of the data will come in future posts.