Computing PLL coefficients

Whenever I implement a PLL or a similar control loop, I invariably consult the formulas in the paper Controlled-Root Formulation for Digital Phase-Locked Loops, by Stephens and Thomas. Other sources that give formulas for the loop coefficients in terms of the loop bandwidth perform a continuous time analysis and then use a bilinear transform or a similar kind of transform to translate results between continuous time and discrete time. The appeal of the paper by Stephens and Thomas is that they work directly in discrete time, using a beautiful complex contour integral argument to calculate the loop bandwidth in terms of the loop coefficients for a loop of any order. Unfortunately, their method doesn’t give a closed-form formula for the loop coefficients in terms of the loop bandwidth. The loop coefficients can be obtained numerically, and the paper gives tables for common loop bandwidths and orders.
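To make the role of the loop coefficients concrete, here is a minimal sketch (in Python, with coefficient names k1 and k2 that are mine rather than the paper’s) of a generic second-order proportional-plus-integral loop tracking a frequency offset. The coefficients obtained for a target loop bandwidth, whether from the Stephens and Thomas tables or otherwise, would be plugged in as k1 and k2.

```python
import numpy as np

def run_pll(x, k1, k2):
    """Toy second-order (proportional + integral) digital PLL.

    k1 and k2 are the loop coefficients; choosing them to obtain a desired
    loop bandwidth is what the formulas discussed above are for.
    """
    nco_phase = 0.0
    integrator = 0.0
    errors = np.empty(len(x))
    for n, sample in enumerate(x):
        err = np.angle(sample * np.exp(-1j * nco_phase))  # phase detector
        errors[n] = err
        integrator += k2 * err               # integral branch
        nco_phase += k1 * err + integrator   # NCO phase update
    return errors

# Track a tone with a 0.01 cycles/sample frequency offset (arbitrary gains).
x = np.exp(2j * np.pi * 0.01 * np.arange(1000))
print(run_pll(x, k1=0.1, k2=0.01)[-5:])  # phase error converges towards zero
```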

In most of my designs I use a second order loop with critical damping, which means that the two loop roots in the z-plane are equal (and hence real). As I was doing a design the other day, I wondered whether in this specific situation, which is much simpler than the general case, a closed-form solution could be obtained. It turns out that this is the case, so I’ll be using this formula from now on. In this short post I explain how this is done and give the formula.

A modern implementation of the Parks-McClellan FIR design algorithm

The Parks-McClellan FIR filter design algorithm is used to design optimal FIR filters according to a minimax criterion: it tries to find the FIR filter with a given number of coefficients whose frequency response minimizes the maximum weighted error with respect to a desired response over a finite set of closed sub-intervals of the frequency domain. It is based on the Remez exchange algorithm, which is an algorithm to find uniform approximations by polynomials using the equioscillation theorem. In signal processing, the Parks-McClellan algorithm is often called Remez. It is a very popular FIR design algorithm. Compared to the windowing method, which is another commonly used algorithm, it is able to obtain better filters (for instance, meeting design constraints with fewer coefficients), in part because it allows the designer to control the passband ripple and stopband attenuation independently by means of the weight function.
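As an illustration of how these design parameters come together, here is how a low-pass filter could be designed with SciPy’s remez function (the concrete values are just an example): the weight vector is what lets the designer make the stopband error count more than the passband error.

```python
import numpy as np
from scipy import signal

# 63-tap low-pass filter: passband 0-0.2, stopband 0.3-0.5 (frequencies
# normalized to fs = 1). The weights make the stopband error 10 times more
# important than the passband error.
taps = signal.remez(numtaps=63, bands=[0, 0.2, 0.3, 0.5], desired=[1, 0],
                    weight=[1, 10], fs=1.0)

w, h = signal.freqz(taps, worN=4096, fs=1.0)
passband_ripple = np.max(np.abs(np.abs(h[w <= 0.2]) - 1))
stopband_atten = 20 * np.log10(np.max(np.abs(h[w >= 0.3])))
print(f"passband ripple {passband_ripple:.2e}, stopband {stopband_atten:.1f} dB")
```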

I have been laying some groundwork for Maia SDR, and for this I will need to run the Parks-McClellan algorithm in maia-httpd, the piece of software that runs in the Pluto ARM CPU. To evaluate which implementation of this algorithm to use, I first went to the implementations that I normally use: the SciPy remez function, and GNU Radio’s pm_remez function. I read these implementations, but I didn’t like them much.

The SciPy implementation is a direct C translation of the original Fortran implementation by McClellan, Parks and Rabiner from 1973. This C translation was probably written decades ago and never updated. The code is very hard to read. The GNU Radio implementation looks somewhat better. It is a C implementation that was extracted from Octave and dates from the 90s. The code is much easier to follow, but there are some comments saying “There appear to be some problems with the routine search. See comments therein [search for PAK:]. I haven’t looked closely at the rest of the code—it may also have some problems.” that have seemingly been left unattended.

Because of this and since I want to keep all the Maia SDR software under permissive open source licenses (the GNU Radio / Octave implementation is GPL), I decided to write from scratch an implementation of the Parks-McClellan algorithm in Rust. The result of this has been the pm-remez crate, which I have released recently. It uses modern coding style and is inspired by recent papers about how to improve the numerical robustness of the Parks-McClellan algorithm. Since I figured that this implementation would also be useful outside of Maia SDR, I have written Python bindings and published a pm-remez Python package. This has a few neat features that SciPy’s remez function doesn’t have. The Python documentation gives a walkthrough of these by showing how to design several types of filters that are commonly used. This documentation is the best place to see what pm-remez is capable of.

The rest of this post has some comments about the implementation and the things I’ve learned while working on this.

LTE Transmission Mode 4 (closed-loop spatial multiplexing)

This is a long overdue post. In 2022, I wrote a series of posts about LTE as I studied its physical layer to understand it better. In the last post, I decoded the PDCCH (physical downlink control channel), which contains control information about each PDSCH (physical downlink shared channel) transmission. I found that, in the recording that I was using, some PDSCH transmissions used Transmission Mode 4 (TM4), which stands for closed-loop spatial multiplexing. For an eNB with two antenna ports (which is what I recorded), this transmission mode sends either one or two codewords simultaneously over the two ports by using a precoding matrix that is chosen from a list that contains a few options. The choice is done by means of channel-state information from the UE (hence the “closed-loop” in the name).

In the post I found a transmission where only one codeword was transmitted. It used the precoding matrix \([1, i]^T/\sqrt{2}\). This basically means that a 90° phase offset is applied between the two antenna ports as they simultaneously transmit the same data. I mentioned that this was the reason why I obtained bad results in another previous post, when I tried to equalize this PDSCH transmission using transmit diversity, and that in a future post I would show how to equalize this transmission correctly. I have realized that I never wrote that post, so now is as good a time as any.
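As a sketch of why the transmit diversity equalization failed, here is a toy numpy example (the channel coefficients are made up) of this single-layer TM4 precoding: the receiver sees the physical channel composed with the precoder, and it is this effective channel that needs to be equalized.

```python
import numpy as np

rng = np.random.default_rng(0)

# One codeword of QPSK symbols (a single layer).
s = (rng.choice([-1, 1], 512) + 1j * rng.choice([-1, 1], 512)) / np.sqrt(2)

# TM4 single-layer precoder for two antenna ports: both ports transmit the
# same data with a 90 degree phase offset between them.
w = np.array([1, 1j]) / np.sqrt(2)
x = np.outer(w, s)  # rows are the two antenna ports

# Made-up flat channel from the two ports to one receive antenna.
h = np.array([0.8 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)])
y = h @ x

# The equalizer only needs the effective scalar channel h @ w.
s_hat = y / (h @ w)
print(np.allclose(s_hat, s))  # True
```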

ssdv-fec: an erasure FEC for SSDV implemented in Rust

Back in May I proposed an erasure FEC scheme for SSDV. The SSDV protocol is used in amateur radio to transmit JPEG files split in packets, in such a way that losing some packets only causes the loss of pieces of the image, instead of a completely corrupted file. My erasure FEC augments the usual SSDV packets with additional FEC packets. Any set of \(k\) received packets is sufficient to recover the full image, where \(k\) is the number of packets in the original image. An almost limitless amount of distinct FEC packets can be generated on the fly as required.

I have now written a Rust implementation of this erasure FEC scheme, which I have called ssdv-fec. This implementation has small microcontrollers in mind. It is no_std (it doesn’t use the Rust standard library nor libc), does not perform any dynamic memory allocations, and works in-place as much as possible to reduce the memory footprint. As an example use case of this implementation, it is bundled as a static library with a C-like API for ARM Cortex-M4 microcontrollers. This might be used in the AMSAT-DL ERMINAZ PocketQube mission, and it is suitable for other small satellites. There is also a simple CLI application to perform encoding and decoding on a PC.

An erasure FEC for SSDV

SSDV is an amateur radio protocol that is used to transmit images in packets, in a way that is tolerant to packet loss. It is based on JPEG, but unlike a regular JPEG file, where losing even a small part of the file has catastrophic results, in SSDV different blocks of the image are compressed independently. This means that packet loss affects only the corresponding blocks, and the image can still be decoded and displayed, albeit with some missing blocks.

SSDV was originally designed for transmission from high-altitude balloons (see this reference for more information), but it has also been used for some satellite missions, including Longjiang-2, a Chinese lunar orbiting satellite.

Even though SSDV is tolerant to packet loss, to obtain the full image it is necessary to receive all the packets that form the image. If some packets are lost, then it is necessary to retransmit them. Here I present an erasure FEC scheme that is backwards-compatible with SSDV, in the sense that the first packets transmitted by this scheme are identical to the usual \(k\) packets of standard SSDV, and augments the transmission with FEC packets in such a way that the complete image can be recovered from any set of \(k\) packets (so there is no encoding overhead). The FEC packets work as a fountain code, since it is possible to generate up to \(2^{16}\) packets, which is a limit unlikely to be reached in practice.
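To give a feeling for how an erasure code with this property can work, here is a toy Python illustration over a small prime field (this is only a conceptual sketch, not the actual ssdv-fec construction, which operates on SSDV packets): think of the \(k\) data symbols as the values at \(0, \ldots, k-1\) of a polynomial of degree less than \(k\), and let each FEC packet carry the value of that polynomial at a new point. Any \(k\) received values determine the polynomial, and hence the data.

```python
# Toy systematic erasure code by polynomial evaluation/interpolation over GF(p).
p = 65537  # prime, so arithmetic mod p forms a field

def lagrange_eval(points, t):
    """Value at x = t of the unique polynomial through the (x, y) points, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (t - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

data = [100, 200, 300, 400]                 # k = 4 data symbols
k = len(data)
systematic = list(enumerate(data))          # first k packets: the data itself
fec = [(x, lagrange_eval(systematic, x)) for x in range(k, k + 4)]  # FEC packets

received = [systematic[1], fec[0], fec[2], systematic[3]]  # any k packets
print([lagrange_eval(received, x) for x in range(k)])      # [100, 200, 300, 400]
```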

About channel capacity and sub-channels

The Shannon-Hartley theorem describes the maximum rate at which information can be sent over a bandwidth-limited AWGN channel. This rate is called the channel capacity. If \(B\) is the bandwidth of the channel in Hz, \(S\) is the signal power (in units of W), and \(N_0\) is the noise power spectral density (in units of W/Hz), then the channel capacity \(C\) in units of bits per second is\[C = B \log_2\left(1 + \frac{S}{N_0B}\right).\]

Let us now consider that we make \(n\) “sub-channels”, by selecting \(n\) disjoint bandwidth intervals contained in the total bandwidth of the channel. We denote the bandwidth of these sub-channels by \(B_j\), \(j = 1,\ldots,n\). Clearly, we have the constraint \(\sum_{j=1}^n B_j \leq B\). Likewise, we divide our total transmit power \(S\) into the \(n\) sub-channels, allocating power \(S_j\) to the signal in the sub-channel \(j\). We have \(\sum_{j=1}^n S_j = S\). Under these conditions, each sub-channel will have capacity \(C_j\), given by the formula above with \(B_j\) and \(S_j\) in place of \(B\) and \(S\).

The natural question regards using the \(n\) sub-channels in parallel to transmit data: what is the maximum of the sum \(\sum_{j=1}^n C_j\) under these conditions and how can it be achieved? It is probably clear from the definition of channel capacity that this sum is always less than or equal to \(C\). After all, by dividing the channel into sub-channels we cannot do any better than by considering it as a whole.

People used to communications theory might find it intuitive that we can achieve \(\sum_{j=1}^n C_j = C\), and that this happens if and only if we use all the bandwidth (\(\sum_{j=1}^n B_j = B\)) and the SNRs of the sub-channels, defined by \(S_j/(N_0B_j)\), are all equal, so that \(S_j = SB_j/B\). After all, this is pretty much how OFDM and other channelized communication methods work. In this post I give an easy proof of this result.
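The result is also easy to check numerically. The following Python sketch (with arbitrary parameter values) shows that splitting the full bandwidth with power proportional to bandwidth preserves the capacity, while an unequal-SNR allocation does strictly worse.

```python
import numpy as np

B, S, N0 = 1e6, 1e-3, 1e-10  # bandwidth (Hz), power (W), noise PSD (W/Hz)

def capacity(B, S):
    return B * np.log2(1 + S / (N0 * B))

C = capacity(B, S)

# Use all the bandwidth with power proportional to bandwidth (equal SNRs).
Bj = np.array([0.2, 0.3, 0.5]) * B
print(C, capacity(Bj, S * Bj / B).sum())          # both equal

# Any other power split makes the SNRs unequal and loses capacity.
print(capacity(Bj, S * np.array([0.5, 0.25, 0.25])).sum() < C)  # True
```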

A note about non-matched pulse filtering

This is a short note about the losses caused by non-matched pulse filtering in the demodulation of a PAM waveform. Recently I’ve needed to come back to these calculations several times, and I’ve found that even though the calculations are simple, sometimes I make silly mistakes on my first try. This post will serve me as a reference in the future to save some time. I have also been slightly surprised when I noticed that if we have two pulse shapes, let’s call them A and B, the losses of demodulating waveform A using pulse shape B are the same as the losses of demodulating waveform B using pulse shape A. I wanted to understand better why this happens.

Recall that if \(p(t)\) denotes the pulse shape of a PAM waveform and \(h(t)\) is a filter function, then in AWGN the SNR at the output of the demodulator is equal to the input SNR (with an appropriate normalization factor) times the factor\[\begin{equation}\tag{1}\frac{\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |h(t)|^2\, dt}.\end{equation}\]This factor describes the losses caused by filtering. As a consequence of the Cauchy-Schwarz inequality, we see that the output SNR is maximized when a matched filter \(h = p\) is used.

To derive this expression, we assume that we receive the waveform\[y(t) = ap(t) + n(t)\]with \(a \in \mathbb{C}\) and \(n(t)\) a circularly symmetric stationary Gaussian process with covariance \(\mathbb{E}[n(t)\overline{n(s)}] = \delta(t-s)\). The demodulator output is\[T(y) = \int_{-\infty}^{+\infty} y(t) \overline{h(t)}\, dt.\]The output SNR is defined as \(|\mathbb{E}[T(y)]|^2/V(T(y))\). Since \(\mathbb{E}[n(t)] = 0\) due to the circular symmetry, we have\[\mathbb{E}[T(y)] = a\int_{-\infty}^{+\infty} p(t)\overline{h(t)}\,dt.\]Additionally,\[\begin{split}V(T(y)) &= \mathbb{E}[|T(y) - \mathbb{E}[T(y)]|^2] = \mathbb{E}\left[\left|\int_{-\infty}^{+\infty} n(t)\overline{h(t)}\,dt\right|^2\right] \\ &= \mathbb{E}\left[\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} n(t)\overline{n(s)}\overline{h(t)}h(s)\,dtds\right] \\ &= \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \mathbb{E}\left[n(t)\overline{n(s)}\right]\overline{h(t)}h(s)\,dtds \\ &= \int_{-\infty}^{+\infty} |h(t)|^2\, dt. \end{split}\]Therefore, we see that the output SNR equals\[\frac{|a|^2\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |h(t)|^2\, dt}.\]

The losses caused by using a non-matched filter \(h\), in comparison to using a matched filter, can be computed as the quotient of the quantity (1) divided by the same quantity where \(h\) is replaced by \(p\). This gives\[\frac{\frac{\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |h(t)|^2\, dt}}{\frac{\left|\int_{-\infty}^{+\infty} |p(t)|^2\, dt\right|^2}{\int_{-\infty}^{+\infty} |p(t)|^2\, dt}}=\frac{\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |p(t)|^2\, dt\cdot \int_{-\infty}^{+\infty} |h(t)|^2\, dt}.\]

We notice that this expression is symmetric in \(p\) and \(h\), in the sense that if we interchange \(p\) and \(h\) we obtain the same quantity. This shows that, as I mentioned above, the losses obtained when filtering waveform A with pulse B coincide with the losses obtained when filtering waveform B with pulse A. This is a clear consequence of these calculations, but I haven’t found a way to understand this more intuitively. We can say that the losses are equal to the cosine squared of the angle between the pulse shape vectors in \(L^2(\mathbb{R})\). This remark makes the symmetry clear, but I’m not sure if I’m satisfied by this as an intuitive explanation.

As an example, let us compute the losses caused by receiving a square pulse shape, defined by \(p(t) = 1\) for \(0 \leq t \leq \pi\) and \(p(t) = 0\) elsewhere, with a half-sine pulse shape filter, defined by \(h(t) = \sin t\) for \(0 \leq t \leq \pi\) and \(h(t) = 0\) elsewhere. This case shows up in many different situations. We can compute the losses as indicated above, obtaining\[\frac{\left(\int_0^\pi \sin t \, dt\right)^2}{\int_0^\pi \sin^2t\,dt\cdot \int_0^\pi dt} = \frac{2^2}{\frac{\pi}{2}\cdot\pi}= \frac{8}{\pi^2}\approx 0.81 \approx -0.91\,\mathrm{dB}.\]
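This is easy to check numerically. The following Python sketch implements the loss formula for real-valued pulses, reproduces the \(8/\pi^2\) figure, and also shows the symmetry discussed above.

```python
import numpy as np
from scipy.integrate import quad

def loss_db(p, h, a=0.0, b=np.pi):
    """Filtering loss (dB) of demodulating pulse p with filter h, supported on [a, b]."""
    cross = quad(lambda t: p(t) * h(t), a, b)[0]
    ep = quad(lambda t: p(t) ** 2, a, b)[0]
    eh = quad(lambda t: h(t) ** 2, a, b)[0]
    return 10 * np.log10(cross ** 2 / (ep * eh))

square = lambda t: 1.0
halfsine = np.sin

print(loss_db(square, halfsine))  # -0.91 dB, i.e. 10*log10(8/pi**2)
print(loss_db(halfsine, square))  # the same, by symmetry
```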

An error in the DSN Telecommunications Link Design Handbook description of Reed-Solomon

The DSN Telecommunications Link Design Handbook is a large document describing many aspects pertaining to deep space communications and how they are implemented by the NASA Deep Space Network. One of the many things it contains is a description of a Reed-Solomon encoder for the CCSDS code using the Berlekamp bit-serial architecture. While following this description to implement an encoder, I have found an error. In this post, I explain the error and where I think it comes from.

Voyager 1 and Reed-Solomon

In one of my previous posts about Voyager 1, I stated that the Voyager probes used as forward error correction only the k=7, r=1/2 CCSDS convolutional code, and that Reed-Solomon wasn’t used. However, some days ago, Brett Gottula asked about this, citing several sources that stated that the Voyager probes used Reed-Solomon coding after their encounter with Saturn.

My source for stating that Reed-Solomon wasn’t used was some private communication with DSN operators. Since the XML files describing the configuration of the DSN receivers for Voyager 1 didn’t mention Reed-Solomon either, I had no reason to question this. However, the DSN only processes the spacecraft data up to some point (which usually includes all FEC decoding), and then passes the spacecraft frames to the mission project team without really looking at their contents. Therefore, it might be the case that the project team is the one that handles the Reed-Solomon code for the Voyagers. This would make sense especially if the code was something custom, rather than the CCSDS code (recall that Voyager predates the CCSDS standards). If this were true, the DSN wouldn’t really care whether there is Reed-Solomon or not, and they might have just forgotten about it.

After looking at the frames I had decoded from Voyager 1 in more detail, I realized that Brett might be right. Doing some more analysis, I have managed to check that the Voyager 1 frames do in fact use Reed-Solomon as described in the references that Brett mentioned. In this post I give a detailed look at the Reed-Solomon code used by the Voyager probes, compare it with the CCSDS code, and show how to perform Reed-Solomon decoding on the frames I decoded in the last post. The middle section of this post is rather math heavy, so readers might want to skip it and go directly to the section where Reed-Solomon codewords in the Voyager 1 frames are decoded.

Computing the symbol error rate of m-FSK

How to compute the symbol error rate of an \(m\)-FSK modulation is something that comes up in a variety of situations, since the math is the same in any setting in which the symbols are orthogonal (so it also applies to some spread spectrum modulations). I guess this must appear somewhere in the literature, but I can never find this result when I need it, so I have decided to write this post explaining the math.

Here I show an approach that I first learned from Wei Mingchuan BG2BHC two years ago during the Longjiang-2 lunar orbiter mission. While writing our paper about the mission, we wanted to compute a closed expression for the BER of the LRTC modulation used in the uplink (which is related to \(m\)-FSK). Using a clever idea, Wei was able to find a formula that involved an integral of CDFs and PDFs of chi-squared distributions. Even though this wasn’t really a closed formula, evaluating the integral numerically was much faster than doing simulations, especially for high \(E_b/N_0\).
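For reference, the integral looks roughly like this in the case of noncoherent detection of \(m\) orthogonal symbols (a sketch of the standard textbook setup with normalized statistics, not the exact LRTC formula from the paper): the detection statistic of the correct symbol is noncentral chi-squared with 2 degrees of freedom, the other \(m-1\) statistics are central chi-squared, and a correct decision requires the former to exceed all of the latter.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def fsk_ser(m, esn0_db):
    """Symbol error rate of noncoherent m-FSK at a given Es/N0 in dB."""
    esn0 = 10 ** (esn0_db / 10)
    # Signal bin: noncentral chi-squared (2 dof, noncentrality 2*Es/N0).
    # Noise bins: central chi-squared (2 dof). Integrate the probability that
    # the signal bin exceeds all m - 1 noise bins.
    integrand = lambda x: (stats.ncx2.pdf(x, df=2, nc=2 * esn0)
                           * stats.chi2.cdf(x, df=2) ** (m - 1))
    return 1 - quad(integrand, 0, np.inf)[0]

# Sanity check against the closed form for binary noncoherent FSK,
# P_e = exp(-Es/(2*N0)) / 2.
print(fsk_ser(2, 8.0), 0.5 * np.exp(-10 ** 0.8 / 2))
```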

Recently, I came to the same idea again independently. I was trying to compute the symbol error rate of \(m\)-FSK, and even though I remembered that the problem about LRTC was related, I had forgotten about Wei’s formula and the trick used to obtain it. So I thought of something on my own. Later, digging through my emails, I found the messages Wei and I exchanged about this and saw that I had arrived at the same idea and formula. Maybe the trick was in the back of my mind all the time.

Due to space constraints, the BER formula for LRTC and its mathematical derivation didn’t make it into the Longjiang-2 paper. Therefore, I include a small section below with the details.
