Maia SDR

I’m happy to announce the release of Maia SDR, an open-source FPGA-based SDR project focusing on the ADALM Pluto. The first release provides a firmware image for the Pluto with the following functionality:

  • Web-based interface that can be accessed from a smartphone, PC or other device.
  • Real-time waterfall display supporting up to 61.44 Msps (limit given by the AD936x RFIC of the Pluto).
  • IQ recording in SigMF format, at up to 61.44 Msps and with a 400 MiB maximum data size (limit given by the Pluto RAM size). Recordings can be downloaded to a smartphone or other device; a minimal sketch of what SigMF metadata looks like is shown below.
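To give a rough idea of the SigMF format used for the recordings, here is a minimal sketch of the kind of JSON metadata file that accompanies the raw IQ data in a SigMF recording. This is only an illustration, not Maia SDR's actual output, and the field values (datatype, frequencies, description) are assumptions for a Pluto recording.

```python
import json

# Minimal SigMF metadata sketch (assumed values; not the actual Maia SDR output).
# A SigMF recording is a pair of files: the raw samples (.sigmf-data) and this
# JSON metadata (.sigmf-meta).
meta = {
    "global": {
        "core:datatype": "ci16_le",    # assumed: interleaved 16-bit I/Q, little endian
        "core:sample_rate": 61.44e6,   # samples per second
        "core:version": "1.0.0",
        "core:description": "ADALM Pluto IQ recording",
    },
    "captures": [
        {
            "core:sample_start": 0,
            "core:frequency": 435e6,   # assumed centre frequency in Hz
        }
    ],
    "annotations": [],
}

with open("recording.sigmf-meta", "w") as f:
    json.dump(meta, f, indent=4)
```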

A note about non-matched pulse filtering

This is a short note about the losses caused by non-matched pulse filtering in the demodulation of a PAM waveform. Recently I’ve needed to come back to these calculations several times, and I’ve found that even though they are simple, sometimes I make silly mistakes on my first try. This post will serve as a future reference to save me some time. I was also slightly surprised to notice that if we have two pulse shapes, let’s call them A and B, the losses of demodulating waveform A using pulse shape B are the same as the losses of demodulating waveform B using pulse shape A. I wanted to understand better why this happens.

Recall that if \(p(t)\) denotes the pulse shape of a PAM waveform and \(h(t)\) is a filter function, then in AWGN the SNR at the output of the demodulator is equal to the input SNR (with an appropriate normalization factor) times the factor\[\begin{equation}\tag{1}\frac{\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |h(t)|^2\, dt}.\end{equation}\]This factor describes the losses caused by filtering. As a consequence of the Cauchy-Schwarz inequality, we see that the output SNR is maximized when a matched filter \(h = p\) is used.

To derive this expression, we assume that we receive the waveform\[y(t) = ap(t) + n(t)\]with \(a \in \mathbb{C}\) and \(n(t)\) a circularly symmetric stationary Gaussian process with covariance \(\mathbb{E}[n(t)\overline{n(s)}] = \delta(t-s)\). The demodulator output is\[T(y) = \int_{-\infty}^{+\infty} y(t) \overline{h(t)}\, dt.\]The output SNR is defined as \(|\mathbb{E}[T(y)]|^2/V(T(y))\). Since \(\mathbb{E}[n(t)] = 0\) due to the circular symmetry, we have\[\mathbb{E}[T(y)] = a\int_{-\infty}^{+\infty} p(t)\overline{h(t)}\,dt.\]Additionally,\[\begin{split}V(T(y)) &= \mathbb{E}[|T(y) - \mathbb{E}[T(y)]|^2] = \mathbb{E}\left[\left|\int_{-\infty}^{+\infty} n(t)\overline{h(t)}\,dt\right|^2\right] \\ &= \mathbb{E}\left[\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} n(t)\overline{n(s)}\overline{h(t)}h(s)\,dt\,ds\right] \\ &= \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \mathbb{E}\left[n(t)\overline{n(s)}\right]\overline{h(t)}h(s)\,dt\,ds \\ &= \int_{-\infty}^{+\infty} |h(t)|^2\, dt. \end{split}\]Therefore, we see that the output SNR equals\[\frac{|a|^2\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |h(t)|^2\, dt}.\]

The losses caused by using a non-matched filter \(h\), in comparison to using a matched filter, can be computed as the quotient of the quantity (1) divided by the same quantity where \(h\) is replaced by \(p\). This gives\[\frac{\frac{\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |h(t)|^2\, dt}}{\frac{\left|\int_{-\infty}^{+\infty} |p(t)|^2\, dt\right|^2}{\int_{-\infty}^{+\infty} |p(t)|^2\, dt}}=\frac{\left|\int_{-\infty}^{+\infty} p(t) \overline{h(t)}\, dt\right|^2}{\int_{-\infty}^{+\infty} |p(t)|^2\, dt\cdot \int_{-\infty}^{+\infty} |h(t)|^2\, dt}.\]

We notice that this expression is symmetric in \(p\) and \(h\), in the sense that if we interchange \(p\) and \(h\) we obtain the same quantity. This shows that, as I mentioned above, the losses obtained when filtering waveform A with pulse B coincide with the losses obtained when filtering waveform B with pulse A. This is a clear consequence of these calculations, but I haven’t found a way to understand it more intuitively. We can say that the losses equal \(|\langle p, h\rangle|^2/(\|p\|^2\|h\|^2)\), which is the cosine squared of the angle between the pulse shape vectors in \(L^2(\mathbb{R})\). This remark makes the symmetry clear, but I’m not sure I’m satisfied with it as an intuitive explanation.

As an example, let us compute the losses caused by receiving a square pulse shape, defined by \(p(t) = 1\) for \(0 \leq t \leq \pi\) and \(p(t) = 0\) elsewhere, with a half-sine pulse shape filter, defined by \(h(t) = \sin t\) for \(0 \leq t \leq \pi\) and \(h(t) = 0\) elsewhere. This case shows up in many different situations. We can compute the losses as indicated above, obtaining\[\frac{\left(\int_0^\pi \sin t \, dt\right)^2}{\int_0^\pi \sin^2t\,dt\cdot \int_0^\pi dt} = \frac{2^2}{\frac{\pi}{2}\cdot\pi}= \frac{8}{\pi^2}\approx 0.81 \approx -0.91\,\mathrm{dB}.\]
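As a quick numerical sanity check, here is a small numpy sketch (not part of the original calculations) that evaluates the loss formula for these two pulse shapes by numerical integration:

```python
import numpy as np

def filtering_loss(p, h, t):
    """Loss of demodulating pulse shape p with filter h, per the formula above.

    p and h are arrays of samples of the pulses on the time grid t.
    Returns the loss as a linear power ratio (<= 1)."""
    num = np.abs(np.trapz(p * np.conj(h), t)) ** 2
    den = np.trapz(np.abs(p) ** 2, t) * np.trapz(np.abs(h) ** 2, t)
    return num / den

t = np.linspace(0, np.pi, 10001)
p = np.ones_like(t)            # square pulse on [0, pi]
h = np.sin(t)                  # half-sine pulse on [0, pi]

loss = filtering_loss(p, h, t)
print(loss, 8 / np.pi**2)      # both approximately 0.8106
print(10 * np.log10(loss))     # approximately -0.91 dB
```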

GRCon22 Capture The Flag

I have spent a great week attending GRCon22 remotely. Besides trying to follow all the talks as usual, I have been participating in the Capture The Flag. I had submitted a few challenges for the CTF, and I wanted to have some fun and see what challenges other people had sent. I ended up in 3rd position. In this post I’ll give a walkthrough of the challenges I submitted, and of my solutions to some of the other challenges. The material I am presenting here is now in the grcon22-ctf GitHub repository.

Writing GUPPI files with GNU Radio and using SETI tools

GUPPI stands for Green Bank Ultimate Pulsar Processing Instrument. The GUPPI raw file format, which was originally used by this instrument for pulsar observations, is now used in many telescopes for radio astronomy and SETI. For instance Breakthrough Listen uses the GUPPI format as part of the processing pipeline, as described in this paper. The Breakthrough Listen blimpy tools work with GUPPI files or with filterbank files (basically, waterfalls) that have been produced from a GUPPI file using rawspec.

I think that GNU Radio can be a very useful tool for SETI and radio astronomy, as evidenced by the partnership of GNU Radio and the SETI Institute. However, the set of tools used in the GNU Radio ecosystem (and by the wider SDR community) and the tools used traditionally by the SETI community are quite different, and even the file formats and some key concepts differ. Therefore, interfacing these tools is not trivial.

During this summer I have been teaching some GNU Radio lessons to the BSRC REU students. As part of these sessions, I made gr-guppi, a GNU Radio out-of-tree module that can write GUPPI files. I thought this could be potentially useful to the students, and it might be a first step in increasing the compatibility between GNU Radio and the SETI tools. (The materials for the sessions of this year are in this repository, and the materials for 2021 are here; these may be useful to someone even without the context of the workshop-like sessions for which they were created).

In this post I will show how gr-guppi works and what the key concepts for GUPPI files are, since these files store the output of a polyphase filterbank, something that many people in the GNU Radio community might not be very familiar with. The goal of the post is to generate a simulated technosignature in GNU Radio (a CW carrier drifting in frequency) and then detect it using turboSETI, which is a tool for detecting narrowband signals with a Doppler drift.
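To give a rough idea of what such a simulated technosignature looks like before it goes through gr-guppi, here is a small numpy sketch (independent of the flowgraph used in the post, with made-up parameters) that generates a linearly drifting CW carrier in noise and computes a coarse waterfall with a plain FFT filterbank, a simplified stand-in for the polyphase filterbank:

```python
import numpy as np

# Assumed toy parameters (not related to any real observation).
fs = 1e6         # sample rate in Hz
duration = 10.0  # seconds
f0 = 100e3       # starting frequency of the CW carrier in Hz
drift = 2.0      # drift rate in Hz/s
nfft = 1024      # channels of the (simplified) filterbank

t = np.arange(int(fs * duration)) / fs
# Linearly drifting carrier: instantaneous frequency f0 + drift * t, so the
# phase is the integral 2*pi*(f0*t + drift*t**2/2).
x = np.exp(2j * np.pi * (f0 * t + 0.5 * drift * t**2))
x += (np.random.randn(x.size) + 1j * np.random.randn(x.size)) / np.sqrt(2) * 10  # noise

# Coarse waterfall: split into blocks of nfft samples and FFT each block.
blocks = x[: x.size // nfft * nfft].reshape(-1, nfft)
waterfall = np.abs(np.fft.fftshift(np.fft.fft(blocks, axis=1), axes=1)) ** 2
print(waterfall.shape)  # (time bins, frequency channels)
```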

Before going on, it is convenient to mention that an alternative to this approach is using gr-turboseti, which wraps up turboSETI as a GNU Radio block. This was Yiwei Chai’s REU project at the ATA in 2021.

Real time Doppler correction with GNU Radio

Satellite RF signals are shifted in frequency proportionally to the line-of-sight velocity between the satellite and groundstation, due to the Doppler effect. The Doppler frequency depends on time, on the location of the groundstation, and on the orbit of the satellite, as well as on the carrier frequency. In satellite communications, it is common to correct for the Doppler present in the downlink signals before processing them. It is also common to correct for the uplink Doppler before transmitting an uplink signal, so that the satellite receiver sees a constant frequency.
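To get a feel for the magnitudes involved, the narrowband Doppler shift is \(f_d = -\frac{v}{c}f_c\), where \(v\) is the line-of-sight range rate and \(f_c\) the carrier frequency. A quick back-of-the-envelope calculation (the numbers here are only an illustration, roughly typical of a LEO pass in the 70 cm band):

```python
c = 299792458.0   # speed of light in m/s
v = 7500.0        # assumed line-of-sight range rate in m/s
fc = 437e6        # assumed carrier frequency in Hz
print(-v / c * fc)  # about -10.9 kHz of Doppler shift
```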

For Earth satellites, these kinds of corrections can be done in GNU Radio using the gr-gpredict-doppler out-of-tree module and Gpredict (see this old post). In this method, Gpredict calculates the current Doppler frequency and sends it to gr-gpredict-doppler, which updates a variable in the GNU Radio flowgraph that controls the Doppler correction (for instance by changing the frequency of a Frequency Xlating FIR Filter or Signal Source).

I’m more interested in non-Earth-orbiting satellites, for which Gpredict, which uses TLEs, doesn’t work. I want to perform Doppler correction using data from NASA HORIZONS or computed with GMAT. To do this, I have added a new Doppler Correction C++ block to gr-satellites. This block reads a text file that lists Doppler frequency versus time, and uses it to perform the Doppler correction. In this post, I describe how the block works.
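The underlying idea can be sketched in a few lines of numpy (this is only an illustration of the concept, not the actual C++ implementation, and the file format shown is an assumption): interpolate the frequency-versus-time data to the sample instants, integrate it to obtain a phase, and mix the signal with the corresponding complex exponential.

```python
import numpy as np

fs = 48e3  # assumed sample rate of the IQ signal

# In practice these would come from a two-column text file (time [s],
# Doppler frequency [Hz]), e.g. loaded with np.loadtxt.
t_dopp = np.array([0.0, 60.0, 120.0, 180.0])
f_dopp = np.array([10e3, 4e3, -4e3, -10e3])

# IQ samples to correct (placeholder noise for this sketch).
x = (np.random.randn(100000) + 1j * np.random.randn(100000)).astype(np.complex64)
t = np.arange(x.size) / fs

# Interpolate the Doppler frequency to each sample time, integrate it
# (cumulative sum) to obtain the phase of a correction NCO, and mix.
freq = np.interp(t, t_dopp, f_dopp)
phase = 2 * np.pi * np.cumsum(freq) / fs
x_corrected = x * np.exp(-1j * phase)
```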

LTE downlink: PBCH and PDCCH

This post is a continuation of my series about LTE signal analysis. In the previous post I showed how to decode the PHICH. Now we will decode two other downlink channels, the PBCH (physical broadcast channel) and the PDCCH (physical downlink control channel).

The PBCH is used to transmit the MIB (master information block). This is a small data packet that all the UEs must decode after detecting a cell using the synchronization signals. The MIB contains essential information for the usage of the cell, such as the cell bandwidth and PHICH configuration. The PDCCH contains control information, such as uplink grants and the scheduling of the PDSCH (physical downlink shared channel).

The PBCH and PDCCH use the same kind of channel coding: a tail-biting k=7, r=1/3 convolutional code with a circular buffer for rate matching that performs puncturing and repetition coding as needed to obtain the required codeword size. The remaining aspects of the PBCH and PDCCH are quite different, so they will be treated separately.
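As a sketch of the first stage of this channel coding, the following Python function implements a rate-1/3 tail-biting convolutional encoder with the LTE generator polynomials 133, 171 and 165 (octal); rate matching with the circular buffer is not shown, and this is only an illustration rather than code from the post:

```python
import numpy as np

G = [0o133, 0o171, 0o165]  # LTE convolutional code generator polynomials (octal)
K = 7                      # constraint length

def tail_biting_cc_encode(bits):
    """Rate-1/3 tail-biting convolutional encoder (illustration only)."""
    bits = np.asarray(bits, dtype=int)
    # Tail-biting: the shift register starts loaded with the last K-1 input
    # bits, so that the initial and final states coincide.
    state = bits[-(K - 1):][::-1].tolist()
    out = []
    for b in bits:
        reg = [int(b)] + state  # current input followed by the register contents
        for g in G:
            taps = [(g >> (K - 1 - i)) & 1 for i in range(K)]
            out.append(sum(t * r for t, r in zip(taps, reg)) % 2)
        state = reg[:-1]        # shift the register by one position
    return np.array(out)

codeword = tail_biting_cc_encode(np.random.randint(0, 2, 40))
print(codeword.size)  # 3 * 40 = 120 coded bits before rate matching
```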

As usual, we will be using a short IQ recording from my local cell site. The link to the recording is given at the end of the post.

LTE downlink: PHICH

This is a continuation of my series of posts about LTE. In the previous post we looked at the downlink cell-specific reference signals (CRS), transmit diversity equalization, and the demodulation of the PBCH (physical broadcast channel), PCFICH (physical control format indicator channel) and PDSCH (physical downlink shared channel). In this post we will look at the PHICH (physical hybrid ARQ indicator channel). As usual, I will be analysing the recording of a base station that I did in the first post about the LTE downlink.

The PHICH is used to send hybrid-ARQ ACK/NACKs to the UEs. Each PHICH transmission carries a single bit, either ACK (encoded by the bit 1) or NACK (encoded by the bit 0). Repetition encoding is used to increase the chances of correct decoding, and an orthogonal overlay code allows transmitting information for several UEs using the same resource elements.
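The following toy numpy sketch illustrates the idea of repetition plus an orthogonal overlay code (a conceptual illustration, not the exact PHICH processing defined in the specification): two UEs are overlaid on the same 12 symbols using different cover sequences, and each one can be recovered by correlating against its own cover.

```python
import numpy as np

# Orthogonal cover sequences (rows of a 4x4 Walsh-Hadamard matrix).
walsh = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1],
                  [1, 1, -1, -1],
                  [1, -1, -1, 1]])

def phich_like_symbols(bit, cover_index):
    bpsk = np.repeat(1 - 2 * bit, 3)           # repetition factor 3, BPSK mapping
    return np.kron(bpsk, walsh[cover_index])   # 12 spread symbols

# Two UEs overlaid on the same resource elements with different covers.
tx = phich_like_symbols(1, 0) + phich_like_symbols(0, 1)

# Despreading for UE 0: correlate each group of 4 symbols with its cover.
despread = tx.reshape(3, 4) @ walsh[0] / 4
print(despread)  # about [-1, -1, -1], i.e. the BPSK symbols for bit 1
```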

The PHICH is transmitted in the control region of the subframe, which is formed by the first 1, 2, or 3 symbols of the subframe (according to the CFI value). Like other control channels, the PHICH uses REGs (resource element groups). Recall that a REG is a set of 4 resource elements which are not used for the transmission of the CRS and which are adjacent in frequency if we ignore the resource elements used for the CRS. For instance, when 2 or 4 antenna ports are used for the CRS, in the first symbol of the subframe two resource elements in every block of 6 are used for the CRS. The other 4 resource elements form a REG. Therefore, there are 2 REGs per resource block. In symbols 2 and 3 there may not be resource elements allocated to the CRS, so there are 3 REGs per resource block in that case.

A PHICH transmission uses 3 REGs which are equally spaced over the bandwidth of the cell, in order to give frequency diversity. This is similar to the PCFICH, which uses 4 equally spaced REGs in the first symbol of the subframe. Depending on the configuration of a parameter called PHICH duration, the PHICH can either use the first symbol in each subframe (normal PHICH duration), or the first 2 or 3 symbols in each subframe (extended PHICH duration). Here we will only look at the normal PHICH duration, which is what is used in the recording. In the normal duration, the 3 REGs are transmitted simultaneously in the first symbol of the subframe. In the extended duration the 3 REGs are distributed over the first 2 or 3 symbols of the subframe.

In the waterfall below we can see a PHICH transmission. In the first symbol of each subframe we can see the 4 REGs used by the PCFICH (the lowest-frequency REG, at around -4 MHz, is barely visible). In the subframe near the centre of the image (which incidentally contains the synchronization signals), in addition to these 4 REGs, there are 3 more REGs in use, which I have marked with red ticks. These form a PHICH transmission.

Waterfall of an LTE downlink signal, showing PHICH transmissions

LTE downlink: reference signals and transmit diversity

In this post I continue with the analysis of an LTE downlink recording, which I started by looking at the primary and secondary synchronization signals. This recording is a one second excerpt of a 10 MHz cell in the B20 band that I recorded close to the base station, with a line-of-sight channel.

Now we will handle the reference signals to perform channel estimation. This will be used to equalize the received data transmissions. We will also handle the transmit diversity used by the base station, and show how to locate and demodulate some of the physical channels. All the calculations and plots are done in a Jupyter notebook.

The cell-specific reference signals (CRS) are transmitted in every subframe across all the cell bandwidth. They can be transmitted on either one, two or four antenna ports. In LTE, the concept of an antenna port does not necessarily correspond to a physical antenna. Signals are said to use the same antenna port if they have the same propagation channel to the user. Therefore, different beamforming combinations of the same physical antennas constitute different antenna ports.

The figure below shows the resource elements that are used for the reference signals in each of the ports. The resource elements allocated to reference signals for the antenna ports that are active are only used for this purpose, and only one of the ports transmits the reference signal in each of these resource elements. For instance, say that the cell uses two antenna ports. Then the elements marked as \(R_0\) and \(R_1\) below will only be used for the CRS, while the elements marked as \(R_2\) and \(R_3\) are free and can be used for other purposes.

Allocation of resource elements to CRS (taken from the LTE-Advanced book by Sassan Ahmadi)

A frequency offset of PCI (physical cell ID) modulo 6 subcarriers is applied to the pattern shown above. This is done so that the reference signals of cells with different PCIs use different subcarriers, in order to prevent interference (especially for cells in the same cell identity group, since their PCIs differ modulo 3).
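As a small sketch of how this shift works (my reading of the CRS mapping; treat the per-symbol offset values as an assumption rather than reference code), the subcarrier indices used by the port 0 CRS in one OFDM symbol can be computed as follows:

```python
import numpy as np

def crs_subcarriers(pci, n_rb, symbol_shift):
    """Subcarrier indices used by the CRS in one OFDM symbol (sketch).

    pci: physical cell ID; n_rb: number of resource blocks;
    symbol_shift: 0 or 3, the per-symbol/port offset (assumed 0 for port 0 in
    the first symbol of a slot and 3 in the fifth symbol)."""
    v_shift = pci % 6        # cell-dependent frequency offset
    m = np.arange(2 * n_rb)  # 2 CRS resource elements per resource block per symbol
    return 6 * m + (symbol_shift + v_shift) % 6

print(crs_subcarriers(pci=123, n_rb=50, symbol_shift=0)[:5])  # [ 3  9 15 21 27]
```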

In the waterfall of our recording, we can clearly see the CRS transmissions. They last one symbol and occupy the whole bandwidth of the cell. We can also see the PSS, SSS and PBCH, as we remarked in the previous post. These indicate where the subframes start. Thus, we can see that the first and fifth symbols of each slot are used for transmission of the CRS. This means that the cell does not use four antenna ports, since their corresponding CRS would be transmitted on the second symbol of each slot.

Waterfall of the downlink recording, showing CRS, PSS, SSS and PBCH

LTE downlink: synchronization signals

I have been posting about analysing LTE signals, with a focus on the structure of the pilot signals. In my two previous posts on this topic, I looked at the uplink using an IQ recording of my phone. Now I turn my attention to the downlink. I have made a short recording of the B20 band carrier of my local base station and I will be analysing it in this and future posts.

In this post, we will look at the primary synchronization signal (PSS) and secondary synchronization signal (SSS). These are the first signals in the downlink that a UE (phone) will attempt to detect and measure to estimate the carrier frequency offset, symbol time offset, start of the radio frames, cell identity, etc.

In an FDD system such as the one we are looking at here, the PSS is transmitted in the last symbol of slots 0 and 10 in each radio frame (recall that LTE FDD signals are organized in 0.5 ms slots, each containing 7 OFDM symbols; a radio frame lasts 10 ms and contains 20 slots). The SSS is transmitted on the symbol before the PSS.
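The PSS is a length-63 Zadoff-Chu sequence with its central (DC) element punctured, using root 25, 29 or 34 depending on the identity \(N_{ID}^{(2)}\) within the cell group. Here is a small Python sketch of its generation, following my reading of the standard (treat the details as an assumption rather than reference code):

```python
import numpy as np

def pss_sequence(n_id_2):
    """Frequency-domain PSS sequence (62 values around DC); illustration only."""
    u = {0: 25, 1: 29, 2: 34}[n_id_2]  # Zadoff-Chu root for each N_ID^(2)
    n = np.arange(31)
    return np.concatenate([
        np.exp(-1j * np.pi * u * n * (n + 1) / 63),          # subcarriers below DC
        np.exp(-1j * np.pi * u * (n + 32) * (n + 33) / 63),  # subcarriers above DC
    ])

print(pss_sequence(0).shape)  # (62,)
```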

The figure below shows the waterfall of the first 20 ms of the recording. I have marked the locations of the PSS and SSS with a red tick. These signals only occupy the 6 central resource blocks (1.08 MHz), so that they are compatible with all the possible cell bandwidths (LTE supports cell bandwidths of 1.4, 3, 5, 10, 15 and 20 MHz) and can be received by a UE which doesn’t know the cell bandwidth yet. In this case, we are looking at a 10 MHz cell, and we can see the neighbouring 10 MHz cells in the top and bottom of the waterfall.

Waterfall of LTE downlink carrier. Synchronization signals are marked with a red line.

We can see that following every other PSS and SSS transmission there is another 1.08 MHz transmission. This corresponds to the PBCH (physical broadcast channel), which is transmitted on the first 4 symbols of slot 1 in each radio frame. The keen reader will have noticed that the PBCH is slightly wider than the PSS and SSS. This is because the PSS and SSS only use the central 62 out of the 72 subcarriers in the 6 resource blocks they occupy, leaving 5 subcarriers at each edge as a guardband. This helps UEs with a large carrier frequency offset to detect these signals. On the other hand, the PBCH occupies all 72 subcarriers.

Demodulation of LTE PUCCH

In a previous post I showed how to demodulate the LTE physical uplink shared channel (PUSCH) by using a recording of my phone and some Python code. This is a continuation of that post. Here we will look at the physical uplink control channel (PUCCH) transmissions in that recording, and use a similar approach to demodulate them. All the work is done in a Jupyter notebook, which is linked at the end of the post.

The PUCCH carries control information from the UE to the eNodeB, such as scheduling requests, ACK/NACK for HARQ, and the CQI (channel quality indicator). A PUCCH transmission lasts for one subframe (1 ms) and typically occupies a single 12-subcarrier resource block in each of the two 0.5 ms slots in the subframe (there are more recently introduced PUCCH formats which use more subcarriers).

PUCCH transmissions are allocated to the edges of the uplink bandwidth, so as to leave the centre clear as a contiguous segment to be used for PUSCH. On its first slot, the PUCCH transmission uses some particular resource block. On its second slot it uses the symmetric resource block with respect to the centre frequency. This gives some frequency diversity to the transmissions.
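A quick sketch of this mirroring (an illustration with an assumed resource block numbering, not code from the notebook): in a 10 MHz cell with 50 uplink resource blocks, a PUCCH transmission that uses resource block \(m\) in its first slot uses resource block \(49 - m\) in its second slot.

```python
n_rb_ul = 50  # uplink resource blocks in a 10 MHz cell

def second_slot_prb(first_slot_prb):
    # Mirror the resource block index with respect to the centre of the band.
    return n_rb_ul - 1 - first_slot_prb

print(second_slot_prb(2))  # 47: the symmetric resource block at the other band edge
```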

The figure below shows a portion of the waterfall of the LTE uplink recording that we will be using (the link to the recording is included in the previous post). It corresponds to a 10 MHz-wide cell in the B20 band. The PUCCH transmissions are the narrow bursts. The wider, stronger bursts are PUSCH.

Waterfall of an LTE uplink showing some PUCCH and PUSCH transmissions

This illustrates how the PUCCH transmissions are allocated to the edges of the cell, and how each one jumps to the symmetric resource block in its second slot.