Decoding the QO-100 multimedia beacon with GNU Radio: part II

In my previous post I showed a GNU Radio demodulator for the QO-100 multimedia beacon, which AMSAT-DL has recently started to broadcast through the QO-100 NB transponder, using a downlink frequency of 10489.995 MHz. This demodulator flowgraph could receive and save to disk the files transmitted by the beacon, using the file receiver from gr-satellites. However, its performance was not very good, since it relied on a couple of ad-hoc Python blocks. Additionally, the real-time streaming data (which uses WebSockets) was not handled.

I have continued working on the decoder and solved these problems. Now we have a decoder with good performance that uses new C++ blocks that I have added to gr-satellites, and the streaming data is supported. I think that the only feature that isn’t supported yet is displaying the AMSAT bulletins on the qo100info.html web page (but the bulletins are received and saved to disk).

I have added the decoder and related tools to the examples folder of gr-satellites, so that other people can set this up more easily. In this post I summarise this work.

Decoding the QO-100 multimedia beacon with GNU Radio

Last weekend, AMSAT-DL started some test transmissions of a high-speed multimedia beacon through the QO-100 NB transponder. The beacon uses the high-speed modem by Kurt Moraw DJ0ABR. It is called “high-speed” because the idea is to fit several kbps of data within the typical 2.7 kHz bandwidth of an SSB channel. The modem waveform is 2.4 kbaud 8APSK with Reed-Solomon (255, 223) frames. The net data rate (taking into account FEC and syncword overhead) is about 6.2 kbps.
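As a quick sanity check on these figures, we can do the back-of-the-envelope arithmetic (my own calculation, not taken from the modem documentation):

```python
baud = 2400            # symbol rate
bits_per_symbol = 3    # 8APSK
rs_rate = 223 / 255    # Reed-Solomon (255, 223) code rate

raw_rate = baud * bits_per_symbol   # 7200 bps on the air
after_fec = raw_rate * rs_rate      # ~6297 bps after Reed-Solomon
print(f'{after_fec:.0f} bps before syncword overhead')
# the syncword and framing overhead account for the remaining ~100 bps,
# bringing the net rate down to roughly 6.2 kbps
```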

I had never worked with this modem before, even though it served as motivation for my 32APSK modem (still a work in progress). With a continuous 24/7 transmission on QO-100, this was the perfect time to play with the modem, so I quickly put something together in GNU Radio. In this post I explain how my prototype decoder works and what remains to be improved.

LTE downlink: reference signals and transmit diversity

In this post I continue with the analysis of an LTE downlink recording, which I started by looking at the primary and secondary synchronization signals. This recording is a one second excerpt of a 10 MHz cell in the B20 band that I recorded close to the base station, with a line-of-sight channel.

Now we will handle the reference signals to perform channel estimation. This will be used to equalize the received data transmissions. We will also handle the transmit diversity used by the base station, and show how to locate and demodulate some of the physical channels. All the calculations and plots are done in a Jupyter notebook.

The cell-specific reference signals (CRS) are transmitted in every subframe across all the cell bandwidth. They can be transmitted on one, two, or four antenna ports. In LTE, the concept of an antenna port does not necessarily correspond to a physical antenna. Signals are said to use the same antenna port if they have the same propagation channel to the user. Therefore, different beamforming combinations of the same physical antennas constitute different antenna ports.

The figure below shows the resource elements that are used for the reference signals in each of the ports. The resource elements allocated to reference signals for the antenna ports that are active are only used for this purpose, and only one of the ports transmits the reference signal in each of these resource elements. For instance, say that the cell uses two antenna ports. Then the elements marked as \(R_0\) and \(R_1\) below will only be used for the CRS, while the elements marked as \(R_2\) and \(R_3\) are free and can be used for other purposes.

Allocation of resource elements to CRS (taken from the LTE-Advanced book by Sassan Ahmadi)

A frequency offset equal to the PCI (physical cell ID) modulo 6 subcarriers is applied to the pattern shown above. This is done so that the reference signals of cells with different PCIs use different subcarriers, in order to prevent interference (especially between cells in the same group, since their PCIs are different modulo 3).
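To make the mapping concrete, here is a short Python sketch of the CRS subcarrier positions as I read them from TS 36.211 (the v values and symbol indices follow the standard, but take this as an illustrative sketch rather than a validated implementation):

```python
def crs_subcarriers(port, l, ns, pci, n_rb_dl=50):
    """Subcarrier indices carrying CRS for a given antenna port, OFDM
    symbol l and slot number ns (normal CP). Ports 0 and 1 transmit
    their CRS in symbols l = 0 and l = 4; ports 2 and 3 in l = 1."""
    if port == 0:
        v = 0 if l == 0 else 3
    elif port == 1:
        v = 3 if l == 0 else 0
    elif port == 2:
        v = 3 * (ns % 2)
    else:  # port 3
        v = 3 + 3 * (ns % 2)
    v_shift = pci % 6  # the PCI modulo 6 frequency offset described above
    return [6 * m + (v + v_shift) % 6 for m in range(2 * n_rb_dl)]
```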

In the waterfall of our recording, we can clearly see the CRS transmissions. They last one symbol and occupy the whole bandwidth of the cell. We can also see the PSS, SSS and PBCH, as we remarked in the previous post. These indicate where the subframes start. Thus, we can see that the first and fifth symbols of each slot are used for transmission of the CRS. This means that the cell does not use four antenna ports, since the CRS for ports 2 and 3 would be transmitted on the second symbol of each slot.

Waterfall of the downlink recording, showing CRS, PSS, SSS and PBCH

LTE downlink: synchronization signals

I have been posting about analysing LTE signals, with a focus on the structure of the pilot signals. In my two previous posts on this topic, I looked at the uplink using an IQ recording of my phone. Now I turn my attention to the downlink. I have made a short recording of the B20 band carrier of my local base station, and I will be analysing it in this and future posts.

In this post, we will look at the primary synchronization signal (PSS) and secondary synchronization signal (SSS). These are the first signals in the downlink that a UE (phone) will attempt to detect and measure in order to estimate the carrier frequency offset, symbol time offset, start of the radio frames, cell identity, etc.

In an FDD system such as the one we are looking at here, the PSS is transmitted in the last symbol of slots 0 and 10 in each radio frame (recall that LTE FDD signals are organized in 0.5 ms slots, each containing 7 OFDM symbols; a radio frame lasts 10 ms and contains 20 slots). The SSS is transmitted on the symbol before the PSS.
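For reference, the PSS is one of three length-62 Zadoff-Chu sequences, selected by the cell identity within the group. A minimal sketch of its generation, following the definition in TS 36.211:

```python
import numpy as np

PSS_ROOTS = {0: 25, 1: 29, 2: 34}  # Zadoff-Chu root u for each N_ID^(2)

def pss_sequence(n_id_2):
    """Length-62 frequency-domain PSS sequence."""
    u = PSS_ROOTS[n_id_2]
    n = np.arange(62)
    m = np.where(n < 31, n * (n + 1), (n + 1) * (n + 2))
    return np.exp(-1j * np.pi * u * m / 63)
```

A UE typically correlates the received signal against these three candidate sequences to obtain both the symbol timing and the cell identity within the group.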

The figure below shows the waterfall of the first 20 ms of the recording. I have marked the locations of the PSS and SSS with a red tick. These signals only occupy the 6 central resource blocks (1.08 MHz), so that they are compatible with all the possible cell bandwidths (LTE supports cell bandwidths of 1.4, 3, 5, 10, 15 and 20 MHz) and can be received by a UE which doesn’t know the cell bandwidth yet. In this case, we are looking at a 10 MHz cell, and we can see the neighbouring 10 MHz cells at the top and bottom of the waterfall.

Waterfall of LTE downlink carrier. Synchronization signals are marked with a red line.

We can see that following every other PSS and SSS transmission there is another 1.08 MHz transmission. This corresponds to the PBCH (physical broadcast channel), which is transmitted in the first 4 symbols of slot 1 in each radio frame. The keen reader will have noticed that the PBCH is slightly wider than the PSS and SSS. This is because the PSS and SSS only use the central 62 of the 72 subcarriers in the 6 resource blocks they occupy, leaving 5 subcarriers at each edge as a guardband. This helps UEs with a large carrier frequency offset detect these signals. The PBCH, on the other hand, occupies all 72 subcarriers.

Timing SDR recordings with GPS

Following a discussion on Twitter about how to use satellite signals to check that distributed receivers are properly synchronized, I have decided to write a post about how to use GPS signals to timestamp an SDR recording. The idea is simple: we do a short IQ recording of GPS signals, and then process those signals to find the GPS time corresponding to the start of the recording. This can be applied in many contexts, such as:

  • Checking if the 1PPS synchronization in an SDR receiver is working correctly.
  • Timestamping an SDR recording without the need for a GPS receiver or 1PPS input, by first recording GPS signals for some seconds and then moving to the signals of interest (this only works if you’re able to change frequency without stopping the sample stream).
  • Measuring hardware delays between the 1PPS input and the ADC of an SDR (for this you need to know the hardware delay between the antenna connector and 1PPS output of your GPSDO).
  • Checking if synchronization is repetitive across restarts or power cycles.

We will do things in a fairly manual way, using a couple of open source tools and a Jupyter notebook. The procedure could certainly be automated further (but if you do so, at some point you might end up building a full-fledged GPS receiver!). The post is written with a walk-through approach in mind, and besides the usefulness of timestamping recordings, it is also interesting to see hands-on how GPS works.
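To give an idea of the kind of processing involved, here is a bare-bones acquisition sketch: we generate the C/A code of a satellite and correlate it circularly against 1 ms of the recording at a trial Doppler. The tap table below only lists the first few PRNs, and the tools used in this post do all of this much more carefully, so take it just as an illustration:

```python
import numpy as np

# G2 output tap pairs per PRN (from the GPS interface specification);
# only the first few satellites are listed here
G2_TAPS = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9), 5: (1, 9)}

def ca_code(prn):
    """1023-chip C/A code for the given PRN, mapped to +/-1."""
    g1, g2 = [1] * 10, [1] * 10
    t1, t2 = G2_TAPS[prn]
    out = np.empty(1023, dtype=int)
    for i in range(1023):
        out[i] = g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1]
        g1 = [g1[2] ^ g1[9]] + g1[:9]  # feedback taps 3, 10
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return 1 - 2 * out

def acquire(iq, fs, prn, doppler):
    """Magnitude of the circular correlation of 1 ms of IQ samples
    against the C/A code, after wiping off a trial Doppler."""
    n = round(fs * 1e-3)                           # one code period
    code = ca_code(prn)[np.arange(n) * 1023 // n]  # resample to fs
    x = iq[:n] * np.exp(-2j * np.pi * doppler * np.arange(n) / fs)
    return np.abs(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(code))))
```

A peak in the output of acquire() gives the code phase of that satellite within the recording, which is the starting point for measuring time.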

Demodulation of LTE PUCCH

In a previous post I showed how to demodulate the LTE physical uplink shared channel (PUSCH) by using a recording of my phone and some Python code. This is a continuation of that post. Here we will look at the physical uplink control channel (PUCCH) transmissions in that recording, and use a similar approach to demodulate them. All the work is done in a Jupyter notebook, which is linked at the end of the post.

The PUCCH carries control information from the UE to the eNodeB, such as scheduling requests, ACK/NACK for HARQ, and the CQI (channel quality indicator). A PUCCH transmission lasts for one subframe (1 ms) and typically occupies a single 12-subcarrier resource block in each of the two 0.5 ms slots in the subframe (there are more recently introduced PUCCH formats which use more subcarriers).

PUCCH transmissions are allocated to the edges of the uplink bandwidth, so as to leave the centre clear as a contiguous segment to be used for PUSCH. In its first slot, a PUCCH transmission uses some particular resource block. In its second slot, it uses the symmetric resource block with respect to the centre frequency, as the sketch below shows. This gives some frequency diversity to the transmissions.
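In other words, if the uplink has \(N_{RB}\) resource blocks and a PUCCH uses PRB \(n\) in its first slot, it uses PRB \(N_{RB} - 1 - n\) in its second slot. A trivial sketch, assuming the 50 resource blocks of a 10 MHz cell:

```python
def pucch_prb_second_slot(prb_first_slot, n_rb_ul=50):
    """Mirror-image PRB used in the second slot (50 PRBs in 10 MHz)."""
    return n_rb_ul - 1 - prb_first_slot

# a PUCCH at the bottom edge of the cell hops to the top edge
assert pucch_prb_second_slot(0) == 49
assert pucch_prb_second_slot(49) == 0
```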

The figure below shows a portion of the waterfall of the LTE uplink recording that we will be using (the link to the recording is included in the previous post). It corresponds to a 10 MHz-wide cell in the B20 band. The PUCCH transmissions are the narrow bursts. The wider, stronger bursts are PUSCH.

Waterfall of an LTE uplink showing some PUCCH and PUSCH transmissions

This illustrates that the PUCCH subframes are allocated to the edges of the cell, and how each subframe jumps to the symmetric resource block on its second slot.

Demodulation of the LTE uplink

I have been playing with some LTE recordings to brush up my knowledge, since it isn’t a protocol I’m very familiar with. I’m especially interested in understanding the structure and properties of all the pilot signals. Textbooks and documentation are great, but nothing beats getting your hands dirty with some IQ recordings to be sure you understand all the details.

To have something to work with, I have made some recordings of my phone by holding it near a USRP B205mini without an antenna. While recording, I was playing a YouTube video or browsing the web, in order to generate some traffic. A waterfall of one of the recordings can be seen below. In this post we will be looking at how to demodulate the highlighted section, which contains 7 ms of PUSCH (physical uplink shared channel) occupying 15 resource blocks, together with the corresponding DMRS (demodulation reference signal) symbols. The post assumes some familiarity with OFDM, but doesn’t require any previous knowledge of LTE, so it can be useful to people interested in a hands-on introduction to LTE.

Waterfall of LTE uplink signal (using inspectrum)
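As a rough idea of the first step in this kind of processing, here is how the 7 symbols of each 0.5 ms slot can be taken to the frequency domain (a minimal sketch, assuming a 15.36 Msps recording of the 10 MHz cell and normal cyclic prefix; the actual demodulation also needs to handle timing and frequency offsets):

```python
import numpy as np

FFT_SIZE = 1024                            # 10 MHz LTE at 15.36 Msps
CP_LENGTHS = [80, 72, 72, 72, 72, 72, 72]  # normal CP, in samples

def slot_to_symbols(x):
    """FFT the 7 SC-FDMA symbols of one 0.5 ms slot (7680 samples)."""
    symbols = []
    pos = 0
    for cp in CP_LENGTHS:
        pos += cp  # skip the cyclic prefix
        symbols.append(np.fft.fftshift(np.fft.fft(x[pos:pos + FFT_SIZE])))
        pos += FFT_SIZE
    return np.array(symbols)  # shape (7, 1024)
```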

GNU Radio 3.9 in Buildroot

Recently I’ve had to cross-compile GNU Radio for an ARM embedded system. I have decided to use Buildroot to build GNU Radio and its dependencies, since I’m fairly familiar with using Buildroot to generate embedded Linux images. Earlier this year, Jean-Michel Friedt and Gwenhaël Goavec-Merou presented at FOSDEM their work on adding a GNU Radio package to Buildroot, in a talk called “Never compile on the target!”.

Unfortunately, the version they used was GNU Radio 3.8, and the package hasn’t been updated to GNU Radio 3.9 yet. I wanted to use GNU Radio 3.9, so I decided to try to update the Buildroot package. After some assorted problems, I have managed to get GNU Radio 3.9 running on my ARM target. The fixes I’ve made are really horrible, so I’ve been quite tempted not to share my changes. I’ve finally decided to share this even though it’s far from perfect, because it might save someone from having to replicate this work, and because if anyone wants to do this properly and update the upstream package, this could be useful as a starting point.

Decoding Voyager 1

Today is the 44th anniversary of the launch of Voyager 1, so I want to celebrate by showing how to decode the Voyager 1 telemetry signal using GNU Radio and some Python. I will use a recording that was made back on 30 December 2015 with the Green Bank Telescope in the context of the Breakthrough Listen project. Most of the data from this project is open data and can be accessed through this portal.

In contrast to other posts about deep space probes in this blog, which are of a very specialized nature, I will try to keep this post accessible to a wider audience by giving more details about the basics. Those interested in learning more can refer to the workshop “Decoding Interplanetary Spacecraft” that I gave at GRCon 2020, and also take a look at other posts in this blog.

LDPC code design for my QO-100 narrowband modem

A couple of months ago I presented my work-in-progress design for a data modem intended to be used through the QO-100 NB transponder. The main design goal for this modem is to give the maximum data rate possible in a 2.7 kHz channel at 50 dB·Hz CN0. For the physical layer I settled on an RRC-filtered single-carrier modulation with 32APSK data symbols and an interleaved BPSK pilot sequence for synchronization. Simulation and over-the-air tests of this modulation showed good performance. The next step was designing an appropriate FEC.

Owing to the properties of the synchronization sequence, a natural size for the FEC codewords of this modem is 7595 bits (transmitted in 1519 data symbols). The modem uses a baudrate of 2570 baud, so at 50 dB·Hz CN0 the Es/N0 is 15.90 dB. In my previous post I considered using an LDPC code with a rate of 8/9 or 9/10 for the FEC, taking as a reference the target Es/N0 performance of the DVB-S2 MODCODs. After performing some simulations, it turns out that rate 9/10 is a bit too high with 7595-bit codewords (the DVB-S2 normal FECFRAMEs are 64800 bits long, which gives a lower LDPC decoding threshold). Therefore, I’ve settled on trying to design a good rate 8/9 FEC. At this rate, the Eb/N0 is 9.42 dB.
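These numbers are easy to check (1519 data symbols times 5 bits per 32APSK symbol gives the 7595-bit codewords):

```python
import math

cn0 = 50              # dB·Hz
baud = 2570
bits_per_symbol = 5   # 32APSK
code_rate = 8 / 9

esn0 = cn0 - 10 * math.log10(baud)                          # 15.90 dB
ebn0 = esn0 - 10 * math.log10(bits_per_symbol * code_rate)  # 9.42 dB
print(f'Es/N0 = {esn0:.2f} dB, Eb/N0 = {ebn0:.2f} dB')
```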