An annotated 5G SigMF recording

For quite some time I’ve been thinking about generating SigMF annotations in some of the Jupyter notebooks I have about signal analysis, such as those for LTE and 5G NR. The idea is that the information about the frequencies and timestamps of the packets, as well as their type and other metadata, is already obtained in the notebook, so it is not too difficult to generate SigMF annotations with this information. The main intention is educational: the annotated SigMF file provides a visual guide that helps to understand the signal structure, and it also serves as a summary of what kind of signal detection and analysis is done in the Jupyter notebook. The code also serves as an example of how to generate annotations.
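Since SigMF metadata is plain JSON, generating annotations from a notebook needs no special library. Here is a minimal sketch of what such code can look like; the detection list, sample rate, and frequencies are made-up values for illustration, and the field names are the standard `core:` annotation fields from the SigMF specification.

```python
import json

# Hypothetical detections from the notebook's signal analysis:
# (start sample, sample count, lower freq edge, upper freq edge, label)
detections = [
    (0, 4096, 3499.54e6, 3500.46e6, "SSB"),
    (15360, 2048, 3498.00e6, 3502.00e6, "PDCCH"),
]

annotations = [
    {
        "core:sample_start": start,
        "core:sample_count": count,
        "core:freq_lower_edge": f_lo,
        "core:freq_upper_edge": f_hi,
        "core:label": label,
    }
    for (start, count, f_lo, f_hi, label) in detections
]

meta = {
    "global": {
        "core:datatype": "cf32_le",
        "core:sample_rate": 23.04e6,
        "core:version": "1.0.0",
    },
    "captures": [{"core:sample_start": 0, "core:frequency": 3500e6}],
    "annotations": annotations,
}

# The metadata file simply sits next to the IQ data file
with open("recording.sigmf-meta", "w") as f:
    json.dump(meta, f, indent=4)
```

Note that the frequency edges are absolute RF frequencies (consistent with the capture's `core:frequency`), not baseband offsets.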

Another benefit of this idea is that it serves as a good test case for applications that display SigMF annotations. It shows what kinds of limitations the current tools have, and can also motivate new features. I’ve been toying with this idea for a while, but never wrote a blog post about it before. A year ago I sent a pull request to Inspectrum to display annotation comments as tooltips when the mouse hovers over the annotation. While doing some tests with an LTE recording I realized that a feature like this was necessary to display any kind of detailed information about a packet. Back then, Inspectrum was the only application that was reasonably good at displaying SigMF annotations in a waterfall. More recently, IQEngine has appeared as another good tool to display SigMF annotations (and also to add them manually).

I have now updated the Jupyter notebook that I used to process a 5G NR downlink recording made by Benjamin Menkuec. This recording is a much better example of what I have in mind than the LTE recordings I was playing with before. It is quite short (so the file is small), and I already have code to detect all the “packets”, although I have not been able to identify what kind of signals some of them are.

ssdv-fec: an erasure FEC for SSDV implemented in Rust

Back in May I proposed an erasure FEC scheme for SSDV. The SSDV protocol is used in amateur radio to transmit JPEG files split in packets, in such a way that losing some packets only causes the loss of pieces of the image, instead of a completely corrupted file. My erasure FEC augments the usual SSDV packets with additional FEC packets. Any set of \(k\) received packets is sufficient to recover the full image, where \(k\) is the number of packets in the original image. An almost limitless amount of distinct FEC packets can be generated on the fly as required.
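The “any \(k\) of \(n\)” property can be illustrated with a small toy model based on polynomial evaluation and Lagrange interpolation. This is only a sketch of the principle: it works over the prime field GF(257) for simplicity of arithmetic, whereas the real scheme works over GF(256), and the function names here are made up for illustration.

```python
P = 257  # toy prime field; the actual scheme uses GF(2^8)

def lagrange_eval(points, x):
    """Evaluate at x the unique degree < k polynomial passing through the
    given k points (xi, yi), with all arithmetic done mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def make_packet(source_packets, packet_id):
    """Packet with id x carries, for each symbol position m, the evaluation
    at x of the polynomial through (0, d_0[m]), ..., (k-1, d_{k-1}[m]).
    Ids < k reproduce the source packets; ids >= k are FEC packets."""
    k = len(source_packets)
    return [lagrange_eval([(i, source_packets[i][m]) for i in range(k)],
                          packet_id)
            for m in range(len(source_packets[0]))]

def recover(received, k):
    """Recover the k source packets from any k received (id, symbols) pairs
    by interpolating back through the received evaluations."""
    length = len(received[0][1])
    return [[lagrange_eval([(pid, syms[m]) for pid, syms in received], i)
             for m in range(length)]
            for i in range(k)]
```

For example, with \(k = 3\) source packets we can generate FEC packets with ids 3 and 4, lose two of the source packets, and still recover the full image from any 3 of the packets that arrived.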

I have now written a Rust implementation of this erasure FEC scheme, which I have called ssdv-fec. This implementation has small microcontrollers in mind. It is no_std (it doesn’t use the Rust standard library nor libc), does not perform any dynamic memory allocations, and works in-place as much as possible to reduce the memory footprint. As an example use case of this implementation, it is bundled as a static library with a C-like API for ARM Cortex-M4 microcontrollers. This might be used in the AMSAT-DL ERMINAZ PocketQube mission, and it is suitable for other small satellites. There is also a simple CLI application to perform encoding and decoding on a PC.

Psyche telemetry

In my previous post I spoke about the recording of the telemetry signal from the Psyche spacecraft that I made just a few hours after launch with the Allen Telescope Array. In that post I analysed the physical aspects of the signal and the modulation and coding, but left the analysis of the telemetry frames for another post. That is the topic of this post. It will be a rather long and in-depth look at the telemetry, since I have managed to make sense of much of the structure of the data and there are several protocol layers to cover.

As a reminder from the previous post, the recording was around four hours long. During most of the first three hours, the spacecraft was slowly rotating around one of its axes, so the signal was only visible when the low-gain antenna pointed towards Earth. It was transmitting a low-rate 2 kbps signal. At some point it stopped rotating and switched to a higher rate 61.1 kbps signal. We will see important changes in the telemetry when this switch happens. Even though the high-rate signal spans only one quarter of the recording by duration, due to its roughly 30x higher data rate it accounts for most of the received telemetry by size.

Receiving the Psyche launch

On Friday, the Psyche mission launched on a Falcon Heavy from Cape Canaveral. This mission will study the metal-rich asteroid of the same name, 16 Psyche. For more details about this mission you can refer to the talk that Lindy Elkins-Tanton, the mission principal investigator, gave a month ago at GRCon23.

The launch trajectory was such that the spacecraft could be observed from the Allen Telescope Array shortly after launch. The launch was at 14:19 UTC. Spacecraft separation was at 15:21 UTC. The spacecraft then rose above the ATA 16.8 degree elevation mask in the western sky at 15:53 UTC. However, the signal was so strong that it could be received even when the spacecraft was a couple degrees below the elevation mask, so I confirmed the presence of the signal and started recording a couple minutes earlier. At this moment, the spacecraft was at a distance of 18450 km. The spacecraft continued to rise in the sky, achieving a maximum elevation of 32.9 degrees at 16:53 UTC, and setting below the elevation mask on the west at 19:22 UTC. At this moment the spacecraft was 103800 km away. The signal could still be received for a few minutes afterwards, but eventually became very weak and I stopped recording.

Since the recording started only 30 minutes after spacecraft separation, we get to see some of the events that happen very early on in the mission. Most of the observations of deep space launches that I have done with the ATA have started several hours after launch. This Twitter thread by Lindy Elkins-Tanton gives some insight about the first steps following spacecraft separation, and I will be referring to it to explain what we see in the recording.

I intend to publish the recordings in Zenodo as usual, but the platform has been upgraded recently and is showing the following message “Oct 14 12:03 UTC: We are working to resolve issues reported by users.” So far I have been unable to upload large files, but I will keep retrying and update this post when I manage.

Update 2023-10-19: Zenodo have now solved their problems and I have been able to upload the recordings. They are published in the following datasets:

BSRC REU GNU Radio tutorial recordings

Since 2021 I have been collaborating with the Berkeley SETI Research Center Breakthrough Listen Summer Undergraduate Research Experience program by giving some GNU Radio tutorials. This year, the tutorials have been recorded and they are now available in the BSRC Tech YouTube channel (actually they have been there since the end of August, but I have only just realized).

These tutorials are intended as an introduction to GNU Radio and SDR in general, focusing on topics and techniques that are related or applicable to SETI and radio astronomy. They don’t assume much previous background, so they can also be useful for GNU Radio beginners outside of SETI. Although each tutorial builds on concepts introduced in previous tutorials, their topics are reasonably independent, so if you have some background in SDR you can watch them in any order.

All the GNU Radio flowgraphs and other materials that I used are available in the daniestevez/reu-2023 Github repository. Below is a short summary of each of the tutorials.

Observing OSIRIS-REx during the capsule reentry

On September 24, the OSIRIS-REx sample return capsule landed in the Utah Test and Training Range at 14:52 UTC. The capsule had been released on a reentry trajectory by the spacecraft a few hours earlier, at 10:42 UTC. The spacecraft then performed an evasion manoeuvre at 11:02 and passed by Earth on a hyperbolic orbit with a perigee altitude of 773 km. The spacecraft has now continued to a second mission to study asteroid Apophis, and has been renamed as OSIRIS-APEX.

This simulation I did in GMAT shows the trajectories of the spacecraft (red) and sample return capsule (yellow).

Trajectory of OSIRIS-REx and sample return capsule

Since the Allen Telescope Array (ATA) is in northern California, its location provided a great opportunity to observe this event. Looking at the trajectories in NASA HORIZONS, I saw that the sample return capsule would pass south of the ATA. It would be above the horizon between 14:34 and 14:43 UTC, but it would be very low in the sky, only reaching a peak elevation of 17 degrees. Apparently the capsule had some kind of UHF locator beacon, but I had no information about whether this would be on during the descent (during the sample return livestream I then learned that the main method of tracking the capsule descent was optically, from airplanes and helicopters). Furthermore, the ATA antennas can only point as low as 16.8 degrees, so it wasn’t really possible to track the capsule. Therefore, I decided to observe the spacecraft X-band beacon instead. The spacecraft would also pass south of the ATA, but would be much higher in the sky, reaching an elevation above 80 degrees. The closest approach would be only 1000 km, which is pretty close for a deep space satellite flyby.

As I will explain below in more detail, I prepared a custom tracking file for the ATA using the SPICE kernels from NAIF and recorded the full X-band deep space band at 61.44 Msps using two antennas. The signal from OSIRIS-REx was extremely strong, so this recording can serve for detailed modulation analysis. To reduce the file size to something manageable, I have decimated the recording to 2.048 Msps centred around 8445.8 MHz, where the X-band downlink of OSIRIS-REx is located, and published these files in the Zenodo dataset “Recording of OSIRIS-REx with the Allen Telescope Array during SRC reentry“.
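The shift-and-decimate step from 61.44 Msps down to 2.048 Msps (a factor of 30) can be sketched as follows. This is just an illustration of the technique, not the actual processing code: the frequency offset of the OSIRIS-REx downlink within the wideband recording depends on the recording's centre frequency, so the `f_offset` used in the example is hypothetical.

```python
import numpy as np
from scipy.signal import resample_poly

def shift_and_decimate(x, f_offset, fs_in=61.44e6, decim=30):
    """Mix the band of interest down to baseband and decimate by 30
    (61.44 Msps / 30 = 2.048 Msps). f_offset is the offset in Hz of the
    target centre frequency within the wideband recording."""
    n = np.arange(len(x))
    shifted = x * np.exp(-2j * np.pi * f_offset / fs_in * n)
    # resample_poly applies an anti-aliasing FIR filter before decimating
    return resample_poly(shifted, up=1, down=decim)
```

A tone at `f_offset` in the input ends up at DC in the decimated output, which is an easy way to sanity-check the sign conventions.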

In the rest of this post, I describe the observation setup, analyse the recording and spacecraft telemetry, and describe some possible further work.

Update on the Galileo GST-UTC anomaly

At the beginning of September I wrote about an ongoing anomaly in the offset between the GST (the Galileo System Time) and the UTC timescales. This short post is an update on this problem, with new data and plots.

For this post I have only used final solutions for the CODE precise clock biases. I have replaced the rapid solutions that I used in the last post by their final versions. The data under analysis now spans from day of year 225 (2023-08-13) to day of year 266 (2023-09-23).

The plot of the difference between the broadcast clock biases and the CODE precise clock biases now looks as follows. Comparing with the previous post, we see that the difference stayed at around -15 ns until 2023-09-08, and then started increasing towards zero, but it has overshot and is at around +20 ns by 2023-09-23.
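The core of this comparison is simple: interpolate the precise clock product to the broadcast epochs and subtract. A minimal sketch, with hypothetical input arrays (the real analysis parses the RINEX navigation files and the CODE clock products, and has to deal with details such as the reference alignment between products, which are ignored here):

```python
import numpy as np

def broadcast_minus_precise(t_brdc, bias_brdc, t_precise, bias_precise):
    """Broadcast minus precise clock bias at the broadcast epochs, in ns.
    Inputs are times (in seconds) and clock biases (in seconds); the precise
    product (e.g. CODE 30 s clocks) is linearly interpolated to the
    broadcast epochs."""
    return (bias_brdc - np.interp(t_brdc, t_precise, bias_precise)) * 1e9
```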

The plot of the system time differences in the broadcast messages is also quite interesting. For this I am using the IGS BRDC data until day of year 275 (2023-10-03). Recall that it appeared that the GST-UTC drift had the wrong sign, because the offset was clearly increasing but the drift had negative sign. Now the situation looks more complicated. Though it appears that the drift has the wrong sign around 2023-08-28 and 2023-09-22, there is also a segment around 2023-09-08 where the sign looks correct. Additionally, around 2023-09-01 the drift should be close to zero, but it is not.

For comparison, here is the same plot with the sign of the GST-UTC drift flipped. Arguably, the drift gives a better match most of the time, but certainly not around 2023-09-08. Therefore, the problem with the modelling of the GST-UTC drift in the Galileo broadcast message looks more complicated than just the sign bit being wrong.

The updated Jupyter notebook and data is in this repository.


Decoding Aditya-L1

Aditya-L1 is an Indian solar observer satellite that was launched to the Sun-Earth Lagrange point L1 on September 2. It transmits telemetry in X-band at 8497.475 MHz. This post is going to be just a quick note. Lately I’ve been writing very detailed posts about deep space satellites and I’ve had to skip some missions such as Chandrayaan-3 because of lack of time. There is a very active community of amateur observers doing frequent updates, but the information usually gets scattered in Twitter and elsewhere, and it is hard to find after some months. I think I’m going to start writing shorter posts about some missions to collect the information and be able to find it later.

DVB-S2 GRCon23 CTF challenge

This year I submitted a challenge track based around a DVB-S2 signal to the GRCon CTF (see this post for the challenges I sent in 2022). The challenge description was the following.

I was scanning some Astra satellites and found this interesting signal. I tried to receive it with my MiniTiouner, but it didn’t work great. Maybe you can do better?

Note: This challenge has multiple parts, which can be solved in any order. The flag format is flag{...}.

A single SigMF recording with a DVB-S2 signal sampled at 2 Msps at a carrier frequency of 11.723 GHz was given for the 10 flags of the track. The description and frequency were just some irrelevant fake backstory to introduce the challenge, although some people tried to look online for transponders on Astra satellites at this frequency.

The challenge was inspired by last year’s NTSC challenge by Clayton Smith (argilo), which I described in my last year’s post. This DVB-S2 challenge was intended as a challenge with many flags that can be solved in any order and that try to showcase many different places where one might sensibly put a flag in a DVB-S2 signal (by sensibly I mean a place that is actually intended to transmit some information; surely it is also possible to inject flags into headers, padding, and the likes, but I didn’t want to do that). In this sense, the DVB-S2 challenge was a spiritual successor to the NTSC challenge, using a digital and modern TV signal.

Another source of motivation was one of last year’s Dune challenges by muad’dib, which used a DVB-S2 Blockstream Satellite signal. In that case demodulating the signal was mostly straightforward, and the fun began once you had the transport stream. In my challenge I wanted to have participants deal with the structure of a DVB-S2 signal and maybe need to pull out the ETSI documentation as a help.

I wanted to have flags of progressively increasing difficulty, keeping some of the flags relatively easy and accessible to most people (in any case it’s a DVB-S2 signal, so at least you need to know which software tools can be used to decode it and take some time to set them up correctly). This makes a lot of sense for a challenge with many flags, and in hindsight most of the challenges I sent last year were quite difficult, so I wanted to have several easier flags this time. I think I managed well in this respect. The challenge had 10 flags, numbered more or less in order of increasing difficulty. Flags #1 through #8 were solved by between 7 and 11 teams, except for flag #4, which was somewhat more difficult and only got 6 solves. Flags #9 and #10 were significantly more difficult than the rest. Vlad, from the Caliola Engineering LLC team, was the only person who managed to get flag #10, using a couple of hints I gave, and no one solved flag #9.

When I started designing the challenge, I knew I wanted to use most of the DVB-S2 signal bandwidth to send a regular transport stream video. There are plenty of ways of putting flags in video (a steady image, single frames, audio tracks, subtitles…), so this would be close to the way that people normally deal with DVB-S2, and give opportunities for many easier flags. I also wanted to put in some GSE with IP packets to show that DVB-S2 can also be used to transmit network traffic instead of or in addition to video. Finally, I wanted to use some of the more fancy features of DVB-S2 such as adaptive modulation and coding for the harder flags.

To ensure that the DVB-S2 signal that I prepared was not too hard to decode (since I was putting more things in addition to a regular transport stream), I kept constantly testing my signal during the design. I mostly used a MiniTiouner, since perhaps some people would use hardware decoders. I heard that some people did last year’s NTSC challenge by playing back the signal into their hotel TVs with an SDR, and for DVB-S2 maybe the same would be possible. Hardware decoders tend to be more picky and less flexible. I also tested gr-dvbs2rx and leandvb to have an idea of how the challenge could be solved with the software tools that were more likely to be used.

What I found, as I will explain below in more detail, is that the initial way in which I intended to construct the signal (which was actually the proper way) was unfeasible for the challenge, because the MiniTiouner would completely choke on it. I also found important limitations in the software decoders. Basically, they would do some weird things when the signal was not 100% a regular transport stream video, because other cases were never considered or tested too thoroughly. Still, the problems didn’t get too much in the way of getting the video with the easier flags, and I found clever ways of working around the limitations of gr-dvbs2rx and leandvb to get the more difficult flags, so I decided that this was appropriate for the challenge.

In the rest of this post I explain how I put together the signal, the design choices I made, and sketch some possible ways to solve it.

Galileo GST-UTC anomaly

Each GNSS system uses its own timescale for all its calculations. What this means is slightly difficult to explain, since anything involving “paper clocks” is conceptually tricky. In essence, the system, as part of the orbit and clock determination, maintains a timescale (a paper clock) to which the ephemerides are referenced. This timescale is usually related to some atomic clocks in the countries that manage the GNSS system, and these clocks often participate in the realization of UTC, so in the end the GNSS timescale is traceable to UTC. For instance, GPS time is related to UTC(USNO), and GST, the Galileo system time, is related to a few atomic clocks in Europe that are realizations of UTC.

Since all the information for a single GNSS system is self-consistent with respect to the timescale it uses, a receiver using a single system to compute position and velocity does not really care about the GNSS timescale. Everything will just work. However, when the receiver needs to compute UTC time accurately (for instance to produce a 1PPS output that is aligned to UTC) or to mix measurements from several GNSS systems, then it needs to handle the different GNSS timescales involved, and transfer all the measurements to a common timescale (and in the case of measuring UTC time or producing a 1PPS output, relating this timescale to UTC).

As part of their broadcast ephemeris, GNSS systems broadcast an estimate of the offset between the system timescale and UTC (this estimate is just a forecast; the true value of UTC is only known a posteriori, when the BIPM publishes its monthly Circular T). Additionally, some systems also broadcast the offset between their timescale and the timescale of another system. For example, Galileo broadcasts the offset GST-GPST, the difference between Galileo system time and GPS time. This is intended as a help to receivers that want to combine measurements from both systems. However, these values are somewhat inaccurate, so it is common for receivers to ignore them, and determine the offsets on their own (this is done by adding extra variables called intersystem biases to the system of equations solved to obtain the navigation solution). This trick only works for combining measurements of different systems. In the case of UTC, receivers do not have any way of directly measuring UTC, so they must trust the broadcast message. Alternatively, some receivers offer an option to align their 1PPS output to a GNSS timescale such as GPST.

These days, most GNSS systems, including GPS and Galileo, usually keep their timescales within a few nanoseconds of UTC. According to the specifications of each system, the relation to UTC is much looser. GPS time promises to be synchronized to UTC(USNO) with a maximum error of 1 microsecond. GLONASS time only promises 1 millisecond synchronization to UTC. GST is supposed to be within 50 ns of UTC, and Beidou time within 100 ns. However, if timekeeping is done correctly, there is no reason why the GNSS timescale should drift from UTC by more than a couple nanoseconds (which is the typical drift of physical realizations of UTC). Receivers should assume that the offsets can be as large as the maximum errors in the specification, but they might occasionally freak out if they see a large offset, either because of bugs (if this case was never tested, as in practice the offsets are typically small) or because they suspect an error in their own measurements.

A few days ago, some people in the time-nuts mailing list, and Bert Hubert from galmon.eu, noticed that GST-UTC, the difference between Galileo system time and UTC, had increased to 17 ns and remained around this value during the end of August. This is highly unusual, because GST-UTC is usually smaller than 2 or 3 ns.

galmon.eu provides a public Grafana dashboard where this anomaly can be seen, though the dashboard (or the web browser) tends to choke if we try to plot more than one week’s worth of data.

Broadcast GST-UTC in the galmon.eu Grafana dashboard

When hearing about this on Friday, I got interested and commented on some ways in which this could be researched further. I also wondered: is the GST-UTC offset actually large, or is it just that the offset estimate in the navigation message is wrong? In this post I explore one of my ideas.
