Decoding S-NET

S-NET is a swarm of 4 cubesats from TU Berlin. Their mission is to test SLink, an S-band transceiver for inter-satellite communications. They were launched on February 1 this year, and they use Amateur frequencies for their telemetry downlink on the 70cm band. Several weeks ago, Mike Rupprecht DK3WN drew my attention to these satellites. Since they use a rather particular coding, custom software would be needed to decode the telemetry, so I set out to add support for S-NET to gr-satellites.

After some really helpful communication with the S-NET team, in particular with Walter Frese, and some exchanges of ideas with Andrei Kopanchuk UZ7HO, who was also working to add an S-NET decoder to his soundmodem, I have finally added a basic S-NET decoder to gr-satellites.

From the very start, Walter Frese from the S-NET team has been very helpful and has provided Andy UZ7HO and me with all the information we needed. He even did a worked example for me on how to parse the packet header. This has made our lives a lot easier, since S-NET has some quirks with endianness and some bugs in the implementation of CRCs, and we probably wouldn't have succeeded without Walter's help. Many thanks to Walter and the rest of the S-NET team.

S-NET uses the CMX469 FFSK modem chip to transmit on 70cm. This chip is capable of generating 1k2, 2k4 or 4k8 AFSK, ready to be sent to an FM modulator. S-NET uses the 1k2 configuration, which uses tones at 1200 and 1800Hz, with the lower tone representing the bit 1 and the higher tone representing the bit 0. Note that the tone frequencies are different from the tones at 1200 and 2400Hz of the Bell 202 modem, used in 1k2 AFSK packet radio.
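These modulation parameters can be illustrated with a short Python sketch of an FFSK modulator using the S-NET tones (the sample rate and function are my own choices for illustration, not from the satellite or gr-satellites):

```python
import math

FS = 48000     # sample rate in Hz (arbitrary choice for this sketch)
BAUD = 1200    # the 1k2 configuration of the CMX469
TONES = {1: 1200.0, 0: 1800.0}  # bit 1 -> lower tone, bit 0 -> higher tone

def ffsk_modulate(bits):
    """Generate phase-continuous FFSK audio samples for a bit sequence."""
    samples = []
    phase = 0.0
    samples_per_bit = FS // BAUD
    for bit in bits:
        step = 2 * math.pi * TONES[bit] / FS
        for _ in range(samples_per_bit):
            samples.append(math.sin(phase))
            phase = (phase + step) % (2 * math.pi)
    return samples
```

As with the CMX469 output, audio produced this way would still need to be fed to an FM modulator.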

A packet sent by S-NET starts with 24 bits consisting of a repetition of the pattern 01. This is used for clock synchronization in the receiver. After this preamble, the 6 character callsign is transmitted in ASCII. As DK3WN shows, the callsigns for each of the satellites S-NET A, S-NET B, S-NET C and S-NET D are DP0TBB, DP0TBC, DP0TBD and DP0TBE respectively. After the callsign, the 32-bit syncword 0x20F3FA13 is sent to allow byte-synchronization in the receiver.

There is a quirk in the way that the callsign and syncword are transmitted: each byte is sent least-significant bit first. In this way, the syncword that is really sent over the air is 0x04CF5FC8. It is also a bit strange to send the syncword after the callsign, and this almost seems like an afterthought in the protocol. My gr-satellites decoder uses the syncword 0x04CF5FC8 to detect the start of a packet and ignores the callsign.
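The relationship between the two syncword values can be checked with a couple of lines of Python (just an illustration of the LSB-first convention, not decoder code):

```python
def reverse_bits(byte):
    """Reverse the bit order of one byte (LSB-first transmission)."""
    return int('{:08b}'.format(byte)[::-1], 2)

syncword = 0x20F3FA13
on_air = bytes(reverse_bits(b) for b in syncword.to_bytes(4, 'big'))
print(on_air.hex())  # 04cf5fc8
```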

After the syncword, the packet header (which is called LTU frame header) is sent. This header consists of 70 bits, but FEC and interleaving are used, so a total of 210 bits are transmitted for the header. For transmission, the 70-bit header is first encoded as 14 BCH(15,5,3) codewords. Then, the codewords are sent interleaved, so the order of transmission is as follows: the first bit of the first codeword, the first bit of the second codeword, …, the first bit of the 14th codeword, the second bit of the first codeword, etc.
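The interleaving can be described compactly in Python. These helpers work on lists of bits and are my own sketch, not the gr-satellites implementation:

```python
NCODEWORDS = 14   # BCH(15,5,3) codewords in the LTU frame header
NBITS = 15        # bits per codeword

def interleave(codewords):
    """Transmit order: bit j of every codeword, for j = 0..14."""
    return [codewords[i][j] for j in range(NBITS) for i in range(NCODEWORDS)]

def deinterleave(bits):
    """Recover the 14 codewords from the 210 received bits."""
    return [[bits[j * NCODEWORDS + i] for j in range(NBITS)]
            for i in range(NCODEWORDS)]
```

The receiver simply inverts the index computation to deinterleave.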

There is also an endianness quirk in how the 70-bit header is split into 14 chunks of 5 bits and systematically encoded into the BCH codewords. The last 5 bits of each BCH codeword store one 5-bit chunk of the header. However, the order in which these 5 bits are written is inverted (from right to left). In other words, the first bit of the header is stored in the last bit of the first BCH codeword, the second bit of the header is stored in the second-to-last bit of the first BCH codeword, and so on.
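Assuming the 14 codewords have already been BCH-decoded, undoing this quirk to recover the 70 header bits could look like this (a sketch of my own, not the gr-satellites code):

```python
def extract_header_bits(codewords):
    """Take the last 5 bits of each BCH(15,5,3) codeword and undo the
    right-to-left ordering quirk described in the text."""
    header = []
    for cw in codewords:
        chunk = cw[-5:]            # systematic part: last 5 bits
        header.extend(chunk[::-1])  # stored in reverse order
    return header
```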

The 70-bit header is divided into fields according to the following construct BitStruct, which is taken from the gr-satellites decoder.

LTUFrameHeader = BitStruct(
    'SrcId' / BitsInteger(7),
    'DstId' / BitsInteger(7),
    'FrCntTx' / BitsInteger(4),
    'FrCntRx' / BitsInteger(4),
    'SNR' / BitsInteger(4),
    'AiTypeSrc' / BitsInteger(4),
    'AiTypeDst' / BitsInteger(4),
    'DfcId' / BitsInteger(2),
    'Caller' / Flag,
    'Arq' / Flag,
    'PduTypeId' / Flag,
    'BchRq' / Flag,
    'Hailing' / Flag,
    'UdFl1' / Flag,
    'PduLength' / BitsInteger(10),
    'CRC13' / BitsInteger(13),
    'CRC5' / BitsInteger(5),
    Padding(2)
)

The last 2 bits of padding are actually not included in the header. They are only used because a BitStruct must have a length which is a multiple of 8 bits. The SrcId field identifies the spacecraft and transmitter: 0 is S-NET A transmitter 0, 1 is S-NET A transmitter 1, 2 is S-NET B transmitter 0, etc. The PduLength field indicates the length of the (uncoded) PDU (or payload of the packet) in bytes. The CRC13 is a CRC-13BBC of the (uncoded) PDU and the CRC5 is a CRC-5ITU of the 70-bit header.
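For example, decoding the SrcId numbering scheme is a one-liner (a hypothetical helper of mine, not part of gr-satellites):

```python
def srcid_to_name(srcid):
    """Map SrcId to satellite letter and transmitter number."""
    return 'S-NET {}, transmitter {}'.format('ABCD'[srcid // 2], srcid % 2)

print(srcid_to_name(0))  # S-NET A, transmitter 0
print(srcid_to_name(3))  # S-NET B, transmitter 1
```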

The CRC-5ITU is implemented in a peculiar way. The CRC is computed over the 65 bits comprising the header without the CRC5 field, followed by the sequence 1011011, which is used to pad the data to a multiple of 8 bits. Interestingly, the bytes are processed in reverse (from the last byte to the first byte), and within each byte the most significant bit is processed first. The CRC computation code is as follows.

crc = 0x1F
for bit in bits:
    crc <<= 1
    if (crc >> 5) != bit:
        crc ^= 0x15 # CRC polynomial
    crc &= 0x1F
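The byte reversal and MSB-first processing can be sketched as a function that assembles the bit sequence fed to this routine (my own helper; `header_bits` is the 65-bit header without the CRC5 field, MSB first):

```python
def crc5_bit_order(header_bits):
    """Assemble the bit stream fed to the CRC-5 routine."""
    assert len(header_bits) == 65
    padded = header_bits + [1, 0, 1, 1, 0, 1, 1]  # pad to 72 bits = 9 bytes
    data = bytes(sum(b << (7 - j) for j, b in enumerate(padded[i:i+8]))
                 for i in range(0, 72, 8))
    bits = []
    for byte in reversed(data):      # bytes processed from last to first
        for j in range(7, -1, -1):   # MSB first within each byte
            bits.append((byte >> j) & 1)
    return bits
```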

I’m not an expert on CRCs, but I don’t see how this code implements polynomial division, and it doesn’t resemble any of the usual algorithms I know for CRC computation. In particular, note that if bits consists of five ones followed by an arbitrary number of zeros, then the resulting CRC is always zero.
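This observation is easy to verify by wrapping the code above in a function:

```python
def crc5(bits):
    """The S-NET CRC-5 routine from above, made runnable as a function."""
    crc = 0x1F
    for bit in bits:
        crc <<= 1
        if (crc >> 5) != bit:
            crc ^= 0x15  # CRC polynomial
        crc &= 0x1F
    return crc

print(crc5([1] * 5 + [0] * 20))  # 0, as claimed
```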

There is also a bug in the way that the CRC is implemented in the satellite: input byte number 4 is overwritten with the contents of input byte number 3. The gr-satellites decoder mimics this bug to get the same result. The team has said that they will correct this bug in a future software update.

After the FEC-encoded and interleaved header is sent, the PDU is transmitted in blocks. Each block also uses FEC and interleaving, and consists of 16 codewords of 15 bits. The interleaving is done in the same way as for the header: first we send the first bit of each codeword, then the second bit of each codeword, and so on. The contents of each 15-bit codeword depend on the value of the AiTypeSrc field, which indicates the code used for the codewords:

  • 0. Uncoded. All the 15 bits represent data.
  • 1. BCH(15,11). The last 11 bits represent data.
  • 2. BCH(15, 7). The last 7 bits represent data.
  • 3. BCH(15, 5). The last 5 bits represent data.

In contrast to the header, the data is written in the usual way (from left to right) in the last bits of each codeword.

Note that since there are 16 codewords per block, each block transmits a whole number of data bytes. However, in the stream of bits that we obtain by concatenating all the last bits of each codeword (according to the value of AiTypeSrc), the bytes are stored in least-significant-bit-first format (another endianness quirk).
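Extracting the data bytes from one (already BCH-decoded and deinterleaved) block can then be sketched as follows (again a helper of my own, not the gr-satellites code):

```python
DATA_BITS = {0: 15, 1: 11, 2: 7, 3: 5}  # data bits per codeword vs AiTypeSrc

def block_to_bytes(codewords, ai_type_src):
    """Concatenate the data bits of a 16-codeword block and pack them
    into bytes, least-significant bit first."""
    k = DATA_BITS[ai_type_src]
    bits = [b for cw in codewords for b in cw[-k:]]  # data is in the last bits
    return bytes(sum(bit << j for j, bit in enumerate(bits[i:i+8]))
                 for i in range(0, len(bits), 8))
```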

The complete data is recovered by concatenating the data bytes extracted from each of the blocks. The data is padded with 0xDB bytes at the end to get a whole number of data blocks.

After extracting the data and dropping the 0xDB padding bytes, the CRC13 must be checked. The implementation of CRC13 is similar to the implementation of CRC5 above. However, there is a bug, so the actual code is equivalent to this:

crc = 0x1FFF
for bit in bits:
    crc <<= 1
    if crc & 0x2000 or bit: # BUG
        crc ^= 0x1CF5 # CRC polynomial
    crc &= 0x1FFF
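Wrapping this in a function with a switch between the buggy condition and what was presumably intended (a `!=` comparison, as in the CRC-5 code; the non-buggy branch is my reconstruction) shows that the results diverge already for a single input bit:

```python
def crc13(bits, buggy=True):
    """S-NET CRC-13 routine; buggy=False uses the presumably intended check."""
    crc = 0x1FFF
    for bit in bits:
        crc <<= 1
        top = crc >> 13  # equivalent to testing crc & 0x2000
        if (top or bit) if buggy else (top != bit):
            crc ^= 0x1CF5  # CRC polynomial
        crc &= 0x1FFF
    return crc

print(hex(crc13([1], buggy=True)))   # 0x30b
print(hex(crc13([1], buggy=False)))  # 0x1ffe
```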

Currently I haven’t implemented any additional processing for the PDU, but there are some details about its format on Mike’s webpage.

The only thing that is missing from the gr-satellites decoder right now is BCH codeword decoding, so the decoder is able to extract the PDU and check CRCs only provided there are no bit errors. I have a working implementation of a BCH decoder, which was sent to me by Walter Frese, but I’ve preferred to release the decoder now rather than wait until I have some spare time to integrate it.

I have decoded a sample recording of S-NET A that Mike sent me. The results are in this gist, and a brief excerpt of this recording can be found in satellite-recordings.


  1. Great job /article Dani. Thanks (also Mike and Andrey!) Hope to have Gr-satellites up and running coming weeks. Gracias and 73 Albert PD0OXW

      1. As far as I understand it, the code represents calculating a CRC with linear feedback shift registers (LFSR). There is a ton of material about this topic on the internet.

    1. Hi Johannes,
      The BUG doesn’t have to do with using crc & 0x2000 instead of crc >> 13. When used as truth values, these two are equivalent. The bug is the use of “or” instead of “!=” (compare with the CRC-5 code given above in the post).

      In other words, the on-board implementation of the S-NET CRC-13 is buggy, so the value that is sent over the air is not a CRC, but something different.

      This was discussed with the satellite team back in March 2018.

      1. Hi Daniel.
        yes I understand! It is just that the code that is presented in this blog post is missing the shifting – but the code in gr-satellites does use the shifting.

        Btw. I talked to Walter Frese and he said that it will not get fixed for S-Net but for SALSAT.

        1. Hi Johannes,

          The CRC-13 code in gr-satellites is different but formally equivalent to the code showed in this post. I don’t remember the reason for this, but probably the cause is that the gr-satellites code closely mirrors the on-board C code that Walter Frese shared with me, while the version in this post has been simplified (while maintaining formal equivalence).
