Maximum Likelihood Decoding of Linear Codes on Binary-Input Memoryless Channels
Theory
As described in the theory of the previous experiment, a memoryless channel is described by the input alphabet $\mathcal{X}$ (the set of possible values taken by the input random variable $X$), the output alphabet $\mathcal{Y}$ (the set of possible values taken by the output random variable $Y$), and the transition probabilities $P(y\,|\,x)$, for $x \in \mathcal{X}$ and $y \in \mathcal{Y}$.
Consider that we use the channel $n$ times, i.e., our input sequence is a vector $\underline{x} = (x_1, \ldots, x_n) \in \mathcal{X}^n$ (recall that $\mathcal{X}^n$ denotes the set of all $n$-length tuples with entries from $\mathcal{X}$). Because of the noisy characteristics of the channel (governed by the transition probabilities $P(y\,|\,x)$), the transmitted input vector $\underline{x}$ is transformed into a random output vector $\underline{y} = (y_1, \ldots, y_n) \in \mathcal{Y}^n$, governed by the transition probabilities $P(\underline{y}\,|\,\underline{x}) = \prod_{i=1}^{n} P(y_i\,|\,x_i)$, since the channel acts on each transmitted symbol independently.
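The product form of this sequence likelihood is easy to compute directly. The sketch below is a minimal illustration of that factorization; the function names and the toy binary symmetric channel used as the per-symbol transition probability are our own placeholders, not part of the lab's code.

```python
import math

def sequence_likelihood(x, y, transition_prob):
    """P(y | x) for a memoryless channel: the product of the per-symbol
    transition probabilities, one factor per channel use."""
    return math.prod(transition_prob(yi, xi) for xi, yi in zip(x, y))

# Illustration with a binary symmetric channel of crossover probability 0.1.
p = 0.1
bsc = lambda yi, xi: (1 - p) if yi == xi else p
print(sequence_likelihood([0, 1, 1, 0], [0, 1, 0, 0], bsc))  # equals (1-p)^3 * p
```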
The receiver observes the output sequence $\underline{y}$. The goal of the receiver is then to decode the transmitted vector $\underline{x}$. The estimate of the transmitted vector is denoted by $\hat{\underline{x}}$. However, because the channel noise is random, several possible vectors in $\mathcal{X}^n$ could result in the same received vector $\underline{y}$. Therefore, the decoder at the receiver must have some metric based on which it can decide the value of the estimate $\hat{\underline{x}}$.
Specifically, in this virtual lab we consider the likelihood as the decoding metric. That is, the decoder seeks to find that $\underline{x} \in \mathcal{X}^n$ which maximizes the probability $P(\underline{y}\,|\,\underline{x})$. The probability $P(\underline{y}\,|\,\underline{x})$ is called the likelihood of $\underline{x}$ with respect to the received vector $\underline{y}$.
However, the problem is that there could be several candidates for such an $\underline{x}$ which maximize the likelihood. This problem can be mitigated by choosing the transmit sequences from a code. Specifically, we are interested in choosing $n$-length linear codes for binary-input channels.
Using Linear Codes on Memoryless Binary-Input Channels
As defined before, an $n$-length linear code $\mathcal{C}$ over $\mathbb{F}_2$ of rate $k/n$ is a $k$-dimensional subspace of $\mathbb{F}_2^n$.
The ML Decoding Rule for Linear Codes
Assuming that the receiver receives the vector $\underline{y}$, the Maximum Likelihood decoding rule (called the ML decoding rule) when using such a code $\mathcal{C}$ on a binary-input channel is written as follows:
$$\hat{\underline{c}} = \arg\max_{\underline{c} \in \mathcal{C}} P(\underline{y}\,|\,\underline{c}).$$
In case there are multiple codewords in $\mathcal{C}$ which maximize the likelihood, we will assume that the decoder breaks such ties arbitrarily, i.e., the decoder declares any one of these codewords (that maximize the likelihood) as the estimate. This estimate $\hat{\underline{c}}$ is called the ML estimate for the transmitted codeword $\underline{c}$.
The figure shows a depiction of using a linear code with such a decoder.
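As a rough illustration of the ML rule stated above, the following sketch performs a brute-force search over a codebook. The function and argument names are placeholders of our own and not part of any particular library.

```python
def ml_decode(received, codebook, likelihood):
    """Brute-force ML decoding: return a codeword maximizing likelihood(received, c).
    max() keeps the first maximizer it encounters, which is one arbitrary way of
    breaking ties between equally likely codewords, as described above."""
    return max(codebook, key=lambda c: likelihood(received, c))
```

Such an exhaustive search evaluates every codeword, so it is only practical for very small codes such as the four-codeword example that follows.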
We now explore, via an example, how ML decoding works for linear codes on the three channel models we consider in this work. Throughout the three channels, we consider the code $\mathcal{C}$ which is the rowspace of a $2 \times n$ generator matrix $G$. This code contains $2^2 = 4$ codewords, which are all the $\mathbb{F}_2$-linear combinations of the two rows of $G$. Thus we have $\mathcal{C} = \{\underline{0},\ \underline{g}_1,\ \underline{g}_2,\ \underline{g}_1 + \underline{g}_2\}$, where $\underline{g}_1, \underline{g}_2$ denote the two rows of $G$.
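The generator matrix used in the lab's example is not reproduced here, so the sketch below uses a hypothetical 2 x 5 binary matrix purely to show how the four codewords arise as $\mathbb{F}_2$-linear combinations of the rows; any two linearly independent binary rows would do.

```python
import itertools
import numpy as np

# Hypothetical generator matrix with two linearly independent rows over F2.
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])

# Every message m = (m1, m2) in {0,1}^2 gives the codeword m G (mod 2),
# so the code has 2^2 = 4 codewords.
codebook = [tuple(int(b) for b in np.mod(np.array(m) @ G, 2))
            for m in itertools.product([0, 1], repeat=G.shape[0])]
print(codebook)
# [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (1, 1, 1, 0, 1)]
```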
We now see how the ML decoder decodes, when a particular codeword $\underline{c} \in \mathcal{C}$ is transmitted, in each of the three channels we consider in this virtual lab.
1. Binary Erasure Channel (BEC):
Assume that the received vector $\underline{y}$ has some erased positions but agrees with the transmitted codeword $\underline{c}$ in every unerased position. In this case, we see that $P(\underline{y}\,|\,\underline{c}') = 0$ for any codeword $\underline{c}'$ that is not compatible with $\underline{y}$, i.e., that disagrees with $\underline{y}$ in some unerased position. At the same time, we have $P(\underline{y}\,|\,\underline{c}) > 0$, since $\underline{c}$ and $\underline{y}$ agree in every unerased position. If $\underline{c}$ is the only codeword compatible with $\underline{y}$, clearly we see that it alone maximizes the likelihood. Thus, the ML decoder outputs the estimate $\hat{\underline{c}}$ as the codeword $\underline{c}$, i.e., the ML decoder decodes the transmitted codeword correctly.
It is easy to present a scenario in which the decoding can be incorrect. Consider a received vector $\underline{y}$ with three erasures, such that two distinct codewords of $\mathcal{C}$ agree with $\underline{y}$ in all the unerased positions. In this case, we see that both of these codewords have the same positive likelihood, while every other codeword has likelihood zero. Therefore, the decoder can declare either of the two compatible codewords as the estimate. In case the decoder chooses the codeword that was not transmitted, clearly the decoder will be making a decoding error.
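A sketch of BEC ML decoding along these lines, reusing the hypothetical four-codeword code from the earlier sketch and representing an erasure by None; the erasure probability and the received vector are made up purely for illustration.

```python
# The four codewords of the hypothetical [5, 2] code from the earlier sketch.
codebook = [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 1, 1, 0, 1)]

def bec_likelihood(received, codeword, eps):
    """P(received | codeword) on a BEC: zero if the codeword disagrees with any
    unerased position, otherwise eps^(#erasures) * (1-eps)^(#unerased positions)."""
    if any(r is not None and r != c for r, c in zip(received, codeword)):
        return 0.0
    erased = sum(r is None for r in received)
    return eps ** erased * (1 - eps) ** (len(received) - erased)

eps = 0.2
received = (1, 0, None, 1, 0)   # codeword (1, 0, 1, 1, 0) with its third bit erased
best = max(codebook, key=lambda c: bec_likelihood(received, c, eps))
print(best)  # (1, 0, 1, 1, 0): only one codeword is compatible, so decoding is correct
```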
2. Binary Symmetric Channel (BSC):
Recollect, from the previous experiment, that in the channel $\mathrm{BSC}(p)$, for any $\underline{x}, \underline{y} \in \{0,1\}^n$, we have the following:
$$P(\underline{y}\,|\,\underline{x}) = p^{\,d_H(\underline{x},\underline{y})}\,(1-p)^{\,n - d_H(\underline{x},\underline{y})},$$
where $d_H(\underline{x},\underline{y})$ denotes the Hamming distance between $\underline{x}$ and $\underline{y}$ (the number of positions where $\underline{x}$ and $\underline{y}$ have distinct values). We assume that $p < 1/2$.
Assume that some vector $\underline{y}$ is received. With respect to the four codewords in $\mathcal{C}$, we can compute the likelihood $P(\underline{y}\,|\,\underline{c})$ for each $\underline{c} \in \mathcal{C}$. Since $p < 1/2$, the likelihood $p^{\,d_H(\underline{c},\underline{y})}(1-p)^{\,n - d_H(\underline{c},\underline{y})}$ is highest for the codeword with the smallest Hamming distance to $\underline{y}$. Thus, we can say that the ML estimate $\hat{\underline{c}}$ is the codeword of $\mathcal{C}$ closest to $\underline{y}$ in Hamming distance. Note that the decoder indeed makes no error, in case the true transmitted codeword was this closest codeword.
Now, we explain a scenario where the decoder's estimate involves choosing one among many possible codewords, each of which has the same likelihood value. For instance, assume that the received word $\underline{y}$ is at the same (and smallest) Hamming distance from two distinct codewords of $\mathcal{C}$. In this case, we find that both of these codewords can be valid ML estimates for the true transmitted codeword, since both of them have the same (and largest) likelihood among all codewords. Observe that, in this case, if the transmitted codeword was one of these two and the decoder declared the other as the estimate, then we have a decoding error.
From the above observations, it is clear that, as the number of bit flips introduced by the channel when a codeword is transmitted increases, the probability of decoding error increases.
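The following sketch mirrors the BSC discussion above: since the likelihood is $p^{d}(1-p)^{n-d}$ with $p < 1/2$, maximizing it is the same as minimizing the Hamming distance $d$. The codebook, crossover probability, and received word are again illustrative stand-ins.

```python
# The same hypothetical [5, 2] codebook as in the earlier sketches.
codebook = [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 1, 1, 0, 1)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def bsc_likelihood(received, codeword, p):
    """P(received | codeword) on a BSC(p): p^d * (1-p)^(n-d), d = Hamming distance."""
    d = hamming(received, codeword)
    return p ** d * (1 - p) ** (len(received) - d)

p = 0.1                      # crossover probability, assumed to be below 1/2
received = (1, 0, 1, 0, 0)   # codeword (1, 0, 1, 1, 0) with one bit flipped
most_likely = max(codebook, key=lambda c: bsc_likelihood(received, c, p))
closest = min(codebook, key=lambda c: hamming(received, c))
print(most_likely, closest)  # both give (1, 0, 1, 1, 0)
```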
3. AWGN Channel with Noise Variance $\sigma^2$:
Recall that, for the memoryless AWGN channel with noise variance $\sigma^2$, for any two vectors $\underline{x}, \underline{y} \in \mathbb{R}^n$, the probability (density) of receiving the vector $\underline{y}$ given that the vector $\underline{x}$ was transmitted through the channel is given by
$$P(\underline{y}\,|\,\underline{x}) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - x_i)^2}{2\sigma^2}\right).$$
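A direct transcription of this density into code, assuming i.i.d. Gaussian noise of variance sigma^2 on each channel use; the function name is our own placeholder.

```python
import math

def awgn_likelihood(received, transmitted, sigma):
    """Gaussian likelihood (density) of a real-valued received vector, given the
    transmitted real-valued vector, for i.i.d. N(0, sigma^2) noise on each symbol."""
    return math.prod(
        math.exp(-(y - x) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
        for y, x in zip(received, transmitted)
    )
```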
Now, what does it mean to use a binary code on this channel? Observe that the channel really accepts only real-numbered input symbols; however, we have binary digits (bits) in the codeword. Therefore, we need a modulation scheme which maps the bits to real-numbered symbols that we can transmit.
The simplest modulation scheme when we use a binary code (for example, the code $\mathcal{C}$ above) on the AWGN channel is to map the binary symbols $0$ and $1$ to $+1$ and $-1$, respectively. This essentially translates to using a BPSK (binary phase shift keying) modulation scheme on the AWGN channel to transmit the bits of the codeword. The fact that we assume bit $0$ is mapped to $+1$ and bit $1$ is mapped to $-1$ (both of unit energy) translates to using an SNR (signal-to-noise ratio) per codeword-bit of $1/\sigma^2$. To mathematically model a larger SNR value, we could then assume that the channel has a smaller noise power $\sigma^2$, without having to change the mapping of the BPSK modulation itself. Under this modulation scheme, the code $\mathcal{C}$ is effectively written with every bit of every codeword replaced by the corresponding $\pm 1$ symbol. We call this the bipolar representation of the code $\mathcal{C}$.
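Under the mapping assumed above (bit $b$ to $(-1)^b$, i.e., $0$ to $+1$ and $1$ to $-1$), the bipolar representation of the hypothetical codebook from the earlier sketches is obtained as follows.

```python
codebook = [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 1, 1, 0, 1)]

def to_bipolar(codeword):
    """BPSK mapping b -> (-1)^b: bit 0 becomes +1 and bit 1 becomes -1."""
    return tuple(1 - 2 * b for b in codeword)

bipolar_codebook = [to_bipolar(c) for c in codebook]
print(bipolar_codebook[1])  # (-1, 1, -1, -1, 1)
```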
Now, with the code written in its bipolar representation as above, let us understand the ML decoding when the received vector is $\underline{y} \in \mathbb{R}^n$. In this case, observe that we have the following equations, which simplify the process of finding the ML estimate:
$$\hat{\underline{c}} = \arg\max_{\underline{c} \in \mathcal{C}} P(\underline{y}\,|\,\underline{c}) = \arg\max_{\underline{c} \in \mathcal{C}} \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - c_i)^2}{2\sigma^2}\right) = \arg\min_{\underline{c} \in \mathcal{C}} \sum_{i=1}^{n} (y_i - c_i)^2 = \arg\min_{\underline{c} \in \mathcal{C}} \left( \|\underline{y}\|^2 - 2\langle \underline{y}, \underline{c}\rangle + \|\underline{c}\|^2 \right),$$
where $\langle \underline{y}, \underline{c}\rangle$ is just the scalar product (also known as the correlation) between the vectors $\underline{y}$ and $\underline{c}$, which is precisely $\sum_{i=1}^{n} y_i c_i$. Note that $n$ is the blocklength of the code $\mathcal{C}$ in our example. Now, the minimization above is with respect to all $\underline{c} \in \mathcal{C}$. Observe that the value of $\|\underline{y}\|^2$ does not change in this minimization. Further, for any $\underline{c} \in \mathcal{C}$ in bipolar representation, it is easy to see that $\|\underline{c}\|^2 = n$ (the blocklength of our example code), and it is therefore constant too. Therefore, the minimization above can be re-written as
$$\hat{\underline{c}} = \arg\max_{\underline{c} \in \mathcal{C}} \langle \underline{y}, \underline{c}\rangle.$$
Thus, the ML estimate for the true transmitted codeword is precisely that codeword which has the largest scalar product, or correlation, with the received vector $\underline{y}$.
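A quick numerical check of the simplification above: because every bipolar codeword has the same squared norm $n$, minimizing the squared Euclidean distance picks the same codeword as maximizing the correlation. The bipolar codebook is the hypothetical one from the earlier sketches, and the received vector is an arbitrary illustrative value.

```python
import numpy as np

bipolar_codebook = [np.array(c) for c in
                    [(+1, +1, +1, +1, +1), (-1, +1, -1, -1, +1),
                     (+1, -1, +1, -1, -1), (-1, -1, -1, +1, -1)]]
y = np.array([-0.8, 1.2, -0.3, -1.1, 0.9])   # an arbitrary received vector

by_distance = min(bipolar_codebook, key=lambda c: float(np.sum((y - c) ** 2)))
by_correlation = max(bipolar_codebook, key=lambda c: float(np.dot(y, c)))
print(np.array_equal(by_distance, by_correlation))  # True: the two criteria agree
```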
We finally apply this observation to the example at hand. Given the received vector $\underline{y}$, to find the ML estimate on the AWGN channel, we compute the correlation $\langle \underline{y}, \underline{c}\rangle$ for each codeword $\underline{c}$ of $\mathcal{C}$ (in its bipolar representation) and find which one is the largest. We urge the reader to carry out these calculations for the code $\mathcal{C}$. The ML estimate for the transmitted codeword is precisely the codeword that achieves the largest correlation with $\underline{y}$.
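Putting the pieces together, the sketch below transmits one bipolar codeword of the hypothetical code over a simulated AWGN channel and decodes by maximum correlation; the noise level and random seed are arbitrary choices for illustration.

```python
import numpy as np

codebook = [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 0, 1, 1), (1, 1, 1, 0, 1)]
bipolar = [np.array([1 - 2 * b for b in c]) for c in codebook]

sigma = 0.5                                   # noise standard deviation
rng = np.random.default_rng(0)
transmitted = bipolar[1]
received = transmitted + sigma * rng.standard_normal(len(transmitted))

# ML decoding on the AWGN channel: pick the codeword with the largest
# correlation (scalar product) with the real-valued received vector.
estimate = max(bipolar, key=lambda c: float(np.dot(received, c)))
print(estimate, bool(np.array_equal(estimate, transmitted)))
```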