BCH Codes

Theory

In this experiment, we consider binary primitive BCH codes, a subclass of binary BCH codes that is among the most important from the standpoint of both theory and implementation. A detailed discussion of the topics covered in this experiment can be found in [1, Chapter 6]. One of the most important and popular subclasses of nonbinary BCH codes is the Reed-Solomon codes, presented in Experiment-7.

The theory associated with Experiment-9 is divided into three parts:

(1) Preliminaries
(2) Encoding of Binary Primitive BCH codes
(3) Decoding of Binary Primitive BCH codes

1     Preliminaries

In this section, we recall the basics of Galois fields (fields of prime-power order $\mathbb{F}_{p^{m}}$) and the construction of the minimal polynomial of an element belonging to the Galois field. For more details, students can refer to Experiment-6 (Part-4).

In this experiment, we study only binary BCH codes. Therefore, we consider the Galois field with $p=2$, i.e., $\mathbb{F}_{2^{m}} = \{0, 1, \alpha, \alpha^{2}, \ldots, \alpha^{2^{m}-2} \}$. Here $\alpha$ is the primitive element, which is a root of a primitive polynomial of degree $m$ over $\mathbb{F}_{2}$, and $\alpha^{2^{m}-1} = 1$. Notice that any non-zero element can be generated by the primitive element $\alpha$ (i.e., $\alpha$ raised to the power of some $i$, $0 \leq i \leq 2^{m}-2$).

As an example, all the elements of the field $\mathbb{F}_{2^{4}}$ are provided in Table 1, where $\alpha^{15}=1$. It contains both the power and polynomial representations of all elements, constructed with the primitive polynomial $1 + X + X^{4}$. Therefore, the relation $1+ \alpha + \alpha^{4} = 0$ has been used for the polynomial representation. This table will be utilized throughout the experiment wherever it is required.
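Table 1 can also be reproduced programmatically. The following Python sketch (the helper name `build_gf16` and the bitmask representation are our own conventions, not part of the experiment material) generates the polynomial representation of every power of $\alpha$ from the primitive polynomial $1+X+X^{4}$, storing each element as a 4-bit mask whose bit $i$ is the coefficient of $\alpha^{i}$.

```python
# Sketch: generate the power/polynomial table of GF(2^4) from 1 + X + X^4.
# Elements are 4-bit masks: bit i of the mask is the coefficient of alpha^i.
PRIM_POLY = 0b10011  # 1 + X + X^4


def build_gf16():
    """Return exp_table with exp_table[i] = polynomial form of alpha^i."""
    exp_table = []
    x = 1  # alpha^0 = 1
    for _ in range(15):
        exp_table.append(x)
        x <<= 1              # multiply by alpha
        if x & 0b10000:      # degree reached 4: reduce via alpha^4 = 1 + alpha
            x ^= PRIM_POLY
    return exp_table


for i, mask in enumerate(build_gf16()):
    poly = " + ".join(f"a^{j}" for j in range(4) if mask >> j & 1) or "0"
    print(f"alpha^{i:2d} = {poly}")
```

Successive multiplication by $\alpha$ with one reduction step reproduces, for instance, $\alpha^{4} = 1+\alpha$ and $\alpha^{12} = 1+\alpha+\alpha^{2}+\alpha^{3}$, matching Table 1.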

A binary BCH code of length $n = 2^{m}-1$ is called a binary primitive BCH code, since the length of the code is equal to the number of non-zero elements generated by the primitive element $\alpha$ in $\mathbb{F}_{2^{m}}$. For example, the binary BCH code of length $n=15$ over the field $\mathbb{F}_{2^{4}}$ is a binary primitive BCH code, since $n = 15 = 2^{4}-1$.

In Experiment-1 (Part-1), we have seen the definition of the field $\mathbb{F}$ with binary operations addition $(+)$ and multiplication $(\cdot)$. One of the axioms of a field is the existence of multiplicative inverses: for every $a \neq 0 \in \mathbb{F}$, there exists an element in $\mathbb{F}$, denoted by $a^{-1}$ and called the multiplicative inverse of $a$, such that $a \cdot a^{-1} =1$. In the later part of the experiment, we will need to find the multiplicative inverses of non-zero elements belonging to $\mathbb{F}_{2^{m}}$. Let us see the method to find the multiplicative inverse of any element of the Galois field $\mathbb{F}_{2^{m}}$. The multiplicative inverse of an element $\alpha^{l} \in \mathbb{F}_{2^{m}}$ is the element $\alpha^{l'} \in \mathbb{F}_{2^{m}}$ if and only if $\alpha^{l} \alpha^{l'} = 1$. That implies $\alpha^{l+l'} = 1$, which can happen only if $l+l' = 2^{m}-1$; therefore $\alpha^{l'} = \alpha^{(2^{m}-1)-l}$. For example, the inverse of $\alpha^{3}$ in the field $\mathbb{F}_{2^{4}}$ is $\alpha^{15-3} = \alpha^{12}$.
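The exponent rule $\alpha^{l'} = \alpha^{(2^{m}-1)-l}$ is easy to check in code. A minimal sketch (the helper names `gf_inv` and `gf_mul` are ours), using the same bitmask field tables as before:

```python
# Sketch: multiplicative inverses in GF(2^4) via exponent arithmetic.
# alpha^l has inverse alpha^{(15 - l) % 15}; we verify with log/antilog tables.
PRIM_POLY = 0b10011  # 1 + X + X^4

exp_table = []
x = 1
for _ in range(15):
    exp_table.append(x)
    x = (x << 1) ^ (PRIM_POLY if x & 0b1000 else 0)  # multiply by alpha, reduce
log_table = {e: i for i, e in enumerate(exp_table)}


def gf_mul(a, b):
    """Field multiplication via exponent addition modulo 15."""
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]


def gf_inv(a):
    """Inverse of a nonzero element: negate its exponent modulo 15."""
    return exp_table[(15 - log_table[a]) % 15]


# the inverse of alpha^3 is alpha^12, and their product is 1
assert gf_inv(exp_table[3]) == exp_table[12]
assert gf_mul(exp_table[3], exp_table[12]) == 1
```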

Minimal polynomials of field elements are used in finding the generator polynomial of the BCH code. So let us now see the construction of the minimal polynomial of an element through the following example.

\textbf{Example 1:} Consider the Galois field $\mathbb{F}_{2^{4}}$ such that $1+\alpha+\alpha^{4} = 0$ (see Table 1). Suppose we want to determine the minimal polynomial $\phi(X)$ of $\beta = \alpha^{3}$ in $\mathbb{F}_{2^{4}}$. Recall that the conjugates of $\beta$ are the distinct elements $\beta^{p}, \beta^{p^{2}}, \ldots, \beta^{p^{m}}$, where $p$ is the characteristic of $\mathbb{F}_{p^{m}}$ (here $p=2$). So the conjugates of the element $\beta=\alpha^{3}$ are $\alpha^{3 \cdot 2} = \alpha^{6}$, $\alpha^{3 \cdot 2^{2}}=\alpha^{12}$, $\alpha^{3 \cdot 2^{3}}=\alpha^{9}$ and $\alpha^{3 \cdot 2^{4}}=\alpha^{3}$. Hence, the minimal polynomial of $\beta = \alpha^{3}$ is
\begin{align*} \phi(X) & = (X+\alpha^{3}) (X+\alpha^{6}) (X+\alpha^{12}) (X+\alpha^{9}) \\ & = [X^{2}+(\alpha^{3}+\alpha^{6})X+\alpha^{9}] [X^{2}+(\alpha^{12}+\alpha^{9})X+\alpha^{21}] \\ & = (X^{2}+\alpha^{2}X+\alpha^{9}) (X^{2}+\alpha^{8} X+\alpha^{6}) \\ & = X^{4} + (\alpha^{2} + \alpha^{8}) X^{3} + (\alpha^{6}+\alpha^{10}+\alpha^{9}) X^{2} + (\alpha^{17} + \alpha^{8}) X + \alpha^{15} \\ & = X^{4} + X^{3} + X^{2} + X +1. \end{align*}
We can find the minimal polynomial of every element as shown above. Conjugates of an element

  • $\alpha$ are $\{\alpha, \alpha^{2}, \alpha^{4}, \alpha^{8}\}$,
  • $\alpha^{3}$ are $\{\alpha^{3}, \alpha^{6}, \alpha^{9}, \alpha^{12}\}$,
  • $\alpha^{5}$ are $\{\alpha^{5}, \alpha^{10}\}$, and
  • $\alpha^{7}$ are $\{\alpha^{7}, \alpha^{11}, \alpha^{13}, \alpha^{14}\}$.

The minimal polynomials of $\alpha, \alpha^{3}, \alpha^{5}$ and $\alpha^{7}$ are
\begin{align*} & \phi_{1}(X) = 1+X+X^{4}, \\ & \phi_{3}(X) = 1+X+X^{2}+X^{3}+X^{4}, \\ & \phi_{5}(X) = 1+X+X^{2}, \,\text{and}\\ & \phi_{7}(X) = 1+X^{3}+X^{4}, \end{align*}
respectively.
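The conjugate-based construction of Example 1 can be sketched in code (the helper `minimal_poly` is our own illustrative name): repeatedly square the exponent modulo 15 to collect the conjugates, then multiply out the linear factors over $\mathbb{F}_{2^{4}}$; the resulting coefficients all land in $\mathbb{F}_{2}$.

```python
# Sketch: minimal polynomial of beta = alpha^j in GF(2^4) by multiplying
# (X + beta)(X + beta^2)(X + beta^4)... over the field.
PRIM_POLY = 0b10011
exp_table = []
x = 1
for _ in range(15):
    exp_table.append(x)
    x = (x << 1) ^ (PRIM_POLY if x & 0b1000 else 0)
log_table = {e: i for i, e in enumerate(exp_table)}


def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]


def minimal_poly(j):
    """Coefficients (low degree first) of the minimal polynomial of alpha^j."""
    # conjugates: keep doubling the exponent modulo 15 until it repeats
    conj, e = [], j
    while e not in conj:
        conj.append(e)
        e = (2 * e) % 15
    # multiply the linear factors (X + alpha^e); field addition is XOR
    phi = [1]
    for e in conj:
        shifted = [0] + phi                               # X * phi
        scaled = [gf_mul(exp_table[e], c) for c in phi]   # alpha^e * phi
        phi = [a ^ b for a, b in zip(shifted, scaled + [0])]
    return phi


assert minimal_poly(3) == [1, 1, 1, 1, 1]  # 1 + X + X^2 + X^3 + X^4
assert minimal_poly(7) == [1, 0, 0, 1, 1]  # 1 + X^3 + X^4
```

Running it for $j = 1, 3, 5, 7$ reproduces the four minimal polynomials listed above.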

2     Encoding of Binary Primitive BCH Code

BCH codes form a class of cyclic codes. Suppose we want to design a code that can correct up to $t$ errors. In general, finding the generator polynomial that yields a $t$-error-correcting cyclic code is difficult. However, the design of BCH codes provides a systematic way of finding an appropriate generator polynomial that allows the correction of up to $t$ errors. In this section, we will explain the construction of the generator polynomial for a binary primitive BCH code that can correct up to $t$ errors.

For any positive integers $m$ ($m\geq 3$) and $t$ ($t < 2^{m-1}$), there exists a binary BCH code with the parameters: block length $n = 2^{m}-1$, number of parity-check bits $n-k \leq mt$, and minimum distance $d_{min} \geq 2t+1$. This code can correct any combination of $t$ or fewer errors in a block of $n = 2^{m}-1$ bits. From now onwards, we will refer to the binary primitive BCH code simply as the BCH code. Now we will see the construction of the generator polynomial of a $t$-error-correcting BCH code.

\textbf{Construction of Generator Polynomial:} The generator polynomial of the BCH code is specified in terms of its roots from the Galois field $\mathbb{F}_{2^{m}}$. Let $\alpha$ be a primitive element in $\mathbb{F}_{2^{m}}$. The generator polynomial $\mathbf{g}(X)$ of the BCH code of length $2^{m}-1$, which can correct up to $t$ errors, is the lowest-degree polynomial over $\mathbb{F}_{2}$ that has $\alpha, \alpha^{2}, \alpha^{3}, \ldots, \alpha^{2t}$ as its roots, i.e., $\mathbf{g}(\alpha^{i}) = 0$ for $1 \leq i \leq 2t$. We know that if $\beta$ is a root of a polynomial over $\mathbb{F}_{2}$, then all its conjugates are also roots of that polynomial. Therefore, $\mathbf{g}(X)$ has $\alpha, \alpha^{2}, \alpha^{3}, \ldots, \alpha^{2t}$ and their conjugates as all its roots. Let $\phi_{i}(X)$ be the minimal polynomial of $\alpha^{i}$. Then, $\mathbf{g}(X)$ must be the least common multiple (LCM) of $\phi_{1}(X), \phi_{2}(X), \ldots, \phi_{2t}(X)$, that is,
\begin{equation} \mathbf{g}(X) = \text{LCM} \, \{\phi_{1}(X), \phi_{2}(X), \ldots , \phi_{2t}(X) \}. \end{equation}
Based on the fact that the minimal polynomial of an element and all its conjugates is the same, the generator polynomial can be reduced to
\begin{equation} \mathbf{g}(X) = \text{LCM} \, \{\phi_{1}(X), \phi_{3}(X), \phi_{5}(X), \ldots , \phi_{2t-1}(X) \}. \end{equation}

Now, we can obtain the polynomial representation of the codeword for a given message vector $\mathbf{u}=\begin{bmatrix} u_0 & u_1 & \ldots & u_{k-1} \end{bmatrix} \in \mathbb{F}_{2}^{k}$ as
\begin{align} \mathbf{v}(X) & = \mathbf{u}(X) \mathbf{g}(X) \\ & = v_0 + v_1X + v_2X^2 + \ldots + v_{n-1}X^{n-1}, \nonumber \end{align}
where $\mathbf{u}(X) = u_0 + u_1X + u_2X^2 + \ldots + u_{k-1}X^{k-1}$ is the polynomial representation of the message $\mathbf{u} \in \mathbb{F}_2^k$. The codeword of $\mathbf{u} \in \mathbb{F}_2^k$ is the vector representation of $\mathbf{v}(X)$, i.e., $\mathbf{v}=\begin{bmatrix} v_0 & v_1 & \ldots & v_{n-1} \end{bmatrix} \in \mathbb{F}_2^n$.
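This nonsystematic encoding $\mathbf{v}(X) = \mathbf{u}(X)\mathbf{g}(X)$ is a single polynomial multiplication over $\mathbb{F}_{2}$. A minimal sketch, representing binary polynomials as Python ints (bit $i$ is the coefficient of $X^{i}$; the names `gf2_poly_mul` and `encode` are our own):

```python
# Sketch: nonsystematic BCH encoding v(X) = u(X) g(X) over GF(2).
# Polynomials are ints: bit i is the coefficient of X^i.

def gf2_poly_mul(a, b):
    """Carry-less (XOR) polynomial multiplication over GF(2)."""
    result = 0
    shift = 0
    while b:
        if b & 1:
            result ^= a << shift
        b >>= 1
        shift += 1
    return result


def encode(u, g):
    """Codeword polynomial of message polynomial u with generator g."""
    return gf2_poly_mul(u, g)


g = 0b10011            # g(X) = 1 + X + X^4 (the minimal polynomial phi_1)
u = 0b11               # u(X) = 1 + X
v = encode(u, g)
assert v == 0b110101   # v(X) = 1 + X^2 + X^4 + X^5
```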

Now we will illustrate the construction of a generator polynomial with an example (Example 2).

\textbf{Example 2:} A $t$-error-correcting BCH code of length 15.

The length of the code $n= 15 = 2^{m}-1$ implies $m=4$. Let $\alpha$ be a primitive element of the Galois field $\mathbb{F}_{2^{4}}$ such that $1 + \alpha + \alpha^{4} = 0$. The elements of $\mathbb{F}_{2^{4}}$ are given in Table 1.

For $t=1$: From Eq. (2), the generator polynomial is $\mathbf{g}(X) = \text{LCM} \, \{\phi_{1}(X)\}$. Therefore, the minimal polynomial $\phi_1(X)$ is the generator polynomial. The minimum distance of the BCH code satisfies $d_{min} \geq 2t+1 = 3$. The generator polynomial $\mathbf{g}(X) = 1+X+X^{4}$ is the minimal-degree code polynomial of the code, and the weight of the corresponding codeword is 3. Since $d_{min} \geq 3$ and there is a codeword of weight 3 in the code, $d_{min} = 3$. Therefore, with the above $\mathbf{g}(X)$ we obtain the $(n=15, k=11, d_{min}=3)$ code, which can correct any single error.

For $t=3$: From Eq. (2), the generator polynomial is $\mathbf{g}(X) = \text{LCM} \, \{\phi_{1}(X), \phi_{3}(X), \phi_{5}(X)\}$. Because $\phi_{1}(X), \phi_{3}(X), \phi_{5}(X)$ are three distinct irreducible polynomials,
\begin{align*} \mathbf{g}(X) & = \phi_{1}(X) \phi_{3}(X) \phi_{5}(X) \\ & = (1+X+X^{4}) (1+X+X^{2}+X^{3}+X^{4}) (1+X+X^{2}) \\ & = 1+X+X^{2}+X^{4}+X^{5}+X^{8}+X^{10}. \end{align*}
The minimum distance of the code satisfies $d_{min} \geq 2t+1 = 7$, and the weight of the codeword corresponding to the code polynomial $\mathbf{g}(X) = 1+X+X^{2}+X^{4}+X^{5}+X^{8}+X^{10}$ is 7; hence $d_{min} = 7$. Therefore, with the above $\mathbf{g}(X)$ we obtain the $(n=15, k=5, d_{min}=7)$ BCH code, which can correct any combination of three or fewer errors. Notice that these codes are not MDS codes.
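The product above is easy to verify by machine. A sketch (our own helper name) multiplying the three minimal polynomials as bitmask polynomials over $\mathbb{F}_{2}$:

```python
# Sketch: check the t = 3 generator polynomial of Example 2 by multiplying
# phi_1, phi_3, phi_5 over GF(2). Bit i of an int is the coefficient of X^i.

def gf2_poly_mul(a, b):
    result = 0
    shift = 0
    while b:
        if b & 1:
            result ^= a << shift
        b >>= 1
        shift += 1
    return result


phi1 = 0b10011   # 1 + X + X^4
phi3 = 0b11111   # 1 + X + X^2 + X^3 + X^4
phi5 = 0b111     # 1 + X + X^2

g = gf2_poly_mul(gf2_poly_mul(phi1, phi3), phi5)
assert g == 0b10100110111          # 1 + X + X^2 + X^4 + X^5 + X^8 + X^10
assert bin(g).count("1") == 7      # weight-7 codeword, consistent with d_min = 7
assert 15 - (g.bit_length() - 1) == 5  # k = n - deg(g) = 15 - 10 = 5
```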

3     Decoding of Binary Primitive BCH Code

Suppose that a codeword $\mathbf{v}(X) = v_0 + v_1X + v_2X^2 + \ldots + v_{n-1}X^{n-1}$ is transmitted over a Binary Symmetric Channel (BSC), and the received vector is $\mathbf{r}(X) = r_0 + r_1X + r_2X^2 + \ldots + r_{n-1}X^{n-1}$. Let $\mathbf{e}(X) = e_0 + e_1X + e_2X^2 + \ldots + e_{n-1}X^{n-1}$ be the error pattern. Then,
\begin{equation} \mathbf{r}(X) = \mathbf{v}(X) + \mathbf{e}(X). \end{equation}
In Experiment-4 (Part-2), we studied syndrome decoding for linear block codes. The key steps in syndrome decoding consist of computing the syndrome of the received vector and associating it with an error pattern, which is then added to the received vector to obtain the decoded codeword. Now we will discuss the procedure to find the syndrome of the received vector and its error pattern for the BCH code.

In a $t$-error-correcting BCH code, the syndrome can be found from the fact that $\alpha, \alpha^{2}, \ldots, \alpha^{2t}$ are the roots of $\mathbf{g}(X)$ (refer to Eq. (2)), i.e., $\mathbf{g}(\alpha^{i}) = 0$ for all $1 \leq i \leq 2t$. According to Eq. (3), every code polynomial is $\mathbf{v}(X) = \mathbf{u}(X) \mathbf{g}(X)$, so for every $1 \leq i \leq 2t$ we get $\mathbf{v}(\alpha^{i}) = \mathbf{u}(\alpha^{i}) \mathbf{g}(\alpha^{i}) = \mathbf{u}(\alpha^{i}) \cdot 0 = 0$. Using these $2t$ elements as evaluation points for the received polynomial, the syndrome vector is formed with $2t$ components. Let the syndrome vector be $\mathbf{S} = \begin{bmatrix} S_1 & S_2 & \ldots & S_{2t} \end{bmatrix}$ with components $S_{i} = \mathbf{r}(\alpha^{i})$ for all $1 \leq i \leq 2t$. Furthermore, from Eq. (4) we also get $\mathbf{r}(\alpha^{i}) = \mathbf{v}(\alpha^{i}) + \mathbf{e}(\alpha^{i}) = 0 + \mathbf{e}(\alpha^{i}) = \mathbf{e}(\alpha^{i})$ for all $1 \leq i \leq 2t$. Therefore,

\begin{align} \mathbf{S} = \begin{bmatrix} S_1 & S_2 & \ldots & S_{2t} \end{bmatrix} & = \begin{bmatrix} \mathbf{r}(\alpha^{1}) & \mathbf{r}(\alpha^{2}) & \ldots & \mathbf{r}(\alpha^{2t}) \end{bmatrix} \nonumber \\ & = \begin{bmatrix} \mathbf{e}(\alpha^{1}) & \mathbf{e}(\alpha^{2}) & \ldots & \mathbf{e}(\alpha^{2t}) \end{bmatrix}. \end{align}

Since $S_{i} = \mathbf{e}(\alpha^{i})$ for $1 \leq i \leq 2t$, the syndrome $\mathbf{S}$ depends on the error pattern $\mathbf{e}$ only. Suppose that the error vector $\mathbf{e}$ has $\nu$ errors at locations $j_{1}, j_{2}, \ldots, j_{\nu}$, where $0 \leq j_1 < j_2 < \ldots < j_{\nu} < n$. Then the corresponding error polynomial is
\begin{equation} \mathbf{e}(X) = X^{j_1}+X^{j_2}+ \ldots + X^{j_{\nu}}. \end{equation}

From Eq. (5) and Eq. (6), we obtain, for every $1 \leq i \leq 2t$, $S_{i} = (\alpha^{j_1})^{i}+(\alpha^{j_2})^{i}+ \ldots + (\alpha^{j_{\nu}})^{i}$. Since the $j_{l}$'s are unknown, the $\alpha^{j_{l}}$'s are also unknown. For convenience, let $\beta_{l} = \alpha^{j_{l}}$ for $1 \leq l \leq \nu$. We call these elements the error-location numbers, since by knowing the $\beta_{l}$'s, we can tell the locations $j_{l}$ of the errors. Now we define the following polynomial:
\begin{align} \sigma(X) & \triangleq (1+\beta_{1}X) (1+\beta_{2}X) \ldots (1+\beta_{\nu}X) \nonumber \\ & = \sigma_{0} + \sigma_{1}X+ \sigma_{2}X^{2}+ \ldots + \sigma_{\nu}X^{\nu}. \end{align}
The roots of $\sigma(X)$ are $\beta_{1}^{-1}, \beta_{2}^{-1}, \ldots, \beta_{\nu}^{-1}$, which are the inverses of $\beta_{1}= \alpha^{j_{1}}, \beta_{2}= \alpha^{j_{2}}, \ldots, \beta_{\nu}= \alpha^{j_{\nu}}$. Therefore, by knowing $\beta_{l}^{-1}$ for all $l$ we can find $\beta_{l} = \alpha^{j_{l}}$, and thus we get the error locations $j_{l}$ of $\mathbf{e}$. For this reason, $\sigma(X)$ is called the error-location polynomial. Note that $\sigma(X)$ is an unknown polynomial whose coefficients $\sigma_{l}$'s must be determined. Our decoding strategy is to estimate the codeword that is nearest to the received vector, which is equivalent to finding an error pattern with the smallest number of error locations. Therefore, utilizing the relationship between the $\sigma_{l}$'s and the syndrome components $S_{i}$'s, we aim to determine $\sigma(X)$ with the lowest possible degree, because the lowest-degree polynomial has the fewest roots ($\beta_{l}$'s), and hence the number of error locations ($j_{l}$'s) is minimum. If $\nu \leq t$, this $\sigma(X)$ will give the actual error pattern $\mathbf{e}(X)$.
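To make Eq. (7) concrete, the sketch below (helper names ours) builds $\sigma(X)$ for a hypothetical error pattern with $\nu = 3$ errors at locations $3, 5, 12$ (our illustrative choice) and confirms that the inverses of the $\beta_{l}$'s are exactly its roots:

```python
# Sketch: build the error-location polynomial sigma(X) of Eq. (7) for a
# hypothetical error pattern e(X) = X^3 + X^5 + X^12 and check its roots.
PRIM_POLY = 0b10011
exp_table = []
x = 1
for _ in range(15):
    exp_table.append(x)
    x = (x << 1) ^ (PRIM_POLY if x & 0b1000 else 0)
log_table = {e: i for i, e in enumerate(exp_table)}


def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]


def sigma_from_locations(locs):
    """Multiply out (1 + beta_l X) with beta_l = alpha^{j_l}; low degree first."""
    sigma = [1]
    for j in locs:
        beta = exp_table[j]
        shifted = [0] + [gf_mul(beta, c) for c in sigma]  # beta * X * sigma(X)
        sigma = [a ^ b for a, b in zip(sigma + [0], shifted)]
    return sigma


def sigma_eval(sigma, x):
    """Evaluate sigma at the field element x (Horner's rule)."""
    acc = 0
    for c in reversed(sigma):
        acc = gf_mul(acc, x) ^ c
    return acc


locs = [3, 5, 12]
sigma = sigma_from_locations(locs)
# the roots are the inverses beta_l^{-1} = alpha^{15 - j_l}
for j in locs:
    assert sigma_eval(sigma, exp_table[(15 - j) % 15]) == 0
```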

By knowing the polynomial $\sigma(X)$, we can find its roots by substituting the elements of the field $\mathbb{F}_{2^{m}}$. From this, we can find $\beta_{1} = \alpha^{j_1}, \beta_{2} =\alpha^{j_2}, \ldots, \beta_{\nu} = \alpha^{j_{\nu}}$, and thus the error locations $j_{1}, j_{2}, \ldots, j_{\nu}$.

Now, we can summarize the error-correcting procedure for BCH codes with three major steps:

  1. Compute the syndrome $\mathbf{S} = \begin{bmatrix} S_1 & S_2 & \ldots & S_{2t}\end{bmatrix}$ from the received polynomial $\mathbf{r}(X)$.
  2. Determine the error-location polynomial $\sigma(X)$ from the syndrome components $S_1, S_2, \ldots, S_{2t}$.
  3. Determine the error-location numbers $\beta_{1} = \alpha^{j_1}, \beta_{2} =\alpha^{j_2}, \ldots, \beta_{\nu} = \alpha^{j_{\nu}}$ by finding the roots of $\sigma(X)$, and correct the errors in $\mathbf{r}(X)$.

In step 1, computing the syndrome can be done by evaluating the received polynomial at the $\alpha^{i}$'s. In step 3, for the $\sigma(X)$ obtained from step 2, the roots $\beta_{l}^{-1}$ of $\sigma(X)$ can be obtained by substituting all the field elements; from these we can find $\beta_{l} = \alpha^{j_{l}}$ and the corresponding error locations $j_{l}$. Among the three steps, step 2 is the most complicated part of decoding a BCH code. For step 2, in this experiment, we present one of the iterative algorithms, called Berlekamp's iterative algorithm, for finding the error-location polynomial $\sigma(X)$. This algorithm has initial state values in the rows $\mu = -1, 0$ (see Table 2). Using these initial rows and the syndrome components $S_{1}$ and $S_{2}$, the first step of the iteration ($\mu =1$) is to find the minimal-degree polynomial $\sigma^{(1)}(X)$ and the entire row $\mu=1$ of the table. The next step is to find $\sigma^{(2)}(X)$ and the row $\mu=2$. For this, we use the rows $\mu = -1, 0, 1$ and the syndrome components $S_{1}, S_{2}$ and $S_{3}$. This process continues for $2t$ iterations. Suppose the algorithm has completed the $\mu$th row; then the $(\mu+1)$th step of the iteration is to find the minimum-degree polynomial $\sigma^{(\mu+1)}(X)$ and the entire $(\mu+1)$th row of the table. The syndrome components $S_1, S_2, \ldots, S_{\mu+1}$ and the rows up to $\mu$ are used to obtain $\sigma^{(\mu+1)}(X)$. Now we will see the procedure of Berlekamp's iterative algorithm to get $\sigma^{(\mu+1)}(X)$ in detail.

\textbf{Berlekamp's iterative algorithm:} If the number of errors in the received polynomial $\mathbf{r}(X)$ is $t$ or fewer, then $\sigma(X)$ produces the true error pattern. Let
\begin{equation} \sigma^{(\mu)}(X) = 1 + \sigma^{(\mu)}_{1}X+ \sigma^{(\mu)}_{2}X^{2}+ \ldots + \sigma^{(\mu)}_{l_{\mu}}X^{l_{\mu}} \end{equation}
be the minimum-degree polynomial determined at the $\mu$th step of the iteration. Here $l_{\mu}$ is the degree of $\sigma^{(\mu)}(X)$ and $\sigma^{(\mu)}_{j}$ is the coefficient of $X^{j}$ for $j= 0,1,2, \ldots, l_{\mu}$.

To carry out the iteration of finding $\sigma(X)$, we begin with Table 2 and proceed to fill out the table, where $l_{\mu}$ is the degree of $\sigma^{(\mu)}(X)$ and $d_{\mu}$ is called the $\mu$th discrepancy. Assuming that we have filled out all rows up to and including the $\mu$th row, we fill out the $(\mu+1)$th row as follows:

  1. If $d_{\mu} = 0$, then
    \begin{equation} \sigma^{(\mu+1)}(X) = \sigma^{(\mu)}(X), \, \text{and} \,\, l_{\mu+1} = l_{\mu}. \end{equation}
  2. If $d_{\mu} \neq 0$, we find another row $\rho$ prior to the $\mu$th row such that $d_{\rho} \neq 0$ and the number $\rho - l_{\rho}$ in the last column of the table has the largest value. Then, $\sigma^{(\mu+1)}(X)$ is
\begin{equation} \sigma^{(\mu+1)}(X) = \sigma^{(\mu)}(X) + d_{\mu} d_{\rho}^{-1} X^{(\mu - \rho)} \sigma^{(\rho)}(X), \end{equation}
and
\begin{equation} l_{\mu +1} = \max(l_{\mu}, l_{\rho} + \mu -\rho ). \end{equation}
In either case,
\begin{equation} d_{\mu+1} = S_{\mu+2} + \sigma_{1}^{(\mu+1)} S_{\mu+1} + \ldots + \sigma^{(\mu+1)}_{l_{\mu+1}} S_{\mu+2-l_{\mu+1}}, \end{equation}
where the $\sigma_{i}^{(\mu+1)}$'s are the coefficients of $\sigma^{(\mu+1)}(X)$. The polynomial $\sigma^{(2t)}(X)$ in the last row is the required $\sigma(X)$. If its degree is greater than $t$, there are more than $t$ errors in the received polynomial $\mathbf{r}(X)$, and there will be a decoding error, because the roots of the obtained $\sigma(X)$ do not provide the correct error locations of the received vector.
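The iteration rules above can be sketched directly in code. The following is a minimal implementation for $\mathbb{F}_{2^{4}}$ (function names are our own; the field tables are hard-coded for concreteness), returning $\sigma^{(2t)}(X)$ as a coefficient list, low degree first:

```python
# Sketch of Berlekamp's iterative algorithm (Eqs. (9)-(12)) over GF(2^4).
# Field elements are 4-bit masks; addition is XOR, multiplication via logs.
PRIM_POLY = 0b10011
exp_table = []
x = 1
for _ in range(15):
    exp_table.append(x)
    x = (x << 1) ^ (PRIM_POLY if x & 0b1000 else 0)
log_table = {e: i for i, e in enumerate(exp_table)}


def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]


def gf_inv(a):
    return exp_table[(15 - log_table[a]) % 15]


def berlekamp(S, t):
    """Error-location polynomial from syndromes S = [S_1, ..., S_{2t}]."""
    sigma = {-1: [1], 0: [1]}          # sigma^{(mu)} rows of the table
    l = {-1: 0, 0: 0}                  # degrees l_mu
    d = {-1: 1, 0: S[0]}               # discrepancies d_mu (d_{-1} = 1)
    for mu in range(0, 2 * t):
        if d[mu] == 0:                 # condition 1, Eq. (9)
            sigma[mu + 1] = sigma[mu][:]
            l[mu + 1] = l[mu]
        else:                          # condition 2, Eqs. (10) and (11)
            rho = max((r for r in range(-1, mu) if d[r] != 0),
                      key=lambda r: r - l[r])
            scale = gf_mul(d[mu], gf_inv(d[rho]))
            corr = [0] * (mu - rho) + [gf_mul(scale, c) for c in sigma[rho]]
            new = [0] * max(len(sigma[mu]), len(corr))
            for i, c in enumerate(sigma[mu]):
                new[i] ^= c
            for i, c in enumerate(corr):
                new[i] ^= c
            sigma[mu + 1] = new
            l[mu + 1] = max(l[mu], l[rho] + mu - rho)
        if mu + 1 < 2 * t:             # Eq. (12): next discrepancy
            disc = S[mu + 1]           # S_{mu+2} (S is 0-indexed)
            for i in range(1, l[mu + 1] + 1):
                if i < len(sigma[mu + 1]):
                    disc ^= gf_mul(sigma[mu + 1][i], S[mu + 1 - i])
            d[mu + 1] = disc
    return sigma[2 * t]
```

For a single error at location 3 with $t=1$, for instance, the syndromes are $[\alpha^{3}, \alpha^{6}]$ and the sketch returns the coefficients of $1+\alpha^{3}X$.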

Now, we decode the received vector given in the following example; the process can be divided into the three major steps discussed earlier.

\textbf{Example 3:} Consider the $(15, 5, 7)$ triple-error-correcting BCH code given in Example 2. Suppose that $\mathbf{v} = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]$ is transmitted, and the vector $\mathbf{r} = [0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0]$ is received. Then $\mathbf{r}(X) = X^{3}+X^{5}+X^{12}$. The step-by-step decoding process of $\mathbf{r}(X)$ is explained below:


\textbf{Step 1:} We find the syndrome vector of the received polynomial from Eq. (5):
\begin{align*} \mathbf{S} & = \begin{bmatrix} S_1 & S_2 & S_3 & S_4 & S_5 & S_{6} \end{bmatrix} \\ & = \begin{bmatrix} \mathbf{r}(\alpha^{1}) & \mathbf{r}(\alpha^{2}) & \mathbf{r}(\alpha^{3}) & \mathbf{r}(\alpha^{4}) & \mathbf{r}(\alpha^{5}) & \mathbf{r}(\alpha^{6}) \end{bmatrix}. \end{align*}

For this example, for $1 \leq i \leq 2t$, $\mathbf{r}(\alpha^{i}) = \alpha^{3i}+\alpha^{5i}+\alpha^{12i}$ can be found by using Table 1. Let's look at the evaluation of the received polynomial $\mathbf{r}(\alpha^{i})$ for $i = 1, 2, 3$:

\begin{align*} \mathbf{r}(\alpha) & = \alpha^{3} + \alpha^{5} + \alpha^{12} \\ & = \alpha^{3} + (\alpha +\alpha^{2}) + (1+\alpha+\alpha^{2}+\alpha^{3}) \\ & = 1. \end{align*}
\begin{align*} \mathbf{r}(\alpha^{2}) & = \alpha^{6} + \alpha^{10} + \alpha^{24} \\ & = \alpha^{6} + \alpha^{10} + \alpha^{9} \\ & = (\alpha^{2} + \alpha^{3}) + (1+\alpha+\alpha^{2}) + (\alpha +\alpha^{3}) \\ & = 1. \end{align*}
\begin{align*} \mathbf{r}(\alpha^{3}) & = \alpha^{9} + \alpha^{15} + \alpha^{36} \\ & = \alpha^{9} + 1 + \alpha^{6} \\ & = (\alpha + \alpha^{3}) + 1 + (\alpha^{2} +\alpha^{3}) \\ & = \alpha^{10}. \end{align*}
Similarly, we can find $\mathbf{r}(\alpha^{i})$ for $i = 4, 5, 6$. The syndrome vector is
\begin{align*} \mathbf{S} & = \begin{bmatrix} S_1 & S_2 & S_3 & S_4 & S_5 & S_{6} \end{bmatrix} \\ & = \begin{bmatrix} \mathbf{r}(\alpha^{1}) & \mathbf{r}(\alpha^{2}) & \mathbf{r}(\alpha^{3}) & \mathbf{r}(\alpha^{4}) & \mathbf{r}(\alpha^{5}) & \mathbf{r}(\alpha^{6}) \end{bmatrix} \\ & = \begin{bmatrix} 1 & 1 & \alpha^{10} & 1 & \alpha^{10} & \alpha^{5} \end{bmatrix}. \end{align*}
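The hand computation above can be cross-checked with a short script (helper name ours) that evaluates $\mathbf{r}(X) = X^{3}+X^{5}+X^{12}$ at $\alpha^{1}, \ldots, \alpha^{6}$ using the same field tables:

```python
# Sketch: compute the syndrome vector of Example 3 by evaluating
# r(X) = X^3 + X^5 + X^12 at alpha^1, ..., alpha^6 in GF(2^4).
PRIM_POLY = 0b10011
exp_table = []
x = 1
for _ in range(15):
    exp_table.append(x)
    x = (x << 1) ^ (PRIM_POLY if x & 0b1000 else 0)


def syndromes(error_positions, two_t):
    """S_i = sum over l of alpha^{j_l * i}; field addition is XOR."""
    S = []
    for i in range(1, two_t + 1):
        s = 0
        for j in error_positions:
            s ^= exp_table[(j * i) % 15]
        S.append(s)
    return S


S = syndromes([3, 5, 12], 6)
# expected: [1, 1, alpha^10, 1, alpha^10, alpha^5] in polynomial form
assert S == [1, 1, exp_table[10], 1, exp_table[10], exp_table[5]]
```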

Thus, we have obtained the syndrome of the received polynomial. Now, we move on to step 2 in order to find the error-location polynomial $\sigma(X)$, for which we use Berlekamp's iterative algorithm.

\textbf{Step 2:} In the process of determining the error-location polynomial $\sigma(X)$, we work through the first few iterations of Table 2 of Berlekamp's algorithm for Example 3.

\textbf{First iteration:} We start with Table 2, which has initial values up to the row $\mu =0$. Now let's find the row $\mu+1 =0+1 =1$. Since $d_{0} = S_1 = 1 \neq 0$ (condition 2 in the algorithm), $\rho = -1$. Therefore, from Eq. (10) the minimum-degree polynomial is
\begin{align*} \sigma^{(1)}(X) = \sigma^{(0+1)} (X) & = \sigma^{(0)}(X)+d_{0}(d_{-1})^{-1} X^{(0-(-1))} \sigma^{(-1)}(X) \\ & = 1 + 1(1)^{-1} X (1) \\ & = 1 +X. \end{align*}

From Eq. (11), the degree is
\begin{equation*} l_{1} = l_{0+1} = \max (l_{0}, l_{-1}+0-(-1)) = \max (0, 0+1) =1. \end{equation*}
From Eq. (12), the discrepancy is
\begin{equation*} d_{1} = d_{0+1} = S_{0+2} + \sigma_{1}^{(0+1)} S_{0+1} = S_{2} + \sigma_{1}^{(1)} S_{1} = 1 + 1 \cdot 1= 0. \end{equation*}
Now we have filled up to the row $\mu = 1$ in Table 2, and it will be as shown in Table 3.

\textbf{Second iteration:} Now, using Table 3, let's find the row $\mu+1 =1+1 =2$. Since $d_{1} = 0$ (condition 1 in the algorithm), from Eq. (9) the minimum-degree polynomial is
\begin{equation*} \sigma^{(2)}(X) = \sigma^{(1+1)}(X) = \sigma^{(1)}(X) = 1+X, \, \text{and the degree} \, \, l_{2} = l_{1+1} = l_{1} = 1. \end{equation*}
From Eq. (12), the discrepancy is
\begin{equation*} d_{2} = d_{1+1} = S_{1+2} + \sigma_{1}^{(1+1)} S_{1+1} = S_{3} + \sigma_{1}^{(2)} S_{2} = \alpha^{10} + 1 \cdot 1= (1+\alpha+\alpha^{2}) + 1 = \alpha^{5}. \end{equation*}
Now we have filled up to the row $\mu = 2$ in Table 2, and it will be as shown in Table 4.

\textbf{Third iteration:} Now, using Table 4, let's find the row $\mu+1 =2+1 =3$. Since $d_2 = \alpha^{5} \neq 0$, $\rho = 0$. Therefore, from Eq. (10) the minimum-degree polynomial is
\begin{align*} \sigma^{(3)}(X) = \sigma^{(2+1)}(X) & = \sigma^{(2)}(X)+d_{2}(d_{0})^{-1} X^{(2-0)} \sigma^{(0)}(X) \\ & = (1+X) + \alpha^{5} (1)^{-1}X^{2}(1) \\ & = 1+X+\alpha^{5}X^{2}. \end{align*}

From Eq. (11), the degree is
\begin{equation*} l_{3} = l_{2+1} = \max (l_{2}, l_{0}+2-0) = \max (1, 2) = 2. \end{equation*}
From Eq. (12), the discrepancy is
\begin{align*} d_{3} = d_{2+1} & = S_{2+2} + \sigma_{1}^{(2+1)} S_{2+1} + \sigma_{2}^{(2+1)} S_{2} \\ & = S_{4} + \sigma_{1}^{(3)} S_{3} + \sigma_{2}^{(3)} S_{2} \\ & = 1 + (1) \alpha^{10} + \alpha^{5}(1) \\ & = 1 + (1+\alpha+\alpha^{2})+(\alpha+\alpha^{2}) \\ & = 0. \end{align*}
Now we have filled up to the row $\mu = 3$ in Table 2, and it will be as shown in Table 5. Similarly, we can find the rows up to the $\mu = 6$th iteration; this is left as an exercise.
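Completing the remaining rows (the exercise above) yields $\sigma(X) = 1+X+\alpha^{5}X^{3}$ for this example. Taking that polynomial as given, step 3 can be sketched as a brute-force root search over the field (helper names ours): the roots are inverted to obtain the error locations, and flipping those bits of $\mathbf{r}$ recovers the transmitted all-zero codeword.

```python
# Sketch of step 3 for Example 3: find the roots of sigma(X) = 1 + X + a^5 X^3
# by trying every field element, invert them to get the error locations, and
# correct r(X) = X^3 + X^5 + X^12.
PRIM_POLY = 0b10011
exp_table = []
x = 1
for _ in range(15):
    exp_table.append(x)
    x = (x << 1) ^ (PRIM_POLY if x & 0b1000 else 0)
log_table = {e: i for i, e in enumerate(exp_table)}


def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]


def evaluate(poly, x):
    """Horner evaluation of a coefficient list (low degree first)."""
    acc = 0
    for c in reversed(poly):
        acc = gf_mul(acc, x) ^ c
    return acc


sigma = [1, 1, 0, exp_table[5]]   # 1 + X + alpha^5 X^3

# roots are the beta_l^{-1}; negating their exponents gives the locations j_l
roots = [i for i in range(15) if evaluate(sigma, exp_table[i]) == 0]
locations = sorted((15 - i) % 15 for i in roots)
assert locations == [3, 5, 12]

# correct the received vector by flipping the located bits
r = [0] * 15
for j in (3, 5, 12):
    r[j] = 1
for j in locations:
    r[j] ^= 1
assert r == [0] * 15              # the all-zero codeword is recovered
```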