Reed–Solomon Error Correction

Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as CDs, DVDs, Blu-ray discs, and QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as DVB and ATSC, and storage systems such as RAID 6. They are also used in satellite communication.

Reed–Solomon codes operate on a block of data treated as a set of finite field elements called symbols. For example, a block of 4096 bytes (32768 bits) could be treated as a set of 2731 12-bit symbols, where each symbol is an element of GF(2^12), the last symbol padded with four 0 bits. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊t/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to t erasures at positions that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code, and may be selected within wide limits.
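The symbol arithmetic in the example above is easy to check; a quick MATLAB snippet (illustrative only, not part of any library):

    bits = 4096 * 8                  % 32768 bits in the block
    nsym = ceil(bits / 12)           % 2731 twelve-bit symbols
    pad  = nsym * 12 - bits          % 4 zero bits of padding in the last symbol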

There are two basic types of Reed–Solomon codes, the original view and the BCH view, with the BCH view being the most common because BCH view decoders are faster and require less working storage than original view decoders.


History
Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields". The original encoding scheme described in the Reed & Solomon article used a variable polynomial based on the message to be encoded, where only a fixed set of values (evaluation points) to be encoded are known to the encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k (unencoded message length) out of n (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme, and ones that use the BCH encoding scheme.

Also in 1960, a practical fixed polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961 (D. Gorenstein and N. Zierler, "A class of cyclic linear error-correcting codes in p^m symbols", J. SIAM, vol. 9, pp. 207–214, June 1961). The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961). By 1963 (or possibly earlier), J. J. Stone (and others) recognized that Reed–Solomon codes could use the BCH scheme of a fixed generator polynomial, making such codes a special class of BCH codes (Error Correcting Codes by W. Wesley Peterson, second edition, 1972). However, Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not even cyclic codes.

In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey, and has since been known as the Berlekamp–Massey decoding algorithm.

In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm.Yasuo Sugiyama, Masao Kasahara, Shigeichi Hirasawa, and Toshihiko Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27:87–99, 1975.

In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by more modern low-density parity-check (LDPC) codes or turbo codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S, but LDPC codes are used in its successor, DVB-S2.

In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed.

In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (see the Guruswami–Sudan list decoding algorithm).

In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.


Applications

Data storage
Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects.

Reed–Solomon coding is a key component of the compact disc. It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yields a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block.

The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.

DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code.

Reed–Solomon error correction is also used in parchive files, which are commonly posted accompanying multimedia files on USENET. The distributed online storage service Wuala (discontinued in 2015) also used Reed–Solomon coding when breaking up files.


Bar code
Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, and Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure.

Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.


Data transmission
Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes a code of RS(N, K) which results in N codewords of length N symbols each storing K symbols of data, being generated, that are then sent over an erasure channel.

Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. In short, N is usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent.

Reed–Solomon codes are also used in xDSL systems and CCSDS's Space Communications Protocol Specifications as a form of forward error correction.


Space transmission
One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager space probe.

Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications.

Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes.

Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder, Galileo, Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity.

These concatenated codes are now being replaced by more powerful turbo codes.


Constructions
The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size q, a block length n, and a message length k, with k < n ≤ q. The set of alphabet symbols is interpreted as the finite field of order q, and thus, q has to be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate R = k/n is some constant, and furthermore, the block length is equal to or one less than the alphabet size, that is, n = q or n = q − 1.


Reed & Solomon's original view: The codeword as a sequence of values
There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords. In the original view of Reed & Solomon, every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than k. In order to obtain a codeword of the Reed–Solomon code, the message is interpreted as the description of a polynomial p of degree less than k over the finite field F with q elements. In turn, the polynomial p is evaluated at n ≤ q distinct points a_1,\dots,a_n of the field F, and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include {0, 1, 2, ..., n − 1}, {0, 1, α, α^2, ..., α^{n−2}}, or {1, α, α^2, ..., α^{n−1}}, where α is a primitive element of F.

Formally, the set \mathbf{C} of codewords of the Reed–Solomon code is defined as follows:

\mathbf{C}
= \Big\{\;
    \big( p(a_1), p(a_2), \dots, p(a_n) \big)
    \;\Big|\;
    p \text{ is a polynomial over } F \text{ of degree } < k
  \;\Big\}\,.
Since any two  distinct polynomials of degree less than k agree in at most k-1 points, this means that any two codewords of the Reed–Solomon code disagree in at least n - (k-1) = n-k+1 positions.
Furthermore, there are two distinct polynomials that do agree in k-1 points but are not equal, and thus, the distance of the Reed–Solomon code is exactly d=n-k+1.
Then the relative distance is \delta = d/n = 1-k/n + 1/n = 1-R+1/n\sim 1-R, where R=k/n is the rate.
This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies \delta+R\leq 1+1/n.
Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes.
     

While the number of different polynomials of degree less than k and the number of different messages are both equal to q^k, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon interprets the message x as the coefficients of the polynomial p, whereas subsequent constructions interpret the message as the values of the polynomial at the first k points a_1,\dots,a_k and obtain the polynomial p by interpolating these values with a polynomial of degree less than k. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code, that is, the original message is always contained as a subsequence of the codeword.


Simple encoding procedure: The message as a sequence of coefficients
In the original construction of Reed & Solomon, the message x=(x_1,\dots,x_k)\in F^k is mapped to the polynomial p_x with
p_x(a) = \sum_{i=1}^k x_i a^{i-1} \,.
The codeword of x is obtained by evaluating p_x at n different points a_1,\dots,a_n of the field F. Thus the classical encoding function C:F^k \to F^n for the Reed–Solomon code is defined as follows:
C(x) = \big(p_x(a_1),\dots,p_x(a_n)\big)\,.
This function C is a linear mapping, that is, it satisfies C(x) = x \cdot A for the following (k\times n)-matrix A with elements from F:
A=\begin{bmatrix}
1 & \dots & 1 & \dots & 1 \\ a_1 & \dots & a_k & \dots & a_n \\ a_1^2 & \dots & a_k^2 & \dots & a_n^2 \\ \vdots & \dots & \vdots & \dots & \vdots \\ a_1^{k-1} & \dots & a_k^{k-1} & \dots & a_n^{k-1} \end{bmatrix}

This matrix is the transpose of a Vandermonde matrix over F. In other words, the Reed–Solomon code is a linear code, and in the classical encoding procedure, its generator matrix is A.
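As an illustration, the following MATLAB sketch performs this classical encoding over the prime field GF(929) (so arithmetic is plain mod 929), with the evaluation points a_i = i − 1 used in the examples later in this article; the variable names are ours:

    q = 929; n = 7; k = 3;
    a = 0:n-1;                                   % evaluation points a_i = i - 1
    x = [1 2 3];                                 % message: p(a) = 1 + 2a + 3a^2
    A = mod(repmat(a, k, 1) .^ repmat((0:k-1)', 1, n), q);  % k-by-n matrix A
    c = mod(x * A, q)                            % codeword (p(a_1), ..., p(a_n))
    % c = [1 6 17 34 57 86 121]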


Systematic encoding procedure: The message as an initial sequence of values
There is an alternative encoding procedure that also produces the Reed–Solomon code, but that does so in a systematic way. Here, the mapping from the message x to the polynomial p_x works differently: the polynomial p_x is now defined as the unique polynomial of degree less than k such that
p_x(a_i) = x_i holds for all i\in\{1,\dots,k\}.
To compute this polynomial p_x from x, one can use Lagrange interpolation. Once it has been found, it is evaluated at the other points a_{k+1},\dots,a_n of the field. The alternative encoding function C:F^k \to F^n for the Reed–Solomon code is then again just the sequence of values:
C(x) = \big(p_x(a_1),\dots,p_x(a_n)\big)\,.
Since the first k entries of each codeword C(x) coincide with x, this encoding procedure is indeed systematic. Since Lagrange interpolation is a linear transformation, C is a linear mapping. In fact, we have C(x) = x \cdot G , where
G=
(A\text{'s left square submatrix})^{-1}\cdot A = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & g_{1,k+1} & \dots & g_{1,n} \\ 0 & 1 & 0 & \dots & 0 & g_{2,k+1} & \dots & g_{2,n} \\ 0 & 0 & 1 & \dots & 0 & g_{3,k+1} & \dots & g_{3,n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \dots & 0 & \dots & 1 & g_{k,k+1} & \dots & g_{k,n} \end{bmatrix}
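A minimal MATLAB sketch of this systematic procedure, again over the prime field GF(929) with a_i = i − 1 (the brute-force modular inverse and all variable names are ours):

    q = 929; n = 7; k = 3;
    a = 0:n-1;
    x = [1 6 17];                          % message = p(a_1), p(a_2), p(a_3)
    modinv = @(v) find(mod((1:q-1)*v, q) == 1, 1);  % inverse in GF(q) by search
    c = [x, zeros(1, n-k)];                % message appears verbatim as c(1:k)
    for j = k+1:n                          % extend by Lagrange interpolation
        s = 0;
        for i = 1:k                        % Lagrange basis L_i evaluated at a(j)
            Li = 1;
            for u = [1:i-1, i+1:k]
                Li = mod(Li * mod(a(j)-a(u), q) * modinv(mod(a(i)-a(u), q)), q);
            end
            s = mod(s + x(i) * Li, q);
        end
        c(j) = s;
    end
    c                                      % = [1 6 17 34 57 86 121]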


Discrete Fourier transform and its inverse
A discrete Fourier transform is essentially the same as the encoding procedure; it uses the message polynomial p_x to map a set of evaluation points into the codeword values, as shown above:

C(x) = \big(p_x(a_1),\dots,p_x(a_n)\big)\,.

The inverse Fourier transform could be used to convert an error-free set of n message values back into the encoding polynomial of k coefficients, with the constraint that, for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of α:

a_i = \alpha^{i-1}
a_1, \dots, a_n = \{ 1, \alpha, \alpha^2, \dots, \alpha^{n-1} \}

However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder.


The BCH view: The codeword as a sequence of coefficients
In this view, the sender again maps the message x to a polynomial p_x, and for this, either of the two mappings just described can be used (where the message is either interpreted as the coefficients of p_x or as the initial sequence of values of p_x). Once the sender has constructed the polynomial p_x in some way, however, instead of sending the values of p_x at all points, the sender computes some related polynomial s of degree at most n-1 for n=q-1 and sends the n coefficients of that polynomial. The polynomial s(a) is constructed by multiplying the message polynomial p_x(a), which has degree at most k-1, with a generator polynomial g(a) of degree n-k that is known to both the sender and the receiver. The generator polynomial g(x) is defined as the polynomial whose roots are exactly \alpha,\alpha^2,\dots,\alpha^{n-k}, i.e.,

g(x) = (x-\alpha)(x-\alpha^2)\cdots(x-\alpha^{n-k}) = g_0 + g_1x + \cdots + g_{n-k-1}x^{n-k-1} + x^{n-k}\,.

The transmitter sends the n=q-1 coefficients of s(a)=p_x(a) \cdot g(a). Thus, in the BCH view of Reed–Solomon codes, the set \mathbf{C} of codewords is defined for n=q-1 as follows:

\mathbf{C} =
\left\{
 \left ( s_1, s_2,\dots, s_{n} \right)
 \;\Big|\;
 s(a)=\sum_{i=1}^n s_i a^{i-1}
 \text{ is a polynomial that has at least the roots } \alpha^1,\alpha^2, \dots, \alpha^{n-k}
     
\right\}\,.


Systematic encoding procedure
The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure, in which each codeword contains the message as a prefix. Here, instead of sending s(x) = p(x) g(x), the encoder constructs the transmitted polynomial s(x) such that the coefficients of the k largest monomials are equal to the corresponding coefficients of p(x), and the lower-order coefficients of s(x) are chosen exactly in such a way that s(x) becomes divisible by g(x). Then the coefficients of p(x) are a subsequence of the coefficients of s(x). To get a code that is overall systematic, we construct the message polynomial p(x) by interpreting the message as the sequence of its coefficients.

Formally, the construction is done by multiplying p(x) by x^t to make room for the t=n-k check symbols, dividing that product by g(x) to find the remainder, and then compensating for that remainder by subtracting it. The t check symbols are created by computing the remainder s_r(x):

s_r(x) = p(x)\cdot x^t \ \bmod \ g(x).

Note that the remainder has degree at most t-1, whereas the coefficients of x^{t-1},x^{t-2},\dots,x^1,x^0 in the polynomial p(x)\cdot x^t are zero. Therefore, the following definition of the codeword s(x) has the property that the first k coefficients are identical to the coefficients of p(x):

s(x) = p(x)\cdot x^t - s_r(x)\,.

As a result, the codewords s(x) are indeed elements of \mathbf{C}, that is, they are divisible by the generator polynomial g(x):

s(x) \equiv p(x)\cdot x^t - s_r(x) \equiv s_r(x) - s_r(x) \equiv 0 \mod g(x)\,.
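As an illustration, the following MATLAB sketch carries out this systematic BCH-view encoding over the prime field GF(929) with α = 3, reproducing the RS(7,3) worked example given later in this article (the synthetic-division loop and names are ours):

    q = 929; n = 7; k = 3; t = n - k; alpha = 3;
    g = 1;                                 % g(x) = (x - alpha)...(x - alpha^t)
    for j = 1:t
        g = mod(conv(g, [1, -mod(alpha^j, q)]), q);
    end                                    % g = [1 809 723 568 522]
    p = [3 2 1];                           % message polynomial 3x^2 + 2x + 1
    r = [p, zeros(1, t)];                  % start from p(x) * x^t
    for i = 1:k                            % synthetic division by the monic g(x)
        r(i:i+t) = mod(r(i:i+t) - r(i) * g, q);
    end                                    % r holds s_r(x) = 547x^3 + 738x^2 + 442x + 455
    s = mod([p, zeros(1, t)] - r, q)       % s(x) = p(x) x^t - s_r(x)
    % s = [3 2 1 382 191 487 474]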


Properties
The Reed–Solomon code is a [n, k, n − k + 1] code; in other words, it is a linear block code of length n (over F) with dimension k and minimum Hamming distance n − k + 1. The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n, k); this is known as the Singleton bound. Such a code is also called a maximum distance separable (MDS) code.

The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by n − k, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to (n − k)/2 erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios); these are called erasures. A Reed–Solomon code (like any MDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2E + S ≤ n − k is satisfied, where E is the number of errors and S is the number of erasures in the block.

For practical uses of Reed–Solomon codes, it is common to use a finite field F with 2^m elements. In this case, each symbol can be represented as an m-bit value. The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is n = 2^m − 1. Thus a Reed–Solomon code operating on 8-bit symbols has n = 2^8 − 1 = 255 symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) The number k, with k < n, of data symbols in the block is a design parameter. A commonly used code encodes k = 223 eight-bit data symbols plus 32 eight-bit parity symbols in an n = 255-symbol block; this is denoted as a (n, k) = (255,223) code, and is capable of correcting up to 16 symbol errors per block.

The Reed–Solomon code properties discussed above make Reed–Solomon codes especially well-suited to applications where errors occur in bursts. This is because it does not matter to the code how many bits in a symbol are in error; if multiple bits in a symbol are corrupted, it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single-bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code.

The Reed–Solomon code, like the convolutional code, is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened. The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding.

Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if \alpha is a primitive root of the field F, then by definition all non-zero elements of F take the form \alpha^i for i\in\{1,\dots,q-1\}, where q=|F|. Each polynomial p over F gives rise to a codeword (p(\alpha^1),\dots,p(\alpha^{q-1})). Since the function a\mapsto p(\alpha a) is also a polynomial of the same degree, this function gives rise to a codeword (p(\alpha^2),\dots,p(\alpha^{q})); since \alpha^{q}=\alpha^1 holds, this codeword is the cyclic left shift of the original codeword derived from p. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic. Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic.


Remarks
Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes. The Delsarte–Goethals–Seidel theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols.


Reed–Solomon original view decoders
The decoders described in this section use the Reed–Solomon original view of a codeword as a sequence of polynomial values, where the polynomial is based on the message to be encoded. The same set of fixed values is used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message.


Theoretical decoder
Reed & Solomon (1960) described a theoretical decoder that corrected errors by finding the most popular message polynomial. The decoder only knows the set of values a_1 to a_n and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values, taken k at a time, to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, then any errors in the codeword can be corrected by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient \textstyle \binom{n}{k} = {n! \over (n-k)! k!}, which is infeasible for even modest codes. For a (255,249) code that can correct 3 errors, the naive theoretical decoder would examine 359 billion subsets.
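The subset count quoted above is easy to verify (a one-line check in MATLAB):

    nchoosek(255, 249)    % = 359,895,314,625, about 360 billion subsets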
     


Berlekamp–Welch decoder
In 1986, a decoder known as the Berlekamp–Welch algorithm was developed. It recovers both the original message polynomial and an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O(n^3), where n is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.


Example
Using RS(7,3), GF(929), and the set of evaluation points a_i = i − 1:

If the message polynomial is p(x) = 003 x^2 + 002 x + 001, the codeword is

c = (001, 006, 017, 034, 057, 086, 121)

Errors in transmission might cause this to be received instead:

b = c + e = (001, 006, 123, 456, 057, 086, 121)

Define E(x) as an error locator polynomial with a zero at each error location, and Q(x) = P(x) E(x), where P(x) is the message polynomial. The key equations are:

b_i E(a_i) - Q(a_i) = 0

Assume the maximum number of errors: e = 2, so that E(x) = e_0 + e_1 x + x^2 is monic and Q(x) has degree e + k − 1 = 4. The key equations become:

b_i(e_0 + e_1 a_i) - (q_0 + q_1 a_i + q_2 a_i^2 + q_3 a_i^3 + q_4 a_i^4) = - b_i a_i^2


\begin{bmatrix}
001 & 000 & 928 & 000 & 000 & 000 & 000 \\ 006 & 006 & 928 & 928 & 928 & 928 & 928 \\ 123 & 246 & 928 & 927 & 925 & 921 & 913 \\ 456 & 439 & 928 & 926 & 920 & 902 & 848 \\ 057 & 228 & 928 & 925 & 913 & 865 & 673 \\ 086 & 430 & 928 & 924 & 904 & 804 & 304 \\ 121 & 726 & 928 & 923 & 893 & 713 & 562 \end{bmatrix} \begin{bmatrix} e_0 \\ e_1 \\ q_0 \\ q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix} = \begin{bmatrix} 000 \\ 923 \\ 437 \\ 541 \\ 017 \\ 637 \\ 289 \end{bmatrix}

Using Gaussian elimination:

\begin{bmatrix} 001 & 000 & 000 & 000 & 000 & 000 & 000 \\ 000 & 001 & 000 & 000 & 000 & 000 & 000 \\ 000 & 000 & 001 & 000 & 000 & 000 & 000 \\ 000 & 000 & 000 & 001 & 000 & 000 & 000 \\ 000 & 000 & 000 & 000 & 001 & 000 & 000 \\ 000 & 000 & 000 & 000 & 000 & 001 & 000 \\ 000 & 000 & 000 & 000 & 000 & 000 & 001 \end{bmatrix} \begin{bmatrix} e_0 \\ e_1 \\ q_0 \\ q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix} = \begin{bmatrix} 006 \\ 924 \\ 006 \\ 007 \\ 009 \\ 916 \\ 003 \end{bmatrix}

From the solution, E(x) = x^2 + 924 x + 006 = (x − 2)(x − 3) and Q(x) = 003 x^4 + 916 x^3 + 009 x^2 + 007 x + 006, so P(x) = Q(x) / E(x) = 003 x^2 + 002 x + 001. Recalculating P(a_i) at the points where E(a_i) = 0 (a = 2 and a = 3) corrects the received values, resulting in the corrected codeword c = (001, 006, 017, 034, 057, 086, 121).
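A small MATLAB sketch of this example: it builds the key-equation system over the prime field GF(929) and solves it by modular Gaussian elimination (the elimination code and the brute-force inverse are ours, not part of any standard API):

    q = 929; n = 7;
    a = 0:n-1;
    b = [1 6 123 456 57 86 121];              % received values
    modinv = @(v) find(mod((1:q-1)*v, q) == 1, 1);
    A = zeros(n, n+1);                        % augmented matrix [M | rhs]
    for i = 1:n
        A(i, 1:n) = mod([b(i), b(i)*a(i), -1, -a(i), -a(i)^2, -a(i)^3, -a(i)^4], q);
        A(i, n+1) = mod(-b(i) * a(i)^2, q);
    end
    for c = 1:n                               % Gauss-Jordan elimination mod q
        piv = find(A(c:end, c), 1) + c - 1;   % a row with a nonzero pivot
        A([c piv], :) = A([piv c], :);
        A(c, :) = mod(A(c, :) * modinv(A(c, c)), q);
        for r = [1:c-1, c+1:n]
            A(r, :) = mod(A(r, :) - A(r, c) * A(c, :), q);
        end
    end
    A(:, end)'    % [e0 e1 q0 q1 q2 q3 q4] = [6 924 6 7 9 916 3]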


Gao decoder
In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.


Example
Using the same data as the Berlekamp–Welch example above:

R_{-1} = \prod_{i=1}^n (x - a_i)
R_0 = Lagrange interpolation of \{a_i, b(a_i)\} for i = 1 to n
A_{-1} = 0
A_0 = 1

 i    Ri                                                                         Ai
−1    001 x^7 + 908 x^6 + 175 x^5 + 194 x^4 + 695 x^3 + 094 x^2 + 720 x + 000    000
 0    055 x^6 + 440 x^5 + 497 x^4 + 904 x^3 + 424 x^2 + 472 x + 001              001
 1    702 x^5 + 845 x^4 + 691 x^3 + 461 x^2 + 327 x + 237                        152 x + 237
 2    266 x^4 + 086 x^3 + 798 x^2 + 311 x + 532                                  708 x^2 + 176 x + 532

Divide Q(x) = R_2 and E(x) = A_2 by the most significant coefficient of E(x), 708 (optional), giving E(x) = x^2 + 924 x + 006 and Q(x) = 003 x^4 + 916 x^3 + 009 x^2 + 007 x + 006.

Then P(x) = Q(x) / E(x) = 003 x^2 + 002 x + 001, and the roots of E(x) = (x − 2)(x − 3) mark the error locations a = 2 and a = 3. Recalculating P(x) at those points corrects the received values, resulting in the corrected codeword c = (001, 006, 017, 034, 057, 086, 121).


BCH view decoders
The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.


Peterson–Gorenstein–Zierler decoder
Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961 (D. Gorenstein and N. Zierler, "A class of cyclic linear error-correcting codes in p^m symbols", J. SIAM, vol. 9, pp. 207–214, June 1961). The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961).


Syndrome decoding
The transmitted message is viewed as the coefficients of a polynomial s(x) that is divisible by a generator polynomial g(x).
s(x) = \sum_{i = 0}^{n-1} c_i x^i
g(x) = \prod_{j=1}^{n-k} (x - \alpha^j),

where α is a primitive root.

Since s(x) is divisible by the generator polynomial g(x), it follows that

s(\alpha^i) = 0, \ i=1,2,\ldots,n-k

The transmitted polynomial is corrupted in transit by an error polynomial e(x) to produce the received polynomial r(x).

r(x) = s(x) + e(x)
e(x) = \sum_{i=0}^{n-1} e_i x^i

where ei is the coefficient for the i-th power of x. Coefficient ei will be zero if there is no error at that power of x and nonzero if there is an error. If there are ν errors at distinct powers ik of x, then

e(x) = \sum_{k=1}^\nu e_{i_k} x^{i_k}

The goal of the decoder is to find the number of errors (ν), the positions of the errors (ik), and the error values at those positions (eik). From those, e(x) can be calculated and subtracted from r(x) to get the originally sent message s(x).

The syndromes Sj are defined as

\begin{align} S_j &= r(\alpha^j) = s(\alpha^j) + e(\alpha^j) = 0 + e(\alpha^j) = e(\alpha^j), \quad j=1,2,\ldots,n-k \\
   &= \sum_{k=1}^{\nu} e_{i_k} \left( \alpha^{j} \right)^{i_k}
\end{align}

The advantage of looking at the syndromes is that the message polynomial drops out.
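Since the examples in this article use the prime field GF(929), the syndrome computation is plain modular arithmetic; a minimal MATLAB sketch using the received word from the RS(7,3) example below (the intermediate integers stay well below 2^53, so polyval is exact):

    q = 929; alpha = 3; t = 4;
    r = [3 2 123 456 191 487 474];        % received r(x), highest power first
    S = zeros(1, t);
    for j = 1:t
        S(j) = mod(polyval(r, alpha^j), q);   % S_j = r(alpha^j)
    end
    S                                     % = [732 637 762 925]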


Error locators and error values
For convenience, define the error locators Xk and error values Yk as:
X_k = \alpha^{i_k}, \ Y_k = e_{i_k}

Then the syndromes can be written in terms of the error locators and error values as

S_j = \sum_{k=1}^{\nu} Y_k X_k^{j}

The syndromes give a system of n −  k ≥ 2 ν equations in 2 ν unknowns, but that system of equations is nonlinear in the Xk and does not have an obvious solution. However, if the Xk were known (see below), then the syndrome equations provide a linear system of equations that can easily be solved for the Yk error values.

\begin{bmatrix}
X_1^1 & X_2^1 & \cdots & X_\nu^1 \\ X_1^2 & X_2^2 & \cdots & X_\nu^2 \\ \vdots & \vdots && \vdots \\ X_1^{n-k} & X_2^{n-k} & \cdots & X_\nu^{n-k} \\ \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_\nu \end{bmatrix} = \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-k} \end{bmatrix}

Consequently, the problem is finding the Xk, because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the error values Yk.


Error locator polynomial
There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies the error locations.

Define the error locator polynomial Λ( x) as

\Lambda(x) = \prod_{k=1}^\nu (1 - x X_k ) = 1 + \Lambda_1 x^1 + \Lambda_2 x^2 + \cdots + \Lambda_\nu x^\nu

The zeros of Λ(x) are the reciprocals X_k^{-1}:

\Lambda(X_k^{-1}) = 0

\Lambda(X_k^{-1}) = 1 + \Lambda_1 X_k^{-1} + \Lambda_2 X_k^{-2} + \cdots + \Lambda_\nu X_k^{-\nu} = 0

Multiply both sides by Y_k X_k^{j+\nu} and it will still be zero, where j is any number such that 1 ≤ j ≤ ν.

\begin{align} & Y_k X_k^{j+\nu} \Lambda(X_k^{-1}) = 0. \\ \text{Hence } & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu} X_k^{-1} + \Lambda_2 Y_k X_k^{j+\nu} X_k^{-2} + \cdots + \Lambda_{\nu} Y_k X_k^{j+\nu} X_k^{-\nu} = 0, \\ \text{and so } & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu -2} + \cdots + \Lambda_{\nu} Y_k X_k^j = 0 \\ \end{align}

Sum for k = 1 to ν

\begin{align}
& \sum_{k=1}^\nu ( Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu -2} + \cdots + \Lambda_{\nu} Y_k X_k^{j} ) = 0 \\ & \sum_{k=1}^\nu ( Y_k X_k^{j+\nu} ) + \Lambda_1 \sum_{k=1}^\nu (Y_k X_k^{j+\nu-1}) + \Lambda_2 \sum_{k=1}^\nu (Y_k X_k^{j+\nu -2}) + \cdots + \Lambda_\nu \sum_{k=1}^\nu ( Y_k X_k^j ) = 0 \end{align}

This reduces to

S_{j + \nu} + \Lambda_1 S_{j+\nu-1} + \cdots + \Lambda_{\nu-1} S_{j+1} + \Lambda_{\nu} S_j = 0 \,

S_j \Lambda_{\nu} + S_{j+1}\Lambda_{\nu-1} + \cdots + S_{j+\nu-1} \Lambda_1 = - S_{j + \nu} \

This yields a system of linear equations that can be solved for the coefficients Λ i of the error location polynomial:

\begin{bmatrix}
S_1 & S_2 & \cdots & S_{\nu} \\ S_2 & S_3 & \cdots & S_{\nu+1} \\ \vdots & \vdots && \vdots \\ S_{\nu} & S_{\nu+1} & \cdots & S_{2\nu-1} \end{bmatrix} \begin{bmatrix} \Lambda_{\nu} \\ \Lambda_{\nu-1} \\ \vdots \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} - S_{\nu+1} \\ - S_{\nu+2} \\ \vdots \\ - S_{\nu+\nu} \end{bmatrix}

The above assumes the decoder knows the number of errors ν, but that number has not been determined yet. The PGZ decoder does not determine ν directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ν and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined.
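For ν = 2 the system is just a 2×2 solve; a minimal MATLAB sketch using the syndromes of the RS(7,3), GF(929) example below (the adjugate-based inverse and the brute-force modinv are ours):

    q = 929; S = [732 637 762 925];               % S_1 .. S_4
    modinv = @(v) find(mod((1:q-1)*v, q) == 1, 1);
    M = [S(1) S(2); S(2) S(3)];
    rhs = mod([-S(3); -S(4)], q);                 % = [167; 4]
    d = mod(M(1,1)*M(2,2) - M(1,2)*M(2,1), q);    % nonzero, so nu = 2 is correct
    Minv = mod(modinv(d) * [M(2,2) -M(1,2); -M(2,1) M(1,1)], q);
    mod(Minv * rhs, q)'                           % [Lambda_2 Lambda_1] = [329 821]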


Obtain the error locators from the error locator polynomial
Use the coefficients Λi found in the last step to build the error locator polynomial. The roots of the error locator polynomial can be found by exhaustive search. The error locators are the reciprocals of those roots. The order of coefficients of the error locator polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators themselves (not their reciprocals). Chien search is an efficient implementation of this step.


Calculate the error locations
Calculate the error locations ik by taking the log base α of Xk. This is generally done using a precomputed lookup table.


Calculate the error values
Once the error locators Xk are known, the error values can be determined. This can be done by direct solution for Yk in the error equations given above, or using the Forney algorithm.


Fix the errors
Finally, e(x) is generated from ik and eik and then subtracted from r(x) to get the originally sent message s(x).


Example
Consider the Reed–Solomon code defined in GF(929) with α = 3 and t = n − k = 4 (this is used in PDF417 barcodes) for a RS(7,3) code. The generator polynomial is
g(x) = (x-3)(x-3^2)(x-3^3)(x-3^4) = x^4+809 x^3+723 x^2+568 x+522
If the message polynomial is p(x) = 003 x^2 + 002 x + 001, then a systematic codeword is encoded as follows.
s_r(x) = p(x) \, x^t \mod g(x) = 547 x^3 + 738 x^2 + 442 x + 455
s(x) = p(x) \, x^t - s_r(x) = 3 x^6 + 2 x^5 + 1 x^4 + 382 x^3 + 191 x^2 + 487 x + 474
Errors in transmission might cause this to be received instead.
r(x) = s(x) + e(x) = 3 x^6 + 2 x^5 + 123 x^4 + 456 x^3 + 191 x^2 + 487 x + 474
The syndromes are calculated by evaluating r at powers of α.
S_1 = r(3^1) = 3\cdot 3^6 + 2\cdot 3^5 + 123\cdot 3^4 + 456\cdot 3^3 + 191\cdot 3^2 + 487\cdot 3 + 474 = 732
S_2 = r(3^2) = 637,\;S_3 = r(3^3) = 762,\;S_4 = r(3^4) = 925

\begin{bmatrix}
732 & 637 \\ 637 & 762 \end{bmatrix} \begin{bmatrix} \Lambda_2 \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} -762 \\ -925 \end{bmatrix} = \begin{bmatrix} 167 \\ 004 \end{bmatrix}

Using Gaussian elimination:

\begin{bmatrix}
001 & 000 \\ 000 & 001 \end{bmatrix} \begin{bmatrix} \Lambda_2 \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} 329 \\ 821 \end{bmatrix}

Λ(x) = 329 x^2 + 821 x + 001, with roots x_1 = 757 = 3^(−3) and x_2 = 562 = 3^(−4)
The coefficients can be reversed to produce roots with positive exponents, but typically this isn't used:
R(x) = 001 x^2 + 821 x + 329, with roots 27 = 3^3 and 81 = 3^4
with the log of the roots corresponding to the error locations (right to left, location 0 is the last term in the codeword).

To calculate the error values, apply the Forney algorithm.

Ω(x) = S(x) Λ(x) mod x^4 = 546 x + 732
Λ'(x) = 658 x + 821
e1 = -Ω(x1)/Λ'(x1) = 074
e2 = -Ω(x2)/Λ'(x2) = 122

Subtracting e_1 x^3 and e_2 x^4 from the received polynomial r(x) reproduces the original codeword s.
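A minimal MATLAB sketch of this Forney step over GF(929), using Λ(x) and the syndromes from above (the formal-derivative line and all variable names are ours):

    q = 929;
    modinv = @(v) find(mod((1:q-1)*v, q) == 1, 1);
    S      = [925 762 637 732];          % S(x) = S_4 x^3 + ... + S_1
    Lambda = [329 821 1];
    prd    = mod(conv(S, Lambda), q);    % S(x) * Lambda(x)
    Omega  = prd(end-3:end);             % ... mod x^4: keep the 4 lowest terms
    Omega  = Omega(find(Omega, 1):end)   % = [546 732], i.e. 546 x + 732
    dLam   = mod(Lambda(1:end-1) .* (2:-1:1), q)  % formal derivative = [658 821]
    for xk = [757 562]                   % the roots 3^(-3) and 3^(-4)
        num = mod(polyval(Omega, xk), q);
        den = mod(polyval(dLam, xk), q);
        e = mod(-num * modinv(den), q)   % error values: 074, then 122
    end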


Berlekamp–Massey decoder
The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errors e:

\Delta = S_{i} + \Lambda_1 \ S_{i-1} + \cdots + \Lambda_e \ S_{i-e}

and then adjusts Λ(x) and e so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, C(x) is used to represent Λ(x).


Example
Using the same data as the Peterson–Gorenstein–Zierler example above, the algorithm runs one iteration per syndrome (four in total for t = 4), updating C(x) whenever the discrepancy Δ is nonzero. The final value of C(x) is the error locator polynomial, Λ(x) = 329 x^2 + 821 x + 001.


Euclidean decoder
Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Yasuo Sugiyama's adaptation of the extended Euclidean algorithm.

Define S(x), Λ(x), and Ω(x) for t syndromes and e errors:

S(x) = S_{t} x^{t-1} + S_{t-1} x^{t-2} + \cdots + S_2 x + S_1

\Lambda(x) = \Lambda_{e} x^{e} + \Lambda_{e-1} x^{e-1} + \cdots + \Lambda_{1} x + 1

\Omega(x) = \Omega_{e} x^{e} + \Omega_{e-1} x^{e-1} + \cdots + \Omega_{1} x + \Omega_{0}

The key equation is:

\Lambda(x) S(x) = Q(x) x^{t} + \Omega(x)

For t = 6 and e = 3:

\begin{bmatrix}
\Lambda_3 S_6 & x^8 \\ \Lambda_2 S_6 + \Lambda_3 S_5 & x^7 \\ \Lambda_1 S_6 + \Lambda_2 S_5 + \Lambda_3 S_4 & x^6 \\
         S_6 + \Lambda_1 S_5 + \Lambda_2 S_4 + \Lambda_3 S_3 & x^5 \\
         S_5 + \Lambda_1 S_4 + \Lambda_2 S_3 + \Lambda_3 S_2 & x^4 \\
         S_4 + \Lambda_1 S_3 + \Lambda_2 S_2 + \Lambda_3 S_1 & x^3 \\
         S_3 + \Lambda_1 S_2 + \Lambda_2 S_1 & x^2 \\
         S_2 + \Lambda_1 S_1 & x \\
         S_1 &  \\
     
\end{bmatrix} = \begin{bmatrix} Q_2 x^8 \\ Q_1 x^7 \\ Q_0 x^6 \\ 0 \\ 0 \\ 0 \\ \Omega_2 x^2 \\ \Omega_1 x \\ \Omega_0 \\ \end{bmatrix}

The middle terms are zero due to the relationship between Λ and syndromes.

The extended Euclidean algorithm can find a series of polynomials of the form

Ai(x) S(x) + Bi(x) xt = Ri(x)

where the degree of R decreases as i increases. Once the degree of Ri(x) < t/2, then

Ai(x) = Λ(x)

Bi(x) = -Q(x)

Ri(x) = Ω(x).

B(x) and Q(x) don't need to be saved, so the algorithm becomes:

R−1 = xt
R0 = S(x)
A−1 = 0
A0 = 1
i = 0
while degree of Ri >= t/2
:i = i + 1
:Q = Ri-2 / Ri-1
:Ri = Ri-2 - Q Ri-1
:Ai = Ai-2 - Q Ai-1
To set the low-order term of Λ(x) to 1, divide Λ(x) and Ω(x) by Ai(0):
Λ(x) = Ai / Ai(0)
Ω(x) = Ri / Ai(0)

Ai(0) is the constant (low order) term of Ai.


Example
Using the same data as the Peterson–Gorenstein–Zierler example above:

 i    Ri                                          Ai
−1    001 x^4 + 000 x^3 + 000 x^2 + 000 x + 000   000
 0    925 x^3 + 762 x^2 + 637 x + 732             001
 1    683 x^2 + 676 x + 024                       697 x + 396
 2    673 x + 596                                 608 x^2 + 704 x + 544

Λ(x) = A2 / 544 = 329 x2 + 821 x + 001
Ω(x) = R2 / 544 = 546 x + 732
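A MATLAB sketch of this loop over the prime field GF(929), reproducing the table above (the long-division code, the padl helper, and the normalization step are ours; as a script it needs MATLAB R2016b+ for the local function):

    q = 929; t = 4;
    modinv = @(v) find(mod((1:q-1)*v, q) == 1, 1);
    Rprev = [1 zeros(1, t)];             % R_{-1} = x^t
    Rcur  = [925 762 637 732];           % R_0 = S(x), highest power first
    Aprev = 0; Acur = 1;
    while length(Rcur) - 1 >= t/2        % while degree of R_i >= t/2
        rr = Rprev;                      % long division Rprev / Rcur mod q
        quo = zeros(1, length(Rprev) - length(Rcur) + 1);
        for i = 1:length(quo)
            quo(i) = mod(rr(i) * modinv(Rcur(1)), q);
            rr(i:i+length(Rcur)-1) = mod(rr(i:i+length(Rcur)-1) - quo(i)*Rcur, q);
        end
        rr = rr(find(rr, 1):end);        % strip leading zeros
        L = max(length(Aprev), length(quo) + length(Acur) - 1);
        Anew = mod(padl(Aprev, L) - padl(conv(quo, Acur), L), q);  % A_i = A_{i-2} - Q A_{i-1}
        Rprev = Rcur; Rcur = rr;
        Aprev = Acur; Acur = Anew;
    end
    c0 = modinv(Acur(end));              % normalize so that Lambda(0) = 1
    Lambda = mod(Acur * c0, q)           % = [329 821 1]
    Omega  = mod(Rcur * c0, q)           % = [546 732]

    function vp = padl(v, L)             % left-pad v with zeros to length L
        vp = [zeros(1, L - length(v)) v];
    end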


Decoder using discrete Fourier transform
A discrete Fourier transform can be used for decoding (Shu Lin and Daniel J. Costello Jr., Error Control Coding, second edition, 2004, pp. 255–262). To avoid conflict with syndrome names, let c(x) = s(x) denote the encoded codeword. r(x) and e(x) are the same as above. Define C(x), E(x), and R(x) as the discrete Fourier transforms of c(x), e(x), and r(x). Since r(x) = c(x) + e(x), and since a discrete Fourier transform is a linear operator, R(x) = C(x) + E(x).

Transform r(x) to R(x) using discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, t coefficients of R(x) and E(x) are the same as the syndromes:

R_j = E_j = S_j = r(\alpha^j)
for \ 1 \le j \le t

Use R_1 through R_t as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders.

Let v = number of errors. Generate E(x) using the known coefficients E_1 to E_t, the error locator polynomial, and these formulas

E_0 = - \frac{1}{\Lambda_v}(E_{v} + \Lambda_1 E_{v-1} + \cdots + \Lambda_{v-1} E_{1})
E_j = -(\Lambda_1 E_{j-1} + \Lambda_2 E_{j-2} + \cdots + \Lambda_v E_{j-v})
for \ t < j < n

Then calculate C(x) = R(x) - E(x) and take the inverse transform (polynomial interpolation) of C(x) to produce c(x).


Decoding beyond the error-correction bound
The Singleton bound states that the minimum distance d of a linear block code of size (n, k) is upper-bounded by n − k + 1. The distance d was usually understood to limit the error-correction capability to ⌊(d − 1)/2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊(n − k)/2⌋ errors. However, this error-correction bound is not exact.

In 1999, Madhu Sudan and Venkatesan Guruswami at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes", introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code. It applies to Reed–Solomon codes and more generally to algebraic geometric codes. This algorithm produces a list of codewords (it is a list decoding algorithm) and is based on interpolation and factorization of polynomials over GF(2^m) and its extensions.


Soft-decoding
The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. Soft-decision methods, by contrast, make use of reliability information: for example, a decoder could associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of the symbol. The advent of LDPC and turbo codes, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami. In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.


Matlab Example

Encoder
Here we present a simple Matlab implementation for an encoder.

function [encoded] = rsEncoder(msg, m, prim_poly, n, k)
   %RSENCODER Encode message with the Reed-Solomon algorithm
   % m is the number of bits per symbol
   % prim_poly: Primitive polynomial p(x), e.g. 301 for Data Matrix
   % k is the size of the message
   % n is the total size (k+redundant)
   % Example: msg = uint8('Test')
   % enc_msg = rsEncoder(msg, 8, 301, 12, numel(msg));
     

   % Get the alpha
   alpha = gf(2, m, prim_poly);
     

   % Get the Reed-Solomon generating polynomial g(x)
   g_x = genpoly(k, n, alpha);
     

   % Multiply the information by X^(n-k), or just pad with zeros at the end to
   % get space to add the redundant information
   msg_padded = gf([msg zeros(1, n-k)], m, prim_poly);
     

   % Get the remainder of the division of the extended message by the
   % Reed-Solomon generating polynomial g(x)
   [~, remainder] = deconv(msg_padded, g_x);
     

   % Now return the message with the redundant information
   encoded = msg_padded - remainder;
     

end

% Find the Reed-Solomon generating polynomial g(x), by the way this is the
% same as the rsgenpoly function on matlab
function g = genpoly(k, n, alpha)

   g = 1;
   % A multiplication on the galois field is just a convolution
    % The range 1 : n-k is evaluated once before the loop starts, so reusing
    % k as the loop variable does not change the iteration bounds
    for k = mod(1 : n-k, n)
       g = conv(g, [1 alpha .^ (k)]);
   end
     
end


Decoder
Now the decoding part:

function [decoded, error_pos, error_mag, g, S] = rsDecoder(encoded, m, prim_poly, n, k)
   %RSDECODER Decode a Reed-Solomon encoded message
   %   Example:
   % [dec, ~, ~, ~, ~] = rsDecoder(enc_msg, 8, 301, 12, numel(msg))
   max_errors = floor((n-k)/2);
   orig_vals = encoded.x;
   % Initialize the error vector
   errors = zeros(1, n);
   g = [];
   S = [];
     

   % Get the alpha
   alpha = gf(2, m, prim_poly);
     

   % Find the syndromes (check whether dividing the received message by the
   % generator polynomial leaves a zero remainder)
   Synd = polyval(encoded, alpha .^ (1:n-k));
   Syndromes = trim(Synd);
     

   % If all syndromes are zeros (perfectly divisible) there are no errors
   if isempty(Syndromes.x)
       decoded = orig_vals(1:k);
       error_pos = [];
       error_mag = [];
       g = [];
       S = Synd;
       return;
   end
     

   % Prepare for the euclidean algorithm (Used to find the error locating
   % polynomials)
   r0 = [1, zeros(1, 2*max_errors)]; r0 = gf(r0, m, prim_poly); r0 = trim(r0);
   size_r0 = length(r0);
   r1 = Syndromes;
   f0 = gf([zeros(1, size_r0-1) 1], m, prim_poly);
   f1 = gf(zeros(1, size_r0), m, prim_poly);
   g0 = f1; g1 = f0;
     

   % Do the euclidean algorithm on the polynomials r0(x) and Syndromes(x) in
   % order to find the error locating polynomial
   while true
       % Do a long division
       [quotient, remainder] = deconv(r0, r1);
       % Add some zeros
       quotient = pad(quotient, length(g1));
     

       % Find quotient*g1 and pad
       c = conv(quotient, g1);
       c = trim(c);
       c = pad(c, length(g0));
     

       % Update g as g0-quotient*g1
       g = g0 - c;
     

       % Check if the degree of remainder(x) is less than max_errors
       if all(remainder(1:end - max_errors) == 0)
           break;
       end
     

       % Update r0, r1, g0, g1 and remove leading zeros
       r0 = trim(r1); r1 = trim(remainder);
       g0 = g1; g1 = g;
   end
     

   % Remove leading zeros
   g = trim(g);
     

   % Find the zeros of the error polynomial on this galois field
   evalPoly = polyval(g, alpha .^ (n-1 : -1 : 0));
   error_pos = gf(find(evalPoly == 0), m);
     

   % If no error position is found, return the received message as-is,
   % because there is basically nothing more we can do
   if isempty(error_pos)
       decoded = orig_vals(1:k);
       error_mag = [];
       return;
   end
     

   % Prepare a linear system to solve the error polynomial and find the error
   % magnitudes
   size_error = length(error_pos);
   Syndrome_Vals = Syndromes.x;
   b(:, 1) = Syndrome_Vals(1:size_error);
   for idx = 1 : size_error
       e = alpha .^ (idx*(n-error_pos.x));
       err = e.x;
       er(idx, :) = err;
   end
     

   % Solve the linear system
   error_mag = (gf(er, m, prim_poly) \ gf(b, m, prim_poly))';
   % Put the error magnitude on the error vector
   errors(error_pos.x) = error_mag.x;
   % Bring this vector to the galois field
   errors_gf = gf(errors, m, prim_poly);
     

   % Now to fix the errors just add with the encoded code
   decoded_gf = encoded(1:k) + errors_gf(1:k);
   decoded = decoded_gf.x;
     

end

% Remove leading zeros from galois array
function gt = trim(g)

   gx = g.x;
   gt = gf(gx(find(gx, 1) : end), g.m, g.prim_poly);
     
end

% Add leading zeros
function xpad = pad(x, k)

    len = length(x);
    if (len < k)
        xpad = [zeros(1, k - len) x];
    end

end
     
     

