
Elements of Coding Theory

Error-detecting and -correcting codes

Radu Trı̂mbiţaş

UBB

January 2013

Radu Trı̂mbiţaş (UBB) Elements of Coding Theory January 2013 1 / 42


Outline I

1 Hamming’s Theory
Definitions
Hamming Bound

2 Linear Codes
Basics
Singleton bounds
Reed-Solomon Codes
Multivariate Polynomial Codes
BCH Codes



Hamming’s Problem

Hamming studied magnetic storage devices. He wanted to build (out
of magnetic tapes) a reliable storage medium where data was stored
in blocks of size 63 (this is a nice number, we will see why later).
When you try to read information from this device, bits may be
corrupted, i.e. flipped (from 0 to 1, or 1 to 0). Let us consider the
case that at most 1 bit in every block of 63 bits may be corrupted.
How can we store the information so that all is not lost? We must
design an encoding of the message to a codeword with enough
redundancy so that we can recover the original sequence from the
received word by decoding. (In Hamming's problem about storage we
still say "received word".)
Naive solution (repetition code): store each bit three times, so
that any one bit that is erroneously flipped can be detected and
corrected by majority decoding on its block of three.
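The repetition code can be sketched in a few lines (an illustrative sketch, not Hamming's construction):

```python
# Repetition code: store each bit three times; decode by majority vote.
# This corrects any single flipped bit per block of three, at rate 1/3.

def rep3_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    # Majority vote over each consecutive block of three bits.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
noisy = rep3_encode(msg)
noisy[4] ^= 1                      # flip one bit inside the second block
assert rep3_decode(noisy) == msg   # the single error is corrected
```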



Basic Notions I
The Hamming encoding tries to do better with the following matrix:

    G = [ 1 0 0 0 0 1 1 ]
        [ 0 1 0 0 1 0 1 ]
        [ 0 0 1 0 1 1 0 ]
        [ 0 0 0 1 1 1 1 ]
Given a sequence of bits, we chop it into 4-bit chunks. Let b be the
vector representing one such chunk; then we encode b as the 7-bit word
bG, where all arithmetic is performed mod 2. Clearly the efficiency,
4/7, is much better, though we still need to show that this code can
correct one-bit errors.
Claim 1
∀ b1 ≠ b2, b1 G and b2 G differ in ≥ 3 coordinates.

First we present some definitions. We denote by Σ the alphabet; the
ambient space Σ^n represents the set of n-letter words over the
alphabet Σ.
Basic Notions II

Definition 2
The Hamming distance ∆(x, y) between x, y ∈ Σ^n is the number of
coordinates i where x_i ≠ y_i.

The Hamming distance is a metric since it is easy to verify that:

∆(x, y ) = ∆(y , x )
∆(x, z ) ≤ ∆(x, y ) + ∆(y , z )
∆(x, y ) = 0 ⇔ x = y
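The distance and its metric properties are easy to check directly; a small sketch:

```python
# Hamming distance: number of coordinates where two equal-length words differ.

def hamming_distance(x, y):
    assert len(x) == len(y)
    return sum(xi != yi for xi, yi in zip(x, y))

assert hamming_distance("10110", "10011") == 2
assert hamming_distance("0101", "0101") == 0
# Triangle inequality on one triple:
assert hamming_distance("0000", "1111") <= \
       hamming_distance("0000", "1100") + hamming_distance("1100", "1111")
```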

In our case, consider the space of all possible encoded words {0, 1}7 .
If we can prove our claim, then this means that in this space and
under the Hamming metric, each code word bG will have no other
code word within a radius of 2 around it.



Basic Notions III
In fact, any point at Hamming distance 1 from a code word is
guaranteed to be closer to that code word than any other, and thus
we can correct one bit errors.

Definition 3
An Error Correcting Code is a set of code words C ⊆ Σn . The minimum
distance of C , written ∆(C ), is the minimum Hamming distance between
pairs of different code words in C .

Definition 4
An Error Correcting Code is said to be e error detecting if it can tell
that an error occurred when there were ≤ e errors, and at least one
error occurred. It is said to be t error correcting if it can tell where
the errors are when there were ≤ t errors, and at least one error
occurred.



Basic Notions IV

To formalize these notions we define the ball of radius t centered at x:

Ball(x, t) = {y ∈ Σ^n | ∆(x, y) ≤ t}

Definition 5
Formally: A code C is t-error correcting if ∀ x ≠ y ∈ C,
Ball(x, t) ∩ Ball(y, t) = ∅.



Basic Notions V

Definition 6
Formally: A code is e-error detecting if ∀ x ∈ C, Ball(x, e) ∩ C = {x}.



Basic Notions VI

Definition 7
The Hamming weight wt (v ) of a vector v is the number of non-zero
coordinates of v .



Basic Notions VII
Proposition 8
∆(C ) = 2t + 1 ⇔ the code C is 2t error detecting and t error correcting.

Proof.
∆(C) = 2t + 1 ⇒ 2t error detecting, since a word with ≤ 2t errors would
simply not be a code word. ∆(C) = 2t + 1 ⇒ t error correcting, since a
word with ≤ t errors would be closer to the original code word than to
any other code word. We omit the reverse implications for now, though we
note that the case for t error correcting is easy.

We now present some key code parameters:

q = |Σ|
n = block length of the code (encoded length)
k = message length (pre-encoded length) = log_q |C|
d = ∆(C)
Basic Notions VIII

Usually, we fix three of the above parameters and try to optimize the
fourth.
Clearly, larger k and d values, and smaller n values are desirable. It
also turns out that smaller q values are desirable.
We may also try to maximize the rate of the code (efficiency ratio) k/n
and the relative distance d/n. We denote an error-correcting code with
these parameters as an (n, k, d)_q code.

Thus, proving our claim boils down to showing that {bG |b ∈ {0, 1}4 } is a
(7, 4, 3)2 code.



Basic Notions IX

Proof.
Assume that ∆(b1 G, b2 G) < 3 for some b1 ≠ b2 ⇒ ∆((b1 − b2)G, 0) < 3 ⇒
∃ non-zero c ∈ {0, 1}^4 s.t. wt(cG) < 3. Consider the matrix

    H = [ 0 0 0 1 1 1 1 ]^T
        [ 0 1 1 0 0 1 1 ]
        [ 1 0 1 0 1 0 1 ]

i.e., the 7 × 3 matrix whose i-th row is the binary representation of i.
It can be shown that {bG | b ∈ {0, 1}^4} = {y | yH = 0}. Hence, it
suffices to prove that if a non-zero y ∈ {0, 1}^7 satisfies yH = 0, then
wt(y) ≥ 3, since this would contradict that wt(cG) < 3 for some
non-zero c.
Assume wt(y) = 2 ⇒ 0 = yH = h_i + h_j for some i ≠ j, where
h_1, . . . , h_7 are the rows of the matrix H. But the rows of H are
distinct, so this is impossible. Assume wt(y) = 1 ⇒ some row h_i is all
zeros. Again, impossible by construction. Thus wt(y) is at least 3 and
we are done.
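For this small code the claim can also be verified exhaustively; a sketch:

```python
# Brute-force check that the (7,4) Hamming code has minimum distance 3:
# by linearity, the minimum distance equals the minimum weight of a
# non-zero codeword.
from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(b):
    return tuple(sum(b[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

codewords = [encode(b) for b in product([0, 1], repeat=4)]
assert len(set(codewords)) == 16                      # 2^4 distinct codewords
assert min(sum(c) for c in codewords if any(c)) == 3  # minimum weight 3
```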



Basic Notions X

From the properties of the matrix used above, we see that we may
generalize the H matrix to codes of block length n = 2^ℓ − 1 and
minimum distance ≥ 3 simply by forming the (2^ℓ − 1) × ℓ matrix
whose rows are the binary representations of all integers between
1 and 2^ℓ − 1. The message length k that this generalized H
corresponds to is left as an exercise.
Error correcting codes find application in a variety of different fields in
mathematics and computer science.
In Algorithms, they can be viewed as interesting data structures.
In Complexity, they are used for pseudo-randomness, hardness
amplification, and probabilistically checkable proofs.
In Cryptography, they are applied to implement secret sharing schemes
and in proposed cryptosystems.
Finally, they also arise in combinatorics and recreational mathematics.



Hamming Bound I

Lemma 9
For Σ = {0, 1} and x ∈ Σ^n,

    |Ball(x, t)| = ∑_{i=0}^{t} (n choose i) ≡ Vol(n, t)

Proof.
A vector y ∈ Ball(x, t) iff the number of coordinates of y that differ
from x is at most t. Since Σ = {0, 1}, a coordinate can differ in only
one way, namely by being the opposite bit (0 if the bit of x is 1, and 1
if the bit of x is 0). Thus, to count the number of points in the ball,
we choose i of the n coordinates to flip, for i from 0 to t. We define
Vol(n, t) to be the number of points in the ball Ball(x, t).



Hamming Bound II

Theorem 10
For Σ = {0, 1}, if C is t-error correcting, then

    |C| ≤ 2^n / Vol(n, t) = 2^n / ∑_{i=0}^{t} (n choose i).

Proof.
If the code C is t-error correcting, then ∀x 6= y ∈ C ,
Ball (x, t ) ∩ Ball (y , t ) = ∅, namely, the balls do not intersect. Thus,
|C |Vol (n, t ) ≤ Vol (AmbientSpace ). Note that Vol (AmbientSpace ) = 2n ,
so dividing gives the result.
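For the (7,4,3) Hamming code (t = 1) the bound is met with equality, i.e. the code is perfect; a quick check:

```python
# Hamming bound: |C| <= 2^n / Vol(n, t), where Vol(n, t) counts the
# points of a radius-t Hamming ball in {0,1}^n.
from math import comb

def vol(n, t):
    return sum(comb(n, i) for i in range(t + 1))

n, t = 7, 1
assert vol(n, t) == 8                    # 1 center + 7 neighbors
assert 2**n // vol(n, t) == 16 == 2**4   # (7,4,3) meets the bound exactly
```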



Hamming Bound III

To decode a Hamming code (of minimum distance 3), we note that
multiplying the received word by the parity check matrix H associated
to the code will give 0 if no error occurred, while if 1 error
occurred it will give the binary representation of the index of the bit
where this error occurred.
This is true because suppose we receive the codeword c with an error
ei (where ei is the 0/1 vector which is 1 only in the i-th coordinate).
Then
(c + ei )H = cH + ei H = ei H.
Now note that ei H is simply the i-th row of H which by construction
is the binary representation of i.
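Syndrome decoding can be sketched as follows (assuming the convention above that row i of H is the binary representation of i, most significant bit first):

```python
# Syndrome decoding for the (7,4,3) Hamming code: the syndrome yH is 0
# for a codeword, and otherwise spells out (in binary) the position of
# the single flipped bit.

H = [[(i >> k) & 1 for k in (2, 1, 0)] for i in range(1, 8)]  # row i-1 = binary of i

def syndrome(y):
    return [sum(y[i] * H[i][j] for i in range(7)) % 2 for j in range(3)]

def correct(y):
    s = syndrome(y)
    pos = 4 * s[0] + 2 * s[1] + s[2]   # read the syndrome as an integer
    if pos:                            # non-zero: flip the bit it names
        y = y[:]
        y[pos - 1] ^= 1
    return y

c = [1, 0, 1, 1, 0, 1, 0]              # a codeword of the (7,4) code
assert syndrome(c) == [0, 0, 0]
y = c[:]; y[2] ^= 1                    # corrupt position 3
assert correct(y) == c                 # the syndrome names position 3
```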



Richard Wesley Hamming (1915-1998)
American mathematician and computer scientist, with contributions to
Information Theory, Coding Theory, and Numerical Analysis. He was
involved in the Manhattan Project. Fellow of the ACM.



Linear Codes I

Definition 11
If the alphabet Σ is a finite field, then we say that a code C is linear if it is
a linear subspace of Σn . That is, x, y ∈ C implies x + y ∈ C and
x ∈ C , a ∈ Σ implies ax ∈ C .

Notationally, we represent linear codes with square brackets:
[n, k, d]_q. All the codes we will see in this class are linear.
Linear codes are interesting for many reasons. For example, the
encoding function is simple, just matrix multiplication. It is also easy
to detect errors: since the code is linear, there is a parity check
matrix H such that C = {y : yH = 0}, and therefore we can detect errors
again by simple matrix multiplication.



Linear Codes II

Another interesting feature of a linear code is its dual. Let C be a
linear code generated by the matrix G. C has a parity check matrix H.
We define the dual code of C as the code generated by H^T, the
transpose of the matrix H. It can be shown that the dual of the dual
of C is C itself.
For example, consider the Hamming code with block length
n = 2^ℓ − 1. Its dual is the code generated by an ℓ × n matrix whose
columns are all the non-zero binary strings of length ℓ. It is easy to
see that the encoding of a message b is

    (⟨b, x⟩)_{x ∈ {0,1}^ℓ \ {0}},

where ⟨·, ·⟩ denotes the inner product modulo 2. In other words, the
encoding of b consists of the parities of all the non-empty subsets of
the bits of b.



Linear Codes III

It can be shown that if b ≠ 0 then ⟨b, x⟩ = 1 for at least 2^{ℓ−1} of
the x's in {0, 1}^ℓ \ {0}. This implies that the dual code is a
[2^ℓ − 1, ℓ, 2^{ℓ−1}]_2 code. It can be shown that this is the best
possible, in the sense that there is no (2^ℓ − 1, ℓ + e, 2^{ℓ−1})_2
code. This code is called the simplex code or the Hadamard code. The
second name comes from the French mathematician Jacques Hadamard, who
studied the n × n matrices M such that MM^T = nI, where I is the
n × n identity matrix. It can be shown that if we form a matrix with
the codewords from the previous code, replace 0 with −1, and pad the
last bit with 0, then we obtain a Hadamard matrix.



Singleton bounds I

So far we have discussed Shannon's theory, Hamming's metric and
Hamming codes, and Hadamard codes. We are looking for asymptotically
good codes that correct some constant fraction of errors while still
transmitting the information through the noisy channel at a positive
rate. In our usual notation of [n, k, d]_q codes, Hamming's
construction gives a [n, n − log2 n, 3]_2 code, while Hadamard codes
were [n, log2 n, n/2]_2 codes. But we are looking for codes where k/n
and d/n have a lower bound independent of n, and where q does not grow
to ∞. In this scenario, we discuss the following simple impossibility
result:
Theorem 12
For a code C : Σk → Σn with minimum distance d, n ≥ k + d − 1.



Singleton bounds II

Proof.
In other words, we want to prove that d ≤ n − (k − 1). Just project all
the codewords onto the first (k − 1) coordinates. Since there are q^k
different codewords, by the pigeonhole principle at least two of them
must agree on these (k − 1) coordinates. But then these two can
disagree only on the remaining n − (k − 1) coordinates, and hence the
minimum distance of the code C satisfies d ≤ n − (k − 1).



Reed-Solomon Codes I

Definition 13
Let Σ = F_q be a finite field and α1, . . . , αn distinct elements of
F_q. Given n, k and F_q such that k ≤ n ≤ q, we define the encoding
function for Reed-Solomon codes as C : Σ^k → Σ^n which, on message
m = (m0, m1, . . . , m_{k−1}), considers the polynomial

    p(X) = ∑_{i=0}^{k−1} m_i X^i

and sets C(m) = ⟨p(α1), p(α2), . . . , p(αn)⟩.

Theorem 14
The Reed-Solomon code matches the Singleton bound, i.e., it is an
[n, k, n − (k − 1)]_q code.



Reed-Solomon Codes II

Proof.
The proof is based only on the simple fact that a non-zero polynomial
of degree l over a field can have at most l zeros.
For the Reed-Solomon code, two distinct codewords (with corresponding
polynomials p1 and p2) agree at the i-th coordinate iff
(p1 − p2)(αi) = 0. But p1 − p2, by the above fact, can have at most
(k − 1) zeros, which means that the minimum distance d ≥ n − (k − 1).
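A sketch over a small prime field (q = 7 is an illustrative choice; the slides allow any F_q with k ≤ n ≤ q):

```python
# Reed-Solomon encoding over F_7: encode m by evaluating
# p(X) = sum_i m_i X^i at n = q distinct points, arithmetic mod q.

q = 7
alphas = list(range(q))          # n = 7 distinct evaluation points

def rs_encode(m):
    return [sum(mi * pow(a, i, q) for i, mi in enumerate(m)) % q
            for a in alphas]

k = 3
c1 = rs_encode([1, 2, 0])        # p1 = 1 + 2X
c2 = rs_encode([1, 2, 3])        # p2 = 1 + 2X + 3X^2
dist = sum(x != y for x, y in zip(c1, c2))
assert dist >= q - (k - 1)       # minimum distance n - k + 1 = 5 here
```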

Reed-Solomon codes are linear. This can be easily verified by the fact
that the polynomials of degree ≤ (k − 1) form a vector space (i.e., if
p1, p2 are polynomials of degree ≤ (k − 1), then so are βp1 and
p1 + p2). Since the polynomials
p(X) = 1, p(X) = X, . . . , p(X) = X^{k−1} form a basis for this


Reed-Solomon Codes III

vector space, we can also find a generator matrix for Reed-Solomon
codes:

    G = [ 1          1          . . .   1          ]
        [ α1         α2         . . .   αn         ]
        [ :          :                  :          ]
        [ α1^{k−1}   α2^{k−1}   . . .   αn^{k−1}   ]

One can also prove the theorem about the minimum distance of
Reed-Solomon codes by using the fact that any k columns of G are
linearly independent (because αi ’s are distinct and thus the
Vandermonde matrix formed by the k columns is non-singular).



Reed-Solomon Codes IV
Using Reed-Solomon codes with k = n/2 we can get a
[n, n/2, (n + 2)/2]_q code, which means that we can correct many more
errors than before. Typically Reed-Solomon codes are used for storage
of information on CDs, because they are robust against bursty errors
that come in a contiguous manner, unlike the random error model studied
by Shannon. Also, if some information is erased in the corrupted
encoding, we can still retrieve the original message by interpolating
the polynomial on the remaining values we get.
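Erasure recovery can be sketched with Lagrange interpolation: any k surviving evaluations determine the degree-(k − 1) polynomial (q = 7 again an illustrative choice):

```python
# Recover a Reed-Solomon codeword from erasures: interpolate the unique
# polynomial of degree < k through any k surviving (alpha, value) pairs.

q = 7

def interpolate(points, x):
    # Lagrange interpolation mod q (q prime, so inverses via Fermat).
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q
    return total

# p(X) = 1 + 2X + 3X^2 evaluated at all of F_7; all but k = 3 points erased:
codeword = [(a, (1 + 2 * a + 3 * a * a) % q) for a in range(q)]
survivors = codeword[:3]
# The erased evaluations are recovered exactly:
assert all(interpolate(survivors, a) == v for a, v in codeword)
```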
A way to visualize the Reed-Solomon code over a binary alphabet is to
consider q = n = 2^m. This gives
C : {0, 1}^{k log2 n} → {0, 1}^{n log2 n}, a
[n log2 n, k log2 n, n − (k − 1)]_2 code. And if you put n log2 n = N
and n − (k − 1) = 5, then this gives a [N, N − 4 log2 N, 5]_2 code
(approximately). As we have already seen, Hamming's construction gave a
[N, N − log2 N, 3]_2 code, and Hamming's impossibility result said that


Reed-Solomon Codes V

for a [N, k, 2t + 1]_2 code, k ≤ N − t log2 N (roughly). Reed-Solomon
codes don't achieve this but give the fairly close k = N − 2t log2 N.
(We will discuss later BCH codes, which match this bound.)



Irving Stoy Reed (1923 – 2012) was an American mathematician
and engineer. He is best known for co-inventing a
class of algebraic error-correcting and
error-detecting codes known as Reed–Solomon
codes in collaboration with Gustave Solomon. He
also co-invented the Reed–Muller code.
Reed made many contributions to areas of
electrical engineering including radar, signal
processing, and image processing.

Gustave Solomon (1930 – 1996) was a
mathematician and electrical engineer who was
one of the founders of the algebraic theory of
error detection and correction. Solomon was best
known for developing, along with Irving S. Reed,
the algebraic error correction and detection codes
named the Reed-Solomon codes.



Multivariate Polynomial Codes I

Here, instead of considering polynomials in one variable (as in
Reed-Solomon codes), we will consider multivariate polynomials. For
example, consider the following encoding, similar to Reed-Solomon codes
but using bivariate polynomials.
Definition 15
Let Σ = F_q and let k = l^2 and n = q^2. A message
m = (m00, m01, . . . , mll) is treated as the coefficients of a
bivariate polynomial p(X, Y) = ∑_{i=0}^{l} ∑_{j=0}^{l} m_ij X^i Y^j,
which has degree l in each variable. The encoding is just the
evaluation of the polynomial over all the elements of F_q × F_q, i.e.,
C(m) = ⟨p(x, y)⟩_{(x,y) ∈ F_q × F_q}.

But what is the minimum distance of this code? We will use the
following lemma to find it.



Multivariate Polynomial Codes II

Lemma 16 (Schwartz-Zippel lemma)
A multivariate polynomial Q(X1, . . . , Xm) (not identically 0) of
total degree L is non-zero on at least a (1 − L/|S|) fraction of points
in S^m, where S ⊆ F_q.



Multivariate Polynomial Codes III

proof-of-lemma . . . .
Induction on the number of variables. For m = 1 it is easy, as we know
that Q(X1) can have at most L zeros over the field F_q. Now assume that
the induction hypothesis is true for multivariate polynomials with up
to (m − 1) variables, for m > 1. Consider

    Q(X1, . . . , Xm) = ∑_{i=0}^{t} X1^i Qi(X2, . . . , Xm),

where t ≤ L is the largest exponent of X1 in Q(X1, . . . , Xm). So the
total degree of Qt(X2, . . . , Xm) is at most (L − t). ...



. . . proof-of-lemma.
The induction hypothesis ⇒ Qt(X2, . . . , Xm) ≠ 0 on at least a
(1 − (L − t)/|S|) fraction of points in S^{m−1}. But suppose
Qt(s2, . . . , sm) ≠ 0; then Q(X1, s2, . . . , sm) is a
not-identically-zero polynomial of degree t in X1, and therefore is
non-zero on at least a (1 − t/|S|) fraction of choices for X1. So,
putting it all together, Q(X1, . . . , Xm) is non-zero on at least a

    (1 − (L − t)/|S|)(1 − t/|S|) ≥ 1 − L/|S|

fraction of points in S^m.

Using this, for m = 2 and S = F_q, we get that the above bivariate
polynomial code is a [q^2, l^2, (1 − 2l/q) q^2]_q code.
Some interesting cases of multivariate polynomial codes include the
Reed-Solomon code (l = k and m = 1), the bivariate polynomial code
(l = √k and m = 2), and the Hadamard code (l = 1, m = k and q = 2)!
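The Schwartz-Zippel bound can be sanity-checked numerically on a small example (the polynomial Q and the field F_11 are illustrative choices):

```python
# Empirical check of Schwartz-Zippel: Q(X, Y) = X^2*Y + 3*Y has total
# degree L = 3, so over S = F_11 it is non-zero on at least
# (1 - 3/11) * |S|^2 of the points of S^2.

q, L = 11, 3
nonzero = sum(1 for x in range(q) for y in range(q)
              if (x * x * y + 3 * y) % q != 0)
assert nonzero >= (1 - L / q) * q**2
```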



BCH Codes I

A class of important and widely used cyclic codes that can correct
multiple errors, developed by R. C. Bose and D. K. Ray-Chaudhuri
(1960) [4] and independently by A. Hocquenghem (1959) [5].
Let C1 be the code over F_n = F_{2^t}, n = 2^t, with the following
parity check matrix:

    H = [ 1  α1  α1^2  α1^3 ]
        [ 1  α2  α2^2  α2^3 ]
        [ :  :    :     :   ]
        [ 1  αn  αn^2  αn^3 ]

C1 is an [n, n − 4, 5]_n code. The reason why the code has distance 5
is as follows. If the code had distance ≤ 4, there would exist 4 rows
of H that are linearly dependent. However, this cannot happen, because
the submatrix consisting of any 4 rows of H is a Vandermonde matrix,
whose determinant is non-zero when the elements are distinct.
BCH Codes II

Now consider C_BCH = C1 ∩ {0, 1}^n. Clearly, the length and the
distance of the code do not change, so C_BCH is an [n, ?, 5]_2 code.
The main question here is how many codewords there are in C_BCH. We
know that the all-zero codeword is in C_BCH, but is there any other
codeword?
Let's represent all entries of F_n by vectors such that:

    F_{2^t} ←→ F_2^t
    α ←→ V_α
    1 ←→ (1 0 0 . . . 0)


BCH Codes III

Applying the representation above to H, we get a new matrix:

    H' = [ 1  V_{α1}  V_{α1^2}  V_{α1^3} ]
         [ 1  V_{α2}  V_{α2^2}  V_{α2^3} ]
         [ :  :        :         :       ]
         [ 1  V_{αn}  V_{αn^2}  V_{αn^3} ]

If a {0, 1} vector X = [x1 . . . xn] satisfies XH' = 0 then, block by
block (here for the second block of columns),

    ∑ xi V_{αi} = 0  ⇒  V_{∑ xi αi} = 0  ⇒  ∑ xi αi = 0,

and likewise for the other blocks, so XH = 0.



BCH Codes IV

Consider a matrix H̃ equal to H' with the third column removed:

    H̃ = [ 1  V_{α1}  V_{α1^3} ]
         [ 1  V_{α2}  V_{α2^3} ]
         [ :  :        :       ]
         [ 1  V_{αn}  V_{αn^3} ]

Proposition 17
For any X ∈ {0, 1}^n such that X H̃ = 0, XH = 0.



BCH Codes V

Proof.
The only question is whether ∑ xi αi^2 = 0. Over F_{p^t} we have
(x + y)^p = x^p + y^p for all x, y. Therefore, since xi ∈ {0, 1}
implies xi^2 = xi,

    ∑ xi αi^2 = ∑ xi^2 αi^2 = (∑ xi αi)^2 = 0.

The second and third columns of H̃ impose log n linear constraints each
(and the first column one more), so the dimension of the code is
n − 2 log n − 1. Thus, C_BCH is an [n, n − 2 log n − 1, 5]_2 code.
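The identity used above, (x + y)^p = x^p + y^p in characteristic p (the "freshman's dream"), can be sanity-checked over prime fields (checking F_p for small p; the slide uses p = 2 inside F_{2^t}):

```python
# "Freshman's dream" in characteristic p: (x + y)^p = x^p + y^p mod p,
# the identity used in the proof above (with p = 2 over F_{2^t}).

for p in (2, 3, 5):
    for x in range(p):
        for y in range(p):
            assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
```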



BCH Codes VI

In general, the BCH code of distance d has the following parity check
matrix (over F_2 the even powers are dropped, since by the squaring
argument above the corresponding constraints are implied):

    [ 1  α1  α1^2 . . . α1^{d−2} ]        [ 1  α1  α1^3 . . . α1^{d−2} ]
    [ 1  α2  α2^2 . . . α2^{d−2} ]  −→    [ 1  α2  α2^3 . . . α2^{d−2} ]
    [ :  :    :   . . .    :     ]        [ :  :    :   . . .    :     ]
    [ 1  αn  αn^2 . . . αn^{d−2} ]        [ 1  αn  αn^3 . . . αn^{d−2} ]

The number of binary columns is 1 + ⌈(d − 2)/2⌉ log n, so the binary
code we get satisfies n − k ≤ 1 + ⌈(d − 2)/2⌉ log n ≈ (d/2) log n.
By the Hamming bound for a code of length n and distance d,

    2^k · (n choose d/2) ≤ 2^n  →  n − k ≥ (d/2) log(n/d).



BCH Codes VII

In the case d = n^{o(1)}, this reads n − k ≥ (d/2) log n (up to
lower-order terms). Thus, BCH is essentially optimal as long as d is
small.
The problem is more difficult for bigger alphabets. Consider codes over
the ternary alphabet {0, 1, 2}. The Hamming bound is
n − k ≥ (d/2) log3 n. The BCH technique gives n − k = (2d/3) log3 n,
and we do not know how to get a better code in this case.



Raj Chandra Bose (19 June 1901 – 31 October 1987)
was an Indian American mathematician and statistician
best known for his work in design theory and the theory
of error-correcting codes in which the class of BCH
codes is partly named after him.
Dwijendra Kumar Ray-Chaudhuri (born November 1,
1933) is a Bengali-born mathematician and statistician,
professor emeritus at Ohio State University. He is
best known for his work in design theory and the theory
of error-correcting codes, in which the class of BCH
codes is partly named after him and his Ph.D. advisor
Bose.
Alexis Hocquenghem (1908?-1990) was a French
mathematician. He is known for his discovery of
Bose–Chaudhuri–Hocquenghem codes, today better
known under the acronym BCH codes.



References I

Thomas M. Cover, Joy A. Thomas, Elements of Information Theory,
2nd edition, Wiley, 2006.
David J.C. MacKay, Information Theory, Inference, and Learning
Algorithms, Cambridge University Press, 2003.
Robert M. Gray, Entropy and Information Theory, Springer, 2009
John C. Bowman, Coding Theory, University of Alberta, Edmonton,
Canada, 2003
D. G. Hoffman, Coding Theory: The Essentials, Marcel Dekker, 1991
C. E. Shannon, A mathematical theory of communication, Bell System
Technical Journal, 1948.
R. W. Hamming, Error detecting and error correcting codes, Bell
System Technical Journal, 29: 147-160, 1950



References II

Reed, Irving S.; Solomon, Gustave, Polynomial Codes over Certain
Finite Fields, Journal of the Society for Industrial and Applied
Mathematics (SIAM) 8 (2): 300–304, 1960
Bose, R. C., Ray-Chaudhuri, D. K., On A Class of Error Correcting
Binary Group Codes, Information and Control 3 (1): 68–79, 1960
Hocquenghem, A., Codes correcteurs d'erreurs, Chiffres (Paris) 2:
147–156, 1959

