
A simple coding-decoding algorithm for the Hamming code

Omar Khadir
Laboratory of Mathematics, Cryptography, Mechanics and Numerical Analysis.
Hassan II University of Casablanca, Fstm, Morocco
e-mail: [email protected]

Abstract:

In this work, we present a new and simple way to encode/decode messages transmitted over a
noisy channel and protected against errors by the Hamming method. We also propose a
fast and efficient algorithm for the encoding and decoding processes that uses
neither the generator matrix nor the parity-check matrix of the Hamming code.

Keywords: Error-correcting codes, parity-check bits, coding, Hamming code.


MSC: 11B50, 94B05, 94B35.

1 Introduction
The theory of error-correcting codes started in the first half of the last century with the
valuable contribution of Claude Shannon [28]. Richard Hamming [9] was also one of the
pioneers in the field. As he himself pointed out at the end of his article, the only work
on error-correcting codes before his publication was that of Golay [8].
The principle of error-detecting and error-correcting codes is redundancy. Before
sending a message M, we encode it by adding some bits. These bits are calculated accord-
ing to specific mathematical laws. After receiving the data, we recompute the
added bits by the same laws; we can therefore detect, or even correct, any errors that occurred.
In 1954, Muller [24] applied Boolean algebra to the construction of error-detecting codes.
In 1960, Reed and Solomon [26] extended the Hamming scheme and proposed a class of
multiple error-correcting codes based on polynomials. At about the same time, Bose
and Ray-Chaudhuri [2], and independently Hocquenghem [12], showed how to design error-
correcting codes when the number of errors to correct is chosen in advance.
The method is based on polynomial roots in finite fields. The resulting sets are now known
as BCH codes. In 1967, Viterbi [31] devised a convolutional error-correcting code based
on the principle of maximum likelihood. In 1978, a cryptosystem based on the theory of error-
correcting codes was proposed by McEliece [23]. It is now considered a serious
candidate for an encryption system that can survive quantum computer attacks [5]. Since then,
curiously, and even with the intensive use of computers and digital data transmission, no
revolutionary change happened in the methods of error-detecting and correcting codes
until the appearance of Turbo-codes in the early 1990s [1].
Most of the codes studied in modern scientific research on coding theory are linear.
They possess a generator matrix for coding and a parity-check matrix for error correction.
Cyclic codes [10, 19, 13] constitute a particularly remarkable class of linear codes. They are
completely determined by a single binary polynomial and can therefore be easily imple-
mented by shift registers.
The low-density parity-check (LDPC) codes, first discovered by Gallager in 1962 [7], were
brought up to date by MacKay in 1999 [22]. These codes have a parity-check matrix whose
columns and rows contain a small number of 1's. Like the Turbo-codes, they achieve
information rates approaching the Shannon limit [1, 22, 28].
The Hamming code belongs to a family of error-correcting codes whose coding and de-
coding procedures are easy. This is why it is still widely used today in many applications
related to digital data transmission and communication networks [17, 25, 4, 29, 33, 34,
14, 18]. In 2021, Falcone and Pavone [6] studied the weight distribution problem of the
binary Hamming code. In 2001, 2015, 2017 and 2018, attempts to improve the decoding
of Hamming codes were made [11, 15, 16, 3].
In this work, we present an original and simple way to encode/decode messages trans-
mitted over a noisy channel and protected against errors by the Hamming method. We
then construct algorithms for the encoding and decoding procedures that use
neither the generator matrix nor the parity-check matrix of the Hamming code.
To the best of our knowledge, this issue has not been studied before and does not appear
in the mathematical or computer science literature on coding theory.
The article is organized as follows. In Section 2, we briefly review the Hamming error-
correcting code. Section 3 contains some preliminaries. Our contribution on coding and
decoding messages is described in Section 4. We conclude in Section 5.
Throughout this paper, we shall use standard notation. In particular, N is the set of all
non-negative integers 0, 1, 2, 3, . . . . If a, b, n ∈ N, we write a ≡ b (mod n) if n divides a − b,
and a = b mod n if a is the remainder of the Euclidean division of b by n. The binary
representation of n is denoted B(n) = ε_{k−1} ε_{k−2} . . . ε_2 ε_1 ε_0, with ε_i ∈ {0, 1}, and means that
n = Σ_{i=0}^{k−1} ε_i 2^i. The function log is understood as the logarithm to base 2. The
largest integer which does not exceed the real x is denoted by ⌊x⌋.

We start, in the next section, by recalling the construction of the Hamming code [9] and
some known relevant facts on the associated algorithm.

2 A brief review of the Hamming code


Assume that a binary message M = a_1 a_2 . . . a_n was transmitted through a noisy channel
and that the received message is M′ = b_1 b_2 . . . b_n. If at most one single error has occurred
during the transmission, then with log n parity checks, the Hamming algorithm [9] effi-
ciently determines the error position (with one additional overall parity bit, double-bit
errors can also be detected). In this section, we review the main steps of using the Hamming
code in digital data communication networks. There is an abundant literature on the subject;
for more details, see for instance [9], [19, p. 38], [30, p. 319], [10, Chap. 8], [13, p. 29], [16], [20, p. 23].

2.1 The coding procedure


In binary representation, assigning k bits to the error position allows one to analyze and
decode any message of length n = 2^k − 1. The main idea of the Hamming method relies
on a logical equivalence: the jth bit, among the k possible bits, is 1 if and only if the
error has occurred at a bit a_i of M whose index i has a 1 in the jth position of its binary
representation.
Hamming defines a_{2^j} as a parity-check bit, equal to the sum modulo 2 of all bits
of M whose index has a 1 in the jth position of its binary representation.
Bits a_1, a_2, a_{2^2}, . . . , a_{2^j}, . . . , a_{2^{k−1}} are kept as parity-check bits. All the other bits carry
information. Their number is m = n − k = 2^k − k − 1.
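To make the rule concrete, here is a minimal Python sketch (ours, not from the paper) that fills the parity positions directly from this definition; the function name hamming_encode and the 1-based indexing convention are our own choices.

def hamming_encode(data_bits, k):
    # Place the m = 2^k - k - 1 data bits into a codeword of length
    # n = 2^k - 1 (1-based indices; a[0] is unused padding) and fill the
    # parity positions 1, 2, 4, ..., 2^(k-1) by the rule recalled above.
    n = 2**k - 1
    a = [0] * (n + 1)
    parity_positions = {2**j for j in range(k)}
    data = iter(data_bits)
    for idx in range(1, n + 1):          # data bits go to non-power-of-2 slots
        if idx not in parity_positions:
            a[idx] = next(data)
    for j in range(k):
        s = 0
        for idx in range(1, n + 1):      # indices with a 1 in the jth place
            if idx != 2**j and (idx >> j) & 1:
                s ^= a[idx]
        a[2**j] = s
    return a[1:]

print(hamming_encode([0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1], 4))
# -> [0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1]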

2.2 The decoding procedure


Suppose that the received codeword is M′ = b_1 b_2 . . . b_n. For each fixed integer j ∈
{0, 1, 2, . . . , k − 1}, if b_{2^j} equals the sum modulo 2 of all the other bits of M′ whose index
has a 1 in the jth position of its binary representation, then the error position has a 0 at
the jth place from the right in its binary representation; otherwise it has a 1.

Example 2.1: Let us illustrate the technique with an example. Suppose that we received
the 15-bit message indicated in Table 1.

Indexes 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
bits 0 1 1 0 1 0 0 0 1 0 1 1 0 0 1
Table 1: The received 15 bits

Let ε_3 ε_2 ε_1 ε_0 be the binary representation of the error position. By the Hamming algo-
rithm [9], ε_0 is the sum modulo 2 of the bits whose index has a 1 in the first (rightmost)
position of its binary representation. So:
ε_0 = (b_1 + b_3 + b_5 + b_7 + b_9 + b_11 + b_13 + b_15) mod 2 = 1.
ε_1 is the sum of the bits whose index has a 1 in the second position. So:
ε_1 = (b_2 + b_3 + b_6 + b_7 + b_10 + b_11 + b_14 + b_15) mod 2 = 0.
ε_2 is the sum of the bits whose index has a 1 in the third position. So:
ε_2 = (b_4 + b_5 + b_6 + b_7 + b_12 + b_13 + b_14 + b_15) mod 2 = 1.
ε_3 is the sum of the bits whose index has a 1 in the fourth (last) position. So:
ε_3 = (b_8 + b_9 + b_10 + b_11 + b_12 + b_13 + b_14 + b_15) mod 2 = 0.
Finally, the error position is ε_3 ε_2 ε_1 ε_0 = 0101, that is, 5 in decimal. The bit b_5
must be corrected.
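The computation above can be checked mechanically. The following short Python sketch (our illustration; the variable names are ours) recomputes the error position from the bits of Table 1:

# Received bits of Table 1, indexed 1..15 (b[0] is unused padding).
b = [0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1]

position = 0
for j in range(4):                  # k = 4 parity checks
    eps = 0
    for idx in range(1, 16):
        if (idx >> j) & 1:          # index idx has a 1 in the jth place
            eps ^= b[idx]
    position += eps << j            # assemble eps_3 eps_2 eps_1 eps_0

print(position)                     # -> 5, so b_5 must be corrected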

2.3 Complexity of the algorithm


A Hamming code can also be defined by its k × n generator matrix G and its (n − k) × n
parity-check matrix H [32]; here k denotes the dimension of the code, so that n − k ≃ log n
is the number of parity checks. Let x be the message to send. To compute the codeword xG,
we need (k − 1)n ≃ n(n − log n) binary additions and kn ≃ n(n − log n) bit multiplications.
The decoding complexity: let y be the received message. To compute the syndrome Hy,
we perform (n − k)(n − 1) ≃ n log n binary additions and (n − k)n ≃ n log n bit
multiplications. To locate the error position, in the worst case, we compare the vector
Hy to every column; each comparison requires n − k ≃ log n bit comparisons. As there are
n columns in the matrix H, the total number of bit comparisons is n(n − k) ≃ n log n.
If the columns of the parity-check matrix H are arranged in order of increasing binary numbers
from 1 to n, we do not need to make any comparisons [10, p. 83]: the syndrome Hy is exactly
the binary representation of the error position.
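As an illustration of this last remark, here is a small Python/NumPy sketch (ours, under the assumption k = 4, n = 15) in which the columns of H are the binary representations of 1, . . . , n, so that the syndrome, read as a binary number, is directly the error position:

import numpy as np

k = 4
n = 2**k - 1                        # n = 15

# Row j of H tests the jth binary digit: the column of index u (1-based)
# is the binary representation of u, least significant bit in row 0.
H = np.array([[(u >> j) & 1 for u in range(1, n + 1)] for j in range(k)])

y = np.array([0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1])  # Table 1
syndrome = H @ y % 2                # bits eps_0, eps_1, eps_2, eps_3
print(sum(int(s) << j for j, s in enumerate(syndrome)))      # -> 5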

3 Preliminaries
Let k ∈ N − {0} and n = 2^k − 1. For every fixed j ∈ {0, 1, 2, . . . , k − 1} we define the set
S(j, n) as

S(j, n) = {0 ≤ u ≤ n | B(u) contains a 1 in the (j + 1)th position from the right}. (1)

Hence S(0, n) = {1, 3, 5, 7, . . . , n},


S(1, n) = {2, 3, 6, 7, 10, 11, . . . , n},
S(2, n) = {4, 5, 6, 7, 12, 13, 14, 15, . . . , n},
S(3, n) = {8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31, . . . , n},
...
Observe that the binary representation of n is B(n) = 11 . . . 1 (k ones), so it contains a 1
in every position; hence n belongs to each set S(j, n).
Remark 3.1: To schematize the set S(j, n), imagine the line of all positive integers.
Starting from the term 2^j, we keep 2^j consecutive integers, delete the following 2^j
elements, keep the next 2^j, delete the following 2^j terms, and so on alternately. We
repeat this procedure up to the limit n.
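The following Python sketch (our illustration; the function names are ours) builds S(j, n) both from definition (1) and from the keep/delete pattern of Remark 3.1, and checks that the two constructions agree:

def S(j, n):
    # Definition (1): indices whose binary representation has a 1
    # in the (j+1)th position from the right.
    return {u for u in range(n + 1) if (u >> j) & 1}

def S_pattern(j, n):
    # The keep/delete pattern of Remark 3.1.
    kept, u = set(), 2**j
    while u <= n:
        kept.update(range(u, min(u + 2**j, n + 1)))  # keep 2^j terms
        u += 2**(j + 1)                              # skip the next 2^j
    return kept

k, n = 4, 15
assert all(S(j, n) == S_pattern(j, n) for j in range(k))
print(sorted(S(1, n)))    # -> [2, 3, 6, 7, 10, 11, 14, 15]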

Proposition 3.1: For any positive integer u = (2α + 1)2^j + i where 0 ≤ α ≤ 2^{k−j−1} − 1
and 0 ≤ i ≤ 2^j − 1, we have u ∈ S(j, n).

Proof. First, u is a positive integer and u ≤ (2 · 2^{k−j−1} − 2 + 1)2^j + 2^j − 1 = 2^k − 1 = n.

On the other hand, in base 2, the elements α and i can be written as
α = Σ_{t=0}^{s} b_t 2^t, b_t ∈ {0, 1}, s < k − j − 1, and i = Σ_{t=0}^{r} c_t 2^t, c_t ∈ {0, 1}, r < j. Therefore:
u = Σ_{t=0}^{s} b_t 2^{j+1+t} + 2^j + Σ_{t=0}^{r} c_t 2^t. As i ∈ {0, 1, 2, . . . , 2^j − 1}, the binary representation of
u is

B(u) = b_s b_{s−1} . . . b_1 b_0 1 0 0 . . . 0 0 c_r c_{r−1} . . . c_1 c_0 (2)

As b_0 is in the position j + 2 from the right, we deduce that there is a 1 in the (j + 1)th
position, and the proof is achieved.

The next result is essential to the construction of our algorithm.

Theorem 3.1: Let k ∈ N − {0} and n = 2^k − 1. For every fixed j ∈ {0, 1, 2, . . . , k − 1}, if
we set T(j, n) = {(2α + 1)2^j + i | 0 ≤ α ≤ 2^{k−j−1} − 1 and 0 ≤ i ≤ 2^j − 1}, then we have:

T(j, n) = S(j, n) (3)

Proof. Proposition 3.1 shows that T(j, n) ⊂ S(j, n). Conversely, consider an element u ∈
S(j, n); its binary representation B(u) contains a 1 in the (j + 1)th position from the right.
Hence u = Σ_{t=0}^{s} b_t 2^{j+1+t} + 2^j + Σ_{t=0}^{r} c_t 2^t, with r < j. By choosing α = Σ_{t=0}^{s} b_t 2^t and i = Σ_{t=0}^{r} c_t 2^t,
we get u = (2α + 1)2^j + i. Since 0 ≤ u ≤ n, we can easily verify that 0 ≤ α ≤ 2^{k−j−1} − 1,
which ends the proof.
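Theorem 3.1 is easy to test numerically. A small Python sketch (ours) compares the two sets for every j:

def T(j, k):
    # The set T(j, n) of Theorem 3.1, with n = 2^k - 1.
    return {(2 * alpha + 1) * 2**j + i
            for alpha in range(2**(k - j - 1))
            for i in range(2**j)}

k, n = 5, 31
assert all(T(j, k) == {u for u in range(n + 1) if (u >> j) & 1}
           for j in range(k))
print("T(j, n) = S(j, n) verified for k =", k)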

Corollary 3.1: Let r = 2^j − 1 and s = 2^{k−j−1} − 1. In the Hamming coding algorithm,
the parity-check bit a_{2^j}, which is artificially set to zero before the calculation, can be computed as:

a_{2^j} = ( Σ_{α=0}^{s} Σ_{i=0}^{r} a_{(2α+1)2^j+i} ) mod 2 (4)

For the decoding step,

ε_j = ( Σ_{α=0}^{s} Σ_{i=0}^{r} a_{(2α+1)2^j+i} ) mod 2 (5)

is the bit in the (j + 1)th position from the right of the binary representation of the error
location.

Proof. By the Hamming algorithm, we have:

a_{2^j} = ( Σ_{u ∈ S(j,n)−{2^j}} a_u ) mod 2 = ( Σ_{u ∈ T(j,n)−{2^j}} a_u ) mod 2, and
ε_j = ( Σ_{u ∈ S(j,n)} a_u ) mod 2 = ( Σ_{u ∈ T(j,n)} a_u ) mod 2, which give relations (4) and (5).

Example 3.1: In the early 1980s, the Minitel system [27, p. 177, 185] [21, p. 110] was a
national network in France, a precursor of the modern Internet. A Hamming code with 7
parity-check bits was used to correct single errors in messages M = a_1 a_2 . . . a_n, n = 2^7 − 1.
Let us see how, by Corollary 3.1, we can compute for instance the parity-check bit a_{2^4} in
the coding step.
We have n = 127 and j = 4, hence r = 15 and s = 3. By relation (4) we fill the following
table:

Values of α    Bits to add
0              a_16 = 0 + a_17 + a_18 + . . . + a_31
1              a_48 + a_49 + a_50 + . . . + a_63
2              a_80 + a_81 + a_82 + . . . + a_95
3              a_112 + a_113 + a_114 + . . . + a_127

Table 2: Computation of the parity-check bit a_{2^4}

To determine the term a_{2^4}, we calculate the sum modulo 2 of all 64 bits in
the second column of Table 2.
When the message M is received, in the decoding step, the same sum modulo 2 of the 64 bits
in Table 2, now taken with the actually received value of a_16, gives the 5th bit from the right
in the binary representation of the error position.
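The row structure of Table 2 can be reproduced with a few lines of Python (our illustration):

j, k = 4, 7
n = 2**k - 1                        # n = 127, the Minitel-sized code
r = 2**j - 1                        # r = 15
s = 2**(k - j - 1) - 1              # s = 3

for alpha in range(s + 1):          # one line per row of Table 2
    start = (2 * alpha + 1) * 2**j
    print(alpha, "->", "a_%d .. a_%d" % (start, start + r))
# 0 -> a_16 .. a_31
# 1 -> a_48 .. a_63
# 2 -> a_80 .. a_95
# 3 -> a_112 .. a_127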

Theorem 3.2: Let k ∈ N − {0} and n = 2^k − 1. For every fixed j ∈ {0, 1, 2, . . . , k − 1},
if we set U(j, n) = {2^j + 2i − (i mod 2^j) | 0 ≤ i ≤ 2^{k−1} − 1}, then we have:

U(j, n) = T(j, n) (6)

Proof. Let u = 2^j + 2i − (i mod 2^j) ∈ U(j, n). Put i = q2^j + r with q ∈ N and 0 ≤ r < 2^j,
so that i mod 2^j = r. Therefore u = 2^j + q2^{j+1} + 2r − r = q2^{j+1} + 2^j + r, and then B(u) has a 1
in the (j + 1)th position from the right. Consequently u ∈ S(j, n) = T(j, n) by Theorem
3.1.
Conversely, let u = (2α + 1)2^j + i ∈ T(j, n). By the definition of T(j, n), we have
0 ≤ α ≤ 2^{k−j−1} − 1 and 0 ≤ i ≤ 2^j − 1.
Put i_1 = α2^j + i, so that i_1 mod 2^j = i. On the other hand, 2^j + 2i_1 − (i_1 mod 2^j) = 2^j + α2^{j+1} +
2i − i = (2α + 1)2^j + i. Moreover, 0 ≤ i_1 ≤ 2^{k−1} − 2^j + i ≤ 2^{k−1} − 2^j + (2^j − 1) = 2^{k−1} − 1.
Conclusion: u ∈ U(j, n).
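As before, the identity (6) can be checked numerically; a short Python sketch (ours):

def U(j, k):
    # The set U(j, n) of Theorem 3.2, with n = 2^k - 1.
    return {2**j + 2 * i - (i % 2**j) for i in range(2**(k - 1))}

def T(j, k):
    return {(2 * alpha + 1) * 2**j + i
            for alpha in range(2**(k - j - 1))
            for i in range(2**j)}

k = 5
assert all(U(j, k) == T(j, k) for j in range(k))
print("U(j, n) = T(j, n) verified for k =", k)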

Corollary 3.2: Let k ∈ N − {0}, n = 2^k − 1 and r = 2^{k−1} − 1. For every fixed
j ∈ {0, 1, 2, . . . , k − 1}, we put J = 2^j.
For the coding step:

a_J = [ Σ_{i=1}^{r} a_{J+2i−(i mod J)} ] mod 2 (7)

For the decoding step:

ε_j = [ Σ_{i=0}^{r} a_{J+2i−(i mod J)} ] mod 2 (8)

Proof. By the Hamming algorithm, for the coding process, we have

a_{2^j} = ( Σ_{u ∈ S(j,n)−{2^j}} a_u ) mod 2 (9)

But Theorem 3.1 and Theorem 3.2 imply that the three sets S(j, n), T(j, n), U(j, n) are
identical. Since i = 0 is the only index for which 2^j + 2i − (i mod 2^j) = 2^j, we obtain:

a_{2^j} = ( Σ_{u ∈ U(j,n)−{2^j}} a_u ) mod 2 = ( Σ_{i=1}^{2^{k−1}−1} a_{2^j+2i−(i mod 2^j)} ) mod 2
= ( Σ_{i=1}^{r} a_{J+2i−(i mod J)} ) mod 2,

and we get relation (7).
The proof of relation (8) is similar.
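A quick Python check (ours) that the single-loop index formula of relations (7) and (8) runs exactly over the set S(j, n):

k = 5
n = 2**k - 1
r = 2**(k - 1) - 1

for j in range(k):
    J = 2**j
    indices = {J + 2 * i - (i % J) for i in range(r + 1)}
    assert indices == {u for u in range(1, n + 1) if (u >> j) & 1}
print("relations (7)/(8) index exactly the set S(j, n)")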

The next result gives another alternative manner of calculating the parity-check bits
in the Hamming algorithm.
Corollary 3.3: With the same hypotheses as in Corollary 3.2, we have:
For the coding step:

a_J = [ Σ_{i=1}^{r} a_{J(1+⌊i/J⌋)+i} ] mod 2 (10)

For the decoding step:

ε_j = [ Σ_{i=0}^{r} a_{J(1+⌊i/J⌋)+i} ] mod 2 (11)

Proof. For every i ∈ {0, 1, 2, . . . , r}, the Euclidean division of i by J gives i = J⌊i/J⌋ +
(i mod J), so J + 2i − (i mod J) = J + i + (i − (i mod J)) = J + i + J⌊i/J⌋ = J(1 + ⌊i/J⌋) + i. Thus
a_{J+2i−(i mod J)} = a_{J(1+⌊i/J⌋)+i}, and by Corollary 3.2 we get relations (10) and (11).
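The index identity underlying Corollary 3.3 is easy to spot-check in Python (our snippet):

# Verify J + 2i - (i mod J) = J(1 + floor(i/J)) + i for a few J = 2^j.
for J in (1, 2, 4, 8):
    for i in range(64):
        assert J + 2 * i - (i % J) == J * (1 + i // J) + i
print("index identity of Corollary 3.3 verified")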

We now move to the presentation of our coding and decoding algorithms.

4 Our algorithms for the Hamming code
Relation (4) in Corollary 3.1 leads to the following coding algorithm, where comments are
delimited with braces.

Algorithm 1 Determination of the parity-check symbols

Require: The message M = a_1 a_2 . . . a_n to code before sending.
Ensure: The computation of all the parity-check bits a_{2^j}.
k ← 4 {k is the number of parity-check bits a_{2^j}.}
n ← 2^k − 1 {n is the length of the message M.}
M ← [1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1] {M is an example of a message to code.}
for j in {0, 1, . . . , k − 1} do
  u ← 2^j {u is the index of the parity-check symbol a_{2^j}.}
  max_alpha ← 2^{k−j−1} − 1 {max_alpha is the maximal value of α.}
  S ← 0
  for α in {0, 1, . . . , max_alpha} do
    v ← (2α + 1)u
    for i in {0, 1, . . . , 2^j − 1} do
      w ← v + i {w = (2α + 1)2^j + i is the index of the bit to add to S.}
      S ← S + a_w
    end for
  end for
  S ← S − a_{2^j} {The term a_{2^j} must not be part of the sum S.}
  S ← S mod 2
  a_{2^j} ← S {We assign S to the parity-check term a_{2^j}.}
end for
print(M) {M is the final coded message to send.}
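For reference, a direct Python transcription of Algorithm 1 (our code; list indices are shifted by one because the paper indexes bits from 1):

def encode(M, k):
    # Algorithm 1: overwrite the parity positions 1, 2, 4, ..., 2^(k-1)
    # of the n = 2^k - 1 bits of M (given 0-indexed) and return the codeword.
    a = [0] + list(M)                    # a[1..n], matching the paper's indices
    for j in range(k):
        u = 2**j
        S = 0
        for alpha in range(2**(k - j - 1)):
            v = (2 * alpha + 1) * u
            for i in range(u):           # i in {0, ..., 2^j - 1}
                S += a[v + i]
        S -= a[u]                        # a_{2^j} is not part of its own sum
        a[u] = S % 2
    return a[1:]

M = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1]
print(encode(M, 4))   # -> [1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1]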

Relation (5) in Corollary 3.1 leads to the following decoding algorithm, where comments
are delimited with braces.

Algorithm 2 Determination of the error location

Require: The received message M = b_1 b_2 . . . b_n to correct.
Ensure: The computation of the error position.
k ← 4 {k is the number of parity-check bits b_{2^j}.}
n ← 2^k − 1 {n is the length of the message M.}
M ← [1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1] {M is an example of a received message.}
X ← 0 {X is the error position in decimal base.}
for j in {0, 1, . . . , k − 1} do
  u ← 2^j {u is the index of the parity-check symbol b_{2^j}.}
  max_alpha ← 2^{k−j−1} − 1 {max_alpha is the maximal value of α.}
  S ← 0
  for α in {0, 1, . . . , max_alpha} do
    v ← (2α + 1)u
    for i in {0, 1, . . . , 2^j − 1} do
      w ← v + i {w = (2α + 1)2^j + i is the index of the bit to add to S.}
      S ← S + b_w
    end for
  end for
  S ← S mod 2 {S is the bit ε_j.}
  X ← X + S · 2^j {We build the error position X in base 10.}
end for
print(X) {X is the final error position. If X = 0, there is no error in the transmission.}
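And a matching Python transcription of Algorithm 2 (our code), applied here to the codeword produced by the previous sketch with its 5th bit flipped:

def error_position(M, k):
    # Algorithm 2: return the error position X (0 means no error detected).
    b = [0] + list(M)                    # b[1..n]
    X = 0
    for j in range(k):
        u = 2**j
        S = 0
        for alpha in range(2**(k - j - 1)):
            v = (2 * alpha + 1) * u
            for i in range(u):
                S += b[v + i]
        X += (S % 2) * 2**j              # bit j of the error position
    return X

received = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1]  # bit 5 flipped
X = error_position(received, 4)
print(X)                                 # -> 5
if X:
    received[X - 1] ^= 1                 # correct the single error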

5 Conclusion
In this paper, we presented a new, simple and effective method for coding/decoding any
message transmitted through a noisy channel and protected against errors by the Ham-
ming scheme. We also implemented the corresponding practical algorithms. Our technique
constitutes an alternative to the classical use of the generator matrix for coding and of the
parity-check matrix for decoding.

References

[1] Berrou, C., Glavieux, A., and Thitimajshima, P., Near Shannon limit error-correcting
coding and decoding: Turbo-codes, Proceedings of ICC '93 - IEEE International
Conference on Communications, vol. 2, pp. 1064-1070, (1993).
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.9318&rep=rep1&type=pdf

[2] Bose, R.C., and Ray-Chaudhuri, D.K., On a class of error-correcting binary group codes,
Inform. Contr., vol. 3, pp. 68-79, (1960).

[3] Pankaj Kumar Das, Error-locating codes and extended Hamming code, Matematicki
Vesnik, 70, 1, pp. 89-94, (2018).
http://elib.mi.sanu.ac.rs/files/journals/mv/271/mvn271p89-94.pdf

[4] Divsalar, D., and Dolinar, S., Concatenation of Hamming codes and accumulator
codes with high-order modulations for high-speed decoding, IPN Progress Report
42-156, (2004).
https://www.researchgate.net/publication/245759463_Concatenation_of_Hamming_Codes_and

[5] Elder, J., Quantum resistant Reed Muller codes on McEliece cryptosystem, PhD thesis,
University of North Carolina, USA, (2020).
https://math.charlotte.edu/sites/math.charlotte.edu/files/fields/preprint_archive/pap

[6] Falcone, G., and Pavone, M., Binary Hamming codes and Boolean designs, Designs,
Codes and Cryptography, 89, pp. 1261-1277, (2021).
https://www.researchgate.net/publication/350768990_Binary_Hamming_codes_and_Boolean_d

[7] Gallager, R.G., Low-density parity-check codes, IRE Trans. Inf. Theory, vol. 8, no. 1,
pp. 21-28, (1962).

[8] Golay, M.J.E., Notes on digital coding, Proceedings of the I.R.E., vol. 37, p. 657,
(1949).
http://www.lama.univ-savoie.fr/pagesmembres/hyvernat/Enseignement/1617/info528/TP-Gol

[9] Hamming, R., Error-detecting and error-correcting codes, Bell Syst. Tech. J. 29, pp. 147-
160, (1950).

[10] Hill, R., A first course in coding theory, Oxford University Press, (1997).

[11] Hirst, S., and Honary, B., A simple soft-input/soft-output decoder for Hamming codes,
Cryptography and coding, pp. 38-43, Lecture Notes in Comput. Sci., 2260, Springer,
Berlin, (2001).

[12] Hocquenghem, A., Codes correcteurs d'erreurs, Chiffres, vol. 2, pp. 147-156, (1959).

[13] Huffman, W.C., and Pless, V., Fundamentals of error-correcting codes, Cambridge
University Press, (2003).

[14] Jianhong Huang, Guangjun Xie, Rui Kuang, Feifei Deng, and Yongqiang Zhang, QCA-
based Hamming code circuit for nano communication network, Microprocessors and
Microsystems, 84, pp. 1-12, (2021).

[15] Islam, M.S., Kim, C.H., and Kim, J.M., Computationally efficient implementation
of a Hamming code decoder using graphics processing unit, Journal of Communi-
cations and Networks, Institute of Electrical and Electronics Engineers (IEEE), (2015).
https://www.researchgate.net/publication/269935106_Computationally_Efficient_Implemen

[16] Klein, S.T., and Shapira, D., Hierarchical parallel evaluation of a Hamming code, Algo-
rithms (Basel), 10, (2017).

[17] Lange, C., and Ahrens, A., On the undetected error probability for shortened Ham-
ming codes on channels with memory, Cryptography and coding, pp. 9-19, Lecture Notes
in Comput. Sci., 2260, Springer, Berlin, (2001).

[18] Li, Lin; Chang, Chin-Chen; and Lin, Chia-Chen, Reversible data hiding in encrypted
image based on (7, 4) Hamming code and unit smooth detection, Entropy 23, no. 7,
(2021).
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8306628/

[19] van Lint, J.H., Introduction to coding theory, Third edition, Springer, (1999).

[20] MacWilliams, F.J., and Sloane, N.J.A., The theory of error-correcting codes, North-
Holland Publishing Company, Third printing, (1981).

[21] Martin, B., Codage, cryptologie et applications, Presses polytechniques et universitaires
romandes, (2004).

[22] MacKay, D.J.C., Good error-correcting codes based on very sparse matrices, IEEE
Trans. Inform. Theory, vol. 45, no. 2, pp. 399-431, (1999).

[23] McEliece, R.J., A public-key cryptosystem based on algebraic coding theory, DSN
Progress Report, pp. 42-44, (1978).

[24] Muller, D.E., Application of Boolean algebra to switching circuit design and to
error detection, IRE Transactions on Electronic Computers, pp. 6-12, (1954).

[25] Otmani, A., Caractérisation des codes auto-duaux binaires de type II à partir du
code de Hamming étendu [8, 4, 4], C. R. Acad. Sci. Paris, Ser. I 336, (2003).
https://www.sciencedirect.com/journal/comptes-rendus-mathematique/vol/336/issue/12

[26] Reed, I.S., and Solomon, G., Polynomial codes over certain finite fields, J. Soc. Indust.
Appl. Math., vol. 8, no. 2, pp. 300-304, (1960).
https://faculty.math.illinois.edu/~duursma/CT/RS-1960.pdf

[27] Rousseau, C., and Saint-Aubin, Y., Mathematics and Technology, Springer, (2008).

[28] Shannon, C., A mathematical theory of communication, The Bell System Technical
Journal, vol. 27, pp. 379-423 and 623-656, (1948).
https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf

[29] Stakhov, A., Mission-critical systems, paradox of Hamming code, row hammer effect,
'Trojan horse' of the binary system and numeral systems with irrational bases, Com-
put. J. 61, no. 7, (2018).
https://academic.oup.com/comjnl/article/61/7/1038/4430323?login=true

[30] Trappe, W., and Washington, L.C., Introduction to cryptography and coding theory,
Prentice Hall, (2002).

[31] Viterbi, A.J., Error bounds for convolutional codes and an asymptotically optimum
decoding algorithm, IEEE Trans. Inf. Theory, IT-13, pp. 260-269, (1967).

[32] Viterbi, A.J., and Omura, J.K., Principles of digital communication and coding, McGraw-
Hill, Inc., (1979).

[33] Yanting Wang, Mingwei Tang, and Zhen Wang, High-capacity adaptive steganography
based on LSB and Hamming code, Inter. J. for Light and Electron Optics, 213, pp. 1-9,
(2020).

[34] Xiaotian Wu, Ching-Nung Yang, and Yen-Wei Liu, A general framework for partial re-
versible data hiding using Hamming code, Signal Processing, 175, pp. 1-12, (2020).
