ECEN 5682 Theory and Practice of Error Control Codes

Block Code Performance

Peter Mathys
University of Colorado

Spring 2007

Performance Measures

• Probability of Undetected Error
• Probability of Decoding Error

“Goodness” Criteria for Block Codes. How does one select the “best” code for a particular application? This is not an easy question to answer in general, because many factors must be considered, some of which are only marginally related to coding theory (e.g., the word length used on a VLSI chip). But since error control codes are used to control the effect of transmission or channel errors, the probability of error after encoding and decoding with a code C is certainly one of the major, if not the most important, criteria for “goodness”. Depending on whether a code is used for error detection and/or error correction, we are interested in the probability of an undetected error, Pu(E), or in the probability of a decoding error, P(E).


Probability of Undetected Error

Theorem: Let C be any binary code with minimum (Hamming) distance dmin. Then the probability of undetected error, Pu(E), on a memoryless binary symmetric channel (BSC) with transition probability ε satisfies

P_u(E) \le \sum_{w=d_{\min}}^{n} \binom{n}{w} \epsilon^w (1-\epsilon)^{n-w} .

Proof: Since the code has minimum distance dmin, it detects all error patterns of (Hamming) weight dmin − 1 or less; only patterns of weight dmin or more can go undetected. QED

Note: The above theorem also holds for memoryless q-ary symmetric channels (QSC) with transition probabilities (x denotes the input, y the output of the channel)

P\{y = \beta \mid x = \alpha\} = \begin{cases} 1 - \epsilon , & \alpha = \beta , \\[4pt] \dfrac{\epsilon}{q-1} , & \alpha \ne \beta , \end{cases} \qquad \alpha, \beta \in \{0, 1, \ldots, q-1\} .
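The bound is easy to evaluate numerically: it is simply the probability that the channel introduces dmin or more symbol errors in a block of n symbols. A minimal sketch (the function name is my own):

```python
# Hypothetical helper: dmin-based upper bound on the probability of
# undetected error, P_u(E) <= sum_{w=dmin}^{n} C(n,w) eps^w (1-eps)^(n-w).
from math import comb

def pu_detect_bound(n: int, dmin: int, eps: float) -> float:
    # Probability that dmin or more of the n symbols are hit by channel errors.
    return sum(comb(n, w) * eps**w * (1 - eps)**(n - w)
               for w in range(dmin, n + 1))

print(pu_detect_bound(7, 3, 0.01))   # (7,4,3) Hamming code, eps = 0.01
```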


Note: Usually, many more error patterns can be detected than those indicated by the above theorem. A q-ary linear (n, k) code, for example, can detect q^n − q^k error patterns.

Definition: Let Aw be the number of codewords of a linear (n, k) code with (Hamming) weight w. Then {A0, A1, . . . , An} is called the weight distribution of the code.

Example: Code #1 (binary (5, 2, 3) code) with generator matrix

G = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 \end{pmatrix} ,

has codewords

C = {00000, 01011, 10101, 11110} ,

and thus weight distribution {A0 = 1, A3 = 2, A4 = 1, Aw = 0, w ≠ 0, 3, 4}.
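For small codes, the weight distribution can be obtained by brute-force enumeration of all q^k codewords. A sketch for the binary case (the function name is mine):

```python
# Enumerate all 2^k codewords of a binary linear (n,k) code from its
# generator matrix (list of k rows of n bits) and tally Hamming weights.
from itertools import product

def weight_distribution(G):
    n = len(G[0])
    A = [0] * (n + 1)
    for msg in product([0, 1], repeat=len(G)):
        # Codeword = msg * G over GF(2), computed column by column.
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        A[sum(cw)] += 1
    return A

G1 = [[1, 0, 1, 0, 1],
      [0, 1, 0, 1, 1]]              # code #1, the (5,2,3) code
print(weight_distribution(G1))      # [1, 0, 0, 2, 1, 0]
```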


Theorem: Let C be a binary linear (n, k) code with weight distribution {A0, A1, . . . , An} which is used on a binary symmetric channel (BSC) with transition probability ε. Then the probability of undetected error, Pu(E), is given by

P_u(E) = \sum_{w=d_{\min}}^{n} A_w \, \epsilon^w (1-\epsilon)^{n-w} .

Note that if C has minimum distance dmin, then A1, A2, . . . , A_{dmin−1} are all zero.

Proof: For a linear code, an error pattern goes undetected if and only if it is equal to a nonzero codeword; summing the probabilities of all such patterns gives the result. QED

Note: For a q-ary symmetric channel (QSC) with transition probabilities as given earlier, this theorem becomes

P_u(E) = \sum_{w=d_{\min}}^{n} A_w \left( \frac{\epsilon}{q-1} \right)^{w} (1-\epsilon)^{n-w} .
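The exact expression is a one-liner once {Aw} is known. A sketch covering both the BSC (q = 2) and the QSC (function name and signature are my own):

```python
def pu_exact(A, eps, q=2):
    # P_u(E) = sum_{w>=1} A_w (eps/(q-1))^w (1-eps)^(n-w), with n = len(A)-1.
    n = len(A) - 1
    return sum(A[w] * (eps / (q - 1))**w * (1 - eps)**(n - w)
               for w in range(1, n + 1))

A1 = [1, 0, 0, 2, 1, 0]             # weight distribution of code #1
print(pu_exact(A1, 0.01))
```

At ε = 1/2 every received word is equally likely and the expression reduces to (2^k − 1)/2^n, a handy sanity check.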


Examples: The weight distributions {Aw} of a few small binary codes are:

• (7, 4, 3) Hamming code: {A0 = 1, A3 = 7, A4 = 7, A7 = 1}.

• (15, 11, 3) Hamming code: {A0 = 1, A3 = 35, A4 = 105, A5 = 168, A6 = 280, A7 = 435, A8 = 435, A9 = 280, A10 = 168, A11 = 105, A12 = 35, A15 = 1} .

• (31, 26, 3) Hamming code: {A0 = 1, A3 = 155, A4 = 1085, A5 = 5208, A6 = 22568, A7 = 82615, A8 = 247845, A9 = 628680, A10 = 1383096, A11 = 2648919, A12 = 4414865, A13 = 6440560, A14 = 8280720, A15 = 9398115, A16 = 9398115, A17 = 8280720, A18 = 6440560, A19 = 4414865, A20 = 2648919, A21 = 1383096, A22 = 628680, A23 = 247845, A24 = 82615, A25 = 22568, A26 = 5208, A27 = 1085, A28 = 155, A31 = 1} .


Examples: (contd.) • (31, 21, 5) BCH code: {A0 = 1, A5 = 186, A6 = 806, A7 = 2635, A8 = 7905, A9 = 18910, A10 = 41602, A11 = 85560, A12 = 142600, A13 = 195300, A14 = 251100, A15 = 301971, A16 = 301971, A17 = 251100, A18 = 195300, A19 = 142600, A20 = 85560, A21 = 41602, A22 = 18910, A23 = 7905, A24 = 2635, A25 = 806, A26 = 186, A31 = 1} .

• (31, 16, 7) BCH code: {A0 = 1, A7 = 155, A8 = 465, A11 = 5208, A12 = 8680, A15 = 18259, A16 = 18259, A19 = 8680, A20 = 5208, A23 = 465, A24 = 155, A31 = 1} .

• (31, 11, 11) BCH code: {A0 = 1, A11 = 186, A12 = 310, A15 = 527, A16 = 527, A19 = 310, A20 = 186, A31 = 1} .


The (exact) probability of undetected error for these six codes, when used on a memoryless BSC with transition probability ε, is shown in the following graph.

[Figure: Probability of Undetected Error. log10 Pu(E) versus log10(ε) for the BSC, for the (7,4,3), (15,11,3), (31,26,3), (31,21,5), (31,16,7), and (31,11,11) codes.]

The upper three curves show that if dmin is fixed and the blocklength n is increased, then the probability of undetected error Pu(E) increases. The lower three curves show that if n is fixed and dmin is increased, then Pu(E) decreases quite rapidly. But note that the code rate R = k/n, which does not show up explicitly in the above graph, also needs to be taken into account when designing a system that uses error control. A lower code rate generally means that either data is sent less rapidly over a given channel, or the channel bandwidth must be increased.

Definition: Let {A0, A1, . . . , An} be the weight distribution of a linear (n, k) code C and let {B0, B1, . . . , Bn} be the weight distribution of its dual code C⊥. Define the weight distribution polynomials of C and C⊥ as

A(z) = \sum_{w=0}^{n} A_w z^w \qquad \text{and} \qquad B(z) = \sum_{w=0}^{n} B_w z^w .


Example: Code #1 with weight distribution {A0 = 1, A3 = 2, A4 = 1} has weight distribution polynomial A(z) = 1 + 2z^3 + z^4.

Theorem: MacWilliams Identity for Binary Codes. The weight distribution polynomials, A(z) and B(z), of a binary linear (n, k) code C and its dual code C⊥ are related by

A(z) = 2^{-(n-k)} (1+z)^n \, B\!\left( \frac{1-z}{1+z} \right) .

Theorem: MacWilliams Identity for q-ary Codes. The weight distribution polynomials, A(z) and B(z), of a q-ary linear (n, k) code C and its dual code C⊥ are related by

A(z) = q^{-(n-k)} \left( 1 + (q-1)z \right)^n B\!\left( \frac{1-z}{1+(q-1)z} \right) .

Proof: See F.J. MacWilliams, “A Theorem on the Distribution of Weights in a Systematic Code,” Bell Syst. Tech. J., vol. 42, 1963, pp. 79–94.


Example: Consider the binary (5, 3) code with generator matrix

G = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{pmatrix} .

Its dual code is code #1, the (5, 2) code with B(z) = 1 + 2z^3 + z^4 given in a previous example. Using the MacWilliams identity, one finds that

A(z) = 2^{-2} (1+z)^5 \left[ 1 + 2 \left( \frac{1-z}{1+z} \right)^{3} + \left( \frac{1-z}{1+z} \right)^{4} \right]
     = \frac{1}{4} \left[ (1+z)^5 + 2 (1-z)^3 (1+z)^2 + (1-z)^4 (1+z) \right]
     = 1 + 2z^2 + 4z^3 + z^4 ,

and thus the weight distribution of this (5, 3) code is

{A0 = 1, A2 = 2, A3 = 4, A4 = 1, Ai = 0, i ≠ 0, 2, 3, 4} .


Theorem: In terms of the weight distribution polynomials A(z) (for code C) and B(z) (for code C⊥), the probability of undetected error for a linear binary (n, k) code C used on a BSC with transition probability ε can be expressed as

P_u(E) = (1-\epsilon)^n \left[ A\!\left( \frac{\epsilon}{1-\epsilon} \right) - 1 \right] = 2^{-(n-k)} B(1 - 2\epsilon) - (1-\epsilon)^n .

Proof: For the first equality note that

P_u(E) = \sum_{w=d_{\min}}^{n} A_w \, \epsilon^w (1-\epsilon)^{n-w} = (1-\epsilon)^n \sum_{w=d_{\min}}^{n} A_w \left( \frac{\epsilon}{1-\epsilon} \right)^{w} = (1-\epsilon)^n \left[ A\!\left( \frac{\epsilon}{1-\epsilon} \right) - 1 \right] ,

where the −1 removes the w = 0 term (A0 = 1) that A(z) includes but the sum does not. For the second equality use MacWilliams’ identity

A(z) = 2^{-(n-k)} (1+z)^n \, B\!\left( \frac{1-z}{1+z} \right)

for binary (n, k) codes, evaluated at z = ε/(1−ε), so that 1 + z = 1/(1−ε) and (1−z)/(1+z) = 1−2ε. QED
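All three expressions for Pu(E) can be cross-checked numerically. A sketch using code #1 (n = 5, k = 2) and its dual's enumerator B(z) = 1 + 2z² + 4z³ + z⁴ from the earlier example:

```python
def evalpoly(coeffs, z):
    # Evaluate a polynomial given as a coefficient list (ascending powers).
    return sum(c * z**w for w, c in enumerate(coeffs))

n, k = 5, 2
A = [1, 0, 0, 2, 1, 0]     # enumerator of code #1
B = [1, 0, 2, 4, 1, 0]     # enumerator of its dual, the (5,3) code

for eps in (0.001, 0.01, 0.1, 0.25):
    direct = sum(A[w] * eps**w * (1 - eps)**(n - w) for w in range(1, n + 1))
    via_A = (1 - eps)**n * (evalpoly(A, eps / (1 - eps)) - 1)
    via_B = 2**(-(n - k)) * evalpoly(B, 1 - 2 * eps) - (1 - eps)**n
    assert abs(direct - via_A) < 1e-12 and abs(direct - via_B) < 1e-12
print("all three expressions agree")
```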


Probability of Decoding Error

Theorem: Let C be any binary code with minimum (Hamming) distance dmin and let t = ⌊(dmin − 1)/2⌋. Then the probability of a (block) decoding error, PB(E), on a memoryless BSC with transition probability ε is upper bounded by

P_B(E) \le \sum_{w=t+1}^{n} \binom{n}{w} \epsilon^w (1-\epsilon)^{n-w} = 1 - \sum_{w=0}^{t} \binom{n}{w} \epsilon^w (1-\epsilon)^{n-w} .

Proof: Any code with minimum distance dmin can correct all error patterns of t = ⌊(dmin − 1)/2⌋ or fewer (random) errors. QED
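The complementary form of the bound is convenient to evaluate. A sketch (the function name is mine); for perfect codes such as the Hamming codes this is the exact block error probability of a bounded distance decoder:

```python
from math import comb

def pb_bdd_bound(n: int, dmin: int, eps: float) -> float:
    # 1 - P{t or fewer channel errors}, with t = floor((dmin-1)/2).
    t = (dmin - 1) // 2
    return 1 - sum(comb(n, w) * eps**w * (1 - eps)**(n - w)
                   for w in range(t + 1))

print(pb_bdd_bound(7, 3, 0.01))     # exact for the perfect (7,4,3) Hamming code
```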


Example: Consider bounded distance decoding of all error patterns of up to t errors, t = ⌊(dmin − 1)/2⌋, using binary codes with the following parameters: (7, 4, 3), (15, 11, 3), (31, 26, 3), (31, 21, 5), (31, 16, 7), and (31, 11, 11). Upper bounds on the probability of decoding error PB(E) when using a BSC with transition probability ε are given in the graph on the next slide. Note that the first three codes are Hamming codes, and since these are perfect codes, the PB(E) curves are exact rather than bounds. As was the case for error detection, the upper three curves show that if dmin is kept fixed and n is increased, then the probability of error increases. Conversely, if the blocklength n is kept fixed and dmin is increased, then PB(E) decreases quite rapidly, as can be seen from the lower three curves.


[Figure: Probability of Decoding Error for Bounded Distance Decoder. log10 PB(E) versus log10(ε) for the BSC, for the (7,4,3), (15,11,3), (31,26,3), (31,21,5), (31,16,7), and (31,11,11) codes.]

Note: In most cases a t-error-correcting code can also correct many patterns of t + 1 or even more errors. A q-ary linear (n, k) code, for example, can correct q^{n−k} − 1 nonzero error patterns (including those with t or fewer errors). The bound on the probability of decoding error given above assumes a bounded distance decoder that only corrects up to t errors. If a complete decoder is used, one should in most cases (except for perfect codes) be able to do better.

Let X and Y be random vectors of length n. For any given channel model, let X denote the channel input and Y denote the channel output. A particular channel model is specified by giving the conditional probability mass function (pmf) p_{Y|X}(y|x). Assume that the code C = {c_i}, i = 0, 1, . . . , M − 1, is used and the transmission of a codeword from C over the specified channel results in the received n-tuple v.


The decoder then needs to look at the a posteriori probabilities

p_{X|Y}(c_i \mid v) = \frac{p_{XY}(c_i, v)}{p_Y(v)} = \frac{p_{Y|X}(v \mid c_i) \, p_X(c_i)}{p_Y(v)} ,

for i = 0, 1, . . . , M − 1.

Definition: For a given code C = {c0, c1, . . . , c_{M−1}}, a priori pmf p_X(c), channel model p_{Y|X}(v|c), and received n-tuple v, a maximum a posteriori (MAP) decoder outputs the estimate ĉ = c_i iff i is the index which maximizes the expression

p_{Y|X}(v \mid c_i) \, p_X(c_i) .

If there is more than one index that maximizes this expression, one of the maximizing indexes can be chosen at random without loss of optimality.


Definition: For a given code C = {c0, c1, . . . , c_{M−1}}, channel model p_{Y|X}(v|c), and received n-tuple v, a maximum likelihood (ML) decoder outputs the estimate ĉ = c_i iff i is the index which maximizes the expression

p_{Y|X}(v \mid c_i) .

If there is more than one index that maximizes this expression, one of the maximizing indexes can be chosen at random without loss of optimality.


Note: If an additive error model is used, then the conditional probability p_{Y|X}(v|c) is equal to the probability p_E(v − c) that the error pattern e = v − c occurs. If the channel model is such that lower (Hamming) weight error patterns are more likely to occur than higher weight patterns, then the ML decoder outputs ĉ = c_i iff i is the index that minimizes the (Hamming) distance d_H(v, c_i). If there are several indexes that minimize this distance, then one of these indexes can be chosen at random without loss of optimality.

Note: If p_X(c_i) = 1/M for all i = 0, 1, . . . , M − 1 (uniform distribution of codewords), then the MAP decoder and the ML decoder are the same.
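On a BSC with ε < 1/2 and equally likely codewords, MAP, ML, and minimum-distance decoding therefore all coincide, and for a small code the decoder can simply be brute force. A sketch (names are mine; ties here are broken by list order rather than at random, which is equally optimal):

```python
def hamming(u, v):
    # Hamming distance between two equal-length tuples.
    return sum(a != b for a, b in zip(u, v))

def ml_decode(v, codewords):
    # Minimum-distance decoding = ML decoding on a BSC with eps < 1/2.
    return min(codewords, key=lambda c: hamming(v, c))

# Code #1, the (5,2,3) code.
C1 = [(0, 0, 0, 0, 0), (0, 1, 0, 1, 1), (1, 0, 1, 0, 1), (1, 1, 1, 1, 0)]
print(ml_decode((1, 0, 1, 1, 1), C1))   # -> (1, 0, 1, 0, 1), one error corrected
```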


For ML decoders an upper bound on the block error probability PB(E) can be derived as follows. Define Pd(E) = P{decoding error between two codewords distance d apart}. Then, assuming a linear code with M equally likely codewords, one can use a union bound (i.e., the probability of a union of events is upper bounded by the sum of the probabilities of the events) to write

P_B(E) \le \sum_{m=1}^{M-1} P_{w_m}(E) ,

where w_m, m = 1, 2, . . . , M − 1, are the weights of all nonzero codewords of the code.


Using the weight distribution {Aw} of the code one can sum over all weights and multiply by the number of codewords with each weight, rather than sum over all codewords individually, and thus

P_B(E) \le \sum_{w=d_{\min}}^{n} A_w \, P_w(E) .

The probability Pd(E) of a decoding error between two codewords distance d apart is the probability that an error pattern occurs with d/2 or more errors in the d positions in which the two codewords differ. To be precise, in the case when d is an even integer, only one half of the cases when exactly d/2 errors occur cause an error on the average. Thus, one obtains the following upper bound (or exact expression when d is an odd integer) for Pd(E):

P_d(E) \le \sum_{e=\lceil d/2 \rceil}^{d} \binom{d}{e} \epsilon^e (1-\epsilon)^{d-e} .


Substituting this in the union bound above proves the following theorem.

Theorem: Let C be a binary block code with blocklength n, minimum (Hamming) distance dmin, and weight distribution {Aw}. Then, using an ML decoder, the probability of a (block) decoding error PB(E) on a memoryless BSC with transition probability ε is upper bounded by

P_B(E) \le \sum_{w=d_{\min}}^{n} A_w \sum_{e=\lceil w/2 \rceil}^{w} \binom{w}{e} \epsilon^e (1-\epsilon)^{w-e} .
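This union bound is straightforward to evaluate from {Aw}. A sketch (the function name is mine), shown for the (7,4,3) Hamming code:

```python
from math import ceil, comb

def pb_union_bound(A, eps):
    # sum_w A_w * sum_{e=ceil(w/2)}^{w} C(w,e) eps^e (1-eps)^(w-e)
    n = len(A) - 1
    return sum(A[w] * sum(comb(w, e) * eps**e * (1 - eps)**(w - e)
                          for e in range(ceil(w / 2), w + 1))
               for w in range(1, n + 1))

A7 = [1, 0, 0, 7, 7, 0, 0, 1]       # weight distribution of the (7,4,3) code
print(pb_union_bound(A7, 0.01))
```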


A somewhat weaker bound, based on the so-called Bhattacharyya distance between two codewords, which is d_B = −0.5 log(4ε(1−ε)) for the BSC with transition probability ε, is given in the following theorem.

Theorem: Let C be a binary block code with blocklength n, minimum (Hamming) distance dmin, and weight distribution {Aw}. Then, using an ML decoder, the probability of a (block) decoding error PB(E) on a memoryless BSC with transition probability ε is upper bounded by

P_B(E) \le \frac{1}{2} \sum_{w=d_{\min}}^{n} A_w \left[ 4 \epsilon (1-\epsilon) \right]^{w/2} .


Proof: The probability Pd(E) of a decoding error between two codewords (Hamming) distance d apart can be upper bounded as follows:

P_d(E) \le \sum_{e=\lceil d/2 \rceil}^{d} \binom{d}{e} \epsilon^e (1-\epsilon)^{d-e}
        = (1-\epsilon)^d \sum_{e=\lceil d/2 \rceil}^{d} \binom{d}{e} \left( \frac{\epsilon}{1-\epsilon} \right)^{e}
        \le \epsilon^{d/2} (1-\epsilon)^{d/2} \underbrace{\sum_{e=\lceil d/2 \rceil}^{d} \binom{d}{e}}_{\le\, 2^d/2}
        \le \frac{1}{2} \, 2^{d} \epsilon^{d/2} (1-\epsilon)^{d/2} = \frac{1}{2} \left[ 4 \epsilon (1-\epsilon) \right]^{d/2} ,

where the middle inequality uses ε/(1−ε) ≤ 1 for ε ≤ 1/2, so that (ε/(1−ε))^e ≤ (ε/(1−ε))^{d/2} for e ≥ d/2. Substituting this in the union bound for PB(E) completes the proof of the theorem. QED
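The Bhattacharyya form trades tightness for simplicity; evaluating it requires only {Aw} and the quantity 4ε(1−ε). A sketch (the function name is mine):

```python
def pb_bhattacharyya(A, eps):
    # P_B(E) <= (1/2) sum_w A_w [4 eps (1-eps)]^(w/2)
    gamma = 4 * eps * (1 - eps)
    return 0.5 * sum(Aw * gamma**(w / 2) for w, Aw in enumerate(A) if w > 0)

A7 = [1, 0, 0, 7, 7, 0, 0, 1]       # (7,4,3) Hamming code
print(pb_bhattacharyya(A7, 0.01))
```

For codes with odd-weight codewords the half-integer powers of 4ε(1−ε) make this bound noticeably looser than the union bound, as the (7,4,3) example later illustrates.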


Suppose now that it is desired to compare the “goodness” of two codes with different k. The probability of a block error is then not really the best measure. Rather, one should compare the probability of a data symbol error Ps(E) or, in the binary case, the probability of a data bit error Pb(E). Because a block error will affect between 1 and k data symbols, the probability of a symbol error can be bounded as

\frac{1}{k} P_B(E) \le P_s(E) \le P_B(E) ,

but for large k there is quite a gap between the upper and the lower bounds. Another approach is to extend the concept of the weight distribution of a code so that it also contains information about the weight of the data symbols.


Definition: Let A(w, i) be the number of codewords of a linear (n, k) code with total weight w and data weight i (both using Hamming weight). Then the set {A(w, i)}, w = 0, 1, . . . , n, i = 0, 1, . . . , k, is called the extended weight distribution of the code.

If {A(w, i)} is known, then it is easy to obtain {Aw} using

A_w = \sum_{i=0}^{k} A(w, i) , \qquad w = 0, 1, \ldots, n .

Substituting this expression in the union bound for the probability of a block error yields

P_B(E) \le \sum_{w=d_{\min}}^{n} \sum_{i=0}^{k} A(w, i) \, P_w(E) .


To obtain an expression for the probability of a symbol error, multiply A(w, i) by the number i of data symbol errors in the formula above and then divide by k to obtain

P_s(E) \le \frac{1}{k} \sum_{w=d_{\min}}^{n} \sum_{i=1}^{k} i \, A(w, i) \, P_w(E) .

Substituting the expression for Pw(E) on a BSC with transition probability ε thus proves the following theorem.

Theorem: Let C be a binary (n, k, dmin) block code with extended weight distribution {A(w, i)}. Using an ML decoder, the probability of a bit error Pb(E) on a memoryless BSC with transition probability ε is then upper bounded by

P_b(E) \le \frac{1}{k} \sum_{w=d_{\min}}^{n} \sum_{i=1}^{k} i \, A(w, i) \sum_{e=\lceil w/2 \rceil}^{w} \binom{w}{e} \epsilon^e (1-\epsilon)^{w-e} .
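The extended weight distribution of a small systematic code can again be found by enumeration, after which the Pb(E) bound is a double sum. A sketch (all names are mine; the systematic (7,4,3) generator matrix below is one common choice and is my assumption, not taken from these slides):

```python
from itertools import product
from math import ceil, comb

def extended_weight_distribution(G):
    # A[w][i] = number of codewords with total weight w and data weight i.
    n, k = len(G[0]), len(G)
    A = [[0] * (k + 1) for _ in range(n + 1)]
    for msg in product([0, 1], repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        A[sum(cw)][sum(msg)] += 1
    return A

def pb_bit_bound(A, eps):
    # (1/k) sum_w sum_i i*A(w,i) * sum_{e>=ceil(w/2)} C(w,e) eps^e (1-eps)^(w-e)
    n, k = len(A) - 1, len(A[0]) - 1
    total = 0.0
    for w in range(1, n + 1):
        pw = sum(comb(w, e) * eps**e * (1 - eps)**(w - e)
                 for e in range(ceil(w / 2), w + 1))
        total += pw * sum(i * A[w][i] for i in range(1, k + 1))
    return total / k

G = [[1, 0, 0, 0, 1, 1, 0],         # assumed systematic (7,4,3) generator
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
A = extended_weight_distribution(G)
print([sum(row) for row in A])      # marginal {A_w} = [1, 0, 0, 7, 7, 0, 0, 1]
print(pb_bit_bound(A, 0.01))
```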


Example: Bounds on the probability of block error for a binary (7, 4, 3) Hamming code on a memoryless BSC with transition probability ε are shown in the graph on the next page. Clearly, the upper bound on Pu(E) based on dmin is quite a bit weaker than the exact value, which uses the weight distribution {Aw} of the code. Because Hamming codes are perfect codes, the value of the block error probability PB(E) based on dmin is exact for both bounded distance and ML decoders. Note that the simpler bound for PB(E) of an ML decoder which uses the Bhattacharyya distance is quite loose, mostly because the (7, 4, 3) code has both even and odd weight codewords.


[Figure: Upper Bounds on Pu(E) and PB(E) for (7,4,3) Code on BSC. log10 Pu(E) and log10 PB(E) versus log10(ε): Pu(E) from dmin, Pu(E) exact with {Aw}, PB(E) from dmin, PB(E) union bound, PB(E) Bhattacharyya.]

Bounds on the probability of bit error are shown below.

[Figure: Bounds on Probability of Bit Error for (7,4,3) Code on BSC. log10 Pb(E) versus log10(ε): union bound, upper bound PB(E), lower bound PB(E)/k.]

The upper and lower bounds, which use the more complicated but also tighter bound on PB(E) for ML decoding, are not too far apart in this case because k is rather small.

Example: Bounds for the block and bit error probabilities of a binary (31, 26, 3) Hamming code on a memoryless BSC with transition probability ε are shown in the next two figures. Similar comments as for the (7, 4, 3) code apply. Generally, the bounds that were already loose for the (7, 4, 3) code become even looser as the blocklength n is increased (and dmin is kept fixed). In part this is due to the fact that Hamming codes are perfect codes, and in part it comes from the increase in the data length k.


[Figure: Upper Bounds on Pu(E) and PB(E) for (31,26,3) Code on BSC. log10 Pu(E) and log10 PB(E) versus log10(ε): Pu(E) from dmin, Pu(E) exact with {Aw}, PB(E) from dmin, PB(E) union bound, PB(E) Bhattacharyya.]

[Figure: Bounds on Probability of Bit Error for (31,26,3) Code on BSC. log10 Pb(E) versus log10(ε): union bound, upper bound PB(E), lower bound PB(E)/k.]

Example: Bounds for the block and bit error probabilities of a binary (31, 11, 11) BCH code on a memoryless BSC with transition probability ε are shown in the next two graphs. Because of the larger dmin, the error probabilities at the decoder output now decrease much more rapidly as ε decreases. More interestingly, the bound on PB(E) for an ML decoder is now below the bound for bounded distance decoding in the main region of interest (i.e., PB(E) < 10^−4). This is a direct consequence of the fact that the (31, 11, 11) code is not a perfect code. Note that the bound on Pu(E) based on dmin is quite loose and essentially useless. The upper and lower bounds on the probability of bit error, which are based on PB(E) for an ML decoder, are not too far apart because k is relatively small.


[Figure: Upper Bounds on Pu(E) and PB(E) for (31,11,11) Code on BSC. log10 Pu(E) and log10 PB(E) versus log10(ε): Pu(E) from dmin, Pu(E) exact with {Aw}, PB(E) from dmin, PB(E) union bound, PB(E) Bhattacharyya.]

[Figure: Bounds on Probability of Bit Error for (31,11,11) Code on BSC. log10 Pb(E) versus log10(ε): union bound, upper bound PB(E), lower bound PB(E)/k.]

The bounds considered up to now were based on the BSC channel model and thus assumed that hard decisions are made by the receiver before decoding. To get an idea of the performance improvements that are possible if the receiver uses soft decisions, consider binary antipodal signaling with message bits m0 → −√Eb and m1 → +√Eb over an additive white Gaussian noise (AWGN) channel with two-sided noise power spectral density (PSD) N0/2. Assume that the receiver uses a matched filter (MF) and that both Eb and N0 are measured at the output of the MF at the optimum sampling time instant at which the SNR Eb/N0 is maximized.

Theorem: The probability ε of an uncoded bit error for binary antipodal signaling over an AWGN channel with MF receiver and decision rule m̂ = m0 iff the MF output b < 0 (at the optimum sampling time instant) is given by

\epsilon = \frac{1}{2} \operatorname{erfc}\!\left( \sqrt{\frac{E_b}{N_0}} \right) , \qquad \text{where} \quad \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-\mu^2} \, d\mu .


Proof: The output b of the matched filter at the optimum sampling time instant (which maximizes the SNR Eb/N0) is Gaussian with variance σ_b² = N0/2 and mean either −√Eb or +√Eb, i.e.,

f_b(\beta \mid m_0) = \frac{e^{-(\beta + \sqrt{E_b})^2 / N_0}}{\sqrt{\pi N_0}} \qquad \text{or} \qquad f_b(\beta \mid m_1) = \frac{e^{-(\beta - \sqrt{E_b})^2 / N_0}}{\sqrt{\pi N_0}} .

Setting the decision threshold at β = 0 yields

P(E \mid m_0) = \frac{1}{\sqrt{\pi N_0}} \int_0^{\infty} e^{-(\beta + \sqrt{E_b})^2 / N_0} \, d\beta = \frac{1}{\sqrt{\pi}} \int_{\sqrt{E_b/N_0}}^{\infty} e^{-\mu^2} \, d\mu = \frac{1}{2} \operatorname{erfc}\!\left( \sqrt{\frac{E_b}{N_0}} \right) .

A similar computation yields P(E|m1) = P(E|m0), and thus

\epsilon = P(E \mid m_0) P(m_0) + P(E \mid m_1) P(m_1) = \frac{1}{2} \operatorname{erfc}\!\left( \sqrt{\frac{E_b}{N_0}} \right) . \qquad \text{QED}
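Python's math module provides erfc directly, so the uncoded crossover probability is a one-liner (the function name is mine):

```python
from math import erfc, sqrt

def bsc_eps(ebno: float) -> float:
    # eps = (1/2) erfc(sqrt(Eb/N0)) for antipodal signaling on the AWGN channel.
    return 0.5 * erfc(sqrt(ebno))

print(bsc_eps(1.0))     # Eb/N0 = 1 (0 dB): eps ~ 0.079
```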


Now consider the case where message mi is encoded into codeword ci = (c_{i0}, c_{i1}, . . . , c_{i,n−1}) and then transmitted using antipodal signaling such that the receiver (after sampling at the output of the MF) sees

m_i \to c_i \to \left( 2c_{i0} - 1, \, 2c_{i1} - 1, \, \ldots, \, 2c_{i,n-1} - 1 \right) \sqrt{E_b} .

Thus, if the receiver has to distinguish between mi → ci and mj → cj and d(ci, cj) = d, then this is equivalent to d uncoded decoding decisions (assuming a memoryless channel). This proves the following theorem.

Theorem: The probability Pd(E) of a decoding error between two codewords (Hamming) distance d apart for binary antipodal signaling over an AWGN channel with MF receiver and ML decision rule is given by

P_d(E) = \frac{1}{2} \operatorname{erfc}\!\left( \sqrt{\frac{d \, E_b}{N_0}} \right) , \qquad \text{where} \quad \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-\mu^2} \, d\mu .


Combining the union bound with the result for Pd(E) from the previous page results in the following theorem.

Theorem: Probability of Block Error for Soft Decisions. Let C be a linear binary (n, k, dmin) block code with weight distribution {Aw}. Then, using antipodal signaling over an AWGN channel and (soft-decision) ML decoding with a matched filter at the receiver, the probability of a block decoding error PB(E) is upper bounded by

P_B(E) \le \frac{1}{2} \sum_{w=d_{\min}}^{n} A_w \operatorname{erfc}\!\left( \sqrt{\frac{w \, E_b}{N_0}} \right) ,

where Eb is the bit energy of the code bits and N0/2 is the noise power (or noise variance), both at the output of the MF at the optimum sampling time instant.
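This soft-decision union bound is again a direct sum over the weight distribution. A sketch (the function name is mine), shown for the (7,4,3) Hamming code:

```python
from math import erfc, sqrt

def pb_soft_bound(A, ebno):
    # P_B(E) <= (1/2) sum_w A_w erfc(sqrt(w * Eb/N0))
    return 0.5 * sum(Aw * erfc(sqrt(w * ebno))
                     for w, Aw in enumerate(A) if w > 0 and Aw > 0)

A7 = [1, 0, 0, 7, 7, 0, 0, 1]       # (7,4,3) Hamming code
print(pb_soft_bound(A7, 2.0))       # Eb/N0 = 2 (about 3 dB)
```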


Replacing {Aw} by {A(w, i)} finally yields the following theorem.

Theorem: Probability of Bit Error for Soft Decisions. Let C be a linear binary (n, k, dmin) block code with extended weight distribution {A(w, i)}. Then, using antipodal signaling over an AWGN channel and (soft-decision) ML decoding with a matched filter at the receiver, the probability of a bit decoding error Pb(E) is upper bounded by

P_b(E) \le \frac{1}{2k} \sum_{w=d_{\min}}^{n} \operatorname{erfc}\!\left( \sqrt{\frac{w \, E_b}{N_0}} \right) \sum_{i=1}^{k} i \, A(w, i) ,

where Eb is the bit energy of the code bits and N0/2 is the noise power (or noise variance), both at the output of the MF at the optimum sampling time instant.


Example: Bounds on the probabilities of block error for hard and soft decisions of the binary (7,4,3) Hamming code and the binary (23,12,7) Golay code are shown on the following two pages. For the soft decision error probabilities the SNR Eb/N0 was converted to the ε of a BSC using

\epsilon = \frac{1}{2} \operatorname{erfc}\!\left( \sqrt{\frac{E_b}{N_0}} \right) .

For comparison purposes, the probability of undetected error was also included in the graphs.


[Figure: Binary (7,4,3) Hamming Code on BSC. Pu(E) and PB(E) versus log10(ε): PB(E) from dmin, PB(E) union bound, PB(E) Bhattacharyya, PB(E) AWGN soft decisions, Pu(E).]

[Figure: Binary (23,12,7) Golay Code on BSC. Pu(E) and PB(E) versus log10(ε): PB(E) from dmin, PB(E) union bound, PB(E) Bhattacharyya, PB(E) AWGN soft decisions, Pu(E).]