EFFICIENT CODING/DECODING STRATEGIES FOR CHANNELS WITH MEMORY

by

CUONG HON LAI

B.A.Sc., University of British Columbia, 1989

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November 1992
© Cuong Hon Lai, 1992

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

Many digital communication channels are affected by errors that tend to occur in bursts. A great deal of work has been devoted to finding good burst-error-correcting codes and developing burst-error-correcting schemes. However, burst-error-correcting codes are generally useless for long bursts, some burst-error-correcting schemes suffer long decoding delay, others are very sensitive to random errors in the guard space, and most of these schemes are not adaptive to channel conditions. In this thesis, two new schemes are proposed to overcome these drawbacks. The proposed schemes are analyzed over a two-state Markov chain channel model. Both schemes employ a combination of two codes. In the first scheme, one of the codes is used for random error correction and for burst detection, while the other is used only for burst recovery.
In the second scheme, one of the codes is used for burst detection and for channel state estimation, and both codes are used for error correction. Unlike existing burst-error-correcting schemes, the proposed schemes are shown to be adaptive to channel conditions and less sensitive to errors in the guard space. For the same delay, the proposed schemes offer better performance than the interleaving schemes. When the channel is heavily corrupted by bursts, the improvement is even more pronounced.

Table of Contents

Abstract
List of Figures
Acknowledgement
1. Introduction
2. Preliminaries
   2.1. Channel Models
        2.1.1. The Binary Symmetric Channel Model
        2.1.2. The Gilbert-Elliott Channel Model
   2.2. Convolutional Codes
        2.2.1. Encoding of Convolutional Codes
        2.2.2. Decoding of Convolutional Codes
        2.2.3. Punctured Convolutional Codes
   2.3. Performance of Viterbi decoding on a Gilbert-Elliott Channel
        2.3.1. The Sliding Window Decoding (SWD) Bound for Decoded BER on a BSC
        2.3.2. Bound for Viterbi Decoded BER on a Gilbert-Elliott Channel
3. Existing Burst-Error-Correcting Schemes
   3.1. Schemes with Interleaved Codes
   3.2. Gallager's Burst-Finding Scheme
        3.2.1. Encoding Circuit
        3.2.2. Decoding Operations
   3.3. Schlegel and Herro's Scheme
        3.3.1. Burst Detection Procedure (BDP)
        3.3.2. Decoding Operations
   3.4. Limitations of Existing Burst-Error-Correcting Schemes
4. The Proposed Adaptive Scheme 1
   4.1. Encoding Circuit
   4.2. Decoding Operations
   4.3. Analysis of Decoded BER
        4.3.1. Probability of Undetected Errors
        4.3.2. Probability of Decoded Errors Due to Detected Bursts
   4.4. Simulation and Numerical Results
        4.4.1. Approximation and simulation results
        4.4.2. Comparisons with Existing Burst-Error-Correcting Schemes
   4.5. Concluding Remarks
5. The Proposed Adaptive Scheme 2
   5.1. Encoding Circuit
   5.2. Decoding Operations
   5.3. Analysis of Decoded BER
        5.3.1. Effects of Channel Parameters
        5.3.2. Effects of Channel State Estimation
        5.3.3. Probability of decoded BER of Scheme 2
   5.4. Simulation and Numerical Results
        5.4.1. Approximation and simulation results
        5.4.2. Comparisons with Existing Burst-Error-Correcting Schemes
   5.5. Concluding Remarks
6. Conclusions
References

List of Figures

2.1  The binary symmetric channel model.
2.2  The Gilbert-Elliott channel model.
2.3  An encoder for a (2, 1, 2) convolutional code.
2.4  Code tree for the (2, 1, 2) code.
2.5  Trellis diagram for the (2, 1, 2) code.
2.6  Backward code tree for the (2, 1, 2) code.
2.7  A simplified model of a SWD.
2.8  Decoded BER bounds for a (2, 1, 4) code on a binary symmetric channel.
2.9  Performance of a Viterbi decoder on a Gilbert-Elliott channel with m = 4, PGG = 0.9999, PBB = 0.99, and average channel error rate = 0.005.
3.1  Interleaving procedure with the interleaving depth A.
3.2  Encoding circuit of a (2, 1, 5) systematic code in Gallager's burst-finding scheme.
3.3  Decoding circuit of Gallager's burst-finding scheme.
3.4  Performance of Gallager's burst-finding scheme with m = 4, PGG = 0.9999, PBB = 0.99, and εB = 0.5.
3.5  Decoding circuit of a (2, 1, m) code for Schlegel and Herro's adaptive scheme.
3.6  Decoded BER of Schlegel and Herro's scheme with PGG = 0.9999, PBB = 0.99, and εB = 0.5.
4.1  Encoding circuit of Scheme 1.
4.2  Decoding circuit for the proposed Scheme 1.
4.3  Diagram of a burst recovery procedure.
4.4  Performance of Scheme 1 with m1 = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, εG = 0.005, and εB = 0.5.
4.5  Comparison of decoded BER with m1 = 4, m2 = 79, R = 1/3, PGG = 0.9995, PBB = 0.95, and average channel error rate = 0.005.
5.1  Encoding circuit of Scheme 2.
5.2  Decoding circuit of Scheme 2.
5.3  Channel model and channel state estimator.
5.4  Performance of Scheme 2 with m = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, and average channel error rate = 0.005.
5.5  Performance of Scheme 2 with m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, and average channel error rate = 0.005.
5.6  Performance of Scheme 2 with m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, εG = 0.005, and εB = 0.5.
5.7  Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9995, and PBB = 0.95.
5.8  Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9999, and PBB = 0.99.
5.9  Comparison of decoded BER for m = 4, R = 1/3, εG = 0.005, and εB = 0.25.
5.10 Comparison of decoded BER for m = 4, R = 1/3, εG = 0.005, and εB = 0.5.
5.11 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, and εB = 0.25.
5.12 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, and εB = 0.5.
5.13 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, and εB = 0.25.
5.14 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, and εB = 0.5.
5.15 Performance of Scheme 2 for different code rates with m = 6, PGG = 0.9999, PBB = 0.99, and εB = 0.5.

Acknowledgement

I would like to express my sincere gratitude to Dr. Kallel for his support and guidance. This thesis could not have been completed without his valuable comments and suggestions. I would also like to thank Jeffrey Chow for his help in programming, and William Cheung and Victor Wong for their useful suggestions in preparing this thesis. Finally, I would like to thank my family for their continued support of my studies.

Chapter 1

Introduction

In many digital communication channels, such as mobile communication channels, telephone channels, and magnetic or optical recording systems, errors tend to occur in clusters or bursts [1-3]. These channels are often referred to as channels with memory.
Several schemes for burst error correction on these channels, called burst-error-correcting schemes, have been reported [4-12]. One approach is to use special codes designed exclusively for burst error correction [10]. These so-called burst-error-correcting codes perform relatively well over channels with short bursts, but perform poorly when the channels are corrupted by long bursts. Another conventional approach is to interleave channel symbols prior to transmission. With interleaving, burst errors are spread over many symbols and can thus be treated as random errors. However, for channels with long bursts, interleaving schemes need extremely long delay to be effective, which might not be tolerable in some applications, such as voice transmission. Another approach is Gallager's burst-finding scheme [5], in which a rate 1/2 systematic convolutional code is used with a modified majority-logic decoding. Gallager's scheme sacrifices random-error-correcting capability in exchange for better burst correction. A modified version of this scheme was recently suggested by Schlegel and Herro [12]; it is essentially the same as Gallager's scheme except that majority-logic decoding is replaced by a modified Viterbi decoding algorithm. Both Gallager's burst-finding scheme and Schlegel and Herro's scheme are extremely sensitive to errors in the guard space.

Two efficient coding and decoding strategies are proposed in this thesis. Both schemes employ a combination of two punctured convolutional codes and a burst detection procedure. Burst detection is accomplished by observing the increment in the cumulative path metrics from Viterbi decoding. Scheme 1 uses two punctured convolutional codes with different memories. In this scheme, a code with a relatively short memory is used with Viterbi decoding for random error correction and for burst detection.
The other code, which has a much longer memory, is used with backward sequential decoding to recover burst errors. Normally, the decoder operates in the random mode and uses the received sequence corresponding to the code with the shorter memory. An abrupt increase in the cumulative path metrics indicates that the channel is most likely in a burst. The decoder then switches from the random mode to the burst mode and starts burst error recovery. In the burst mode, starting from a chosen state, the decoder employs a backward sequential decoding algorithm to recover the corrupted data. When the channel becomes quiet, the path metrics are relatively constant, and the decoder returns to the random mode.

Scheme 2 uses two punctured codes that are derived from the same original convolutional code with complementary perforation patterns. The second code sequence is transmitted after a delay following the transmission of the first code sequence. The first code sequence is used with a Viterbi decoder to detect bursts using the same burst detection procedure as in Scheme 1. The burst detection procedure also serves to estimate the channel state. Both received sequences are then used by a second Viterbi decoder, which exploits the channel state information provided by the first decoder.

The proposed schemes have several advantages. Both schemes are adaptive to channel conditions. The parameters of the decoders can be chosen to optimize the performance of the schemes. For the same delay, these schemes outperform the conventional interleaving schemes when the channel is heavily corrupted by bursts. While Gallager's burst-finding scheme and Schlegel and Herro's scheme are sensitive to errors in the guard space, the proposed schemes can tolerate high error rates in the guard space. Different code rates can be easily implemented in these schemes. Further improvement can be obtained by introducing soft decisions into the decoders.
The thesis is organized as follows. Channel models are discussed in Chapter 2, where the Gilbert-Elliott channel model is described in detail. Chapter 2 also gives an introduction to convolutional coding and decoding techniques. A number of existing burst-error-correcting schemes are presented in Chapter 3, including interleaving schemes, Gallager's burst-finding scheme, and Schlegel and Herro's adaptive scheme. Detailed descriptions of the two proposed schemes, Scheme 1 and Scheme 2, are given in Chapters 4 and 5 respectively. Analysis and simulation results of these schemes over a Gilbert-Elliott channel model, and comparisons with other burst-error-correcting schemes, are also presented in these chapters. Finally, Chapter 6 summarizes the main results and important features of the proposed schemes, and suggests some topics for future work.

Chapter 2

Preliminaries

This chapter provides the background on channel models and convolutional codes necessary for the understanding of the thesis.

2.1 Channel Models

Communication channels can be classified into two categories: memoryless channels and channels with memory. Space and satellite channels are examples of memoryless channels. On these channels, a received signal in a given time interval depends only on the transmitted signal in that interval, not on previous signal transmissions. As a result, errors appear randomly on memoryless channels, and these channels are therefore often referred to as random-error channels. Many real communication channels, however, have memory. Errors on these channels tend to occur in clusters or bursts. This bursty behavior is common in mobile communication channels, telephone channels, and magnetic or optical recording systems [1-3]. This section introduces two simple channel models: the binary symmetric channel (BSC) model, representing a memoryless channel, and the Gilbert-Elliott channel model, representing a channel with memory.
2.1.1 The Binary Symmetric Channel Model

The binary symmetric channel (BSC) model is a simple model for discrete memoryless channels (DMC). A discrete memoryless channel with M-ary inputs and Q-ary outputs is defined by the transition probabilities P(j|i), where 0 ≤ i ≤ (M − 1) and 0 ≤ j ≤ (Q − 1). In the BSC model, M = 2 and Q = 2. The model is completely defined by a channel error rate, or crossover probability, ε, as shown in Figure 2.1.

Figure 2.1 The binary symmetric channel model.

2.1.2 The Gilbert-Elliott Channel Model

The binary symmetric channel model is inadequate for representing channels with memory, where errors tend to occur in bursts. Many models have been suggested to simulate the behavior of bursts on channels with memory, and an accurate representation of a real channel requires increasingly complicated models. These channel models range from a simple two-state Gilbert channel model to much more complex channel models with an infinite number of states [1-3]. The model used to simulate bursts in this thesis is the Gilbert-Elliott channel model [2] depicted in Figure 2.2.

Figure 2.2 The Gilbert-Elliott channel model.

The Gilbert-Elliott channel model is constructed with two states and four parameters. Let the two states G and B denote the good state and the bad state respectively. The channel stays in state G most of the time, and only makes occasional transitions to state B. Since the channel resides in state G most of the time, the probability of remaining in state G, denoted PGG, is often much larger than the probability of remaining in state B, denoted PBB.
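The two-state model just described is straightforward to simulate. The following sketch (parameter names and values are chosen for illustration, not taken from the thesis) draws an error sequence from a Gilbert-Elliott channel:

```python
import random

def gilbert_elliott(n, p_gg, p_bb, eps_g, eps_b, seed=0):
    """Generate an error sequence (1 = channel error) from a two-state
    Gilbert-Elliott model.  State G has error rate eps_g, state B has
    error rate eps_b; p_gg and p_bb are the probabilities of remaining
    in G and B respectively."""
    rng = random.Random(seed)
    state = "G"                       # assume the channel starts in the good state
    errors = []
    for _ in range(n):
        eps = eps_g if state == "G" else eps_b
        errors.append(1 if rng.random() < eps else 0)
        stay = p_gg if state == "G" else p_bb
        if rng.random() >= stay:      # state transition for the next interval
            state = "B" if state == "G" else "G"
    return errors

e = gilbert_elliott(100000, p_gg=0.999, p_bb=0.99, eps_g=0.005, eps_b=0.5)
rate = sum(e) / len(e)                # empirical average channel error rate
```

With these illustrative parameters the empirical error rate falls between εG and εB, weighted toward εG because the channel spends most of its time in the good state.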
The steady-state probability of the good state, PG, and the steady-state probability of the bad state, PB, are given as follows:

PG = PBG / (PGB + PBG)    (2.1)

and

PB = PGB / (PGB + PBG)    (2.2)

where PGB is the transition probability from state G to state B, and PBG is the transition probability from state B to state G [3]. Errors can occur in state G with probability εG, and in state B with probability εB, where εG << εB. The average channel error rate is then given by

ε = PG εG + PB εB.    (2.3)

It is necessary to introduce a number of important parameters that are useful in the analysis of the channel model. Throughout the thesis, a "burst" is defined as the number of consecutive intervals in state B before a transition to state G occurs. A "guard space" is similarly defined as the number of consecutive intervals in state G. Let us denote the average burst length by AB, and the average guard space by AG. It is shown in [3] that

AG = 1 / (1 − PGG)    (2.4)

and

AB = 1 / (1 − PBB).    (2.5)

Another important parameter is the high-to-low error density ratio A = εB/εG. This value, together with PB, reflects the bursty nature of the channel. When the channel has dense bursts, A is high and PB is low. On the other hand, when the channel has diffuse bursts, A is low and PB is high. The channel becomes a random-error channel when A approaches 1.

2.2 Convolutional Codes

Since the introduction of convolutional codes by Elias in 1955, this class of linear codes has been used in many important applications, such as space and satellite communications [10]. These contributions are attributable to the good structure of convolutional codes and to efficient decoding algorithms. This section introduces the code structure and basic decoding algorithms for convolutional codes. Punctured convolutional codes are also discussed.

2.2.1 Encoding of Convolutional Codes

Without loss of generality, let us limit our discussion to rate 1/n convolutional codes.
A rate R = 1/n, memory m convolutional code, denoted (n, 1, m), can be constructed using a shift register of (m + 1) stages, n modulo-2 adders, and a multiplexer. The connections of the n adders to the shift register are specified by a generator matrix

G(D) = [g(1)(D), g(2)(D), ..., g(n)(D)]    (2.6)

where the ith generator polynomial

g(i)(D) = g0(i) + D g1(i) + D^2 g2(i) + ... + D^m gm(i),  i = 1, 2, ..., n    (2.7)

indicates the connection of the ith adder to the shift register, and D represents a delay unit. An encoding circuit for a (2, 1, 2) convolutional code with G(D) = [1, 1 + D + D^2] is shown in Figure 2.3.

Figure 2.3 An encoder for a (2, 1, 2) convolutional code with g(1)(D) = 1 and g(2)(D) = 1 + D + D^2.

The encoding of a convolutional code is carried out one bit at a time. The content of the shift register is flushed out with all zeros before the encoding takes place. An input bit to be encoded is fed into the shift register from the left, and the shift register shifts to the right once. Output bits from the n adders are multiplexed before being transmitted over the channel. Let us denote an input sequence of length K by

u(D) = u0 + D u1 + D^2 u2 + ... + D^(K−1) u(K−1),    (2.8)

and a composite generator polynomial for an (n, 1, m) convolutional code by

g(D) = g(1)(D^n) + D g(2)(D^n) + D^2 g(3)(D^n) + ... + D^(n−1) g(n)(D^n).    (2.9)

The encoded sequence, or code word, v(D) corresponding to u(D) can be obtained by

v(D) = u(D^n) g(D).    (2.10)

The state of a convolutional encoder is defined by the first m stages of the shift register. An (n, 1, m) code has a total of 2^m possible states. The convolutional encoding and the states of the encoder are usually depicted by a tree diagram. The tree diagram for the given (2, 1, 2) code is shown in Figure 2.4.

Figure 2.4 Code tree for the (2, 1, 2) code.
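The bit-by-bit encoding procedure above can be sketched in a few lines. This is an illustrative implementation (function and variable names are my own), applied to the (2, 1, 2) code of Figure 2.3:

```python
def conv_encode(bits, gens, m):
    """Bit-by-bit encoder for a rate-1/n convolutional code.  gens lists
    each modulo-2 adder's taps as coefficients [g0, g1, ..., gm]; an m-bit
    zero tail is appended so the encoder returns to state 0."""
    reg = [0] * (m + 1)               # shift register, reg[0] = newest bit
    out = []
    for u in bits + [0] * m:          # message followed by the flushing tail
        reg = [u] + reg[:-1]
        for g in gens:                # one output bit per modulo-2 adder
            out.append(sum(c & r for c, r in zip(g, reg)) % 2)
    return out

# the (2,1,2) code of Figure 2.3: g1(D) = 1, g2(D) = 1 + D + D^2
code = conv_encode([1, 0, 1, 1], [[1, 0, 0], [1, 1, 1]], m=2)
```

Since g(1)(D) = 1, the first output stream simply repeats the input bits (including the tail), which is the systematic property discussed below.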
A simplified form of the tree diagram is the trellis diagram shown in Figure 2.5. The simplification is done by combining all the nodes having the same state at each level of the tree into a single node in the trellis. The encoding starts from the origin node, which corresponds to state 0. The code bits on every branch of the trellis or the tree are the output bits corresponding to each possible input bit. In the trellis and tree diagrams, upper branches correspond to "1" input bits, and lower branches correspond to "0" input bits. A tail of m zero bits is often added at the end to flush out the content of the shift register and let the encoder return to state 0.

Figure 2.5 Trellis diagram for the (2, 1, 2) code.

Convolutional codes can be classified as systematic or nonsystematic. In an (n, 1, m) systematic convolutional code, the first output sequence is exactly the same as the input sequence; the first generator polynomial is therefore g(1)(D) = 1. Any code that does not satisfy this condition is said to be nonsystematic.

The Hamming distance between two code words is defined to be the number of positions in which the two code words disagree. The minimum free distance df of a convolutional code is defined to be the minimum Hamming distance between any two code words v and v' corresponding to input sequences u and u', where u ≠ u'. The minimum distance dm of a convolutional code is the weight of the minimum-weight code word over the first (m + 1) input bits whose initial input bit is nonzero. The most important distance measure of a convolutional code is the minimum free distance df, since the error-correcting capability of a convolutional code is given by t = ⌊(df − 1)/2⌋. The free distance of a systematic code is known to be inferior to that of a nonsystematic code of the same memory.
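Because a code word's weight equals its Hamming distance from the all-zero code word, df can be found as the minimum-weight trellis detour that leaves the all-zero state and first remerges with it. A sketch using a shortest-path (Dijkstra) search over the states (the helper names are my own):

```python
import heapq

def free_distance(gens, m):
    """d_f as the minimum-weight detour from the all-zero state back to
    itself, found by Dijkstra search over the 2^m encoder states.
    gens lists each modulo-2 adder's taps as [g0, g1, ..., gm]."""
    def step(state, u):
        reg = (u,) + state            # register after shifting in input u
        w = sum(sum(c & r for c, r in zip(g, reg)) % 2 for g in gens)
        return reg[:m], w             # new state, weight of branch outputs
    zero = (0,) * m
    s0, w0 = step(zero, 1)            # diverge from state 0 with input 1
    heap, best = [(w0, s0)], {s0: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == zero:
            return w                  # remerged: w is the free distance
        if w > best.get(s, w):
            continue
        for u in (0, 1):
            ns, bw = step(s, u)
            if w + bw < best.get(ns, float("inf")):
                best[ns] = w + bw
                heapq.heappush(heap, (w + bw, ns))

# systematic code of Figure 2.3 vs. a nonsystematic code of the same memory
df_sys = free_distance([[1, 0, 0], [1, 1, 1]], 2)     # G(D) = [1, 1+D+D^2]
df_nonsys = free_distance([[1, 1, 1], [1, 0, 1]], 2)  # [1+D+D^2, 1+D^2]
```

For these two memory-2 codes the search gives df = 4 and df = 5 respectively, illustrating the remark that a systematic code's free distance is inferior to that of a nonsystematic code of the same memory.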
2.2.2 Decoding of Convolutional Codes

There exist various decoding algorithms for convolutional codes [10, 13-15]. The principal decoding algorithms include the Viterbi, sequential, and majority-logic decoding algorithms. In Viterbi decoding and sequential decoding, a decoding decision is based on the entire received sequence. With majority-logic decoding, on the other hand, a decoding decision is based on one portion of (m + 1) received blocks at a time. Descriptions of these algorithms are given for ease of understanding of the burst-error-correcting schemes.

2.2.2.1 Majority Logic Decoding (MLD)

Let us consider a (2, 1, m) convolutional code with the code generator polynomials

g(i)(D) = g0(i) + D g1(i) + D^2 g2(i) + ... + D^m gm(i),  i = 1, 2.    (2.11)

Given an input sequence u of length K, the parity matrix H is a K x 2K matrix of the form

H = [ g0(2) g0(1)
      g1(2) g1(1)  g0(2) g0(1)
      g2(2) g2(1)  g1(2) g1(1)  g0(2) g0(1)
      ...
      gm(2) gm(1)  gm-1(2) gm-1(1)  ...  g0(2) g0(1)
                   gm(2) gm(1)      ...           ]    (2.12)

where blank entries are zero. Let r be the received sequence corresponding to a transmitted sequence v. The syndrome sequence s is defined as

s = r H^T = (v + e) H^T = e H^T    (2.13)

where e is the channel error sequence relating r and v. The decoding decision for the ith bit is based on a truncated syndrome sequence [s]m = (si, si+1, si+2, ..., si+m). The truncated syndrome sequence [s]m is obtained from a truncated error sequence [e]m and a truncated parity matrix [H]m, and can be written as

[s]m = [e]m [H]m^T,    (2.14)

where

[e]m = (ei(1), ei(2), ei+1(1), ei+1(2), ..., ei+m(1), ei+m(2))    (2.15)

and

[H]m = [ g0(2) g0(1)
         g1(2) g1(1)  g0(2) g0(1)
         ...
         gm(2) gm(1)  gm-1(2) gm-1(1)  ...  g0(2) g0(1) ].    (2.16)

Each syndrome bit in [s]m is a sum of error estimates over (m + 1) received blocks. A set of J check sums is derived from the truncated syndrome sequence [s]m, where 1 ≤ J ≤ m.
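As a concrete check of (2.13): for a systematic (2, 1, m) code with g(1)(D) = 1, each syndrome bit reduces to s_i = r2_i + Σ_j g_j(2) r1_(i−j) (mod 2), which is zero whenever the received pair of streams is a valid code word. A small sketch (the code and error positions are chosen for illustration):

```python
def syndrome(r1, r2, g2):
    """Syndrome bits of (2.13) for a systematic (2,1,m) code with
    g1(D) = 1:  s_i = r2_i + sum_j g2_j * r1_(i-j)   (mod 2)."""
    m = len(g2) - 1
    s = []
    for i in range(len(r1)):
        acc = r2[i]
        for j in range(min(i, m) + 1):
            acc ^= g2[j] & r1[i - j]
        s.append(acc)
    return s

# message plus its 2-bit zero tail, and the parity stream of g2(D) = 1+D+D^2
u = [1, 0, 1, 1, 0, 0]
g2 = [1, 1, 1]
v2 = [(u[i] ^ (u[i - 1] if i > 0 else 0) ^ (u[i - 2] if i > 1 else 0))
      for i in range(len(u))]

clean = syndrome(u, v2, g2)                  # error-free: all-zero syndrome
hit = syndrome([u[0] ^ 1] + u[1:], v2, g2)   # one error in the first info bit
```

A single error in the information stream at position i makes exactly the (m + 1) syndrome bits s_i, ..., s_(i+m) nonzero, which is the structure the orthogonal check sums below exploit.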
Each of the J check sums can be a single syndrome bit or a combination of the (m + 1) syndrome bits. An error is said to be checked by a check sum if the check sum contains it. The main idea of the majority-logic decoding algorithm is based on the concept of orthogonality of the parity check sums. A set of J check sums is defined to be orthogonal on an error if each of these sums checks the error, and no other error bit is checked by more than one check sum. The check sums are often constructed to be orthogonal on the errors of the code sequence coming from the first adder of the encoder. With a threshold tML = ⌊J/2⌋, the error estimate for a received bit is chosen to be 1 if and only if more than tML of the J orthogonal check sums are 1. The error estimate is added to the received bit to give the decoded output. After the ith bit is decoded, a new truncated syndrome sequence is computed from the next received sequence for the (i + 1)th bit. The computation can be simplified by observing the relationship among the current truncated syndrome sequence, the previous sequence, and the previous error estimate. Error estimates can be fed back to the decoder, as in feedback decoding, or excluded from the next syndrome computation, as in definite decoding. Details of the definite and feedback decoding algorithms can be found in [10]. The syndrome computation continues until all bits are decoded.

2.2.2.2 Viterbi Decoding

In the trellis representation of a convolutional code, a path is defined by the connection between the origin node and the end node via consecutive intermediate nodes. Each path in the trellis corresponds to a possible transmitted sequence v. Given a received sequence r, the Viterbi decoding algorithm searches the trellis for the path v with the maximum log-likelihood function log Pr(r|v). On a BSC, this corresponds to computing the Hamming distance between the received sequence and all possible paths in the trellis, and selecting the path closest to the received sequence.

The Viterbi algorithm starts from the origin node and extends forward into the trellis. Given the first m branches of the received sequence, the decoder examines all 2^m distinct paths at level m, and computes the cumulative path metrics, which are either the log-likelihood functions or, if the channel is a BSC, the Hamming distances. After the reception of the (m + 1)th branch, each path extends its two branches into level (m + 1). At this level, two branches converge into each node in the trellis. The metrics of the two converging paths into each node are compared, but only the path with the better metric is retained while the other is
On a BSC, this corresponds to computing the Hamming distance between the received sequence and all possible paths in the trellis, and selecting the closest path to the received sequence. The Viterbi algorithm starts from the origin node and extends forward into the trellis. Given the first m branches of the received sequence, the decoder examines all 2' distinct paths at level m, and computes the cumulative path metrics which are either the log-likelihood functions or the Hamming distances if the channel is a BSC. After the reception of the (m + 1) th branch, each path extends its two branches into level (m + 1). At this level, two branches converge into each node in the trellis. The metrics of the two converging paths into each node are compared, but only the path with the larger metric is retained while the other is Chapter 2. Preliminaries 14 discarded. The retained path is called a survivor. All 2' survivors are obtained in the same way. In the case of a tie in the metrics, the survivor is chosen at random. From this level on, the decoding involves extending a node into the next level, computing the cumulative path metrics, comparing the path metrics in pairwise at each node, and selecting the survivors. This process continues until all received bits have been considered. At this point, the path with the largest metric among the survivors is selected to be the decoded path. In practice, since the length of the input sequence is long, the decoding decision on the bit is based on the best partial path at time (i + dr), where dT is called the truncation path length [10]. The truncation path length is often chosen to be about 4 or 5 times the code memory. The number -of computations in Viterbi decoding is constant for all channel noise levels since the number of state operations is fixed. The complexity of Viterbi decoding increases with the code memory as a result of the increase in the number of possible states. 
2.2.2.3 Sequential Decoding

Sequential decoding itself has many variations [10]. In this thesis, the discussion of sequential decoding techniques is limited to the stack, or Zigangirov-Jelinek, algorithm [14]. The description of the algorithm relies on the code tree shown previously in Figure 2.4. The decoder maintains a stack of the already searched paths, ordered in decreasing order of their metric values. The top of the stack is the path with the largest cumulative path metric, and it is the one that is extended into the tree. After each extension, the stack is reordered so that the decreasing order of the path metric values is preserved. An extended path whose accumulated metric keeps increasing will continue to be searched. As soon as its metric decreases, this path is stored in the stack according to its metric value, and the new top node is extended. The decoding process thus consists of finding the top node, extending it one step further into the tree, and reordering the stack. After the stack is initially loaded with the origin node as the top node with a zero path metric, the decoding algorithm can be described by the following steps:

1. Compute the metrics of the two successors of the top node.
2. Enter the new successors in their proper places in the stack according to their metric values.
3. Delete from the stack the entry of the node that was just extended.
4. Stop if the top node is the end node. Otherwise, return to step 1.

When the loop ends, the decoded path can be recovered from the top node and the information stored in the stack. Other variations of sequential decoding include the Fano algorithm, hybrid sequential and algebraic decoding schemes, the multiple stack algorithm, and the generalized stack algorithm [10]. Contrary to Viterbi decoding, sequential decoding is adaptable to the noise level.
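The four steps above can be sketched with a priority queue standing in for the ordered stack. The sketch below uses the standard Fano metric for a BSC (per received bit, log2 P(r|v) + 1 − R); the crossover probability, the example code, and all names are assumptions for illustration only:

```python
import heapq
from math import log2

def stack_decode(received, gens, m, p=0.05):
    """Stack (ZJ) sequential decoding sketch for a terminated rate-1/n
    code on a BSC with crossover probability p.  The heap keeps the
    largest-metric path on top (metrics are stored negated)."""
    n = len(gens)
    steps = len(received) // n
    good, bad = log2(1 - p) + 1 - 1 / n, log2(p) + 1 - 1 / n
    def branch(state, u):
        reg = (u,) + state
        return reg[:m], [sum(c & r for c, r in zip(g, reg)) % 2 for g in gens]
    # entries: (-metric, depth, state, decoded bits so far)
    stack = [(0.0, 0, (0,) * m, ())]
    while stack:
        neg, depth, state, path = heapq.heappop(stack)   # step 3 + find top
        if depth == steps:                               # step 4: end node
            return list(path[: steps - m])               # drop the zero tail
        block = received[n * depth: n * depth + n]
        tails = (0,) if depth >= steps - m else (0, 1)   # force the tail bits
        for u in tails:                                  # steps 1 and 2
            ns, out = branch(state, u)
            mu = sum(good if o == b else bad for o, b in zip(out, block))
            heapq.heappush(stack, (neg - mu, depth + 1, ns, path + (u,)))

def encode(bits, gens, m):        # companion encoder with m-bit zero tail
    state, out = (0,) * m, []
    for u in bits + [0] * m:
        reg = (u,) + state
        out += [sum(c & r for c, r in zip(g, reg)) % 2 for g in gens]
        state = reg[:m]
    return out

gens = [[1, 1, 1], [1, 0, 1]]
msg = [1, 1, 0, 1, 0, 0, 1, 0]
decoded = stack_decode(encode(msg, gens, 2), gens, 2)
```

On a quiet channel the correct path's Fano metric increases at every branch while every wrong branch takes a large negative step, so the top of the stack simply follows the correct path; this is also why the search effort grows with the noise level.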
The fact that sequential decoding is independent of the code memory makes it a better choice for the decoding of convolutional codes with long memory.

2.2.2.4 Backward Sequential Decoding

Substantial research on bidirectional decoding of convolutional codes has been done in recent years [16, 17]. In these bidirectional decoding algorithms, a code tree is searched simultaneously from the root and end nodes in opposite directions. In normal (forward) sequential decoding, starting from the origin node, the decoder explores the tree to find the best path until the end of the tree is reached. The decoding can also be done in the opposite direction, starting from the end node of the tree. The code is then reversed, as suggested in the backward code tree of Figure 2.6.

Figure 2.6 Backward code tree for the (2, 1, 2) code.

For example, an input sequence u = 1011 producing the highlighted path in Figure 2.4 corresponds, in the backward code tree of Figure 2.6, to the highlighted path generated by the reversed input sequence 1101. Operations of a backward sequential decoder are the same as those of a forward sequential decoder, except that the decoding is applied on the backward tree. It should be pointed out that a convolutional code that is good for forward sequential decoding is not necessarily as good for backward sequential decoding. A good code for backward sequential decoding, however, can be obtained from a code used with forward sequential decoding. This is achieved by reversing all connections of the given forward code. When the obtained code is viewed backwards, the backward code used with backward sequential
2.2.3 Punctured Convolutional Codes A rate b/V punctured code of memory m, denoted by (V, b, 7n), can be obtained from an original (Vo , 1, in) code, where b/V > 1/V0 , by deleting (b170 — V) bits from every bV, code bits of the original code according to a well-selected perforation pattern. A perforation pattern for a (V, b, m) punctured code obtained from a (Vo , 1, m) code is often expressed by a matrix P of V, rows and b columns. Each of the rows corresponds to the code bits from one adder of the encoder. The number of columns represents the puncturing period. The perforation matrix elements are either "0" or "1" depending on whether a code bit is deleted or kept respectively. For example, a rate 2/3 punctured code can be obtained from a rate 1/2 code using the perforation pattern [1 1] 10 . Good punctured codes for various rates can be found in [18, 19]. A perforation pattern is said to be orthogonal if it has exactly one "1" in each row 10 [19]. An example of an orthogonal perforation pattern is 1 0 . A non-orthogonal 01 perforation pattern can also be expressed as an orthogonal pattern by splitting the rows with more than one "1"s into multiple rows with only one "1" in each row. For example, can be split into two rows, and the perforation matrix the first row of the pattern [1 0 111 1 10 can be written as 1 0 where the first row and the last row come from the first row 01 of the original non-_orthogonal pattern. If the generator matrix to be used with the pattern [1 11 is G(D) = [g ( ' ) (D), g (2) (D)], the equivalent generator matrix to be used with the 10 [1 0 rewritten pattern 1 0 is G(D) = [g (1) (D), gt 2) (D), g ( ' ) (D)]. Two orthogonal patterns 01 are complementary of each other if each element in one pattern is the complement of the Chapter 2. Preliminaries^ 18 [1 0 element in the other pattern at the same position. For example, the patterns 1 0 and 01 [0 1 0 1 are complementary of each other. 
Complementary perforation patterns are useful 10 when they are used in conjunction with each other since the combination of the punctured codes with complementary codes gives the original code. Modifications must be made in the decoding of a punctured code. When a punctured code is used with Viterbi or sequential decoding, the decoder discards the metric increments corresponding to the deleted symbols in the perforation patterns. This is accomplished by inserting dummy data in the positions of the deleted symbols and assigning zero metric to them. The truncation path length used in a Viterbi decoder with a punctured code must be chosen longer than with the original code [20]. 2.3 Performance of Viterbi decoding on a Gilbert-Elliott Channel Various bounds on the decoded BER of Viterbi decoding on a BSC have appeared in many papers and textbooks [10, 21]. Among these bounds are the transfer function bound [10] and the Sliding Window Decoding (SWD) bound [21]. The transfer function bound gives good approximations of decoded BER for low channel error rates, but the bound is extremely loose at high channel error rates. On the contrary, the SWD method provides a loose bound at low channel error rates, but tends to have a better estimation at high channel error rates. The analysis of the decoded BER is even more complicated for a Gilbert-Elliott channel since both the channel and the code have memory. To simplify the analysis, the channel is assumed to be comprised of bursts and guard spaces long enough to be considered as independent "sections". A bound on the Viterbi decoded BER on a BSC can be applied separately for each section. The SWD bound is used for the burst sections with high channel Chapter 2. Preliminaries^ 19 Figure 2.7 A simplified model of a SWD. error rates and the transfer function bound is used for the quiet sections where the channel error rate is relatively small. 
The bound for the Viterbi decoded BER on a Gilbert-Elliott channel is then estimated as the average sum of the errors on these individual sections.

2.3.1 The Sliding Window Decoding (SWD) Bound for Decoded BER on a BSC

It is necessary to introduce the concept of the SWD method and its bound on the decoded error rate of Viterbi decoding; a detailed description of the method can be found in [21]. The SWD method resembles a Viterbi algorithm with a truncation path length. Given an (n, 1, m) convolutional code and a window size W, the SWD makes a decoding decision based on a block of nW received bits. The SWD can be modelled by a two-state Markov chain, in which state C represents the correct state and state I collectively represents the (2^m − 1) incorrect states, as shown in Figure 2.7. P_C denotes the steady-state probability of being in the correct state, P_CC denotes the probability of remaining in the correct state, and P_CI denotes the transition probability from state C to state I. Two assumptions are made to simplify the SWD bound:

• If the SWD is initially in the incorrect state, the worst case is assumed, where the decoder makes a decoding error half of the time.
• All transitions from the incorrect states are equiprobable.

In the SWD method, the decoder makes an error when it goes from a correct state to an incorrect state. The decoded BER is approximated by

P_be(ε) ≈ P_C P_CI + (1/2)(1 − P_C).    (2.17)

The transition probability P_CI can be bounded by

P_CI ≤ Σ_{i = t(W−1)+1}^{nW} (nW choose i) ε^i (1 − ε)^(nW − i),    (2.18)

where t(W−1) = ⌊(d(W−1) − 1)/2⌋, and d(W−1) is the minimum weight of a code word over the first W time units whose initial information block is nonzero. The transition probability P_IC can be found from the assumption that all transitions from the incorrect states are equiprobable. There are (2^m − 1) incorrect states and (2^(2m) − 2^m) transitions coming from the incorrect states. The number of transitions from the incorrect states to the correct state is (2^m − 1).
Therefore, the transition probability P_IC is given by

P_IC = (2^m − 1) / (2^(2m) − 2^m) = 1/2^m.    (2.19)

The steady-state probability P_C is obtained from P_CI and P_IC as

P_C = P_IC / (P_CI + P_IC).    (2.20)

Even though the bound is valid for any window size, W can be chosen so as to obtain a tight bound.

Simulation results of Viterbi decoding, the transfer function bound, and the SWD bound for a (2, 1, 4) code on a BSC are shown in Figure 2.8. The simulations used the code with G = [23, 35] [18], and a window size of 10 for the SWD. The simulations were run until at least 100 errors were obtained.

Figure 2.8 Decoded BER bounds for a (2, 1, 4) code on a binary symmetric channel.

From the figure, it can be seen that the transfer function bound is loose for channel error rates higher than 0.05. On the other hand, the SWD bound is excellent for channel error rates higher than 0.2, but it is loose for channel error rates in the range between 0.01 and 0.1. Below an error rate of 0.01, both bounds approach the actual simulation curve.

2.3.2 Bound for Viterbi Decoded BER on a Gilbert-Elliott Channel

Let us consider the use of a rate R convolutional code on a Gilbert-Elliott channel with channel error rates ε_G and ε_B, and transition probabilities P_GG and P_BB. Given that the all-zero input sequence is transmitted, an error event after the j-th step is defined as the elimination of the all-zero path at the j-th step in favor of a nonzero path. The error event length is the length from the first position to the last position in which the nonzero path differs from the all-zero path. Let us denote the minimum error event length of the code as l_min.
The probability that a given guard space has length l_G, denoted by Pr(l_G), and the probability that a given burst has length l_B, denoted by Pr(l_B), can be obtained from the channel parameters as follows:

Pr(l_G) = P_GG^(l_G − 1) (1 − P_GG),
Pr(l_B) = P_BB^(l_B − 1) (1 − P_BB).    (2.21)

Let P_V(ε) be the decoded BER of a Viterbi decoder on a BSC with channel error rate ε. Under the assumption that decoded errors occur in state G and in state B independently, the Viterbi decoded BER in state G, denoted by P_VG, and the Viterbi decoded BER in state B, denoted by P_VB, can be approximated by

P_VG ≈ Σ_{l_G > l_min} (l_G R) Pr(l_G) P_V(ε_G),
P_VB ≈ Σ_{l_B > l_min} (l_B R) Pr(l_B) P_V(ε_B).    (2.22)

The Viterbi decoded BER on a Gilbert-Elliott channel, P_V(ε_G, ε_B), can now be approximated by the average sum of the errors that occur separately in the bursts and in the guard spaces as

P_V(ε_G, ε_B) ≈ P_G P_VG + P_B P_VB
             = P_G Σ_{l_G > l_min} (l_G R) Pr(l_G) P_V(ε_G) + P_B Σ_{l_B > l_min} (l_B R) Pr(l_B) P_V(ε_B).    (2.23)

Simulation results and the approximated decoded BER of a Viterbi decoder using the same (2, 1, 4) code for different values of Δ = ε_B/ε_G are shown in Figure 2.9. The minimum error event length of the code is equal to 8. The channel parameters used in the simulations are P_GG = 0.9999 and P_BB = 0.99, and the average channel error rate is 0.005. The number of decoded errors was at least 100 in each simulation. From the figure, the approximation of the Viterbi decoded BER is good for Δ higher than 100, where the channel has dense bursts. For lower values of Δ, the approximation becomes loose as the assumptions are no longer valid.

Figure 2.9 Performance of a Viterbi decoder on a Gilbert-Elliott channel with m = 4, P_GG = 0.9999, P_BB = 0.99, and average channel error rate = 0.005.
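As a concrete illustration of the channel model used throughout these simulations, the following sketch draws an error pattern from a two-state Gilbert-Elliott channel. The function name and the value of the good-state error rate are illustrative; P_GG = 0.9999 and P_BB = 0.99 are the parameters quoted above, giving a mean guard space of 1/(1 − P_GG) = 10^4 bits and a mean burst length of 1/(1 − P_BB) = 100 bits.

```python
import random

def gilbert_elliott_errors(n, p_gg, p_bb, eps_g, eps_b, seed=1):
    """Draw an error pattern from a two-state Gilbert-Elliott channel.

    The state alternates between G (good) and B (bad) according to the
    self-transition probabilities p_gg and p_bb; a bit is flipped with
    probability eps_g in state G and eps_b in state B.
    """
    rng = random.Random(seed)
    good, errors = True, []
    for _ in range(n):
        errors.append(1 if rng.random() < (eps_g if good else eps_b) else 0)
        # Stay in the current state with probability p_gg (or p_bb).
        if rng.random() >= (p_gg if good else p_bb):
            good = not good
    return errors

# eps_g = 1e-4 is an illustrative guard-space error rate.
pattern = gilbert_elliott_errors(10**5, 0.9999, 0.99, 1e-4, 0.5)
```

Averaging such patterns over many seeds reproduces the bursty error statistics that the sectioned bound above is meant to capture.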
Chapter 3

Existing Burst-Error-Correcting Schemes

There exist various codes designed for burst error correction. Reed-Solomon codes in the BCH class, Berlekamp-Preparata convolutional codes, and Iwadare-Massey convolutional codes are among these so-called burst-error-correcting codes [10]. Burst-error-correcting codes are generally effective against bursts of short length, but their performance degrades considerably on channels corrupted by long bursts. The thesis will not focus on these burst-error-correcting codes, but rather on burst-error-correcting schemes that are more flexible to channel conditions. This chapter introduces a number of conventional burst-error-correcting schemes, namely interleaving schemes, Gallager's burst-finding scheme, and its modified version by Schlegel and Herro. The chapter begins with descriptions of the schemes and concludes with a discussion of their strengths and weaknesses.

3.1 Schemes with Interleaved Codes

A technique commonly used for burst error correction is interleaving. The idea of interleaving is to spread burst errors over many blocks of channel symbols to reduce the effect of bursts upon the final output. Usually, the symbols are read in by rows and read out by columns, as suggested in Figure 3.1. The number of rows λ is called the interleaving degree or interleaving depth. The number of columns must be at least equal to the required length over which a decoding decision is based. Channel bits are read out vertically and transmitted over the channel.

Figure 3.1 Interleaving procedure with interleaving depth λ.

Convolutional codes and Reed-Solomon codes [10] are often used in conjunction with interleaving over channels with memory.
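The row/column block interleaver of Figure 3.1 can be sketched directly (function names and the list-of-lists layout are illustrative):

```python
def interleave(bits, depth, width):
    """Block interleaver: write row by row, read column by column."""
    assert len(bits) == depth * width
    rows = [bits[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(bits, depth, width):
    """Inverse operation: write column by column, read row by row."""
    assert len(bits) == depth * width
    cols = [bits[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]
```

After deinterleaving, any channel burst of length at most `depth` touches each row at most once, which is exactly how a burst is converted into isolated, correctable errors.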
In general, a code that can correct t errors in g error-free bits is capable of correcting t or fewer bursts of length λ or less in λg error-free bits when it is interleaved with degree λ. Without interleaving, a rate R, memory m convolutional code is capable of correcting a maximum of t_M channel errors over (m + 1)/R received bits, where t_M is the random-error-correcting capability of the code. With an interleaving depth λ, the code can correct t_M or fewer bursts of length λ or less in an error-free guard space of length [(m + 1)/R − t_M]λ.

Similarly, Reed-Solomon codes can be interleaved to increase their burst-error-correcting capability. A Reed-Solomon code is based on nonbinary symbols from the Galois field GF(q). Consider a Reed-Solomon code with symbols from GF(2^m) which can correct t symbol errors. If each symbol is represented by its corresponding m-bit byte, the code is capable of correcting any error pattern that affects at most t bytes. Thus, a Reed-Solomon code with t-byte-error-correcting capability can correct a single burst of length (t − 1)m + 1. With an interleaving depth λ, the code can correct bursts of length λ[(t − 1)m + 1] or less.

The total delay of the schemes with interleaved codes is the time required to receive all code bits interleaved at one time. If n is the code word length of a Reed-Solomon code used in a scheme with interleaving depth λ, the total delay of the scheme is nλ. For convolutional codes, the total delay must be at least equal to the time required to receive the channel bits over one truncation path length, since this is the minimum length needed for decoding.

3.2 Gallager's Burst-Finding Scheme

Gallager's burst-finding scheme is one of the earliest schemes designed to locate and correct bursts [5]. The scheme uses a rate 1/2 systematic convolutional code and a modified majority logic decoder (MLD).
The scheme also includes a mechanism to detect the presence of bursts.

3.2.1 Encoding Circuit

To describe the scheme in more detail, let us consider an example with an original (2, 1, 5) systematic code with g^(2)(D) = 1 + D^3 + D^4 + D^5. The encoder of the scheme is based on the encoder of the original code, with a modification in the adder of the parity bit. The modified encoder of the given code is depicted in Figure 3.2. A delay of (L + M) is added to the shift register of the original encoder, where the values L and M will be used in the decoding circuit of the scheme. At time i, the first code bit v_i^(1) is sent as is, whereas the parity bit from the original encoder is added to the delayed input bit u_{i−L−M−5} to form v_i^(2).

Figure 3.2 Encoding circuit of a (2, 1, 5) systematic code in Gallager's burst-finding scheme.

3.2.2 Decoding Operations

At the receiver, the overall decoder consists of a replica of the encoder, a modified MLD, and a mode selector, as shown in Figure 3.3. The overall decoder operates in two modes, the random mode and the burst mode. The modified MLD is applied to the error estimates of the received channel bits. The MLD detects the presence of bursts and determines the operating mode. The mode selector selects error estimates with respect to the mode of the decoder. The final output is taken as the modulo-2 addition of an error estimate and its corresponding received bit.

Figure 3.3 Decoding circuit of Gallager's burst-finding scheme [10].

Let r_i^(1) and r_i^(2) be the two channel bits received at time i. The received bit r_i^(1) is fed into the encoder replica.
The output from the encoder replica is added to the second received bit r_i^(2) to form an error estimate [e^(1)_{i−L−M−5}]_b for the input at (L + M + 5) time units earlier. In the absence of errors, the received bit r^(1)_{i−L−M−5} is exactly the same as the input at time (i − L − M − 5), and the error estimate [e^(1)_{i−L−M−5}]_b is zero. The error estimate is delayed by L before it enters the MLD.

Given a convolutional code, a maximum threshold t_ML can be obtained from a maximum number of orthogonal check sums J to be used with an MLD. The J orthogonal check sums are formed on the (m + 1) syndromes computed at each time. The (2, 1, 5) systematic code in the figures has J = 4 and t_ML = 2. However, the modified MLD uses a lower threshold t'_ML = 1. The output [e^(1)_{i−2L−5}]_r from the MLD is zero when at most t'_ML check sums are ones; otherwise, the output is one. This error estimate is further delayed by M before it is fed into the mode selector. At the mode selector, two estimates, [e^(1)_{i−2L−M−5}]_b and [e^(1)_{i−2L−M−5}]_r, are available at any instant, to be chosen according to the mode of the decoder.

Let us assume that the decoder starts in the random mode, and that the modified MLD initially contains all-zero error estimates. In the random mode, the estimate [e^(1)_{i−2L−M−5}]_r from the MLD is chosen as the final error estimate. Part of the error-correcting capability of the code is sacrificed, by choosing t'_ML < t_ML, in order to detect bursts. If the number of ones in the J check sums is greater than t'_ML and less than J, the decoder switches to the burst mode. In this mode, the error estimate [e^(1)_{i−2L−M−5}]_b is selected. Once in the burst mode, if the error estimates from the MLD are zeros for M consecutive times, the decoder switches back to the random mode and again uses the error estimates from the MLD. The output of the mode selector is added to the received bit r^(1)_{i−2L−M−5} to give the final decoded output û_{i−2L−M−5}.
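The two-mode selection just described can be abstracted into a small state machine. This is a behavioural sketch only: all circuit delays are suppressed, the burst alarm is taken as a given flag per time step, and the names are illustrative.

```python
def gallager_mode_select(est_r, est_b, burst_flags, M):
    """Choose between random-mode and burst-mode error estimates.

    est_r[i]       : error estimate from the MLD (random mode)
    est_b[i]       : error estimate from the encoder replica (burst mode)
    burst_flags[i] : True when the MLD check sums signal a burst
    The decoder leaves the burst mode after M consecutive all-zero
    MLD estimates.
    """
    out, in_burst, zeros = [], False, 0
    for r, b, flag in zip(est_r, est_b, burst_flags):
        if not in_burst and flag:
            in_burst, zeros = True, 0
        if in_burst:
            out.append(b)                     # burst-mode estimate selected
            zeros = zeros + 1 if r == 0 else 0
            if zeros >= M:                    # M quiet steps: burst over
                in_burst = False
        else:
            out.append(r)                     # random-mode estimate selected
    return out
```

Each selected estimate would then be added modulo 2 to its corresponding received bit to give the final decoded output.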
The error estimates [e^(1)_{i−2L−M−5}]_b and [e^(1)_{i−2L−M−5}]_r can be fed back into the MLD to remove the effects of these estimates on future syndrome computations, as shown in the decoding circuit.

The values L and M should be selected to optimize the performance of the scheme. The value L must be considerably longer than the average burst length, so that the error estimates in the burst mode will be selected from a guard space, and it must also be smaller than the average guard space, to avoid selecting an error estimate from another burst. The value M should be long enough to ensure the end of a burst.

The original encoding and decoding circuits of Gallager's burst-finding scheme can be modified for use with any rate 1/n systematic code. The modified encoder includes the same delay (L + M) in subsequent parity branches, and the delayed input bit is added to all (n − 1) parity bits prior to transmission. In the decoding circuit, the first received bit is fed into the encoder replica, and the output from the encoder replica is added to all received bits corresponding to the (n − 1) parity bits. The results from these modulo-2 additions are used in the syndrome computation and in the burst detection of the MLD. There are (n − 1) error estimates from the (n − 1) parity bits, and majority voting can be applied to these estimates to obtain the error estimate in the burst mode. The rest of the decoding circuit remains unchanged.

The performance of the scheme is evaluated by means of computer simulations. A rate 1/2 and a rate 1/3 systematic code were used. The rate 1/2 systematic code is constructed with G = [g^(1), g^(2)] = [1, 1 + D + D^2 + D^4]. The rate 1/3 systematic code is constructed with G = [g^(1), g^(2), g^(3)] = [1, 1 + D^2 + D^3 + D^4, 1 + D + D^4]. Using the octal notation in [22], the generator matrices can be expressed as G = [1, 72] and G = [1, 56, 62] for the (2, 1, 4) and (3, 1, 4) codes respectively.
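The octal shorthand for these generators can be expanded mechanically. Bit-order conventions differ between references; the convention sketched below (each octal digit expands to three tap bits, most significant first) reproduces the correspondence quoted above, e.g. 72 → 111 010 → 1 + D + D^2 + D^4.

```python
def octal_to_taps(octal_str):
    """Expand an octal generator string into a tap-coefficient list,
    most significant bit first; trailing zeros beyond the memory are
    simply unused taps."""
    bits = []
    for digit in octal_str:
        bits.extend(int(b) for b in format(int(digit, 8), '03b'))
    return bits
```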
From the code polynomials, J = 4 and t'_ML = 1 were obtained for the (2, 1, 4) code, and J = 6 and t'_ML = 2 were obtained for the (3, 1, 4) code. The number of information bits used for each simulation was 10^7. A Gilbert-Elliott channel was simulated with P_GG = 0.9999, P_BB = 0.99, and ε_B = 0.5. The average guard space is 10^4, and the average burst length is 100. The same delay L of 1200 channel bits was used in both cases. The delay of 1200 was chosen to ensure that the delay is essentially longer than all bursts, and shorter than the regions in the guard space.

Figure 3.4 Performance of Gallager's burst-finding scheme with m = 4, P_GG = 0.9999, P_BB = 0.99, and ε_B = 0.5.

The rate 1/3 code performs better than the rate 1/2 code because of its higher decoding threshold t'_ML. When the channel error rate in the guard space is relatively high, the improvement is more noticeable, since the scheme with the rate 1/2 code becomes more vulnerable to random errors in the guard space.

3.3 Schlegel and Herro's Scheme

A modified Viterbi decoding algorithm implemented on Gallager's burst-finding scheme was recently suggested by Schlegel and Herro [12]. This scheme also uses a systematic code. The modified encoder of Schlegel and Herro's scheme is the same as that of Gallager's burst-finding scheme, except that the delay is now L instead of (L + M). The overall decoder uses a Viterbi algorithm instead of majority logic decoding to decode and to locate bursts. The burst detection procedure is based on the cumulative metrics of the partial paths in Viterbi decoding.
Since the same burst detection procedure (BDP) is used later on in the thesis, an in-depth discussion of the BDP is given first, followed by a description of the operations of Schlegel and Herro's scheme.

3.3.1 Burst Detection Procedure (BDP)

The burst detection procedure (BDP) is performed by monitoring the increase of the path metric values in Viterbi decoding over a number l of branches. Let the cumulative metrics of path j at steps i and (i − l) be γ^(j)_i and γ^(j)_{i−l} respectively. The increase in metric value of path j at time i over l branches is given by

Δ^(j)_{i,l} = γ^(j)_i − γ^(j)_{i−l}.    (3.1)

A large value of Δ^(j)_{i,l} for all survivors (j = 0, 1, ..., 2^m − 1) indicates that the channel is most likely in a burst. In the BDP, a burst is said to be detected if Δ^(j)_{i,l} > t for all survivors, where t is a value selected for a particular channel condition to ensure that most of the bursts are detected. The beginning of the detected burst is declared at l steps before the instant at which the condition holds. When the channel comes out of the burst, the metric values of the paths become stable. The BDP declares the end of the burst when Δ^(j)_{i,l} < t for at least one survivor for at least M consecutive steps.

For a given convolutional code with an error-correcting capability t_M = ⌊(d_free − 1)/2⌋, t is usually chosen to be ≥ t_M. The choice of l is usually based on a given t. The value l should be such that d_min(l − 1) > 2t, where d_min(l − 1) is the minimum weight of a code word over the first l time units with a nonzero initial information bit. Such a value of l is often approximately 3 to 6 times (m + 1)/R, where R is the rate and m is the memory of the code. For a given t, a minimum value of l can be found by computing d_min(l − 1) for all l in that range to satisfy the condition.
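The detection rule itself can be sketched as a scan over the survivor metric histories. This is a behavioural sketch: the metrics are taken as given, and the names and alarm bookkeeping are illustrative.

```python
def bdp_scan(metrics, l, t, M):
    """Scan survivor metric histories and report burst intervals.

    metrics[i][j] is the cumulative metric of survivor j at step i.
    A burst is declared when the metric increase over the last l
    branches exceeds t for every survivor; it is declared over once at
    least one survivor stays below t for M consecutive steps.
    """
    in_burst, calm, alarms = False, 0, []
    for i in range(l, len(metrics)):
        deltas = [a - b for a, b in zip(metrics[i], metrics[i - l])]
        if not in_burst:
            if min(deltas) > t:               # every survivor degraded
                in_burst, calm = True, 0
                alarms.append([i - l, None])  # burst began ~l steps back
        else:
            calm = calm + 1 if min(deltas) <= t else 0
            if calm >= M:                     # metrics stable again
                in_burst = False
                alarms[-1][1] = i
    return alarms
```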
It is straightforward to see that the choice of t is governed by the code memory, since the larger the memory m, the larger the free distance will be. The code memory should be chosen so that the error-correcting capability t_M is large enough to allow some degree of freedom in the choice of t. The choice of l and t is also affected by the channel parameters. When the channel approaches a random-error channel, there is an increasing number of error events of high weight due to errors in the good state; thus, t should be chosen higher than ⌊(d_free − 1)/2⌋. When the channel is heavily corrupted with noise and the high-to-low error density ratio is high (i.e., the channel has dense bursts), there is an increasing number of error events of lower weight, and therefore t should be chosen equal to ⌊(d_free − 1)/2⌋. After t has been selected, l can be chosen accordingly.

3.3.2 Decoding Operations

The decoding operations of the scheme are shown in Figure 3.5. The decoder consists of a normal Viterbi decoder for the (2, 1, m) systematic code, a replica of the original systematic encoder, and a mode selector. The BDP described above is incorporated in the Viterbi decoder. The decoder operates in either the random mode or the burst mode, as determined by the BDP. At any decoding step, two output estimates are available at the mode selector: one from the Viterbi decoder, and one recovered from the replica of the systematic encoder. The choice of the decoded output depends on the mode of the decoder.

Figure 3.5 Decoding circuit of a (2, 1, m) code for Schlegel and Herro's adaptive scheme.

Let r̃^(1) and r̃^(2) be the received bits for the transmitted bits v^(1) and v^(2). Since the parity bit v_i^(2) of the modified encoder is the addition of the parity bit of the original
encoder at time i and the delayed input bit u_{i−L−m}, the effect of u_{i−L−m} has to be removed from r̃_i^(2) before it is fed into the Viterbi decoder. This is done by adding the delayed output û_{i−L−m} back to the received parity bit r̃_i^(2). The output from the Viterbi decoder is further delayed by (L + m − d_T) so that the delayed output [û_{i−L−m}]_r appears at the mode selector. Notice that the truncation path length d_T is subtracted from the total delay, since the output from the Viterbi decoder at time i is [û_{i−d_T}]_r.

The replica of the systematic encoder is used to recover the input bit u_{i−L−m} from the received bits r̃_i^(1) and r̃_i^(2). The parity bit v̂_i^(2), obtained from the encoding of r̃_i^(1), is added to r̃_i^(2). If both r̃_i^(1) and r̃_i^(2) are correctly received, the result [û_{i−L−m}]_b from the addition is exactly the same as the delayed input u_{i−L−m}.

The decoding begins in the random mode. In this mode, the output is taken from the Viterbi decoder. When a burst is detected by the BDP, the decoder declares the beginning of the burst and switches to the burst mode. Note that the output is not taken from the lower circuit right after the detection of the burst, but rather (L + m − d_T) time units later, in order to synchronize the burst position with the corresponding delayed estimate [û_{i−L−m}]_b. It is crucial that L be chosen large enough so that the estimate [û_{i−L−m}]_b can be retrieved when the channel is quiet. When the end of the burst is detected, the decoder returns to the random mode and takes output bits from the Viterbi decoder.

The scheme can be extended to nonsystematic codes [12]. The generalization from a rate 1/2 systematic code to a rate 1/n systematic code is also possible. The encoder of a rate 1/n systematic code is modified by adding the delayed input bit u_{i−L−m} to the last parity bit v_i^(n). The other (n − 1) code bits remain unchanged.
At the decoder, the received bits r̃^(1), ..., r̃^(n−1) corresponding to the first (n − 1) code bits are fed directly into the Viterbi decoder, while the last received bit r̃^(n) must have the estimated output fed back from the mode selector added to it. The replica of the systematic encoder takes the first received bit r̃^(1) as its input, and the output is added to the last received bit r̃^(n) to provide the output estimate [û_{i−L−m}]_b.

Computer simulations were conducted to evaluate the performance of Schlegel and Herro's scheme with rate 1/2 and rate 1/3 systematic codes. The decoded BER of the scheme for a (2, 1, 4) systematic code and a (3, 1, 4) systematic code is shown in Figure 3.6. The rate 1/2 code is constructed with G = [1, 72], and the rate 1/3 code is constructed with G = [1, 56, 72]. The number of information bits used in each simulation was 10^7. The same Gilbert-Elliott channel was simulated with P_GG = 0.9999, P_BB = 0.99, and ε_B = 0.5. The average guard space was 10^4, and the average burst length was 100. The delay L used in both cases was 1200 channel bits. This value of the delay was selected to make sure that most of the time the burst recovery takes place in the guard space. The choices of l, t, and M were optimized as previously discussed. For the rate 1/2 code, t was often selected to be 2 or 3, and l was in the range of 12 to 20. For the rate 1/3 code, t was often selected to be 4 or 5, since the free distance of the rate 1/3 code is higher, and l was in the range of 14 to 20. The choice of M was fixed at 20. The performances of both codes are almost the same for ε_G < 10^−3, since the overall decoded BER is dominated by the burst errors. The improvement is noticeable for ε_G > 10^−3, as the errors in the guard space become more significant.

3.4 Limitations of Existing Burst-Error-Correcting Schemes

The burst-error-correcting schemes discussed above have a number of problems.
Specifically, the schemes with interleaved codes suffer a long delay because the decoder has to receive all the bits interleaved at each time before it begins decoding. The interleaving degree must be large enough to spread the bursts effectively. The longer the bursts, the higher the required interleaving degree, and the longer the delay will be. Such a long delay is not acceptable in many applications, such as voice transmission.

Figure 3.6 Decoded BER of Schlegel and Herro's scheme with P_GG = 0.9999, P_BB = 0.99, and ε_B = 0.5.

The use of a systematic convolutional code and majority logic decoding in Gallager's burst-finding scheme makes it a suboptimal decoding scheme. By choosing t'_ML < t_ML, the scheme also sacrifices some random-error-correcting capability in exchange for the burst-detecting capability. The most serious drawback of the scheme is that it is extremely sensitive to errors in the guard space during burst recovery. Therefore, Gallager's scheme is not suitable for channels with a relatively high error rate in the guard space.

Schlegel and Herro's scheme has more protection against errors in the guard space, and better performance. Although the decoder does not sacrifice any error-correcting capability in comparison to Gallager's scheme, Schlegel and Herro's scheme is still sensitive to errors in the guard space. During a burst recovery, any errors in the guard space will likely appear as errors in the output sequence. In particular, errors in the parity bit sequence not only cause errors at the output, but also corrupt the contents of the replica of the systematic encoder in the decoding circuit.
Both Gallager's scheme and Schlegel and Herro's scheme are therefore suitable only for channels with occasional long bursts and a clean guard space.

Chapter 4

The Proposed Adaptive Scheme 1

This chapter proposes an adaptive scheme to be used on channels with memory, in an attempt to overcome the drawbacks of the existing burst-error-correcting schemes. The proposed scheme (Scheme 1) is designed to recover bursts based on the notion that input bits can be decoded by finding their corresponding node in the code tree. Scheme 1 uses two punctured convolutional codes. A code with short memory is used with Viterbi decoding to correct errors in the guard space and to detect bursts. The other code has relatively long memory, and is used with backward sequential decoding solely for burst recovery. The choices of Viterbi decoding for the short-memory code and backward sequential decoding for the long-memory code take full advantage of these decoding algorithms in terms of efficiency and complexity. The chapter begins with a description of Scheme 1. An analysis of the decoded BER of Scheme 1 and computer simulations are then provided.

4.1 Encoding Circuit

The encoder requires two punctured convolutional codes, denoted code 1 and code 2. Code 1 is a rate R1 punctured code with a short memory m1, and is used in conjunction with Viterbi decoding. Code 2 is a rate R2 punctured code with a relatively long memory m2, and is used with backward sequential decoding. Information bits to be transmitted are simultaneously fed into two separate encoders for code 1 and code 2. An example of the composite encoder is shown in Figure 4.1. In the encoding circuit, the punctured output sequences v^(1) for code 1 and v^(2) for code 2 are multiplexed and transmitted over the channel. The overall code rate is therefore R = R1 R2 / (R1 + R2).
Figure 4.1 Encoding circuit of Scheme 1.

4.2 Decoding Operations

At the receiver, the received channel bits are demultiplexed into two sequences r̃^(1) and r̃^(2). Sequence r̃^(1) corresponds to the transmitted sequence v^(1) of code 1, and sequence r̃^(2) corresponds to the transmitted sequence v^(2) of code 2. The overall decoder consists of a Viterbi decoder, a sequential decoder, and an output selector. The general structure of the decoder is depicted in Figure 4.2. The Viterbi decoder, in which the same burst detection procedure (BDP) described in Section 3.3.1 is incorporated, is applied to the first received bit sequence. The backward sequential decoder is used only to recover a burst detected by the BDP. The output selector selects one of the two output bits, û_r and û_b, provided by the Viterbi and sequential decoders respectively.

The overall decoder operates in two modes: the random mode and the burst mode. Initially, the decoder is in the random mode, and decodes the received sequence r̃^(1) corresponding to code 1 with a Viterbi decoding algorithm. Output bits are selected from Viterbi decoding until a burst is detected by the BDP. The decoder records the beginning and the end of the burst. After the end of a burst is detected, the decoder remains in the random mode for N more steps, where N is greater than or equal to m2. The last m2 decoded bits of the N bits provide a starting state for backward sequential decoding. After a starting state has been obtained, the decoder switches to the burst mode to begin the burst recovery with backward sequential decoding.

Figure 4.2 Decoding circuit for the proposed Scheme 1.

Backward sequential decoding is applied to recover the state at the end of a detected burst, as suggested in Figure 4.3.
Supplied with the starting state, the backward sequential decoder uses the received sequence r^(2) from code 2 to find the state at the end of the detected burst. Since the end state is defined by the previous m2 (input) bits in the burst, only the last m2 bits in the burst are recovered. The rest of the burst, decoded by the Viterbi decoder, remains unchanged. Therefore, the memory m2 must be chosen sufficiently long so that most of the bursts are corrected. After the burst is recovered, the overall decoder switches back to the random mode and continues the decoding process with Viterbi decoding.

Figure 4.3 Diagram of a burst recovery procedure.

4.3 Analysis of Decoded BER

The decoded BER of Scheme 1 is now analyzed. In the analysis, the following assumptions are made:

• ε_G is relatively small.
• ε_B ≫ ε_G.
• Guard spaces and bursts are relatively long, so that error events in guard spaces and in bursts can be treated as statistically independent.

The assumptions above also imply that:

• Almost all decoded errors come from bursts.
• Almost no false alarm occurs in the guard space.

The first implication is quite obvious because most errors in the guard space are corrected if ε_G is relatively small, and if ε_B ≫ ε_G, errors are mostly caused by bursts. When a "burst" is detected, the Hamming distance between the received sequence and any possible transmitted code sequence over l branches must be greater than or equal to t, where t is the threshold and l is the number of examined branches in the BDP. The least condition for having a detected burst is to have at least t errors in l/R1 received bits, where R1 is the rate of code 1. If ε_G is very small, the probability of having t errors in l/R1 bits is extremely small.
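The false-alarm argument can be made concrete with a binomial tail. The sketch below is illustrative; the values of l, t, R1, and ε_G used are assumptions, not parameters taken from the thesis.

```python
from math import comb

def binom_tail(n: int, t: int, p: float) -> float:
    # P(at least t errors among n independent bits, each wrong w.p. p)
    return sum(comb(n, e) * p**e * (1 - p)**(n - e) for e in range(t, n + 1))

# Assumed numbers: l = 20 branches of a rate-2/3 code 1 -> l/R1 = 30
# examined bits, threshold t = 6, guard-space error rate eps_G = 0.005.
p_false_alarm = binom_tail(30, 6, 0.005)
print(p_false_alarm)  # on the order of 1e-8: false alarms are rare
```

With these assumed numbers the tail probability is already below 10^-7, which is consistent with the claim that burst alarms are almost never raised in a clean guard space.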
Thus, the chance of causing a burst alarm in the guard space is extremely small as well.

There are three scenarios in which decoded errors might occur. The first case is when decoded errors result from channel errors in the guard space. The second case is when a burst is undetected, and the burst is likely to cause an error event. These two cases can be seen as the cause of undetected errors. The third case is when a burst is detected, but decoded errors might occur during the burst recovery. The contribution of each case to the overall decoded BER is considered and studied in more detail in the following sections.

4.3.1 Probability of Undetected Errors

Let P_u be the probability of undetected errors, and let P_ug and P_ub be the conditional probabilities of undetected errors in the guard space and in the bursts respectively. The probability of undetected errors is given by

P_u = P_G P_ug + P_B P_ub.    (4.1)

The probability of decoded errors due to errors in the guard space can be found in a similar way to P_VG in expression (2.22). Let l_G denote the length of a guard space, and let P_V(ε_G) be the Viterbi decoded BER of code 1 on a BSC with channel error rate ε_G. The probability of decoded errors due to channel errors in the guard space can be approximated by

P_ug ≈ Σ_{l_G > l_min} (l_G R) Pr(l_G) P_V(ε_G)    (4.2)

where l_min is the minimum length of an error event, R is the code rate, and Pr(l_G) is the probability that a given guard space has length l_G.

The probability of decoded errors due to undetected bursts can be found by identifying all possible error events that are caused by undetected bursts. This task is extremely difficult. For simplicity, the following assumptions are made:

• Error events due to undetected bursts are shorter than l.
• Error events lie entirely in the bursts.

The first assumption is reasonable since the longer a burst is, the more likely it is to be detected.
The second assumption implies that, once an error event occurs in the bad state, the channel remains in the bad state for as long as the error event lasts. Under the assumption that the length of an error event is less than l, an error path is selected if the Hamming distance between the received sequence and the error path is less than the threshold t. Thus, given that the all-zero input sequence is sent, an error event of weight d is undetected if no fewer than (d − t + 1) channel errors occur over the length of the event. Let l_min(d) be the minimum length of all bursts corresponding to error events of weight d, and let B_d be the number of nonzero information bits of all error events of weight d. The ratio P̄ of the number of errors due to undetected bursts to the number of errors due to all bursts can be approximated by

P̄ ≈ [ Σ_{d ≥ df} B_d Σ_{e = d−t+1}^{d} α_d (d choose e) ε_B^e (1 − ε_B)^{l_min(d) − e} P_BB^{l_min(d) − 1} ] / [ Σ_{d ≥ df} B_d Σ_{e = ⌈d/2⌉}^{d} α_d (d choose e) ε_B^e (1 − ε_B)^{l_min(d) − e} P_BB^{l_min(d) − 1} ]    (4.3)

where df is the free distance of code 1, P_BB is the probability that the channel remains in state B, and

α_d = 1 if d is odd, 1/2 if d is even.    (4.4)

The probability of decoded errors due to undetected bursts P_ub can thus be approximated by

P_ub ≈ P̄ P_V(ε_B)    (4.5)

where P_V(ε_B) is the Viterbi decoded BER of code 1 on a BSC with channel error rate ε_B.

The probability of undetected errors of the proposed scheme is the average sum of the probability of decoded errors due to errors in the guard space and the probability of errors due to undetected bursts. The probability of undetected errors P_u can be written as

P_u = P_G P_ug + P_B P_ub ≈ P_G Σ_{l_G > l_min} (l_G R) Pr(l_G) P_V(ε_G) + P_B P̄ P_V(ε_B).    (4.6)

4.3.2 Probability of Decoded Errors Due to Detected Bursts

When a burst is detected, backward sequential decoding is used to recover the burst with the starting state provided by the Viterbi decoder.
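The guard-space term of (4.6) can be evaluated numerically once Pr(l_G) is fixed. In the Gilbert-Elliott model the sojourn time in state G is geometric with parameter P_GG, which the sketch below assumes; P_V(ε_G) is supplied by the user as a single number, and the function name is illustrative.

```python
def guard_error_term(p_gg: float, rate: float, pv_eg: float,
                     l_min: int, l_max: int = 200_000) -> float:
    # Sum over guard lengths l_G > l_min of (l_G * R) * Pr(l_G) * P_V(eps_G),
    # with Pr(l_G) = (1 - p_gg) * p_gg**(l_G - 1) (geometric sojourn).
    total = 0.0
    for l_g in range(l_min + 1, l_max):
        total += l_g * rate * (1 - p_gg) * p_gg ** (l_g - 1) * pv_eg
    return total

# With rate = pv_eg = 1 and l_min = 0 the sum reduces to the mean
# guard length 1/(1 - P_GG), a quick consistency check:
print(guard_error_term(0.9995, 1.0, 1.0, 0))  # close to 2000
```

The truncation point l_max only needs to be large compared with the mean guard length for the tail to be negligible.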
The sequential decoder uses the received bits of code 2 in the guard space to find the end state defined by the information bits in the burst. Two assumptions are made regarding the starting state and the end state:

• When the obtained starting state is incorrect, the end state is also incorrect.
• When the obtained starting state is correct, the end state is also correct.

Since the number of possible states in the sequential decoder is large due to the long memory of code 2, the chance of reaching a correct end state from an incorrect starting state is extremely small, and can be assumed to be zero, as suggested by the first assumption. The second assumption is reasonable because code 2 has a high error-correcting capability and ε_G was previously assumed to be small.

Probability that a burst is detected: Let P_d|b be the probability that a given burst is detected. Given that t is used as the threshold in the BDP, the least condition for a burst to be detected is to have at least t channel errors in the burst. The probability that a given burst is detected can be approximated as

P_d|b ≈ Σ_{l_B} Pr(l_B) Pr{making ≥ t errors in (l_B R/R1) bits} ≈ Σ_{l_B} Pr(l_B) Σ_{e=t}^{l_B R/R1} (l_B R/R1 choose e) ε_B^e (1 − ε_B)^{l_B R/R1 − e}.    (4.7)

The length (l_B R/R1) is the number of bits from code 1 received in a burst of length l_B.

Probability of a correct starting state: Let P_cs and P_is be the probabilities of obtaining a correct state and an incorrect state respectively. An incorrect starting state occurs when there is at least one error within the m2 decoded bits following the end of the detected burst. This situation can happen when

• an undetected burst occurs within m2/R received bits from the end of the first burst, which also means the guard space following the first burst is shorter than m2/R channel bits; or
• the guard space is longer than m2/R, but the errors in the guard space cause at least one decoded error.
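Expression (4.7) lends itself to direct evaluation. The sketch below takes an arbitrary burst-length distribution as a dict; the mapping of a burst of l_B channel bits to roughly l_B R/R1 code-1 bits follows the text, while all numeric inputs in the example are assumptions.

```python
from math import comb

def prob_burst_detected(burst_pmf: dict, t: int, rate: float,
                        r1: float, eps_b: float) -> float:
    # Approximates (4.7): a burst of l_B channel bits contains about
    # n = l_B * R / R1 bits from code 1; the BDP fires when at least
    # t of them are received in error.
    total = 0.0
    for l_b, pr in burst_pmf.items():
        n = max(1, round(l_b * rate / r1))
        tail = sum(comb(n, e) * eps_b**e * (1 - eps_b)**(n - e)
                   for e in range(t, n + 1))
        total += pr * tail
    return total

# Assumed example: all bursts 20 channel bits long, R = 1/3, R1 = 2/3,
# eps_B = 0.5, threshold t = 1 -> n = 10 code-1 bits per burst.
print(prob_burst_detected({20: 1.0}, 1, 1/3, 2/3, 0.5))  # 1 - 0.5**10
```

With ε_B = 0.5 and even modest burst lengths the detection probability is close to one, matching the earlier implication that almost no burst goes unnoticed.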
In other words, the starting state is correct if no undetected error occurs within the m2 bits that define the state. Thus, the probability of having a correct state can be approximated by

P_cs ≈ (1 − P_u)^m2    (4.8)

where m2 is the memory of code 2, and P_u is the probability of undetected errors given by (4.6). The probability of having an incorrect state is thus given by

P_is = 1 − P_cs ≈ 1 − (1 − P_u)^m2.    (4.9)

Probability of decoded errors due to detected bursts: Let R_b be the burst rate under a particular channel condition. On average, a burst occurs every R(λ_G + λ_B) information bits, where λ_G and λ_B are the average lengths of a guard space and a burst in channel bits, respectively. Thus, the burst rate R_b is given by

R_b = 1 / (R(λ_G + λ_B)).    (4.10)

Let us denote by P_d the probability of decoded errors due to detected bursts, and by N_avg the average number of errors decoded from a detected burst. The probability P_d can be written as

P_d = R_b P_d|b N_avg.    (4.11)

Let N_cs be the average number of decoded errors for the correct state case, and N_is be the average number of decoded errors caused by an incorrect state. The average number of errors decoded from a detected burst is computed from N_cs and N_is as follows:

N_avg = P_cs N_cs + P_is N_is    (4.12)

where P_cs and P_is are given in (4.8) and (4.9). Most decoded errors are the result of incorrect starting states and of bursts longer than m2/R, since the decoder can recover at most m2 (decoded) bits in each burst. Under the assumption that the end state is incorrect if the starting state is incorrect, all bits in a burst with an incorrect state can be considered in error. Since l is the number of branches backtracked in the BDP when a burst is detected, the minimum detected "burst" length is l/R. When the length of a detected burst is longer than m2/R, the unrecovered bits in the burst are decoded with Viterbi decoding.
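Equations (4.10) through (4.12), combined as in (4.14) below, reduce to a few lines of arithmetic. The sketch is illustrative; every input (P_d|b, λ_G, λ_B, N_cs, N_is, P_cs) would come from the preceding approximations, and the example numbers are assumptions.

```python
def detected_burst_error_prob(p_dib: float, lam_g: float, lam_b: float,
                              rate: float, p_cs: float,
                              n_cs: float, n_is: float) -> float:
    # (4.10): a burst occurs on average every R*(lam_G + lam_B) info bits.
    r_b = 1.0 / (rate * (lam_g + lam_b))
    # (4.12): average decoded errors per detected burst.
    n_avg = p_cs * n_cs + (1.0 - p_cs) * n_is
    # (4.11): probability of decoded errors due to detected bursts.
    return r_b * p_dib * n_avg

# Assumed numbers for illustration only:
print(detected_burst_error_prob(0.9, 100.0, 20.0, 0.5, 0.99, 1.0, 10.0))
```

The structure makes the sensitivity explicit: a small drop in P_cs is amplified by the much larger N_is, which is why incorrect starting states dominate the detected-burst errors.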
The average numbers of decoded errors N_cs and N_is can be approximated as

N_cs ≈ Σ_{l_B > m2/R} Pr(l_B) (l_B R − m2) P_V(ε_B)

N_is ≈ Σ_{l_B > m2/R} Pr(l_B) [m2 + (l_B R − m2) P_V(ε_B)] + Σ_{l_min < l_B ≤ m2/R} Pr(l_B) (l_B R)    (4.13)

where Pr(l_B) is the probability that the length of a given burst is l_B, P_V(ε_B) is the Viterbi decoded BER for code 1 on a BSC with channel error rate ε_B, and l_min is the minimum error event length. Summing all possible errors, the probability of decoded errors due to detected bursts is approximated by

P_d = R_b P_d|b N_avg ≈ P_d|b { 1 / (R(λ_G + λ_B)) } { P_cs N_cs + P_is N_is }.    (4.14)

The overall decoded BER of the proposed scheme is the sum of undetected and detected errors, and is given by

P_be = P_u + P_d    (4.15)

where P_u is given by (4.6) and P_d is given by (4.14).

4.4 Simulation and Numerical Results

The performance of Scheme 1 was evaluated by computer simulations based on the Monte Carlo method [23]. All simulations were run with at least 10^7 input bits. For decoded BER higher than 10^-5, each simulation produced at least 100 errors. This number of errors corresponds to a 90% confidence level with a maximum deviation of about ±18% from the mean. Since simulations for low decoded BER are more time consuming, the minimum number of errors was set to 10. Thus, for error rates between 10^-5 and 10^-6, a 90% confidence level was obtained with a maximum deviation of about ±70% from the mean. Extrapolation for lower error rates was done wherever it was needed.

4.4.1 Approximation and simulation results

Estimates of the decoded BER and simulation results for different code memories m2 are plotted in Figure 4.4. A Gilbert-Elliott channel was simulated with P_GG = 0.9995, P_BB = 0.95, ε_G = 0.005, and ε_B = 0.5. Code 1 was given by G = [23, 35] with memory m1 = 4. The free distance of code 1 is 4, and the minimum error event length l_min is equal to 12 (channel bits).
The codes used as code 2 were reversed codes obtained from the systematic codes given in [22]. Both code 1 and code 2 were punctured with the perforation pattern [1 1; 1 0] to obtain rate 2/3 codes, and the overall code rate was 1/3. It can be seen in Figure 4.4 that the simulation results are higher than the approximated values. One reason is that, in the approximation, the beginning and the end of a detected burst are assumed to be located perfectly. However, this assumption is not always correct, since there are cases in which the number of branches l is not long enough to include all the bits in the burst, or the end of a burst is declared before the burst actually ends. Another reason is that the approximation does not include the errors that occur when two detected bursts are so close that a starting state cannot be obtained. Finally, errors can also occur when there is a false alarm. In general, the simulated values are about 2 to 8 times higher than the approximated values. The discrepancy is higher for small m2. This can be seen as a limitation of the assumption that a correct end state is always obtained from a correct starting state. Given a code 2 with a short memory m2 and limited error-correcting capability, the channel error rate ε_G = 0.005 is too high for the assumption to be valid. However, the figure can be used to determine a good value of m2 for a particular channel condition.

Figure 4.4 Performance of Scheme 1 with m1 = 4, R = 1/3, P_GG = 0.9995, P_BB = 0.95, ε_G = 0.005, and ε_B = 0.5.
4.4.2 Comparisons with Existing Burst-Error-Correcting Schemes

Scheme 1 was tested under different channel conditions, and its performance was compared with those of Viterbi decoding with and without interleaving, and with Schlegel and Herro's scheme. Code 1 is the same code used in the previous figure, and code 2 is the reversed code of the rate 2/3 punctured code derived from a (2, 1, 79) systematic code [22] with perforation pattern [1 1; 1 0]. The proposed scheme used a stack algorithm for backward sequential decoding. For a fair comparison, the code used in the non-interleaving and interleaving cases is a (3, 1, 4) nonsystematic code given by G = [52, 66, 76] [10]. Schlegel and Herro's scheme used the (3, 1, 4) systematic code with G = [1, 56, 72] and df = 9. The parameters (l, t, M) in the BDP were optimized as previously discussed, although better parameters might be obtained with more tests. The interleaving depth in the interleaving scheme was chosen such that the total delay was equal to the delay in the proposed scheme, which was taken as the sum of the memory m2 and the average burst length. Since a decoding decision is made by a Viterbi decoder only after a number of channel bits equal to one truncation path length has been received, the maximum interleaving depth can be calculated as

d_int = t_D / (d_T / R)    (4.16)

where R is the code rate, d_T is the truncation path length, and t_D is the total delay. A Gilbert-Elliott channel was simulated with P_GG = 0.9995, P_BB = 0.95, and average channel error rate 0.005. Figure 4.5 shows the performances of Viterbi decoding with and without interleaving, and the proposed scheme for different values of the high-to-low error density ratio A. The error rates ε_G and ε_B are chosen such that the average channel error rate is equal to 0.005. As A increases, the channel goes from a random-error channel to a channel with dense bursts.
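The depth computation in (4.16) is a single integer division once the units are lined up; the sketch below makes the unit handling explicit (the example numbers are assumptions, not the thesis parameters):

```python
def max_interleave_depth(total_delay_bits: int, trunc_len: int,
                         rate: float) -> int:
    # (4.16): each Viterbi decision waits for d_T / R channel bits,
    # so d_int interleaved decoders need d_int * (d_T / R) bits of delay.
    return int(total_delay_bits // (trunc_len / rate))

# Assumed: total delay budget 300 channel bits, truncation length 30,
# code rate 1/2 -> each decision costs 60 channel bits of delay.
print(max_interleave_depth(300, 30, 0.5))  # -> 5
```

The floor division reflects that a fractional interleaving depth is not usable, so a delay budget just short of a multiple of d_T/R buys one depth less.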
Figure 4.5 Comparison of decoded BER with m1 = 4, m2 = 79, R = 1/3, P_GG = 0.9995, P_BB = 0.95, and average channel error rate = 0.005.

Scheme 1 performs well when the channel is heavily corrupted. For A > 10^3, the decoded BER of the proposed scheme is less than half of the decoded BER of the interleaving scheme with interleaving depth d_int = 3. For A > 10^2, the proposed scheme always outperforms the interleaving scheme. However, when the guard space is afflicted with a relatively high error rate, more and more decoded errors occur in these regions. Two major factors contribute to the low performance of the proposed scheme on random-error channels:

• Fewer bursts are detected, so the proposed scheme relies mainly on the strength of code 1 with Viterbi decoding. However, the proposed scheme uses a punctured code (code 1) with a lower random-error-correcting capability than the code used in the non-interleaving and interleaving cases.
• More incorrect starting states due to errors in the guard space cause more decoded errors in bursts.

The proposed scheme is not affected by changing channel conditions as much as Schlegel and Herro's scheme is. The improvement offered by the proposed scheme is most significant for 10 < A < 10^3. Better performance with the proposed scheme could have been achieved if nonsystematic codes had been used with backward sequential decoding. However, at present, nonsystematic long-memory codes are not yet available.

4.5 Concluding Remarks

Scheme 1 has a number of advantages over the existing burst-error-correcting schemes.
Unlike the schemes with interleaved codes, where channel information is destroyed, Scheme 1 makes use of channel information in the burst recovery to optimize the error control performance. This scheme is more robust than Schlegel and Herro's scheme in that errors in the guard space are corrected by code 1 before they are utilized in any burst recovery. Scheme 1 is expected to perform well for phased bursts, defined as bursts of some maximum length b separated by a guard space of g > (b + d_T), where d_T is the truncation path length of code 1. If this condition is satisfied, one can always find a code with a memory m2 > b to recover all bursts. The proposed scheme undoubtedly has some limitations. First, when an incorrect starting state is obtained for a burst recovery, many information bits in the detected burst may be decoded in error. Second, the performance over a random-error channel is not as good as expected because of the limited error-correcting capability of the punctured code (code 1) used. Finally, the scheme cannot yet be applied to channels with relatively long bursts, because good long-memory codes to be used with sequential decoding are not available.

Chapter 5

The Proposed Adaptive Scheme 2

Scheme 1 has a number of limitations since it does not make good use of the channel information in the detected bursts. When a burst is detected, the received channel bits in the burst are not used in the burst recovery process. This chapter proposes a second scheme which resolves the limitations of Scheme 1 and, at the same time, overcomes the drawbacks of the existing burst-error-correcting schemes. Scheme 2 uses two punctured codes obtained from an original code with two complementary perforation patterns. The transmissions of the two code sequences are separated by a delay.
Viterbi decoding is applied to the combination of the two received sequences with reliability factors determined by the BDP described in Section 3.3.1. The encoding and decoding circuits of Scheme 2 are given at the beginning of this chapter. The analysis of the decoded BER of Scheme 2 and computer simulation results are then provided.

5.1 Encoding Circuit

The encoding circuit of the scheme is shown in Figure 5.1. It consists of the encoders for two punctured convolutional codes of rates R1 and R2, a delay L, and a multiplexer. Both punctured codes are obtained from the same original code with complementary perforation patterns. In the encoding circuit diagram, the i-th input u_i is fed simultaneously into both encoders. The output block v_i^(2) of code 2 is delayed by L before it is multiplexed with the output block v_i^(1) of code 1. The multiplexed bit sequence is then transmitted over the channel with an overall code rate R = R1 R2 / (R1 + R2).

Figure 5.1 Encoding circuit of Scheme 2.

5.2 Decoding Operations

The decoding circuit diagram of Scheme 2 is shown in Figure 5.2. The decoding circuit consists of two Viterbi decoders and a delay L. The decoding process involves detecting bursts, estimating channel states, and decoding using a Viterbi algorithm. Burst detection and channel state estimation are provided by the first Viterbi decoder, while decoding is performed by the second Viterbi decoder. Let r_i^(1) and r_{i−L}^(2) be the received blocks for the possible transmitted code blocks v_i^(1) and v_{i−L}^(2) respectively. The first Viterbi decoder is applied to the received block r_i^(1) to detect bursts using the same BDP as in Scheme 1.
The first decoder also assigns estimated channel states to all received bits in the following way:

• When there is no detected burst, each received bit is assumed to be in state G.
• When a burst is detected, the first decoder declares the beginning of the burst l branches earlier, and each channel bit received after the beginning of the burst is assumed to be in state B.
• After the end of a detected burst, the received bits are again assumed to be in state G.

Figure 5.2 Decoding circuit of Scheme 2.

The second decoder is applied to both r_{i−L}^(1) and r_i^(2); the latter is the code block received at time i but carrying the input delayed by L time units. Notice that both code blocks correspond to a possible input bit at time (i − L). Upon reception of the transmitted bits from the two codes, the second decoder proceeds with a modified Viterbi algorithm. The trellis for the decoder is the combination of the trellises of the two punctured codes. The basic decoding steps are exactly the same as in normal Viterbi decoding, where the path metric values are the Hamming distances between the combined received sequence and possible transmitted sequences. However, the metric for each bit is now weighted by a reliability factor which depends on the estimated channel state. It is shown in [24] that this reliability factor is given by

β_G = log((1 − ε_G)/ε_G)  if the channel is estimated to be in state G
β_B = log((1 − ε_B)/ε_B)  if the channel is estimated to be in state B.    (5.1)

Given a received bit x_i and a corresponding code bit y_i in a possible transmitted sequence, the metric of y_i is given by (x_i ⊕ y_i) log((1 − ε_G)/ε_G) if the channel is estimated to be in state G, or by (x_i ⊕ y_i) log((1 − ε_B)/ε_B) if the channel is estimated to be in state B at time i.
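The weighted metric of (5.1) can be sketched directly; note that with ε_B = 0.5 the bad-state reliability log((1 − ε_B)/ε_B) is zero, so bits flagged as burst bits contribute nothing to the path metric. The function names and example values below are illustrative, not from the thesis.

```python
from math import log

def reliability(eps: float) -> float:
    # (5.1): beta = log((1 - eps) / eps) for the estimated state.
    return log((1.0 - eps) / eps)

def weighted_metric(received, candidate, states, eps_g, eps_b) -> float:
    # Weighted Hamming distance: each mismatch costs beta_G or beta_B
    # depending on the estimated channel state of that bit position.
    total = 0.0
    for x, y, s in zip(received, candidate, states):
        total += (x ^ y) * reliability(eps_g if s == "G" else eps_b)
    return total

# Mismatches at a G position and a B position; with eps_B = 0.5 only
# the G mismatch is penalized.
print(weighted_metric([1, 0, 1], [0, 0, 0], ["G", "G", "B"], 0.01, 0.5))
```

This is exactly the mechanism that makes the scheme adaptive: erasing the metric contribution of suspected burst positions lets the delayed replica from the other code decide those bits.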
5.3 Analysis of Decoded BER

The decoded BER of Scheme 2 is now analyzed. A decoding decision is influenced not only by the channel parameters, but also by the channel state estimation. A complete model of the decoder should therefore contain a Gilbert-Elliott channel model and a channel state estimator, as shown in Figure 5.3. In the model, P'_GG is the probability of a correct estimation of a bit in state G, and P'_BB is the probability of a correct estimation of a bit in state B.

Figure 5.3 Channel model and channel state estimator.

Recall that the second Viterbi decoder uses the received bits at time i and time (i + L/R). Let us assume that the guard spaces and the bursts are relatively long, so that the received bits can be considered to be taken from long channel sections. Thus, the decoded BER of Scheme 2 can be averaged over possible combinations of these channel sections. In general, let us denote the channel state at time i as S1, the channel state at time (i + L/R) as S2, the probability of being in S1 at time i and in S2 at time (i + L/R) as Pr{S1, S2}, and the probability of error given S1 and S2 as P_be|(S1,S2). The decoded BER of Scheme 2 can be found as the average sum of the products of Pr{S1, S2} and P_be|(S1,S2) over all possible states S1 and S2, and is given by

P_be = Σ_{S1} Σ_{S2} Pr{S1, S2} P_be|(S1,S2).    (5.2)

In the above expression, Pr{S1, S2} is determined by the channel statistics, and P_be|(S1,S2) is affected by the channel state estimation and the performance of the combined code. Let us now denote a good state at time i as G^(i), and a bad state at time i as B^(i). Similarly, a good state at time (i + L/R) is denoted as G^(i+L/R), and a bad state at time (i + L/R) is denoted as B^(i+L/R).
The expression (5.2) for the decoded BER of Scheme 2 can now be written as

P_be = Pr{G^(i), G^(i+L/R)} P_be|(G^(i),G^(i+L/R)) + Pr{G^(i), B^(i+L/R)} P_be|(G^(i),B^(i+L/R)) + Pr{B^(i), B^(i+L/R)} P_be|(B^(i),B^(i+L/R)) + Pr{B^(i), G^(i+L/R)} P_be|(B^(i),G^(i+L/R)).    (5.3)

These probabilities are investigated in the following sections.

5.3.1 Effects of Channel Parameters

Given that the channel is in state G at time i, let the probability of being in state G at time (i + L/R) be denoted by P_G(i+L/R)|G(i), and the probability of being in state B at time (i + L/R) be denoted by P_B(i+L/R)|G(i). Similarly, given that the channel is in state B at time i, the probability of being in state B at time (i + L/R) is denoted by P_B(i+L/R)|B(i), and the probability of being in state G at time (i + L/R) is denoted by P_G(i+L/R)|B(i). It is shown in [25] that

P_G(i+L/R)|G(i) = P_G + P_B (P_GG + P_BB − 1)^(L/R)
P_B(i+L/R)|G(i) = 1 − P_G(i+L/R)|G(i)
P_B(i+L/R)|B(i) = P_B + P_G (P_GG + P_BB − 1)^(L/R)
P_G(i+L/R)|B(i) = 1 − P_B(i+L/R)|B(i).    (5.4)

Using (5.4), the unconditional probabilities of being in state S1 at time i and in state S2 at time (i + L/R), where S1 and S2 are either state G or state B, are given by

Pr{G^(i), G^(i+L/R)} = P_G P_G(i+L/R)|G(i)
Pr{G^(i), B^(i+L/R)} = P_G − Pr{G^(i), G^(i+L/R)}
Pr{B^(i), B^(i+L/R)} = P_B P_B(i+L/R)|B(i)
Pr{B^(i), G^(i+L/R)} = P_B − Pr{B^(i), B^(i+L/R)}    (5.5)

where P_G and P_B are the steady-state probabilities of state G and state B respectively.

5.3.2 Effects of Channel State Estimation

There are many possible ways in which a channel state can be incorrectly estimated, but for simplicity, only the following situations are considered:

• An incorrect estimation of a bit in state G is made when the bit is part of a guard space in which a false alarm occurs.
• An incorrect estimation of a bit in state B is made when the bit is part of an undetected burst.
Estimation in state G: Recall that the BDP uses the received bits from code 1. Let t be the threshold, and let l be the number of examined branches in the BDP. The probability of an incorrect estimation of a bit in state G can be approximated as the probability of having at least t errors in the (l/R1) received bits that are examined by the BDP at one time. The probability of an incorrect state estimation of a bit in state G can thus be approximated by

P'_GB ≈ Σ_{e=t}^{l/R1} (l/R1 choose e) ε_G^e (1 − ε_G)^{l/R1 − e}.    (5.6)

The probability of a correct state estimation of a bit in state G is given by

P'_GG = 1 − Σ_{e=t}^{l/R1} (l/R1 choose e) ε_G^e (1 − ε_G)^{l/R1 − e}.    (5.7)

Estimation in state B: A correct estimation of a bit in state B is made when a burst is correctly detected. Given a burst of length l_B, if the burst is longer than l/R channel bits, the minimum number of channel errors required for a burst alarm is t in the l/R1 bits from code 1 examined by the BDP. However, if the burst length l_B is shorter than l/R, the channel errors must occur within l_B, and the minimum number of channel errors required for a burst alarm is t in the l_B R/R1 received bits from code 1. Averaging over all possible burst lengths, we obtain the probability of a correct state estimation of a bit in state B as

P'_BB ≈ (1/λ_B) [ Σ_{l_B > l/R} l_B Pr(l_B) Σ_{e=t}^{l/R1} (l/R1 choose e) ε_B^e (1 − ε_B)^{l/R1 − e} + Σ_{l_B ≤ l/R} l_B Pr(l_B) Σ_{e=t}^{l_B R/R1} (l_B R/R1 choose e) ε_B^e (1 − ε_B)^{l_B R/R1 − e} ]    (5.8)

where λ_B is the average burst length. The probability of an incorrect state estimation of a bit in state B is given by

P'_BG = 1 − P'_BB.    (5.9)

5.3.3 Probability of decoded BER of Scheme 2

The second decoder weights the reliability of the bits received at time i and at time (i + L/R) according to their estimated channel states. If a channel state is incorrectly estimated, the decoder will not use the right reliability value. Regardless of whether or not a
channel state estimation is correct, let β1 be the reliability used for the i-th received bit, and β2 be the reliability used for the (i + L/R)-th received bit. Assume that the i-th bit is received with error rate ε1, and the (i + L/R)-th bit is received with error rate ε2. The average increase in the combined metric value for the two bits is therefore given by (β1 ε1 + β2 ε2). An average channel error rate ε* which gives the same average metric increase is thus given by

ε* = (β1 ε1 + β2 ε2) / (β1 + β2)    (5.10)

where β1 (β2) can be either β_G or β_B, and ε1 (ε2) can be either ε_G or ε_B. Hence, there are sixteen cases for ε_G, ε_B, β_G, and β_B to appear in the computation of ε*. The conditional probability of error P_be|(S1,S2) in (5.2) depends on the estimation of S1, the estimation of S2, and the performance of Viterbi decoding using the combined code. Let us denote the probability of a correct estimation of a channel state S as P_c(S) and the probability of an incorrect estimation of S as P_i(S), where S can be either state G or state B. Given a state G, P_c(G) = P'_GG and P_i(G) = P'_GB. Similarly, given a state B, P_c(B) = P'_BB and P_i(B) = P'_BG. These probabilities have been discussed, and their approximations are given in Section 5.3.2. Let P_V(ε*) be the Viterbi decoded BER of the combined code on a BSC with channel error rate ε*. In general, the probability of error P_be|(S1,S2) can be approximated by

P_be|(S1,S2) ≈ P_c(S1) P_c(S2) P_V(ε*_cc) + P_c(S1) P_i(S2) P_V(ε*_ci) + P_i(S1) P_c(S2) P_V(ε*_ic) + P_i(S1) P_i(S2) P_V(ε*_ii)    (5.11)

where ε*_cc, ε*_ci, ε*_ic, and ε*_ii are given by (5.10) according to the estimated channel states. With respect to the possible combinations of channel states, we have the following four approximations of P_be|(S1,S2):
P_be|(G^(i),G^(i+L/R)) ≈ P'_GG P'_GG P_V(ε* = (β_G ε_G + β_G ε_G)/(β_G + β_G)) + P'_GG P'_GB P_V(ε* = (β_G ε_G + β_B ε_G)/(β_G + β_B)) + P'_GB P'_GG P_V(ε* = (β_B ε_G + β_G ε_G)/(β_B + β_G)) + P'_GB P'_GB P_V(ε* = (β_B ε_G + β_B ε_G)/(β_B + β_B))    (5.12)

P_be|(B^(i),B^(i+L/R)) ≈ P'_BB P'_BB P_V(ε* = (β_B ε_B + β_B ε_B)/(β_B + β_B)) + P'_BB P'_BG P_V(ε* = (β_B ε_B + β_G ε_B)/(β_B + β_G)) + P'_BG P'_BB P_V(ε* = (β_G ε_B + β_B ε_B)/(β_G + β_B)) + P'_BG P'_BG P_V(ε* = (β_G ε_B + β_G ε_B)/(β_G + β_G))    (5.13)

and

P_be|(G^(i),B^(i+L/R)) = P_be|(B^(i),G^(i+L/R)) ≈ P'_GG P'_BB P_V(ε* = (β_G ε_G + β_B ε_B)/(β_G + β_B)) + P'_GG P'_BG P_V(ε* = (β_G ε_G + β_G ε_B)/(β_G + β_G) = (ε_G + ε_B)/2) + P'_GB P'_BB P_V(ε* = (β_B ε_G + β_B ε_B)/(β_B + β_B) = (ε_G + ε_B)/2) + P'_GB P'_BG P_V(ε* = (β_B ε_G + β_G ε_B)/(β_B + β_G)).    (5.14)

Substituting (5.5) into (5.3), the decoded BER of Scheme 2 can now be approximated as

P_be ≈ P_G { P_G(i+L/R)|G(i) P_be|(G^(i),G^(i+L/R)) + P_B(i+L/R)|G(i) P_be|(G^(i),B^(i+L/R)) } + P_B { P_B(i+L/R)|B(i) P_be|(B^(i),B^(i+L/R)) + P_G(i+L/R)|B(i) P_be|(B^(i),G^(i+L/R)) }    (5.15)

where P_be|(G^(i),G^(i+L/R)), P_be|(B^(i),B^(i+L/R)), and P_be|(G^(i),B^(i+L/R)) = P_be|(B^(i),G^(i+L/R)) are given by (5.12), (5.13), and (5.14) respectively.

5.4 Simulation and Numerical Results

The performance of Scheme 2 was evaluated by extensive computer simulations on a Gilbert-Elliott channel. These simulations were run with a 90% confidence level using the Monte Carlo method. The minimum number of errors was 100 for decoded BER > 10^-5, and 10 for decoded BER between 10^-5 and 10^-6. The confidence interval was ±18% from the mean in the first case, and ±70% from the mean in the second case. Extrapolation was done for lower decoded BER. To show that the proposed scheme is adaptive to channel conditions and performs equally well on channels with bursts and on channels with random errors, decoded error rates were obtained for different values of A and for various average burst lengths.
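The chain from (5.4) to (5.15) is easy to mechanize. The sketch below computes the L/R-step transition probabilities of the two-state channel and averages user-supplied conditional BERs; the function names and all numeric inputs are illustrative assumptions.

```python
def l_step_probs(p_gg: float, p_bb: float, n: float):
    # (5.4): with lam = P_GG + P_BB - 1 and steady-state
    # P_G = (1 - P_BB) / ((1 - P_GG) + (1 - P_BB)),
    # P(G at i+n | G at i) = P_G + P_B * lam**n, symmetrically for B.
    p_g = (1 - p_bb) / ((1 - p_gg) + (1 - p_bb))
    p_b = 1.0 - p_g
    lam = p_gg + p_bb - 1.0
    return p_g, p_b, p_g + p_b * lam**n, p_b + p_g * lam**n

def scheme2_ber(p_gg, p_bb, n, pbe_gg, pbe_gb, pbe_bb, pbe_bg):
    # (5.15): average the conditional BERs over the joint state
    # distribution at times i and i + L/R.
    p_g, p_b, pgg_n, pbb_n = l_step_probs(p_gg, p_bb, n)
    return (p_g * (pgg_n * pbe_gg + (1 - pgg_n) * pbe_gb)
            + p_b * (pbb_n * pbe_bb + (1 - pbb_n) * pbe_bg))

# One-step probabilities must reproduce the transition matrix itself:
print(l_step_probs(0.9995, 0.95, 1)[2])  # -> 0.9995 (up to rounding)
```

As n grows the factor lam**n vanishes, so the two times become independent and the joint distribution factors into the steady-state probabilities, which is what makes a large delay L effective against bursts.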
Decoded error rates were also obtained for various values of εG. The Gilbert-Elliott channel was simulated with the fixed values PG = 0.9901 and PB = 0.0099.

5.4.1 Approximation and simulation results

Estimations of the decoded bit error rates of Scheme 2 and simulation results with various values of εG and εB are plotted in Figures 5.4 and 5.5. The average channel error rate was kept at 0.005 for all channel conditions. The channel was simulated with PGG = 0.9995 and PBB = 0.95 in Figure 5.4, and PGG = 0.9999 and PBB = 0.99 in Figure 5.5. Code 1 and code 2 are obtained from the original (2, 1, 4) code given by G = [23, 35] with the complementary perforation patterns [1 1; 1 0] and [1 1; 0 1]. In general, the approximated values of decoded BER for small high-to-low error density ratio A and short bursts are loose, as shown in Figure 5.4. The differences can be as much as two orders of magnitude. Again, in Figure 5.5, for A < 10³, the simulation results are much lower than the approximation, even though the discrepancy is less than that in Figure 5.4. The approximation is good for higher A. The reason is that the approximation of Viterbi decoded BER used in the analysis is itself relatively good for high values of A, and loose for low values of A.

Results were also obtained for different values of the delay L, and are plotted in Figure 5.6. The actual simulation results are higher than the approximation. This could be due to the fact that there are burst errors caused by not backtracking far enough or by switching to the random mode too early. However, both the simulation results and the approximation follow the same slope. This figure can be used to select the delay L to optimize the performance of the scheme.

5.4.2 Comparisons with Existing Burst-Error-Correcting Schemes

Comparisons are now done with Viterbi decoding with and without interleaving, Gallager's burst-finding scheme, and Schlegel and Herro's adaptive scheme.
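The Gilbert-Elliott channel used throughout these simulations can be sketched as follows. The function name and seed are hypothetical; the default parameters mirror the values quoted above (PGG = 0.9995, PBB = 0.95, average error rate ≈ 0.005):

```python
import random

def gilbert_elliott(n, p_gg=0.9995, p_bb=0.95, eps_g=0.0025, eps_b=0.25, seed=1):
    """Generate n error indicators (1 = channel bit flipped) from a two-state
    Gilbert-Elliott channel: state G errs w.p. eps_g, state B w.p. eps_b."""
    rng = random.Random(seed)
    errors, state = [], 'G'
    for _ in range(n):
        eps = eps_g if state == 'G' else eps_b
        errors.append(1 if rng.random() < eps else 0)
        if state == 'G':
            state = 'G' if rng.random() < p_gg else 'B'
        else:
            state = 'B' if rng.random() < p_bb else 'G'
    return errors

# Stationary state probabilities implied by the transition probabilities:
p_b = (1 - 0.9995) / ((1 - 0.9995) + (1 - 0.95))   # ≈ 0.0099, as quoted above
p_g = 1 - p_b                                       # ≈ 0.9901
avg_rate = p_g * 0.0025 + p_b * 0.25                # ≈ 0.005 average error rate
```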
In all cases, the total delay and the overall code rates were chosen to be the same for all schemes. The same code 1 and code 2 were used in the proposed scheme. The non-interleaving and interleaving cases employed the (3, 1, 4) nonsystematic code given by G = [52, 66, 76] with free distance df = 12. For Gallager's burst-finding scheme, the (3, 1, 4) systematic code with G = [1, 56, 62] and minimum distance dM = 7 was chosen in order to have a high decoding threshold tML. Schlegel and Herro's scheme used the (3, 1, 4) systematic code with G = [1, 56, 72] and df = 9.

[Figure 5.4 Performance of Scheme 2 with m = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, and average channel error rate = 0.005.]
[Figure 5.5 Performance of Scheme 2 with m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, and average channel error rate = 0.005.]
[Figure 5.6 Performance of Scheme 2 with m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, εG = 0.005, and εB = 0.5.]

Effect of A: For the results in Figures 5.7 and 5.8, simulations were run for different values of A. The average burst length is 20 channel bits in Figure 5.7 and 100 channel bits in Figure 5.8. Scheme 2 outperforms the other schemes for all channel conditions. Improvement over the interleaving scheme is considerable for A higher than 1000, which can be as much as
one order of magnitude in decoded BER. On the other hand, improvement over Schlegel and Herro's scheme is more appreciable for A less than 100. It is roughly one order of magnitude improvement in the performance when A = 100, but it can be as high as three orders of magnitude when A = 10. Since simulations were only run to obtain at least 10 errors for error rates < 10⁻⁵, more simulation runs are expected to show that the curves for Viterbi decoding with or without interleaving and for Scheme 2 should converge as A approaches 1, where the channel tends to behave as a random-error channel.

Effect of average burst length (AB): The error performances are again compared for different AB in Figures 5.9 and 5.10. It can be seen that Scheme 2 performs well in almost all cases. Meanwhile, Schlegel and Herro's scheme is effective only when the channel has long bursts with a high error rate in state B. It is even worse than the interleaving scheme when εB is relatively small and εG is high, as shown in Figure 5.9 where εB = 0.25 and εG = 0.005. The improvement of Scheme 2 over the interleaving scheme can be as high as ten times when AB is longer than 10. Compared to Schlegel and Herro's scheme, Scheme 2 is even two orders of magnitude better in terms of decoded BER. However, the performance is slightly worse when AB is small. This is due to the fact that the average guard space (AG) is then relatively small, and the probability of a burst at time i and another burst at time (nL + i) is also increased. As a result, the performance of Scheme 2 tends to be slightly worse for small AB.

Effect of εG: The proposed scheme can tolerate a relatively high εG, as can be seen from Figures 5.11 to 5.14. While the decoded error rates in both Gallager's scheme and Schlegel and Herro's scheme rapidly increase as εG gets higher, Scheme 2 shows only a slight increase in the decoded BER.
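The guard-space argument above (a burst at time i recurring near time nL + i) can be quantified with the L-step transition probabilities of the two-state chain. The helper name and step counts below are illustrative; the parameters are those of the 20-bit-burst channel used in Figure 5.7:

```python
def l_step_bb(p_gg, p_bb, steps):
    """P(state B after `steps` transitions | state B now) for the
    two-state chain with self-transition probabilities P_GG and P_BB."""
    pg, pb = 0.0, 1.0            # start in the bad state
    for _ in range(steps):
        pg, pb = pg * p_gg + pb * (1 - p_bb), pg * (1 - p_gg) + pb * p_bb
    return pb

# With PGG = 0.9995 and PBB = 0.95 (average burst length 20), two bits a
# short interval apart are both in bursts far more often than the
# stationary probability (≈ 0.0099) would suggest:
near = l_step_bb(0.9995, 0.95, 10)      # bits only 10 transitions apart
far = l_step_bb(0.9995, 0.95, 5000)     # relaxes to the stationary value
```

This is why a short average guard space degrades Scheme 2: the delayed copy of a bit is more likely to land in a second burst.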
In these figures, the decoded BER of Scheme 2 remains almost constant for εG < 10⁻³. For εG from 10⁻³ to 10⁻², the maximum increase in decoded BER of the proposed scheme is less than four times, as shown in Figure 5.12, and only one and a half times, as in Figure 5.14. On the contrary, the decoded BER of Gallager's scheme and that of Schlegel and Herro's scheme start to show significant increases when εG is as small as 10⁻⁴. The decoded BER of Gallager's scheme at εG = 10⁻² can be two orders of magnitude higher than that at εG = 10⁻⁴. The increase in decoded BER of Schlegel and Herro's scheme is less severe, but in most figures this increase is about one order of magnitude.

Effect of code rates: For the results obtained in the previous figures, the overall code rate of the proposed scheme was 1/3. Higher code rates can be achieved without significant degradation in the performance as long as code 1 used in the burst detection is good enough to detect most of the bursts. Figure 5.15 shows the performance of the scheme for various code rates. Simulations were run using an original (2, 1, 5) code given by G = [133, 171]. The two rate 2/3 punctured codes were obtained from the original rate 1/2 code with the complementary perforation patterns [1 1; 1 0] and [1 1; 0 1]. The two rate 3/4 punctured codes were obtained with the complementary patterns [1 1 1; 0 1 0] and [0 1 0; 1 1 1]. The delay in channel bits was chosen to be the same for all simulations. The code rates in the figure are expressed as the overall code rates. For εG < 10⁻³, compared to the performance with the rate 1/4 code, the decoded BER of the scheme with the rate 1/3 code is about one and a half times higher, and the decoded BER with the rate 3/8 code is approximately twice as high. For higher εG, the differences in the performance increase.
At εG = 0.01, the decoded BER with the rate 3/8 code is about ten times higher, and the decoded BER with the rate 1/3 code is about five times higher, than that of the rate 1/4 code.

[Figure 5.7 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9995, and PBB = 0.95.]
[Figure 5.8 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9999, and PBB = 0.99.]
[Figure 5.9 Comparison of decoded BER for m = 4, R = 1/3, εG = 0.005, and εB = 0.25.]
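How a perforation pattern thins a rate-1/2 output stream can be sketched as follows. The rate-2/3 pattern pair is the one discussed in section 5.4.1, and the bit-ordering convention (column-major per puncturing period) is an assumption:

```python
def puncture(coded_bits, pattern):
    """Keep the bits of a rate-1/2 encoder output where the perforation
    pattern has a 1; per period, output 1 is sent before output 2."""
    mask = [pattern[row][col] for col in range(len(pattern[0]))
            for row in range(len(pattern))]
    return [b for i, b in enumerate(coded_bits) if mask[i % len(mask)]]

# Complementary rate-2/3 patterns: together they cover every position of
# the mother code, which is what lets the two received streams be combined.
p1 = [[1, 1], [1, 0]]
p2 = [[1, 1], [0, 1]]
stream = [1, 0, 1, 1, 0, 0, 1, 1]   # two puncturing periods of a rate-1/2 output
kept1 = puncture(stream, p1)        # 3 of every 4 bits survive -> rate 2/3
kept2 = puncture(stream, p2)
union_covers_all = all(p1[r][c] or p2[r][c] for r in range(2) for c in range(2))
```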
[Figure 5.10 Comparison of decoded BER for m = 4, R = 1/3, εG = 0.005, and εB = 0.5.]
[Figure 5.11 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, and εB = 0.25.]
[Figure 5.12 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9999, PBB = 0.99, and εB = 0.5.]
[Figure 5.13 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, and εB = 0.25.]
[Figure 5.14 Comparison of decoded BER for m = 4, R = 1/3, PGG = 0.9995, PBB = 0.95, and εB = 0.5.]
[Figure 5.15 Performance of Scheme 2 for different code rates with m = 6, PGG = 0.9999, PBB = 0.99, and εB = 0.5.]

5.5 Concluding Remarks

The second proposed scheme utilizes all available channel information. All received bits are used in decoding, each bit with a measure of reliability provided through the BDP. An obvious advantage of using reliability levels is the diminishing of the burst effect when the bits in a burst are dominated by the bits in a guard space. An ideal case is to have each bit in a burst interleaved with a bit in a guard space. In general, the proposed scheme performs well under all channel conditions. An exception might be when the channel has a relatively short guard space, in which case a good delay L for the encoder is hard to find.

Chapter 6
Conclusions

Two new adaptive schemes for burst error correction have been proposed in this thesis. Both schemes are based on punctured convolutional codes. The same burst detection procedure is incorporated in the decoders of both schemes. The burst detection procedure measures channel conditions via the increment of the path metrics in Viterbi decoding.

Scheme 1 uses two punctured codes with different memories. The code with the shorter memory is used with Viterbi decoding for random error correction and for burst detection. When a burst is detected, the code with the longer memory is then used with backward sequential decoding to recover the detected burst. Scheme 1 is suitable for phased bursts that are confined to a certain maximum length and are followed by a long guard space. It is found that when the channel has dense bursts, Scheme 1 outperforms both the Viterbi decoding with interleaving scheme and Schlegel and Herro's scheme.
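The reliability-weighted decoding summarized in section 5.5 can be sketched with a toy example. The (2, 1, 2) code G = (7, 5) and the reliability values below are illustrative stand-ins for the thesis's codes and BDP output, not the actual scheme:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with memory 2, generators (7, 5) octal."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]
        s1, s2 = b, s1
    return out

def viterbi_weighted(received, weights):
    """Hard-decision Viterbi search in which received bit t contributes
    weights[t] * (mismatch) to the path metric, so bits flagged as
    unreliable (burst) count for less."""
    INF = float('inf')
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0.0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for t in range(0, len(received), 2):
        r0, r1 = received[t], received[t + 1]
        w0, w1 = weights[t], weights[t + 1]
        new_metric = {s: INF for s in states}
        new_paths = {}
        for (s1, s2) in states:
            if metric[(s1, s2)] == INF:
                continue
            for b in (0, 1):
                o0, o1 = b ^ s1 ^ s2, b ^ s2
                cost = metric[(s1, s2)] + w0 * (o0 != r0) + w1 * (o1 != r1)
                if cost < new_metric[(b, s1)]:
                    new_metric[(b, s1)] = cost
                    new_paths[(b, s1)] = paths[(s1, s2)] + [b]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg + [0, 0])   # two tail bits flush the encoder
rx = list(coded)
rx[4] ^= 1
rx[5] ^= 1                          # a short "burst": both bits of one branch hit
beta = [1.0] * len(rx)
beta[4] = beta[5] = 0.1             # BDP-style low reliability on the burst bits
decoded = viterbi_weighted(rx, beta)[:len(msg)]
```

Downweighting the two burst bits lets the correct path keep the lowest metric even though those bits disagree with it.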
The performance of Scheme 1 somewhat degrades on random-error channels. Scheme 1 cannot yet be applied to channels with relatively long bursts due to the lack of available good convolutional codes with long memory. However, with the ongoing search for good convolutional codes, the proposed scheme seems quite attractive.

Scheme 2 uses two punctured convolutional codes obtained from the same original code with complementary perforation patterns. The code sequence of the second code is transmitted after a delay from the transmission of the sequence of the first code. The received sequence corresponding to the first code is used by a Viterbi decoder to detect bursts. This Viterbi decoder also provides a channel state estimate for every received bit. A second Viterbi decoder is then applied to all received bits, with the estimated channel states used as a measure of reliability. Scheme 2 is much simpler than Scheme 1, and yet very robust. Given the same delay, Scheme 2 outperforms Viterbi decoding with interleaving for all channel conditions, and the improvement can be as high as ten times for channels with burst lengths longer than 10 channel bits. Scheme 2 has also been shown to be much less sensitive to errors in the guard space than Gallager's burst-finding scheme and Schlegel and Herro's scheme. For εG as high as 10⁻², the decoded BER of Scheme 2 increases by only four times compared to that at εG = 10⁻³, and it is almost constant for εG < 10⁻³. On the other hand, the increase in decoded BER for Gallager's burst-finding scheme and Schlegel and Herro's scheme ranges from 10 to 100 times. In Scheme 2, when a higher code rate is desired, the only change in the structure is the perforation patterns, and the performance is not sacrificed considerably.

As far as future work is concerned, the following topics related to the thesis deserve further study:

• Study of the proposed schemes for fading channels.
• Incorporation of soft decision decoding into the schemes. This would certainly improve the performance of the schemes substantially.

References

[1] E. N. Gilbert, "Capacity of a burst-noise channel," Bell Syst. Tech. J., pp. 1253-1265, Sept. 1960.
[2] E. O. Elliott, "Estimates of error rates for codes on burst-noise channels," Bell Syst. Tech. J., pp. 1977-1997, Sept. 1963.
[3] L. N. Kanal and A. R. K. Sastry, "Models for channels with memory and their applications to error control," Proc. IEEE, vol. 66, pp. 724-744, July 1978.
[4] L. E. Zegers, "Error control in telephone channels by means of time diversity," Philips Res. Rep., vol. 22, pp. 315-328, June 1967.
[5] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[6] G. D. Forney, "Burst-correcting codes for the classic bursty channel," IEEE Trans. Comm., vol. 19, pp. 772-781, Oct. 1971.
[7] D. D. Sullivan, "A generalization of Gallager's adaptive error control scheme," IEEE Trans. Inform. Theory, vol. 17, pp. 727-735, Nov. 1971.
[8] S. M. Reddy, "Linear convolutional codes for compound channels," Info. Control, vol. 19, pp. 387-400, Dec. 1971.
[9] B. D. Trumpis and P. L. McAdam, "Performance of convolutional codes on burst noise channels," Proc. NTC, pp. 36:3-1 - 36:3-14, 1977.
[10] S. Lin and D. J. Costello, Error Control Coding. NJ: Prentice-Hall, 1983.
[11] A. D. Drukarev and K. P. Yiu, "Performance of error correcting codes on channels with memory," IEEE Trans. Comm., vol. 34, pp. 513-521, June 1986.
[12] C. B. Schlegel and M. A. Herro, "A burst-error-correcting Viterbi algorithm," IEEE Trans. Comm., vol. 38, pp. 285-291, Mar. 1990.
[13] A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. 13, pp. 260-269, April 1967.
[14] F. Jelinek, "A fast sequential decoding algorithm using a stack," IBM J. Res. and Dev., vol. 13, pp. 675-685, Nov.
1969.
[15] R. M. Fano, "A heuristic discussion of probabilistic decoding," IEEE Trans. Inform. Theory, pp. 64-74, April 1963.
[16] D. Haccoun and J. Belzile, "Bidirectional algorithms for the decoding of convolutional codes," 1990 IEEE Book of Abstracts of Inform. Theory Symposium, San Diego, p. 177, Jan. 1990.
[17] K. Li and S. Kallel, "Bidirectional sequential decoding for convolutional codes," 1991 IEEE Pacific Rim Conf. on Commun., Comp. and Sig. Proc., Victoria, pp. 200-204, May 1991.
[18] D. Haccoun and G. Begin, "High-rate punctured convolutional codes for Viterbi and sequential decoding," IEEE Trans. Comm., vol. 37, pp. 1113-1125, Nov. 1989.
[19] G. Begin and D. Haccoun, "High-rate punctured convolutional codes: structure properties and construction technique," IEEE Trans. Comm., vol. 37, pp. 1381-1385, Dec. 1989.
[20] Y. Yasuda, Y. Hirata, K. Nakamura, and S. Otani, "Development of a variable-rate Viterbi decoder and its performance characteristics," in 6th Int. Conf. Digital Satellite Commun., AZ, Sept. 1983.
[21] M. A. Herro, L. Hu, and J. M. Nowack, "Bit error probability calculations for convolutional codes with short constraint lengths on very noisy channels," IEEE Trans. Comm., vol. 36, pp. 885-888, July 1988.
[22] M. Cedervall and R. Johannesson, "A fast algorithm for computing distance spectrum of convolutional codes," IEEE Trans. Inform. Theory, vol. 35, pp. 1146-1159, Nov. 1989.
[23] M. C. Jeruchim, "Techniques for estimating the bit error rate in the simulation of digital communication systems," IEEE J. SAC, vol. SAC-2, pp. 153-170, Jan. 1984.
[24] S. Kallel, "Analysis of memory and incremental redundancy ARQ schemes over a nonstationary channel," IEEE Trans. Comm., vol. 40, pp. 1474-1480, Sept. 1992.
[25] H. Larson and B. Shubert, Probabilistic Models in Engineering Sciences, vol. 2. John Wiley and Sons, 1979.
[26] J. Hagenauer, "Viterbi decoding of convolutional codes for fading and burst channels," in Proc.
1980 Seminar on Digital Commun., pp. G2.1-G2.7, Zurich, 1980.
Title | Efficient coding/decoding strategies for channels with memory |
Creator |
Lai, Cuong H. |
Date Issued | 1992 |
Extent | 4371752 bytes |
Genre |
Thesis/Dissertation |
Type |
Text |
File Format | application/pdf |
Language | eng |
Date Available | 2008-09-15 |
Provider | Vancouver : University of British Columbia Library |
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. |
DOI | 10.14288/1.0065064 |
URI | http://hdl.handle.net/2429/2013 |
Degree |
Master of Applied Science - MASc |
Program |
Electrical and Computer Engineering |
Affiliation |
Applied Science, Faculty of Electrical and Computer Engineering, Department of |
Degree Grantor | University of British Columbia |
Graduation Date | 1993-05 |
Campus |
UBCV |
Scholarly Level | Graduate |