THE APPLICATION OF SOURCE-AIDED DECODING TO IS-95 CDMA TURBO SYSTEMS

by

NORMAN C. LO
B.A.Sc. (EE: Computer Option), The University of British Columbia, 1997

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1999
© Norman Lo, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

We develop source-aided channel decoding techniques and apply them to image transmission using the channel forward error correction codes of the IS-95 Code Division Multiple Access (CDMA) standard. The source is modeled as a first-order Markov model with states that correspond directly to the source coder codewords. The model is used to form a MAP version of the Viterbi algorithm for decoding convolutional codes. For the case of a two-state Markov model, the generalization of the Viterbi algorithm involves only a modification of the branch metric, while for N-state Markov models, a technique called trellis merging is also implemented to keep the decoding complexity low.
An iterative model recovery technique is developed which allows the receiver to recover the source model without any a priori information. Simulating these techniques for the case of the two-bit DPCM encoded Lenna image, we find a coding gain over an Additive White Gaussian Noise (AWGN) channel of approximately 1.1 dB at a BER of 10^-4 for both the forward and reverse links of the IS-95 standard.

We go on to develop a turbo version of the IS-95 reverse link decoder. This involves implementing a "soft-in/soft-out" version of the component decoders and introducing an iterative decoding procedure. The coding gain found by this turbo enhancement is 0.75 dB at a BER of 10^-3. A Markov model-aided version of the turbo reverse link decoder is then developed by migrating the techniques used in the Viterbi algorithm to the soft-in/soft-out decoders. The ensuing Markov model-aided turbo decoder has a two-level iterative structure, because there are both source model recovery iterations and turbo iterations. Three different architectures for the two-level iterative structure are proposed and compared. Although all three methods provide similar coding gain over an AWGN channel, they differ in speed of convergence and implementation complexity. For the case of transmission of the two-bit DPCM encoded Lenna image, the coding gain over an AWGN channel at the final iteration is 1.6 dB at a BER of 10^-2.

Table of Contents

Abstract  ii
List of Tables  vi
List of Figures  vii
Acknowledgment  ix

Chapter 1  Introduction  1
1.1 Motivation and Objectives  3
1.2 Thesis Outline  4

Chapter 2  Principle of Iterative Decoding  6
2.1 Structure of Turbo Decoders  6
2.2 Soft Information  10
2.3 Component Decoders  14
2.4 Application of the MAP Algorithm to Linear Codes  19

Chapter 3  Markov Model-Aided Convolutional Decoding  24
3.1 Viterbi Decoding using a Two-State Markov Model  26
3.2 Trellis Merging and Higher State Models  34
3.3 Model Recovery  42

Chapter 4  Source-Aided Turbo Decoder  49
4.1 CDMA Channel Link  50
4.2 Turbo Decoding in the Reverse CDMA Channel  59
4.3 Markov Aided Iterative Decoding  64
4.3.1 Modification to the MAP Algorithm  65
4.3.2 Turbo Iterations within Global Iterations  68
4.3.3 Joint Turbo-Global Iterations  71
4.3.4 Hybrid Encapsulated and Joint Iterations  73
4.3.5 Comparison of Different Configurations  76
4.3.6 Complexity Analysis  78

Chapter 5  Conclusions and Future Work  84

Glossary  88
Bibliography  89

List of Tables

Table 3.1 Model Parameters for 1-bit DPCM Image of Lenna  31
Table 3.2 Parameters for Different Models  32
Table 3.3 Model Parameters for 2-bit DPCM Image of Lenna  37
Table 4.1 Inner Decoder Complexity  81
Table 4.2 Outer Decoder Complexity  82
Table 4.3 Complexity of the Different Model Recovery Schemes  83

List of Figures

Figure 2.1 Basic Feedback of Iterative Decoding  8
Figure 2.2 Structure of Parallel Concatenated Encoder/Decoder  9
Figure 2.3 Serial Concatenated Structure for Encoding/Decoding  10
Figure 2.4 Unexpurgated trellis from step 1 of Wolf's trellis construction method  22
Figure 2.5 Expurgated trellis from step 2 of Wolf's trellis construction  23
Figure 3.1 Communication system components  24
Figure 3.2 Two-state Markov model  27
Figure 3.3 Two-State Markov Model-Aided Decoding  31
Figure 3.4 Family of curves with varying statistics  33
Figure 3.5 Four-State Markov Model  35
Figure 3.6 Trellis for a 4-state convolutional code  36
Figure 3.7 Two-Stage Merged trellis for 4-state convolutional code  37
Figure 3.8 Comparison of Markov model-aided Decoding for N-bit DPCM  38
Figure 3.9 Four-State Markov Model-Aided Decoding  39
Figure 3.10 Qualitative depiction of Markov model-aided Decoding for 2-bit DPCM  40
Figure 3.11 Iterative Model Recovery Structure  42
Figure 3.12 Iterative Model Recovery using Four-State Model with Synthetic Data  43
Figure 3.13 Iterative model recovery using raw data from 2-bit DPCM Lenna image  44
Figure 3.14 Grouping together data blocks for model recovery  45
Figure 3.15 Impact of varying model recovery block sizes  46
Figure 4.1 Butterfly structure of Walsh-Hadamard code with length 4  54
Figure 4.2 General form of butterfly structure  54
Figure 4.3 Forward Channel Link of CDMA IS-95  55
Figure 4.4 Reverse Channel Link of CDMA IS-95  56
Figure 4.5 Forward Link IS-95  57
Figure 4.6 Soft-Decision Markov-Aided Decoding for the Reverse Channel in IS-95  58
Figure 4.7 Iterative structure of Reverse Link IS-95  64
Figure 4.8 Applying Iterative Decoding to IS-95 CDMA  66
Figure 4.9 Markov model-aided Turbo decoder  68
Figure 4.10 Output of Global Iterations using Synthesized Data  70
Figure 4.11 Output of Global Iterations using Raw Data  71
Figure 4.12 Output of Local Iterations within Global Iterations using Synthetic Data  72
Figure 4.13 Joint Turbo-Global iterations using Synthesized Data  74
Figure 4.14 Joint Turbo-Global iterations using Raw data  75
Figure 4.15 Hybrid Iterative Scheme using Synthesized Data  76
Figure 4.16 Hybrid Iterative Scheme using raw data  77
Figure 4.17 Qualitative comparison of the three different model recovery techniques  79

Acknowledgment

The author would like to take this opportunity to express his most sincere gratitude to his supervisor, Dr. Samir Kallel. The author is a fortunate recipient of Dr. Kallel's helpful suggestions and valuable guidance throughout the thesis project. The author is also indebted to Dr. Robert Link for his constant help, especially in thoroughly proofreading this thesis report. Special thanks are also extended to Ian Marsland for his generous assistance. Finally, the author would like to acknowledge his family for their eternal love and support, and his friends for making him laugh throughout the course of the project. This work was funded by NSERC, BC ASI, and Mobile Data Solutions Inc. of Richmond, BC.
Chapter 1
Introduction

The demand for wireless communications has experienced tremendous growth in recent years. The evolution from analog to digital, and ultimately to third generation PCS (Personal Communication Systems), is in direct response to such demand. An important catalyst in this development is the shift from voice oriented services to computer data oriented services. The former tends to be lax in its tolerance of errors, whereas the latter generally implies zero error tolerance. As multimedia applications become more popular, developing systems must accommodate the increased demand in resources associated with these services [1].

Image transmission, which is categorized as data transmission, traditionally does not tolerate errors in an end-to-end connection. The sheer size of the data necessary to represent a typical image implies a substantial time investment in its transmission over a low bit-rate channel, even when channel conditions are good. Less than ideal channel conditions generally increase this time requirement many-fold, as erroneously received blocks are retransmitted. One can envision many applications, the most notable being real-time wireless video for teleconferencing over a bandwidth-limited channel, where the constraints on transmission time are such that the error-free requirement on the received data must be relaxed.

Since the early days of information theory, efforts have been made to improve channel efficiency through error detection/correction, but with the advent of multimedia applications and a growing demand for wireless communications, where bandwidth is very limited, these efforts must be intensified as high volume real-time digital traffic becomes ineluctable. The underlying objective in code design and coding theory is to achieve a low probability of error without compromising power and complexity requirements.
Derived by one of the pioneers of information theory, the Shannon performance bound is the theoretical limit that many code designers aspire to realize. An important step in the direction of approaching this elusive target began with Forney's study of concatenated codes [3]. Concatenated codes are constructed through the combination of two or more relatively simple component codes to form a larger coding system. These codes enjoy an exponential decrease in probability of error with only an algebraic increase in complexity [3]. Codes with immense coding gains were originally challenges reserved for theoretical research, but modern applications such as deep space communications have since made such high-performance codes a necessity [4]. Coding theorists have taken a stride closer to Shannon's limit with the recent innovation of iterative decoding incorporated into concatenated codes [5]. This new paradigm is identified as Turbo Coding.

The encoding for transmission process is traditionally divided into two independent tasks: compression and error correction. To maximize bandwidth utilization, data compressors, also known as source encoders, are employed to remove redundancy in the data source. However, compressed data is generally more sensitive to noise than uncompressed data, because the entropy of each coded bit is higher than that of the uncoded bits, and the impact of an erroneous decision will have a greater effect on the overall transmission. In most compression schemes, an error in one bit may propagate or induce errors in other bits. As a remedy to this problem, controlled redundancy is introduced for error detection/correction by a process known as channel coding. Traditionally, the compression and channel coding modules have been designed independently.
However, it is well documented that one can achieve a given level of performance with a system of lower complexity by considering both the compression and the error correction components jointly in the design of the overall system [18]. In [23] and [24], the authors proposed a truly integrated approach where source code words were mapped directly onto channel signalling waveforms. In principle, an integrated system should achieve the best possible performance; however, system implementation issues have pushed most of the research towards systems where source and channel coders are physically separate components. Although the two components are separate, they are not designed independently. Much research has focused on the use of known components cascaded together with a fixed bit rate allocated between the components. This approach has been investigated for voice transmission [27], image transmission [28], and recently video transmission [29]. Still, researchers and standards bodies are continually searching for compression techniques that have a significantly higher noise resilience, while minimizing the sacrifice of compression ratio [25]. The approach of considering the characteristics of both the source and the channel in the design of the overall codec is known as Joint Source-Channel Coding (JSCC), and it is a very promising technique in the development of bandwidth efficient yet noise resilient data transmission systems.

1.1 Motivation and Objectives

Although turbo coding has proven itself as a useful channel coding technique, relatively little work has been done to incorporate turbo codes into joint source-channel coding systems. In the literature, JSCC systems are well documented ([18], [23]-[26], [30]-[32]), but an extensive investigation into source-aided convolutional decoding has not been performed.
This thesis contributes to this effort through:

• reviewing Markov model-aided convolutional decoding with a modified Viterbi decoder for two-state Markov models of the source;

• applying trellis merging techniques to develop a Viterbi decoding algorithm that is MAP with respect to an N-state Markov model of the source for an arbitrary N;

• exploring a model recovery technique and the influence of data block size.

Turbo decoding is a relatively new concept, but it has already triggered much research ([2], [4]-[9], [13]-[15], [35]). Recently in [35], the application of sub-optimal turbo decoding was proposed for the reverse channel link of the IS-95 CDMA standard. However, due to certain constraints dictated by the standard, only modest improvement is observed. This thesis contributes to this effort by:

• simulating an optimal turbo decoder for the reverse channel link;

• applying Markov model-aided decoding to both the forward and reverse links of the IS-95 CDMA standard;

• proposing and comparing three model recovery techniques that combine both source model recovery iterations and turbo iterations.

1.2 Thesis Outline

This thesis is organized as follows. In Chapter 2, the background necessary for a basic understanding of turbo codes is presented. This focuses on the basic structure of both parallel and serially concatenated systems. Key components critical to the performance of the overall turbo decoding system are introduced, including the use of soft information and the BCJR (Bahl-Cocke-Jelinek-Raviv) MAP algorithm. The chapter concludes by reviewing techniques for extending the BCJR MAP algorithm, and ultimately the turbo decoding system, to all linear codes.

In Chapter 3, an investigation into source-aided channel decoding is documented. It is shown how a Markov model of the source can be used to aid a channel decoder.
This technique is applied to convolutional codes, and a trellis merging technique is given which allows one to efficiently use a Markov model with an arbitrary number of states. The chapter ends with a discussion of an iterative model recovery scheme at the receiver.

In Chapter 4, turbo decoding and source-aided decoding are combined in an effort to improve the error performance of the existing IS-95 CDMA system. A quick introduction to the channel codes employed in the standard provides an understanding of the basic components of this system. At this point, source-aided decoding can be applied without the turbo decoding enhancement. To enhance the receiver for turbo decoding, the optimal soft-output decoding algorithm is produced in accordance with the material developed in Chapter 2. For the coupling of turbo and source-aided decoding, three iterative schemes are proposed. The chapter ends with a comparison of the three schemes.

In Chapter 5, concluding remarks and suggestions for future work are presented.

Chapter 2
Principle of Iterative Decoding

This chapter deals with the basic principles underlying the remarkable new family of codes known as turbo codes. The discussion begins with an investigation into the general structure of iterative encoders/decoders. Paramount to the effectiveness of these codes is the employment of soft information. This crucial component is explored and dissected into its constituent elements. At the heart of iterative decoding are the component decoders. When turbo coding was first conceived, the component decoders exploited the maximum a posteriori (MAP) algorithm, which was also named after its discoverers as the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm. In its original formulation, the BCJR MAP algorithm was derived for decoding sources that can be modeled as discrete-time finite-state Markov processes [10], making it ideal for convolutional codes and their variants.
The same algorithm can be extended to any linear code by forming a trellis from its block code representation. This is critical for incorporating iterative decoding into existing systems, which may not have been designed for turbo decoding, but could nevertheless benefit from such an enhancement.

2.1 Structure of Turbo Decoders

Extending the concept of concatenated codes, a new high performance convolutional code was proposed recently by Berrou, Glavieux, and Thitimajshima [5]. Since its first publication in 1993, this new code has stirred much excitement in the coding community because of its performance proximity to Shannon's theoretical limit. At a bit error rate of 10^-5, this code is within 0.7 dB of the limiting bound [5]. This complex code is formed from a parallel concatenation of two fairly simple component codes through an interleaving scheme. The components used are recursive systematic convolutional, or RSC, codes (refer to Figure 2.2). RSC codes are convolutional codes with a feedback tap, which gives them an infinite impulse response. The interleavers employed in this scheme are inherently large with pseudo-random characteristics, giving the overall code a very large free distance.

The decoder for this large complex code consists of a concatenation of decoders for the component codes. Vital to the performance of this type of code is the transfer of soft information from one component decoder to the next. Soft information provides both the signal estimation and the estimation confidence. Without any further enhancement, this soft information feed-forward decoder structure is capable of achieving reasonable performance, but the greater power of this code is derived from the application of turbo iterations.
As in the feed-forward path from the inner decoder to the outer decoder, a feedback path from the outer decoder feeds confidence information gleaned from a first decoding pass back to the inner decoder. With the new confidence assessments, the inner decoder re-decodes the signal, improving its previous estimation, and passes its new results to the second decoder. With each iteration, the error performance improves until the limiting bound is approached. In this way, a prohibitively complex decoding process is broken down into relatively simple steps. This code adopted the name Turbo Code to illustrate the resemblance of its feed-forward/backward architecture to the cyclic nature of a turbo engine. Since then, this nomenclature has strayed from the original description of parallel concatenated RSC codes to encompass a larger general set of codes with iterative decoding structures.

Figure 2.1 demonstrates the basic overall structure of iterative decoding.

[Figure 2.1: Basic Feedback of Iterative Decoding — a soft-in/soft-out decoder takes channel measurements together with fed-back extrinsic values and produces a posteriori values.]

The diagram depicts the feedback operation of the overall system in the general sense. The details within the block are specific to the particular configuration of the code. Figure 2.2 portrays the architecture for the original turbo code with a parallel concatenated encoder and decoder. In the encoder, the RSC codes are punctured to increase the overall rate of the code, and the code bits are multiplexed with the information bits to form a single bit stream. The decoder must demultiplex the corresponding punctured codes and decode the resultant bit streams as shown in the diagram. Figure 2.3 shows the encoder/decoder structure for component codes concatenated together serially.
The extension of turbo decoding from the original parallel concatenation to this configuration allows the use of iterative decoding for many existing systems.

[Figure 2.2: Structure of Parallel Concatenated Encoder/Decoder — Encoder 1 and Encoder 2 separated by an interleaver; in the decoder, DEC 1 and DEC 2 exchange extrinsic and a priori values through interleaver/deinterleaver stages to form the a posteriori values.]

Applications such as equalization, interference cancellation, and trellis coded modulation can all benefit from this iterative decoding process [7], [8], [9].

[Figure 2.3: Serial Concatenated Structure for Encoding/Decoding — outer encoder, interleaver, and inner encoder over a noisy channel; the inner decoder passes a posteriori values for the information bits to the outer decoder, which returns a priori values for the inner code bits.]

2.2 Soft Information

Turbo decoding can be applied to any concatenated structure, whether it is parallel, serial, or even a hybrid of the two. This is accomplished through the communication of soft information between separate decoders. The lower complexity constituent decoders do not function independently, but work cooperatively by passing both signal estimation and reliability measurements to one another. With hard decision decoders, iterations beyond the first pass will not yield further improvement, because decoders at a later stage receive only estimation decisions and no information pertaining to the channel or the decisions' reliability. The difference between a strong and a weak signal is not distinguished.
In contrast, using soft information, iterative decoders can improve the error performance with subsequent iterations, because reliability information gleaned from parity bits can be re-applied to the decoding process, giving improved confidence measures to the estimations at each and every stage of the decoding system.

As Hagenauer pointed out in [6], the soft-output estimate of a component decoder is comprised of three fundamental parts: channel measurements, a priori knowledge, and extrinsic information. This can be shown using log-likelihood ratios (LLRs), defined as:

    L(x) = log [ P(x = +1) / P(x = −1) ]    (2.1)

Let x = (x_1, x_2, ..., x_n) be a sequence of bits taking values ±1, let y = (y_1, y_2, ..., y_n) be the corresponding received sequence, and let a be the amplitude of a fading channel (a = 1 for an additive white Gaussian noise (AWGN) channel). The soft decoder output is the a posteriori information defined as:

    L(x_i | y_i) = log [ p(x_i = +1 | y_i) / p(x_i = −1 | y_i) ]    (2.2)

Assuming coherent detection:

    L(x_i | y_i) = log [ exp(−(E_s/N_0)(y_i − a)²) / exp(−(E_s/N_0)(y_i + a)²) ] + log [ p(x_i = +1) / p(x_i = −1) ]    (2.3)

                 = L_c · y_i + L(x_i)    (2.4)

L_c · y_i is the channel measurement component, and L(x_i) is the a priori information component. More specifically, L_c = 4a·E_s/N_0 is known as the channel reliability factor. For the initial pass, the a priori distribution is assumed equiprobable, which means L(x_i) = 0. Succeeding stages of the decoder will update this information as the decoding process progresses.

For forward error correction codes, the channel measurements can be further divided into a contribution from the information bits and a contribution from the parity bits.
For example, consider a (3, 2) block code in which each pair of information bits (u_1, u_2) is transmitted as the systematic bits x_1^s = u_1, x_2^s = u_2 together with the parity bit x^p = u_1 ⊕ u_2. Note that u_1 can be rewritten as u_1 = u_2 ⊕ x^p, and this formulation can be exploited to enhance the reliability estimation. Applying this to equation (2.4), the new a posteriori value is derived:

    L(u_1 | y) = L_c · y_1^s + L(u_1) + L(u_2 ⊕ x^p | y_2^s, y^p)    (2.5)

where the first term is the channel measurement, the second is the a priori information, and the last is the extrinsic information for the information bit u_1. The latter term can be calculated as:

    L(u_2 ⊕ x^p | y_2^s, y^p) = log [ (1 + e^(L_c·y_2^s + L_c·y^p)) / (e^(L_c·y_2^s) + e^(L_c·y^p)) ]    (2.6)

This (3, 2) parity code could form one of the component codes for the turbo system. Another component code would then form parity bits from an interleaved block of the information bits. Hence, the extrinsic information for u_1 as computed by the second decoder would be independent of the extrinsic information computed above.

Although the complete a posteriori probability is used for the final decoding decision, the extrinsic information is the medium of communication between separate modules. A given component decoder uses the extrinsic information from the previous decoder as a priori information. Essentially, it is the confidence factor associated with an information bit that is obtained through the soft decoding of its parity bits. Only extrinsic information should ever be used as the new a priori value for future iterations, because the reliability measures should be statistically independent of the estimated received bit. However, since each iteration incorporates the reliability information for all the bits in its modular decoding, the extrinsic information passed from one decoder to the next will have increasing correlation with every iteration.
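The decomposition in equations (2.4)–(2.6) can be illustrated with a small numeric sketch. The channel reliability factor and the received soft values below are assumptions chosen for illustration (they do not come from the thesis), and the helper name `extrinsic_parity` is hypothetical:

```python
import math

# Numeric sketch of the LLR decomposition for the (3, 2) parity-check
# example: u1 = u2 XOR p, so a soft estimate of u1 can be formed from the
# received values of u2 and the parity bit. L_c and the y values below are
# assumed for illustration.

def extrinsic_parity(l_a, l_b):
    """Extrinsic LLR of (a XOR b) given the LLRs of a and b (eq. 2.6 form)."""
    return math.log((1.0 + math.exp(l_a + l_b)) /
                    (math.exp(l_a) + math.exp(l_b)))

lc = 2.0                              # assumed channel reliability factor
y_u1, y_u2, y_p = 0.3, 1.1, -0.8      # assumed received soft values

l_u2 = lc * y_u2                      # channel LLR of u2
l_p = lc * y_p                        # channel LLR of the parity bit
l_ext = extrinsic_parity(l_u2, l_p)   # extrinsic information for u1

# a posteriori LLR of u1: channel term + a priori (0 on the first pass)
# + extrinsic term, as in eq. (2.5)
l_app = lc * y_u1 + 0.0 + l_ext
```

Note that the magnitude of the extrinsic term never exceeds the weaker of its two inputs, matching the intuition that a parity check is only as reliable as its least reliable participant.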
After many such iterations, saturation occurs, where the improvement becomes only marginal and further iterations yield a diminishing return.

2.3 Component Decoders

Extrinsic information is the glue that binds the individual decoders together, forming a bigger overall system. Using this information, the component decoders act as the main decoding engines. Traditionally, the Viterbi algorithm is the most popular algorithm for decoding convolutional codes, because it is simple to implement and has a relatively low decoding complexity. However, it cannot be used for iterative decoding because its output is a sequence of hard decisions. It is optimal in the sense of minimizing the probability of sequence error, but does not provide confidence factors associated with its decisions. Different approaches aimed at establishing a soft-output version of the classical Viterbi algorithm have been explored [11], [12]. One such example is the soft-output Viterbi algorithm (SOVA), which makes use of a trace-back window to produce reliability measures [12]. Although the SOVA delivers soft information, it is suboptimal in its estimation of reliability information. There exists a more complex optimal maximum a posteriori (MAP) algorithm [10], which we cover next.

As Bahl et al. pointed out, although the Viterbi algorithm is optimal in the sequential sense, it may not necessarily minimize the probability of individual symbol or bit error [10]. Consequently, in 1974, they developed the optimal MAP algorithm, which delivers soft-decision a posteriori probabilities for the symbols [10]. It is generic to any process that can be represented by a finite-state Markov model, where transitions between states are well-defined.
Suppose that u = (u_1, u_2, ..., u_k) is the information sequence, x = (x_1, x_2, ..., x_n) is the code sequence, and y = (y_1, y_2, ..., y_n) is the received noisy sequence for a rate k/n code. The objective of the decoder is to estimate the a posteriori probability of the states and state transitions of the Markov source. A trellis is formed by indexing the states with both a state index and a time index. A path through the trellis is defined by a sequence of states and is associated with a unique information sequence. We use the notation S_1^T to denote the state sequence indexed from time 1 to T, i.e. S_1^T = (S_1, S_2, ..., S_T). Each node of the trellis is associated with a nodal probability, p(S_k = s | y_1^n), and each branch transition is associated with a transitional probability, p(S_{k−1} = s', S_k = s | y_1^n). The a posteriori probability for an information bit in terms of the LLR is defined as:

    L(u_k) = log [ p(u_k = +1 | y) / p(u_k = −1 | y) ]
           = log [ Σ_{(s',s) → u_k = +1} p(S_{k−1} = s', S_k = s, y) / p(y) ] − log [ Σ_{(s',s) → u_k = −1} p(S_{k−1} = s', S_k = s, y) / p(y) ]    (2.7)

where the sign of L(u_k) indicates the signal estimation (+1, −1) and the magnitude indicates the reliability. The probability p(y) is the same for both the numerator and denominator and cancels out, leaving the term p(S_{k−1} = s', S_k = s, y), which, by recursively applying Bayes' theorem, can be shown to be:

    p(S_{k−1} = s', S_k = s, y) = p(y_{k+1}^n | S_k = s) · p(S_k = s, y_k | S_{k−1} = s') · p(S_{k−1} = s', y_1^{k−1})
                                = β_k(s) · γ_k(s', s) · α_{k−1}(s')    (2.8)

The α_k(s) term can be obtained in terms of α_{k−1}, and can therefore be calculated by tracing the trellis forward via the forward recursion:

    α_k(s) = p(S_k = s, y_1^k) = Σ_{s'=0}^{M−1} p(S_{k−1} = s', S_k = s, y_1^k)
           = Σ_{s'=0}^{M−1} α_{k−1}(s') · γ_k(s', s)    (2.9)

where M is the number of states, with the boundary conditions:

    α_0(0) = 1,  α_0(s) = 0 for s ≠ 0    (2.10)

The β_k(s) term can be obtained in terms of β_{k+1}, and can be calculated by tracing the trellis backward via the backward recursion:
, ( 0 - T The t ^ ) term can be obtained in terms of $ backward via the backward recursion: where | k + a o W = ( ) s s = 0 ^ (2.10) , and can be calculated by tracing the trellis Chapter 2 Principle of Iterative Decoding 17 M -1 P^) = SP. + = X s' = 0 p(y k v k i= ' = X ^ ^ l ^+l n s + i(0-Y, (^') s =*) s k + = S ')'^(^ + l = ''> /t S , where | ^ +1 p + l = q ^ (2.11) q The third term is the metric associated with the channel: y (s', s) = p(S =s,y k k k xeX> « ' deterministic from trellis = ' a priori probability 1 « channel char. P(S =s\s ^)-p{y \x =x ) k y (s\ s) = p(u ) • p(y k « 1 k k k k k k (2.12) x) k where u is the source input and x is the source output corresponding to the transition k k s —> s Although this result is theoretically correct, a finite precision processor cannot always implement this because the joint probabilities p(S _ =s', S =s, y) are often very k l k small, and may lead to numerically unstable results [5]. Therefore, Berrou et al. described a modified MAP algorithm to combat the problem of numerical stability [5]. Instead of cancelling out the constant probability p(y), the joint probabilities are divided by p(y)/ Chapter 2 Principle of Iterative Decoding 18 pCjk). This slight modification affects both the forward and backward recursion calculations. The a posteriori probability equation now becomes: p(S _ =s',S =s,y) ^ k l k P(y ) = P(S _,=s,S,=s k y)-p(y ) k k = a _ 's')-y 's',s)k 1 ^ ^ $ (s) k k where the modified forward probabilities are defined as: a (s) = -Z-f . k (2.14) p(yO B y applying (2.9) and (2.14), the new term can be calculated recursively as: a,(,)=-^— : XIX-i^')-?^',*) . 
The new modified backward recursion probability is defined as:

$$
\tilde{\beta}_k(s) = \frac{\beta_k(s)}{p(y_{k+1}^n \mid y_1^k)}
\qquad (2.16)
$$

By expanding the denominator and applying the previous definitions, this can be calculated recursively via:

$$
\tilde{\beta}_{k-1}(s') = \frac{\sum_{s} \tilde{\beta}_k(s) \cdot \gamma_k(s', s)}
                               {\sum_{s} \sum_{s'} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s)}
\qquad (2.17)
$$

With all the adjustments, the modified BCJR MAP algorithm involves calculating:

$$
L(u_k) = \log \frac{\sum_{(s',s) \to u_k = +1} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s) \cdot \tilde{\beta}_k(s)}
                   {\sum_{(s',s) \to u_k = -1} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s) \cdot \tilde{\beta}_k(s)}
\qquad (2.18)
$$

This modified MAP algorithm will not produce numerically unstable values. It is also known as the forward-backward algorithm because of its recursive nature. Because of the importance of this algorithm to turbo codes, and due to its computational complexity, different variants and sub-optimal derivations have surfaced in an attempt to produce a more efficient algorithm [4], [13], [14], [15].

2.4 Application of the MAP Algorithm to Linear Codes

The MAP algorithm as it was first proposed is generic to codes that exhibit the properties of a discrete-time finite-state Markov process [10]. A well-defined process can be represented by a state transition diagram and can be further expanded into a trellis diagram. Convolutional codes fit this criterion well, and are the main target of the decoding algorithms described. The contents of the encoder's shift register define the state in the encoding/decoding process. An input results in a transition as the register contents are shifted or changed from one state to another. The MAP algorithm can also be applied to any code with a trellis representation, and in [10] it is shown that all linear codes have such a representation. There are different techniques for producing trellises for a common linear block code [21]. In [16], Wolf continued the study of the method first proposed in [10] for trellis construction through the use of partial syndromes in the parity-check matrix.
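The forward-backward recursions (2.15), (2.17) and the LLR computation (2.18) apply to any code with a trellis, convolutional or block. As a concrete illustration, the following is a minimal Python sketch on a hypothetical two-state, rate-1/2 encoder with generators (1, 1+D) over a BPSK/AWGN channel — an assumed toy code for exposition, not one of the IS-95 codes treated later. Per-step normalization stands in for the exact denominators of (2.15) and (2.17); any positive scaling cancels in the LLR ratio, so the result is unchanged.

```python
import math

def gamma(u, s_prev, y_pair, sigma2, p_u=(0.5, 0.5)):
    """gamma_k(s', s) = p(u_k) * p(y_k | x_k), Eq. (2.12), for the toy code:
    next state = u, output bits c = (u, u XOR s_prev), BPSK-mapped to +/-1."""
    lik = 1.0
    for yv, b in zip(y_pair, (u, u ^ s_prev)):
        xv = 1.0 if b else -1.0
        lik *= math.exp(-(yv - xv) ** 2 / (2.0 * sigma2))
    return p_u[u] * lik

def bcjr_llrs(y, sigma2):
    n = len(y)
    alpha = [[1.0, 0.0]]                       # alpha_0(0)=1, alpha_0(s!=0)=0
    for k in range(n):                         # forward recursion, cf. (2.15)
        a = [0.0, 0.0]
        for sp in (0, 1):
            for u in (0, 1):                   # next state equals u
                a[u] += alpha[-1][sp] * gamma(u, sp, y[k], sigma2)
        tot = sum(a)
        alpha.append([v / tot for v in a])
    beta = [[0.5, 0.5] for _ in range(n + 1)]  # unterminated trellis: uniform end
    for k in range(n - 1, -1, -1):             # backward recursion, cf. (2.17)
        b = [0.0, 0.0]
        for sp in (0, 1):
            for u in (0, 1):
                b[sp] += beta[k + 1][u] * gamma(u, sp, y[k], sigma2)
        tot = sum(b)
        beta[k] = [v / tot for v in b]
    llrs = []
    for k in range(n):                         # LLR, cf. (2.18)
        num = den = 0.0
        for sp in (0, 1):
            num += alpha[k][sp] * gamma(1, sp, y[k], sigma2) * beta[k + 1][1]
            den += alpha[k][sp] * gamma(0, sp, y[k], sigma2) * beta[k + 1][0]
        llrs.append(math.log(num / den))
    return llrs

# Noiseless reception of u = (1, 0, 1): the encoder emits (+1,+1), (-1,+1), (+1,+1).
L = bcjr_llrs([(1.0, 1.0), (-1.0, 1.0), (1.0, 1.0)], sigma2=0.5)
print(["+" if v > 0 else "-" for v in L])  # ['+', '-', '+'], i.e. u = 1, 0, 1
```

The sign of each LLR gives the hard decision, and its magnitude the reliability, exactly as described below (2.7).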
Massey followed in [17] with a similar method using the generator matrix, and a third method was introduced by Forney in [19]. Although each of these methods produces a different trellis for the same code, only Wolf's method of trellis construction is presented here. For an arbitrary $(n, k)$ linear block code with generator matrix $G$ and parity-check matrix $H$, an input bit vector $u = (u_1, u_2, \ldots, u_k)$ forms the codeword $c = (c_1, c_2, \ldots, c_n)$ via $c = u \cdot G$. Since the generator matrix and the parity-check matrix satisfy $G \cdot H^T = 0$, the decoder will assume a valid codeword is received if the syndrome, defined as $s = c \cdot H^T$, is equal to zero. If $h_1, h_2, \ldots, h_n$ are the columns of the parity-check matrix, then for the syndrome to be zero, the following relationship must be true:

$$
\sum_{i=1}^{n} c_i h_i = 0
\qquad (2.19)
$$

The final syndrome can be formed progressively through partial syndromes, defined as:

$$
s_0 = 0, \quad
s_1 = c_1 \cdot h_1, \quad
s_2 = c_1 \cdot h_1 + c_2 \cdot h_2, \quad \ldots, \quad
s_n = s_{n-1} + c_n \cdot h_n
\qquad (2.20)
$$

The states in the trellis are simply the sequence of partial syndromes represented by $(n-k)$-tuples. Like the trellis for convolutional codes, the block code trellis commences and terminates with the all-zero states $s_0$ and $s_n$, respectively. A path through the trellis traces out a codeword sequence. The complete trellis considers all possible codewords, and there is a one-to-one correspondence between a particular path through the trellis and a specific codeword. Trellis construction consists of the following two steps: (i) construct branches for all possible transitions of the trellis for length $n$, where $n$ is the length of a codeword — this results in an unexpurgated trellis; (ii) expurgate all paths that do not lead to the zero state at depth $n$, leaving behind the final trellis representation of the linear block code.
For illustrative purposes, consider the example of a binary $(5, 3)$ code with the parity-check matrix:

$$
H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 \end{bmatrix}
$$

At time $t = 0$, a 0 input will lead to state 00 and a 1 input will lead to state 10. Following step (i) outlined above, we obtain the unexpurgated trellis shown in Figure 2.4. Then, the paths that do not lead to the final state 00 are removed, resulting in the four-state trellis shown in Figure 2.5. The structure of the trellis is irregular compared to that of convolutional codes. Thus a convolutional code is equivalent to a stationary Markov source, while a linear block code is equivalent to a time-varying Markov source.

[Figure 2.4: Unexpurgated trellis from step (i) of Wolf's trellis construction method]

For a rate $k/n$ linear block code with $r = n - k$, there are at most $2^r$ states at each level. Therefore, with the MAP algorithm, any parity-check code with $r$ parity bits can be decoded on a memoryless channel with a complexity proportional to $2^r$. In general, one would want to minimize the number of states so that the decoding complexity is reduced. In [20], Muder defines the term minimal trellis as the trellis with the minimal number of states at each level for a specific code. In [21], it was shown that the Wolf, Massey, and Forney trellis generation methods all yield minimal trellises. All minimal trellises for the same code are said to be isomorphic. Except for the specific trellis layout, isomorphic trellises exhibit identical properties in terms of error performance and decoding complexity.
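Wolf's two-step construction for the $(5,3)$ example above can be traced programmatically. A minimal Python sketch, using the columns $h_1, \ldots, h_5$ of the $H$ given above; the tuple states are exactly the partial syndromes of (2.20):

```python
# Columns h_1..h_5 of the (5,3) example parity-check matrix H above.
H_COLS = [(1, 0), (1, 1), (1, 0), (0, 1), (0, 1)]

def step(state, bit, h):
    """Partial-syndrome update s_t = s_{t-1} + c_t * h_t (mod 2), Eq. (2.20)."""
    return tuple(s ^ (bit & hv) for s, hv in zip(state, h))

# Step (i): unexpurgated trellis -- extend every path by both code bit values.
paths = [((0, 0), ())]
for h in H_COLS:
    paths = [(step(st, b, h), bits + (b,)) for st, bits in paths for b in (0, 1)]

# Step (ii): expurgate paths that do not terminate in the all-zero state.
codewords = sorted(bits for st, bits in paths if st == (0, 0))
print(len(codewords))  # 8 surviving paths, i.e. 2^k codewords for a (5,3) code
```

Each surviving path is a codeword, and the partial-syndrome tuples encountered along a path are the trellis states of Figures 2.4 and 2.5.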
[Figure 2.5: Expurgated trellis from step (ii) of Wolf's trellis construction]

Chapter 3 Markov Model-Aided Convolutional Decoding

The complexity of a digital communication system demands that designers model the complete system as a collection of smaller functional blocks (refer to Figure 3.1).

[Figure 3.1: Communication system components — information source, source encoder, channel encoder, modulator, channel, demodulator, channel decoder, source decoder, and information sink, with the source/channel encoder and decoder pairs grouped as a joint source-channel encoder and decoder]

Traditionally, these components (information source, source encoder, channel encoder, digital modulator, digital demodulator, channel decoder, source decoder, and information sink) have been identified as independent functional units with a clear-cut interface between each module [22]. Each module is designed with individualized criteria, and there is little or no interaction between the different groups of designers [31]. The source encoder is designed to maximize channel utilization through compression techniques. The ultimate goal of source coding is to remove all redundancy in an information sequence and produce an output sequence of equally likely binary digits. Naturally, the more effective the source encoding, the more sensitive the compressed sequence becomes to channel noise. The channel is the physical medium on which data is sent via the modulated and demodulated signalling waveforms. Since the channel is not ideal and is prone to noise, controlled redundancy is introduced to combat errors by a process known as channel coding.
Shannon's separation principle says that source and channel encoders can be treated independently without any overall performance loss [26]. However, this is valid only when the link from source to channel is error-free and/or the source output contains no redundancy [30]. Under non-ideal situations where correlation between source bits does exist, source and channel encoding/decoding should not be treated separately [31]. As the author of [31] pointed out, Shannon, in his classical paper, also ascertained that in the absence of such ideal conditions, any residual redundancy in the source-encoded data can and should be exploited for channel error protection. According to [31], most source coding algorithms for voice, audio, and images produce correlation in certain coded bits. When the source and channel encoders/decoders are not designed independently, the system is referred to as a joint source/channel coding system. Joint source/channel coding systems which use traditional source coders and channel coders fall into two main categories. One is concatenated source/channel coders that cascade a known source coder and a known channel coder and allocate the fixed bit rate between the source and the channel coder to maximize system performance. In the other category are systems where the source coder is modified to account for the presence of the noisy channel, and systems where the channel decoder is modified to take into account properties of the source. Here we explore the latter approach. The source data sequence will be modeled as a Markov source, and our channel decoder will use a Markov model of the source to aid its decision-making process. We choose to deal with convolutional codes as the channel codes because of their pervasiveness in existing systems. We give a derivation which shows how to perform Markov model-aided decoding for a general Markov model.
For the case of a two-state model, the necessary modification to the Viterbi decoding algorithm becomes obvious. We go on to cover a technique called trellis merging which allows the Viterbi algorithm to be used with an N-state Markov model. Finally, this chapter concludes with an iterative decoding technique which allows the receiver to recover the model without any a priori information of the source statistics. The results of this chapter are to appear in [39].

3.1 Viterbi Decoding using a Two-State Markov Model

Markov model-aided decoding involves modeling the source data set as a finite-state Markov sequence, and incorporating the model into the decoding process. The model parameters are the state transition probabilities of the source, where a state refers to a source code word, or to a subset of the bits in a source code word. Consider a simple two-state model based on the data bits of an information sequence. A zero information bit is represented by state zero, while a non-zero bit is represented by state one. The transitions between the data bits are characterized by the four transition probabilities p(0|0), p(1|0), p(0|1), and p(1|1). More redundant data sets result in more pronounced models, and hence in more benefit to the source-aided channel decoder. With an ideal source encoder, the information bits are equally likely, and the Markov model adds no advantage or disadvantage to the decoding process.

[Figure 3.2: Two-state Markov model, with transition probabilities p(0|0), p(1|0), p(0|1), and p(1|1) between states (0) and (1)]

There are different ways of making the model parameters available to the receiver. These include training the decoder on some standard data set, transmitting the model parameters as side information with a stronger error-protecting code [33], or estimating the model parameters iteratively at the receiver. Training the receiver on a standard sequence is an approximation technique, since different data sets can have completely different statistics.
Transmitting the model parameters incurs a direct overhead cost; furthermore, transmission errors can corrupt the model used by the receiver. Model estimation through iterative methods avoids these problems, but produces a larger decoding delay because the parameters are extracted and fed back to the decoding system, similar to turbo decoding. In a system with a single encoder/decoder pair, the Viterbi algorithm can be used for decoding because, in the absence of a multi-stage system, hard-decision decoders will suffice. Unlike an exhaustive tree search, the Viterbi algorithm requires relatively little computational effort and yet produces optimal performance, as opposed to probabilistic techniques such as the Fano or stack algorithm [22]. Note that the Viterbi algorithm is not limited to convolutional codes, since linear block codes, or any codes that can be transformed into a linear block code, have a trellis representation (refer to Section 2.4). The Viterbi algorithm searches through the trellis eliminating paths at each node, and is optimal in the sequential sense. The conventional Viterbi decoder can incorporate a Markov model into the decision-making process, as we show in the following. For a given rate $k/n$ code with information sequence $u = (u_0, u_1, u_2, \ldots, u_{N-1})$, channel encoder output $x = (x_0, x_1, x_2, \ldots, x_{N-1})$, and received sequence $y = (y_0, y_1, y_2, \ldots, y_{N-1})$, the decoder forms the estimate $\hat{u}$, which is correct if $\hat{u} = u$. The objective of the decoder is to minimize the probability of error as calculated by:

$$
p(E) = \sum_{y} p(E \mid y) \cdot p(y)
\qquad (3.1)
$$

where $p(y)$ is the probability distribution of the received sequence and is independent of the decoding process. Hence, to minimize the probability of error $p(E)$, one minimizes the conditional probability

$$
p(E \mid y) = p(\hat{u} \neq u \mid y)
\qquad (3.2)
$$

Note that minimizing $p(\hat{u} \neq u \mid y)$ is the same as maximizing $p(\hat{u} = u \mid y)$. Applying Bayes' rule, we get:
$$
p(\hat{u} = u \mid y) = \frac{p(y \mid u) \cdot p(u)}{p(y)}
\qquad (3.3)
$$

For optimal decoding, it is sufficient to maximize the numerator term [26]. In conventional Viterbi decoding, the term $p(u)$, which is the a priori probability of the information sequence, is not known and is assumed to be the same for every sequence $u$. Hence, the conventional Viterbi decoder maximizes $p(y \mid u)$, which corresponds to maximum likelihood decoding. At each node, the decoder calculates the branch metrics associated with the trellis transitions from the nodes at the previous time to that node. These are added to the partial cumulative metrics associated with each path or sequence up to the node. A survivor path is chosen with the best metric, and all other paths merging into this particular node are eliminated. In a discrete memoryless channel (DMC), the channel probabilities are independent, or

$$
p(y \mid u) = \prod_i p(y_i \mid u_i)
\qquad (3.4)
$$

Furthermore, for data described by a first-order Markov source, it can be shown that

$$
p(u) = \prod_i p(u_i \mid u_{i-1})
\qquad (3.5)
$$

Therefore, to maximize the a posteriori probability of choosing the correct sequence, it is sufficient to maximize

$$
\prod_i p(y_i \mid u_i) \cdot p(u_i \mid u_{i-1})
\qquad (3.6)
$$

In the log domain, this expression becomes a sum of branch metrics which include channel and source transition probability terms:

$$
\sum_i \Big[ \underbrace{\log p(y_i \mid u_i)}_{\text{channel term}} + \underbrace{\log p(u_i \mid u_{i-1})}_{\text{a priori probabilities from model}} \Big]
\qquad (3.7)
$$

The decoding algorithm proceeds as previously described, except that the modified branch metric is used. The performance of the source-aided decoder was evaluated via Monte Carlo simulations over an AWGN channel. Each simulation point within this thesis contains at least 200 bit errors. We use a one-bit differential pulse code modulated (DPCM) monochrome image of Lenna for modeling and transmission purposes.
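The modified metric of (3.7) slots into an otherwise standard Viterbi recursion. The sketch below is a hedged illustration on a toy rate-1/2, memory-1 code with generators (1, 1+D) — an assumption for brevity, not the constraint-length-9 code simulated here. For this toy code the encoder state equals the previous information bit, so the two-state transition probabilities p(u_k | u_{k-1}) of Table 3.1 can be read directly from the trellis state:

```python
import math

P = {(0, 0): 0.6518, (0, 1): 0.3482,   # p(u_k | u_{k-1} = 0)  (Table 3.1)
     (1, 0): 0.1441, (1, 1): 0.8559}   # p(u_k | u_{k-1} = 1)

def branch_metric(u, s_prev, y_pair, sigma2):
    """Eq. (3.7): channel log-likelihood plus log a priori transition prob."""
    chan = 0.0
    for yv, b in zip(y_pair, (u, u ^ s_prev)):   # output bits (u, u XOR state)
        xv = 1.0 if b else -1.0                  # BPSK mapping
        chan += -(yv - xv) ** 2 / (2.0 * sigma2)
    return chan + math.log(P[(s_prev, u)])

def viterbi(y, sigma2):
    metric = {0: 0.0, 1: float("-inf")}          # encoder starts in state 0
    paths = {0: [], 1: []}
    for y_pair in y:
        new_m, new_p = {}, {}
        for u in (0, 1):                         # next state equals input bit u
            best, sp = max((metric[s] + branch_metric(u, s, y_pair, sigma2), s)
                           for s in (0, 1))
            new_m[u], new_p[u] = best, paths[sp] + [u]
        metric, paths = new_m, new_p
    return paths[max(metric, key=metric.get)]

# The last symbol (1.0, 1.0) is channel-ambiguous from state 1 (both branches
# have equal channel metric); the a priori term p(1|1) = 0.8559 tips it to u = 1.
print(viterbi([(1.0, 1.0), (1.0, -0.2), (1.0, 1.0)], sigma2=0.5))  # [1, 1, 1]
```

The illustrative noise values and sigma2 are assumptions; the point is that the source term resolves branches the channel term leaves ambiguous.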
Although one-bit DPCM encoding of the Lenna image produces poor picture quality and would not be used in practical situations, these simulations show that improvement can be attained through the use of source modeling. The simulation uses a rate 1/3 convolutional code with constraint length 9 and generator polynomials $g_1 = 557_8$, $g_2 = 663_8$, and $g_3 = 771_8$ (refer to Figure 3.3 for the simulation results). The model parameters, which are acquired by measuring the DPCM Lenna image prior to transmission, are given in Table 3.1.

Table 3.1: Model Parameters for 1-bit DPCM Image of Lenna

            p(0|x)    p(1|x)
  p(x|0)    0.6518    0.3482
  p(x|1)    0.1441    0.8559

  p(0) = 0.2922    p(1) = 0.7078

[Figure 3.3: Two-state Markov model-aided decoding — BER vs. SNR (dB)]

To demonstrate the impact of the model, we show a family of curves with various parameters. Good model candidates will be those derived from images encoded with schemes that are widely used. The JPEG standard and many other transform-based image encoders decompose an image into a number of relatively independent components. In a progressive transmission scheme, these components may be sent separately. In applying source-aided decoding, the best candidate component would be the one with the most redundancy, such as the DC terms of a discrete cosine transform (DCT). As outlined in [32], we used a 7-bit Lloyd-Max quantization of the DC terms of an 8-pixel by 8-pixel DCT, and encoded these quantized values with a folded binary code. We considered transmitting the most significant bits. Statistics established using this technique are presented in Table 3.2.
Table 3.2: Parameters for Different Models

               DCT Model              Artificial Model
            p(0|x)    p(1|x)        p(0|x)    p(1|x)
  p(x|0)    0.8511    0.1489        0.9518    0.0482
  p(x|1)    0.1436    0.8564        0.0441    0.9559

  p(0) = 0.5110, p(1) = 0.4890      p(0) = 0.7922, p(1) = 0.2078

For comparison purposes, we also include parameters with more pronounced statistics. The effect of the different data and parameters is exhibited in Figure 3.4. For faster simulations, a rate 1/3 convolutional code with constraint length 3 and generator polynomials $g_1 = 5_8$, $g_2 = 7_8$, $g_3 = 7_8$ is employed. This weaker code was chosen to show the convergence of the curves for source-aided and non-aided decoding in the limit of high signal-to-noise ratios (refer to Figure 3.4 for simulation results); it was also necessary to reduce simulation time, as a weaker code requires less computational effort. Naturally, the more pronounced the model is, the better the decoder performs with the aid of source modeling. The performance improvement is more prominent at lower signal-to-noise ratios.

[Figure 3.4: Family of curves with varying statistics — BER vs. SNR (dB)]

At lower signal-to-noise ratios, the channel is noisier, and the statistical model is more effective at biasing the metric. However, in a clean channel, the channel term of the branch metric more accurately reflects the transmitted bits, and the source term is relatively weak. Therefore, one observes the convergence of the source-aided and non-aided curves at high signal-to-noise ratios. To verify this, one can consider the separate contributions of the channel probabilities and the a priori probabilities of the model. In an AWGN channel, the channel probabilities are given by:

$$
p(y_i \mid u_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{(y_i - x_i)^2}{2\sigma^2} \right),
\qquad
\log p(y_i \mid u_i) = -\frac{(y_i - x_i)^2}{2\sigma^2} + \text{constant}
\qquad (3.8)
$$

where $x_i \in \{-1, +1\}$ is the $i$-th transmitted code symbol and $y_i$ is the corresponding received value.
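The weighting argument can be made concrete. In the sketch below, the noise variance is assumed to follow the common normalization sigma^2 = 1/(2R * Eb/N0) for a rate-R code with unit-energy BPSK symbols (an assumption about the normalization; the exact convention used in the simulations may differ). The channel term's weight 1/(2*sigma^2) from (3.8) then grows with SNR, while the a priori magnitudes from Table 3.1 stay fixed:

```python
import math

def sigma2(ebno_db, rate=1.0 / 3.0):
    """Assumed AWGN noise variance for rate-R coded BPSK, unit symbol energy."""
    return 1.0 / (2.0 * rate * 10.0 ** (ebno_db / 10.0))

# Fixed a priori magnitudes from the two-state model of Table 3.1.
prior_strong = abs(math.log(0.8559))   # ~0.16
prior_weak = abs(math.log(0.1441))     # ~1.94

for db in (-2.0, 0.0, 3.0):
    s2 = sigma2(db)
    print("Eb/N0 = %+5.1f dB: sigma^2 = %.3f, channel weight = %.3f"
          % (db, s2, 1.0 / (2.0 * s2)))
```

At low SNR the channel weight drops toward the size of the a priori terms, so the model biases the metric strongly; at high SNR the channel weight dominates, matching the curve convergence discussed above.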
The noise variance is inversely proportional to the channel SNR; for a rate $R$ code with unit-energy BPSK symbols,

$$
\sigma^2 = \frac{1}{2R \, E_b/N_0}
\qquad (3.9)
$$

From (3.8) and (3.9), at high SNR ($E_b/N_0$), $\sigma^2$ becomes very small, making the channel term of the branch metric in (3.7) dominate over the a priori term. Similarly, at low SNR, $\sigma^2$ becomes very large, giving the channel term of the branch metric a lower weight relative to the a priori probability term. The relative contribution of the two terms is thus inherently regulated by the channel conditions.

3.2 Trellis Merging and Higher State Models

For source coders which produce fixed-length codewords of length greater than one bit, we must consider Markov models with a higher number of states. For example, we may consider two- or three-bit DPCM encoded images, whose Markov models have four and eight states, respectively. The source model for the two-bit case is shown in Figure 3.5.

[Figure 3.5: Four-state Markov model]

For q-bit source code words, if we block q channel code words into a single block, the block will correspond to a single source codeword, and the Viterbi algorithm employed in the two-state Markov model-aided decoder can be used as before, except that the model now has $Q = 2^q$ states. By grouping source bits, the decoder for a rate 1/n code will parse q n-bit codewords into one super codeword. This can be viewed as transforming a 1/n code into a q/qn code. In the trellis representation, this can be seen as eliminating the q−1 intermediate nodes for every q nodes and connecting the two remaining end nodes directly with a larger number of paths. This process is known as trellis merging. Refer to Figure 3.6 and Figure 3.7 for a visual depiction of a trellis and its corresponding merged trellis. Trellis merging reduces the overall length of the trellis by a factor of q, but increases the number of branches per stage by a factor of $2^{q-1}$. For the case where q = 2, the overall trellis length is reduced by a factor of 2 while the number of branches is increased by a factor of 2; therefore, there is no significant increase in the overall decoding complexity. However, for q > 2, there is an increase in overall decoding complexity. As mentioned earlier, the decoding algorithm is not changed, and the metric has the same form as in Equation (3.7), except that $u_i$ is now a q-bit source codeword, and $p(y_i \mid u_i)$ is replaced by

$$
\prod_{m=0}^{q-1} p(y_{qi+m} \mid u_{qi+m})
\qquad (3.10)
$$

where $y_j$ is the n-bit code word corresponding to the source code bit $u_j$.

[Figure 3.6: Trellis for a 4-state convolutional code]

[Figure 3.7: Two-stage merged trellis for the 4-state convolutional code]

To evaluate the performance of the N-state model-aided decoders, simulations were completed for both a two-bit DPCM and a three-bit DPCM encoded Lenna image. For consistency, we always employ the convolutional encoder with constraint length 9 and generator polynomials $g_1 = 557_8$, $g_2 = 663_8$, $g_3 = 771_8$. Compared to the results for the single-bit DPCM encoded image, the two-bit DPCM encoded image shows a noticeable improvement in the Markov model coding gain. However, moving from a two-bit DPCM image to a three-bit DPCM image revealed only a very minor improvement in the bit error rate. The model parameters for the two-bit DPCM Lenna image are given in Table 3.3.

Table 3.3: Model Parameters for 2-bit DPCM Image of Lenna

              p(00|xx)   p(01|xx)   p(10|xx)   p(11|xx)
  p(xx|00)     0.5839     0.3469     0.0072     0.0620
  p(xx|01)     0.0403     0.6054     0.0152     0.3391
  p(xx|10)     0.0070     0.0858     0.4765     0.4307
  p(xx|11)     0.0056     0.2337     0.0527     0.7079

  p(00) = 0.0435   p(01) = 0.3648   p(10) = 0.0642   p(11) = 0.5274

[Figure 3.8: Comparison of Markov model-aided decoding for N-bit DPCM — BER vs. SNR (dB)]

To demonstrate the effectiveness of the modeling process, simulations with real image data are compared to those completed with synthesized data. The synthesized data was computer generated using the parameters obtained from measuring the two-bit DPCM Lenna image statistics. The parameters of the model reflect the Lenna image as a whole and may not be a good representation of image sub-parts. In other words, the model assumes a stationary source, whereas real images are never truly stationary. Therefore, Markov model-aided decoding of synthesized data should yield better results than Markov model-aided decoding of real image data. However, as the simulation results demonstrate, the performance on the actual image data follows very closely that of the simulated ideal data (refer to Figure 3.9). This justifies the modeling process.

[Figure 3.9: Four-state Markov model-aided decoding — no source model vs. Markov model-aided decoding of actual and ideal data]

Please refer to Figure 3.10 for an example of images obtained from Markov model-aided decoding of the 512 x 512 Lenna image, and notice the very dramatic visual improvement attained!
[Figure 3.10: Qualitative depiction of Markov model-aided decoding for 2-bit DPCM — decoded images without and with source aiding at SNR = −1 dB and SNR = 0 dB]

3.3 Model Recovery

Consider the problem of making the model available to the receiver. Overhead is incurred from sending the model parameters explicitly. Using a standard training sequence to establish the model may lead to inaccuracy, and ultimately degraded performance, as different images have different statistics. Obtaining the model iteratively does not incur the problems of the other two methods, with the drawback of a processing time delay at the receiver. The iterative modeling process begins by decoding the data file conventionally, with no prior knowledge of the data. During the decoding procedure, the decoder counts the number of states and the number of state transitions encountered. The model parameters are calculated from these measurements, and the decoding process is repeated with the new model parameters (refer to Figure 3.11).

[Figure 3.11: Iterative model recovery structure — the channel decoder's output is used to re-estimate the source model, which is fed back to the channel decoder]

Measurements from previous iterations are not used, so that the effect of previous decoding errors will not accumulate and weigh down future improvements. This process is repeated iteratively until convergence, where further iterations yield minor or negligible improvement. Iterative modeling will produce improvement as long as each stage of the iteration yields better results than the previous one.
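One recovery pass — counting states and transitions in the current hard decisions and normalizing into transition probabilities — can be sketched as follows. The decoded sequence here is illustrative data, not thesis output, and counts are taken fresh from the current pass only, as described above:

```python
from collections import Counter

def estimate_model(bits):
    """Re-estimate p(u_k | u_{k-1}) by counting transitions in decoded bits."""
    pairs = Counter(zip(bits, bits[1:]))
    model = {}
    for prev in (0, 1):
        total = pairs[(prev, 0)] + pairs[(prev, 1)]
        for cur in (0, 1):
            # Fall back to 1/2 (no model bias) if a state never occurs.
            model[(prev, cur)] = pairs[(prev, cur)] / total if total else 0.5
    return model

decoded = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1]   # illustrative decoder output
model = estimate_model(decoded)
print(model[(0, 0)], model[(1, 1)])   # 0.6 0.8333...
```

The resulting dictionary is handed back as the a priori term of the branch metric (3.7) for the next decoding iteration.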
From Figure 3.12, we observe that over the entire range of channel SNR, after two decoding iterations the model is recovered well enough to give essentially the same Markov model coding gain as the case in which the receiver has perfect a priori knowledge of the source model.

[Figure 3.12: Iterative model recovery using the four-state model with synthetic data — BER vs. SNR (dB)]

Figure 3.12 shows the results of simulations performed over synthesized data modeled from a two-bit DPCM encoded Lenna image, whereas the source data for Figure 3.13 is the actual data obtained directly from the image file.

[Figure 3.13: Iterative model recovery using raw data from the 2-bit DPCM Lenna image — BER vs. SNR (dB) for the 0th, 1st, 2nd, and 5th iterations]

Both simulations employ the same FEC code, which is a rate 1/3, constraint length 9 convolutional code with generator polynomials $g_1 = 557_8$, $g_2 = 663_8$, and $g_3 = 771_8$. As observed from the curves, the model recovery process has a similar impact on the BER performance for both the synthetic data and the actual image data. Although we have not reported the quantitative difference between the iteratively recovered model and the true model, we note that Markov model-aided decoding has a certain amount of insensitivity to the accuracy of the model: even when the BER obtained using the true model is identical to the BER obtained using the iteratively estimated model, there is a discrepancy between the two models on the order of five percent.
To reduce the decoding delay incurred by the iterative model recovery technique, we have considered breaking the image file into blocks and performing model recovery block-by-block, independently for each block. The model for a block of small size tends to be very pronounced, and any transmission error causes an extreme deviation away from the true model. A block must be sufficiently large so that there is sufficient error cancellation in the measured transition probabilities. Without error cancellation, the model measured at the receiver has no chance of biasing the branch metric so as to improve the decoder's error probability. Rather than change the data block size of the encoder/decoder, which is often fixed by standards, the model recovery procedure can group adjacent data blocks together to achieve larger model recovery blocks that are big enough for sufficient error cancellation (refer to Figure 3.14).

[Figure 3.14: Grouping together data blocks for model recovery — model recovery iterations operate on groups of adjacent data blocks within the image file]

The results of varying the model recovery block size are observed in Figure 3.15. Here, simulations are performed on actual raw data from the two-bit DPCM Lenna image. In this scenario, synthetic data is not applicable because we are interested in observing the impact of different sub-image models on the overall decoding performance. In synthesized data, the data bits are generated within the confines of a specified model; therefore, regardless of the data block size, the statistics are stationary. However, smaller data blocks within a larger image file may not be stationary, and statistics can vary drastically from one block to another.

[Figure 3.15: Impact of varying model recovery block sizes — BER ratio vs. number of frames (up to 2850) at SNR = −3, −2, −1, and 0 dB]
The simulations are performed using the same rate 1/3 convolutional code with constraint length 9. The data block size used is 184 data bits, and the complete image file consists of a collection of 2850 such data blocks. For our simulations, we monitor the relative improvement of the fourth model recovery iteration over a first pass through the system. To accomplish this, we measure the ratio of the two BER values over a range of model recovery block sizes, spanning from a single data block to the complete image file. One would expect a trade-off between the stationarity effect and the error cancellation effect. A smaller block will yield a model more reflective of the local statistics; however, a model which is very pronounced locally will be prone to errors. In the extreme case, consider a model developed for each individual bit in the data transmission. If a zero bit is received, the model recovered will have p(0) = 1, and re-decoding will simply give back the same result; a single error will produce a model which is completely incorrect. Therefore, in this scenario, statistical modeling will provide no benefit to the overall coding performance. The results observed in Figure 3.15 are consistent with this expectation. Because the results are expressed as the ratio of the BER at the fourth iteration over the BER of a non-iterative system, a smaller value indicates better performance in terms of benefit from the model recovery process. From Figure 3.15, we observe that the best result occurs at a model recovery block size of approximately 32 data blocks for all SNRs we investigated. The curves indicate a trend of improving BER as the model recovery block size increases from a single data block to the optimal size observed above, and slightly inferior performance is observed beyond this point.
This is expected since the effects of error cancellation and model accuracy are balanced at approximately 32 data blocks. For the Lenna image, this is equivalent to dividing the image into 90 sub-images. This investigation was carried out on the Lenna image only as a means of demonstrating the impact of partitioning an image transmission into smaller blocks for model recovery. One can anticipate that the best way to divide up the image is very image dependent. Different approaches to sub-image division are a study worthy of a separate investigation.

Chapter 4 Source-Aided Turbo Decoder

The primary purpose of iterative decoding, as outlined in the previous two chapters, is to obtain the a priori probabilities necessary to perform MAP decoding. Turbo decoding exploits the structure of one component code to approximate the a priori probability for input to the following component decoder. Source-aided channel decoding exploits residual redundancy in the information source, which is modeled as a collection of a priori probabilities. Although the two probabilities come from different sources and are gathered differently, they can be used together to form a Markov model-aided turbo decoding system. In this chapter we examine the application of Markov model-aided turbo decoding to the codes of an existing concatenated system, the reverse channel link of the Code Division Multiple Access (CDMA) standard, IS-95. The reverse channel link is composed of the serial concatenation of two subcomponent codes. The concept of turbo codes, which was proposed after the development of the standard, had no influence on the design of the reverse link code. Therefore, the codes employed in the IS-95 standard were not optimized for turbo decoding. Nevertheless, its concatenated structure makes it a good candidate for the application of turbo decoding.
To effectively apply iterative decoding to the reverse channel link, we must modify the existing decoder by reformulating the component decoders as soft-in/soft-out decoders. Source-aided decoding via Markov modeling can further enhance this turbo decoded system. In practical systems, the decoder has no precognition of the source model parameters, and model recovery must be performed. The small interleaver block size dictated by the standard poses a challenge to the model recovery process. To overcome this difficulty, three model recovery schemes are proposed and compared.

4.1 CDMA Channel Link

In 1990, Qualcomm proposed the second North American standard for digital cellular communication, which was later adopted in 1993 as IS-95 (Interim Standard 95). This standard is based on a spread spectrum technique in which a narrowband signal is expanded into a signal with a bandwidth much broader than that of the original. Spread spectrum techniques were first employed by the military for anti-jamming purposes because of their low probability of intercept. In consumer applications, these properties are exploited for the purpose of multiple access in a common bandwidth. Although the forward and the reverse link employ many of the same operating principles, they are applied differently. Both the forward and reverse link are heavily dependent on the Walsh-Hadamard code. The forward channel link uses the Walsh-Hadamard code for direct sequence spreading, whereas the reverse link uses it for coded spreading. In the forward link, the Walsh-Hadamard code is used primarily for channel assignment, whereas in the reverse link, the same Walsh-Hadamard code is employed for extra redundancy in error correction. Because of this code's importance to the IS-95 standard, this section commences with a description of the Walsh-Hadamard code and its properties.
Having established this, the IS-95 transmitter/receiver structure is presented. At that point, it is shown how Markov model-aided decoding can be applied to improve the performance of the existing system.

The Walsh-Hadamard functions are binary orthogonal sequences whose lengths are a power of two. The Walsh-Hadamard code is defined by the Hadamard matrix, named after its initial discoverer, Jacques Hadamard [22]. The Walsh-Hadamard code words are formed by a process called the Hadamard transform, which consists of taking linear combinations of different columns of the Hadamard matrix. For example, a code word $F$ is formed via $F = H \cdot f$, where $H$ is the Hadamard matrix and $f$ is the input vector. Hence the Walsh-Hadamard codes can be viewed as linear block codes [22]. The Hadamard matrix is an $n \times n$ square matrix with two elements: +1's and -1's. The inner product of any two distinct rows of the matrix is zero, and the multiplication of the Hadamard matrix with its transpose forms the identity matrix scaled by a factor of $n$, i.e., $H \cdot H^T = n \cdot I$. From this definition, the inverse transform can be shown to be:

$$H^{-1} = \frac{1}{n} H^T \qquad (4.1)$$

This implies that both the forward and reverse transforms are governed by the same matrix. The $n \times n$ Hadamard matrix is defined recursively by:

$$H_n = \begin{bmatrix} H_{n/2} & H_{n/2} \\ H_{n/2} & -H_{n/2} \end{bmatrix} \qquad (4.2)$$

where $H_1 = 1$. Thus

$$H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad H_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \qquad (4.3)$$

An alternative representation of the Hadamard transform, via the Kronecker product, is:

$$F = H_n \cdot f = (\underbrace{H_2 \otimes H_2 \otimes \cdots \otimes H_2}_{m \text{ factors}}) \cdot f = \underbrace{G_n \cdot G_n \cdots G_n}_{m \text{ factors}} \cdot f \qquad (4.4)$$

The above means that a Hadamard transform of order $n$ can be decomposed into a series of $m = \log_2(n)$ Kronecker multiplications. More importantly, it implies that there exists a matrix $G_n$ that has the same effect as the Hadamard transform when multiplied with itself $m$ times. This can be demonstrated with an example.
Consider the matrix:

$$G_4 = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & -1 \end{bmatrix} \qquad (4.5)$$

Then

$$H_4 = G_4^2 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \qquad (4.6)$$

By expanding the above multiplication we get:

$$f' = G_4 f = \begin{bmatrix} f_0 + f_2 \\ f_0 - f_2 \\ f_1 + f_3 \\ f_1 - f_3 \end{bmatrix} \qquad (4.7)$$

$$F = G_4 f' = \begin{bmatrix} (f_0 + f_2) + (f_1 + f_3) \\ (f_0 + f_2) - (f_1 + f_3) \\ (f_0 - f_2) + (f_1 - f_3) \\ (f_0 - f_2) - (f_1 - f_3) \end{bmatrix} \qquad (4.8)$$

These matrix operations have a repetitive configuration and can be represented diagrammatically. In this form, the transform takes on a butterfly structure similar to that of the fast Fourier transform (refer to Figure 4.1). Like the fast Fourier transform, this operation, known as the fast Hadamard transform (FHT), has a complexity of $O(N \log N)$. This fast transform is identical to the fast Fourier transform with the twiddle factors set equal to one. The fast Hadamard transform allows a fast implementation of the Walsh-Hadamard code and is used in the IS-95 standard.

Figure 4.1 Butterfly structure of Walsh-Hadamard code with length 4
Figure 4.2 General form of butterfly structure

For channel coding purposes, there are two fundamental codes employed in the CDMA IS-95 standard. The forward and the reverse links utilize both the Walsh-Hadamard sequence and convolutional codes. These two codes are employed differently, as dictated by the nature of the channel. Because the base stations generally have greater transmission power than the mobile stations, the forward link (also known as the downlink) is not confined to a strong low rate code. However, one base station usually communicates with multiple mobile stations; therefore, ideal channelization is desired.
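The recursion (4.2) and the butterfly of Figure 4.1 can be sketched and cross-checked in a few lines of Python (our illustration, not code from the thesis):

```python
def hadamard(n):
    """Sylvester recursion (4.2): H_n = [[H, H], [H, -H]] with H_1 = [1]."""
    if n == 1:
        return [[1]]
    h = hadamard(n // 2)
    return [row + row for row in h] + [row + [-v for v in row] for row in h]

def fht(f):
    """Fast Hadamard transform: log2(n) butterfly stages, each combining a
    pair (a, b) into (a + b, a - b); O(n log n) additions in total."""
    f = list(f)
    n, h = len(f), 1
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                f[i], f[i + h] = f[i] + f[i + h], f[i] - f[i + h]
        h *= 2
    return f

# Cross-check against the direct transform F = H_8 f and the
# orthogonality property H * H^T = n * I.
H8 = hadamard(8)
f = [3, 1, 4, 1, 5, 9, 2, 6]
direct = [sum(H8[i][j] * f[j] for j in range(8)) for i in range(8)]
assert fht(f) == direct
assert all(sum(H8[i][k] * H8[j][k] for k in range(8)) == (8 if i == j else 0)
           for i in range(8) for j in range(8))
```

The butterfly touches each of the $n$ samples once per stage, for $\log_2 n$ stages, which is where the $O(N \log N)$ count comes from.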
For forward error correction (FEC), the forward link relies on a rate 1/2 convolutional code with constraint length 9 (generator polynomials $g_1 = 753_8$, $g_2 = 561_8$), and the orthogonal properties of the Walsh-Hadamard code are exploited for channelization purposes to produce direct sequence (DS) spreading. There are a total of 64 separate channels, with one pilot channel, one synchronization channel, and up to seven paging channels. Interleaving is performed to combat bursty channel errors. Then, data scrambling is performed according to an individualized long pseudo-noise (PN) sequence for privacy purposes, followed by the application of a short PN sequence used to identify the base station. The forward link uses Quadrature Phase Shift Keying (QPSK) modulation (refer to Figure 4.3).

Figure 4.3 Forward Channel Link of CDMA IS-95

The reverse channel link (also known as the uplink) generally has a smaller power supply available than the base station; therefore, a stronger channel code is required. The reverse channel uses a rate 1/3 convolutional code with constraint length 9 (generator polynomials $g_1 = 557_8$, $g_2 = 663_8$, $g_3 = 771_8$). The same 64 Walsh-Hadamard codes as in the forward link are employed, except that instead of channelization, they are used as a block code for the purpose of error correction. Every 6 scrambled code bits coming out of the convolutional encoder are used to select the corresponding 64-bit Walsh-Hadamard codeword.
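The 6-bit-to-codeword selection can be sketched as follows. This is our illustration: the weighting of bits into a row index is a simplifying assumption (the IS-95 standard fixes the exact ordering), and the row formula is the Sylvester-construction identity $H[i][j] = (-1)^{\mathrm{popcount}(i \wedge j)}$:

```python
def walsh_codeword(bits):
    """Map 6 code bits to a 64-chip Walsh-Hadamard sequence
    (a row of H_64, in +1/-1 form).  Bit-to-index order is illustrative."""
    assert len(bits) == 6
    index = sum(b << i for i, b in enumerate(bits))
    # Row `index` of the Sylvester Hadamard matrix: entry j equals
    # (-1) raised to popcount(index AND j).
    return [1 - 2 * (bin(index & j).count("1") & 1) for j in range(64)]

w0 = walsh_codeword([0, 0, 0, 0, 0, 0])   # the all-ones row
w1 = walsh_codeword([1, 0, 0, 0, 0, 0])
dot = sum(a * b for a, b in zip(w0, w1))  # distinct rows are orthogonal: 0
```

Because distinct rows have zero inner product, the 64 codewords form the orthogonal signal set that the 64-ary modulator transmits.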
In contrast to the forward link, the reverse link, operating in a power limited environment, depends more on the coding performance of the code than on the multiple access aspect. The employment of low rate codes to achieve the spectrum spreading effect has been coined coded spreading [34]. The interleaver in the reverse link sits between the convolutional code and the Walsh-Hadamard code, and has the extra responsibility of adapting the two different component codes. For the traffic channel, a 576-bit block interleaver with 32 rows and 18 columns is used. The information sequence is written into the interleaver column-wise and read out row-wise. In the reverse link, the long PN sequence is used for DS spreading. As noted in [34], this DS spreading, in contrast to the coded spreading performed by the Walsh-Hadamard code, provides no significant coding gain. The reverse link undergoes the same short PN sequence spreading as the forward link prior to the application of Offset Quadrature Phase Shift Keying (OQPSK) modulation (refer to Figure 4.4).

Figure 4.4 Reverse Channel Link of CDMA IS-95

With both the forward and the reverse channel architectures now presented, we consider applying Markov model-aided decoding to both systems. To evaluate the coding performance, we assume an interference-free environment, and only the FEC component of the system is considered. In the forward link, this is comprised of the single rate 1/2 convolutional encoder/decoder.
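The 32-row by 18-column block interleaver described above (write column-wise, read row-wise) can be sketched together with its inverse; this is a minimal illustration under those two rules, not IS-95 reference code:

```python
ROWS, COLS = 32, 18  # 576-bit reverse-link traffic channel interleaver

def interleave(seq):
    """Write the 576 code bits into a 32x18 array column by column,
    then read them out row by row."""
    assert len(seq) == ROWS * COLS
    table = [[None] * COLS for _ in range(ROWS)]
    for i, bit in enumerate(seq):
        col, row = divmod(i, ROWS)  # fill a whole column before moving on
        table[row][col] = bit
    return [table[r][c] for r in range(ROWS) for c in range(COLS)]

def deinterleave(seq):
    """Inverse permutation: write row by row, read column by column."""
    assert len(seq) == ROWS * COLS
    table = [[None] * COLS for _ in range(ROWS)]
    for i, bit in enumerate(seq):
        row, col = divmod(i, COLS)
        table[row][col] = bit
    return [table[r][c] for c in range(COLS) for r in range(ROWS)]

data = list(range(576))
assert deinterleave(interleave(data)) == data  # exact inverse
```

Adjacent output bits come from positions 32 apart in the input, which is what breaks up channel error bursts across convolutional code branches.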
With a two-bit merged trellis and the code outlined previously, computer simulations produce the BER performance shown in Figure 4.5. Data synthesized from the model parameters obtained from the two-bit DPCM Lenna image is used.

Figure 4.5 Forward Link IS-95

The FEC component in the reverse link is comprised of the serial concatenation of the Walsh-Hadamard code and the rate 1/3 convolutional code. The decoding of the Walsh-Hadamard sequence is performed through the application of the FHT, which provides a sub-optimal reliability estimate of the Walsh-Hadamard code words [35]. A soft-decision two-bit merged trellis Viterbi decoder exploits these reliability measures to enhance decoding performance. For source-aided decoding, the model parameters are applied only to the outer Viterbi decoder. Model parameters are meaningless to the inner Walsh-Hadamard code, since the interleaver destroys any correlation between adjacent bits. Again, synthesized data obeying the two-bit DPCM monochrome Lenna image statistics is used as source information, and the associated model parameters are made available to the receiver as side information. From the simulation results it can be seen that model-aided decoding provides significant improvement over non-aided decoding (refer to Figure 4.6). At a BER of $10^{-4}$, an improvement of 1 dB is observed. We go on to further enhance the reverse link by developing a turbo decoding version of the receiver.

Figure 4.6 Soft-Decision Markov-Aided Decoding for the Reverse Channel in IS-95

4.2 Turbo Decoding in the Reverse CDMA Channel

Since we have a concatenated coding system, the cycle of decoding and redecoding, after establishing extrinsic information at each stage, can be exercised to improve the signal estimate.
Although the FHT provides sub-optimal soft decoding estimates, optimal iterative decoding relies on an optimal soft-decision decoder for the inner code. Within this section, the BCJR MAP algorithm is applied to the case of the Walsh-Hadamard code.

As mentioned previously, the Walsh-Hadamard code used in the IS-95 standard can be considered a rate 6/64 linear orthogonal block code. As such, it is systematic, with the information bits at positions $2^{5-k}$. However, the number of states involved in a linear block code with rate $k/n$ is $2^{n-k}$, and unlike for convolutional codes, the trellis is highly irregular. Attempts at minimizing the trellis for applications of the Viterbi and BCJR MAP algorithms have been addressed in [20], [36], and [37]. A MAP decoding rule for systematic block codes is presented in [38]. The problem of complexity is reduced in systematic codes, because there are two possible paths leaving each state associated with an information bit, while for the parity-check bits, there is only one deterministic path leaving the node. This is similar to the terminating tail of a convolutional code trellis, where the end sequence is known. Therefore, for low rate codes, where $k \ll n$, most of the trellis can be collapsed.

Let $u = (u_1, u_2, \ldots, u_k)$ refer to the information sequence, $x = (x_1, x_2, \ldots, x_n)$ to the code sequence, and $y = (y_1, y_2, \ldots, y_n)$ to the received sequence. Since the code is systematic, $u_k = x_k$. From equation (2.13), the a posteriori probabilities are calculated through the joint probabilities:

$$L(u_k) = \log \frac{\sum_{(s', s),\, u_k = +1} p(s', s, y)}{\sum_{(s', s),\, u_k = -1} p(s', s, y)}, \qquad p(s', s, y) = \alpha_{k-1}(s') \cdot \gamma_k(s', s) \cdot \beta_k(s)$$
where the branch transition probabilities are described as:

$$\gamma_k(s', s) = \begin{cases} p(u_k) \cdot p(y_k \mid x_k), & \text{for systematic bits} \\ p(y_k \mid x_k), & \text{for non-systematic bits} \end{cases} \qquad (4.9)$$

From the definition $L(u_k) = \log \dfrac{p(u_k = +1)}{p(u_k = -1)}$, we can obtain $p(u_k)$ via

$$p(u_k = +1) = \frac{\exp[L(u_k)]}{1 + \exp[L(u_k)]} \quad \text{and} \quad p(u_k = -1) = \frac{\exp[-L(u_k)]}{1 + \exp[-L(u_k)]}$$

Therefore,

$$p(u_k = \pm 1) = \frac{\exp[\pm L(u_k)]}{1 + \exp[\pm L(u_k)]} = \text{constant} \cdot \exp[L(u_k) \cdot u_k / 2] \qquad (4.10)$$

As described in the second chapter, the term $p(y_k \mid x_k)$ characterizes the channel properties. Note that because of the irregular trellis structure, a branch transition probability is only defined when there is a state transition from $s'$ to $s$ between time $k-1$ and time $k$; all other state transitions have probability zero. From the above equation and the fact that $u_k = x_k$, the transition probability depends only on $y_k$ and $x_k$. The branch transition probability can then be rewritten as a function of the received sequence and its corresponding codewords [38]. This is defined in the log domain as:

$$\lambda(x_k, y_k) = \log \frac{p(x_k = +1; y_k)}{p(x_k = -1; y_k)} \qquad (4.11)$$

where $p(x_k; y_k) = \gamma_k(s', s)$. Then, from equation (4.9), we get:

$$\lambda(x_k, y_k) = \begin{cases} L_c \cdot y_k + L(u_k), & \text{for systematic bits} \\ L_c \cdot y_k, & \text{for non-systematic bits} \end{cases} \qquad (4.12)$$

Now, the a posteriori probability of the information bit $u_k$ given a received codeword can be obtained by summing over all codewords that contain the bit $u_k$. The soft MAP decision rule then becomes:

$$L(u_k) = \log \frac{\sum_{x,\, u_k = +1} p(x \mid y)}{\sum_{x,\, u_k = -1} p(x \mid y)} \qquad (4.13)$$

Assuming that each code bit is received independently of the others, and applying Bayes' rule, we get:

$$L(u_k) = \log \frac{\sum_{x,\, u_k = +1} \prod_{j=1}^{n} p(x_j; y_j)}{\sum_{x,\, u_k = -1} \prod_{j=1}^{n} p(x_j; y_j)} = \log \frac{p(u_k = +1; y_k) \cdot \sum_{x,\, u_k = +1} \prod_{j \ne k} p(x_j; y_j)}{p(u_k = -1; y_k) \cdot \sum_{x,\, u_k = -1} \prod_{j \ne k} p(x_j; y_j)}$$
Finally, from equation (4.10), we get the MAP decision rule in terms of channel measurements, a priori probability, and extrinsic information:

$$L(u_k) = \underbrace{L_c \cdot y_k}_{\text{measurements}} + \underbrace{L(u_k)}_{\text{a priori prob.}} + \underbrace{\log \frac{\sum_{x,\, u_k = +1} \prod_{i \ne k} \exp[\lambda(x_i, y_i) \cdot x_i / 2]}{\sum_{x,\, u_k = -1} \prod_{i \ne k} \exp[\lambda(x_i, y_i) \cdot x_i / 2]}}_{\text{extrinsic information}} \qquad (4.14)$$

The a priori probability is initially zero. Subsequent iterations use as a priori probabilities the extrinsic information obtained from the outer decoder. Having obtained the L-values for the Walsh-Hadamard code, the extrinsic information is passed into the de-interleaver and then into the outer decoder. The input to the outer decoder is assumed to be independent due to the presence of the interleaver/de-interleaver. The outer decoder forms its metric by combining three output bits of the inner decoder into a code word for the convolutional MAP decoder. From equation (4.10), the a priori probability of the code words for the outer decoder is:

$$p(u_k) \cdot p(u_{k+1}) \cdot p(u_{k+2}) = A \cdot \exp\{[L_e(u_k) \cdot u_k + L_e(u_{k+1}) \cdot u_{k+1} + L_e(u_{k+2}) \cdot u_{k+2}] / 2\}$$

where $A$ is a constant and $L_e(u_k)$ refers to the extrinsic information for bit $u_k$. From equation (2.1), we can obtain the log-domain representation

$$L(u_k, u_{k+1}, u_{k+2}) = \tfrac{1}{2}[L_e(u_k) \cdot u_k + L_e(u_{k+1}) \cdot u_{k+1} + L_e(u_{k+2}) \cdot u_{k+2}] + \text{constant} \qquad (4.15)$$

These are used to form the $\gamma$ terms of the BCJR MAP algorithm for the outer convolutional code. The MAP decoder determines the a posteriori probability for the information bit as in equation (2.18). For the feedback to the inner decoder, the outer decoder must also calculate the a posteriori probabilities for the codewords.
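The codeword-summation rule of equations (4.13) and (4.14) can be illustrated with a brute-force sketch on a toy code. This is our illustration only: a (3,2) single-parity-check code stands in for the rate 6/64 Walsh-Hadamard code, and `encode` and `map_llr` are hypothetical names:

```python
import itertools
import math

def encode(u):
    """Toy (3,2) single-parity-check code with bits in {-1,+1}:
    x = (u1, u2, u1*u2)."""
    return [u[0], u[1], u[0] * u[1]]

def map_llr(y, Lc, k, La=(0.0, 0.0)):
    """Brute-force a posteriori LLR of information bit u_k (eq. 4.13):
    sum exponential weights over all codewords with u_k = +1 versus
    u_k = -1, using channel terms exp(Lc*y_i*x_i/2) and a priori terms
    exp(La_i*u_i/2), both in the factored form of equation (4.10)."""
    num = den = 0.0
    for u in itertools.product((+1, -1), repeat=2):
        x = encode(u)
        logp = sum(Lc * yi * xi / 2 for yi, xi in zip(y, x))
        logp += sum(La[i] * u[i] / 2 for i in range(2))
        if u[k] == +1:
            num += math.exp(logp)
        else:
            den += math.exp(logp)
    return math.log(num / den)

y = [0.9, -1.1, -0.8]      # noisy observations of the codeword (+1, -1, -1)
assert map_llr(y, Lc=2.0, k=0) > 0   # decides u1 = +1
assert map_llr(y, Lc=2.0, k=1) < 0   # decides u2 = -1
```

For the real 6/64 code the same summation is organized on the collapsed trellis rather than over all $2^6$ codewords explicitly, but the quantity computed is the same.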
The a posteriori probabilities for codewords are calculated in the same way as the a posteriori probability for the information bit. Using the same analysis as in chapter two for the codewords $c_n$, the log-likelihoods for the code bits $c_n^i$ (where $i = 1, 2, \ldots, 6$ indexes the code bits) are:

$$L(c_n^i) = \log \frac{\sum_{(s', s) \to c_n^i = +1} \alpha_{k-1}(s') \cdot \gamma_k(s', s) \cdot \beta_k(s)}{\sum_{(s', s) \to c_n^i = -1} \alpha_{k-1}(s') \cdot \gamma_k(s', s) \cdot \beta_k(s)} \qquad (4.16)$$

and the codeword log-likelihood is given by

$$L(c_n) = \sum_{i=1}^{6} L(c_n^i) \qquad (4.17)$$

Referring to Figure 4.7, the extrinsic information is formed by removing the a priori source probabilities from the a posteriori codeword probabilities. This is done because the output of a given decoder is not to be used as its own input.

Figure 4.7 Iterative structure of Reverse Link IS-95

The turbo decoding system simulation results are shown in Figure 4.8. Turbo iterations yield a performance improvement for this coding system even though it was not optimized for such techniques.

4.3 Markov Aided Iterative Decoding

Turbo iterations work at the local level, giving the decoder soft (reliability) measures about the decoding decision for each bit through the decoding of parity bits. Markov modeling works at a higher level, using global statistics to improve decoding decisions. The contribution of the global model in a low signal-to-noise ratio (SNR) environment is stronger than that of the local model. This is attributed to the fact that in a noisy environment, the signal estimates and their reliability information may not be accurate. Feeding back unreliable information greatly hinders any improvement that can be extracted from the iterative turbo process. On the other hand, the global model is more reflective of the model in general and will help compensate for any local anomalies. However, when the SNR is high, the local model will be more effective than the global model.
A clean channel provides better signal estimates than a noisy one. Hence, the local statistics associated with each individual bit are more reliable. In this scenario, the global model, which estimates the general pattern of bits, will not be as beneficial as the local estimates. Combining the global and local statistics can improve performance at both low and high SNR compared to a system without such information.

Figure 4.8 Applying Iterative Decoding to IS-95 CDMA

4.3.1 Modification to the MAP Algorithm

The MAP decoding algorithm described in chapter two has to be modified for the use of Markov model-aided decoding in a turbo system. As in the Viterbi algorithm, the changes incorporated mainly involve the trellis. The same trellis merging technique described in chapter three is applicable here (refer to Figure 3.7 for an example of a four-state merged trellis). Equation (2.13) can be expanded to incorporate a merged trellis. To perform this expansion, $s'$ and $s''$ are introduced as the intermediate and final states of the merged trellis:

$$p(s'', s, y) = \alpha_{k-2}(s'') \cdot \gamma_{k,k-1}(y_k, y_{k-1}, s'', s) \cdot \beta_k(s) \qquad (4.18)$$

$$\gamma_{k,k-1}(y_k, y_{k-1}, s'', s) = \sum_{x_k} \sum_{x_{k-1}} \sum_{s'} p(y_k, y_{k-1} \mid x_k, x_{k-1}, s, s', s'') \cdot p(x_k, x_{k-1}, s, s' \mid s'') \qquad (4.19)$$

The expression above follows from the application of marginal probability and Bayes' theorem. Now, if the two received code words are assumed independent, the above expands to:

$$\gamma_{k,k-1}(y_k, y_{k-1}, s'', s) = \sum_{x_k} \sum_{x_{k-1}} \sum_{s'} p(y_k \mid x_k) \cdot p(y_{k-1} \mid x_{k-1}) \cdot p(x_k, x_{k-1} \mid s, s', s'') \cdot p(s, s' \mid s'') \qquad (4.20)$$

where $x_k$ and $x_{k-1}$ are the code words resulting from the transitions $s' \to s''$ and $s \to s'$ respectively, and $u_k$ and $u_{k-1}$ are the bits causing these transitions. For the terms $p(y_k, y_{k-1} \mid x_k, x_{k-1}, s, s', s'')$, knowledge of the code words is all that is necessary for the transition probabilities.
The term $p(x_k, x_{k-1} \mid s, s', s'')$ is deterministic, having probability either zero or one depending on the trellis. Hence:

$$\gamma_{k,k-1}(y_k, y_{k-1}, s'', s) = p(y_k \mid x_k) \cdot p(y_{k-1} \mid x_{k-1}) \cdot p(u_k, u_{k-1}) \qquad (4.21)$$

The dependency on $s'$ can be dropped since $s'$ is an intermediate node in the merged trellis. Assuming the global model is independent of the reliability of the individual bits, the joint probability can be separated into independent probabilities, $p(u_k, u_{k-1}) = \tilde{p}(u_k, u_{k-1}) \cdot \hat{p}(u_k, u_{k-1})$, where $\tilde{p}(u_k, u_{k-1})$ is the local extrinsic information from a previous decoder stage and $\hat{p}(u_k, u_{k-1})$ is the global a priori statistic.

The contribution of the global model can be observed in the following simulations (refer to Figure 4.9). Here unaided turbo decoding is compared to turbo decoding with the global model known to the receiver. For comparison with previous results, the data is generated from the four-state Markov model of the two-bit DPCM encoded monochrome Lenna image.

Figure 4.9 Markov model-aided Turbo decoder

4.3.2 Turbo Iterations within Global Iterations

In a realistic system, the receiver has no precognition of the Markov model and must employ model recovery techniques. The IS-95 CDMA standard mandates the use of a small interleaver of size 576 code word bits, which amounts to 184 source bits. The interleaver was designed for real-time voice traffic and is not optimal for iterative decoding. Turbo iterations do improve BER performance over a non-iterative system, but the improvement would be more significant with large interleavers such as those proposed for the initial parallel concatenated turbo codes. The influence of a small code block is more prevalent in the global model recovery.
As discussed in chapter three, to recover a Markov model for a data set reliably in noise, the data set must be sufficiently large. Since the required size is significantly greater than 184 bits, new model recovery schemes must be used to overcome the limitation imposed by the small interleaver. The simplest model recovery scheme is to encapsulate turbo iterations within a larger global iteration. By local iteration, we mean a turbo iteration over a block of data corresponding to the turbo coder interleaver; by global iteration, we mean an iteration over a group of blocks that corresponds to an image data file. In this encapsulated iterative technique, the turbo decoding is done as in the case when no model recovery is attempted. When the image data file is completely decoded, its Markov model is measured, and the file is turbo decoded again as in the previous global iteration, except that the new Markov model is employed. This is a very computationally expensive technique. For the computer simulations, we employ six local turbo iterations for every global iteration, and perform six global iterations. This brings the total number of cycles through a specific data block to thirty-six. For consistency, the data set is synthesized randomly according to the model parameters of the two-bit DPCM monochrome Lenna image. Figure 4.10 depicts the BER performance at each global iteration, compared to the curve where no iterations are performed. Figure 4.11 shows the BER performance for the raw data from an image file. The two sets of curves are very similar, and their similarity reinforces the validity of the modeling process. Note that from the third global iteration to the fifth global iteration there is almost no improvement in BER performance.

Figure 4.10 Output of Global Iterations using Synthesized Data
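The control flow of the encapsulated scheme can be sketched as follows. `turbo_decode` and `estimate_model` are placeholders for the real component decoder and the Markov model measurement, not implementations from the thesis:

```python
def encapsulated_decode(blocks, turbo_decode, estimate_model,
                        n_global=6, n_local=6):
    """Skeleton of the encapsulated scheme: every global (model-recovery)
    iteration runs n_local turbo iterations on each block, then
    re-measures the Markov model over the whole decoded file."""
    model = None  # first global pass: no a priori source model available
    decoded = list(blocks)
    for _ in range(n_global):
        decoded = [turbo_decode(b, model, n_local) for b in decoded]
        model = estimate_model(decoded)  # global statistics over the file
    return decoded, model

# Toy stand-ins just to exercise the control flow.
calls = []
def toy_turbo(block, model, n_local):
    calls.append(n_local)
    return block
def toy_model(decoded):
    return "model"

out, m = encapsulated_decode([[0, 1]] * 4, toy_turbo, toy_model)
assert len(calls) == 6 * 4  # n_global passes over every block
```

Each block is therefore visited n_global times, with n_local turbo iterations on each visit, matching the 6 x 6 = 36 cycles quoted above.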
Figure 4.11 Output of Global Iterations using Raw Data

Figure 4.12 demonstrates the results of the internal local iterations within each global one. The curves included here are those obtained from the synthetic data. Again, the curves show that there is very little performance improvement from the third to the fifth global iteration. This result is observed because after three iterations, the estimated model parameters come very close to the actual model parameters. Due to the quick convergence of the results with respect to the number of global iterations, the total number of iterations can be drastically reduced.

Figure 4.12 Output of Local Iterations within Global Iterations using Synthetic Data (panels for the 0th, 1st, 3rd, and 5th global iterations)

4.3.3 Joint Turbo-Global Iterations

The encapsulated iterative technique is very exhaustive in terms of calculations, because the total number of iterations performed is the product of the numbers of global and local iterations. An alternative approach is to retrieve and apply the a priori probabilities for both the global Markov model and the local extrinsic information simultaneously. From here on, this procedure will be referred to as the joint turbo-global (JTG) iterative technique. The JTG technique is a variation of the encapsulated technique with one local iteration performed for each global iteration. The extrinsic information is saved from one global iteration to the next. Compared to the encapsulated system, for the same total number of iterations per block, the JTG iterations show remarkable performance.
This iterative scheme improves on the unaided system for the same reason that iterative model recovery converges quickly. Markov model-aided decoding is generally fairly tolerant of parameter deviations in the model recovery process. Therefore, although the first pass of the JTG iteration produces an inferior model compared to the encapsulated system, improvement is still observed. So long as there is improvement in each iteration, successive iterations will continue to benefit from the results of previous estimations. However, because the model parameters achieved after each global pass of JTG are inferior to those of the encapsulated system, the JTG decoder requires more model recovering iterations. Nevertheless, the total number of iterations is reduced. For comparative reasons, two graphs are shown: Figure 4.13 demonstrates the performance of the JTG scheme on synthesized data, whereas Figure 4.14 demonstrates its performance on raw data. The two sets of curves display similar characteristics.

Figure 4.13 Joint Turbo-Global iterations using Synthesized Data

4.3.4 Hybrid Encapsulated and Joint Iterations

A hybrid of the encapsulated and joint turbo-global iteration systems can be developed to take advantage of the benefits of both. The objective of this configuration is to reduce the total number of iterations of the encapsulated system and achieve a faster convergence than the JTG system. We realize that the encapsulated system requires fewer global (or model recovering) iterations to converge than the JTG system because the extra local iterations give better decoding performance on each pass through the file. The hybrid technique employs the encapsulated technique for the first global iteration and JTG iterations for further global iterations.
For the simulations, the hybrid system was configured to perform one encapsulated iteration and four ensuing JTG iterations after the first pass. For consistency, the source information was synthesized according to the parameters of the two-bit DPCM encoded Lenna image. Naturally, the hybrid scheme has the same performance as the encapsulated system on the first pass. The second pass shows significant improvement over the corresponding JTG iteration. However, subsequent iterations produce smaller improvement steps than the JTG iterations. This was expected, as the global statistics accumulated through the model recovery process drive the quick convergence of the encapsulated system. The performance of the hybrid and JTG systems at the fifth iteration is very close. This indicates that after five iterations of model recovery, the difference in the source model between the JTG and the hybrid system is very small (refer to Figure 4.15). For comparison purposes, we also include Figure 4.16, which shows the hybrid iterative scheme on raw data from an image file. As in the other schemes, the results for the synthetic data and the raw data are very similar. Again, this reinforces the validity of the modeling process.

Figure 4.14 Joint Turbo-Global iterations using Raw Data
Figure 4.15 Hybrid Iterative Scheme using Synthesized Data

4.3.5 Comparison of Different Configurations

Each of the different configurations has its advantages and disadvantages. The obvious drawback of the encapsulated system is the computational effort required. In an encapsulated iterative system with $l$ local iterations and $g$ global iterations, a total of $l \times g$ iterations is required. Each iteration results in an incremental time delay, which is not desirable in time-critical systems.
The obvious advantage of the encapsulated system is its relatively simple structure. The model recovery process treats the turbo decoder as a single channel decoder: the system passes through each block of the transmission iteratively and carries only the global model from one block to the next. The joint turbo-global iterative system attempts to rectify the iterative burden of the encapsulated system by merging both the global and local statistics. This reduces the total number of iterations, and hence the number of calculations, by a factor of l. However, the trade-off is in the convergence characteristics: the curve does not converge as quickly as that of the encapsulated system. This is due to an inferior global model and inferior local extrinsic information. The intra-iterations in the encapsulated system recalculate the extrinsic information l times for every single pass that the joint system completes, so its statistics and model are much more reflective of the actual data. Furthermore, the memory requirement of the joint system is very demanding. The extrinsic information associated with every bit of the transmission must be stored for the complete pass of the iteration. This amounts to 2 × n₂ × tx × bpx elements, where n₂ is the number of code bits per symbol for the second decoder, bpx is the number of bits per pixel, and tx is the size of the transmission. For a large transmission such as an image file, this can be very costly. The hybrid system is proposed to combine the advantages of both systems. The encapsulated iterations are performed first to obtain superior models for both the global and local parameters; with these superior models, the JTG iterations are then performed to alleviate the computational requirements.

[Figure 4.16: Hybrid iterative scheme using raw data (BER vs. SNR)]
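As a rough numeric illustration of the extrinsic-storage figure 2 × n₂ × tx × bpx (the helper name and the 512 × 512 image size are assumptions for the example, not values from the thesis):

```python
def jtg_extrinsic_storage(tx, bpx, n2):
    """Storage elements needed to hold the forward and reverse extrinsic
    information across a whole file pass: 2 * n2 * tx * bpx."""
    return 2 * n2 * tx * bpx

# Example: a 512x512 image at 2 bits per pixel, n2 = 2 code bits per symbol.
elements = jtg_extrinsic_storage(tx=512 * 512, bpx=2, n2=2)
print(elements)  # 2097152
```

The linear growth in tx is what makes the joint and hybrid schemes costly for large image files.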
This approach inherits the benefits of both systems, with a smaller iteration count and good convergence characteristics. However, it also inherits the heavy memory requirements of the joint iterative system. The performance of the different schemes can be evaluated qualitatively in Figure 4.17, which depicts the influence of decoding in the different recovery schemes for the two-bit DPCM encoded Lenna image, evaluated at an SNR of 1 dB.

[Figure 4.17: Qualitative comparison of the three model recovery techniques: first pass of the encapsulated system, fifth model recovery iteration of the encapsulated system, first pass of the hybrid system, and fifth iteration of the hybrid system]

4.3.6 Complexity Analysis

The complexity of the basic concatenated system is compounded by the model recovery process in a multiplicative fashion. To compute the complexity of the concatenated system, the complexity of each component of the decoder can be analyzed and the results summed. Naturally, different implementations will result in different complexities; generally, the trade-off is between storage requirements and speed. In our implementation, we focused more on the speed of the implementation than on the size of the program. The channel metric for the inner (Walsh-Hadamard) decoder is stored, requiring 2^(k₁) × n₂ × N elements, where k₁/n₁ is the rate of the inner decoder, k₂/n₂ is the rate of the outer decoder, and N is the information block size including the terminating tail. In the initial pass, the inner decoder calculates the channel metric; in subsequent iterations, only the a priori probabilities are applied for redecoding.
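The cached-metric size can be sketched as follows (hypothetical helper; the example values assume the IS-95 reverse link, where the 64-ary Walsh modulation gives k₁ = 6, the rate-1/3 outer convolutional code gives n₂ = 3, and a full-rate frame is taken as N = 192 bits):

```python
def inner_metric_storage(k1, n2, N):
    """Elements needed to cache the inner (Walsh-Hadamard) channel
    metrics across turbo iterations: 2^k1 * n2 * N."""
    return (2 ** k1) * n2 * N

# Example: 64-ary Walsh (k1 = 6), rate-1/3 outer code (n2 = 3), N = 192.
size = inner_metric_storage(k1=6, n2=3, N=192)
print(size)  # 36864
```

Caching this table is the speed-over-storage choice described above: the channel metrics are computed once and reused in every redecoding pass.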
To calculate the metric values in the first pass, n₂ × N × 2^(k₁+1) × (2^(k₁-1) + 1) multiplications/divisions, n₂ × N × 2^(k₁) exponential function calls, and n₂ × N × 2^(k₁) × (3 · 2^(k₁) + 1) additions/subtractions are performed (referred to as O(m,m), O(m,e), and O(m,a), respectively). A temporary working storage of 2^(k₁) elements is needed to formulate all the channel metrics. In applying the a priori probabilities, n₂ × N × k₁ × (2^(k₁) + 1) multiplications and n₂ × N × k₁ additions are performed (referred to as O(fbk,m) and O(fbk,a), respectively). The interleaver/de-interleaver requires 2 × n₂ × N storage elements for the extrinsic information in both the forward and the reverse directions. These counts are summarized in Table 4.1.

Table 4.1 Inner Decoder Complexity
  Metric calculation:     mult n₂·N·2^(k₁+1)·(2^(k₁-1)+1); add n₂·N·2^(k₁)·(3·2^(k₁)+1); exp n₂·N·2^(k₁); storage 2^(k₁)·n₂·N + 2^(k₁)
  A priori probabilities: mult n₂·N·k₁·(2^(k₁)+1); add n₂·N·k₁
  Interleaving:           storage 2·N·n₂

The outer decoder utilizes the BCJR MAP algorithm. Rather than recalculating γ at every iteration, the calculation can be sped up by computing it once, with N × n₂ × 2^(k₂) multiplications, and storing the result in an array of 2^(k₂) × N elements. Calculating the α terms requires N × 2^m × 2^(k₂) multiplications and N × 2^m × (2^(k₂) - 1) additions. The application of the Markov probabilities requires an additional N × 2^m × 2^(k₂) multiplications. These values are stored and employed in the backward path, requiring 2^m × N storage elements. In calculating the β terms, 2^(m+1) elements of working memory are required, as the final a posteriori probabilities are produced through the backward sweep. The computational requirement for the β terms and for the final a posteriori information sequence probabilities is the same as that of the α terms.
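The γ caching just described can be sketched as follows (hypothetical array layout, assuming log-domain arithmetic): the channel part of γ is computed once per frame, and each turbo iteration only re-adds the updated a priori term.

```python
def cached_gamma(channel_llh, a_priori):
    """Log-domain branch metrics. channel_llh[k][c] is the channel
    log-likelihood of codeword c at trellis step k, computed once per
    frame; a_priori[k][c] is the a priori term, refreshed each turbo
    iteration and simply added."""
    return [[ch + ap for ch, ap in zip(row_ch, row_ap)]
            for row_ch, row_ap in zip(channel_llh, a_priori)]

# One trellis step, two candidate codewords:
gamma = cached_gamma([[-0.3, -1.2]], [[0.0, -0.7]])
```

Because the addition is cheap, this trades the 2^(k₂) × N storage array for avoiding the full metric recomputation on every iteration.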
The a posteriori code word probabilities require 2^(k₁) × N storage elements and N × 2^(k₁) × k₁ additions. The outer decoder counts are summarized in Table 4.2.

Table 4.2 Outer Decoder Complexity
  γ:                                 mult N·n₂·2^(k₂); storage 2^(k₂)·N
  α:                                 mult N·2^m·2^(k₂+1); add N·2^m·(2^(k₂)-1); storage 2^m·N
  β:                                 mult N·2^m·2^(k₂+1); add N·2^m·(2^(k₂)-1); storage 2^(m+1)
  A posteriori information sequence: mult N·2^m·2^(k₂+1); add N·2^m·(2^(k₂)-1)
  A posteriori code word sequence:   add N·2^(k₁)·k₁; storage 2^(k₁)·N

The overall computational and storage requirements for the different model recovery procedures are given in Table 4.3.

Table 4.3 Complexity of the Different Model Recovery Schemes
  Encapsulated:       mult tx·g·{O(m,m) + (l-1)·[O(fbk,m) + O(out,m)]}; add tx·g·{O(m,a) + (l-1)·[O(fbk,a) + O(out,a)]}; storage M(System)
  Joint Turbo-Global: mult tx·g·{O(m,m) + O(fbk,m) + O(out,m)}; add tx·g·{O(m,a) + O(fbk,a) + O(out,a)}; storage M(System) + 2·n₂·tx·bpx
  Hybrid:             mult tx·{g·O(m,m) + (l+g-1)·[O(fbk,m) + O(out,m)]}; add tx·{g·O(m,a) + (l+g-1)·[O(fbk,a) + O(out,a)]}; storage M(System) + 2·n₂·tx·bpx

The terms O(out,m) = 7·N·2^m·2^(k₂), O(out,a) = 3·N·2^m·(2^(k₂)-1), and M(System) = 2^(m+2) + 2^(m+1) + N·[2^m·(n₂+1) + 2·n₂ + 2^(k₁) + 2^(k₂)] are used to simplify the notation. The number of exponential calls, tx·g·O(m,e), is common to all model recovery methods and is not a factor in the comparison of the different schemes.

Chapter 5 Conclusions and Future Work

Within this thesis, we have investigated iterative channel decoding. More specifically, we have investigated the impact of both turbo iterations and Markov model recovery iterations on the performance of the channel codes specified by the CDMA standard IS-95. The development of the aforementioned system was accomplished through a series of progressive steps.
First, Markov model aided Viterbi decoding was applied to a single convolutional code. Then, turbo decoding was established for the concatenated codes in the reverse channel link of the IS-95 standard. Finally, Markov model aided turbo decoding was developed by combining the two previous steps.

Commencing with the single encoder/decoder pair, we established a MAP decoding rule for convolutional codes through a Viterbi decoding algorithm which employs a two-state Markov model of the source. The form of the branch metric shows that Markov model aided decoding makes a stronger impact at low SNR: when decoding a transmission of the one-bit DPCM Lenna image, a coding gain of 1.75 dB is observed at a BER of 10⁻¹, while a coding gain of 0.7 dB is observed at a BER of 10⁻³. The results are based on a rate 1/3 convolutional code with constraint length 9 and generator polynomials g₁ = 557₈, g₂ = 663₈, g₃ = 771₈. The trellis merging technique was employed to accommodate higher state Markov models. For a four-state model based on the two-bit DPCM encoded Lenna image, a 1.75 dB gain was observed at a BER of 10⁻², and a 1.0 dB gain was observed at a BER of 10⁻³. The modeling process was justified by demonstrating that the decoding of ideal, synthetically generated data performs very closely to the decoding of the actual raw image data with the same model parameters; the results were within 0.1 dB across the SNR range investigated.

Concluding the investigation of the single encoder/decoder system, we explored a Markov model recovery scheme. This scheme entails measuring the model parameters after each decoding iteration over the data file, and using the new model for the next decoding iteration over the file.
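One such model-recovery pass can be sketched as follows (a minimal illustration with hypothetical names, assuming a two-state model re-estimated by counting transitions in the hard-decoded symbol sequence):

```python
def estimate_markov_model(decoded_bits):
    """Re-estimate two-state Markov transition probabilities P(j | i)
    from a hard-decoded binary sequence (one model-recovery iteration)."""
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for a, b in zip(decoded_bits, decoded_bits[1:]):
        counts[(a, b)] += 1
    probs = {}
    for i in (0, 1):
        total = counts[(i, 0)] + counts[(i, 1)]
        for j in (0, 1):
            # Fall back to an uninformative 0.5 if state i never occurred.
            probs[(i, j)] = counts[(i, j)] / total if total else 0.5
    return probs

model = estimate_markov_model([0, 0, 0, 1, 0, 0, 1, 1, 0])
```

The refreshed `model` would then supply the a priori term of the branch metric on the next decoding pass over the file.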
Computer simulations demonstrated that the model recovery technique converges remarkably quickly to the ideal performance level, in which a priori knowledge of the model parameters is assumed at the receiver. Over the range of channel SNR considered, two iterations bring the result sufficiently close to the ideal scenario where the decoder has perfect a priori knowledge of the model.

To investigate the influence of Markov model aided decoding on the IS-95 channel codes, we first reviewed the components decreed by the standard. The theory developed previously was then applied to form Markov model aided versions of the relevant decoders. The forward link demonstrates a Markov model aided coding gain of 1.7 dB at a BER of 10⁻² and a gain of 1.1 dB at a BER of 10⁻⁴. Again, the coding gain diminishes at higher SNR. This behaviour is also exhibited in the reverse channel link, where a gain of 1.25 dB is observed at a BER of 10⁻¹ and a gain of approximately 0.75 dB is observed at a BER of 10⁻³. The channel code for the forward link is a simple single convolutional code and turbo iterations are not possible; the reverse link, however, is comprised of a serial concatenation of two simpler component codes, and turbo iterations there are feasible. Both the size and the design of the interleaver, as prescribed by the standard, limit the turbo decoding performance. Nevertheless, at five iterations a gain of about 0.5 dB is observed at a BER of 10⁻³, and a gain of about 0.7 dB is observed at a BER of 10⁻¹.

When the receiver has a priori knowledge of the Markov model parameters, turbo decoding dramatically improves the BER performance. After five iterations, Markov model aided turbo decoding has a gain of 1.25 dB over non-Markov model aided turbo decoding, and an overall gain of around 1.8 dB over non-iterative decoding.
In realistic systems the receiver has no precognition of the model parameters, so one might consider recovering the model while performing turbo iterations. The model must be recovered iteratively; however, the small interleaver block size does not provide enough data samples for sufficient error cancellation, so a reasonable model cannot be recovered from the noisy data corresponding to a single block. To recover the Markov model and turbo decode, two levels of iteration are required, and we have proposed and evaluated three such schemes.

The first is the encapsulated iterative scheme. This technique entails encapsulating turbo iterations within larger model recovery iterations, re-initializing the turbo decoding process after each global iteration. As in the case of the single decoder, the BER converges after two model recovering iterations. However, this encapsulated scheme is very expensive in terms of the total number of iterations and hence the total decoding time delay. An alternative approach is to save the extrinsic values computed for the whole file and use them in the next global iteration, so that turbo decoding and model recovery are performed simultaneously. We refer to this technique as joint turbo-global iteration. Although this approach requires more global model recovery iterations, the overall number of iterations is reduced. A third approach, aimed at reducing the total number of passes through the transmitted file, is a hybrid of the encapsulated system and the joint turbo-global system: the first global iteration is that of the encapsulated scheme, and subsequent global iterations are those of the joint turbo-global scheme. All three techniques converge to very similar results but have different advantages.
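The two-level structure of the encapsulated scheme can be sketched as follows (a skeleton only, with hypothetical callback names; the real decoder and model estimator are stand-ins here):

```python
def encapsulated_decode(blocks, decode_block, measure_model, g=2, l=5):
    """Encapsulated scheme: l turbo iterations per block, nested inside
    g global model-recovery passes over the whole file (l * g total)."""
    model = None  # no a priori source model on the first global pass
    decoded = None
    for _ in range(g):
        # Turbo decoding is re-initialized for every block on each pass.
        decoded = [decode_block(b, model, turbo_iterations=l) for b in blocks]
        # Re-estimate the source model from the whole decoded file.
        model = measure_model(decoded)
    return decoded, model
```

The joint turbo-global variant would instead carry the per-bit extrinsic values from one global pass to the next, trading the extra memory for a single level of iteration.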
While the hybrid system reduces the total number of file iterations, the encapsulated system requires the least memory, and the joint turbo-global system demands the smallest number of total iterations per data bit. The application of Markov model aided turbo decoding has resulted in a significant improvement in BER performance, attained by modifying only the receiver of the IS-95 reverse link.

Further study of Markov model aided turbo decoders is certainly warranted. In particular, the techniques developed here could be applied to turbo coders based on a parallel concatenation of codes. Typically these systems have a sufficiently large block size that no iterative structure beyond that already in place is necessary to perform Markov model recovery. However, parallel concatenated systems usually use recursive systematic codes (RSC). For non-recursive codes, the trellis merging technique works because the encoder state also contains information about the Markov model state; unfortunately, the recursion in the RSC encoding process destroys the Markov model state information, and as a result the trellis merging technique cannot be applied directly. This can be overcome by a modification of the trellis, which renders the calculation more complex. We anticipate that the Markov model coding gain will be similar to that of the serial case, but simulations will have to be done to verify this.
Glossary

AWGN   Additive White Gaussian Noise
BCJR   Bahl-Cocke-Jelinek-Raviv
BER    Bit Error Rate
CDMA   Code Division Multiple Access
DCT    Discrete Cosine Transform
DMC    Discrete Memoryless Channel
DPCM   Differential Pulse Code Modulation
DS     Direct Sequence
FEC    Forward Error Correction
FFT    Fast Fourier Transform
FHT    Fast Hadamard Transform
IS-95  Interim Standard 95
JPEG   Joint Photographic Experts Group
JSCC   Joint Source-Channel Code
JSCD   Joint Source-Channel Decoder
JTG    Joint Turbo-Global
MAP    Maximum A Posteriori
PCS    Personal Communication Systems
QPSK   Quadrature Phase Shift Keying
RSC    Recursive Systematic Code
SNR    Signal to Noise Ratio
SOVA   Soft-Output Viterbi Algorithm
Item Metadata
Title | The application of source-aided turbo decoding to IS-95 CDMA systems
Creator | Lo, Norman C.
Date Issued | 1999
Description | We develop source-aided channel decoding techniques and apply them for image transmission using the channel forward error correction codes of the IS-95 Code Division Multiple Access (CDMA) standard. The source is modeled as a first order Markov model with states that correspond directly to the source coder codewords. The model is used to form a MAP version of the Viterbi algorithm for decoding convolutional codes. For the case of a two-state Markov model, the generalization of the Viterbi algorithm involves only a modification of the branch metric; while for N-state Markov models, a technique called trellis merging is also implemented to keep the decoding complexity low. An iterative model recovery technique is developed which allows the receiver to recover the source model without any a priori information. Simulating these techniques for the case of the two-bit DPCM encoded Lenna image, we find a coding gain over an Additive White Gaussian Noise (AWGN) channel of approximately 1.1 dB at a BER of 10⁻⁴ for both the forward and reverse link of the IS-95 standard. We go on to develop a turbo version of the IS-95 reverse link decoder. This involves implementing a "soft-in/soft-out" version of the component decoders, and introducing an iterative decoding procedure. The coding gain found by this turbo enhancement is 0.75 dB at a BER of 10⁻³. A Markov model-aided version of the turbo reverse link decoder is then developed by migrating the techniques used in the Viterbi algorithm to the soft-in/soft-out decoders. The ensuing Markov model aided turbo decoder has a two-level iterative structure due to the fact that there are source model recovering iterations and turbo iterations. Three different architectures for the two-level iterative structure are proposed and compared. Although all three methods provide similar coding gain over an AWGN channel, they differ in speed of convergence and implementation complexity. For the case of transmission of the two-bit DPCM encoded Lenna image, the coding gain over an AWGN channel at the final iteration is 1.6 dB at a BER of 10⁻².
Extent | 9249933 bytes
Genre | Thesis/Dissertation
Type | Text
File Format | application/pdf
Language | eng
Date Available | 2009-06-17
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI | 10.14288/1.0065336
URI | http://hdl.handle.net/2429/9381
Degree | Master of Applied Science - MASc
Program | Electrical and Computer Engineering
Affiliation | Applied Science, Faculty of; Electrical and Computer Engineering, Department of
Degree Grantor | University of British Columbia
Graduation Date | 1999-05
Campus | UBCV
Scholarly Level | Graduate
Aggregated Source Repository | DSpace