THE APPLICATION OF SOURCE-AIDED TURBO DECODING TO IS-95 CDMA SYSTEMS

by

NORMAN C. LO

B.A.Sc. (EE: Computer Option), The University of British Columbia, 1997

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
April 1999
(c) Norman Lo, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

We develop source-aided channel decoding techniques and apply them to image transmission using the channel forward error correction codes of the IS-95 Code Division Multiple Access (CDMA) standard. The source is modeled as a first-order Markov model whose states correspond directly to the source coder codewords. The model is used to form a MAP version of the Viterbi algorithm for decoding convolutional codes. For the case of a two-state Markov model, the generalization of the Viterbi algorithm involves only a modification of the branch metric, while for N-state Markov models, a technique called trellis merging is also implemented to keep the decoding complexity low. An iterative model recovery technique is developed which allows the receiver to recover the source model without any a priori information. Simulating these techniques for the case of the two-bit DPCM encoded Lenna image, we find a coding gain over an Additive White Gaussian Noise (AWGN) channel of approximately 1.1 dB at a BER of 10^-4 for both the forward and reverse link of the IS-95 standard. We go on to develop a turbo version of the IS-95 reverse link decoder. This involves implementing a "soft-in/soft-out" version of the component decoders and introducing an iterative decoding procedure. The coding gain found by this turbo enhancement is 0.75 dB at a BER of 10^-3. A Markov model-aided version of the turbo reverse link decoder is then developed by migrating the techniques used in the Viterbi algorithm to the soft-in/soft-out decoders. The ensuing Markov model-aided turbo decoder has a two-level iterative structure, since there are both source model recovery iterations and turbo iterations. Three different architectures for the two-level iterative structure are proposed and compared. Although all three methods provide similar coding gain over an AWGN channel, they differ in speed of convergence and implementation complexity. For the case of transmission of the two-bit DPCM encoded Lenna image, the coding gain over an AWGN channel at the final iteration is 1.6 dB at a BER of 10^-2.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgment
Chapter 1 Introduction
  1.1 Motivation and Objectives
  1.2 Thesis Outline
Chapter 2 Principle of Iterative Decoding
  2.1 Structure of Turbo Decoders
  2.2 Soft Information
  2.3 Component Decoders
  2.4 Application of the MAP Algorithm to Linear Codes
Chapter 3 Markov Model-Aided Convolutional Decoding
  3.1 Viterbi Decoding using a Two-State Markov Model
  3.2 Trellis Merging and Higher State Models
  3.3 Model Recovery
Chapter 4 Source-Aided Turbo Decoder
  4.1 CDMA Channel Link
  4.2 Turbo Decoding in the Reverse CDMA Channel
  4.3 Markov Aided Iterative Decoding
    4.3.1 Modification to the MAP Algorithm
    4.3.2 Turbo Iterations within Global Iterations
    4.3.3 Joint Turbo-Global Iterations
    4.3.4 Hybrid Encapsulated and Joint Iterations
    4.3.5 Comparison of Different Configurations
    4.3.6 Complexity Analysis
Chapter 5 Conclusions and Future Work
Glossary
Bibliography

List of Tables

Table 3.1 Model Parameters for 1-bit DPCM Image of Lenna
Table 3.2 Parameters for Different Models
Table 3.3 Model Parameters for 2-bit DPCM Image of Lenna
Table 4.1 Inner Decoder Complexity
Table 4.2 Outer Decoder Complexity
Table 4.3 Complexity of the Different Model Recovery Schemes

List of Figures

Figure 2.1 Basic Feedback of Iterative Decoding
Figure 2.2 Structure of Parallel Concatenated Encoder/Decoder
Figure 2.3 Serial Concatenated Structure for Encoding/Decoding
Figure 2.4 Unexpurgated trellis from step 1 of Wolf's trellis construction method
Figure 2.5 Expurgated trellis from step 2 of Wolf's trellis construction
Figure 3.1 Communication system components
Figure 3.2 Two-state Markov model
Figure 3.3 Two-State Markov Model-Aided Decoding
Figure 3.4 Family of curves with varying statistics
Figure 3.5 Four-State Markov Model
Figure 3.6 Trellis for a 4-state convolutional code
Figure 3.7 Two-stage merged trellis for a 4-state convolutional code
Figure 3.8 Comparison of Markov model-aided decoding for N-bit DPCM
Figure 3.9 Four-State Markov Model-Aided Decoding
Figure 3.10 Qualitative depiction of Markov model-aided decoding for 2-bit DPCM
Figure 3.11 Iterative Model Recovery Structure
Figure 3.12 Iterative Model Recovery using Four-State Model with Synthetic Data
Figure 3.13 Iterative model recovery using raw data from 2-bit DPCM Lenna image
Figure 3.14 Grouping together data blocks for model recovery
Figure 3.15 Impact of varying model recovery block sizes
Figure 4.1 Butterfly structure of Walsh-Hadamard code with length 4
Figure 4.2 General form of butterfly structure
Figure 4.3 Forward Channel Link of CDMA IS-95
Figure 4.4 Reverse Channel Link of CDMA IS-95
Figure 4.5 Forward Link IS-95
Figure 4.6 Soft-Decision Markov-Aided Decoding for the Reverse Channel in IS-95
Figure 4.7 Iterative structure of Reverse Link IS-95
Figure 4.8 Applying Iterative Decoding to IS-95 CDMA
Figure 4.9 Markov model-aided Turbo decoder
Figure 4.10 Output of Global Iterations using Synthesized Data
Figure 4.11 Output of Global Iterations using Raw Data
Figure 4.12 Output of Local Iterations within Global Iterations using Synthetic Data
Figure 4.13 Joint Turbo-Global iterations using Synthesized Data
Figure 4.14 Joint Turbo-Global iterations using Raw Data
Figure 4.15 Hybrid Iterative Scheme using Synthesized Data
Figure 4.16 Hybrid Iterative Scheme using Raw Data
Figure 4.17 Qualitative comparison of the three different model recovery techniques

Acknowledgment

The author would like to take this opportunity to express his most sincere gratitude to his supervisor, Dr. Samir Kallel. The author is a fortunate recipient of Dr. Kallel's helpful suggestions and valuable guidance throughout the thesis project. The author is also indebted to Dr. Robert Link for his constant help, especially in thoroughly proofreading this thesis report. Special thanks is also extended to Ian Marsland for his generous assistance. Finally, the author would like to acknowledge his family for their eternal love and support, and his friends for making him laugh throughout the course of the project. This work was funded by NSERC, BC ASI, and Mobile Data Solutions Inc. of Richmond, BC.

Chapter 1 Introduction

The demand for wireless communications has experienced tremendous growth in recent years. The evolution from analog to digital and ultimately to third-generation PCS (Personal Communication Systems) is a direct response to this demand. An important catalyst in this development is the shift from voice-oriented services to computer data-oriented services. The former tends to be lax in its tolerance of errors, whereas the latter generally implies zero error tolerance. As multimedia applications become more popular, developing systems must accommodate the increased demand for resources associated with these services [1]. Image transmission, which is categorized as data transmission, traditionally does not tolerate errors in an end-to-end connection. The mere size of the data necessary to represent a typical image implies a substantial time investment in its transmission over a low bit-rate channel, even when the channel is good. A less-than-ideal channel condition generally increases this time requirement many-fold, as erroneously received blocks are retransmitted. One can envision many applications, the most notable being real-time wireless video for teleconferencing over a bandwidth-limited channel, where the constraints on transmission time are such that the error-free requirement on the received data must be relaxed. Since the early days of information theory, efforts have been exerted to improve channel efficiency through error detection/correction, but with the advent of multimedia applications and a growing demand for wireless communications, where bandwidth is very limited, these efforts must be hastened as high-volume real-time digital traffic becomes ineluctable. The underlying objective in code design and coding theory is to achieve a low probability of error without compromising power and complexity requirements. Derived by one of the pioneers of information theory, the Shannon performance bound is the theoretical limit that many code designers aspire to realize. An important step in the direction of approaching this elusive target began with Forney's study of concatenated codes [3].
Concatenated codes are constructed by combining two or more relatively simple component codes to form a larger coding system. These codes enjoy an exponential decrease in probability of error with only an algebraic increase in complexity [3]. Codes with immense coding gains were originally challenges reserved for theoretical research, but modern applications such as deep space communications have since necessitated such high-performance codes [4]. Coding theorists have taken a stride closer to Shannon's limit with the recent innovation of iterative decoding incorporated into concatenated codes [5]. This new paradigm is known as Turbo Coding.

The encoding-for-transmission process is traditionally divided into two independent tasks: compression and error correction. To maximize bandwidth utilization, data compressors, also known as source encoders, are employed to remove redundancy from the data source. However, compressed data is generally more sensitive to noise than uncompressed data: the entropy of each coded bit is higher than that of the uncoded bits, so the impact of an erroneous decision has a greater effect on the overall transmission. In most compression schemes, an error in one bit may propagate or induce errors in other bits. As a remedy to this problem, controlled redundancy is introduced for error detection/correction by a process known as channel coding. Traditionally, the compression and channel coding modules have been designed independently. However, it is well documented that one can achieve a given level of performance with a system of lower complexity by considering both the compression and the error correction components jointly in the design of the overall system [18]. In [23] and [24], the authors proposed a truly integrated approach where source code words were mapped directly onto channel signalling waveforms. In principle, an integrated system should achieve the best possible performance; however, system implementation issues have pushed most of the research towards systems where source and channel coders are physically separate components. Although the two components are separate, they are not designed independently. Much research has focused on the use of known components cascaded together with a fixed bit rate allocated between the components. This approach has been investigated for voice transmission [27], image transmission [28], and recently video transmission [29]. Still, researchers and standards bodies are continually searching for compression techniques that have significantly higher noise resilience while minimizing the sacrifice in compression ratio [25]. The approach of considering the characteristics of both the source and the channel in the design of the overall codec is known as Joint Source-Channel Coding (JSCC); it is a very promising technique in the development of bandwidth-efficient yet noise-resilient data transmission systems.

1.1 Motivation and Objectives

Although turbo coding has proven itself as a useful channel coding technique, relatively little work has been done to incorporate turbo codes into joint source-channel coding systems. In the literature, JSCC systems are well documented ([18], [23]-[26], [30]-[32]), but an extensive investigation into source-aided convolutional decoding has not been performed.
This thesis contributes to this effort through:

- reviewing Markov model-aided convolutional decoding with a modified Viterbi decoder for two-state Markov models of the source;
- applying trellis merging techniques to develop a Viterbi decoding algorithm that is MAP with respect to an N-state Markov model of the source for an arbitrary N;
- exploring a model recovery technique and the influence of data block size.

Turbo decoding is a relatively new concept, but it has already triggered much research ([2], [4]-[9], [13]-[15], [35]). Recently, in [35], the application of sub-optimal turbo decoding was proposed for the reverse channel link of the IS-95 CDMA standard. However, due to certain constraints dictated by the standard, only modest improvement was observed. This thesis contributes to this effort by:

- simulating an optimal turbo decoder for the reverse channel link;
- applying Markov model-aided decoding to both the forward and reverse link of the IS-95 CDMA standard;
- proposing and comparing three model recovery techniques that combine both source model recovery iterations and turbo iterations.

1.2 Thesis Outline

This thesis is organized as follows:

In Chapter 2, the background necessary for a basic understanding of turbo codes is presented. This focuses on the basic structure of both parallel and serially concatenated systems. Key components critical to the performance of the overall turbo decoding system are introduced, including the use of soft information and the BCJR (Bahl-Cocke-Jelinek-Raviv) MAP algorithm. The chapter concludes by reviewing techniques for extending the BCJR MAP algorithm, and ultimately the turbo decoding system, to all linear codes.

In Chapter 3, an investigation into source-aided channel decoding is documented. It is shown how a Markov model of the source can be used to aid a channel decoder. This technique is applied to convolutional codes, and a trellis merging technique is given which allows one to efficiently use a Markov model with an arbitrary number of states. The chapter ends with a discussion of an iterative model recovery scheme at the receiver.

In Chapter 4, turbo decoding and source-aided decoding are combined in an effort to improve the error performance of the existing IS-95 CDMA system. A quick introduction to the channel codes employed in the standard provides an understanding of the basic components of this system. At this point, source-aided decoding can be applied without the turbo decoding enhancement. To enhance the receiver for turbo decoding, the optimal soft-output decoding algorithm is produced in accordance with the material developed in Chapter 2. For the coupling of turbo and source-aided decoding, three iterative schemes are proposed. The chapter ends with a comparison of the three schemes.

In Chapter 5, concluding remarks and suggestions for future work are presented.

Chapter 2 Principle of Iterative Decoding

This chapter deals with the basic principles underlying the remarkable new family of codes known as turbo codes. The discussion begins with an investigation into the general structure of iterative encoders/decoders. Paramount to the effectiveness of these codes is the employment of soft information. This crucial component is explored and dissected into its constituent elements. At the heart of iterative decoding are the component decoders.
As turbo coding was first conceived, the component decoders exploited the maximum a posteriori (MAP) algorithm, also named after its discoverers as the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm. In its original formulation, the BCJR MAP algorithm was derived for decoding sources that can be modeled as discrete-time finite-state Markov processes [10], making it ideal for convolutional codes and their variants. The same algorithm can be extended to any linear code by forming a trellis from its block code representation. This is critical for incorporating iterative decoding into existing systems, which may not have been designed for turbo decoding but could nevertheless benefit from such an enhancement.

2.1 Structure of Turbo Decoders

Extending the concept of concatenated codes, a new high-performance convolutional code was proposed recently by Berrou, Glavieux, and Thitimajshima [5]. Since its first publication in 1993, this new code has stirred much excitement in the coding community because of its performance proximity to Shannon's theoretical limit: at a bit error rate of 10^-5, this code is within 0.7 dB of the limiting bound [5]. This complex code is formed from a parallel concatenation of two fairly simple component codes through an interleaving scheme. The components used are recursive systematic convolutional (RSC) codes (refer to Figure 2.2). RSC codes are convolutional codes with a feedback tap, which gives them an infinite impulse response. The interleavers employed in this scheme are inherently large with pseudo-random characteristics, giving the overall code a very large free distance. The decoder for this large complex code consists of a concatenation of decoders for the component codes. Vital to the performance of this type of code is the transfer of soft information from one component decoder to the next. Soft information provides both the signal estimate and the confidence in that estimate. Without any further enhancement, this soft-information feed-forward decoder structure is capable of achieving reasonable performance, but the greater power of this code is derived from the application of turbo iterations. Complementing the feed-forward path from the inner decoder to the outer decoder, a feedback path from the outer decoder feeds confidence information gleaned from a first decoding pass back to the inner decoder. With the new confidence assessments, the inner decoder re-decodes the signal, improving its previous estimate, and passes its new results to the second decoder. With each iteration, the error performance improves until the limiting bound is approached. In this way, a prohibitively complex decoding process is broken down into relatively simple steps. This code adopted the name Turbo Code to illustrate the resemblance of its feed-forward/feedback architecture to the cyclic nature of a turbo engine. Since then, this nomenclature has strayed from the original description of parallel concatenated RSC codes to encompass a larger general set of codes with iterative decoding structures. Figure 2.1 demonstrates the basic overall structure of iterative decoding.

[Figure 2.1: Basic Feedback of Iterative Decoding. A soft-in/soft-out decoder takes channel measurements and a priori values as inputs and produces a posteriori values; extrinsic values are fed back as the a priori input for the next pass.]

The diagram depicts the feedback operation of the overall system in the general sense. The details within the block are specific to the particular configuration of the code.
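Before examining specific configurations, the following minimal sketch illustrates the feedback loop of Figure 2.1 in Python; the decoder objects and their `decode` interface are hypothetical placeholders of our own devising, not the thesis's implementation. Each component decoder consumes channel values and a priori values, and only the extrinsic portion of its output is passed on.

```python
# Minimal sketch of the iterative feedback loop of Figure 2.1 (hypothetical API).
# Each component decoder maps (channel LLRs, a priori LLRs) -> a posteriori LLRs.
# Extrinsic = a posteriori - channel - a priori; only the extrinsic part is fed on.

def turbo_iterations(dec1, dec2, llr_ch1, llr_ch2, interleave, deinterleave, n_iters=8):
    apriori1 = [0.0] * len(llr_ch1)            # first pass: equiprobable bits, L(x) = 0
    for _ in range(n_iters):
        app1 = dec1.decode(llr_ch1, apriori1)  # soft-in/soft-out pass of decoder 1
        ext1 = [a - c - p for a, c, p in zip(app1, llr_ch1, apriori1)]
        apriori2 = interleave(ext1)            # extrinsic of DEC1 -> a priori of DEC2
        app2 = dec2.decode(llr_ch2, apriori2)
        ext2 = [a - c - p for a, c, p in zip(app2, llr_ch2, apriori2)]
        apriori1 = deinterleave(ext2)          # feedback path closes the "turbo" loop
    return deinterleave(app2)                  # final a posteriori values for decisions
```

The subtraction that isolates the extrinsic term anticipates the decomposition of the soft output derived in Section 2.2.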
Figure 2.2 portrays the architecture of the original turbo code with a parallel concatenated encoder and decoder. In the encoder, the RSC codes are punctured to increase the overall rate of the code, and the code bits are multiplexed with the information bits to form a single bit stream. The decoder must demultiplex the corresponding punctured codes and decode the resultant bit streams as shown in the diagram. Figure 2.3 shows the encoder/decoder structure for component codes concatenated serially. The extension of turbo decoding from the original parallel concatenation to this configuration allows the use of iterative decoding in many existing systems.

[Figure 2.2: Structure of Parallel Concatenated Encoder/Decoder. The encoder comprises two RSC encoders, the second preceded by an interleaver; in the decoder, extrinsic values from DEC1 pass through an interleaver to DEC2, whose extrinsic values are deinterleaved and fed back to DEC1 as a priori values.]

[Figure 2.3: Serial Concatenated Structure for Encoding/Decoding. The encoder cascades an outer encoder, an interleaver, and an inner encoder before the noisy channel; the decoder exchanges a priori and a posteriori values between the inner and outer decoders through the interleaver/deinterleaver pair.]

Applications such as equalization, interference cancellation, and trellis coded modulation can all benefit from this iterative decoding process [7], [8], [9].

2.2 Soft Information

Turbo decoding can be applied to any concatenated structure, whether it is parallel, serial, or even a hybrid of the two. This is accomplished through the communication of soft information between the separate decoders. The lower-complexity constituent decoders do not function independently, but work cooperatively by passing both signal estimates and reliability measurements to one another. With hard-decision decoders, iterations beyond the first pass will not yield further improvement, because decoders at a later stage receive only estimation decisions and no information pertaining to the channel or the decisions' reliability; the difference between a strong and a weak signal is not distinguished. In contrast, using soft information, iterative decoders can improve the error performance with subsequent iterations, because reliability information gleaned from parity bits can be re-applied to the decoding process, giving improved confidence measures to the estimates at each and every stage of the decoding system. As Hagenauer pointed out in [6], the soft-output estimate of a component decoder is comprised of three fundamental parts: channel measurements, a priori knowledge, and extrinsic information. This can be shown using log-likelihood ratios (LLR), defined as:

L(x) = \log \frac{p(x = +1)}{p(x = -1)}    (2.1)

Let x = (x_1, x_2, ..., x_n) be a sequence of bits transmitted as \pm\sqrt{E_s}, let y = (y_1, y_2, ..., y_n) be the corresponding received sequence, and let a be the amplitude of a fading channel (a = 1 for an additive white Gaussian noise (AWGN) channel). The soft decoder output is the a posteriori information, defined as:

L(x_i \mid y_i) = \log \frac{p(x_i = +1 \mid y_i)}{p(x_i = -1 \mid y_i)} = \log \frac{p(y_i \mid x_i = +1)\, p(x_i = +1)}{p(y_i \mid x_i = -1)\, p(x_i = -1)}    (2.2)
Assuming coherent detection,

L(x_i \mid y_i) = \log \frac{\exp\bigl(-(y_i - a_i)^2 / (2\sigma^2)\bigr)}{\exp\bigl(-(y_i + a_i)^2 / (2\sigma^2)\bigr)} + \log \frac{p(x_i = +1)}{p(x_i = -1)}    (2.3)

= L_c \cdot y_i + L(x_i)    (2.4)

L_c \cdot y_i is the channel measurement component, and L(x_i) is the a priori information component. More specifically, L_c = 2a/\sigma^2 is known as the channel reliability factor. For the initial pass, the a priori distribution is assumed equiprobable, which means L(x_i) = 0. Succeeding stages of the decoder update this information as the decoding process progresses.

For forward error correction codes, the channel measurements can be further divided into a contribution from information bits and a contribution from parity bits. For example, consider a (3, 2) block code in which the systematic and parity bits for the information bits u = (u_1, u_2) are defined as:

x_1^s = u_1, \quad x_2^s = u_2, \quad x^p = u_1 \oplus u_2

Note that u_1 can be rewritten as u_1 = u_2 \oplus x^p, and this formulation can be exploited to enhance the reliability estimate. Applying this to equation (2.4), the new a posteriori value is derived:

L(u_1 \mid y) = L_c \cdot y_1^s + L(u_1) + L(x_2^s \oplus x^p \mid y_2^s, y^p)    (2.5)

where the three terms are the channel measurement, the a priori information, and the extrinsic information, respectively. The latter term is known as the extrinsic information and can be calculated as:

L(x_2^s \oplus x^p \mid y_2^s, y^p) = \log \frac{1 + e^{L(x_2 \mid y_2^s)}\, e^{L_c y^p}}{e^{L(x_2 \mid y_2^s)} + e^{L_c y^p}}    (2.6)

This (3, 2) parity code could form one of the component codes of a turbo system. Another component code would then form parity bits from an interleaved block of the information bits. Hence, the extrinsic information for u_1 as computed by the second decoder would be independent of the extrinsic information computed above. Although the complete a posteriori probability is used for the final decoding decision, the extrinsic information is the medium of communication between separate modules. A given component decoder uses the extrinsic information from the previous decoder as a priori information. Essentially, it is the confidence factor associated with an information bit that is obtained through the soft decoding of its parity bits. Only extrinsic information should ever be used as the new a priori value for future iterations, because the reliability measures should be statistically independent of the estimated received bit. However, since each iteration incorporates the reliability information for all the bits in its modular decoding, the extrinsic information from one decoder to the next has increasing correlation with every iteration. After many such iterations, saturation occurs, where the improvement becomes only marginal and further iterations yield diminishing returns.

2.3 Component Decoders

Extrinsic information is the glue that binds the individual decoders together, forming a bigger overall system. Using this information, the component decoders act as the main decoding engines. Traditionally, the Viterbi algorithm is the most popular algorithm for decoding convolutional codes, because it is simple to implement and has a relatively low decoding complexity. However, it cannot be used for iterative decoding, because its output is a sequence of hard decisions. It is optimal in the sense of minimizing the probability of sequence error, but does not provide confidence factors associated with its decisions.
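As a numerical illustration of equations (2.5) and (2.6), the sketch below evaluates the extrinsic information for u_1 of the (3, 2) single-parity-check example. The received values, the noise variance, and the function names are illustrative assumptions of ours, not values taken from the thesis.

```python
import math

def extrinsic_llr(l_a: float, l_b: float) -> float:
    """Soft XOR (eq. 2.6): LLR of a XOR b given the LLRs of a and b."""
    return math.log((1.0 + math.exp(l_a + l_b)) / (math.exp(l_a) + math.exp(l_b)))

# Assumed example: transmit u1 = +1, u2 = -1, parity = -1 over AWGN with sigma^2 = 0.5.
sigma2 = 0.5
Lc = 2.0 / sigma2                       # channel reliability factor with a = 1
y1, y2, yp = 0.9, -1.1, -0.7            # hypothetical noisy received values

L_u2 = Lc * y2                          # soft value of u2 (no a priori, L(x) = 0)
L_ext = extrinsic_llr(L_u2, Lc * yp)    # extrinsic info for u1 from u2 and the parity bit
L_app = Lc * y1 + 0.0 + L_ext           # eq. 2.5: channel + a priori + extrinsic
print(f"extrinsic={L_ext:.3f}, a posteriori={L_app:.3f}, "
      f"decision={'+1' if L_app > 0 else '-1'}")
```

Even though y2 and yp say nothing about u1 directly, their agreement on the parity relation adds roughly 2.6 to the LLR of u1 in this example, which is precisely the role the extrinsic term plays between component decoders.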
Different approaches aimed at establishing a soft-output version of the classical Viterbi algorithm have been explored [11], [12]. One such example is the soft-output Viterbi algorithm (SOVA), which makes use of a trace-back window to produce reliability measures [12]. Although the SOVA delivers soft information, it is suboptimal in its estimation of reliability information. There exists a more complex, optimal maximum a posteriori (MAP) algorithm [10], which we cover next.

As Bahl et al. pointed out, although the Viterbi algorithm is optimal in the sequential sense, it does not necessarily minimize the probability of individual symbol or bit error [10]. Consequently, in 1974, they developed the optimal MAP algorithm, which delivers soft-decision a posteriori probabilities for the symbols [10]. It is generic to any process that can be represented by a finite-state Markov model, where transitions between states are well-defined. Suppose that u = (u_1, u_2, ..., u_k) is the information sequence, x = (x_1, x_2, ..., x_n) is the code sequence, and y = (y_1, y_2, ..., y_n) is the received noisy sequence for a rate k/n code. The objective of the decoder is to estimate the a posteriori probability of the states and state transitions of the Markov source. A trellis is formed by indexing the states with both a state index and a time index. A path through the trellis is defined by a sequence of states and is associated with a unique information sequence. We use the notation S_1^T to denote the state sequence indexed from time 1 to T, i.e. S_1^T = (S_1, S_2, ..., S_T). Each node of the trellis is associated with a nodal probability, p(S_k = s \mid y_1^n), and each branch transition is associated with a transitional probability, p(S_{k-1} = s', S_k = s \mid y_1^n). The a posteriori probability for an information bit in terms of the LLR is defined as:

L(u_k) = \log \frac{p(u_k = +1 \mid y)}{p(u_k = -1 \mid y)} = \log \frac{\sum_{(s', s) \to u_k = +1} p(S_{k-1} = s', S_k = s, y) / p(y)}{\sum_{(s', s) \to u_k = -1} p(S_{k-1} = s', S_k = s, y) / p(y)}    (2.7)

where the sign of L(u_k) indicates the signal estimate (+1, -1) and the magnitude indicates the reliability. The probability p(y) is the same for both the numerator and denominator and cancels out, leaving the term p(S_{k-1} = s', S_k = s, y), which, by recursively applying Bayes' theorem, can be shown to be:

p(S_{k-1} = s', S_k = s, y) = p(y_{k+1}^n \mid S_k = s) \cdot p(S_k = s, y_k \mid S_{k-1} = s') \cdot p(S_{k-1} = s', y_1^{k-1}) = \beta_k(s) \cdot \gamma_k(s', s) \cdot \alpha_{k-1}(s')    (2.8)

The term \alpha_k(s) can be obtained in terms of \alpha_{k-1} and can therefore be calculated by tracing the trellis forward via the forward recursion:

\alpha_k(s) = p(S_k = s, y_1^k) = \sum_{s' = 0}^{M-1} p(S_{k-1} = s', S_k = s, y_1^k) = \sum_{s'} p(S_{k-1} = s', y_1^{k-1}) \cdot p(S_k = s, y_k \mid S_{k-1} = s')    (2.9)

\alpha_k(s) = \sum_{s'} \alpha_{k-1}(s') \cdot \gamma_k(s', s), \quad \text{where } \alpha_0(0) = 1 \text{ and } \alpha_0(s) = 0 \text{ for } s \neq 0    (2.10)

The term \beta_k(s) can be obtained in terms of \beta_{k+1} and can be calculated by tracing the trellis backward via the backward recursion:

\beta_k(s) = \sum_{s' = 0}^{M-1} p(y_{k+1}^n, S_{k+1} = s' \mid S_k = s) = \sum_{s'} \beta_{k+1}(s') \cdot \gamma_{k+1}(s, s'), \quad \text{where } \beta_N(0) = 1 \text{ and } \beta_N(s) = 0 \text{ for } s \neq 0    (2.11)

The third term is the metric associated with the channel:

\gamma_k(s', s) = p(S_k = s \mid S_{k-1} = s') \cdot p(y_k \mid x_k = x_k^{s' \to s}) = p(u_k) \cdot p(y_k \mid x_k)    (2.12)

where the first factor is deterministic from the trellis (the a priori probability of the transition), the second factor captures the channel characteristics, and u_k is the source input and x_k the source output corresponding to the transition s' -> s. Although this result is theoretically correct, a finite-precision processor cannot always implement it, because the joint probabilities p(S_{k-1} = s', S_k = s, y) are often very small and may lead to numerically unstable results [5].
Therefore, Berrou et al. described a modified MAP algorithm to combat the problem of numerical instability [5]. Instead of cancelling out the constant probability p(y), the joint probabilities are divided by p(y_1^k). This slight modification affects both the forward and backward recursion calculations. The a posteriori probability equation now becomes:

\frac{p(S_{k-1} = s', S_k = s, y)}{p(y)} \propto \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s) \cdot \tilde{\beta}_k(s)    (2.13)

where the proportionality constant is common to the numerator and denominator of (2.7) and therefore cancels in the LLR, and where the modified forward probabilities are defined as:

\tilde{\alpha}_k(s) = \frac{\alpha_k(s)}{p(y_1^k)}    (2.14)

By applying (2.9) and (2.14), the new term can be calculated recursively as:

\tilde{\alpha}_k(s) = \frac{\sum_{s'} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s)}{\sum_{s} \sum_{s'} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s)}    (2.15)

The new modified backward recursion probability is defined as:

\tilde{\beta}_k(s) = \frac{\beta_k(s)}{p(y_{k+1}^n \mid y_1^k)}    (2.16)

By expanding the denominator and applying the previous definitions, this can be calculated recursively via:

\tilde{\beta}_{k-1}(s') = \frac{\sum_{s} \tilde{\beta}_k(s) \cdot \gamma_k(s', s)}{\sum_{s} \sum_{s'} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s)}    (2.17)

With all the adjustments, the modified BCJR MAP algorithm involves calculating:

L(u_k) = \log \frac{\sum_{(s', s) \to u_k = +1} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s) \cdot \tilde{\beta}_k(s)}{\sum_{(s', s) \to u_k = -1} \tilde{\alpha}_{k-1}(s') \cdot \gamma_k(s', s) \cdot \tilde{\beta}_k(s)}    (2.18)

This modified MAP algorithm does not produce numerically unstable values. It is also known as the forward-backward algorithm because of its recursive nature. Because of the importance of this algorithm to turbo codes, and due to its computational complexity, different variants and sub-optimal derivations have surfaced in an attempt to produce a more efficient algorithm [4], [13], [14], [15].
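The normalized recursions (2.15) and (2.17) and the LLR computation (2.18) map directly onto one forward and one backward pass over the trellis. The following is a schematic sketch, assuming the trellis and branch metrics are supplied as simple tables; it is an illustration of the algorithm, not the thesis's simulator.

```python
import numpy as np

def bcjr_llr(gamma, trellis, K, M):
    """Normalized BCJR (eqs. 2.15, 2.17, 2.18).
    gamma[k]         : M x M array, gamma[k][s', s] = p(u_k) * p(y_k | x_k),
                       zero where the trellis has no branch s' -> s
    trellis[(s', s)] : input bit u in {+1, -1} driving transition s' -> s
    K, M             : number of trellis steps and number of states
    """
    alpha = np.zeros((K + 1, M)); alpha[0, 0] = 1.0      # encoder starts in state 0
    beta = np.zeros((K + 1, M)); beta[K, 0] = 1.0        # terminated trellis assumed
    for k in range(1, K + 1):                            # forward pass, eq. (2.15)
        a = alpha[k - 1] @ gamma[k - 1]                  # sum over s' of alpha * gamma
        alpha[k] = a / a.sum()                           # normalization prevents underflow
    for k in range(K - 1, -1, -1):                       # backward pass, eq. (2.17)
        b = gamma[k] @ beta[k + 1]
        beta[k] = b / b.sum()
    llr = np.zeros(K)                                    # LLR per bit, eq. (2.18)
    for k in range(K):
        num = den = 1e-300                               # guard against empty sums
        for (sp, s), u in trellis.items():
            term = alpha[k, sp] * gamma[k][sp, s] * beta[k + 1, s]
            if u == +1: num += term
            else:       den += term
        llr[k] = np.log(num / den)
    return llr
```

Normalizing each layer by its own sum, rather than by the exact denominators of (2.15) and (2.17), rescales alpha and beta by constants that cancel in (2.18); this is the usual implementation shortcut and keeps all quantities in a well-conditioned range.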
2.4 Application of the MAP Algorithm to Linear Codes

The MAP algorithm as it was first proposed is generic to codes that exhibit the properties of a discrete-time finite-state Markov process [10]. A well-defined process can be represented by a state transition diagram and can be further expanded into a trellis diagram. Convolutional codes fit this criterion well, and are the main target of the decoding algorithms described. The contents of the encoder's shift register define the state in the encoding/decoding process; an input results in a transition as the register contents are shifted or changed from one state to another. The MAP algorithm can also be applied to any code with a trellis representation, and in [10], it is shown that all linear codes have such a representation. There are different techniques for producing trellises for a common linear block code [21]. In [16], Wolf continued the study of the method first proposed in [10] for trellis construction through the use of partial syndromes of the parity-check matrix. Massey followed in [17] with a similar method using the generator matrix, and a third method was introduced by Forney in [19]. Although each of these methods produces different trellises for the same code, only Wolf's method of trellis construction is presented here.

For an arbitrary (n, k) linear block code with generator matrix G and parity-check matrix H, an input bit vector u = (u_1, u_2, ..., u_k) forms the codeword c = (c_1, c_2, ..., c_n) via c = u \cdot G. Since the generator matrix and the parity-check matrix satisfy G \cdot H^T = 0, the decoder will assume a valid codeword was received if the syndrome, defined as s = c \cdot H^T, is equal to zero. If h_1, h_2, ..., h_n are the columns of the parity-check matrix, then for the syndrome to be zero, the following relationship must hold:

\sum_{i=1}^{n} c_i \cdot h_i = 0    (2.19)

The final syndrome can be formed progressively through partial syndromes, defined as:

s_0 = 0
s_1 = c_1 \cdot h_1
s_2 = c_1 \cdot h_1 + c_2 \cdot h_2 = s_1 + c_2 \cdot h_2
\vdots
s_n = s_{n-1} + c_n \cdot h_n    (2.20)

The states in the trellis are simply the sequence of partial syndromes, represented by (n-k)-tuples. Like the trellis for convolutional codes, the block code trellis commences and terminates with the all-zero states s_0 and s_n, respectively. A path through the trellis actually traces out the codeword sequence. The complete trellis considers all possible codewords, and there is a one-to-one correspondence between a particular path through the trellis and a specific codeword. Trellis construction consists of the following two steps:

(i) Construct branches for all possible transitions of the trellis for length n, where n is the length of a codeword. This results in an unexpurgated trellis.

(ii) Expurgate all paths that do not lead to the zero state at depth n, leaving behind the final trellis representation of the linear block code.

For illustrative purposes, consider the example of a binary (5, 3) code with the parity-check matrix:

H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 \end{bmatrix}

At time t = 0, a 0 input leads to state 00 and a 1 input leads to state 10. Following step (i) outlined above, we obtain the unexpurgated trellis shown in Figure 2.4. Then, the paths that do not lead to the final state 00 are removed, resulting in the four-state trellis shown in Figure 2.5.

[Figure 2.4: Unexpurgated trellis from step 1 of Wolf's trellis construction method]

[Figure 2.5: Expurgated trellis from step 2 of Wolf's trellis construction]

The structure of the trellis is irregular compared to that of convolutional codes. Thus a convolutional code is equivalent to a stationary Markov source, and a linear block code is equivalent to a time-varying Markov source. For a rate k/n linear block code with r = n - k, there are at most 2^r states at each level. Therefore, with the MAP algorithm, any parity-check code with r parity bits can be decoded with a complexity that is proportional to 2^r on an unknown memoryless channel. In general, one wants to minimize the number of states so that the decoding complexity is reduced. In [20], Muder defines the term minimal trellis as the trellis with the minimal number of states at each level for a specific code. In [21], it was shown that the Wolf, Massey, and Forney trellis generation methods all yield minimal trellises. All minimal trellises for the same code are said to be isomorphic. Except for the specific trellis layout, isomorphic trellises exhibit identical properties in terms of error performance and decoding complexity.
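Wolf's two-step construction can be sketched compactly: enumerate partial syndromes forward per (2.20), then expurgate states that cannot reach the zero syndrome. The sketch below, with names of our own, uses the (5, 3) example code above.

```python
# Sketch of Wolf's trellis construction via partial syndromes (eqs. 2.19-2.20).
# States at depth i are the reachable partial syndromes s_i; step (ii) prunes
# every path that cannot end in the all-zero syndrome at depth n.

H = [[1, 1, 1, 0, 0],
     [0, 1, 0, 1, 1]]   # parity-check matrix of the binary (5, 3) example code

n, r = len(H[0]), len(H)
cols = [tuple(H[j][i] for j in range(r)) for i in range(n)]  # columns h_1..h_n

def step(s, h, c):      # partial syndrome update: s_i = s_{i-1} + c_i * h_i (mod 2)
    return tuple((a + c * b) % 2 for a, b in zip(s, h))

# Step (i): unexpurgated trellis, all transitions reachable from s_0 = 0
layers, edges = [{(0,) * r}], []
for i in range(n):
    nxt = set()
    for s in layers[-1]:
        for c in (0, 1):
            t = step(s, cols[i], c)
            edges.append((i, s, t, c))
            nxt.add(t)
    layers.append(nxt)

# Step (ii): expurgate, keeping only states that can still reach s_n = 0
alive = [set() for _ in range(n)] + [{(0,) * r}]
for i in range(n - 1, -1, -1):
    alive[i] = {s for (d, s, t, c) in edges if d == i and t in alive[i + 1]}
trellis = [(d, s, t, c) for (d, s, t, c) in edges if s in alive[d] and t in alive[d + 1]]
print(f"{len(trellis)} branches; each depth-0-to-{n} path is one of the 2^3 codewords")
```

Running this on the example reproduces the expurgated four-state trellis of Figure 2.5, with each surviving path spelling out one codeword.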
Chapter 3 Markov Model-Aided Convolutional Decoding

The complexity of a digital communication system demands that designers model the complete system as a collection of smaller functional blocks (refer to Figure 3.1).

[Figure 3.1: Communication system components. The information source feeds a source encoder and channel encoder (together the joint source-channel encoder) and then a modulator; after the channel, a demodulator feeds a channel decoder and source decoder (the joint source-channel decoder) and finally the information sink.]

Traditionally, these components (information source, source encoder, channel encoder, digital modulator, channel, digital demodulator, channel decoder, source decoder, and information sink) have been identified as independent functional units with a clear-cut interface between each module [22]. Each module is designed with individualized criteria, and there is little or no interaction between the different groups of designers [31]. The source encoder is designed to maximize channel utilization through compression techniques. The ultimate goal of source coding is to remove all redundancy from an information sequence and produce an output sequence of equally likely binary digits.
We go on to cover a technique called trellis merging which allows the Viterbi algorithm to be used with an N-state Markov model. Finally, this chapter concludes with an iterative decoding technique which allows the receiver to recover the model without any a priori information of the source statistics. The results of this chapter are to appear in [39]. 3:1 Viterbi Decoding using a Two-State Markov Model Markov model-aided decoding involves modeling the source data set as a finite-state Markov sequence, and incorporating it into the decoding process. The model parameters are the state transition probabilities of the source, where a state refers to a source code word, or to a subset of the bits in a source code word. Consider a simple two state model based on the data bits of an information sequence. A zero information bit will be represented by state zero while a non-zero bit will be represented by state one. The transi-tions between the data bits are characterized by the four transition probabilities. More redundant the data sets result in more pronounced models, and more benefit to the source-aided channel decoder. With an ideal source encoder, the information bits are equally likely, and the Markov model will add no advantage or disadvantage to the decoding Chapter 3 Markov Model-Aided Convolutional Decoding 27 p(Oll) p(OIO)( ( 0 ) ( 1 ) ) p ( l l l ) p(llO) Figure 3.2 Two state Markov-model process. There are different ways of making the model parameters available to. the receiver. These include training the decoder on some standard data set, transmitting the model parameters as side information with a stronger error protective codes [33], or estimating the model parameters iteratively at the receiver. The training of the receiver on a standard sequence is an approximation technique since different data sets can have completely different statistics. Transmitting the model parameters incurs direct overhead cost; further-more, transmission errors can corrupt the model used by the receiver. Model estimation through iterative methods avoids these problems, but produces a larger decoding delay because the parameters are extracted and fed back to the decoding system similar to that of turbo decoding. In a system with a single encoder/decoder pair, the Viterbi algorithm can be used for decoding because in the absence of a multi-stage system, hard decision decoders will suffice. Unlike the exhaustive tree search algorithm, the Viterbi algorithm requires Chapter 3 Markov Model-Aided Convolutional Decoding 28 relatively little computational effort and yet produces optimal performance as opposed to probabilistic techniques such as the Fano or Stack algorithm [22]. Note that the Viterbi algorithm is not limited to convolutional codes since linear block codes, or any codes that can be transformed into a linear block code, have a trellis representation (refer to section 2.4). The Viterbi algorithm searches through the trellis eliminating paths at each node, and is optimal in the sequential sense. The conventional Viterbi decoder can incorporate a Markov model into the decision making process, as we show in the following. For a given rate k/n code with information sequence u = (u0, uj, u2...uN,j), channel encoder output x = (x0, xh x2,..., xN.j), and received sequence y = (y0, ylt y2,.-.y^.i) the estimation ii = u is correct i f y = x. The objective of the decoder is to minimize the probability of error as calculated by: the decoding process. 
Hence, to minimize the probability of error p(E), one minimizes the conditional probability. p(E) = J > ( £ y ) • p(y) (3.1) y where p(y) is the probability distribution of the received sequence and is independent of P(E y) = p(u*u\ y) (3.2) Note that minimizing p(ii *u\ y) is the same as maximizing p(u = u | y). App ly ing Baye's rule we get: Chapter 3 Markov Model-Aided Convolutional Decoding 29 p(u = u y) = P(y u= U) p(u) P(y) or simply P(y u)-p(u) P(y) (3.3) For optimal decoding, it is sufficient to maximize the numerator term [26]. In conventional Vi te rb i decoding the term p(u), which is the a prior i probabil i ty of the information sequence, is not known and it is assumed the same for every sequence u. Hence, the conventional Viterbi decoder maximizes p(y u), which corresponds to maximum l ikel i -hood decoding. A t each node, the decoder calculates the branch metrics associated with the trellis transitions from the nodes at the previous time to that node. These are added to the partial cumulative metrics associated with each path or sequence up to the node. A survivor path is chosen with the best metric, and all other paths merging into this particular node are eliminated. In a discrete memoryless channel ( D M C ) , the channel probabilities are indepen-dent, or p(y | «) = "«) • (3.4) Furthermore, for data described by a first order Markov source, it can be shown that Pi") - Ylp(ui ui-\) (3.5) Therefore, to maximize the a posteriori probability of choosing the correct sequence it is sufficient to maximize Chapter 3 Markov Model-Aided Convolutional Decoding 30 (3.6) In the log domain, this expression becomes a sum of branch metrics which include channel and source transition probability terms: The decoding algorithm proceeds as previously described, except the modified metric branch metric is used. The performance of the source-aided decoder was evaluated v ia Monte Carlo simulations over an A W G N channel. Each simulation point within this thesis contains at least 200 bit errors. We use a one bit differential pulse code modulated ( D P C M ) monochrome image of Lenna for modeling and transmission purposes. Although one bit D P C M encoding of the Lenna image produces poor picture quality and would not be used in practical situations, these simulations show that improvement can be attained through the use of source modeling. The simulation uses a rate 1/3 convolutional code with constraint length 9 and generator polynomials gj=5578, g2=6638, and g3=7718 (refer to Figure 3.3 for the simulation results). The model parameters, which are acquired by (3.7) channel term a priori probabilities from model Chapter 3 Markov Model-Aided Convolutional Decoding 31 measuring the DPCM Lenna image prior to transmission, are given in Table 3.1. Table 3.1Model Parameters for 1-bit DPCM Image of Lenna p(Olx) ' P(llx) • P(xl0) 0.6518 0.3482 P(xll) , 0.1441 0.8559 p(0)=0.2922 p(l)=0.7078 1 0 b I : h : : : : : : : 4 : : : : : : : : : l : : : : : : : : : : : : : : : : : : l ; : : : ; : : : : J : : : : : : : : : J : : : : : : : : : j -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 3 SNR (dB) Figure 3.3 Two State Markov Model-Aided Decoding To demonstrate the impact of the model, we show a family of curves with various parame-ters. Good model candidates will be those derived from images encoded with schemes that are widely used. The JPEG standard and many other transform-based image encoders Chapter 3 Markov Model-Aided Convolutional Decoding 32 decompose an image into a number of relatively independent components. 
In a progressive transmission scheme, these components may be sent separately. In applying source-aided decoding, the best candidate component would be the one with the most redundancy such as the DC terms of a discrete cosine transform (DCT). As outlined in [32], we used a 7-bit Lloyd-Max quantization of the DC terms of a 8-pixel by 8-pixel DCT, and have encoded these quantized values with folded binary code. We considered transmitting the most significant bits. Statistics established using this technique are presented in Table 3.2. For Table 3.2Parameters for Different Models DCT Model Artificial Model p(Olx) P(1M p(0lx), • p(Hx) P(xl0) 0.8511 0.1489 0.9518 0.0482 p(xll) 0.1436 0.8564 0.0441 0.9559 p(0)=0.5110 p(l)=0.4890 p(0)=0.7922 p(l)=0.2078 comparison purposes, we also include parameters with more pronounced statistics. The effect of the different data and parameters are exhibited in Figure 3.4. For faster simula-tions, a 1/3 convolutional code with constraint length 3 and generator polynomials gj=5, g2=7, g3=7 is employed. This weaker code was chosen to show the convergence of the curves for source-aided and non-aided decoding in the limit of high signal-to-noise ratios (refer to Figure 3.4 for simulation results). This was necessary to reduce simulation time as a weaker code requires less computational effort. Naturally, the more pronounced the model is, the better the decoder performs with the aid of source modeling, The performance improvement is more prominent at lower signal-to-noise ratios. Chapter 3 Markov Model-Aided Convolutional Decoding 33 SNR (dB) Figure 3.4 Family of curves with varying statistics A t lower signal-to-noise ratios, the channel is more noisy, and the statistical model is more effective at biasing the metric. However, in a clean channel, the channel term of the branch metric more accurately reflects the transmitted bits, and the source term is relatively weak. Therefore one observes the converging of the source-aided and non-aided curves at high signal-to-noise ratios. To verify this, one can consider the separate contributions of the channel probabilities and the a priori probabilities of the model. In an A W G N channel, the channel probabilities are given by: Chapter 3 Markov Model-Aided Convolutional Decoding 34 p(yt\  ud = - j = — e x P x<*;-*!)2 2 - a 2 (3.8) 1 • • 2 log[/?(y,.| w,)] = 2 " X ^ ~ ^ +constant 2 • a _/ where jcj e {-1, +1} is the j t h bit of the i t h code word, and yj is the corresponding received value. 1 fE, o 2 « 1 0 1 0 ^ (3.9) From (3.8) and (3.9), at high S N R (EB/N0), a 2 becomes very small making the channel term of the branch metric in (3.7) dominate over the a priori term. Similarly, at low S N R , o 2 becomes very large giving the channel term of the branch metric a lower weight relative to the a priori probability term. The relative contribution from the two terms is inherently regulated by the channel conditions. 3.2 T r e l l i s M e r g i n g a n d H i g h e r S t a t e M o d e l s For source coders which produce fixed length codewords of length greater than one bit, we must consider Markov models with a higher number of states. For example, we may consider two or three bit D P C M encoded images whose Markov models have four and eight states, respectively. The source model for the two bit case is shown in Figure 3.5. 
For q-bit source code words, i f we block q channel code words into a single block, Chapter 3 Markov Model-Aided Convolutional Decoding 35 Figure 3.5 Four State Markov Model the block will correspond to a single source codeword, and the Viterbi algorithm employed in the two state Markov model aided decoder can be used as before except the model now has Q=2q states. By grouping source bits, the decoder for a 1/n code will parse q n-bit codewords into one super codeword. This can be viewed as transforming a 1/n code into a q/qn code. In the trellis representation, this can be seen as eliminating q-1 intermediate nodes for every q nodes and connecting the two remaining end nodes with a larger number of paths directly. This process is known as trellis merging. Refer to Figure 3.6 and Figure 3.7 for a visual depiction of a trellis and its corresponding merged trellis. Trellis merging reduces the overall length of the trellis by a factor of q, but increases the number of branches by a factor of 2q'1. For the case where q=2, the overall trellis length is reduced by a factor of 2, while the number of branches is increased by a factor of 2. Therefore, there is no significant increase in the overall decoding complexity. However, for q>2, there is an increase in overall decoding complexity. As mentioned earlier, the decoding Chapter 3 Markov Model-Aided Convolutional Decoding 36 01 00 11 10 Figure 3.6 Trellis for a 4 state convolutional code algorithm is not changed, and the metric has the same form as in Equation (3.7), except that ui is now a q-bit source codeword, and p{yAuf) is replaced by where y • is the n-bit code word corresponding to the source code bit ui . To evaluate the performance of the iV-state model aided decoders, simulations were completed for both a two bit D P C M and a three bit D P C M encoded Lenna image. For consistency, we always employ the convolutional encoder with constraint length 9 and generator polynomial g 1=557 8 , g 2=663 8 , g 3 =771 8 . A s compared to the results of the single q-1 (3.10) m = 0 Chapter 3 Markov Model-Aided Convolutional Decoding 37 00 o 01 X 5? JC X 10 o 11 Figure 3.7 Two Stage Merged trellis for 4 state convolutional code bit D P C M encoded image, the two bit D P C M encoded image shows noticeable improve-ment in the Markov model coding gain. However, the improvement from a two bit D P C M image compared to that of a three bit D P C M image revealed very minor improvement in the bit error rate. The model parameters for the two bit D P C M Lena image are given by Table 3.3. Table 3.3Model Parameters for 2-bit D P C M Image of Lena p(00lxx) p(01lxx) p(10lxx) p(lllxx) p(xxlOO) 0.5839 0.3469 0.0072 0.0620 p(xxl01) 0.0403 0.6054 0.0152 0.3391 p( \ \H0) 0.0070 0.0858 0.4765 . 0.4307 p(00)=0.0435 p(01)=().364H p(10)=0.0642 p(ll)=0.5274 Chapter 3 Markov Model-Aided Convolutional Decoding 38 Table 3.3Model Parameters for 2-bit DPCM Image of Lena p(OOIw) p(Ollxx) p(10lxx) p(lllxx) p(xxlll) 0.0056 0.02337 0.05267 0.7079 p(00)=0.0435 p(01)=0.3648 p(10)=0.0642 p(ll)=0.5274 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 3 SNR (dB) Figure 3.8 Comparison Markov model aided Decoding for N-bit DPCM To demonstrate the effectiveness of the modeling process, simulations with real image data are compared to those completed with synthesized data. The synthesized data was computer generated using the parameters obtained from measuring the two bit D P C M Lenna image statistics. 
The parameters of the model are reflective of the Lenna image as a Chapter 3 Markov Model-Aided Convolutional Decoding 39 whole and may not be good a representation of image sub-parts. In other words, the model assumes a stationary source, whereas real images are never truly stationary. Therefore, Markov model aided decoding of synthesized data should yield better results as compared to Markov model aided decoding of real image data. However, as the simulation results demonstrate, the performance on the actual image data follows very closely to that of the simulated ideal data (refer to Figure 3.9). This justifies the modeling process. Please refer 10" No Source Model Markov Model-Aided, actual data Markov Model-Aided, ideal data -1.5 -0.5 0 0.5 1 SNR (dB) 1.5 2.5 Figure 3.9 Four State Markov Model-Aided Decoding to Figure 3.10 for an example of images obtained from Markov Model aided decoding of the 512 x 512 Lenna image, and notice the very dramatic visual improvement attained! Chapter 3 Markov Model-Aided Convolutional Decoding 40 Figure 3.10 Qualitative depiction of Markov model aided Decoding for 2-bit DPCM Chapter 3 Markov Model-Aided Convolutional Decoding 41 Not Source-aided at S N R = -1 dB Source-aided at S N R = -1 dB Not Source-aided at S N R = 0 dB Source-aided at S N R = 0 dB Figure 3.10 Qualitative depiction of Markov model aided Decoding for 2-bit DPCM Chapter 3 Markov Model-Aided Convolutional Decoding 42 3.3 Model Recovery Consider the problem of making the model available to the receiver. Overhead is incurred from sending the model parameters explici t ly. Us ing a standard training sequence to establish the model may lead to inaccuracy and ultimately degraded performance as differ-ent images have different statistics. Obtaining the model iteratively does not incur, the problems of the other two methods, with the drawback of a processing time delay at the receiver. The iterative modeling process is completed by first decoding the data file conven-tionally with no prior knowledge of the data. During the decoding procedure, the decoder counts the number of states and the number of state transitions encountered. The model parameters are calculated from the measurements, and the decoding process is repeated with the new model parameters (refer to Figure 3.11). Measurements from previous Information Sink Source Decoder Channel Decoder Demodulator i Figure 3.11 Iterative Model Recovery Structure iterations are not used so that the effect of previous decoding errors w i l l not accumulate and weight down future improvements. This process is repeated iteratively until conver-Chapter 3 Markov Model-Aided Convolutional Decoding 43 gence, where further iterations yields minor or negligible improvement, is achieved. Iterative modeling w i l l produce improvement as long as each stage of the iteration yields better results than the previous one. From Figure 3.12, we observe that over the entire -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 2.5 3 SNR (dB) Figure 3.12 Iterative Model Recovery using Four State Model with Synthetic Data range of channel S N R , after two decoding iterations, the model is recovered well enough so as to give essentially the same Markov model coding gain as the case in which the receiver has perfect a priori knowledge of the source model. 
Figure 3.12 shows the results of simulations performed over synthesized data modeled from a two-bit DPCM encoded Lenna image, whereas the source data for Figure 3.13 is the actual data obtained directly from the image file.

Figure 3.13 Iterative model recovery using raw data from the 2-bit DPCM Lenna image (BER versus SNR in dB; curves: 0th, 1st, 2nd, and 5th iterations)

Both simulations employ the same FEC code, which is a rate 1/3, constraint length 9 convolutional code with generator polynomials g1 = 557_8, g2 = 663_8, and g3 = 771_8. As observed from the curves, the model recovery process has a similar impact on the BER performance for both the synthetic data and the actual image data. Although we have not reported the quantitative difference between the iteratively recovered model and the true model, we note that Markov model aided decoding has a certain amount of insensitivity to the accuracy of the model: even when the BER obtained using the true model is identical to the BER obtained using the iteratively estimated model, there is a discrepancy between the two models on the order of five percent.

To reduce the decoding delay incurred by the iterative model recovery technique, we have considered breaking the image file into blocks and performing model recovery block-by-block, independently for each block. The model for a block of small size tends to be very pronounced, and any transmission error causes an extreme deviation away from the true model. A block must be sufficiently large so that there is sufficient error cancellation in the measured transition probabilities. Without error cancellation, the model measured at the receiver has no chance of biasing the branch metric so as to improve the decoder's error probability. Rather than change the data block size of the encoder/decoder, which is often fixed by standards, the model recovery procedure can group adjacent data blocks together to achieve model recovery blocks that are big enough for sufficient error cancellation (refer to Figure 3.14; a sketch of this grouping follows below).

Figure 3.14 Grouping together data blocks for model recovery

The results of varying the model recovery block size are shown in Figure 3.15.

Figure 3.15 Impact of varying model recovery block sizes (BER ratio versus number of frames grouped, from a single frame to 2850; curves for SNR = -3, -2, -1, and 0 dB)

Here, simulations are performed on actual raw data from the two-bit DPCM Lenna image. In this scenario, synthetic data is not applicable because we are interested in observing the impact of different sub-image models on the overall decoding performance. In synthesized data, the data bits are generated within the confines of a specified model; therefore, regardless of the data block size, the statistics are stationary. However, smaller data blocks within a larger image file may not be stationary, and statistics can vary drastically from one block to another. The simulations are performed using the same rate 1/3 convolutional code with constraint length 9. The data block size used is 184 data bits, and the complete image file consists of a collection of 2850 such data blocks.
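A minimal sketch of the grouping of Figure 3.14 (a hypothetical helper, assuming the 184-bit blocks and 32-block groups used in these simulations) is:

```python
def group_for_model_recovery(data_blocks, blocks_per_group=32):
    """Concatenate adjacent data blocks into larger model recovery blocks so
    that each group has enough samples for error cancellation; one Markov
    model is then estimated per group and used to re-decode its blocks."""
    return ["".join(data_blocks[i:i + blocks_per_group])
            for i in range(0, len(data_blocks), blocks_per_group)]
```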
For our simulations, we monitor the relative improvement of the fourth model recovery iteration over a first pass through the system. To accomplish this, we measure the ratio of the two BER values over a range of model recovery block sizes, which span from a single data block to the complete image file.

One would expect a trade-off between the stationarity effect and the error cancellation effect. A smaller block will yield a model more reflective of the local statistics. However, a model which is very pronounced locally will be prone to errors. In the extreme case, consider a model developed for each individual bit in the data transmission. If a zero bit is received, the model recovered will have p(0) = 1, and redecoding will simply give back the same result. A single error will produce a model which is completely incorrect. Therefore, in this scenario, statistical modeling will provide no benefit to the overall coding performance. The results observed in Figure 3.15 are consistent with this expectation. Because the results are expressed as the ratio of the BER at the fourth iteration over the BER of a non-iterative system, a smaller value indicates better performance in terms of benefits from the model recovery process. From Figure 3.15, we observe that the best result occurs at a model recovery block size of approximately 32 data blocks for all SNRs we investigated. The curves indicate a trend of improving BER values as the model recovery block size increases from a single data block to the optimal size observed above, and slightly inferior performance is observed after this point. This is expected, since the effects of error cancellation and model accuracy are balanced at approximately 32 data blocks. For the Lenna image, this is equivalent to dividing the image into 90 sub-images. This investigation was carried out on the Lenna image only as a means of demonstrating the impact of partitioning an image transmission into smaller blocks for model recovery. One can anticipate that the best way to divide up the image is very image dependent. Different approaches to sub-image division are a study worthy of a separate investigation.

Chapter 4 Source-Aided Turbo Decoder

The primary purpose of iterative decoding as outlined in the previous two chapters is to obtain the a priori probabilities necessary to perform MAP decoding. Turbo decoding exploits the structure of one component code to approximate the a priori probability for input to the following component decoder. Source-aided channel decoding exploits residual redundancy in the information source, which is modeled as a collection of a priori probabilities. Although the two kinds of probabilities come from different sources and are gathered differently, they can be used together to form a Markov model aided turbo decoding system. In this chapter we examine the application of Markov model-aided turbo decoding to the codes of an existing concatenated system, the reverse channel link of the Code Division Multiple Access (CDMA) standard, IS-95. The reverse channel link is composed of the serial concatenation of two subcomponent codes. The concept of turbo codes, which was proposed after the development of the standard, had no influence on the design of the reverse link code. Therefore, the codes employed in the IS-95 standard were not optimized for turbo decoding. Nevertheless, its concatenated structure makes it a good candidate for the application of turbo decoding.
To effectively apply iterative decoding to the reverse channel link, we must modify the existing decoder by reformulating the component decoders as soft-in/soft-out decoders. Source-aided decoding via Markov modeling can further enhance this turbo decoded system. In practical systems, the decoder has no precognition of the source model parameters, and model recovery must be performed. The small interleaver block size dictated by the standard poses a challenge to the model recovery process. To overcome this difficulty, three model recovery schemes are proposed and compared.

4.1 CDMA Channel Link

In 1990, Qualcomm proposed the second North American standard for digital cellular communication, which was later adopted in 1993 as IS-95 (Interim Standard 95). This standard is based on a spread spectrum technique where a narrowband signal is expanded into a signal with a bandwidth much broader than the original signal. Spread spectrum techniques were first employed by the military for anti-jamming purposes because of their low probability of intercept properties. In consumer applications, these properties were exploited for the purpose of multiple access in a common bandwidth.

Although the forward and the reverse link employ many of the same operating principles, they are applied differently. Both the forward and reverse link are heavily dependent on the Walsh-Hadamard code. The forward channel link uses the Walsh-Hadamard code for direct sequence spreading, whereas the reverse link uses it for coded spreading. In the forward link, the Walsh-Hadamard code is used primarily for channel assignment, whereas in the reverse link, the same Walsh-Hadamard code is employed for extra redundancy in error correction. Because of this code's importance to the IS-95 standard, this section commences with a description of the Walsh-Hadamard code and its properties. Having established this, the IS-95 transmitter/receiver structure is presented. At that point, it is shown how Markov model aided decoding can be applied to improve the performance of the existing system.

The Walsh-Hadamard functions are binary orthogonal sequences whose lengths are a power of two. The Walsh-Hadamard code is defined by the Hadamard matrix, which was named after its initial discoverer Jacques Hadamard [22]. The Walsh-Hadamard code words are formed by a process called the Hadamard transform, which consists of taking linear combinations of different columns of the Hadamard matrix. For example, a code word F is formed via F = H · f, where H is the Hadamard matrix and f is the input vector. Hence the Walsh-Hadamard codes can be viewed as linear block codes [22]. The Hadamard matrix is an n x n square matrix with two elements: +1's and -1's. The inner product of any two distinct rows in the matrix is zero, and the multiplication of the Hadamard matrix with its transpose forms the identity matrix scaled by a factor of n, i.e. H · H^T = n · I. From this definition, the inverse transform can be shown to be:

$$\mathbf{H}^{-1} = \frac{1}{n}\mathbf{H}^{T} \tag{4.1}$$

This implies that both the forward and reverse transforms are governed by the same matrix. The n x n Hadamard matrix is defined recursively by:

$$\mathbf{H}_{n} = \begin{bmatrix} \mathbf{H}_{n/2} & \mathbf{H}_{n/2} \\ \mathbf{H}_{n/2} & -\mathbf{H}_{n/2} \end{bmatrix} \tag{4.2}$$

where H_1 = 1. Thus

$$\mathbf{H}_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad \mathbf{H}_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \tag{4.3}$$

An alternative representation of the Hadamard transform, via the Kronecker product, is:
$$\mathbf{F} = \mathbf{H}_n \cdot \mathbf{f} = (\underbrace{\mathbf{H}_2 \otimes \mathbf{H}_2 \otimes \cdots \otimes \mathbf{H}_2}_{m}) \cdot \mathbf{f} = \underbrace{\mathbf{G}_n \cdot \mathbf{G}_n \cdots \mathbf{G}_n}_{m} \cdot \mathbf{f} = \mathbf{G}_n^{m} \cdot \mathbf{f} \tag{4.4}$$

The above means that a larger Hadamard transform of order n can be decomposed into a series of m = log2(n) Kronecker multiplications. More importantly, it implies that there exists a matrix G_n that has the same effect as the Hadamard transform when multiplied with itself m times. This can be demonstrated with an example. Consider the matrix:

$$\mathbf{G}_4 = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & -1 \end{bmatrix} \tag{4.5}$$

$$\mathbf{H}_4 = \mathbf{G}_4 \cdot \mathbf{G}_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \tag{4.6}$$

By expanding the above multiplication we get:

$$\mathbf{f}' = \mathbf{G}_4 \cdot \mathbf{f} = \begin{bmatrix} f_0 + f_2 \\ f_0 - f_2 \\ f_1 + f_3 \\ f_1 - f_3 \end{bmatrix} \tag{4.7}$$

$$\mathbf{F} = \mathbf{G}_4 \cdot \mathbf{f}' = \begin{bmatrix} (f_0 + f_2) + (f_1 + f_3) \\ (f_0 + f_2) - (f_1 + f_3) \\ (f_0 - f_2) + (f_1 - f_3) \\ (f_0 - f_2) - (f_1 - f_3) \end{bmatrix} \tag{4.8}$$

These matrix operations have a repetitive configuration and can be represented diagrammatically. In this form, the transform takes on a butterfly structure similar to that of the fast Fourier transform (refer to Figure 4.1).

Figure 4.1 Butterfly structure of the Walsh-Hadamard code with length 4
Figure 4.2 General form of the butterfly structure

Like the fast Fourier transform, this operation, which is known as the fast Hadamard transform (FHT), has a complexity of O(N log N). This fast transform is identical to the fast Fourier transform with the twiddle factors set equal to one. The fast Hadamard transform allows a fast implementation of the Walsh-Hadamard code and can be used in the IS-95 standard.

For channel coding purposes, there are two fundamental codes employed in the CDMA IS-95 standard. The forward and the reverse links utilize both the Walsh-Hadamard sequence and convolutional codes. These two codes are employed differently as dictated by the nature of the channel. Because the base stations generally have greater transmission power than the mobile stations, the forward link (also known as the downlink) is not confined to a strong low rate code. However, one base station usually communicates with multiple mobile stations; therefore, ideal channelization is desired. For forward error correction (FEC), the forward link relies on a rate 1/2 convolutional code with constraint length 9 (generator polynomials g1 = 753_8, g2 = 561_8), and the orthogonal properties of the Walsh-Hadamard code are exploited for channelization purposes to produce direct sequence (DS) spreading. There are a total of 64 separate channels, with one pilot channel, one synchronization channel, and up to seven paging channels. Interleaving is performed to combat bursty channel errors. Then, data scrambling is performed according to an individualized long pseudo-noise (PN) sequence for privacy purposes, followed by the application of a short PN sequence used to identify the base station. The forward link uses Quadrature Phase Shift Keying (QPSK) modulation (refer to Figure 4.3).
Figure 4.3 Forward channel link of CDMA IS-95 (information sequence into a convolutional encoder, r = 1/2, K = 9; interleaver; data scrambling with the long PN sequence generator; Walsh-Hadamard code; short PN sequences on the I and Q channels; QPSK modulation)

The reverse channel link (also known as the uplink) originates from mobile stations, which generally have a smaller power supply than the base station; therefore, a stronger channel code is required. The reverse channel uses a rate 1/3 convolutional code with constraint length 9 (generator polynomials g1 = 557_8, g2 = 663_8, g3 = 771_8). The same 64 Walsh-Hadamard codes as in the forward link are employed, except that instead of channelization, they are used as a block code for the purpose of error correction. Every 6 scrambled code bits coming out of the convolutional encoder select the corresponding 64-bit Walsh-Hadamard codeword. In contrast to the forward link, the reverse link, in a power limited environment, depends more on the coding performance of the code than on the multiple access aspect. The employment of low rate codes to achieve the spectrum spreading effect was coined coded spreading [34]. The interleaver in the reverse link fits between the convolutional code and the Walsh-Hadamard code, and has the extra responsibility of adapting the two different component codes. For the traffic channel, a 576-bit block interleaver with 32 rows and 18 columns is used. The information sequence is written into the interleaver column-wise and read out row-wise. In the reverse link, the long PN sequence is used for DS spreading. As noted in [34], this DS spreading, in contrast to the coded spreading performed by the Walsh-Hadamard code, provides no significant coding gain. The reverse link undergoes the same short PN sequence spreading as the forward link prior to the application of Offset Quadrature Phase Shift Keying (OQPSK) modulation (refer to Figure 4.4).

Figure 4.4 Reverse channel link of CDMA IS-95 (1 input bit into a convolutional encoder, r = 1/3, K = 9; interleaver; 64-ary orthogonal modulator mapping 6 coded bits to 64 Walsh chips; DS spreading with the long PN sequence generator; short PN sequences on the I and Q channels; OQPSK modulation)

With both the forward and the reverse channel architectures now presented, we consider applying Markov model aided decoding to both systems. To evaluate the coding performance, we assume an interference free environment, and only the FEC component of the system is considered. In the forward link, this is comprised of the single rate 1/2 convolutional encoder/decoder. With a two-bit merged trellis and the code outlined previously, computer simulations produce the BER performance shown in Figure 4.5.

Figure 4.5 Forward link IS-95 (BER versus SNR in dB)

Data synthesized from the model parameters obtained from the two-bit DPCM Lenna image is used. The FEC component in the reverse link is comprised of the serial concatenation of the Walsh-Hadamard code and the rate 1/3 convolutional code. The decoding of the Walsh-Hadamard sequence is performed through the application of the FHT, which provides a sub-optimal reliability estimation of the Walsh-Hadamard code words [35]. A soft-decision, two-bit merged trellis Viterbi decoder exploits these reliability measures to enhance decoding performance.
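The following sketch ties these pieces together. It is our own illustration, not the IS-95 implementation: helper names such as walsh_encode and fht are hypothetical, and interleaving, scrambling, and PN spreading are omitted. The Sylvester recursion of Equation (4.2) builds H64, six coded bits select one row as the 64-chip codeword, and one FHT at the receiver yields the correlation with all 64 codewords at once.

```python
import numpy as np

def hadamard(n):
    """Sylvester recursion of Equation (4.2): H_n from H_{n/2}, with H_1 = 1."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H64 = hadamard(64)

def walsh_encode(six_bits):
    """Coded spreading: 6 coded bits select one 64-chip Walsh-Hadamard
    codeword (the rows of H64 form the 64 orthogonal codewords)."""
    index = int("".join(str(b) for b in six_bits), 2)
    return H64[index]

def fht(f):
    """Fast Hadamard transform: log2(n) butterfly stages, as in Figure 4.1;
    identical to an FFT butterfly with the twiddle factors set to one."""
    f = np.asarray(f, dtype=float).copy()
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

# Receiver side: the FHT of the noisy 64-chip block gives the correlation
# with every codeword in one pass; the largest correlation yields a hard
# decision, and the full vector supplies soft reliability measures.
chips = walsh_encode([0, 1, 1, 0, 1, 0]) + 0.5 * np.random.randn(64)
correlations = fht(chips)
best_index = int(np.argmax(correlations))
```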
For source-aided decoding, the model parameters are applied only to the outer Viterbi decoder. Model parameters are meaningless to the inner Walsh-Hadamard code, since the interleaver destroys any correlation between adjacent bits. Again, synthesized data obeying the two-bit DPCM monochrome Lenna image statistics is used as source information, and the associated model parameters are made available to the receiver as side information. From the simulation results it can be seen that model-aided decoding provides significant improvement over non-aided decoding: an improvement of 1 dB is observed (refer to Figure 4.6). We go on to further enhance the reverse link by developing a turbo decoding version of the receiver.

Figure 4.6 Soft-decision Markov-aided decoding for the reverse channel in IS-95 (BER versus SNR in dB)

4.2 Turbo Decoding in the Reverse CDMA Channel

Since we have a concatenated coding system, the cycle of decoding and re-decoding, after establishing extrinsic information at each stage, can be exercised to improve the signal estimation. Although the FHT provides sub-optimal soft decoding estimates, optimal iterative decoding relies on an optimal soft-decision decoder for the inner code. Within this section, the BCJR MAP algorithm is applied to the case of the Walsh-Hadamard code.

As mentioned previously, the Walsh-Hadamard code used in the IS-95 standard can be considered to be a rate 6/64 linear orthogonal block code. As such, it is systematic, with the information bits at positions 2^(5-k). However, the number of states involved in a linear block code with rate k/n is 2^(n-k), and unlike for convolutional codes, the trellis is highly irregular. Attempts at minimizing the trellis for applications of the Viterbi and BCJR MAP algorithms have been addressed in [20], [36], and [37]. A MAP decoding rule for systematic block codes is presented in [38]. The problem of complexity is reduced in systematic codes, because there are two possible paths leaving each state associated with an information bit, while for the parity-check bits, there is only one deterministic path leaving the node. This is similar to the terminating tail of a convolutional code trellis, where the end sequence is known. Therefore, for low rate codes, where k << n, most of the trellis can be collapsed.

Let u = (u_1, u_2, ..., u_k) refer to the information sequence, x = (x_1, x_2, ..., x_n) refer to the code sequence, and y = (y_1, y_2, ..., y_n) refer to the received sequence. Since the code is systematic, u_k = x_k. From equation (2.13), the a posteriori probabilities are calculated through the joint probabilities:

$$p(s', s, \mathbf{y}) = p(s', \mathbf{y}_{t<k}) \cdot p(s, y_k \mid s') \cdot p(\mathbf{y}_{t>k} \mid s) = \alpha_{k-1}(s') \cdot \gamma_k(s', s) \cdot \beta_k(s) \tag{4.9}$$

where the branch transition probabilities are described as:

$$\gamma_k(s', s) = \begin{cases} P(u_k) \cdot p(y_k \mid x_k), & \text{for systematic bits} \\ p(y_k \mid x_k), & \text{for non-systematic bits} \end{cases}$$

From the definition $L(u_k) = \log \dfrac{p(u_k = +1)}{p(u_k = -1)}$, we can get p(u_k) via

$$p(u_k = +1) = \frac{\exp[L(u_k)]}{1 + \exp[L(u_k)]} \quad\text{and}\quad p(u_k = -1) = \frac{\exp[-L(u_k)]}{1 + \exp[-L(u_k)]}$$

Therefore,

$$p(u_k = \pm 1) = \frac{\exp[\pm L(u_k)]}{1 + \exp[\pm L(u_k)]} = \text{constant} \cdot \exp[L(u_k) \cdot u_k / 2] \tag{4.10}$$

As described in the second chapter, the term p(y_k | x_k) characterizes the channel properties.
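Equation (4.10) is the conversion between L-values and probabilities used throughout the iterative decoder. As a small self-check (the helper names are our own), in Python:

```python
import math

def prob_from_llr(L, u):
    """Equation (4.10): P(u_k = u) for u in {+1, -1}, given L(u_k)."""
    return math.exp(u * L) / (1.0 + math.exp(u * L))

def llr_from_prob(p_plus):
    """Inverse mapping: L(u_k) = log[P(u_k = +1) / P(u_k = -1)]."""
    return math.log(p_plus / (1.0 - p_plus))

# Both symbols share the constant C = 1 / (exp(L/2) + exp(-L/2)), so
# prob_from_llr(L, u) equals C * exp(L * u / 2), as in Equation (4.10).
```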
Note that because of the irregular trellis structure, a branch transition probability is only defined when there is a state transition from s' to s between time k-1 and time k. All other state transitions have a probability of zero. From the above equation and the fact that u_k = x_k, the transition probability depends only on y_k and x_k. The branch transition probability can then be rewritten as a function of the received sequence and its corresponding codewords [38]. This is defined in the log domain as:

$$\Lambda(x_k, y_k) = \log\left[\frac{p(x_k = +1; y_k)}{p(x_k = -1; y_k)}\right] \tag{4.11}$$

where p(x_k; y_k) = gamma_k(s', s). Then, from equation (4.9) we get:

$$\Lambda(x_k, y_k) = \begin{cases} L_c \cdot y_k + L(u_k), & \text{for systematic bits} \\ L_c \cdot y_k, & \text{for non-systematic bits} \end{cases} \tag{4.12}$$

Now, the a posteriori probability of the received information bit u_k given a received codeword can be obtained by summing over all codewords that contain the u_k bit. The soft MAP decision rule then becomes:

$$L(u_k) = \log\frac{\sum_{\mathbf{x},\, u_k = +1} p(\mathbf{x} \mid \mathbf{y})}{\sum_{\mathbf{x},\, u_k = -1} p(\mathbf{x} \mid \mathbf{y})} \tag{4.13}$$

Assuming that each code bit is received independently of the others, and applying Bayes' rule, we get:

$$L(u_k) = \log\frac{\sum_{\mathbf{x},\, u_k = +1} \prod_{i=1}^{n} p(x_i; y_i)}{\sum_{\mathbf{x},\, u_k = -1} \prod_{i=1}^{n} p(x_i; y_i)} = \log\frac{p(u_k = +1; y_k) \sum_{\mathbf{x},\, u_k = +1} \prod_{i \neq k} p(x_i; y_i)}{p(u_k = -1; y_k) \sum_{\mathbf{x},\, u_k = -1} \prod_{i \neq k} p(x_i; y_i)}$$

Finally, from equation (4.10) we get the MAP decision rule in terms of channel measurements, a priori probability, and extrinsic information:

$$L(u_k) = \underbrace{L_c \cdot y_k}_{\text{measurements}} + \underbrace{L(u_k)}_{\text{a priori prob.}} + \underbrace{\log\frac{\sum_{\mathbf{x},\, u_k = +1} \prod_{i=1,\, i \neq k}^{n} \exp[\Lambda(x_i, y_i) \cdot x_i / 2]}{\sum_{\mathbf{x},\, u_k = -1} \prod_{i=1,\, i \neq k}^{n} \exp[\Lambda(x_i, y_i) \cdot x_i / 2]}}_{\text{extrinsic information}} \tag{4.14}$$

The a priori L-value is initially zero. Subsequent iterations use as a priori probabilities the extrinsic information obtained from the outer decoder. Having obtained the L values for the Walsh-Hadamard code, the extrinsic information is passed into the de-interleaver and then into the outer decoder. The input coming into the outer decoder is assumed to be independent due to the presence of the interleaver/de-interleaver. The outer decoder forms its metric by combining three output bits of the inner decoder to form a code word for the convolutional MAP decoder. From equation (4.10), the a priori probability of the code words for the outer decoder is:

$$p(u_k) \cdot p(u_{k+1}) \cdot p(u_{k+2}) = A \cdot \exp\left\{\tfrac{1}{2}\left[L^e(u_k) \cdot u_k + L^e(u_{k+1}) \cdot u_{k+1} + L^e(u_{k+2}) \cdot u_{k+2}\right]\right\}$$

where A refers to a constant and L^e(u_k) refers to the extrinsic information for bit u_k. From equation (2.1), we can obtain the log-domain representation

$$L(u_k, u_{k+1}, u_{k+2}) = \tfrac{1}{2}\left[L^e(u_k) \cdot u_k + L^e(u_{k+1}) \cdot u_{k+1} + L^e(u_{k+2}) \cdot u_{k+2}\right] + \text{constant} \tag{4.15}$$

These are used to form the gamma terms of the BCJR MAP algorithm for the outer convolutional code. The MAP decoder determines the a posteriori probability for the information bit as in equation (2.18). For the feedback to the inner decoder, the outer decoder must also calculate the a posteriori probabilities for the codewords. The a posteriori probabilities for codewords are calculated in the same way as the a posteriori probability for the information bit.
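To make Equation (4.14) concrete, the sketch below evaluates the a posteriori L-values of a small block code by brute force over its codebook, which is feasible for the 64 codewords of the rate 6/64 Walsh-Hadamard code. The function name and interface are our own, and a practical decoder would work on the collapsed trellis instead of enumerating codewords.

```python
import math

def map_block_llrs(lam, codebook):
    """Brute-force a posteriori L-values for a small block code (Eq. 4.14).
    lam      -- Lambda(x_i, y_i) per code bit: the channel LLR, with the a
                priori L-value already added on systematic positions (4.12)
    codebook -- all codewords as tuples of +/-1 symbols
    The extrinsic value for bit k is llrs[k] - lam[k]."""
    # Correlation metric of each codeword: sum_i Lambda_i * x_i / 2
    metrics = [sum(l * x for l, x in zip(lam, cw)) / 2.0 for cw in codebook]
    llrs = []
    for k in range(len(lam)):
        num = sum(math.exp(m) for m, cw in zip(metrics, codebook) if cw[k] > 0)
        den = sum(math.exp(m) for m, cw in zip(metrics, codebook) if cw[k] < 0)
        if num == 0 or den == 0:
            # Positions fixed across the whole codebook (e.g. the first chip
            # of every Hadamard row) carry no information about u_k.
            llrs.append(math.inf if den == 0 else -math.inf)
        else:
            llrs.append(math.log(num / den))
    return llrs
```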
Using the same analysis as in chapter two for the codewords c_n, the log-likelihoods for the code bits c_n^i (where i = 1, 2, ..., 6 indexes the code bits) are:

$$L(c_n^i) = \log\frac{\sum_{(s', s) \to c_n^i = +1} \alpha_{k-1}(s') \cdot \gamma_k(s', s) \cdot \beta_k(s)}{\sum_{(s', s) \to c_n^i = -1} \alpha_{k-1}(s') \cdot \gamma_k(s', s) \cdot \beta_k(s)} \tag{4.16}$$

and the codeword log-likelihood is given by

$$L(\mathbf{c}_n) = \sum_{i=1}^{6} L(c_n^i) \tag{4.17}$$

Referring to Figure 4.7, the extrinsic information is formed through the removal of the a priori source probabilities from the a posteriori codeword probabilities. This is done because the output of a given decoder is not to be used as its own input.

Figure 4.7 Iterative structure of the reverse link IS-95 decoder

The turbo decoding system simulation results are shown in Figure 4.8. Turbo iterations yield performance improvement for this coding system even though it was not optimized for such techniques.

Figure 4.8 Applying iterative decoding to IS-95 CDMA (BER versus SNR in dB)

4.3 Markov Aided Iterative Decoding

Turbo iterations work at the local level, giving the decoder soft or reliability measures about decoding decisions for each bit through the decoding of parity bits. Markov modeling works at a higher level, using global statistics to improve decoding decisions. The contribution of the global model in a low signal-to-noise ratio (SNR) environment is stronger than that of the local model. This is attributed to the fact that in a noisy environment, the signal estimates and their reliability information may not be accurate. The feeding back of unreliable information greatly hinders any improvement that can be exploited from the iterative turbo process. On the other hand, the global model is more reflective of the source in general and will help compensate for any local anomalies. However, when the SNR is high, the local model will be more effective than the global model. A clean channel provides better signal estimation than a noisy one; hence, the local statistics associated with each individual bit are more reliable. In this scenario, the global model, which estimates the general pattern of bits, will not be as beneficial as the local estimates. Combining both the global and local statistics can improve performance at both low and high SNR compared to a system without such information.

4.3.1 Modification to the MAP Algorithm

The MAP decoding algorithm described in chapter two has to be modified for the use of Markov model aided decoding in a turbo system. As in the Viterbi algorithm, the changes incorporated involve mainly the trellis. The same trellis merging technique described in chapter three is applicable here (refer to Figure 3.7 for an example of a four-state merged trellis). Equation (2.13) can be expanded to incorporate a merged trellis. To perform this, s'' and s' are introduced as the starting and intermediate states of the merged branch, with s the final state:

$$p(s'', s, \mathbf{y}) = \alpha_{k-2}(s'') \cdot \gamma_{k, k-1}(y_k, y_{k-1}, s'', s) \cdot \beta_k(s) \tag{4.18}$$

$$\gamma_{k, k-1}(y_k, y_{k-1}, s'', s) = \sum_{x_k} \sum_{x_{k-1}} p(y_k, y_{k-1} \mid x_k, x_{k-1}, s, s', s'') \cdot p(x_k, x_{k-1}, s, s' \mid s'') \tag{4.19}$$

The expression above follows from the application of marginal probability and Bayes' theorem.
Now, if two received code words are assumed independent, then the above expands to:

$$\gamma_{k, k-1}(y_k, y_{k-1}, s'', s) = \sum_{x_k} \sum_{x_{k-1}} p(y_k \mid x_k) \cdot p(y_{k-1} \mid x_{k-1}) \cdot p(s, s' \mid s'') \tag{4.20}$$

where x_k and x_{k-1} are the code words resulting from the transitions s' -> s and s'' -> s' respectively, and u_k and u_{k-1} are the bits causing these transitions. For the term p(y_k | x_k, x_{k-1}, s, s', s''), the knowledge of x_k is all that is necessary for the transitional probabilities. The term p(x_k, x_{k-1} | s, s'') is deterministic, having probability either zero or one depending on the trellis.

$$\gamma_{k, k-1}(y_k, y_{k-1}, s'', s) = p(y_k \mid x_k) \cdot p(y_{k-1} \mid x_{k-1}) \cdot p(s, s' \mid s'') = p(y_k \mid x_k) \cdot p(y_{k-1} \mid x_{k-1}) \cdot p^e(u_k, u_{k-1}) \cdot p^g(u_k, u_{k-1}) \tag{4.21}$$

The dependency on s' can be dropped, since s' is an intermediate node in the merged trellis. Assuming the global model is independent of the reliability of the individual bits, the joint probability can be separated into independent factors, where p^e(u_k, u_{k-1}) is the local extrinsic information from a previous decoder stage, and p^g(u_k, u_{k-1}) is the global a priori statistic. The contribution of the global model can be observed in the following simulations (refer to Figure 4.9). Here, unaided turbo decoding is compared to turbo decoding with the global model known to the receiver. For comparison with previous results, the data is generated from the four-state Markov model of the 2-bit DPCM encoded monochrome Lenna image.

Figure 4.9 Markov model-aided turbo decoder (BER versus SNR in dB)

4.3.2 Turbo Iterations within Global Iterations

In a realistic system, the receiver has no precognition of the Markov model and must employ model-recovery techniques. The IS-95 CDMA standard mandates the use of a small interleaver of size 576 code word bits, which amounts to 184 source bits. The interleaver was designed for real-time voice traffic and is not optimal for iterative decoding. Turbo iterations do improve BER performance over a non-iterative system, but the improvement would be more significant with large interleavers such as those proposed for the initial parallel concatenated turbo codes. The influence of a small code block is more prevalent in the global model recovery. As discussed in chapter three, to recover a Markov model for a data set reliably in noise, the data set must be sufficiently large. Since the required size is significantly greater than 184 bits, new model recovery schemes must be used to overcome the limitation imposed by the small interleaver.

The simplest model recovery scheme is to encapsulate turbo iterations within a larger global iteration. By local iteration, we mean a turbo iteration over a block of data corresponding to the turbo coder interleaver, while by global iteration, we mean an iteration over a group of blocks that corresponds to an image data file. In this encapsulated iterative technique, the turbo decoding is done as in the case when no model recovery is attempted. When the image data file is completely decoded, its Markov model is measured and the file is turbo decoded again as in the previous global iteration, except that the new Markov model is employed. This is a very computationally expensive technique. For the computer simulations, we employ six local turbo iterations for every global iteration, and perform six global iterations; a structural sketch of this scheme follows.
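In outline (our own notation: turbo_pass and estimate_model are hypothetical stand-ins for the turbo decoder of Section 4.2 and the transition-counting procedure of chapter three):

```python
def encapsulated_decode(blocks, turbo_pass, estimate_model,
                        local_iters=6, global_iters=6):
    """Turbo iterations encapsulated inside model recovery iterations.
    turbo_pass(block, model, n_iters) decodes one interleaver block;
    estimate_model(bits) re-measures the Markov parameters from the file."""
    model = None                        # no source model on the first pass
    for _ in range(global_iters):
        decoded = [turbo_pass(b, model, local_iters) for b in blocks]
        # The turbo decoder is re-initialized on each global pass; only the
        # measured global model is carried forward to the next pass.
        model = estimate_model("".join(decoded))
    return decoded
```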
With six local turbo iterations inside each of six global iterations, the total number of cycles through a specific data block is thirty-six. For consistency, the data set is synthesized randomly according to the model parameters for the two-bit DPCM monochrome Lenna image. Figure 4.10 depicts the BER performance at each global iteration, compared to the curve where no iterations are performed. Figure 4.11 shows the BER performance on the raw data from an image file. The two sets of curves are very similar, and their similarity reinforces the validity of the modeling process. Note that from the third global iteration to the fifth global iteration there is almost no improvement in BER performance.

Figure 4.10 Output of global iterations using synthesized data (BER versus SNR in dB)

Figure 4.11 Output of global iterations using raw data (BER versus SNR in dB)

Figure 4.12 demonstrates the results of the internal local iterations within each global one. The curves included here are those obtained from the synthetic data. Again, the curves show that there is very little performance improvement from the third to the fifth global iteration. This result is observed because after three iterations, the estimated model parameters come very close to the actual model parameters. Due to the quick convergence of the results with respect to the number of global iterations, the total number of iterations can be drastically reduced.

Figure 4.12 Output of local iterations within global iterations using synthetic data (panels for the 0th, 1st, 3rd, and 5th global iterations)

4.3.3 Joint Turbo-Global Iterations

The encapsulated iterative technique is very exhaustive in terms of calculations, because the total number of iterations performed is the product of the number of global and local iterations. An alternative approach is to retrieve and apply the a priori probabilities for both the global Markov model and the local extrinsic information simultaneously. From here on, this procedure will be referred to as the joint turbo-global (JTG) iterative technique. The joint turbo-global iterative technique is a variation of the encapsulated technique with one local iteration performed for each global iteration; the extrinsic information is saved from one global iteration to the next. For comparative reasons, two graphs are shown.
Figure 4.13 demonstrates the performance of the JTG scheme on synthesized data, whereas Figure 4.14 demonstrates the performance of the JTG scheme on raw data. The two sets of curves display similar characteristics.

Figure 4.13 Joint turbo-global iterations using synthesized data (BER versus SNR in dB)

Figure 4.14 Joint turbo-global iterations using raw data (BER versus SNR in dB)

As compared to the encapsulated system, for the same total number of iterations per block, the JTG iterations show remarkable performance. This iterative scheme improves on the unaided system for the same reason that iterative model recovery converges quickly: Markov model aided decoding is generally fairly tolerant of parameter deviations in the model recovery process. Therefore, although the first pass of the JTG iteration produces an inferior model as compared to the encapsulated system, improvement is still observed. So long as there is improvement in each iteration, successive iterations will continue to benefit from the results of previous estimations. However, because the model parameters achieved after each global pass of JTG are inferior to those of the encapsulated system, the JTG decoder will require more model recovering iterations. Nevertheless, the total number of iterations is reduced.

4.3.4 Hybrid Encapsulated and Joint Iterations

A hybrid of the encapsulated and joint turbo-global iteration systems can be developed to take advantage of the benefits of both. The objective of this configuration is to reduce the total number of iterations of the encapsulated system and achieve faster convergence than the JTG system. We realize that the encapsulated system requires fewer global (or model recovering) iterations to converge than the JTG system, because the extra local iterations give better decoding performance on each pass through the file. The hybrid technique employs the encapsulated technique for the first global iteration and JTG iterations for further global iterations.

For the simulations, the hybrid system was configured to perform one encapsulated iteration and four ensuing JTG iterations after the first pass. For consistency, the source information was synthesized according to the parameters of the two-bit DPCM encoded Lenna image. Naturally, the hybrid scheme has the same performance as the encapsulated system on the first pass. The second pass shows significant improvement over the corresponding JTG iteration. However, subsequent iterations produce smaller improvement steps than the JTG iterations. This was expected, as the global statistics accumulated through the model recovery process are what drive the quick convergence of the encapsulated system. The performance of the hybrid and JTG systems at the fifth iteration is very close. This indicates that after five iterations of model recovery, the difference in source model between the JTG and the hybrid systems is very small (refer to Figure 4.15). For comparison purposes we also include Figure 4.16, which shows the hybrid iterative scheme on raw data from an image file. As in the other schemes, the results for the synthetic data and the raw data are very similar. Again, this reinforces the validity of the modeling process.

Figure 4.15 Hybrid iterative scheme using synthesized data (BER versus SNR in dB)

Figure 4.16 Hybrid iterative scheme using raw data (BER versus SNR in dB)

4.3.5 Comparison of Different Configurations

Each of the different configurations has its advantages and disadvantages. The obvious drawback of the encapsulated system is the computational effort required. In an encapsulated iterative system with l local iterations and g global iterations, a total of l x g iterations is required. Each iteration results in an incremental time delay, which is not desirable in time critical systems. The obvious advantage of the encapsulated system is its relatively simple structure. The model recovery process treats the turbo decoder as a single channel decoder. This system goes through each block of the transmission iteratively and passes on only the global model from one block to the next.
The joint turbo-global iterative system attempts to relieve the iterative burden of the encapsulated system by merging the global and local statistics. This reduces the total number of iterations, and hence the number of calculations, by a factor of l. However, the trade-off is in the convergence characteristics: the curves do not converge as quickly as for the encapsulated system. This is due to an inferior global model and inferior local extrinsic information. The intra-iterations in the encapsulated system recalculate the extrinsic information l times for every one pass that the joint system completes; therefore, its statistics and model are much more reflective of the actual data. Furthermore, the memory requirement of the joint system is very demanding. The extrinsic information associated with each and every bit of the transmission must be stored for the complete pass of the iteration. This amounts to 2 x tx x bpx elements, where bpx is the number of bits per pixel and tx is the size of the transmission. For a large transmission such as an image file, this can be very costly.

A hybrid system is proposed to maximize the advantages of both systems. The encapsulated iterations are performed first to obtain superior models for both the global and local parameters. With the superior model, the JTG iterations are then performed to alleviate the computational requirements. This approach inherits the benefits of both systems, with a smaller iteration count and good convergence characteristics. However, it also inherits the heavy memory requirements of the joint iterative system. The performance of the different schemes can be evaluated qualitatively in Figure 4.17, which depicts the decoded images under the different recovery schemes for the two-bit DPCM encoded Lenna image. The performance was evaluated at an SNR of 1 dB.

Figure 4.17 Qualitative comparison of the three different model recovery techniques (panels: first pass of the encapsulated system; fifth model recovery iteration of the encapsulated system; first pass of the hybrid system; fifth iteration of the hybrid system)

4.3.6 Complexity Analysis

The complexity of the basic concatenated system is compounded by the model recovery process in a multiplicative fashion. To compute the complexity of the concatenated system, the complexity of each component of the decoder can be analyzed and summed. Naturally, different implementations will result in different complexities; generally, the trade-off is between storage requirements and speed. In our implementation, we focused more on speed than on the size of the program. The channel metric for the inner (Walsh-Hadamard) decoder is stored, requiring 2^(k1) x n2 x N elements, where k1/n1 is the rate of the inner decoder, k2/n2 is the rate of the
To calculate the metric values in k + 1 k - 1 the first pass, n2,xJVx2 1 x (2 1 +1) number of multiplications/divisions, k k k n2 x 7V x 2 1 number of exponential function calls, and n2xNx2 1 x (3 • 2 ' + 1) additions/subtractions are performed (refer to as 0(m, m), 0(m, a), and 0(m, e) respec-k tively). A temporary working storage of 2 1 elements is needed to formulate all the k channel metrics. In applying the a priori probabilities, n2x Nxkx x (2 1 + 1) multiplica-tions and n2xNxk{ additions are performed (referred to as 0(fbk, m) and 0(fbk, a) respectively). The application of the interleaver/de-interleaver requires 2xn2xN storage for the extrinsic information in both the forward and the reverse direction. The outer Table 4.1 Inner Decoder Complexity Multiplication Addition Exponential Storage metric calcula-tion n2-N-2ki + 1 •(2* I _ 1 + 1) n2-N- 2ki (3-2^+1) N-n2-2kl N • n2 • 2kl + 2*« Application of a priori prob. n2 - N • kx •(2A,+ 1) N • n2- kx N • n2 • 2kl Interleaving - • - -2-N • n2 decoder utilizes the BCJR MAP algorithm. Rather than recalculate y at every iteration, the k2 calculation can be sped up by calculating it once with Nxn2x2 number of multiplica-Chapter 4 Source-Aided Turbo Decoder 82 t ions, and storing the result in an array of 2 x N elements. In ca lcula t ing the a , Nx2m x 2kl number of multiplications and N x2m x(2kl -\) number of additions are required. The application of Markov probabilities requires an additional TV x 2 x 2 2 number of multiplication. These values are stored and employed in the backward path requiring 2mxN number of storage elements. In calculating the P terms, 2m+ 1 element of working memory is required as the final a posteriori probabilities are produced through the backward sweep. The computational requirement for the P terms and the final a posteriori information sequence probabilities is the same as that of the a terms. The a k posteriori code word probabilities requires 2 1 x N number of storage elements. Further, k the a posteriori code word probabilities requires Nx2 1 x kx number of additions. The Table 4.20uter Decoder Multiplication Addition Storage Y N • n2 • 2kl -2kl-N a N.2m-2k2 + 1 N-2m-(2k2-\) 2m-N P N.2m-2k2+1 N-2m-(2k2-\) 2 m + 1 a posteriori infor-mation sequence N-2m.2k2 + l N-2m-(2kl-l) -a posteriori code word sequence -N-2kx-kx ' 2kx • N overa l l computat ional and storage requirement for the different model recovery Chapter 4 Source-Aided Turbo Decoder 83 procedures are given by Table 4.3. Table 4.3 Complexity of the Different Model Recovery Schemes Multiplication Addition Storage Encapsulation tx • g • {0(m,m) + (l-\)-[0(fbk,m) + 0(out,m)]} tx • g • {0(m,a) + (l-l)-[0(fbk,a) + 0(out, a)]} M(System) Joint Turbo Global tx • g • {0(m,m) + 0(fbk,m) + 0(out, m)} tx - g • {0(m,a) + 0(fbk,a) + 0(out, a)} M(System) + 2 • tx • bpx Hybrid tx • {g • 0(m,m) + (l + g - \ ) . [0(fbk,m) + ... + 0(out, m) ] } tx • {g • 0(m,a) + (l + g-l). ' [0(fbk,a)+ ••• + 0(out, a)]} M(System) + 2*1 • tx • bpx The terms 0(out, m) = 7 • N • 2m • 2 2, Ofout, a) = 3 • N • 2m • (2 2 - 1) , and M(system) = 2k" + 2 m + 1 + N • [2kl • (n2 + 1) + 2 • n 2 + 2*2 + 2 m ] are used to simplify the notation. The number of exponential calls, tx - g - 0(m, e), is common to all model recovery methods, and is not a factor in the comparison of the different schemes. Chapter 5 Conclusions and Future Work Within this thesis, we have investigated iterative channel decoding. 
More specifically, we have investigated the impact of both turbo iterations and Markov model recovery iterations on the performance of the channel codes specified by the CDMA standard, IS-95. The development of the aforementioned system was accomplished through a series of progressive steps. First, Markov model aided Viterbi decoding was applied to a single convolutional code. Then, turbo decoding for the concatenated codes in the reverse channel link of the IS-95 standard was established. Finally, Markov model aided turbo decoding was developed through the combination of the two previous steps.

Commencing with the single encoder/decoder pair, we established a MAP decoding rule for convolutional codes through the deployment of a Viterbi decoding algorithm which employs a two-state Markov model of the source. The form of the branch metric shows that Markov model aided decoding makes a stronger impact at low SNR. When decoding a transmission of the one-bit DPCM Lenna image, a coding gain of 1.75 dB is observed at a BER of 10^-1, while a coding gain of 0.7 dB is observed at a BER of 10^-3. The results are based on a rate 1/3 convolutional code with constraint length 9 and generator polynomials g1 = 557_8, g2 = 663_8, g3 = 771_8. The trellis merging technique was employed to accommodate higher state Markov models. For a four-state model based on the two-bit DPCM encoded Lenna image, a 1.75 dB gain was observed at a BER of 10^-2, and a 1.0 dB gain was observed at a BER of 10^-3. The modeling process was justified by demonstrating that the decoding of ideal synthetically generated data
Nevertheless, at five iterations a gain of about 0.5 dB is observed Chapter 5 Conclusions and Future Work 86 at a B E R of 10~3, and a gain of about 0.7 dB is observed at a B E R of 10"1. When the receiver has a priori knowledge of the Markov model parameters, turbo decoding dramatically improves the B E R performance. After five iterations at a B E R of 10 , Markov model aided turbo decoding has a gain of 1.25 dB over non-Markov model aided turbo decoding, and an overall gain of around 1.8 dB over non-iterative decoding. In realistic systems, because the receiver has no precognition of the model parame-ters, one might consider recovering the model while performing turbo iterations; the model must be recovered iteratively; however, the small interleaver block size does not have enough data samples to allow sufficient error cancellation so that a reasonable model can be recovered from a noisy data set corresponding to one block. To recover the Markov model, and turbo decode, two levels of iterating are required, and here we have proposed and evaluated three such schemes. The first is the encapsulated iterative scheme. This technique entails encapsulating turbo iterations within larger model recovery scheme iterations, and re-initializing the turbo decoding process after each global iteration. A s in the case of the single decoder, the B E R converges after two model recovering iterations. However, this encapsulated scheme is very expensive in terms of the total number of iterations and hence the total decoding time delay. A n alternative approach is to save the extrinsic values computed for the whole file and use them in the next global iteration so that the turbo decoding and the model recovery are performed simultaneously. We refer to this technique as joint turbo-global iterations. Al though this approach requires more global model recovery iterations, the overall number of iterations are reduced. A third Chapter 5 Conclusions and Future Work 87 approach, aimed at reducing the total number of passes through the transmitted file, is proposed. This technique is a hybrid between the encapsulated system and the joint turbo-global system. The first global iteration is that of the encapsulated scheme, and the subsequent global iterations are that of the joint turbo-global iteration scheme. A l l three techniques converge to very similar results but have different advantages. Whi le the hybrid system reduces the total number of file iterations, the encapsulated system requires the least amount of memory, and the joint turbo-global demands the smallest number of total iterations per data bit. The application of Markov model aided turbo decoding has result in a significant improvement in B E R performance which was attained by modifying only the receiver of the IS-95 reverse link. Further study of Markov model aided turbo decoders is certainly warranted. In particular, the techniques developed here could be applied to turbo coders based on a parallel concatenation of codes. Typically these systems have a sufficiently large block size so that no iterative structure beyond that already in place is necessary to perform Markov model recovery. However, parallel concatenated systems usually use recursive systematic codes (RSC) . For non-recursive codes, the trellis merging technique works because the encoder state also contains information about the Markov model state. 
Unfortunately, the recursion in the RSC encoding process destroys the Markov model state information; as a result, the trellis merging technique cannot be applied directly as is. This can be overcome by a modification of the trellis, which renders the calculation more complex. We anticipate that the Markov model coding gain will be similar to that of the serial case, but simulations will have to be done to verify this.

Glossary

AWGN    Additive White Gaussian Noise
BCJR    Bahl-Cocke-Jelinek-Raviv
BER     Bit Error Rate
CDMA    Code Division Multiple Access
DCT     Discrete Cosine Transform
DMC     Discrete Memoryless Channel
DPCM    Differential Pulse Code Modulation
DS      Direct Sequence
FEC     Forward Error Correction
FFT     Fast Fourier Transform
FHT     Fast Hadamard Transform
IS-95   Interim Standard 95
JPEG    Joint Photographic Experts Group
JSCC    Joint Source-Channel Code
JSCD    Joint Source-Channel Decoder
JTG     Joint Turbo-Global
MAP     Maximum A Posteriori
PCS     Personal Communication Systems
QPSK    Quadrature Phase Shift Keying
RSC     Recursive Systematic Code
SNR     Signal to Noise Ratio
SOVA    Soft-Output Viterbi Algorithm

Bibliography

[1] N. Morinaga, M. Nakagawa, and R. Kohno, "New Concepts and Technologies for Achieving Highly Reliable and High-Capacity Multimedia Wireless Communications Systems," IEEE Communications Magazine, pp. 34-40, Jan. 1997.

[2] B. Sklar, "A Primer on Turbo Code Concepts," IEEE Communications Magazine, pp. 94-101, Dec. 1997.

[3] G. Forney Jr., Concatenated Codes. MIT Press, Cambridge, MA, 1966.

[4] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A Soft-Input Soft-Output APP Module for Iterative Decoding of Concatenated Codes," IEEE Communications Letters, vol. 1, no. 1, pp. 22-24, Jan. 1997.

[5] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," in Proceedings of ICC'93, Geneva, Switzerland, May 1993, pp. 1064-1070.

[6] J. Hagenauer, "The Turbo Principle: Tutorial Introduction and State of the Art," International Symposium on Turbo Codes, Brest, France, Sept. 1997, pp. 1-11.

[7] C. Douillard, M. Jezequel, C. Berrou, A. Picard, D. Didier, and A. Glavieux, "Iterative Correction of Intersymbol Interference: Turbo-Equalization," European Transactions on Telecommunications, vol. 6, pp. 507-511, Sept. 1995.

[8] S. Kaiser and J. Hagenauer, "Multi-Carrier CDMA with Iterative Decoding and Soft Interference Cancellation," in Proceedings of GlobeCom'97, Arizona, Nov. 1997, pp. 6-10.

[9] R. Robertson and T. Woerz, "A Novel Bandwidth Efficient Coding Scheme Employing Turbo Codes," in Proceedings of the IEEE International Conference on Communications (ICC'96), Dallas, USA, June 1996, pp. 962-967.

[10] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. IT-20, pp. 284-287, March 1974.

[11] D. Kwan and S. Kallel, "A Truncated Best-Path Algorithm," IEEE Transactions on Communications, vol. 46, no. 5, pp. 568-572, May 1998.

[12] J. Hagenauer and P. Hoeher, "A Viterbi Algorithm with Soft-Decision Outputs and its Applications," in Proceedings of the IEEE GlobeCom Conference, Dallas, TX, Nov. 1989, pp. 1680-1686.

[13] I. Marsland, P. Mathiopoulos, and S. Kallel, "Noncoherent Turbo-Equalization for Frequency Selective Rayleigh Fast Fading Channels," International Symposium on Turbo Codes and Related Topics, Brest, France, Sept. 1997, pp. 196-199.

[14] S. Pietrobon and A. Barbulescu, "A Simplification of the Modified Bahl Decoding Algorithm for Systematic Convolutional Codes," International Symposium on Information Theory and its Applications, Sydney, NSW, Nov. 1994, pp. 1073-1077. Revised Jan. 1996 (www.itr.unisa.edu.au/~steven/turbo).
[15] Y. Chang, "Parallel Decoding of Turbo Codes," Electronics Letters, vol. 32, no. 13, pp. 1188-1189, June 1996.

[16] J. Wolf, "Efficient Maximum Likelihood Decoding of Linear Block Codes Using a Trellis," IEEE Transactions on Information Theory, vol. IT-24, no. 1, pp. 76-80, Jan. 1978.

[17] J. L. Massey, "Foundations and Methods of Channel Coding," NTG-Fachberichte, vol. 65, pp. 148-157, Sept. 1978.

[18] J. L. Massey, "Joint Source and Channel Coding," Communication Systems and Random Process Theory, J. K. Skwirzynski, Ed. The Netherlands: Sijthoff and Noordhoff, pp. 279-293, 1978.

[19] G. Forney Jr., "Coset Codes - Part II: Binary Lattices and Related Codes," IEEE Transactions on Information Theory, vol. 39, pp. 1491-1513, Sept. 1993.

[20] D. J. Muder, "Minimal Trellises for Block Codes," IEEE Transactions on Information Theory, vol. 34, pp. 1049-1053, Sept. 1988.

[21] A. C. Kot and C. Leung, "On the Construction and Dimensionality of Linear Block Code Trellises," in Proceedings ISIT 1993, Jan. 1993, p. 291.

[22] J. G. Proakis, Digital Communications, 2nd Ed. New York: McGraw-Hill, Inc., 1995.

[23] E. Ayanoglu and R. M. Gray, "The Design of Joint Source and Channel Trellis Waveform Coders," IEEE Transactions on Information Theory, vol. IT-33, pp. 855-865, Nov. 1987.

[24] V. Vaishampayan and N. Farvardin, "Joint Design of Block Source Codes and Modulation Signal Sets," IEEE Transactions on Information Theory, vol. 38, no. 4, pp. 1230-1248, July 1992.

[25] V. Vaishampayan and N. Farvardin, "Optimal Block Cosine Transform Image Coding for Noisy Channels," IEEE Transactions on Communications, vol. COM-38, pp. 327-336, March 1990.

[26] K. Sayood, F. Liu, and J. D. Gibson, "A Constrained Joint Source/Channel Coder Design," IEEE Journal on Selected Areas in Communications, vol. 12, no. 9, pp. 727-731, Dec. 1994.

[27] R. C. Reininger and J. D. Gibson, "Backward Adaptive Lattice and Transversal Predictors in ADPCM," IEEE Transactions on Communications, vol. COM-29, pp. 74-82, Jan. 1985.

[28] D. Comstock and J. D. Gibson, "Hamming Coding of DCT Compressed Images over Noisy Channels," IEEE Transactions on Communications, vol. COM-32, pp. 856-861, July 1984.

[29] M. Bystrom and J. W. Modestino, "Combined Source-Channel Coding for Transmission of H.263 Coded Video with Trellis-Coded Modulation over a Slow-Fading Rician Channel," in Proceedings of the 1998 IEEE International Symposium on Information Theory, Cambridge, MA, August 1998, pp. 147-157.

[30] K. Sayood and J. C. Borkenhagen, "Use of Residual Redundancy in the Design of Joint Source/Channel Coders," IEEE Transactions on Communications, vol. 39, no. 6, pp. 838-846, June 1991.

[31] J. Hagenauer, "Source-Controlled Channel Decoding," IEEE Transactions on Communications, vol. 43, no. 9, pp. 2449-2457, Sept. 1995.

[32] W. Xu, J. Hagenauer, and J. Hollmann, "Joint Source-Channel Decoding Using the Residual Redundancy in Compressed Images," in Proceedings of ICC/SUPERCOMM '96 International Conference on Communications, Dallas, TX, June 1996, pp. 142-148.
[33] S. Emami and S. Miller, "DPCM Picture Transmission over Noisy Channels with the Aid of a Markov Model," IEEE Transactions on Image Processing, vol. 4, no. 11, pp. 1473-1481, Nov. 1995.

[34] T. Frey and M. Bossert, "A First Approach to Concatenation of Coding and Spreading for CDMA-Systems," IEEE Symposium on Spread Spectrum Technology and Applications, Sept. 1996, pp. 667-671.

[35] M. Meiler, J. Hagenauer, A. Schmidbauer, and R. Herzog, "Iterative Decoding in the Uplink of the CDMA-System IS-95(A)," International Symposium on Turbo Codes, Brest, France, Sept. 1997, pp. 208-211.

[36] F. R. Kschischang and V. Sorokine, "On the Trellis Structure of Block Codes," IEEE Transactions on Information Theory, vol. 41, no. 6, pp. 1924-1937, Nov. 1995.

[37] R. J. McEliece, "On the BCJR Trellis for Linear Block Codes," IEEE Transactions on Information Theory, vol. 42, no. 4, pp. 1072-1092, July 1996.

[38] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 429-445, March 1996.

[39] S. Kallel, R. Link, and N. Lo, "Markov Model Aided Convolutional Decoding," submitted to IEEE Transactions on Communications.
