Serial Concatenation of Simple Codes and Differential Modulations

by Jeebak Mitra
B.E., Birla Institute of Technology, 2002

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA
December 2005
© Jeebak Mitra, 2005

Abstract

Error-correcting codes are used in communication systems to provide reliable transmission over practical communication channels and thus reduce the number of retransmissions. The ability of a particular code to detect and correct errors at the receiver depends on the individual code structure as well as on the decoding algorithm. Codes that provide strong error-correcting capabilities usually involve a large number of operations on the information bits and hence are not easily decodable at the receiver. Rather than relying on a single high-complexity code, a concatenation of two or more simple codes can be used. Serially concatenated encoding structures are a very popular means of achieving close-to-capacity performance. The good performance of concatenated codes with practically viable decoding times is attributed to a technique known as iterative decoding. Iterative decoding can only be used when the component decoders are capable of generating soft information for the transmitted data. Soft information for symbol-by-symbol estimation is usually obtained using the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm or some suboptimal version of it. Differential encoding of the data at the transmitter is regarded as an effective approach to combat phase ambiguities at the receiver when using phase shift keying (PSK) schemes. Since the information is transmitted in the difference of phases rather than the absolute phase of the transmitted symbol, there is more protection against an unknown channel phase at the receiver.
The serial concatenation of an error-correcting code with a differential encoder has been found to provide very good performance for various kinds of channel conditions. In this work we propose and analyse the design of a serially concatenated structure which is very simple to implement and is particularly favourable for low-power scenarios such as deep-space applications. The proposed system comprises a concatenation of simple parity check codes and a differential encoder (DE). Individually, these codes are very weak as they provide minimal error-correcting capabilities. We optimise system design parameters through extensive analysis of the system structure using extrinsic information transfer (EXIT) charts. It is shown through simulations and analytical results that the proposed concatenated codes provide performance very close to capacity. Comparison of these simple parity check codes with certain other very powerful outer codes, such as Low Density Parity Check (LDPC) codes, shows the superior performance of the proposed codes in spite of much lower decoding complexity. For the case where the channel phase is unknown or perfect synchronisation is not attainable at the receiver, several estimation algorithms have been proposed in the literature to combat the effects of channel phase. These algorithms can usually be divided into two categories: those that use pilot symbols, which increase the transmission overhead, and those without any explicit channel estimation mechanism. Here we adopt the second category and consider two approaches. One is based on quantization of the unknown phase and is computationally intensive, with complexity depending on the number of quantization levels. The other is blind estimation based on information derived from the received symbols. Along the lines of the second approach, we propose a simple method for noncoherent decoding that uses a posteriori information to estimate the channel phase.
It is shown that the method works well for the cases of low to moderate variations in channel phase.

Contents

Contents iv
List of Tables viii
List of Figures ix
List of Abbreviations xiv
1 Introduction 1
1.1 Contributions 5
1.2 Thesis Outline 6
2 Background 9
2.1 System Description 9
2.2 Channel Model 12
2.3 Labeling for the Signal Constellation 13
2.4 Iterative Decoding and Related Concepts 18
2.4.1 Soft-Output Decoding 19
2.4.2 Extrinsic Information 23
2.4.3 Tools for Iterative Decoding 24
2.5 Decoding for Parity Check Codes 27
2.5.1 Graphical Representation of Codes 27
2.5.2 Belief Propagation Decoding 28
2.6 EXIT Charts and Convergence Analysis 32
2.6.1 Mutual Information Transfer Characteristics of Component Decoders 35
2.6.2 Convergence Prediction using EXIT Charts 37
2.7 BER Simulation Methods 39
3 Code Construction and Coherent Decoding 42
3.1 Code Construction 43
3.1.1 Code I 44
3.1.2 Code II 45
3.2 Iterative Decoding of the Serially Concatenated System 48
3.2.1 Differential M-PSK Decoding 49
3.2.2 Decoding of Parity Check Codes 52
3.3 EXIT Chart Analysis 54
3.3.1 DPSK Decoder Transfer Characteristics 55
3.3.2 Parity Check Decoder Transfer Characteristics 56
3.3.3 Convergence Prediction 58
3.4 Mixed Labeling 60
4 Non-Coherent Decoding 62
4.1 Channel Phase Models 63
4.2 Decoding Algorithms 64
4.2.1 Howard & Schlegel Method 66
4.2.2 Peleg-Shamai-Galan (PSG) Method 70
4.2.3 Rong Chen Method 73
4.3 Our Approach - α Method 75
5 Results and Discussion 80
5.1 Design Optimization Using EXIT Chart Analysis 81
5.1.1 Coherent Decoding 81
5.1.2 Noncoherent Decoding 87
5.2 Mixed Labeling 96
5.3 Robustness Analysis 99
5.4 BER Simulation Results 102
5.4.1 Coherent Decoding 104
5.4.2 Noncoherent Decoding 109
6 Conclusions 114

List of Tables

5.1 Minimum required SNR for reliable transmission using an 8-PSK constellation at a given rate for Code I codes 82
5.2 Minimum required SNR for reliable transmission using an 8-PSK constellation at a given rate for Code II codes 82
5.3 Threshold symbol SNRs for Code I codes - coherent decoding 83
5.4 Threshold symbol SNRs for Code II codes - coherent decoding 85
5.5 Comparison with capacity 86
5.6 Comparison of various methods for a π/16 phase offset. Differential 8-PSK is used as the inner code and a rate 10/21 parity check is used as outer code 88
5.7 Howard & Schlegel method - Threshold symbol SNRs for Code II codes for constant phase offset of π/16 90
5.8 Alpha method - Threshold symbol SNRs for constant phase offset of π/16 92
5.9 Performance improvement using mixed labeling - α method for Wiener phase noise with phase offset π/16 96
5.10 Block length used for BER simulations 102

List of Figures

1.1 Serial concatenation of error correcting code and differential encoder 3
2.1 Block diagram of serially concatenated transmitter 10
2.2 Block diagram for receiver of the serially concatenated system 11
2.3 8-PSK signal labelings 14
2.4 4-PSK signal labeling 18
2.5 Block diagram for a serially concatenated transmission system employing iterative decoding at the receiver 19
2.6 Example of a bipartite graph 28
2.7 Example iterative decoding system 32
2.8 Plot of the J(σ) function 36
3.1 Code construction for Code I (r/(2r - 1)) codes 45
3.2 Code construction for Code II (r/(2r + 1)) codes 46
3.3 Decoder structure 48
3.4 Differential 4-PSK trellis section 50
3.5 Parity decoding at a single node 52
3.6 Mutual information transfer chart for APP DPSK decoder at 10 log10(Es/N0) = 6.5 dB 55
3.7 Mutual information transfer chart for Code I 57
3.8 Mutual information transfer chart for Code II 57
3.9 Decoding trajectory of the serially concatenated system 59
3.10 Example of design with mixed labeling. Anti-Gray and Gray labeling have been used in a 1:1 ratio 61
4.1 EXIT chart for DPSK decoder with different phase offsets at symbol SNR 10 log10(Es/N0) = 6.5 dB.
HSL and a rate 10/21 outer parity check code have been used as an example case 65
4.2 Block diagram for Howard & Schlegel APP channel estimation method 66
4.3 Channel estimation with Howard & Schlegel method for constant phase offset. The figure shows the capability of the method to correctly estimate the channel phase for a rate 2/3 outer code at symbol SNR 10 log10(Es/N0) = 6.8 dB. Except for a phase offset of π/8, the algorithm is able to lock to the correct channel phase in less than 20 iterations 69
4.4 Trellis diagram for PSG method. In Figs. 4.4(a) and 4.4(b) an example case with M = 8 and L = 4 is shown. Thus the original 8-state trellis is now a 32-state trellis. The states (sk) represent the states of the non-expanded trellis. Fig. 4.4(b) shows the restriction of possible previous states in the supertrellis to only 3 states for a corresponding previous state of the non-expanded trellis 71
4.5 Trellis section for noncoherent decoding with Rong Chen method 74
4.6 Trellis section for α method 78
4.7 EXIT chart for channel estimation with α-method. Phase noise variance σΔ² = 0.001 and phase offset π/16 79
5.1 EXIT chart for Code II codes depicting threshold SNRs with a rate 10/21 code for coherent decoding 84
5.2 Comparison with capacity for the proposed serially concatenated structure. The SNR thresholds for best single labeling and mixed labeling have been plotted 86
5.3 EXIT chart for phase estimation with Howard & Schlegel method - the constant offset case. The figure shows the EXIT chart for an offset of π/16 with an outer 10/21 parity check code. The symbol SNRs (10 log10(Es/N0)) at which the thresholds are observed for each labeling scheme are shown in the legend 91
5.4 EXIT chart showing decoder failure using Howard and Schlegel method for combating Wiener phase noise, σΔ² = 0.001.
Outer code is rate 10/21 and symbol SNR 10 log10(Es/N0) = 5.5 dB 93
5.5 EXIT chart for α-method for a rate 10/21 outer parity check code with Wiener phase noise (σΔ² = 0.001) 94
5.6 EXIT chart for PSG method for a rate 10/21 outer parity check code and phase noise variance σΔ² = 0.001 using 4 levels of quantization 95
5.7 EXIT chart for α method with 1:1 mix of Gray labeling and NAL for the π/16 phase offset. The threshold SNR for the Gray and NAL mixture case is 0.2 dB better than the best single labeling (Ungerboeck) 97
5.8 EXIT chart for α method with 1:1 mix of Gray labeling and Ungerboeck labeling with a 10/21 outer code for coherent decoding. For a symbol SNR of 10 log10(Es/N0) = 0.6 dB neither of the single labelings exhibits convergence. However, a 1:1 mix of these two provides a threshold SNR at 0.6 dB 98
5.9 This figure is a zoomed-in version of Fig. 5.2. The figure clearly emphasises the fact that mixed labeling allows us to achieve performance closer to capacity than using just a single labeling 99
5.10 Robustness analysis: effect of parameter variation for α-method and PSG method 100
5.11 Average number of iterations performed per block of data using fast decoding for the coherent case 103
5.12 BER curves for coherent decoding of Code I with Ungerboeck and HSL - this graph clearly depicts the problem of error floor that Ungerboeck labeling suffers from and the improvement afforded by HSL. The error floor exists for HSL also but at a lower error rate 105
5.13 Comparison of rate 10/21 Code II code performance with LDPC-DE concatenation of Franceschini et al. for 4-PSK. Block lengths of 6000 and 12000 have been compared for a rate 10/21 code with a mixture of Ungerboeck and Gray labeling in a 1:1 ratio 106
5.14 Comparison of rate 10/21 Code II code performance with LDPC-DE concatenation of Franceschini et al.
Our code gives a better performance by 0.6 dB at the cost of a very low loss in code rate of 0.024 107
5.15 Frame error rate results for the coherent decoding case. The figure shows the FER for Code I (rate 3/5 and 10/19) codes and Code II (rate 2/5 and 10/21) codes with Ungerboeck labeling. The FER for Code I codes floors out at much higher values than Code II codes 108
5.16 Performance results for Code II codes under coherent conditions using best single labeling for a given code rate and mixed labeling with a 1:1 mix of Gray and NAL 109
5.17 Effect of using an approximate normalised metric for implementing the boxplus operation for BP decoding 110
5.18 Performance of the α-method for the constant offset case. An outer code of 10/21 and a 1:1 Gray-NAL mixture labeling for the inner code have been used and the phase offset is π/16. The effect of varying α can easily be seen as the performance visibly deteriorates from α = 0.99 to α = 0.90 111
5.19 Performance of the α-method using 4-PSK.
An outer code of 10/21 and a 1:1 Gray-Ungerboeck mixture labeling for the inner code have been used 112
5.20 Performance comparison of the α-method and PSG method using 8-PSK constellation 113

List of Abbreviations

APP A Posteriori Probability
AWGN Additive White Gaussian Noise
BCJR Bahl-Cocke-Jelinek-Raviv
BER Bit Error Rate
BICM Bit Interleaved Coded Modulation
CSI Channel State Information
DE Differential Encoder/Encoding
DPSK Differential Phase Shift Keying
EXIT Extrinsic Information Transfer
FER Frame Error Rate
HSL Howard Schlegel Labeling
SSP Semi Set Partitioning
LLR Log Likelihood Ratio
MAP Maximum A Posteriori
ML Maximum Likelihood
MSDD Multiple Symbol Differential Detection
NAL Nuriyev Anastasopoulos Labeling
PDF Probability Density Function
PSK Phase Shift Keying
SISO Soft Input Soft Output
SOVA Soft Output Viterbi Algorithm
SNR Signal-to-Noise Ratio
TCM Trellis Coded Modulation

Acknowledgements

It gives me great pleasure to convey my gratitude to Dr. Lutz Lampe for his able guidance throughout the course of this work. I have learnt a lot from him and consider it a privilege to have worked with him. His constant encouragement and support helped me in overcoming the challenges I faced while pursuing this project. The numerous discussions we had were very fruitful and the teamwork was instrumental in allowing me to achieve the goals that I set out for. I would also like to thank all the professors of the Dept. of ECE, UBC who taught me various courses and helped me enhance my knowledge. I thank all my colleagues in the Communication Theory Group, Dept. of ECE, UBC for creating a friendly and stimulating environment. I would also like to thank my family for being very patient and for believing in me during all these years. Without their love and support I would not have been the person I am.

Chapter 1

Introduction

Modern communication systems aim at reliable transmission of data at very high rates.
This task is usually difficult as the transmission of data through a communication channel suffers various kinds of impairments, and thus the received signal is a much degraded version of the original transmitted signal. In a wireless environment there is an added constraint whereby devices should consume minimal power for a longer life. Therefore, there is a need for efficient means to transfer data from the transmitter to the receiver with minimum use of energy. Despite noisy communication channels, data can be transmitted almost error-free using channel coding, as proved by Shannon in 1948 in his classical paper [1]. Since then, the goal of information and coding theory has always been to come as close as possible to the Shannon limit performance with acceptable complexity. Several powerful error correcting coding techniques such as Low Density Parity Check (LDPC) codes [2] and Turbo codes [3] have been invented that allow near-capacity error-free transmission over the additive white Gaussian noise (AWGN) channel. Turbo codes are a concatenation of error correcting codes that use a decoding technique known as iterative decoding at the receiver. Concatenated coding schemes were first studied by Forney in 1966 [4] as a class of codes whose probability of error decreased exponentially at rates less than capacity, while decoding complexity increased only algebraically. However, interest in iterative decoding for concatenated codes reached unprecedented levels only after the introduction of Turbo codes [3] in 1993. Iterative decoding refers to a recursive decoding technique for concatenated transceiver structures wherein the component decoders generate soft information on the transmitted bits. An exchange of probabilities (or some other representative information, cf. Section 2.4) on the transmitted bits takes place between component decoders in an iterative fashion, and with every iteration the information gets more refined.
This process continues until a stopping condition is reached. With performance very close to theoretical Shannon capacity limits, Turbo codes have greatly emphasized the role played by iterative decoding for concatenated coding systems using soft output algorithms. Turbo codes are parallel concatenated convolutional codes, where the information bits are encoded twice. Recently, Benedetto et al. have obtained high coding gains at very low bit error rates by using serially concatenated codes (SCCs) with iterative decoding [5]. Serially concatenated schemes have been extensively studied, e.g. in [6], and are reported to provide good performance in low signal-to-noise ratio (SNR) environments. Channel coding is usually followed by modulation, whereby the coded bits are converted to symbols to better match the transmitted symbols to the channel. M-ary phase shift keying (MPSK) is a popular digital modulation technique as it allows for a constant modulus of all transmitted symbols. A constant modulus implies that low-cost linear amplifiers can be used at the receiver, thus minimizing the overall cost of the receiver. Binary phase shift keying (BPSK) is the most basic modulation, where bits are mapped to two antipodal (phase difference 180°) points on the complex plane. BPSK affords the least spectral efficiency of 1 bit/channel use. In order to maximise the spectral efficiency of a system, higher order modulation schemes are usually considered. Coded modulation (CM) (first suggested by Massey [7]) is a bandwidth efficient method which offers the advantages of a concatenated structure as well as the use of higher order modulation schemes. In coded modulation, coding and modulation are treated as inseparable parts of a single system. Ungerboeck was the first to propose Trellis Coded Modulation (TCM) as a feasible means of integrating coding and modulation.
Since then, coded modulation has been regarded as an effective means of combining higher order modulation and coding to achieve coding gain without reducing data rate. When using a higher order modulation scheme, the Euclidean distance between the signal constellation points gets substantially reduced and hence the system is more susceptible to errors due to channel phase. A common method to overcome the problems of phase ambiguity and sensitivity to tracking errors of M-PSK systems is to differentially encode the information (hence known as differential PSK (DPSK)) before transmission. The information is carried by the difference in phase between adjacent transmitted symbols, rather than by the absolute phase of each symbol. Thus differential encoding of a message introduces memory into the modulation scheme. The benefits of coded modulation and differential encoding (DE) can be combined by concatenating an error correcting code and a DPSK encoder with an interleaver in between. This structure has gained immense popularity over the last decade and has been investigated by many, cf. e.g. [8],[9],[10],[11],[12],[13]. This is partly due to the fact that a differential encoder provides a recursive structure, which is mandatory for performance gains in serially concatenated structures. Thus the choice of a differential encoder as the inner code is particularly attractive, as it is the simplest code with a recursive structure. The basic structure for such an encoder is shown in Fig. 1.1.

Figure 1.1: Serial concatenation of error correcting code and differential encoder.

In this work the effort has been to construct good, simple and soft-decodable codes. Thus we choose parity check codes as they are very basic codes and the decoding can be performed with great ease as well. The intent here is to exploit the gains obtained from the concatenation of simple codes through iterative decoding.
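To make the simplicity of such component codes concrete, the following sketch (not the thesis code) encodes with the [3,2,2] single parity check code that appears later in the text: every two information bits are extended by one even-parity bit, so each codeword has even weight and any single bit error is detectable (minimum distance 2). The function names are illustrative only.

```python
def spc_encode(bits):
    """Append an even-parity bit to every block of 2 information bits ([3,2,2] SPC code)."""
    assert len(bits) % 2 == 0
    out = []
    for i in range(0, len(bits), 2):
        block = bits[i:i + 2]
        out.extend(block)
        out.append(sum(block) % 2)  # parity bit makes each 3-bit codeword even-weight
    return out

def spc_check(codeword):
    """A codeword is valid iff its bits sum to zero modulo 2."""
    return sum(codeword) % 2 == 0

info = [1, 0, 1, 1]
code = spc_encode(info)
print(code)  # [1, 0, 1, 1, 1, 0]: codewords [1,0,1] and [1,1,0], both even parity
```

The error-correcting power of one such code is minimal; the point of the thesis is that the gains come from concatenating it with a recursive inner code and decoding iteratively.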
For the design of the parity check code the main consideration was the distance spectral properties of the code, so that the number of low-weight error events can be reduced. Tullberg et al. emphasize the importance of the free distance (dmin) of a code in deciding the effectiveness of serially concatenated structures with an inner accumulate code in [15] through a density evolution analysis. Furthermore, the asymptotic frame error rate of such structures is also dependent on the dmin of the outer code [15]. Simple parity check (SPC) codes have also been used by Jing Li et al. [14] to introduce a new family of codes known as product-accumulate (PA) codes, which also have an accumulator as their inner code. Howard and Schlegel use a similar concatenated structure in [9] which is shown to provide very good performance. Franceschini et al. use a concatenation of LDPC and DE in [16]. Also, Turbo codes have been used in concatenation with a DE in [17]. For communication over a phase noisy channel it is important to be able to account for the channel phase. Coherent decoding assumes that information regarding the channel state is accurately available at the receiver. In PSK transmission, channel state information (CSI) refers predominantly to the channel phase, because amplitude fading merely scales the noise, and a good estimate of the fading amplitude is the received amplitude itself for SNR sufficient for reliable decoding. In practical systems carrier phase tracking is difficult and is prone to false locks and error propagation. Therefore, another scenario for decoding is without CSI. This is done either by transmitting pilot symbols as a training sequence or by performing noncoherent decoding, i.e. without any explicit mechanism to estimate the channel phase. Pilot symbols add to the transmission overhead of the system and hence noncoherent decoding emerges as an attractive option.
Phase estimation in systems without pilot symbols has been studied extensively and significant contributions have been made, e.g. in [11], [17], [18]. For the noncoherent decoding case there are two different approaches. One tries to model the channel phase as a Markov process observed in noise. The Markov model works well when there is a sufficient number of states to model the channel phase. The underlying assumption here is that the channel phase takes only these discrete values. Thus, as long as the quantization levels are close enough to the actual values of the channel phase, the algorithm will exhibit convergence. The second method is a decision-directed approach and relies on the use of soft information on the data computed at the decoder. For example, [9] suggests that the a posteriori information available at the output of the decoders can be used to gain information about the channel. However, it should be noted that the quality of the information is directly dependent on the reliability of the information generation by the decoders. Thus we will have imperfect channel estimates if the reliability information generated is not indicative of the actual transmitted bits. This situation is a little tricky as the quality of the reliability information itself is dependent on how well the channel is estimated. Also, if the channel phase has a high variance, the estimation method will be effective only if the lag between the estimated phase and the actual phase can be kept at a minimum.

1.1 Contributions

The following contributions have been made in the work presented here:

• Two general classes of simple parity check codes have been introduced and are shown to be a low complexity means to achieve close-to-capacity performance when concatenated with a differential encoder.

• Design rules have been derived for this system for optimal performance at different code rates through an extensive analysis using EXIT charts.
• The concept of mixed labeling for transmitting a block has been proposed and the benefits derived thereof are presented in the form of EXIT charts and BER simulations.

• Differential encoding allows for noncoherent decoding of the transmitted symbols. In this work, an estimation technique that uses the differential encoding property for noncoherent decoding on channels with phase noise, as presented in a reputed IEEE conference, has been shown to be invalid. The failure of the method has been proven through EXIT chart analysis and the results have been confirmed through private correspondence [19] with the authors of the mentioned paper [9].

• A simple channel estimation algorithm based on the use of a forgetting factor has been proposed and is shown to provide good performance up to moderate levels of channel phase noise variance. The method has a much reduced computational load as compared to benchmark algorithms in the domain.

• Use of soft BER estimation based on a posteriori probability to develop a stopping criterion for iteratively decoded systems is shown to be an effective means to reduce decoding time.

1.2 Thesis Outline

The thesis is organised as follows:

Chapter 2 presents the background and concepts that form the basis of our work. We begin with a description of the system design (see Section 2.1) and the channel model (see Section 2.2). This is followed by a description of the signal constellations used and the various labeling schemes that have been analysed in Section 2.3. Section 2.4 introduces the basic concepts of iterative decoding of a concatenated system and the use of soft decoding algorithms for such systems. The graphical representation of parity check codes and graph-based decoding is treated in Section 2.5.
Section 2.6 describes the use of EXIT charts as a tool for analysing the performance of iterative decoding in communication systems and their role in predicting convergence through graphical visualisations of mutual information transfer characteristics. The chapter concludes with a description of the stopping criterion that has been adopted in this work to speed up decoding. In Chapter 3 we examine the performance of the system under the assumption of perfect knowledge of the channel conditions. The two proposed code families are introduced in Section 3.1 and their properties are analysed. In Section 3.2, the application of iterative decoding to the proposed system structure is described and the steps carried out by each of the component decoders are explained in detail. This is followed by an extensive EXIT chart analysis of the proposed system for the case of coherent decoding in Section 3.3. The EXIT analysis enables us to appreciate the finer aspects of the code families introduced in Section 3.1. Section 3.4 introduces the concept of mixed labeling for the constellations and how it can be used to design systems with better performance. Chapter 4 considers the case of no a priori channel information, and hence we focus on strategies to compensate for the phase noise. The two phase noise models that have been used to test the approaches are described in Section 4.1. Benchmark algorithms to deal with channel phase on phase noisy AWGN channels are described and analysed through EXIT charts in Section 4.2. Finally, we present a very low complexity approach to non-coherent decoding for phase noisy channels in Section 4.3. We prove through EXIT chart simulations the capability of the proposed method in dealing with phase noise. The results of the analysis and a discussion on the optimum design parameters are the subject of Chapter 5. We establish the design rules through an extensive analysis for the coherent decoding case in Section 5.1.1.
The results for the analysis of the noncoherent case are presented in Section 5.1.2. Next, we prove through EXIT analysis the advantages derived from the use of mixed labeling for our serially concatenated structure. A robustness analysis of the noncoherent algorithms is done in Section 5.3. We conclude the chapter with the performance results of our scheme as indicated by BER simulations and compare it with some recent results in the literature. Finally, Chapter 6 presents the conclusions of the design and analysis of the proposed structure and algorithms.

Chapter 2

Background

Turbo differential-PSK systems have been investigated by many, cf. e.g. [13],[12], as they provide good performance in the low SNR regime. The primary purpose of this chapter is to describe the typical iteratively decoded structures that have been investigated and briefly introduce the techniques that will be applied to gain a better understanding of the design and analysis of such structures. We start with the system description and channel model and then describe the various labeling schemes for the signal constellations. This is followed by a description of iterative decoding algorithms and a brief review of EXIT chart analysis and its use as a tool for iterative decoding schemes. Lastly, a soft BER estimation method is described, as opposed to the conventional hard BER estimation, which helps in speeding up the decoding process.

2.1 System Description

The system under consideration falls under the general family of serially concatenated structures that have a differential encoder as the inner encoder and have been extensively investigated, e.g. in [9],[11],[12],[13],[20], for different outer constituent codes and channel models. We use simple parity check codes as the outer code of serially concatenated transmission systems.
We are motivated by the objective of a low complexity design as afforded by the scheme in [9], where a simple [3,2,2] parity check code has been shown to achieve good performance at a much lower computational cost compared to schemes that use more powerful error-correcting codes. 4-PSK and 8-PSK constellations are used as modulation schemes for the differential encoder. 4-PSK constellations provide greater robustness against phase ambiguities as they have a greater Euclidean distance, and thus are expected to perform better with uncertain channel phase, while 8-PSK schemes allow for higher data rates as more bits are sent per channel use.

Figure 2.1: Block diagram of serially concatenated transmitter.

The overall transmission system is as shown in Fig. 2.1, where the sequence of information bits comes from a data source. As shown in Fig. 2.1, the information bits pass through a parity check encoder and we obtain the coded bits ck. The parity check encoder in Fig. 2.1 is a general parity check encoder that can encode at a variety of code rates. The code construction for the parity check encoder is described in Chapter 3. The coded bits are then interleaved and passed on to a signal mapper. An interleaver is usually used to provide protection against total wipeout of contiguous chunks of data due to burst errors. In iteratively decoded systems, the interleaver has a special role to play as it decorrelates information during the soft decoding process. The signal mapper maps the interleaved bits c'k to a point of the M-PSK constellation, where M represents the cardinality of the constellation. The mapping is one-to-one and thus each distinct combination of bits is mapped to a distinct point of the constellation. There are many ways in which the mapping can be done, and the various schemes used are described in Section 2.3.
The mapping is followed by differential encoding, which acts as a rate-1 non-systematic recursive encoder. A recursive inner code is necessary to provide interleaving gain in a serially concatenated system [6]. The operation of the differential encoder can be described as

x_k = w_k x_{k-1},  (2.1)

where w_k is the differential symbol and x_k is the transmitted symbol. The information is carried in the difference of phases between consecutive transmitted symbols rather than in the phase of a transmitted symbol itself, as in non-differentially encoded M-PSK transmission. For example, if the phase of x_{k-1} is φ_{k-1} and the phase of w_k is θ_k, the phase φ_k of x_k is given by φ_k = θ_k + φ_{k-1}. The overall system is closest to the one used by Howard and Schlegel in [9]. In [9] a rate 2/3 single parity check code and a differential 8-PSK encoder are used as outer and inner encoder, respectively. Thus the overall rate of the code is 2/3 and the system aims at achieving a spectral efficiency of 2 bits/channel use. In Chapter 3 the 2/3 parity check code of [9] is shown to fall in a general family of codes described in Section 3.1.1.

Figure 2.2: Block diagram for the receiver of the serially concatenated system (inner M-DPSK APP soft decision decoder, deinterleaver, outer parity APP soft decision decoder).

The receiver structure is shown in Fig. 2.2. The channel outputs y_k are processed by the soft APP DPSK (inner) decoder. The inner decoder is based on the BCJR algorithm [21] as explained in Section 2.4.1. In order to avoid numerical instabilities all computation is performed in the log domain and the APP outputs of the inner decoder are symbol log-likelihood ratios (LLRs). Extrinsic information for the parity check decoder is then obtained from the APP outputs of the APP DPSK decoder after removing the a priori inputs. This is followed by a module that converts the symbol LLRs to bit LLRs. This conversion is necessary as the inputs to the parity decoder need to be bit LLRs.
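As an aside, the differential encoding rule of Eq. (2.1) and its phase-ambiguity robustness can be sketched in a few lines of Python. This is an illustrative sketch, not code from the thesis; the function names are mine.

```python
import numpy as np

def differential_encode(w, x0=1.0 + 0.0j):
    """Differential encoding x_k = w_k * x_{k-1} (Eq. 2.1).

    w  : array of unit-magnitude M-PSK symbols w_k (the differential symbols)
    x0 : reference symbol that the first difference is taken against
    """
    x = np.empty(len(w), dtype=complex)
    prev = x0
    for k, wk in enumerate(w):
        prev = wk * prev          # phases add: phi_k = theta_k + phi_{k-1}
        x[k] = prev
    return x

def differential_decode(x, x0=1.0 + 0.0j):
    """Recover w_k = x_k / x_{k-1}; insensitive to a common phase rotation."""
    xs = np.concatenate(([x0], x))
    return xs[1:] / xs[:-1]
```

Rotating the whole transmitted sequence (and the reference) by a common phase leaves the recovered differential symbols unchanged, which is precisely why differential encoding tolerates an unknown channel phase.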
The working of the conversion module is described further in Section 2.4.3. The bit LLRs thus generated are deinterleaved and passed on to the parity decoder. The parity decoder processes the soft information received from the M-PSK APP decoder using a forward-backward belief propagation algorithm and will be described further in Section 2.5.

2.2 Channel Model

We use a complex AWGN channel model with noise variance σ_n²/2 in each dimension. The channel also causes a rotation of the symbols by an amount governed by the channel phase. The received symbol y_k at time k thus consists of the transmitted symbol x_k multiplied with the channel coefficient h_k plus complex noise, i.e.,

y_k = h_k x_k + n_k.  (2.2)

h_k can be represented in polar form as h_k = |h_k| exp(jθ_k), where |h_k| is the magnitude of the channel coefficient and θ_k is the channel phase at the kth instant. We consider h_k to be of unit magnitude throughout, thus |h_k| = 1, and the statistical properties of θ_k are characterised in Section 4.1, where we describe the channel phase models. As far as coherent decoding is concerned, the specifics of the phase model are irrelevant, as perfect knowledge is assumed and hence the phase noise can be accurately compensated for.

2.3 Labeling for the Signal Constellation

The concatenation of a modulation code and an error correcting code in the system considered implies that the modulation code acts like an inner encoder. The bits are mapped to symbols after being interleaved, and hence the system can be considered a form of bit-interleaved coded modulation (BICM), as it is popularly known [22]. The decoding is performed iteratively, which categorises it as BICM with iterative decoding, or BICM-ID. First, we consider an 8-PSK signal constellation; there are thus a number of ways in which the mapping from bits to signal points on the unit circle can be done.
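A minimal simulation of the channel model of Eq. (2.2), under the unit-magnitude assumption |h_k| = 1, can be sketched as follows (illustrative code; names are mine):

```python
import numpy as np

def awgn_phase_channel(x, sigma2_n, theta, rng=None):
    """y_k = h_k x_k + n_k with h_k = exp(j*theta_k), |h_k| = 1 (Eq. 2.2).

    Complex AWGN with total variance sigma2_n, i.e. sigma2_n/2 per dimension.
    theta may be a scalar (constant phase) or a per-symbol array.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=complex)
    h = np.exp(1j * np.broadcast_to(theta, x.shape))
    n = np.sqrt(sigma2_n / 2) * (rng.standard_normal(x.shape)
                                 + 1j * rng.standard_normal(x.shape))
    return h * x + n
```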
The choice of mapping is usually a crucial design parameter in iterative decoding schemes, as it influences the gain over iterations. For a mapping scheme that maps m bits onto a 2^m-ary signal constellation, the search space consists of (2^m)! different possibilities. Thus, with increasing constellation order, an exhaustive computer search to find suitable mappings becomes intractable due to high complexity. In [24], Schreckenbach et al. introduced an optimization algorithm for symbol mappings for BICM-ID based on the Binary Switching Algorithm (BSA). The algorithm finds the local optimum of a given cost function and distinguishes between the cases of no a priori and ideal a priori information. Ideal a priori information implies perfect knowledge about all the bits except the one which is being detected. The signal constellation is thus reduced to a pair of symbols which differ in only one bit. If the labeling is properly chosen, the inter-symbol Euclidean distance can be greatly increased through a priori knowledge. The optimized 8-PSK labeling found in [24], however, is not optimal in our case, as differential encoding had not been considered there. Differential encoding renders the symbol probabilities non-uniform and thus impacts the overall cost function.

Figure 2.3: 8-PSK signal labelings (Ungerboeck, Gray, Antigray, Semi Set Partitioning, HSL and NAL).

8-PSK Labelings

The labeling maps that have been considered for 8-PSK constellations in this thesis are shown in Fig. 2.3. Each mapping is different in terms of the Bit Error Rate (BER) performance it provides and the nature of the waterfall curve observed in the BER charts (see Section 5.1). In the following, a brief description of each labeling depicted in Fig.
2.3 is provided and the advantages of using each labeling are mentioned.

• Ungerboeck labeling: Ungerboeck labeling, also known as natural labeling, is a mapping where the three bits are mapped to the signal point whose phase corresponds to their integer (octal) equivalent l, i.e. the mapped symbol at the kth instant is w_k = exp(j2πl/8). As will be seen in Chapter 3, both through EXIT charts and BER curves, Ungerboeck labeling provides an earlier onset of the Turbo cliff compared to all other labelings considered. However, if the coding scheme does not have measures to reduce the number of low weight codewords, one might see a higher error floor with this labeling [9].

• Gray labeling: In Gray labeling, the signal points are labeled in such a way that the Hamming distance between neighbouring points is smallest. The motivation for this comes from the fact that even if the decision rule at the receiver decides in favour of the nearest neighbour of the actual transmitted symbol, the number of bit errors observed will be minimal. Although Gray labeling has been found to be optimum for BICM without iterative decoding [23], it is not expected to provide vast improvements in performance when iterative decoding is used. This will be more evident later, from the flatness of the mutual information transfer charts; a brief explanation is provided in Section 2.6.

• Antigray labeling: In Antigray labeling, the idea is to have maximum Hamming distance between nearest neighbours. As opposed to Gray labeling, Antigray labeling has been found to work better in iterative schemes, as the mutual information transfer through the decoders changes considerably with increasing a priori knowledge [25]. The performance of an iterative decoding scheme depends on how well the a priori information is used to obtain a conditional probability for the transmitted bit.
The partitioning of mutual information among the conditional sub-channels (see Section 2.6) has a strong impact on the behaviour of a particular mapping in an iterative decoding scheme. For Antigray labeling the distribution of mutual information over the sub-channels is asymmetric, which provides an explanation for its better performance in iterative decoding systems.

• Semi Set Partitioning (SSP) labeling: SSP labeling has been considered in [23] and has been reported as the universally optimum labeling for BICM-ID [23],[26]. The optimality criterion in [23] is the asymptotic coding gain. With perfect knowledge of the rest of the bits, the minimum Euclidean distance between subsets of symbols with the same bit at a given position also increases. The asymptotic gain in Euclidean distance has been computed in [23] and has been shown to be maximum for SSP labeling.

• Howard and Schlegel labeling (HSL): This labeling has been proposed in [8] to obtain a mapping that generates odd weight error events. In [9], the same setup as [8] has been used, where the outer code is a single parity [3,2,2] code. The very nature of the parity check code yields sequences with even Hamming weights as input to the modulator. Thus there is a high probability that other valid codewords will be chosen in the event of a 2-branch error, as the trellis is fully connected. If the mapping is such that 2-branch error events have odd weight, the mean squared Euclidean distance can be improved, resulting in a lower error floor. It has been shown in [8] that HSL provides for this and thereby has a much lower error floor compared to Ungerboeck labeling, for example.

• Nuriyev and Anastasopoulos labeling (NAL): NAL is an example of a rotationally invariant labeling for non-coherent channels, as introduced in [27].
Rotational invariance was originally introduced in the context of coherent detection as a method for resolving phase ambiguities for channel estimation without pilot symbols. If a particular modulation is invariant to phase rotations by some angle φ, then the phase can be estimated modulo φ only, without pilot symbols. Rotationally Invariant (RI) codes are codes that are invariant to rotations by the base angle, or any multiple of the base angle, applied to all codewords. An RI encoder is thus the one-to-many mapping from an input sequence to all rotated versions of a codeword in an RI code.

4-PSK Labeling

This section presents the labeling maps for the 4-PSK constellation. In Fig. 2.4 two different labelings for 4-PSK are depicted. For 4-PSK any other possible labeling scheme will be the same as one of the two labelings in Fig. 2.4 in terms of distance spectral properties and thus will have the same BER performance. The labeling rules for Gray and Ungerboeck labeling are the same as mentioned before for the 8-PSK case. As the Euclidean distance between signal points is higher for 4-PSK than for 8-PSK, 4-PSK offers more robustness to phase noise. In the next section, we describe iterative decoding mechanisms, EXIT charts and convergence prediction using EXIT charts.

Figure 2.4: 4-PSK signal labelings (Ungerboeck and Gray).

2.4 Iterative Decoding and Related Concepts

Communication systems that employ iterative decoding, popularly known as Turbo decoding [3], are serial and/or parallel concatenations of components. The components of such a system usually include a posteriori probability (APP) based symbol-by-symbol detectors/decoders and interleavers between components. There is an exchange of soft information between component decoders in the form of probabilities or LLRs. The information passed on from one decoder to another is called the extrinsic information (explained in Section 2.4.2).
More generally, if the outputs of an APP decoder are LLRs, they can be regarded as reliability values for each information symbol and/or code symbol. In Fig. 2.5, a general serially concatenated system using iterative decoding is shown. There is an outer encoder and an inner encoder. After the information bits are encoded by the outer encoder they are interleaved and re-encoded by the inner encoder. At the receiver, the channel outputs are first processed by the inner decoder, and the soft output is deinterleaved and passed on to the outer decoder. The interleaver decorrelates the information generated by each component decoder.

Figure 2.5: Block diagram for a serially concatenated transmission system employing iterative decoding at the receiver.

It is important to pass on to each component decoder information that it did not generate itself, and hence the term extrinsic is used for such information. Extrinsic information is generated by removing the a priori information from the APP values generated at the output of the component decoder. In the following sections, we describe algorithms to obtain soft outputs from the channel outputs and ways to use them for iterative information exchange. Some tools that are useful for iterative processing, such as log-likelihood ratio algebra and the boxplus operation, are also explained.

2.4.1 Soft-Output Decoding

The heart of the iterative decoding procedure is the use of an algorithm that computes the APP of the information symbols in each component decoder. The algorithm provides as output bit or symbol probabilities rather than hard decisions. In digital transmission systems, the received signal is usually a sequence of waveforms.
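The serial exchange described above, with extrinsic information formed as APP minus a priori and interleaved between the two decoders, can be written schematically as a loop. The sketch below abstracts the component decoders into callables; all names are illustrative, not the thesis's code.

```python
import numpy as np

def iterate(inner_app, outer_app, llr_ch, perm, n_iter=10):
    """Generic serial iterative decoding loop in the LLR domain (n_iter >= 1).

    inner_app(llr_ch, a_priori) -> APP LLRs of the inner coded bits
    outer_app(a_priori)         -> APP LLRs of the outer coded bits
    perm                        -> interleaver permutation (index array)
    """
    inv = np.argsort(perm)                  # deinterleaver permutation
    a1 = np.zeros(len(perm))                # a priori for the inner decoder
    for _ in range(n_iter):
        e1 = inner_app(llr_ch, a1) - a1     # extrinsic = APP - a priori
        a2 = e1[inv]                        # deinterleave
        e2 = outer_app(a2) - a2             # outer extrinsic
        a1 = e2[perm]                       # re-interleave, feed back
    return outer_app(a2)                    # final APP for hard decisions
```

The subtraction in each branch is exactly the "remove what the decoder was already told" step; without it, the same information would circulate and the decoders' inputs would no longer be (approximately) independent.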
At the decoder, the decision rule in such situations can be optimum either with respect to a sequence of symbols or with respect to the individual symbol [5]. It is a well known fact that optimum symbol decision algorithms must base their decisions on the maximum a posteriori (MAP) probability [5]. MAP algorithms have been known for a long time but were much less popular than the Viterbi algorithm [28] and rarely used in practical systems. The reason for this is that, in spite of a much higher computational complexity compared to the Viterbi algorithm, the improvement in performance in terms of symbol error probability with MAP algorithms is not impressive. The recent revival of interest in these algorithms is related to the problem of decoding concatenated coding schemes, where they greatly enhance the performance [3]. The concatenation of two or more relatively simple constituent codes results in a very powerful code with a structure that permits easy decoding and is a good means of achieving large coding gains. Although both parallel and serial concatenated schemes have been investigated extensively in the recent past, our focus from here on will be on serially concatenated systems and we will exemplify concepts in the light of such a system. It was proved by Forney [4] that the optimum output of the inner decoder should be in the form of the sequence of probability distributions over the inner code alphabet, conditioned on the received signal, better known as the APP distribution. Thus, to work properly, all these decoding algorithms have to exchange some kind of soft information. There have been several attempts to achieve this goal. Some of them are based on modifications of the Viterbi algorithm that obtain some reliability information at the decoder output in addition to the hard decisions [29]. These solutions are clearly suboptimal, as they are unable to supply the required APP.
A different approach is based on the original symbol MAP decoding algorithms, with the aim of simplifying them to a form suitable for implementation. In the next section, we consider trellis based soft-output decoding algorithms delivering additional reliability information together with hard decisions. The BCJR algorithm [21], also known as the symbol-by-symbol MAP algorithm, is optimal for estimating the states or outputs of a Markov process observed in white noise (a hidden Markov model). However, the computational complexity, numerical instability owing to small probability values, non-linear functions and mixed multiplications make it too difficult to implement in practice. Some approximations of the algorithm have been derived, such as the Max-Log-MAP algorithm. The Log-MAP algorithm, on the other hand, is an exact logarithmic domain implementation of the BCJR and is the one used in this work. In the following we describe the algorithms without derivation.

Soft Decoding Algorithms

The BCJR algorithm: Let the state of an encoder at time k be s_k, where s_k can take on values between 0 and 2^M - 1. We represent the input sequence to the encoder by d = d_1, d_2, ..., d_N. The bit d_k is associated with the transition from step k - 1 to step k. The MAP algorithm provides the ratio of the APP of each information bit d_k being 1 to the APP of it being 0. We denote this ratio by Λ(d_k); it is given by

Λ(d_k) = [ Σ_{s_k} Σ_{s_{k-1}} γ_1(s_k, s_{k-1}) α_{k-1}(s_{k-1}) β_k(s_k) ] / [ Σ_{s_k} Σ_{s_{k-1}} γ_0(s_k, s_{k-1}) α_{k-1}(s_{k-1}) β_k(s_k) ].  (2.3)

The γ_i, i ∈ {0, 1}, are the branch transition probabilities (channel metrics) and are computed from the channel outputs as follows:

γ_i(s_k, s_{k-1}) = p(y_k | d_k = i) q(d_k = i | s_k, s_{k-1}) p(s_k | s_{k-1}),  (2.4)

where p(y_k | d_k = i) is the conditional probability of channel output y_k given that the input bit d_k = i.
The value of q(d_k = i | s_k, s_{k-1}) is either one or zero depending on whether input i is associated with the transition from state s_{k-1} to s_k or not. The last component, p(s_k | s_{k-1}), is used as a priori information for bit d_k. The forward metric (α_k) and backward metric (β_k) are computed using a forward and a backward recursion, respectively. The forward recursion of the MAP can be expressed as

α_k(s_k) = Σ_{s_{k-1}} Σ_{i=0}^{1} γ_i(s_k, s_{k-1}) α_{k-1}(s_{k-1})  (2.5)

and the backward recursion as

β_k(s_k) = Σ_{s_{k+1}} Σ_{i=0}^{1} γ_i(s_{k+1}, s_k) β_{k+1}(s_{k+1}).  (2.6)

The initialisation of α and β is important and depends on how the encoders are initialised. For example, if the encoders are in an unknown state to begin with, it is best to assign uniform values for the boundary cases of α and β. The algorithm in its present form is computationally intensive, which was the main reason for it being ignored for many years, as implementing efficient exponentiation and multiplication is difficult. If the algorithm is implemented in the log domain, the multiplications become additions and the exponentiations disappear. In the next section, we look at both approximate and accurate log domain implementations of the BCJR algorithm.

The Log-BCJR Algorithms: A low complexity implementation of the BCJR algorithm is obtained by working in the logarithmic domain, such that a number of complicated operations are avoided. Since the natural logarithm is a monotonically increasing function, we do not alter any of the decisions made by the original MAP algorithm. For an AWGN channel with complex Gaussian noise we have

p(y_k | d_k = i, s_k, s_{k-1}) = (1 / (πσ_n²)) exp( -|y_k - x_k(i, s_k, s_{k-1})|² / σ_n² ).  (2.7)

By taking the logarithm of Eq. (2.4) and using (2.7),

Γ_i(s_k, s_{k-1}) = log[γ_i(s_k, s_{k-1})] = 2 Re{y_k x_k*} / σ_n² + log(p(s_k | s_{k-1})) + K.
(2.8)

We can ignore the constant part K, as it eventually cancels out in the computation of log(α_k) and log(β_k). We compute the forward and backward recursion metrics as

A_k(s_k) = log(α_k(s_k)) = log Σ_{s_{k-1}} Σ_{i=0}^{1} exp( Γ_i(s_k, s_{k-1}) + A_{k-1}(s_{k-1}) ),

B_k(s_k) = log(β_k(s_k)) = log Σ_{s_{k+1}} Σ_{i=0}^{1} exp( Γ_i(s_{k+1}, s_k) + B_{k+1}(s_{k+1}) ).

For the additions we use the Jacobi logarithm [30], which is defined as

log Σ_{i=1}^{L} e^{a_i} = max_i{a_i} + δ(a_1, ..., a_L) = max*{a_i}.  (2.9)

This is also known as the max* operation, to denote that it is essentially a maximum operator adjusted by the correction term δ(a_1, ..., a_L). For the example case of only two terms, the correction term is δ(a, b) = log(1 + e^{-|a-b|}), and therefore log(e^a + e^b) = max(a, b) + log(1 + e^{-|a-b|}). If the correction term is not used, we arrive at an approximation of the BCJR algorithm known as the Max-Log-MAP algorithm [31]. With the correction term, the logarithmic version is an exact BCJR implementation and is known as the Log-MAP algorithm. In the next section, we explain the notion of extrinsic information, which is key to good performance in an iterative system.

2.4.2 Extrinsic Information

In an iterative decoding scheme it is mandatory for good performance that each component decoder be fed with information which does not originate from itself. By passing a sequence of reliability information, each decoder takes advantage of the suggestions of the other one [32]. Extrinsic information is the term used to identify the component of the generated reliability value which depends on redundant information introduced by the considered constituent code. In the binary case, the natural reliability value is the LLR. For a bit a_k, the LLR is defined as

L(a_k) = log [ p(a_k = 1 | {Inputs}) / p(a_k = 0 | {Inputs}) ],  (2.10)

where Inputs refers to all inputs to the decoder.
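The max* operation of Eq. (2.9), its Max-Log approximation, and the resulting numerically stable log-sum-exp can be illustrated with a few lines of Python (illustrative code; names are mine):

```python
import math

def max_star(a, b):
    """Exact Jacobi logarithm: log(e^a + e^b) = max(a,b) + log(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """Max-Log-MAP approximation: drop the correction term."""
    return max(a, b)

def log_sum_exp(values):
    """log(sum_i e^{a_i}) via repeated pairwise max* (Eq. 2.9 for L terms)."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc
```

This is exactly the operation applied at every state in the A_k and B_k recursions of the Log-MAP; replacing `max_star` by `max_star_approx` turns it into the Max-Log-MAP.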
Two methods have been proposed in the literature [32] to process the extrinsic information received by each decoder. In the first method, the extrinsic information at the input of a decoder is modelled as the output of an AWGN meta-channel. The second method uses the extrinsic information to update the a priori probabilities which are used in the next decoding step [33],[34]. In this work we have used the second model. The a priori information can be interpreted as the probability of state transitions p(s_k | s_{k-1}) for the case where a transition from s_{k-1} to s_k is caused by a distinct input symbol. In the following we denote the extrinsic information by z_k. From [32], the a priori term p(s_k | s_{k-1}) is given by

p(s_k = m | s_{k-1} = m') = e^{z_k} / (1 + e^{z_k})   if P{a_k = 1 | s_k = m, s_{k-1} = m'} = 1,
p(s_k = m | s_{k-1} = m') = 1 / (1 + e^{z_k})         if P{a_k = 0 | s_k = m, s_{k-1} = m'} = 1.  (2.11)

For the log-domain BCJR the a priori can be incorporated as

Γ_i(s_k, s_{k-1}) = 2 Re{y_k x_k*} / σ_n² + z_k,

where z_k is a log-probability or an LLR. In the next section, we look at some computational tools for iterative decoding that are used in particular when the APPs are computed as LLRs.

2.4.3 Tools for Iterative Decoding

Log-likelihood Algebra

The LLR as defined in Eq. (2.10) is known as the soft value or the L-value of the random variable a_k [32]. A general binary random variable u, similar to a_k, is used to illustrate the various operations that are useful while working with LLRs. The LLR of a binary random variable u, conditioned on a vector y, is given by

L(u | y) = log [ p(u = 1 | y) / p(u = 0 | y) ] = log [ p(u = 1) / p(u = 0) ] + log [ p(y | u = 1) / p(y | u = 0) ] = L(u) + L(y | u).

Also, using p(u_1 ⊕ u_2 = 1) = p(u_1 = 1) p(u_2 = 1) + (1 - p(u_1 = 1))(1 - p(u_2 = 1)) and p(u = 1) = e^{L(u)} / (1 + e^{L(u)}), we have

L(u_1 ⊕ u_2) = log [ (1 + e^{L(u_1)} e^{L(u_2)}) / (e^{L(u_1)} + e^{L(u_2)}) ]  (2.12)

for statistically independent random variables u_1 and u_2. Eq. (2.12) is particularly helpful for operations during parity decoding (see Section 2.5).
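Eq. (2.12) can be verified numerically against the defining probability computation. The sketch below follows the convention used here, p(u_1 ⊕ u_2 = 1) = p_1 p_2 + (1 - p_1)(1 - p_2); the code is illustrative, not from the thesis.

```python
import math

def llr_xor(l1, l2):
    """L(u1 (+) u2) per Eq. (2.12)."""
    return math.log((1.0 + math.exp(l1) * math.exp(l2))
                    / (math.exp(l1) + math.exp(l2)))

def llr_xor_via_probs(l1, l2):
    """Same quantity computed directly from the probabilities."""
    p1 = math.exp(l1) / (1.0 + math.exp(l1))   # p(u1 = 1)
    p2 = math.exp(l2) / (1.0 + math.exp(l2))   # p(u2 = 1)
    p = p1 * p2 + (1.0 - p1) * (1.0 - p2)      # convention used in the text
    return math.log(p / (1.0 - p))
```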
Since the parity decoder receives bit LLRs as its inputs, the decoder should be able to work in the LLR domain to generate APP LLRs. More generally, the ⊕ in Eq. (2.12) is represented by the boxplus operation, and a generalisation to more than two variables is given in the next section [35].

The Boxplus (⊞) Operation

We will use the notation ⊞ for the addition defined by

L(u_1) ⊞ L(u_2) := L(u_1 ⊕ u_2),

with the boundary values

L(u) ⊞ ∞ = L(u),  L(u) ⊞ (-∞) = -L(u),  L(u) ⊞ 0 = 0.

By induction it can be proved that [35]

⊞_{j=1}^{J} L(u_j) = log [ ( Π_{j=1}^{J} (e^{L(u_j)} + 1) + Π_{j=1}^{J} (e^{L(u_j)} - 1) ) / ( Π_{j=1}^{J} (e^{L(u_j)} + 1) - Π_{j=1}^{J} (e^{L(u_j)} - 1) ) ].

Using the relation tanh(u/2) = (e^u - 1)/(e^u + 1), we have

⊞_{j=1}^{J} L(u_j) = log [ (1 + Π_{j=1}^{J} tanh(L(u_j)/2)) / (1 - Π_{j=1}^{J} tanh(L(u_j)/2)) ] = 2 tanh^{-1} ( Π_{j=1}^{J} tanh(L(u_j)/2) ).  (2.13)

Eq. (2.13) is the so-called "tanh" rule. A practical simplification follows from the fact that tanh(x) and tanh^{-1}(x) are monotonically increasing and have odd symmetry, implying tanh(x) = sgn(x) tanh(|x|) and tanh^{-1}(x) = sgn(x) tanh^{-1}(|x|). Therefore, the sign and magnitude of, say, L(u_1) ⊞ L(u_2) are separable: the sign of L(u_1) ⊞ L(u_2) depends only on the signs of L(u_1) and L(u_2), and the magnitude |L(u_1) ⊞ L(u_2)| depends only on the magnitudes |L(u_1)| and |L(u_2)|. This simplification can be used to derive an approximation for belief propagation decoding, which is the subject of the next section.

2.5 Decoding for Parity Check Codes

2.5.1 Graphical Representation of Codes

Recently, there has been a lot of interest in representing error correcting codes as graphs (code graphs) [36]. Wiberg et al. have shown in [37] that a type of graphical model called the Tanner graph (introduced by R. M. Tanner [38] to describe a generalization of Gallager codes) is amenable to the description and study of iterative soft-decision decoding techniques.
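The equivalence of the product form and the tanh rule of Eq. (2.13) is easy to check numerically (illustrative code; names are mine):

```python
import math

def boxplus(l1, l2):
    """Exact pairwise boxplus via the tanh rule (Eq. 2.13)."""
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

def boxplus_product_form(llrs):
    """Generalised product form of the boxplus over a list of LLRs."""
    p_plus = 1.0
    p_minus = 1.0
    for l in llrs:
        p_plus *= math.exp(l) + 1.0
        p_minus *= math.exp(l) - 1.0
    return math.log((p_plus + p_minus) / (p_plus - p_minus))
```

The sign-separability property mentioned above also follows directly: the product of tanh terms is negative exactly when an odd number of the input LLRs are negative.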
This is particularly advantageous, as concepts from the field of graph theory can be directly applied to the codes and a lot of information about the structure of the code can be gained. A graph G(N, E) is an ordered pair of disjoint sets N and E, where N is the set of nodes (or vertices) and E is the set of edges. A graph is said to be k-partite, with k node classes N_1, N_2, ..., N_k, if the set of nodes can be partitioned into k disjoint sets such that no two nodes in the same set are joined by an edge, or in other words are adjacent. Parity check codes, including LDPC codes, are thus usually represented by a bipartite graph, or bigraph, as in Fig. 2.6. Convolutional codes, on the other hand, have states as well and are therefore 3-partite [37]. The nodes in a code graph for a parity check matrix can be categorised into variable nodes and check nodes. The code graph representation allows message passing decoding, also known as belief propagation (BP), to be easily applied to decode parity check codes [36]. For code graphs with dense connections, there are often cycles that make exact probability propagation computationally infeasible; in such cases iterative decoding still yields excellent results. The iterative decoding algorithm proceeds as if no cycles were present in the graph, with no guarantee that the computed conditional probabilities are close to the correct values or that they even converge. But, from the excellent results achieved by Turbo codes [3], one can argue that the approach still works rather well.

Figure 2.6: Example of a bipartite graph (variable nodes connected to check nodes).

Fig. 2.6 shows the general structure of a bipartite graph. All parity check codes used in this work can be represented as in Fig. 2.6. The next section describes belief propagation decoding on code graphs, which has been used for the decoding of the outer parity check code of our serially concatenated system.
2.5.2 Belief Propagation Decoding

In a belief propagation algorithm, messages are passed along the edges of the code graph. The message passed along an edge is the current opinion, or belief, about a bit. The message m(e) sent from a node v along an edge e depends on all messages arriving at node v except the incoming message along the edge e. For decoding of error correcting codes where the messages are probabilities/LLRs, the outgoing message along the jth edge from a bit node of degree n is given by the sum of the incoming LLRs, except for the contribution through the jth edge. For implementation purposes, using LLRs as messages offers advantages over using probabilities. The key steps are the local application of Bayes' rule at each node and the exchange of results/messages with neighbouring nodes [39]. In each iteration, two types of messages are passed in the form of LLRs: one from the symbol nodes to the check nodes, and the other from the check nodes to the symbol nodes. For message-passing algorithms, the specification of the order in which messages are updated in the code graph is known as the schedule [40]. For parity decoding, we adopt a flooding schedule, wherein in each iteration all variable (information) nodes and subsequently all check nodes pass new messages to their neighbours. When using convolutional codes, a different kind of scheduling, wave scheduling, can also be used. This is possible because the parity-check matrix of a convolutional code is block triangular. In each iteration, therefore, after variable nodes and check nodes up to time k have been processed, the edges connecting the future check nodes can be immediately updated without waiting for the next iteration. For high-speed communication, however, a flooding schedule would be preferred, as it is amenable to fully parallel implementation. The wave schedule, on the other hand, allows for a speed-up of the convergence.
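The node updates used in BP decoding of parity check codes, the bit-node sum described above and the two check-node forms of Eqs. (2.15) and (2.16) introduced in the next subsection, can be sketched in Python. This is an illustrative sketch, not the thesis's implementation; `boxplus` is the tanh rule of Eq. (2.13).

```python
import math

def boxplus(a, b):
    # Exact pairwise boxplus via the tanh rule (Eq. 2.13)
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def variable_node_update(channel_llr, incoming):
    """Bit-node update: outgoing LLR on edge j is the channel LLR plus all
    incoming check-to-variable LLRs except the one on edge j."""
    total = channel_llr + sum(incoming)
    return [total - m for m in incoming]

def check_node_update_exact(z):
    """Exact check-node update via forward/backward boxplus recursions,
    i.e. the forward-backward algorithm on a single parity check trellis."""
    d = len(z)
    f, b = [z[0]] * d, [z[-1]] * d
    for i in range(1, d):
        f[i] = boxplus(z[i], f[i - 1])               # f_i = z_i [+] f_{i-1}
        b[d - 1 - i] = boxplus(z[d - 1 - i], b[d - i])
    out = [b[1]]                                     # first edge
    out += [boxplus(f[i - 1], b[i + 1]) for i in range(1, d - 1)]
    out.append(f[d - 2])                             # last edge
    return out

def check_node_update_min_sum(z, alpha=1.0):
    """Min-sum approximation: sign product and (scaled) smallest magnitude
    over all other edges; alpha > 1 gives the normalized variant."""
    d = len(z)
    signs = [1.0 if v >= 0 else -1.0 for v in z]
    total_sign = math.prod(signs)
    order = sorted(range(d), key=lambda i: abs(z[i]))
    m1, m2 = abs(z[order[0]]), abs(z[order[1]])      # two smallest magnitudes
    return [total_sign * signs[i] * (m2 if i == order[0] else m1) / alpha
            for i in range(d)]
```

For any edge, the min-sum magnitude never falls below the exact one, which is exactly what motivates the normalization constant discussed later.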
Next, we look into certain general representations of the BP decoding that are used for implementation.

General Representations of the BP Algorithm

BP decoding for parity check codes can be realised in several different ways. The BP algorithm can be simplified using the so-called BP-based approximation [41] (also known as the min-sum approximation [37]), which greatly reduces the implementation complexity but incurs a degradation in decoding performance. This has led to the development of many reduced-complexity variants of the BP algorithm that deliver near-optimum decoding performance. In the following, a brief description of an exact and an approximate approach [39] is provided for reference. The approaches are described in terms of the symbol node (n) update and the check node (m) update. We denote the LLR message from symbol node n to check node m as Z_nm(x_n) and the message from node m to n as L_mn(x_n). Each symbol node n is assigned an a posteriori LLR L(x_n | y_n). For every position, thus, Z_nm(x_n) = L(x_n | y_n).

• BP decoding based on the Jacobian approach (exact method): The tanh rule (cf. Section 2.4.3) can be alternatively expressed, by using the Jacobian logarithm (Section 2.4.1) twice, as

L(u_1) ⊞ L(u_2) = sgn(L(u_1)) sgn(L(u_2)) min(|L(u_1)|, |L(u_2)|) + log(1 + exp(-|L(u_1) + L(u_2)|)) - log(1 + exp(-|L(u_1) - L(u_2)|)).  (2.14)

For a check node m connected to d edges, the incoming messages are Z_{n_1 m}(x_{n_1}), ..., Z_{n_d m}(x_{n_d}). Two sets of auxiliary random variables f_i and b_i are defined, where f_1 = x_{n_1}, f_2 = x_{n_2} ⊞ f_1, ..., f_d = x_{n_d} ⊞ f_{d-1} and b_d = x_{n_d}, b_{d-1} = x_{n_{d-1}} ⊞ b_d, ..., b_1 = x_{n_1} ⊞ b_2. Using Eq.
2.14 repeatedly, L(f_1), L(f_2), ..., L(f_d) and L(b_1), L(b_2), ..., L(b_d) are computed in a recursive manner from the respective Z_nm. Using the parity equation, we obtain x_{n_i} = f_{i-1} ⊞ b_{i+1} for all i ∈ {2, 3, ..., d-1}. Thus the outgoing message for check node m becomes

L_mn(x_{n_i}) = L(b_2)              for i = 1,
L_mn(x_{n_i}) = L(f_{i-1} ⊞ b_{i+1})  for i = 2, 3, ..., d-1,
L_mn(x_{n_i}) = L(f_{d-1})           for i = d.  (2.15)

This is essentially the forward-backward algorithm applied to the trellis of a single parity check code [39]. We use this representation of the BP algorithm for decoding the parity check codes in our system. The next paragraph describes an approximate representation reported in [39] that we investigate in order to get an idea of the degradation that occurs because of the approximation.

• Normalized BP-based decoding (approximate method): The key motivation behind the most widely used BP-based approximation is that the magnitude of L(u_1) ⊞ L(u_2) is less than or equal to the minimum of the magnitudes of L(u_1) and L(u_2). This can be expressed as |L(u_1) ⊞ L(u_2)| ≤ min(|L(u_1)|, |L(u_2)|); refer to [39] for a proof. Using |L(u_1) ⊞ L(u_2)| ≈ min(|L(u_1)|, |L(u_2)|), the check node update for a node n ∈ N(m) is given by

L_mn(x_n) = [ min_{n' ∈ N(m)\n} |Z_{n'm}(x_{n'})| ] Π_{n' ∈ N(m)\n} sgn(Z_{n'm}(x_{n'})),  (2.16)

which is the check-node update in the BP-based decoding algorithm. In practice, it requires the determination of the two incoming LLRs having the smallest magnitudes, as well as of the signs of the outgoing messages. We have a smaller number of operations per ⊞ computation here, as the underlying correction function g(x) = log(1 + e^{-|x|}) is not implemented. The BP-based approximation can be improved by employing a check-node update that uses a normalization constant α greater than one, so that the update in Eq.
2.16 is scaled by 1/α. Although α should vary with different SNRs and different iterations to achieve the optimum performance, it can be kept constant for the sake of simplicity [39]. Now that we have all the concepts required to understand the decoding process, we move on to the analytical tools for iterative decoding in the next section.

2.6 EXIT Charts and Convergence Analysis

A major question in understanding iterative decoding is determining the regions of SNR with significant or almost no performance improvement over the iterations, separated by a transition called the waterfall. Extrinsic information transfer charts, better known as EXIT charts, are what allow us to predict this [25]. In the recent past many approaches have been investigated to analyse the convergence behaviour of iteratively decoded systems. The use of mutual information to describe the flow of extrinsic information through soft-in/soft-out decoders was proposed in [25] for both parallel and serially concatenated codes. The exchange of extrinsic information between constituent codes is visualised as a decoding trajectory in the EXIT chart. In order to illustrate the concepts, we will use the system described in Section 2.1 as an example.

Figure 2.7: Example iterative decoding system

Fig. 2.7 shows a block diagram of the system of Section 2.1 that is more suitable for understanding the concepts presented here. Each transmitted symbol comprises m coded bits, and thus there are M = 2^m symbols in the signal constellation. At the receiver, the signal is iteratively decoded by mutually exchanging soft information between the inner APP DPSK decoder and the outer decoder.
The DPSK decoder takes the channel information Y and a priori knowledge A_1 on the inner unmapped bits and computes channel and extrinsic information E_1 for each of the m coded bits per constellation symbol. The extrinsic output E_1 is deinterleaved and fed as the a priori input A_2 to the outer SISO decoder, which calculates extrinsic information E_2 on the outer coded bits. E_2 is re-interleaved and fed back as a priori knowledge A_1 to the inner de-mapper, where it is exploited to reduce the BER through further iterations. The values A_1, A_2, E_1, E_2 denote LLRs. Since we work with both symbol LLRs and bit LLRs, we need to define what the transfer of mutual information means in terms of symbols and in terms of bits. The relation between them is given in the paragraphs below.

Symbolwise Mutual Information

The mutual information between the transmitted constellation symbol and the received AWGN channel output is given by

I(X; Y) = (1/M) Σ_{n=1}^{M} ∫ p(y | x = a_n) log2 [ p(y | x = a_n) / p(y) ] dy,   (2.17)

with the conditional PDF

p(y | x = a_n) = (1/(πσ²)) exp( −|y − a_n|² / σ² )   (2.18)

and

p(y) = (1/M) Σ_{n=1}^{M} p(y | x = a_n).   (2.19)

In practice, the expectation in Eq. (2.17) is evaluated as an average over one block/frame of N transmitted symbols.

Bitwise Mutual Information

The symbolwise mutual information can be decomposed into a sum of m bitwise mutual informations using the chain rule of mutual information:

I(X; Y) = Σ_{L=0}^{m−1} Ī_L,   (2.20)

where

Ī_L = I(X | L other bits known),   0 ≤ L ≤ m − 1,   0 ≤ Ī_L ≤ 1.   (2.21)

Here L of the other unmapped bits of the mapping are perfectly known, and nothing is known about the rest of the m − 1 − L bits. The bar in Eq. (2.21) indicates that Ī_L is averaged a) over the bitwise mutual information with respect to all bits of the mapping, b) over all possible combinations of choosing the L known bits out of the m − 1 other bits of the mapping, and c) over all 2^L bit vector realizations.
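For illustration, the symbolwise mutual information of Eq. (2.17) can be estimated numerically by Monte-Carlo averaging over a block of transmitted symbols. The sketch below is our own (the function names and the unit-energy M-PSK setup are assumptions for this illustration, not taken from the thesis):

```python
import numpy as np

def symbolwise_mi(snr_db, m=3, n=100_000, seed=0):
    """Monte-Carlo estimate of I(X;Y), Eq. (2.17), for M-PSK over
    complex AWGN with Es = 1 and noise variance sigma2 = N0."""
    rng = np.random.default_rng(seed)
    M = 2 ** m
    const = np.exp(2j * np.pi * np.arange(M) / M)    # unit-energy M-PSK points
    sigma2 = 10 ** (-snr_db / 10)                    # Es/N0 = snr_db (in dB)
    idx = rng.integers(M, size=n)                    # transmitted symbol indices
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n))
    y = const[idx] + noise
    # likelihoods p(y | a_k); the common factor 1/(pi*sigma2) cancels below
    lik = np.exp(-np.abs(y[:, None] - const[None, :]) ** 2 / sigma2)
    p_num = lik[np.arange(n), idx]                   # p(y | transmitted symbol)
    p_den = lik.mean(axis=1)                         # p(y), Eq. (2.19)
    return float(np.mean(np.log2(p_num / p_den)))    # bits per channel symbol
```

As expected, the estimate grows with SNR and saturates at m bits per symbol.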
The vector channel with mutual information I(X; Y) carrying m bits can thus be viewed as being composed of m parallel sub-channels with mutual information Ī_L, each carrying a single bit. The chain rule, as used above, actually gives a very interesting interpretation: the mapping of the coded bits influences only the partitioning of the total amount of mutual information among the different conditional sub-channels Ī_L, while the sum Σ_L Ī_L always adds up to the constant value of the capacity C, independently of the applied mapping. As mentioned in Section 2.3, we consider various labeling schemes that have different distributions of the mutual information over the conditional sub-channels. That is what in effect defines the gain over iterations during the decoding process. We will see through EXIT chart visualisations that Gray mapping is not amenable to an iterative scheme. In Gray mapping, the unmapped bit vectors X of neighbouring signal points differ by one binary digit, which is expected to keep the BER small if the de-mapper is a slicer. The reason for its poor performance in an iterative method is that the information transfer through the de-mapper does not change much with increasing a priori knowledge [42]. The partitioning of mutual information among the conditional sub-channels Ī_L has a strong impact on the behaviour of a particular mapping in an iterative decoding scheme.

2.6.1 Mutual Information Transfer Characteristics of Component Decoders

In order to plot EXIT charts we need to obtain the transfer characteristic of each component decoder given a fixed amount of a priori information. The information transfer through each of the component decoders can be controlled by the amount of available a priori knowledge. For the APP DPSK decoder the characteristic behaviour is determined by the choice of signal constellation (mapping), while for the outer parity check code the code rate determines the actual shape.
The inputs to the DPSK decoder are the noise-corrupted channel observations Y and the a priori knowledge A_1 on the unmapped bits, and the output is in the form of extrinsic information E_1. In order to obtain the mutual information at the output of each component decoder, we need a model that characterizes the a priori information at the inputs over the iterations. It has been observed that the extrinsic information fed back from the outer decoder is almost Gaussian distributed, and large interleavers keep the a priori input A_1 fairly uncorrelated over many iterations [42]. Hence, it seems appropriate to model the a priori input as an independent zero-mean Gaussian random variable n_{A_1} with variance σ²_{A_1}, applied to the transmitted bipolar bit x:

A_1 = μ_{A_1} x + n_{A_1},   with μ_{A_1} = σ²_{A_1}/2.   (2.22)

The conditional probability density is given by

p_{A_1}(ξ | X = x) = (1/(√(2π) σ_{A_1})) exp( −(ξ − x σ²_{A_1}/2)² / (2σ²_{A_1}) ).   (2.23)

The information content I_{A_1} of the a priori knowledge, i.e., the mutual information between the transmitted unmapped bits X and the LLRs A_1, can be computed as

I_{A_1}(σ_{A_1}) = (1/2) Σ_{x=−1,+1} ∫ p_{A_1}(ξ | X = x) log2 [ 2 p_{A_1}(ξ | X = x) / (p_{A_1}(ξ | X = −1) + p_{A_1}(ξ | X = +1)) ] dξ.   (2.24)

From (2.23) and (2.24),

I_{A_1}(σ_{A_1}) = 1 − ∫_{−∞}^{+∞} (1/(√(2π) σ_{A_1})) exp( −(ξ − σ²_{A_1}/2)² / (2σ²_{A_1}) ) log2[1 + exp(−ξ)] dξ.   (2.25)

We define J(σ) = I_{A_1}(σ_{A_1} = σ), with

lim_{σ→0} J(σ) = 0,   lim_{σ→∞} J(σ) = 1,   σ > 0.

The function J(σ) is monotonically increasing (see Fig. 2.8), and thus invertible, but cannot be expressed in closed form.

Figure 2.8: Plot of the J(σ) function

Approximation for the J(·) function: For computer implementations, the J(·) function can be split into two intervals, 0 ≤ σ ≤ σ_tp and σ_tp < σ < ∞, where σ_tp = 1.6363. In order to approximate J(σ), a polynomial in σ is used for the left interval and an exponential for the right interval.
The non-linear least squares (NLLS) Marquardt-Levenberg algorithm provides the following approximation [30]:

J(σ) ≈ { a_{J,1} σ³ + b_{J,1} σ² + c_{J,1} σ,                           0 ≤ σ ≤ σ_tp
       { 1 − exp(a_{J,2} σ³ + b_{J,2} σ² + c_{J,2} σ + d_{J,2}),        σ_tp < σ < 10     (2.26)
       { 1,                                                              σ ≥ 10

where

a_{J,1} = −0.0421061,  b_{J,1} = 0.209252,  c_{J,1} = −0.00640081,
a_{J,2} = 0.00181491,  b_{J,2} = −0.142675, c_{J,2} = −0.0822054,  d_{J,2} = 0.0549608.

For the inverse J(·) function the curve is split into two intervals at I_tp = 0.3646. The approximation can thus be expressed as

J^{−1}(I) ≈ { a_{σ,1} I² + b_{σ,1} I + c_{σ,1} √I,           0 ≤ I ≤ I_tp
            { −a_{σ,2} log[b_{σ,2} (1 − I)] − c_{σ,2} I,      I_tp < I < 1     (2.27)

where

a_{σ,1} = 1.09542,  b_{σ,1} = 0.214217,  c_{σ,1} = 2.33727,
a_{σ,2} = 0.706692, b_{σ,2} = 0.386013,  c_{σ,2} = −1.75017.

2.6.2 Convergence Prediction using EXIT Charts

We are interested in predicting the behaviour of the decoding algorithm without actually having to simulate the whole system. This can be done by analysing each decoder separately via its mutual information transfer function. The LLRs communicated between the two decoders can be modelled as outcomes of random variables: A_I modelling the LLRs L_I out of the outer decoder and A_II modelling the LLRs L_II of the inner decoder. After the ith iteration, the outcomes of A_I and A_II are distributed with PDFs p_{A_I}, conditioned on the code bits c_n = x, and p_{A_II}, conditioned on x_n = x for the symbols. Analysing the exact PDFs after each iteration for a given system configuration completely specifies the behaviour of the decoding algorithm. However, it is extremely difficult to obtain expressions for the exact PDFs for most systems. On the other hand, histograms of the communicated LLRs can be observed during decoding and used for estimating the PDFs. A further simplification is possible by observing only a single parameter of the PDFs.
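The fits of Eqs. (2.26) and (2.27) are straightforward to implement. The sketch below hardcodes the coefficients quoted above (function names are our own); round-tripping J_inv(J(σ)) recovers σ to within the fitting error:

```python
import math

# Coefficients of the J(.) fit, Eq. (2.26), and its inverse, Eq. (2.27)
AJ1, BJ1, CJ1 = -0.0421061, 0.209252, -0.00640081
AJ2, BJ2, CJ2, DJ2 = 0.00181491, -0.142675, -0.0822054, 0.0549608
SIGMA_TP, I_TP = 1.6363, 0.3646

def J(sigma):
    """Mutual information of a consistent Gaussian LLR with std dev sigma."""
    if sigma <= SIGMA_TP:
        return AJ1 * sigma**3 + BJ1 * sigma**2 + CJ1 * sigma
    if sigma < 10.0:
        return 1.0 - math.exp(AJ2 * sigma**3 + BJ2 * sigma**2 + CJ2 * sigma + DJ2)
    return 1.0

def J_inv(i):
    """Inverse of J(.), split at I_tp = 0.3646, Eq. (2.27)."""
    if i <= I_TP:
        return 1.09542 * i**2 + 0.214217 * i + 2.33727 * math.sqrt(i)
    return -0.706692 * math.log(0.386013 * (1.0 - i)) + 1.75017 * i
```

Note that the two branches of each fit agree at the split points, so the functions are effectively continuous.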
The comparative study of six different observed parameters of the PDFs describing the LLRs revealed that mutual information is one of the most accurate and robust measures [43]. The mutual information from each decoder, as given by Eq. (2.24), is measured after each iteration. The integral in (2.24) can be evaluated numerically [43]. The mutual information for each of the decoders can be expressed as

I(A_I; C) = 1 − ∫_{−∞}^{+∞} p_{A_I}(ξ | X = 1) log2[1 + exp(−ξ)] dξ,   (2.28)
I(A_II; X) = 1 − ∫_{−∞}^{+∞} p_{A_II}(ξ | X = 1) log2[1 + exp(−ξ)] dξ,   (2.29)

which is arbitrarily closely approximated by the time averages

I(A_I; C) ≈ 1 − (1/N) Σ_{n=1}^{N} log2(1 + e^{−c_n L_I(c_n)}),
I(A_II; X) ≈ 1 − (1/N_II) Σ_{n=1}^{N_II} log2(1 + e^{−x_n L_II(x_n)}).   (2.30)

The above approximations are valid only when the PDFs of the LLRs are both symmetric, i.e.,

p_{A_I}(ξ | X = 1) = p_{A_I}(−ξ | X = 0),   p_{A_II}(ξ | X = 1) = p_{A_II}(−ξ | X = 0),

and consistent, i.e.,

p_{A_I}(ξ | X = 1) = p_{A_I}(−ξ | X = 1) exp(ξ),   p_{A_II}(ξ | X = 1) = p_{A_II}(−ξ | X = 1) exp(ξ).

The first condition is fulfilled by APP-based decoders of any linear code [43]. The consistency condition is matched whenever the output LLRs are valid reliability information.

2.7 BER Simulation Methods

In classical Monte-Carlo (MC) simulation, the transmission errors are counted to obtain the BER. In systems where the output is an LLR, the sign of the LLR for a given bit is compared with the transmitted bit. If they agree, the error indicator E_I for that bit is set to zero; otherwise it is set to one. An unbiased error estimate can be obtained under the assumptions that a reliable reference channel is available and that there is a sufficient number of independent errors [44].
Mathematically,

e_I = { 0  if sgn(LLR(u_k)) corresponds to u_k,
      { 1  otherwise,

where u_k is the kth information bit. The BER is the first moment (expected value) of the random variable E_I, where e_I ∈ {0, 1} are realisations of E_I:

BER = E[E_I] = Σ_{e_I} p_{E_I}(e_I) e_I,

which is estimated as BER ≈ (1/N) Σ_{n=1}^{N} e_{I,n}.

Here we present an alternative approach to estimating the BER based on soft values. The method was first published in [45], then independently re-invented and further investigated by Hoeher et al. in [46]. The main idea is to use the reliability information at the output of an APP module in order to estimate the BER. A parameter Z computed as

Z = 1 / (1 + exp(|LLR|))   (2.31)

gives the probability that the hard decision on the information bit is wrong. This becomes obvious from the following: with LLR = log[ p(x = +1) / p(x = −1) ], we have

p(x = +1) = 1 / (1 + exp(−LLR)).

The instantaneous BER is then given by

BER_instantaneous = min{ p(x = +1), 1 − p(x = +1) } = 1 / (1 + exp(|LLR|)).   (2.32)

Therefore Z can be used as a soft bit error indicator. The BER is computed as the expected value of Z:

BER_soft = E[Z] = ∫ z p_Z(z) dz.

In [46] it is claimed that the soft BER simulation method outperforms conventional Monte-Carlo simulation both in terms of accuracy and of simulation time, and the superiority is proven by comparing the variances of the estimation errors of the two methods. The variance of the soft-decision MC simulation is lower, and it is thus particularly useful for the simulation of rare error events. We will use the soft BER estimate to develop a stopping criterion for the number of iterations of the decoding algorithm. There are other methods, mostly heuristic, that use threshold values for the LLRs, where the number of iterations depends on the LLRs reaching a certain threshold value.
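A minimal sketch of the two estimators follows (our own function names; LLRs are in the convention L = log p(b = 1)/p(b = 0)):

```python
import numpy as np

def soft_ber(llrs):
    """Soft BER estimate: average of Z = 1/(1 + exp(|LLR|)), Eq. (2.31)."""
    llrs = np.asarray(llrs, dtype=float)
    return float(np.mean(1.0 / (1.0 + np.exp(np.abs(llrs)))))

def hard_ber(llrs, tx_bits):
    """Classical Monte-Carlo estimate: count hard-decision errors."""
    dec = (np.asarray(llrs) > 0).astype(int)       # decide 1 iff LLR > 0
    return float(np.mean(dec != np.asarray(tx_bits)))
```

For consistent LLRs both estimators converge to the same BER, but the soft estimate has the lower variance, which is what makes it attractive for rare error events and as a stopping criterion.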
An iterative scheme for low-power wireless devices needs to be computationally efficient, and hence the goal is to reduce both the computation within one iteration and the number of iterations performed to achieve the target BER at a given SNR value. We use simple parity-check-based methods to keep the computation per iteration low, and a soft-BER-based stopping criterion, which reduces the total number of iterations by a factor of three beyond the waterfall region.

Chapter 3 Code Construction and Coherent Decoding

In this chapter, we consider the case where the log-MAP algorithm for the APP DPSK decoder uses metrics that are computed assuming perfect channel state information. In the following we will use the term coherent metrics for the metrics thus computed, and the decoding will therefore be referred to as coherent decoding. The case of coherent decoding is indicative of the best performance that can be achieved for the system under consideration (see Section 2.1). Although differential encoding (DE) has primarily been used in communication systems to resolve phase ambiguities, it was proved by Peleg et al. in [20] that differential encoding offers better performance when serially concatenated with a convolutional code, as opposed to using a stand-alone convolutional code, even for coherent decoding. This result is rather surprising, since the differential encoder is rate-1 and is known to degrade the minimum Hamming weight of the code in conjunction with conventional decoders [47]. The improvement through DE is attributed to an effect known as spectral thinning [48], which shapes the weight distribution of the outer code and in effect makes it more random compared to a stand-alone code of the same length and rate [20]. Motivated by the excellent performance exhibited by DE in a coherent environment, we choose to use it as the inner code of our transmission system.
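To make the inner code concrete: a differential M-PSK encoder is the rate-1 recursive mapping x_k = x_{k−1} e^{j2πw_k/M}. The sketch below (our own code, not the thesis implementation) also illustrates the rotational invariance exploited later, since a constant phase rotation of all channel symbols leaves the differentially decoded symbols unchanged:

```python
import numpy as np

def differential_encode(w, M=8):
    """Rate-1 differential M-PSK: x_k = x_{k-1} * exp(j*2*pi*w_k/M)."""
    x = np.empty(len(w) + 1, dtype=complex)
    x[0] = 1.0                                    # reference symbol
    for k, wk in enumerate(w, start=1):
        x[k] = x[k - 1] * np.exp(2j * np.pi * wk / M)
    return x

def differential_decode(x, M=8):
    """Recover w_k from consecutive phase differences (noise-free check)."""
    dphi = np.angle(x[1:] * np.conj(x[:-1]))
    return np.mod(np.rint(dphi * M / (2 * np.pi)), M).astype(int)
```

Because only phase differences carry information, an unknown constant channel phase rotates every x_k identically and cancels in the decoder.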
As mentioned earlier in the system description (cf. Section 2.1), for the outer code we choose simple parity check codes. The reason we refrain from using more complicated error-correcting codes for the outer code is that we take low complexity as the design criterion for the receiver, while still attaining close-to-capacity performance. In the next section, the code construction for two general families of parity check codes is presented. This is followed by a description of the application of the log-MAP algorithm (cf. Section 2.4.1) for APP decoding of the inner DPSK code, and of belief propagation decoding for the outer parity check codes. We then look at some EXIT charts to analyse the code design and indicate the performance benefits derived from the specific code designs presented here. Finally, we introduce the notion of mixed labelings for improved performance.

3.1 Code Construction

The performance of concatenated systems using the Turbo principle can usually be divided into three regions [9]. The first is the low-SNR/high-BER region, where the Turbo code does not perform particularly well and iterative decoding does not have much of an effect, no matter how many iterations are done. This is followed by the Turbo cliff or waterfall region: here the BER drops sharply to low values within fractions of a dB of SNR increase. Finally, at high SNR, there may be an error floor where the BER curve flattens out owing to low-weight error events. The performance in the waterfall region is analysed using EXIT charts (see Section 2.6), but the analysis in the high-SNR region requires a separate method known as the minimum distance asymptote approximation [48]. In simpler terms, it is the minimum distance of the code that determines the code performance in the high-SNR region. For good performance in the waterfall region, it is important that the mutual information transfer curves match each other nicely.
We show through EXIT charts later that this is the case for the code families presented here. However, the minimum distance of a code is defined by the structure of the code, and we design our codes so as to improve the minimum distance and also prevent short cycles in the structure of the code. The first family of codes is motivated in part by the outer parity check code of Howard and Schlegel in [9] and can be considered an extension of it. The second is an optimization of the first code structure in terms of minimum distance. A detailed description of both codes is provided in the next sections.

3.1.1 Code I

In [9], a rate-2/3 single parity check code with a minimum free distance d_min = 2 was used in a system that is very similar to the one described here. Code I is a general class of parity check codes with rate r/(2r − 1) and is used as the outer code of our concatenated system. The rate-2/3 parity check code belongs to this family as the r = 2 case. As the value of r increases, the rate of this class of codes converges to 1/2; mathematically,

lim_{r→∞} r/(2r − 1) = 1/2.   (3.1)

This is a very basic error-correcting code, and both encoding and decoding have very low complexity. The code graph for Code I codes is very simple and does not have any cycles; thus belief propagation decoding is optimum for these codes. The graphical representation of the r/(2r − 1) parity check codes is shown in Fig. 3.1. The even-numbered nodes x_0, x_2, ..., x_{2r−2} represent the information bits, and the odd-numbered nodes x_1, x_3, ..., x_{2r−3} denote the parity bits, generated from adjacent information bits by performing a modulo-2 operation on them:

x_{2i+1} = x_{2i} ⊕ x_{2i+2}.   (3.2)

The check nodes ⊕ connect the information bit nodes to the parity bit nodes. An increase in the value of r increases the correlation amongst the coded bits and thus performance becomes better, but there is a loss in code rate.
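A Code I encoder is only a few lines. The sketch below (our own naming, not the thesis implementation) places the information bits at even positions and the parities of Eq. (3.2) at odd positions:

```python
def encode_code1(info):
    """Sketch of a Code I encoder: r information bits -> 2r-1 coded bits,
    with parity x_{2i+1} = x_{2i} XOR x_{2i+2} as in Eq. (3.2)."""
    r = len(info)
    out = [0] * (2 * r - 1)
    out[0::2] = info                             # information bits (even positions)
    for i in range(r - 1):
        out[2 * i + 1] = info[i] ^ info[i + 1]   # parity bits (odd positions)
    return out
```

A single information bit at either end of the block produces a weight-2 codeword, which exhibits the d_min = 2 limitation discussed next.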
Figure 3.1: Code construction for Code I (r/(2r − 1)) codes

3.1.2 Code II

The major drawback of the Code I codes presented above is that they have a minimum free distance d_min = 2. This restricts the performance in terms of the observed error floor, as low-weight error events determine the performance in that region. With a small d_min this problem is difficult to get around. The issue of the error floor is more prominent when Ungerboeck labeling is used. In [9], a new mapping scheme, referred to as HSL earlier, was proposed to reduce the error floor. The BER curves show that the error floor does kick in even with the new labeling, although at a lower error rate. One method to get around the error floor issue would be to use large interleavers, as can be seen, e.g., in [9]. The downside is that the decoding of the block has to wait until all the symbols of the large block are received. In many applications, for example those that involve real-time data transmission, delays beyond a certain threshold are unacceptable. To avoid having a wait period and thus enhance the efficiency of the system, we consider small block lengths only. The other approach is to increase the minimum free distance of the coded system and thus reduce low-weight error events. This is what we attempt here by introducing a family of codes, which we will call Code II, with a minimum free distance of 3. Code II codes are obtained through an improvement on the design of Code I codes, and the general code construction is shown in Fig. 3.2. In Code II codes we add one more node each at the beginning and at the end of the code graph (see Fig. 3.2). Thus we have two more nodes compared to Code I codes, and hence for r input nodes we have 2r + 1 (= 2r − 1 + 2) nodes. The rationale for this extension is explained next.
Figure 3.2: Code construction for Code II (r/(2r + 1)) codes

For Code I codes (see Fig. 3.1), apart from the first (x_0) and the last (x_{2r−2}) information nodes, all information nodes are connected to two other nodes. Nodes x_0 and x_{2r−2}, however, are connected to only one other node. Thus, apart from the first and last information nodes, all nodes have a minimum distance of 3. Effectively, the first and the last nodes are the only nodes that limit the free distance of this family of codes and thus limit the overall performance of the code. This problem can be taken care of by adding a redundant node each at the first and last information bit positions. In Fig. 3.2, the node x_{2r−1} is the same bit as x_0 and x_{2r} is the same as x_{2r−2}. The rate of the Code II codes also converges asymptotically to 1/2. The trade-off here is a lower code rate for the same number of information bits as compared to a code from the Code I family. Often, it is important to consider not only the bit error rate but the frame error rate as well, since, if errors are reported, e.g., in an Automatic Repeat Request (ARQ) system, frame retransmissions are required. It has been reported, e.g., in [15] by Tullberg et al., that with d_min = 2, two or more inner accumulate codes are required to make the frame error rate (FER) vanish asymptotically (as the block length tends to infinity), whereas even with a modest increase of the free distance d_min from 2 to 3, the FER vanishes asymptotically with only a single inner accumulate code. Thus the benefits offered by this code family are twofold: firstly, there is no error floor with a variety of labeling schemes, and secondly, the frame error rate curve has a steeper slope. In spite of being very simple and having very low decoding complexity, Code II codes exhibit performance comparable to other schemes that use much more powerful outer codes, such as the one proposed by Franceschini et al. in [16].
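Under the same conventions as for Code I, the Code II construction can be sketched as follows (our own naming; placing the two repeated bits at the last two positions is an illustrative choice, not necessarily the thesis ordering):

```python
def encode_code2(info):
    """Sketch of a Code II encoder: the Code I codeword plus repeats of
    the first and last information bits, giving 2r+1 coded bits."""
    r = len(info)
    out = [0] * (2 * r + 1)
    for i in range(r):
        out[2 * i] = info[i]                     # information bits (even positions)
    for i in range(r - 1):
        out[2 * i + 1] = info[i] ^ info[i + 1]   # Code I parities (odd positions)
    out[2 * r - 1] = info[0]                     # x_{2r-1} repeats x_0
    out[2 * r] = info[r - 1]                     # x_{2r} repeats x_{2r-2}
    return out
```

With the end bits repeated, every single-bit information word now produces a codeword of weight 3, illustrating the improvement from d_min = 2 to d_min = 3.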
In [16], a serially concatenated scheme with novel LDPC codes has been proposed that uses an inner differential encoder. The authors establish the need for LDPC codes that are specifically designed for use with differential encoding and propose codes to that effect. We show in Chapter 5 that the performance of Code II is almost the same as that in [16]; the drawback compared to [16] is that Code II affords a slightly lower code rate. Also, in [17] Ferrari et al. consider serial and parallel concatenated convolutional codes (Turbo codes) as outer codes for a system with a differential encoder as the inner code for phase-uncertain channels. Such systems inherently have much higher complexity than the code families that have been investigated in this work. In the next section, we look at the mechanics of the actual decoding process of the proposed system.

3.2 Iterative Decoding of the Serially Concatenated System

The system is decoded according to Turbo principles, as shown in Fig. 3.3. For coherent decoding, we assume that the channel phase is completely known; hence no channel estimation techniques are required.

Figure 3.3: Decoder structure

In Fig. 3.3, λ_o(x_k) represents the symbol LLRs at the output of the APP DPSK decoder. These symbol LLRs are converted to bit LLRs using a symbol-to-bit LLR conversion module, the workings of which are explained in the next section. The extrinsic information of the APP DPSK decoder becomes the a priori information λ_i(c_k^m) for the APP parity decoder after de-interleaving. The APP LLRs λ_o(c_k^m) at the output of the parity decoder are interleaved, converted to symbol LLRs, and fed to the APP DPSK decoder as the a priori input λ_i(x_k).
Thus, there is an iterative exchange of information regarding the probability of a transmitted bit being 1 or 0 until a stopping criterion is reached. At the end of the decoding process, a measure of the conditional probability in terms of bit LLRs is obtained, which is used for obtaining hard decisions. If the LLR is defined as LLR(u_i) = log[ P(u_i = 1) / P(u_i = 0) ], then the bit is decided to be a 1 if LLR(u_i) > 0, and 0 otherwise.

3.2.1 Differential M-PSK Decoding

The conditional PDF of the channel symbols y_k for the complex AWGN channel is

p(y_k | x_k) = (1/(πσ²)) exp( −|y_k − x_k|² / σ² ).   (3.3)

This is used to compute the channel metrics in the form of log-probabilities; dropping the terms that do not depend on x_k (the PSK symbols have unit energy), we obtain

λ(y_k) = Re{2 y_k x_k*} / σ²,   (3.4)

where σ² is the variance of the complex AWGN channel, Re denotes the real part of y_k x_k*, and (·)* denotes complex conjugation. These metrics are then fed to the inner APP decoder for the differential code, along with a priori information p_A(w_k) on the input symbols w_k. For the first iteration, no a priori information is available for the inner APP decoder; therefore, uniform a priori probabilities are used. The APP differential decoder is the inner decoder of the serially concatenated system and can also be seen as a recursive non-systematic convolutional code with a regular, fully connected trellis. A three-stage differential 4-PSK trellis section, corresponding to three symbol time intervals, is shown in Fig. 3.4 as an example (the following comments are equally valid for an 8-PSK trellis). The actual trellis is fully connected, but for the sake of clarity only the first section is drawn as such. The trellis section depicts two decoding paths: the solid bold path is the correct path, and the dashed path above it results from a phase rotation of π/2 radians. The differential 4-PSK symbol w_k and the transmitted symbol x_k are represented as a pair (w_k, x_k) along the branches of the trellis.
The point to be noted here is that while the π/2 phase rotation affects the transmitted symbols, the differential (information) symbols w_k are identical on both paths. The trellis and the code are thus rotationally invariant. This rotational invariance will be significant when considering decoding without phase synchronization.

Figure 3.4: Differential 4-PSK trellis section

The APP decoding is performed using a Log-MAP-BCJR algorithm as presented in Section 2.4.1. The forward metric A_k(S_k) and the backward metric B_k(S_k) for the kth instant are calculated using the max* operation as follows:

A_k(S_k) = max*_{S_{k−1}} [ A_{k−1}(S_{k−1}) + Γ_k(S_{k−1}, S_k) ],
B_k(S_k) = max*_{S_{k+1}} [ B_{k+1}(S_{k+1}) + Γ_k(S_k, S_{k+1}) ].   (3.5)

S_k denotes the state at time instant k, and Γ_k(S_{k−1}, S_k) is given by

Γ_k(S_{k−1}, S_k) = λ(y_k) + λ_i(x_k),   (3.6)

where x_k is the input symbol associated with the transition from state S_{k−1} to S_k, and λ(y_k) is computed as in Eq. (3.4). The log-APP calculation for a differential symbol x_k thus becomes

APP(x_k) = max*_{(S_{k−1}, S_k) : S_{k−1} + x_k = S_k} [ A_{k−1}(S_{k−1}) + Γ_k(S_{k−1}, S_k) + B_k(S_k) ].   (3.7)

After obtaining the symbol log-APPs, the symbol-to-bit LLR conversion module is used to get the bit LLRs. The bit at the mth position in the symbol x_k is denoted by c_k^m. The bit LLR for c_k^m is computed as

L(c_k^m) = max*_{x_k : c_k^m = 1} (APP(x_k)) − max*_{x_k : c_k^m = 0} (APP(x_k)).   (3.8)

In order to obtain the extrinsic information from the bit LLRs computed in Eq. (3.8), the APP output of the parity decoder λ_o(c_k^m) from the previous iteration is subtracted from the bit LLRs L(c_k^m). Thus

λ_i(c_k^m) = L(c_k^m) − λ_o(c_k^m).   (3.9)

After deinterleaving, the λ_i's are passed on to the APP parity decoder as a priori information. This is followed by the parity decoding of the code according to the principles of BP decoding, as explained in Section 2.5. The parity decoder outputs are bit LLRs as well.
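The max* operation and the symbol-to-bit LLR conversion of Eq. (3.8) can be sketched as follows (a toy illustration with our own function names; it assumes that bit m of a symbol's label is the mth least significant bit of its index):

```python
import numpy as np

def max_star(a, b):
    """Jacobian logarithm: max*(a, b) = log(exp(a) + exp(b))."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def symbol_to_bit_llrs(symbol_log_app, m):
    """Bit LLRs from symbol log-APPs as in Eq. (3.8): max* over the
    symbols whose label has a 1 at the bit position, minus max* over
    the symbols whose label has a 0 there."""
    llrs = []
    for pos in range(m):
        num = den = -np.inf
        for sym, lp in enumerate(symbol_log_app):
            if (sym >> pos) & 1:
                num = lp if num == -np.inf else max_star(num, lp)
            else:
                den = lp if den == -np.inf else max_star(den, lp)
        llrs.append(num - den)
    return llrs
```

With uniform symbol log-APPs, all bit LLRs are zero; when a single symbol dominates, the bit LLR signs reproduce its binary label.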
To convert the APP output of the parity decoder into a priori information for the PSK decoder, the input to the parity decoder is subtracted from the APP outputs before interleaving and passing on to the PSK decoder. The a priori bit LLRs are converted to a priori symbol LLRs by the bit-to-symbol conversion module. To compute the symbol LLR for symbol x_k at the kth instant, we add the bit LLR at the mth position whenever the mth bit of the binary label of x_k is a one, i.e., bin(x_k)_m = c_k^m = 1, and do nothing otherwise. Algorithmically, we initialise λ_i(x_k) = 0 and apply the following simple rule:

if c_k^m = 1:  λ_i(x_k) = λ_i(x_k) + λ_i(c_k^m);  else: do nothing,   (3.10)

where m ∈ {0, ..., log2(M) − 1}. In the next section we focus on the operation of the APP parity decoder.

3.2.2 Decoding of Parity Check Codes

In Section 2.5.2, the principles of belief propagation decoding and their application to code graphs were briefly introduced. In this section, we describe a forward-backward recursion approach to decoding the outer parity check codes based on BP decoding. All BP-based decoding algorithms need a schedule for updating the values at the various nodes in each iteration. For our scheme, the schedule is to first perform a forward traversal from the first variable node to the last, updating values based on the parity check equations and computing a forward metric. A backward traversal is then done in a similar fashion and a backward metric is computed. Finally, we traverse the entire graph again in the forward direction, and APPs are computed by combining the forward and backward metrics. The input to the parity decoder is the extrinsic information from the PSK APP decoder in the form of LLRs. A section of the code graph is shown in Fig. 3.5, with the edges labelled by the corresponding metrics passed along them. In the following, the APP for a general node 2i, out[2i], is computed as an example.
in[i] denotes the bit LLR values obtained from the symbol-to-bit LLR conversion module after interleaving.

Figure 3.5: Parity decoding at a single node

Forward recursion: The forward recursion metric f_n[i] is the message passed from the (2i − 1)th node to the (2i)th node during the nth iteration. Thus we have only ⌊(2r − 1)/2⌋ forward metrics for Code I and r forward metrics for Code II, and the same applies to the number of backward metrics, where r is the number of information bits per block. The boxplus operation is used at the check node ⊕ to find the ith forward metric:

f_n[i] = (f_n[i − 1] + in[2i − 2]) ⊞ in[2i − 1].   (3.11)

Backward recursion: Similarly, the backward recursion metric b_n[i] is the message passed from the (2i + 1)th node to the (2i)th node during the nth iteration and is computed as

b_n[i] = (b_n[i + 1] + in[2i + 2]) ⊞ in[2i + 1].   (3.12)

Eqs. (3.11) and (3.12) are very general expressions in the sense that any of the exact or approximate methods for evaluating the ⊞ operation (cf. Section 2.5.2) can be used to perform the update operations; we use the Jacobian method as described in Section 2.5.2. The decoding for Code I and Code II proceeds in exactly the same way; the only difference lies in the initialisation of f_n and b_n. For Code I codes, we have f_n[0] = 0 and b_n[k − 1] = 0. For Code II, however, we have f_n[0] = in[2r − 2] and b_n[k − 1] = in[2r − 1]. The forward and backward metrics can actually be computed in parallel, as they are independent of each other; hence this suits the objective of fast decoding as well. To compute the APPs, we combine the f_n[i] and b_n[i] metrics to obtain

out[2i] = (f_n[i] ⊞ in[2i − 1]) ⊞ (b_n[i] ⊞ in[2i + 1]).   (3.13)

The out[2i]'s are the APP outputs of the parity check decoder.
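For concreteness, the sketch below runs one forward-backward pass over a Code I graph using the exact ⊞ of Eq. (2.14). The combination at each information node — its own input plus the two check-to-variable messages — is the standard additive BP variable-node rule; the function names and this particular formulation are ours, not the thesis implementation:

```python
import numpy as np

def boxplus(a, b):
    """Exact check-node LLR combination, Eq. (2.14)."""
    return (np.sign(a) * np.sign(b) * min(abs(a), abs(b))
            + np.log1p(np.exp(-abs(a + b)))
            - np.log1p(np.exp(-abs(a - b))))

def decode_code1(llr_in):
    """One forward-backward pass over the Code I graph (2r-1 input LLRs,
    positive LLR = bit 0 more likely); returns APP LLRs of the info bits."""
    r = (len(llr_in) + 1) // 2
    f = [0.0] * r      # f[i]: message entering info node 2i from the left
    b = [0.0] * r      # b[i]: message entering info node 2i from the right
    for i in range(1, r):
        f[i] = boxplus(f[i - 1] + llr_in[2 * i - 2], llr_in[2 * i - 1])
    for i in range(r - 2, -1, -1):
        b[i] = boxplus(b[i + 1] + llr_in[2 * i + 2], llr_in[2 * i + 1])
    return [llr_in[2 * i] + f[i] + b[i] for i in range(r)]
```

The two recursions touch disjoint state, so they can indeed run in parallel, as noted above.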
After subtracting the inputs in[2i] from the outputs we obtain the extrinsic information of the parity decoder. The extrinsic information thus generated is interleaved and fed to the PSK differential decoder as a priori information. The next section looks into the mutual information transfer characteristics of the component decoders and EXIT charts for the system in order to predict system performance.

3.3 EXIT Chart Analysis

We now use EXIT charts to gain further insight into the performance of the concatenated system. Input a priori LLRs are generated assuming a Gaussian distribution (see Section 2.6). The component APP decoder takes the a priori LLRs, A, with mutual information I_A, and produces the extrinsic soft information E, with mutual information I_E. The transfer characteristics of the inner PSK decoder and the outer parity check decoder are obtained individually and are the same irrespective of the system they are used in. The interleaving process does not alter mutual information: although the soft information is scrambled by interleaving, the overall distribution, and hence the mutual information, remains the same. Each component decoder is simulated on its own, without the need to implement and simulate the actual concatenated system. EXIT chart analysis thus allows the choice of component codes for optimising the concatenated system performance without lengthy simulation of each code combination. In the next two sections, we look into the mutual information transfer characteristics of each of the component decoders described in Section 3.2. Unless mentioned otherwise, the transfer characteristics for the APP DPSK decoder are for 8-PSK constellations.

3.3.1 DPSK Decoder Transfer Characteristics

In Fig. 3.6 we see the mutual information transfer characteristics of the APP DPSK decoder with various labelings at a symbol SNR of 10 log10(Es/N0) = 6.5 dB.
The inner decoder receives the channel metrics for the transmitted symbols in addition to the a priori information, so the transfer characteristics also depend on the channel SNR. The abscissa (x-axis) is the mutual information at the input of the PSK decoder (in the form of a priori information), denoted by I_a(DPSK). The mutual information at the output of the decoder is denoted I_e(DPSK). The plot thus gives an idea of how well the decoder can make use of the information provided to it.

Figure 3.6: Mutual information transfer chart for the APP DPSK decoder at 10 log10(Es/N0) = 6.5 dB.

The steeper the slope of the transfer curve, the greater the performance improvement over iterations. The mutual information transfer chart is therefore a good way of predicting whether a certain scheme is amenable to iterative decoding. For example, in Fig. 3.6 we see that the transfer curve for Gray labeling is rather flat compared to the other labeling schemes, so we can conclude that Gray labeling is less suitable for iterative decoding. It can also be seen that for all labelings apart from Gray, the slope is higher initially and flattens out somewhat at higher a priori mutual information, indicating that the per-iteration gain diminishes as the number of iterations increases. We also see that even with I_a = 0 (no a priori information), we have I_e ≠ 0.

3.3.2 Parity Check Decoder Transfer Characteristics

The transfer characteristics for Code I and Code II are shown in Fig. 3.7 and Fig. 3.8. The parity check decoder does not receive any input from the channel; it receives extrinsic information from the inner PSK decoder only. Thus its transfer characteristics are independent of the operating SNR of the system. The characteristic curves in the transfer chart are obtained without any assumptions about the serially concatenated system.
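Both sets of curves are traced the same way: sweep the quality of the Gaussian a priori input and measure the mutual information of the extrinsic output. A minimal sketch of the two measurement primitives (the consistent-Gaussian a priori model of Section 2.6, and the standard ergodic mutual information estimate from LLR samples); function names and parameters are ours:

```python
import math
import random

def gen_apriori_llrs(bits, sigma, rng):
    """Consistent Gaussian a priori model (Section 2.6): for bit b mapped to
    x = 1 - 2b, the a priori LLR is A ~ N(x * sigma^2 / 2, sigma^2)."""
    return [(1 - 2 * b) * sigma * sigma / 2 + sigma * rng.gauss(0.0, 1.0)
            for b in bits]

def mutual_info(bits, llrs):
    """Ergodic estimate I(X; A) ~= 1 - E[log2(1 + exp(-x * A))]."""
    acc = 0.0
    for b, a in zip(bits, llrs):
        t = -(1 - 2 * b) * a
        # numerically safe log2(1 + e^t)
        acc += t / math.log(2) if t > 30 else math.log1p(math.exp(t)) / math.log(2)
    return 1.0 - acc / len(bits)
```

Sweeping sigma from 0 upward drives I_a from 0 to 1; feeding the generated LLRs to a component decoder and applying `mutual_info` to its extrinsic output yields one point of the transfer curve per sigma.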
Transfer curves for r = 2, 3, 5 and 10 are plotted in Figs. 3.7 and 3.8. For Code I, r = 2 gives the highest code rate of 2/3, whereas for Code II the highest code rate considered is 10/21, with r = 10. In Fig. 3.7 it can be seen that as we increase r, there is a downward shift in the transfer curves, indicating better performance at lower SNR. The tradeoff is that increasing r also reduces the code rate, which asymptotically approaches 1/2. With Code II, on the other hand, as we increase the code rate by increasing r, the transfer curves shift upward, so the Turbo cliff onset occurs at a higher SNR with increasing r. This will become clearer in the next section on convergence prediction.

Figure 3.7: Mutual information transfer chart for Code I
Figure 3.8: Mutual information transfer chart for Code II

3.3.3 Convergence Prediction

EXIT charts can be used to predict convergence of iterative decoding systems by plotting the mutual information transfer curves of both component decoders on the same graph. For the inner decoder the transfer characteristics are plotted as described in Section 3.3.1; for the outer decoder the axes are interchanged, which in our case means that I_e of the parity check decoder is on the x-axis while I_a is on the y-axis. If there is a tunnel (gap) between the two curves thus plotted, we conclude that convergence is possible. The SNR value at which the tunnel just opens up is known as the threshold SNR, and this is approximately the value at which the onset of the Turbo cliff occurs. The BER drops sharply beyond the threshold, with an increase of only a few fractions of a dB in SNR. A sample EXIT chart for the serially concatenated system presented here is shown in Fig. 3.9. A rate 3/5 parity check code (of the Code I family) and HSL have been used to obtain the transfer curves in Fig. 3.9. The staircase curve shown in Fig.
3.9 is known as the decoding trajectory and is the path the iterative decoder follows as it processes soft information. The decoding starts with the PSK decoder, so we project a straight line from the point I_a(DPSK) = 0 onto the transfer curve of the parity decoder. The point of projection on the parity decoder curve is the value of the mutual information available at the input of the parity decoder after the first pass. From this point on the parity decoder curve we drop a projection onto the PSK decoder curve; the point of intersection gives the a priori information available at the input of the PSK decoder at the beginning of the second pass through the decoding process. Each stair of the staircase curve thus represents the transfer of extrinsic information between the component decoders in one iteration. Clearly, the trajectory closely matches the PSK decoder and parity decoder characteristics.

Figure 3.9: Decoding trajectory of the serially concatenated system.

It is very important that the transfer characteristics of the component decoders match each other well for the decoding scheme to improve over iterations. The transfer characteristics for Code II match the DPSK decoder characteristics better than those of Code I, hence convergence can be achieved at a lower SNR. But Code II has a lower code rate, so depending on the restrictions imposed by the design requirements we may use Code I or Code II codes as the outer code. For example, if we need good performance at very low SNR values we can use Code II codes, whereas to transmit more bits per channel use we can use Code I codes. The choice may also depend on the target BER performance, since Code I codes floor out in the high-SNR region beyond the threshold. Hence, if the desired BER performance is achieved under the given SNR constraint, Code I codes are a viable option.
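The staircase construction can be mimicked numerically: extrinsic information out of one decoder becomes a priori information for the other, alternating between the two transfer curves. A sketch with toy monotone curves standing in for the measured characteristics (the curves and function names are illustrative assumptions, not the thesis's measured data):

```python
def decoding_trajectory(t_inner, t_outer, iters=20):
    """Trace the EXIT-chart staircase. t_inner and t_outer are transfer
    functions I_a -> I_e for the inner and outer decoders; each iteration
    takes one vertical step onto the inner curve and one horizontal step
    onto the outer curve, returning the visited (I_e_inner, I_a_inner) points."""
    ia = 0.0
    path = [(ia, 0.0)]
    for _ in range(iters):
        ie_inner = t_inner(ia)   # inner decoder produces extrinsic info
        ia = t_outer(ie_inner)   # outer decoder returns new a priori info
        path.append((ie_inner, ia))
    return path

# Toy curves with an open tunnel between them: the trajectory climbs
# towards the upper right, i.e., the iterations converge.
inner_curve = lambda x: min(1.0, 0.4 + 0.6 * x)
outer_curve = lambda x: min(1.0, 0.9 * x ** 0.5)
```

With these toy curves the trajectory rises monotonically to the crossing point of the two curves; if the curves intersected at low mutual information (a closed tunnel), the staircase would stall there instead, which is exactly the threshold behaviour read off the chart.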
Code II codes will, however, always afford better performance in terms of error floor and threshold SNR.

3.4 Mixed Labeling

In this section we present a concept that will subsequently be referred to as mixed labeling. Rather than using a single constellation labeling for the transmission of a block, we can use more than one labeling scheme for the same block. For example, we can transmit half the symbols according to Labeling A and the other half according to Labeling B. We can also use more than two labelings, with any proportion of symbols mapped according to a given labeling. The performance benefits can be both a lower onset of the Turbo cliff and a lower error floor. Some labelings, such as HSL, typically produce a lower error floor, while the Ungerboeck labeling has the advantage of a much lower onset of the Turbo cliff; these two, for example, can be used in a 1:1 ratio to obtain a labeling scheme with good performance in terms of both the Turbo cliff onset and the error floor. The design of mixture schemes becomes particularly easy given that we can obtain the mutual information transfer curve of a mixture labeling, I_mix, as

I_mix = Σ_{i=1}^{N} α_i I_i, (3.14)

where I_i denotes the mutual information transfer vector of component labeling i, α_i denotes the fraction of symbols labeled using labeling i, and N is the number of component labelings. To illustrate, Fig. 3.10 shows the transfer characteristics of a mixed labeling that uses Gray labeling and Antigray labeling in a 1:1 ratio. The Antigray labeling gives a lower mutual information at the output of the PSK decoder for small values of the mutual information at the input, whereas the transfer characteristics of Gray labeling are flatter. When both labelings are used in a 1:1 ratio we obtain the transfer characteristics of the mixed labeling.
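Since Eq. (3.14) is just a pointwise convex combination of the component transfer curves, designing a mixture is a one-liner once the component curves have been measured. A sketch (the function name and list-based curve representation are our assumptions):

```python
def mixed_labeling_curve(curves, fractions):
    """Eq. (3.14): pointwise convex combination of component transfer curves.

    curves    -- list of equal-length lists of I_e samples over a common I_a grid,
                 one list per component labeling
    fractions -- symbol fractions alpha_i for each labeling, summing to 1
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    return [sum(a * c[j] for a, c in zip(fractions, curves))
            for j in range(len(curves[0]))]
```

For instance, averaging a steep Antigray-like curve with a flat Gray-like curve in a 1:1 ratio yields a curve halfway between the two at every I_a grid point, which is the shaping effect described below.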
The mixed labeling curve has higher output at low I_a values than Antigray alone, and at the same time is less flat than the curve for Gray labeling. Thus, a mixture of labelings can be used to shape the characteristics at the output of the decoder so that they better match the outer decoder characteristics.

Figure 3.10: Example of design with mixed labeling. Antigray and Gray labeling have been used in a 1:1 ratio.

Chapter 4

Non-Coherent Decoding

In this chapter, we consider the scenario where the channel phase is a priori unknown at the receiver side and hence decoding is done without explicit channel state information (CSI). Accurate carrier phase acquisition and tracking on physical channels, required for coherent decoding, is usually difficult or impractical in many systems due to excessive phase noise or frequency hopping [11]. The implementation of the algorithms for coherent decoding in such cases consumes more area on the chip than the decoder itself [9]. This motivates the need for noncoherent detection, i.e., detection with unknown carrier phase. All the approaches considered in this chapter perform the decoding noncoherently, assuming no prior knowledge of the channel phase at the receiver and requiring no pilots for channel estimation. The most popular method for decoding differentially encoded symbols is differential detection. Since differential detection suffers a 3 dB performance loss compared to coherent decoding of absolute PSK symbols, Divsalar and Simon [49] proposed to bridge the loss by increasing the observation interval to more than two received symbols. This general class of window-based maximum likelihood (ML) detection methods is known as Multiple Symbol Differential Detection (MSDD).
The MSDD block demodulators make use of the memory of a small block only, and further improvement can be obtained by considering a larger block; however, the complexity of ML decoding increases exponentially with the block size. Hoeher and Lodge propose a trellis-based approach in [13] which makes use of the BCJR algorithm. It is reported in [13] that an APP DPSK demodulator performs asymptotically as well as coded coherent PSK without differential encoding on the AWGN channel, while it can marginally outperform the coherent PSK case in Rayleigh fading. The scheme in [13] does not fully realise the performance gains achievable by iterative processing, since the symbols are mapped using Gray labeling; the gains seen in [13] can be improved by using mapping schemes more suitable to an iterative decoding framework. In the following, we focus only on trellis decoding approaches for the unknown channel phase case, as they make use of all past and future samples. To begin with, we describe the channel phase models considered, in the next section.

4.1 Channel Phase Models

The particulars of the channel phase did not matter in the coherent decoding case, as we assumed perfect information about the channel state. The channel used here is the same complex AWGN channel as for coherent decoding, but for the noncoherent case we need to characterise the stochastic behaviour of the channel phase. As mentioned in Section 2.2, the channel coefficient in polar form is given by h_k = |h_k| exp(jθ_k), where |h_k| is the magnitude of the channel coefficient and θ_k is the channel phase at the kth instant. We consider two models for the behaviour of θ_k:

• First, we consider the case where the phase of the channel is constant for the entire block of symbols, i.e., the unknown channel phase is given by θ_k = Θ over the entire length of the block/frame.
This model is appropriate for short packet transmission systems or frequency hopping, where the phase can be assumed approximately constant over one transmission block [9].

• The second phase model is a Wiener noise, also known as the Gaussian Random Walk (GRW) model [11]. The channel phase θ_k evolves as

θ_k = θ_{k−1} + Δθ_k, (4.1)

where the phase increments Δθ_k are zero-mean independent identically distributed (i.i.d.) Gaussian with variance σ_Δ². This model can be used to represent phase jitter from oscillator instability and has been investigated, e.g., in [9],[11].

4.2 Decoding Algorithms

For PSK transmissions the channel phase noise can be particularly harmful, as all the information is encoded in the phase of the transmitted symbol. Differential encoding is thus typically used for PSK transmissions in a phase-noisy environment. However, the EXIT chart for the 8-PSK case of our system, shown in Fig. 4.1, indicates that proper measures to deal with the channel phase are required even with differentially encoded symbols. Fig. 4.1 shows the EXIT chart for different phase offsets with a Code I code with r = 2 (cf. Section 3.1.1) as outer code and the HSL scheme for an 8-PSK constellation. The APP DPSK decoder operates the same way as for coherent decoding, without taking the channel phase into account. It can be seen that at a symbol SNR of 10 log10(Es/N0) = 6.5 dB the system has a threshold for a channel phase offset of 0 radians, which corresponds to the case of perfect channel information. However, as the phase offset becomes higher and higher, the APP DPSK decoder transfer characteristics exhibit a downward drift. Moreover, the transfer characteristics of the component decoders intersect at relatively low values of mutual information.
This implies that the decoder will be stuck at high BERs and convergence is not possible. Thus we conclude that proper measures are required at the receiver when the channel phase offset is non-zero.

Figure 4.1: EXIT chart for the DPSK decoder with different phase offsets at symbol SNR 10 log10(Es/N0) = 6.5 dB. HSL and a rate 10/21 outer parity check code have been used as an example case.

Before describing our approach, we provide a brief review of some of the other trellis-based decoding methods proposed in the literature [9],[11],[50] for the noncoherent case under the phase models considered. These methods are used for comparison with our approach. Section 4.2.1 presents the method proposed by Howard and Schlegel, in which the channel phase is estimated and the channel metrics used by the APP DPSK decoder are computed as in coherent decoding. Sections 4.2.2 and 4.2.3 describe methods based on a quantization approach.

4.2.1 Howard & Schlegel Method

The method for channel estimation described here was proposed by Howard and Schlegel in [9]. They use an APP-based channel estimation technique; there is no overhead, as no pilot symbols are used. Information about the channel is extracted from the APPs of the symbols. The general setup is shown in Fig. 4.2. The estimation module can be thought of as an adjunct unit of the APP DPSK decoder.

Figure 4.2: Block diagram for the Howard & Schlegel APP channel estimation method (inner M-PSK APP soft-decision decoder, estimation filter, and outer parity APP soft-decision decoder).

The method makes use of the APP information available at the output of the DPSK decoder. By subtracting the a priori information from the output APPs, the APP DPSK decoder generates extrinsic information.
We see in Fig. 4.1 that even with no a priori information (I_a = 0), the APP decoder provides some extrinsic information in the presence of a phase offset. Extrinsic information is available even in the worst case of a phase offset: for 8-PSK, a phase offset of π/8 represents the worst scenario, as the transmitted symbol is then shifted to a point exactly midway between two signal points of the constellation, and Fig. 4.1 shows non-zero extrinsic information for this case as well. From this we infer that an external method of generating initial phase information is not required, and estimation methods based on APP information can be devised for this purpose. The APP DPSK decoder generates symbol (log) probabilities for both the differential symbols and the transmitted M-PSK symbols. The log probabilities of the transmitted symbols can be used to estimate the channel over subsequent iterations, and convergence can be achieved. The actual mechanics of the estimation are motivated by a simple filtering estimator proposed by Schlegel and Grant in [10]. The expectation of the received symbol can be written as

E[y_k] = h_k E[x_k]. (4.2)

Here E[x_k] is the expectation over the APP symbol probabilities P(x_k) at time k from the APP DPSK decoder,

E[x_k] = Σ_{m=0}^{M−1} exp(j2πm/M) P(x_k = exp(j2πm/M)).

An instantaneous channel estimate is found as

ĥ_k = y_k x̂_k*, (4.3)

where x̂_k is the mean E[x_k] normalised to unit magnitude, i.e., x̂_k = E[x_k]/|E[x_k]|. A point to note is that although the a priori mean E[x] = 0, the a posteriori mean E[x_k] ≠ 0, since the APPs P(x_k) are not uniform. This is because of the recursive nature of the differential encoder: at the output of the data source any bit pattern is equiprobable, and the interleaving is random enough to maintain a uniform probability over the PSK symbols.
However, the symbol probabilities after differential encoding are no longer uniform, and some symbols may appear with higher frequency than others. At each iteration, the APP DPSK decoder computes soft probability estimates of the channel symbols x_k, and the channel estimator uses them to compute the instantaneous channel estimate according to Eq. (4.3). The individual channel coefficient estimates are then passed through a filter to refine them. For both constant offset and Wiener phase noise, the filtered channel estimates are used to compute the coherent branch metrics, p(y_k|x_k) ∝ exp(−|y_k − ĥ_k x_k|²/σ_n²), used by the APP decoder. For the constant phase offset case, the best filter is the averaging filter

ĥ = (1/N) Σ_{k=1}^{N} ĥ_k, (4.4)

where N is the frame length. The phase tracking capability of this simple estimation technique for the constant offset case is shown in Fig. 4.3, for an outer rate 2/3 parity check code using HSL at a symbol SNR 10 log10(Es/N0) of 6.8 dB. For the case where the channel phase is modelled as Wiener noise, the use of an exponentially decaying moving average filter is suggested in [9]. The filter can be represented as

ĥ_k = (1 − a) Σ_{j=1}^{k} a^{k−j} ĥ_j, (4.5)

where a is an exponential decay parameter whose value depends on the variance of the noise process. The estimation method is particularly attractive as there is no transmission overhead from pilot symbols, and the channel estimation uses very simple operations and filtering methods. More specifically, the estimation algorithm relies on the rotational invariance inherent in the transmission scheme to extract phase information; e.g., in an 8-PSK scheme any rotation of the constellation modulo π/4 results in the same decoded symbols. Thus the computational burden is also kept at a minimum.
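The estimator of Eqs. (4.2)-(4.4) can be sketched in a few lines. This is an illustrative reconstruction under our assumptions (standard M-PSK alphabet exp(j2πm/M), one APP vector per received sample), not the authors' implementation:

```python
import cmath
import math

def hs_channel_estimates(ys, symbol_apps, M=8):
    """Howard & Schlegel style APP-based channel estimation (sketch).

    For each received sample y_k, form the posterior mean E[x_k] under the
    decoder's symbol APPs, normalise it onto the unit circle (Eq. 4.3), and
    take h_k = y_k * conj(x_k). A block average of the instantaneous
    estimates then gives the constant-phase-offset estimate (Eq. 4.4).
    """
    h_inst = []
    for y, probs in zip(ys, symbol_apps):
        ex = sum(p * cmath.exp(2j * math.pi * m / M)
                 for m, p in enumerate(probs))      # E[x_k] over the APPs
        xhat = ex / abs(ex)                         # assumes E[x_k] != 0
        h_inst.append(y * xhat.conjugate())         # Eq. (4.3)
    h_avg = sum(h_inst) / len(h_inst)               # averaging filter, Eq. (4.4)
    return h_inst, h_avg
```

For Wiener phase noise, the block average would be replaced by the exponentially decaying filter of Eq. (4.5), weighting recent instantaneous estimates more heavily.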
Although the estimation method gives good performance for the constant phase offset case, it was found during the course of this work that the results reported in [9] for the Wiener phase noise case are invalid.

Figure 4.3: Channel estimation with the Howard & Schlegel method for constant phase offset. The figure shows the capability of the method to correctly estimate the channel phase for a rate 2/3 outer code at symbol SNR 10 log10(Es/N0) = 6.8 dB. Except for a phase offset of π/8, the algorithm is able to lock onto the correct channel phase in fewer than 20 iterations.

The algorithm fails to track the phase in this case, and this has been confirmed through private correspondence [19] with the authors of [9]. The method has very limited tracking capability when the phase noise variance is greater than 1 × 10⁻⁵. The contribution of this work is thus significant in light of the fact that the method in [9], although computationally simple, fails for this important case. The decoder failure for the Howard & Schlegel method with Wiener phase noise will become more evident in Chapter 5 from the results of the EXIT chart analysis.

4.2.2 Peleg-Shamai-Galan (PSG) Method

In order to gain a better perspective in terms of computational complexity and performance, we will compare our approach with some other approaches in which the channel phase is discretized over the entire range [0, 2π].
This approach is referred to in the literature as discretized-phase BCJR, since after discretization of the channel phase the phase trajectories are represented on a trellis with L levels per modulation state, and the BCJR algorithm subsequently operates on an expanded trellis (also referred to as a supertrellis) [51]. The first approach is due to Peleg et al. [11], who proposed a method for non-coherent M-PSK communication with iterative decoding over an AWGN channel with phase noise. The received symbols are represented as y_k = x_k exp(jθ_k) + n_k = exp(jψ_k) + n_k, where the transmitted symbol is x_k = exp(jφ_k) and ψ_k = φ_k + θ_k. The random phase introduced by the channel is handled at the decoder by assuming that the phase noise θ_k is Markovian. The modulation phases φ_k are independent, implying that ψ_k is a Markovian process as well. In [11], the approach is to discretize ψ_k into ML equi-spaced values; the original M-state trellis thus becomes an ML-state fully connected trellis. An expansion of the trellis inherently implies an increased computational load when the BCJR algorithm is used, as there are more states to traverse. It is reported in [11] that with L = 8 no perceptible degradation in performance is observed. We explain the concepts using an example with an 8-PSK scheme and four equi-spaced levels between the states. The trellis representation of the discretized channel phase can be seen in Fig. 4.4: there are 32 states, and the phase difference between successive states is 2π/32. In a non-expanded fully connected trellis, it is assumed that a given current state can have any possible previous state, and thus metrics are computed for all possible transitions.

Figure 4.4: Trellis diagram for the PSG method. Figs. 4.4(a) and 4.4(b) show an example case with M = 8 and L = 4; the original 8-state trellis becomes a 32-state trellis. The states s_k represent the states of the non-expanded trellis.
Fig. 4.4(b) shows the restriction, in the supertrellis, to only three possible previous states per previous state of the non-expanded trellis. In order to model the phase noise in the expanded trellis while limiting the number of transitions, we need to assume a model for the phase noise. The phase noise model for the expanded trellis is a discrete random walk (DRW). This implies that although the non-expanded M-state trellis is fully connected, the expanded ML-state trellis is not. The DRW assumes that phase differences of ±2π/ML, where L is the number of equi-spaced levels between two neighbouring states, occur with probability p_θ/2 each, and that transitions with a phase difference of more than 2π/ML have probability 0. Thus there are only 3M branch transitions per current state of the trellis, as represented pictorially in Fig. 4.4(b). Mathematically,

p_Δθ(Δθ) = { p_θ/2, Δθ = ±2π/ML; 1 − p_θ, Δθ = 0 }, (4.6)

where p_Δθ(Δθ) denotes the transition probability of the phase, and the value of p_θ depends on the statistical behaviour of the channel phase. Specifically, the PSG method needs to know the variance of the channel phase, since p_θ must be adjusted so that the variance of the DRW phase noise increments, on which the DPSK decoder is based, equals the variance σ_Δ² of the actual channel phase. For the Gaussian random walk, p_θ can be computed from

E[Δθ²] = (p_θ/2)(2π/ML)² + (p_θ/2)(−2π/ML)² + (1 − p_θ) · 0 = σ_Δ², i.e., p_θ = σ_Δ² (ML/2π)². (4.7)

There is an obvious mismatch between the actual channel phase model (cf. Section 2.2) and the model used by the receiver; however, the phase noise difference over an interval of N symbols has the same variance, and the PDF of the DRW phase noise difference tends to a Gaussian for increasing N [11].
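The variance-matching step of Eqs. (4.6)-(4.7) is simple enough to sketch directly; the function name and the dictionary representation of the transition PMF are our own conventions:

```python
import math

def psg_transition_prob(sigma_delta, M=8, L=4):
    """Match the DRW step probability p_theta to GRW phase noise with
    standard deviation sigma_delta (Eq. 4.7):
        E[dtheta^2] = p_theta * (2*pi/(M*L))**2 = sigma_delta**2
        =>  p_theta = (sigma_delta * M * L / (2*pi))**2
    Returns the transition PMF of Eq. (4.6) over {-2*pi/ML, 0, +2*pi/ML}.
    """
    p = (sigma_delta * M * L / (2 * math.pi)) ** 2
    assert p <= 1.0, "phase noise too strong for this quantisation"
    step = 2 * math.pi / (M * L)
    return {-step: p / 2, 0.0: 1 - p, +step: p / 2}
```

By construction the PMF sums to one and its second moment equals σ_Δ², which is exactly the matching condition the PSG decoder relies on.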
APP Detection

The branches of the trellis connect states with a phase difference Δψ_k equal to the sum of the differential modulation phase Δφ_k = φ_k − φ_{k−1} and the phase noise increment Δθ_k. The trellis matches the discrete random walk model used to approximate the phase noise. Thus 3M branches originate in each state, corresponding to the M possible modulation values and the three possible phase noise steps of 0 and ±2π/ML radians (Eq. 4.6). However, the model only approximates the behaviour of ψ_k when θ_k is not finite-state, as is the case for the Gaussian random walk. The trellis thus has ML states and 3M²L branches. The probability P(r, η) of a path η through the trellis can be computed using the BCJR algorithm; for the AWGN channel with complex noise variance σ_n²,

p(r_k|ψ_k) = (1/(πσ_n²)) exp(−|r_k − exp(jψ_k)|²/σ_n²).

The most computationally intensive part is the modulation decoder, where 3M²L branch operations are performed per symbol per iteration. Thus, with 8-PSK modulation and 4 levels of quantization we have 768 branch operations per symbol per iteration. Some simplifications are therefore suggested by Peleg et al. in [11], considering that the symbol metrics p(r_k|ψ_k) corresponding to branches which originate or terminate in the same state and differ only in the phase noise step Δθ are approximately equal; the corresponding branch metrics then differ only in the multiplicative term p(Δθ_k). Therefore, the forward and backward recursion metrics of the BCJR algorithm can be computed using only the branches with zero phase noise difference Δθ_k, with a small correction term applied to them (see [11] for details). In the next section, Section 4.2.3, we consider an approach that is similar to the one described here but limits the allowed branch transitions, and thus offers reduced computational complexity; the tradeoff is that it applies only to a block fading channel.

4.2.3 Rong Chen Method

Rong Chen et al.
propose a sub-optimal non-coherent detector for the block fading channel in [50]. The approach in [50] can be regarded as a special case of the PSG method, in the sense that it applies only to the constant offset case. Block modulation codes have also been investigated by Sun and Leib [52], Warner and Madhow [53], and Peleg and Shamai [54]. The detector assumes that the channel phase ψ_k takes on only L discrete values in the interval [0, 2π]:

ψ_k ∈ {0, (1/L)2π, (2/L)2π, ..., ((L − 1)/L)2π}. (4.8)

Instead of quantizing the phase shift ψ_k over the interval [0, 2π] as in Eq. (4.8), we can quantize it over the sub-interval [0, 2π/M], thus reducing the number of quantization levels. The reduced number of phase quantization levels is L′ = L/M, and any l can be written as l = m′L′ + l′, where 0 ≤ m′ < M and 0 ≤ l′ < L′.

Figure 4.5: Trellis section for noncoherent decoding with the Rong Chen method

The conditional probabilities of the received symbols can be used to compute the symbol APPs recursively in a BCJR-like algorithm. The overall complexity is equivalent to L′ times the complexity of a BCJR algorithm applied to coherent demodulation of the modulation code. To elucidate this further, the trellis section suitable for a BCJR implementation is shown in Fig. 4.5. There are a total of N such trellis sections and N trellis state classes. Each state s_k represents a 2-tuple (ψ_k, x_k), where ψ_k ∈ {0, 2π/ML′, ..., (L′ − 1)2π/ML′} and x_k ∈ M. The number of possible values for each state is M · L′. The reduced computational complexity compared to the PSG method [11] comes from the fact that branch transitions are allowed from state s_k = (ψ_k, x_k) at the kth instant to s_{k+1} = (ψ_{k+1}, x_{k+1}) at the (k + 1)th instant if and only if ψ_k = ψ_{k+1} and x_{k+1} = (x_k + w_k) mod M.
Thus we can conclude that each trellis section is composed of complete bipartite graphs on 2M vertices, as shown in Fig. 4.5. This implies that the method is equivalent to the PSG method with p_θ = 0. However, its use is restricted to the block modulation (constant phase offset) case; it is not suitable for a scenario where the channel phase varies over the symbols, e.g., as in the Gaussian random walk model. Since state transitions are not allowed between states with different values of ψ_k, the total number of branches within each trellis section is M²L′. It is well known that the complexity of the BCJR algorithm is proportional to the number of trellis branches [55]. Hence, a method that can track the channel phase without an expansion of the trellis will afford reduced complexity, provided the operations performed per trellis section to track the phase are simple enough not to overwhelm the decoding complexity. In the next section we describe a method that tracks the phase with no trellis expansion, with performance comparable to the benchmark methods described above.

4.3 Our Approach - α Method

When the channel phase has a relatively high variance, the fluctuations in the instantaneous phase are large over successive samples. To estimate the channel phase in such a situation, we need a method that gives more weight to channel estimates from more recent symbols. In effect, we need to introduce a forgetting factor for past channel estimates. Howard and Schlegel [9] tried to do this using a moving average filter over the estimated channel phases up to time k − 1 to obtain an estimate of the channel phase at time k. As mentioned earlier, their method works only for a constant phase offset or very low phase noise variances.
For moderate to high variance of the phase noise, this approach fails completely. In this section, we propose a novel channel estimation method which has very good performance for both phase noise models and has a much lower complexity compared to methods that expand the trellis (Section 4.2). The common aspect between the method in [9] and our method is that both are based on the use of a posteriori probabilities. We first describe the estimation method and then demonstrate, through EXIT chart results, the phase tracking capability of the algorithm. The channel estimation is built into the forward-backward algorithm of the PSK decoder. Over iterations, as the quality of the APPs improves through iterative information exchange between the component decoders, the quality of the estimate improves as well. Thus, there is a positive coupling between the quality of the channel estimates and the APPs at the output of the DPSK decoder. Rather than having one single channel estimate for each trellis section, we maintain a tentative channel estimate for each state of the trellis section. Let us denote the channel estimate at the kth time instant by h_k[s_m], where s_m denotes the mth state at the kth instant and m ∈ {0, ..., M−1} for an M-DPSK transmission. Similar to the Howard & Schlegel approach, we use a forgetting factor α, but the method of updating the channel estimate is fundamentally different. The update is based on the channel estimate of the state that has the highest forward or backward metric during a forward recursion or backward recursion, respectively. The estimation process is explained here considering the forward recursion. The initial channel estimates are given by

Initialisation: h_0[s_m] = y_0 x*[s_m],

where x[s_m] is the PSK symbol corresponding to state s_m. The channel metrics for the first symbol, y_0, are computed coherently using the initial channel coefficients as computed above.
The forward metric is computed as in Eq. (3.5). In order to compute the forward metric A_k(s_m) for a given next state s_m, the Log-BCJR algorithm requires a max* operation over all possible previous states at time k − 1. The previous state corresponding to the highest computed forward metric for time k is denoted by s_max. The channel estimate for the state s_m at the kth time instant is given by

Update equation: h_k[s_m] = α h_{k−1}[s_max] + (1 − α) y_{k−1} x*_{k−1}[s_m]    (4.9)

Here h_{k−1}[s_max] is the channel estimate corresponding to the state s_max at the (k−1)th instant, and α is the forgetting factor, which can take values between 0 and 1. The exact value of α depends on the variance of the phase noise of the channel. The estimate h_k[s_m] thus obtained is used to compute the channel metric for state s_m for the next received symbol, y_k. Therefore, at every instant k, we have M channel estimates, one for each of the M states of the trellis, computed according to Eq. (4.9). Fig. 4.6 shows a trellis section for the α-method for time instants k − 1, k and k + 1.

Figure 4.6: Trellis section for the α-method

The estimates computed during the (k−1)th instant are chosen to compute the estimate at the kth instant based on the result of a max* operation, and this estimate is then used to compute the metric for all the states connected to it at the (k+1)th instant. This method is not computationally intensive, as there is no expansion of the trellis and the channel estimation update equations require very few extra operations. The key step here is the determination of an estimate based on the previous state that gives the highest forward metric, for the next time instant.
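A minimal sketch of the per-state update of Eq. (4.9) is given below (our own illustrative code, not the thesis's implementation; numpy and the function names are assumptions):

```python
import numpy as np

def update_estimates(h_prev, s_max, y_prev, x_conj, alpha):
    """Eq. (4.9): h_k[s_m] = alpha*h_{k-1}[s_max] + (1-alpha)*y_{k-1}*x*[s_m].
    h_prev : length-M array of channel estimates at time k-1
    s_max  : index of the previous state with the highest forward metric
    y_prev : received symbol y_{k-1}
    x_conj : length-M array of conjugated PSK symbols, one per state
    Returns the M new estimates for time k."""
    return alpha * h_prev[s_max] + (1.0 - alpha) * y_prev * x_conj

# 8-DPSK example: initialisation h_0[s_m] = y_0 * x*[s_m], then one update step.
M, alpha = 8, 0.99
x = np.exp(2j * np.pi * np.arange(M) / M)  # 8-PSK symbols
y0 = x[2]                                  # a noiseless received symbol
h = y0 * np.conj(x)                        # initial estimates
h = update_estimates(h, 2, y0, np.conj(x), alpha)
```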
Since the forward metric will be highest for the state with the highest likelihood, we can reason that the estimate corresponding to that state is the closest to the actual channel coefficient. Also, in a Wiener noise model the highest correlation is between successive samples. Thus it is reasonable to update the channel estimate for a state of the trellis at time k based on the estimate of that state at time k − 1. The fluctuation in the values due to the variance is accounted for by the forgetting factor α: the higher the variance, the smaller the value of α should be. The update method used here is similar in nature to the one proposed by Anastasopoulos and Chugg [56] for phase tracking in Turbo coded systems. The difference from [56] lies in the use and definition of the parameter α. There, the phase estimation process is modeled as a first-order decision-directed phase locked loop (DD-PLL), and the design parameter α controls the PLL noise equivalent bandwidth [56]. Fig. 4.7 shows the EXIT chart for an example case where channel estimation is done using the α-method.

Figure 4.7: EXIT chart for channel estimation with the α-method. Phase noise variance σ²_Δ = 0.001 and phase offset π/16.

HSL has been used at a symbol SNR of 10 log10(Es/N0) = 7.5 dB and the outer code is a 3/5 parity check code. As mentioned earlier, we can predict convergence in this case, as there is a tunnel between the transfer characteristics of the component decoders. There is a considerably large gap between the transfer curves, implying that the onset of the Turbo cliff is at a much lower value than 7.5 dB. The results for the thresholds for various labelings are presented in Chapter 5.

Chapter 5
Results and Discussion

In this chapter, the designs and algorithms proposed in Chapter 3 and Chapter 4 are analysed and the performance results are presented.
EXIT charts are used extensively for the analysis and convergence prediction of the proposed methods. Based on the results of the analysis, parameters that yield optimum system performance are chosen for both coherent and noncoherent decoding. We first present the results for coherent decoding and derive design rules for the system in terms of the optimum labeling and code rates for the outer code. A similar exercise is then done for the noncoherent case, and we find that the design rules are equally valid there. A robustness analysis of the algorithms presented for noncoherent decoding is then done in Section 5.3, and finally we present BER simulation results in Section 5.4 that validate the results of the analysis. EXIT analysis is done for all cases for both coherent and noncoherent decoding; however, BER results are provided only for the best designs for the system considered. The analysis primarily focuses on the 8-PSK case; BER results are presented for the 4-PSK case as well for the sake of completeness.

5.1 Design Optimization Using EXIT Chart Analysis

Since there are quite a few degrees of freedom in the proposed system design that can be exploited to arrive at the best possible performance at a given code rate, we need to optimize the parameters and evaluate designs. The quickest way to perform such an evaluation, without having to simulate the entire concatenated system, is to use EXIT charts. In the following sections we derive design rules for the proposed system based on the insights gained through its EXIT charts. Initially, we restrict ourselves to the use of only one labeling scheme for a transmission block. The case of mixed labeling is treated in Section 5.2.

5.1.1 Coherent Decoding

This section presents the results for an AWGN channel with perfect knowledge of the channel phase at the receiver.
This simple case is the best way to examine the effects of the code structure and labeling scheme on the performance of the system, as it is not affected by any unknown parameter, unlike the noncoherent case. As mentioned in Section 3.3.3, the threshold SNR is where the tunnel just opens up between the mutual information transfer characteristics of the component decoders. Also, as the shapes of the transfer characteristics of both component decoders affect the value of the threshold SNR, it is reasonable to expect that certain labelings are more suitable to certain code rates than others. Hence, to cover a broader range of code rates, we have considered codes with r = 2, 3 and 10 for the Code I and Code II families, and their performance with different constellation labelings is characterized through EXIT charts. We have codes with rates 2/3, 3/5 and 10/19 for the Code I family and 2/5, 3/7 and 10/21 for the Code II family. The code rate decreases for Code I and increases for Code II with an increase in r.

Parity Code Rate | Bits/channel use | Sym SNR | Bit SNR
2/3              | 2 (2)            | 5.79 dB | 2.79 dB
3/5              | 9/5 (1.8)        | 4.67 dB | 2.12 dB
10/19            | 30/19 (1.58)     | 3.50 dB | 1.52 dB

Table 5.1: Minimum required SNR for reliable transmission using an 8-PSK constellation at a given rate for Code I codes

A better insight into the performance of the code is gained by comparing it with the channel capacity. The SNR required to achieve a given code rate governed by Code I codes is given in Table 5.1 for the 8-PSK constellation, according to the constellation-constrained capacity. The capacity is given both in terms of the symbol SNR (10 log10(Es/N0)) and the bit SNR (10 log10(Eb/N0)). These values represent the minimum SNR required to transmit data reliably with an 8-PSK constellation at a certain rate (bits/channel use). Table 5.2 gives the corresponding values for rates governed by Code II codes.
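The bit-SNR column follows from the symbol-SNR column via Eb = Es/(bits per channel use); a quick check (our own snippet, not from the thesis) reproduces Table 5.1 to within rounding:

```python
import math

def bit_snr_db(sym_snr_db, bits_per_use):
    """Convert 10*log10(Es/N0) in dB to 10*log10(Eb/N0) for a given
    spectral efficiency in bits per channel use."""
    return sym_snr_db - 10 * math.log10(bits_per_use)

# Table 5.1 rows: (symbol SNR in dB, bits/channel use)
rows = [(5.79, 2.0), (4.67, 1.8), (3.50, 30 / 19)]
bit_snrs = [round(bit_snr_db(s, b), 2) for s, b in rows]  # close to [2.79, 2.12, 1.52]
```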
Parity Code Rate | Bits/channel use | Sym SNR | Bit SNR
2/5              | 6/5 (1.2)        | 1.53 dB | 0.74 dB
3/7              | 9/7 (1.29)       | 1.88 dB | 0.79 dB
10/21            | 30/21 (1.43)     | 2.57 dB | 1.02 dB

Table 5.2: Minimum required SNR for reliable transmission using an 8-PSK constellation at a given rate for Code II codes

Table 5.3 gives the threshold symbol SNR required for each of the labeling schemes considered with a Code I outer code for coherent decoding. For a 2/3 parity check code, NAL gives the best performance in terms of the onset of the Turbo cliff, at a symbol SNR of 6.25 dB, which is only 0.46 dB from capacity. For the rate 3/5 and 10/19 codes, however, Ungerboeck labeling gives better performance, and the thresholds are at 0.63 dB and 0.75 dB from capacity, respectively. When we compare all the schemes, we see that both SSP and AntiGray labeling have the highest threshold SNR for a 2/3 outer code, whereas for rates 3/5 and 10/19 AntiGray has the highest threshold. It can also be seen in Table 5.3 that the difference between the lowest and the highest threshold SNRs grows from 0.45 dB (between NAL and AntiGray) for rate 2/3, to 0.75 dB (between Ungerboeck and AntiGray) for rate 3/5, and as high as 1.3 dB (between Ungerboeck and AntiGray) for a rate 10/19 parity check code. Thus, as the code rate decreases, significant gains are obtained by choosing the right labeling scheme. This stresses the importance of optimizing the considered system in terms of choosing the right code rate and labeling scheme, and serves as the motivation for the analysis done here.

Labeling   | 2/3     | 3/5     | 10/19
NAL        | 6.25 dB | 5.4 dB  | 4.7 dB
HSL        | 6.6 dB  | 5.75 dB | 5.0 dB
Ungerboeck | 6.3 dB  | 5.3 dB  | 4.25 dB
SSP        | 6.7 dB  | 6.0 dB  | 5.45 dB
AntiGray   | 6.7 dB  | 6.05 dB | 5.55 dB

Table 5.3: Threshold symbol SNRs for Code I codes - coherent decoding

Similarly, Table 5.4 presents the threshold SNRs for Code II codes.
Here we present values for Gray labeling as well, since its transfer curves provide thresholds at SNR values that are not much worse than those of the other labelings. For Code I codes, however, the transfer characteristics for Gray labeling intersect those of the outer parity decoder even at much higher SNRs compared to the other schemes, and hence it has not been investigated. This was expected, as Gray labeling is known not to give the best gains in an iterative framework [42]. What was not expected, however, was the performance of SSP labeling, which turns out to be the worst among all the labelings considered for Code II. SSP labeling has been proven to be the optimum labeling scheme for the 8-PSK constellation for BICM-ID schemes without differential encoding [24]. However, we find that when differential encoding is used this is not the case. The labeling scheme proposed by Howard and Schlegel is not at either extreme and provides a Turbo onset between the best and the worst case. Ungerboeck labeling gives the best performance in terms of the lowest threshold SNR for Code II codes as well. Although NAL has the same threshold as Ungerboeck for the low code rate of 2/5, its performance gets progressively worse with higher code rate. AntiGray labeling also performs worse as the code rate tends to 1/2. As an illustration, the EXIT chart for Code II codes with a rate 10/21 outer parity check code is shown in Fig. 5.1.

Figure 5.1: EXIT chart for Code II codes depicting threshold SNRs with a rate 10/21 code for coherent decoding

Surprisingly, Gray labeling provides a threshold SNR better than a few other labelings, in spite of the initial flatness exhibited by its transfer characteristics (cf. Fig. 5.1). This can be attributed to the better matching of the transfer curves of Gray labeling with those of Code II codes.
Thus, optimality depends on the matching of transfer characteristics, and the design rule for a given code family and code rate cannot be blindly generalised to other code rates. However, some trends hold across the entire spectrum, a few of which have been mentioned above. For example, Ungerboeck labeling consistently affords the lowest threshold SNRs across all code rates.

Labeling   | 2/5     | 3/7     | 10/21
Gray       | 2.85 dB | 3.2 dB  | 3.8 dB
NAL        | 2.3 dB  | 2.9 dB  | 4.0 dB
HSL        | 2.75 dB | 3.4 dB  | 4.4 dB
Ungerboeck | 2.3 dB  | 2.8 dB  | 3.65 dB
SSP        | 3.0 dB  | 3.7 dB  | 4.8 dB
AntiGray   | 2.65 dB | 3.4 dB  | 4.7 dB

Table 5.4: Threshold symbol SNRs for Code II codes - coherent decoding

A comparison with capacity for the Code II codes shows that the system performs at 0.77 dB from capacity for rate 2/5, 0.92 dB for rate 3/7 and 1.08 dB for a rate 10/21 outer code. This is an interesting observation, since the trend here is opposite to that observed for Code I codes: the distance from capacity increases with decreasing code rate for Code I codes, whereas for Code II codes it increases with increasing code rate. This information is summarised in Table 5.5 for reference. Although Code II codes provide much lower thresholds than Code I codes, they are actually further away from capacity than Code I codes.

Figure 5.2: Comparison with capacity for the proposed serially concatenated structure. The SNR thresholds for the best single labeling and mixed labeling have been plotted.

Code Rate | Distance from capacity | Code Rate | Distance from capacity
2/3       | 0.46 dB (NAL)          | 2/5       | 0.77 dB (Ungerboeck)
3/5       | 0.63 dB (Ungerboeck)   | 3/7       | 0.92 dB (Ungerboeck)
10/19     | 0.75 dB (Ungerboeck)   | 10/21     | 1.08 dB (Ungerboeck)

Table 5.5: Comparison with capacity
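The entries of Table 5.5 are simply the best threshold SNRs of Tables 5.3 and 5.4 minus the capacity limits of Tables 5.1 and 5.2; a quick check (our own snippet):

```python
# Best threshold SNR (dB) per outer code rate, from Tables 5.3/5.4,
# and the capacity limits (dB) from Tables 5.1/5.2.
thresholds = {"2/3": 6.25, "3/5": 5.30, "10/19": 4.25,   # Code I
              "2/5": 2.30, "3/7": 2.80, "10/21": 3.65}   # Code II
capacity = {"2/3": 5.79, "3/5": 4.67, "10/19": 3.50,
            "2/5": 1.53, "3/7": 1.88, "10/21": 2.57}

# Distance from capacity in dB, as reported in Table 5.5.
distance = {r: round(thresholds[r] - capacity[r], 2) for r in thresholds}
```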
The close-to-capacity performance of these simple codes can be better understood from Fig. 5.2, where the 8-PSK capacity curve has been plotted and the points on the graph represent the threshold SNR values for the two code families.

5.1.2 Noncoherent Decoding

Several approaches for the noncoherent case were presented in Chapter 4. In this section we see how each of these methods performs for the two phase models described in Section 4.1. The labeling and code rate optimisation was done for coherent decoding in Section 5.1.1; it remains to be seen whether the same design rules apply when the channel phase is not known a priori. Thus, the objective here is twofold: to verify the design rules of Section 5.1.1, and to compare the performance of the proposed α-method with other benchmark methods for dealing with an uncertain channel phase.

Constant Phase Offset

We start with the simpler case of a constant phase offset. A comparison of the performance of the H & S method, the PSG method and the α-method for a 10/21 outer parity check code and an inner DPSK decoder with 8-PSK is given in Table 5.6, for a phase offset of π/16. The reason we choose an offset of π/16 is to be able to compare with the performance results reported in [9]. For the H & S method, the filter used to obtain an estimate of the channel phase is the averaging estimator, while the α-method uses α = 0.99. Ideally, α should be as close to 1 as possible, since the channel phase is constant for the entire block; however, in terms of threshold SNR there is no visible gain from making α infinitesimally close to 1. The results for the PSG method have been obtained with a quantization to L = 4 levels per state of the non-expanded trellis. When the inter-symbol phase difference (π/4 for 8-PSK) is quantized into 4 levels, π/16 is one of the levels; hence the decoder is able to track the correct offset with just 4 levels.
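That π/16 lands exactly on a quantization level is easy to verify (our own snippet): quantizing the sub-interval [0, 2π/M) into L' = 4 levels for M = 8 gives {0, π/16, π/8, 3π/16}.

```python
import math

# Quantize the sub-interval [0, 2*pi/M) into Lp levels; for 8-PSK (M = 8)
# and Lp = 4 the levels are {0, pi/16, pi/8, 3*pi/16}.
M, Lp = 8, 4
levels = [lp * (2 * math.pi / M) / Lp for lp in range(Lp)]

# The pi/16 offset is exactly the second level, so 4 levels suffice to track it.
assert any(abs(v - math.pi / 16) < 1e-12 for v in levels)
```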
Therefore, no improvement is seen by increasing the number of levels for a phase offset of π/16. This will, however, not be the case if the phase offset is completely random (but still constant for one block) and does not correspond to a state of the expanded trellis. This result is intuitive, since a higher level of granularity in the search for the random phase should bring us closer to the actual channel phase and thus approach convergence at a lower SNR.

Labeling   | H & S   | Alpha   | PSG (4 levels)
NAL        | 4.0 dB  | 4.1 dB  | 4.0 dB
HSL        | 4.35 dB | 4.4 dB  | 4.35 dB
Ungerboeck | 3.65 dB | 3.7 dB  | 3.65 dB
SSP        | 4.7 dB  | 4.8 dB  | 4.7 dB
AntiGray   | 4.7 dB  | 4.8 dB  | 4.7 dB

Table 5.6: Comparison of various methods for a π/16 phase offset. Differential 8-PSK is used as the inner code and a rate 10/21 parity check code as the outer code.

In Table 5.6, the performance of the H & S method and the PSG method can be seen to be exactly the same for all labeling schemes, while that of the α-method is slightly worse than the other two. The reason is that the averaging estimator of the H & S method is the optimum estimator for the constant phase case, while for the PSG method the fact that π/16 is one of the levels of the expanded trellis leads to very good performance. The α-method is, however, superior in the sense that it does not need to know whether the channel phase is constant, and thus makes no assumptions regarding the statistical behaviour of the channel phase. This is not the case with the H & S method, as the choice of the filter depends on the behaviour of the channel phase. The PSG method also needs to know the variance of the channel phase and is much more computationally intensive because of the high number of branch operations in the expanded trellis. For example, for the differential 8-PSK case the α-method operates on an 8-state trellis, whereas the PSG method uses a 32-state trellis.
Therefore, the α-method provides huge savings in terms of power consumed on the receiver chip compared to the PSG method. For a wireless device, it is important that the power dissipation, and hence the number of decoding operations, be kept to a minimum.

A point to mention here is that the PSG method, in effect, collapses to the Rong Chen method (cf. Section 4.2.3) for phase estimation with a constant phase offset. Thus both of them give the same result and are not treated separately. The Rong Chen method, however, will not perform well for the phase noise case, because the algorithm inherently assumes that the channel phase is constant over the entire block, and hence it is not considered for performance comparisons. To gain a better perspective on how the different labeling schemes perform, we now look at the performance of the individual methods with different code rates and all labeling schemes considered in this work.

Howard & Schlegel method: The EXIT chart analysis for the H & S method was not presented by the authors in [9], as the channel estimation is external to the forward-backward recursion of the APP DPSK decoder, and hence obtaining the EXIT charts is not as straightforward. This, however, is not the case for the α-method or the PSG method. While the α-method has the estimation operation built into the forward-backward recursion, the PSG method has no explicit estimation technique. Hence, for those two methods, obtaining the mutual information transfer characteristics proceeds exactly the same way as for coherent decoding. For the H & S method, an external channel estimation technique implies that two parameters, the a priori mutual information and the estimate of the channel phase, vary per iteration of the algorithm, as opposed to the other two methods, where only the a priori mutual information (Ia) is a variable.
Thus, for the H & S method we need to devise a way to incorporate both the channel estimate and the a priori mutual information as variables. In the following, the approach we have taken to obtain the mutual information transfer characteristics for the H & S method is described. It can be argued that since the H & S method relies on APPs for channel estimation, the accuracy of the estimate is directly related to the quality of the APPs. The output APPs of the DPSK decoder are LLRs, and the magnitude |LLR| gives a measure of the reliability. It is equally valid that the quality of the APPs itself depends on how good the estimate of the channel phase is. Thus the two parameters are coupled, and improvement in the quality of one depends on the improvement of the other. Hence, if we use the a priori mutual information as the controlling parameter, we essentially fix the quality of both the APPs and the channel estimate. However, the algorithm still needs an estimate of the channel phase to compute the channel metrics in the first place, since the estimate during a given iteration is computed only after the APPs are computed. If the increments of the a priori information are kept reasonably small for the EXIT chart simulations, the phase estimate from the preceding a priori information step is a good approximation of the estimated channel phase for the current step, and hence can be used for the channel metric computation. Following this logic, we obtain the transfer characteristics for the H & S method; an example EXIT chart for an outer 10/21 code is shown in Fig. 5.3. The transfer characteristics have been obtained with a step size of 0.05. Our assumption regarding the channel estimates is also verified by the smoothness of the curves and the fact that the transfer characteristics increase monotonically with increasing a priori information.
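The transfer characteristics are measured point by point: for each target Ia, a priori LLRs with that mutual information are fed to the decoder, and the mutual information of the extrinsic output is recorded. Two standard helper steps can be sketched as follows (our own code, using the usual consistent-Gaussian a priori model; not taken from the thesis):

```python
import numpy as np

def apriori_llrs(bits, sigma, rng):
    """Consistent Gaussian a priori LLRs for bits b in {0, 1}:
    L = (sigma^2/2)*(1-2b) + n, with n ~ N(0, sigma^2)."""
    return (sigma**2 / 2) * (1 - 2 * bits) + rng.normal(0.0, sigma, bits.size)

def mutual_info(llrs, bits):
    """Monte-Carlo estimate of I(b; L) = 1 - E[log2(1 + exp(-(1-2b)*L))]."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-(1 - 2 * bits) * llrs)))

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 200_000)
# A larger sigma means more reliable a priori information, hence a higher Ia;
# sweeping sigma gives the Ia grid (e.g. in steps of 0.05) used for the charts.
ia_low = mutual_info(apriori_llrs(bits, 1.0, rng), bits)
ia_high = mutual_info(apriori_llrs(bits, 3.0, rng), bits)
```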
Labeling   | 2/3     | 3/5     | 10/19   | 2/5     | 3/7     | 10/21
NAL        | 6.3 dB  | 5.45 dB | 4.8 dB  | 2.35 dB | 2.95 dB | 4.0 dB
HSL        | 6.65 dB | 5.8 dB  | 5.1 dB  | 2.8 dB  | 3.4 dB  | 4.35 dB
Ungerboeck | 6.4 dB  | 5.35 dB | 4.3 dB  | 2.35 dB | 2.85 dB | 3.65 dB
SSP        | 6.7 dB  | 6.05 dB | 5.45 dB | 3.05 dB | 3.7 dB  | 4.7 dB
AntiGray   | 6.75 dB | 6.1 dB  | 5.5 dB  | 2.7 dB  | 3.45 dB | 4.7 dB

Table 5.7: Howard & Schlegel method - threshold symbol SNRs for Code I and Code II codes for a constant phase offset of π/16

The performance for a range of code rates for Code I and Code II is presented in Table 5.7. Since the threshold SNR depends on the shape of the respective transfer characteristics, we see that the performance varies with the code rate used.

Figure 5.3: EXIT chart for phase estimation with the Howard & Schlegel method - the constant offset case. The figure shows the EXIT chart for an offset of π/16 with an outer 10/21 parity check code. The symbol SNRs (10 log10(Es/N0)) at which the thresholds are observed for each labeling scheme are shown in the legend.

It can be seen in Table 5.7 that while both NAL and Ungerboeck provide a Turbo cliff at 10 log10(Es/N0) = 2.35 dB for an outer code rate of 2/5, Ungerboeck labeling gives better performance by 0.1 dB at a rate of 3/7 and by 0.35 dB at a rate of 10/21, compared to NAL. Ungerboeck labeling can be seen to consistently provide the earliest onset of the Turbo cliff. On comparing these values with the threshold SNRs for the coherent case (cf. Table 5.3 and Table 5.4), we see that the degradation is less than 0.1 dB in most cases, which is indicative of the optimality of the averaging estimator. Also, we see that the performance trends of the coherent decoding case are maintained in the case of a constant phase offset. For example, Ungerboeck labeling gives the lowest threshold SNR, whereas SSP labeling has the highest.
Also, as the code rate increases, for example for the Code II codes from 2/5 to 10/21, HSL progressively performs better than AntiGray labeling for both coherent decoding and the constant phase case. In light of these facts, it can be comfortably stated that the design rules for the coherent decoding case are valid for the constant phase case.

α-method: The corresponding threshold SNRs with a π/16 phase offset are presented in Table 5.8. We see that while the performance was off by less than 0.1 dB compared to coherent decoding for the H & S method, here the threshold SNRs are off by around 0.15 dB. Specific results differ depending on the labeling and code rate considered. For example, for Ungerboeck labeling the values are either the same as for the H & S method or off by only 0.05 dB, whereas for SSP labeling the values are off by as much as 0.15 dB (rate 2/5) and as little as 0.05 dB (rate 2/3). The general trends are, however, the same; for example, NAL provides the lowest threshold for a rate 2/3 code, while AntiGray has the highest threshold for both the H & S method and the α-method. This in effect shows that the α-method performs almost as well as the optimal averaging estimator, and that the design rules for coherent decoding apply to the α-method as well.

Labeling   | 2/3     | 3/5     | 10/19   | 2/5     | 3/7     | 10/21
NAL        | 6.35 dB | 5.65 dB | 4.85 dB | 2.35 dB | 3.0 dB  | 4.1 dB
HSL        | 6.65 dB | 5.85 dB | 5.1 dB  | 2.85 dB | 3.5 dB  | 4.4 dB
Ungerboeck | 6.45 dB | 5.35 dB | 4.35 dB | 2.35 dB | 2.9 dB  | 3.7 dB
SSP        | 6.75 dB | 6.15 dB | 5.5 dB  | 3.2 dB  | 3.8 dB  | 4.8 dB
AntiGray   | 6.8 dB  | 6.2 dB  | 5.65 dB | 2.75 dB | 3.5 dB  | 4.8 dB

Table 5.8: Alpha method - threshold symbol SNRs for a constant phase offset of π/16

An exhaustive data set for the PSG method has not been provided, as it performs the same as the H & S method for a phase offset of π/16.
However, when the phase is a random constant value, performance varies depending on how much the trellis is expanded; an analysis is provided in Section 5.3. In the next section we investigate the case of Wiener phase noise and identify the factors that affect performance.

Wiener Phase Noise

We now proceed to the analysis for the Wiener phase noise case. Each of the approaches presented in Chapter 4 is treated separately in the following sections. The EXIT chart analysis for all cases has been done with a phase noise variance of σ²_Δ = 0.001 and a Code II outer code with rate 10/21.

Figure 5.4: EXIT chart showing decoder failure using the Howard and Schlegel method for combating Wiener phase noise, σ²_Δ = 0.001. The outer code is rate 10/21 and the symbol SNR is 10 log10(Es/N0) = 5.5 dB.

Howard & Schlegel method: It was mentioned in Section 4.2.1 that the method proposed by Howard and Schlegel in [9] does not work for the phase noise case. The EXIT chart of Fig. 5.4 confirms this. Even at a sufficiently high SNR, 10 log10(Es/N0) = 5.5 dB, the transfer characteristic of the APP DPSK decoder fails to reach the (1,1) point on the EXIT chart for a moderate phase noise variance of 0.001. Hence, this method is not pursued further.

Figure 5.5: EXIT chart for the α-method for a rate 10/21 outer parity check code with Wiener phase noise (σ²_Δ = 0.001)

α-method: The α-method performs well with a phase noise variance of 0.001, and the phase tracking capability of this method is evident from Fig. 5.5. Threshold SNRs for the rate 10/21 code are observed at the values shown in the EXIT chart, and the decoder exhibits convergence as the transfer characteristics meet at the (1,1) point on the EXIT chart without intersecting each other.
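For reference, Wiener phase noise of the kind used in this analysis can be generated as follows (our own sketch; numpy assumed): the phase performs a random walk with i.i.d. Gaussian increments of variance σ²_Δ.

```python
import numpy as np

def wiener_phase(n, var, theta0=0.0, seed=0):
    """Wiener phase noise: theta_k = theta_{k-1} + delta_k, with
    delta_k ~ N(0, var); the phase is wrapped to [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(0.0, np.sqrt(var), n)
    return np.mod(theta0 + np.cumsum(increments), 2 * np.pi)

theta = wiener_phase(1000, 0.001)  # the variance used in the EXIT analysis above
```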
PSG method: The PSG method also performs well for the phase noise case, and the EXIT chart results are shown in Fig. 5.6. The values in Fig. 5.6 have been obtained using L = 4 for the trellis expansion. The threshold SNRs are on average 0.3-0.4 dB lower than for the α-method, but at a much higher computational effort.

Figure 5.6: EXIT chart for the PSG method for a rate 10/21 outer parity check code and phase noise variance σ²_Δ = 0.001, using 4 levels of quantization (thresholds: Ungerboeck at 3.8 dB, HSL at 4 dB, NAL at 4.1 dB, SSP at 4.95 dB, AntiGray at 5 dB)

The values for the α-method, on the other hand, are close enough not to cause severe degradation in performance, and decoding can be achieved at a much reduced computational cost.

5.2 Mixed Labeling

The idea of mixed labeling to shape the transfer curves was introduced in Section 3.4. In this section, we look at the performance benefits derived by using more than one labeling scheme for the transmission of a block of data. In Sections 5.1.1 and 5.1.2 a threshold SNR was found using EXIT charts for a given code rate, and the labeling that provided the lowest threshold SNR was reported. In this section, we try to find out whether mixed labelings can deliver performance that is better than single labeling schemes. We compare threshold values for mixed labelings with the lowest single-labeling threshold SNR values for a given code rate.

Code Rate | Ungerboeck | Mixed (1:1 Gray+NAL)
3/5       | 5.35 dB    | 5.25 dB
3/7       | 2.9 dB     | 2.7 dB
10/19     | 4.35 dB    | 4.15 dB
10/21     | 3.7 dB     | 3.5 dB

Table 5.9: Performance improvement using mixed labeling - α-method for a constant phase offset of π/16

To give an idea of how much can be gained with mixed labeling, we present a few examples in Table 5.9.
For each of the code rates presented in Table 5.9, Ungerboeck labeling is the best single labeling, with the thresholds as shown. We see that, on average, there is an improvement of 0.2 dB in the threshold SNR with mixed labeling (1:1 Gray and NAL). The important point is that we stand to gain by using a mixture of labelings rather than a single labeling. The labelings can be mixed in any proportion; we choose a 1:1 mix as it is simpler to analyse. Fig. 5.7 shows an example EXIT chart for the noncoherent decoding case, where an improvement of 0.2 dB is achieved using Gray and NA labeling as compared to the best single labeling for a constant phase offset of π/16.

Figure 5.7: EXIT chart for the α-method with a 1:1 mix of Gray labeling and NAL for the π/16 phase offset. The threshold SNR for the Gray and NAL mixture is 0.2 dB better than the best single labeling (Ungerboeck).

Although an exhaustive analysis of all mixing ratios has not been done, it can be claimed from the example of Table 5.9 and Fig. 5.7 that a mixture of labelings can usually be explored to obtain a lower threshold SNR. To show that the analysis done for the 8-PSK constellation applies equally well to the 4-PSK case, we present the results for mixing Gray labeling and Ungerboeck labeling in Fig. 5.8 for coherent decoding. As can be seen from Fig. 5.8, at a symbol SNR of 0.6 dB the single labelings intersect with the 10/21 parity check code transfer characteristic, while there is a tunnel for the 1:1 mixture of the two labelings. Thus, mixed labeling can provide gains under very general conditions. The gains are more visible in a distance-from-capacity curve, as was shown in Fig. 5.2.

Figure 5.8: EXIT chart for the α-method with a 1:1 mix of Gray labeling and Ungerboeck labeling with a 10/21 outer code for coherent decoding.
For a symbol SNR of 10log10(Es/N0) = 0.6 dB, neither of the single labelings exhibits convergence; a 1:1 mix of the two, however, provides a threshold SNR of 0.6 dB. Fig. 5.9 is a magnified version of Fig. 5.2 and underlines the benefits derived from mixed labeling. For each of the code rates, a mixture of two labelings is considered, and each distinct point on the graph represents a distinct code rate. The distance from capacity is reduced in all cases when a mixed labeling scheme is used instead of a single labeling. This is a representative result, as the goal was not to be exhaustive, and further benefits may be possible when using more than two labelings.

Figure 5.9: A zoomed-in version of Fig. 5.2, plotting the 8-PSK capacity, the best individual labeling, and mixed labeling against 10log10(Es/N0). The figure clearly emphasises that mixed labeling allows us to achieve performance closer to capacity than using just a single labeling.

5.3 Robustness Analysis

In the algorithms presented for the noncoherent case, there are a number of parameters that can vary and affect performance in various ways. For example, for the phase quantization based approaches, the number of levels plays a role in defining system performance. Similarly, for the proposed α-method, the particular value of α as a forgetting factor will have a role to play depending on the randomness of the channel phase. It is our endeavour in this section to characterise the system performance in the event of such variations. We show which values of the variable parameters provide the best performance under a given set of conditions.
Figure 5.10: Robustness analysis: effect of parameter variation for the α-method (top, threshold SNR versus α for σΔ² = 0.001, σΔ² = 0.005, constant offset, and the coherent capacity) and for the PSG method (bottom, threshold SNR versus number of phase levels for σΔ² = 0.001, constant offset, and the coherent capacity).

To begin with, we look at the effect of the variation of α in the α-method. In Fig. 5.10 we analyse the effect of α using a 10/21 outer parity check code and a 1:1 mixture of Gray labeling and NAL for the 8-PSK constellation, for both constant offset and Wiener phase noise. The y-axis of the plot represents the threshold SNR required for convergence. We see that for a constant offset the performance improves with increasing α, which is expected, since a higher α allows more of the correlation of the phase estimates to be taken into account. Since the correlation is 1 for this case, performance gets asymptotically better as α approaches 1. For the Wiener phase noise, we consider two cases, with phase noise variance σΔ² = 0.001 and σΔ² = 0.005. For σΔ² = 0.001, it is observed that the performance improves with increasing α only up to α = 0.95, and higher values of α do not allow for convergence. This trend is more evident with σΔ² = 0.005, where values higher than α = 0.85 give worse performance. The explanation is that with higher phase noise variance the correlation between successive channel phases is lower, and hence a lower forgetting factor is needed in order to track the variations in the phase faster. For the quantized channel phase approach, the factor that affects performance is the number of quantization levels. The effect of the number of levels is shown in the bottom plot of Fig. 5.10.
For the constant offset case we see that the performance of the PSG (Rong-Rong Chen) method improves with more levels of quantization, which is in a way expected, as the granularity of the search for the closest phase offset increases with an increasing number of levels. Improvement in performance from using more levels is observed only when the phase offset is random and thus not a state (phase) of the expanded trellis. No improvement is observed in going from, say, 4 levels to 8 levels if the phase offset is a level of the 4-level super-trellis. We see an improvement in performance for a phase noise variance of 0.001 as we go from 2 through 8 levels.

5.4 BER Simulation Results

This section presents the BER results for the different code rates and the M-DPSK constellation labelings that were shown to be the best in the analysis of Sections 5.1.1-5.2. For the purposes of simulation and verification of the design, the system has been programmed in the C programming language, and MATLAB has been used for graphical visualisation to gain a better understanding of the system behavior. We are particularly interested in the case of short block lengths, so that the proposed iterative structures can be used for short packet transmissions while at the same time retaining scalability. The exact frame/block length, N, is specified for each case in Table 5.10.

Code Rate | Block Length
2/3       | 15000
2/5       | 15000
3/5       | 15000
3/7       | 15015
10/19     | 12027
10/21     | 12012

Table 5.10: Block lengths used for BER simulations

The interleaver plays a significant role in determining the performance of an iterative decoding system, and thus the choice of interleaver is important. A pseudo-random (S-random) spread interleaver has been used here, as it is known to perform better than many random interleavers [57].
The EXIT charts are obtained by averaging the mutual information at the output of the decoder over 120,000 bits, so the actual decoding trajectory (cf. Fig. 3.9) may deviate a little for smaller block lengths. For the BER simulations, a minimum of 20 frame errors are observed, so that we get an ensemble average. We use a stopping criterion based on soft decision Monte Carlo simulation, as described in Section 2.7. A soft BER, BERsoft, is computed after each iteration, and if BERsoft < 1 × 10⁻⁸, we do not perform further iterations on the same block. This criterion is very effective in reducing the number of iterations in the low BER region, and we will call this fast decoding. We show the reduction in the average number of iterations performed per block for coherent decoding in Fig. 5.11 for various code rates using a mixture of Gray labeling and NAL. It can be seen in Fig. 5.11 that the effective number of iterations in the high SNR region is reduced by as much as a factor of three by using the fast decoding criterion. Thus fast decoding proves to be an effective means of large computational savings in the high SNR/low BER region.

Figure 5.11: Average number of iterations performed per block of data using fast decoding for the coherent case, for rates 2/5, 3/5, 10/19, and 10/21 with Gray+NAL labeling.

The rationale for fast decoding is that the gain per iteration becomes smaller as more iterations are performed over a block of data, and beyond the waterfall region further iterations only enhance the quality of the reliability information (LLRs). Hence, after a certain point further iterations do not affect the error rate in any way, and we can safely restrict the number of iterations. The number of iterations may also be restricted heuristically by choosing a threshold value for the LLRs and stopping decoding when that value is crossed [9].
Rather than doing this in a hand-crafted way, we choose to do it based on an estimate of the BER from the soft values, unless stated otherwise. However, for low SNRs a maximum of 50 iterations is used. We now present the BER plots for the coherent decoding case.

5.4.1 Coherent Decoding

In Fig. 5.12 we show the performance of Code I codes for Ungerboeck labeling and HSL. The EXIT chart results for Code I codes (cf. Section 5.1.1) predicted convergence at 10log10(Es/N0) = 6.3 dB (10log10(Eb/N0) = 3.3 dB) for Ungerboeck labeling and 6.6 dB (10log10(Eb/N0) = 3.6 dB) for HSL. From Fig. 5.12 it can be seen that this is the case. The turbo cliff for each of the labelings occurs approximately at the values predicted through the EXIT charts, thus validating our analysis. However, we see that an error floor sets in in the high SNR region. Also, Ungerboeck labeling has a higher error floor. HSL lowers the error floor to a smaller BER value, but the problem still persists. If we are targeting BERs as afforded by these codes, then Code I codes are perfectly fine, as they are higher rate codes. However, if BERs lower than 10⁻⁶ are required, a method to mitigate the error floor problem is needed. With the proposed Code II codes it is our endeavour to overcome the error floor problem, and we now look at some performance results using the 10/21 Code II outer code. We compare our results with the system of Franceschini et al. as reported in [16]. In [16] both 4-PSK and 8-PSK schemes have been used in a serially concatenated structure with LDPC codes as the outer code and the DE as the inner code. Fig. 5.13 shows the results for coherent decoding using the two different system structures. Two different block lengths of 6000 and 12000 have been considered. Fig. 5.13 shows that for a 4-PSK labeling and a block length of 6000 our system performs 0.4 dB better, and for a block length of 12000 we are about 0.3 dB better.
Figure 5.12: BER curves for coherent decoding of Code I with Ungerboeck labeling and HSL; the capacity limit is at 2.79 dB. The graph clearly depicts the error floor problem that Ungerboeck labeling suffers from and the improvement afforded by HSL. The error floor exists for HSL also, but at a lower error rate.

The point to be noted here is that the outer code in our case is a very low complexity parity check code and does not involve powerful LDPC codes, which require many more operations for decoding and need to be specifically designed for use with the DE, as in [16]. Therefore, better performance is achieved in spite of a much simpler outer code. However, we do lose slightly on code rate: for a 10/21 code we have an efficiency of 0.476, as compared to 0.5 for Franceschini et al. [16]. A similar comparison can be seen in Fig. 5.14 for the 8-PSK case. A 1:1 mixture of Gray labeling and NAL for the 8-PSK constellation has been used, and the block lengths are 12000 for the LDPC code and 12012 for the rate 10/21 parity check code.

Figure 5.13: Comparison of rate 10/21 Code II performance with the LDPC-DE concatenation of Franceschini et al. for 4-PSK. Block lengths of 6000 and 12000 have been compared for a rate 10/21 code with a 1:1 mixture of Ungerboeck and Gray labeling.

As opposed to Code I codes, no error floor kicks in even at BERs as low as 10⁻⁷ for coherent decoding, and thus low-weight error events can be substantially reduced with Code II codes. In Section 3.1.2 it was argued that by increasing dmin to 3 in Code II codes, the frame error rate also decreases to acceptable values, while this is not possible with Code I codes, as they have a dmin of 2. Fig. 5.15 shows the frame error rate curves for some example Code I and Code II codes.
While the FER curves for Code I codes floor out at relatively high values, those for Code II codes can be seen to reach sufficiently low values. Hence it can be concluded that the structure of Code II codes is effective in reducing the FER as well.

Figure 5.14: Comparison of rate 10/21 Code II performance with the LDPC-DE concatenation of Franceschini et al. for 8-PSK. Our code performs 0.6 dB better at the cost of a very small loss in code rate of 0.024.

Figure 5.15: Frame error rate results for the coherent decoding case, showing the FER for Code I (rates 3/5 and 10/19) and Code II (rates 2/5 and 10/21) codes with Ungerboeck labeling. The FER for Code I codes floors out at much higher values than for Code II codes.

We showed earlier, through EXIT charts and capacity curves, the better performance afforded by mixed labeling. In Fig. 5.16 we compare the BER results for Code II codes for the best single labeling and the 1:1 Gray and NAL mixture labeling. For all code rates the mixed labeling is seen to perform better than the best single labeling by 0.1-0.25 dB, which reinforces the fact that mixed labeling can provide better performance. In Section 2.5.2 a normalized BP-based approximation for the general representation of BP decoding was introduced, which provides reduced-complexity decoding. Fig. 5.17 shows the results of using the exact and approximate representations presented in Section 2.5.2 for the coherent decoding of a 3/7 code. The BER curves for the two methods are close enough to conclude that the approximate method does not lead to a degradation in the quality of results.
This implies that the approximate method for BP decoding can be used for decoding the parity check codes at a reduced computational load without sacrificing accuracy. The next section provides the results for noncoherent decoding using the different algorithms presented for the two phase noise models.

Figure 5.16: Performance results for Code II codes under coherent conditions using the best single labeling for a given code rate and mixed labeling with a 1:1 mix of Gray and NAL.

Figure 5.17: Effect of using the approximate normalised metric for implementing the boxplus operation in BP decoding.

5.4.2 Noncoherent Decoding

The results for the constant offset case are presented first. For noncoherent decoding the stopping criterion is not based on the soft BER estimate; instead we use a stopping criterion of the kind used with a cyclic redundancy check (CRC), i.e., iterations are performed until we observe zero errors for the block or we reach the maximum number of iterations (50). The soft BER stopping criterion is not effective in reducing the number of iterations for the noncoherent case, since imperfect channel estimation leads to a soft BER estimate that is not indicative of the actual hard BER. Fig. 5.18 provides the results for the α-method for a phase offset of π/16. An outer parity code of rate 10/21 and an 8-PSK constellation with a 1:1 mixture of Gray labeling and NAL have been used. The variation with α is depicted in Fig. 5.18 using α = 0.90 and α = 0.99. As expected, α = 0.99 proves to be the better choice, as the performance can be seen to be worse by around 0.1 dB with α = 0.90. This observation applies to all choices of system parameters; however, the exact degradation will depend on the constellation size, code rate, and labeling scheme used.
The ability of the α-method to estimate the channel phase with acceptable accuracy is demonstrated for a 4-PSK constellation using a 1:1 mixture of Gray and Ungerboeck labeling in Fig. 5.19. As for the 8-PSK case, for a constant phase offset much better performance is observed using α = 0.99 than α = 0.9. A 0.3 dB degradation results when the phase is time varying with variance σΔ² = 0.001.

Figure 5.18: Performance of the α-method for the constant offset case. An outer code of rate 10/21 and a 1:1 Gray-NAL mixture labeling for the inner code have been used, and the phase offset is π/16. The effect of varying α can easily be seen, as the performance visibly deteriorates from α = 0.99 to α = 0.90.

Figure 5.19: Performance of the α-method using 4-PSK. An outer code of rate 10/21 and a 1:1 Gray-Ungerboeck mixture labeling for the inner code have been used.

In Fig. 5.20 the performance of the α-method and the PSG method are compared for an 8-PSK constellation. For both methods a rate 10/21 outer parity check code and a 1:1 mixture of Gray labeling and NAL have been used. The PSG method uses 4 levels of quantization per state of the trellis, and α = 0.9 for our approach. For the constant phase offset case there is a very slight degradation with the α-method, and for the Wiener phase noise the degradation is less than 0.2 dB for a phase variance of σΔ² = 0.001 compared to the PSG method. Therefore it may be concluded that, in spite of being very low complexity, the α-method achieves performance that is close to the PSG method. The results for the H&S method are also presented, and it is seen from the BER curve in Fig. 5.20 that the H&S method fails to achieve convergence for the phase noise case even at high SNRs (cf. Fig. 5.4). For the phase offset case the H&S method and
the PSG method exhibit the same performance, as was shown through the EXIT analysis in Section 5.1.2. For both constant offset (see Fig. 5.18) and phase noise (see Fig. 5.20) we do see an error floor at lower BER values with the α-method. This occurs due to residual errors caused by imperfect channel estimation. Although lower error floors are possible through better (more random) interleavers, the problem persists, as the α-method is a linear estimator and hence sub-optimal in this case.

Figure 5.20: Performance comparison of the α-method and the PSG method using the 8-PSK constellation.

Chapter 6

Conclusions

We have shown that a serially concatenated system combining simple outer parity check codes with an inner differential M-PSK modulation code offers very close to capacity performance with iterative decoding. Two general code structures, namely Code I and Code II, have been proposed that asymptotically converge to a rate 1/2 code. The design is motivated by the need for simple yet powerful codes that have good distance spectral properties. Both encoding and decoding of the codes can be implemented with very low complexity. Short block lengths have been considered so that the design is suitable for a wide range of applications such as packet transmissions. The rotationally invariant property of the inner code aids in channel phase estimation and supplies sufficient soft information from the inner APP DPSK decoder to allow initial operation under unknown channel phase conditions. A simple channel estimation procedure using the soft information from the APP DPSK decoder achieves close to coherent performance when there is no explicit channel phase information at the receiver. Pilot symbols are not required for this purpose.
The estimation procedure has been benchmarked against phase quantization based approaches that have a much higher computational load, and it is shown that, in spite of being simple, our method does not cause much degradation in performance in the noncoherent case. The method has been evaluated under both constant and time-varying simulated phase processes. The design has been optimized for performance through an EXIT chart analysis, and it is shown that performance as close as 0.46 dB from capacity can be achieved with the right choice of labeling schemes and code rates. Further gains are shown to be possible by using mixture labelings rather than a single labeling for a block of data. The simple parity check codes have been shown to have better performance than a similar serially concatenated structure with LDPC codes as outer codes. However, there is a tradeoff in terms of code rate, as the LDPC codes afford slightly higher code rates than the codes proposed here.

Bibliography

[1] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, July and Oct. 1948.

[2] R. Gallager, Low Density Parity Check Codes. MIT Press, 1963.

[3] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. of IEEE Conference on Communications, Geneva, May 1993.

[4] G. David Forney Jr., Concatenated Codes. Massachusetts Institute of Technology, Cambridge, MA, 1966.

[5] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding," TDA Progress Report, pp. 1-26, 1996.

[6] ——, "Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding," IEEE Trans. Inform. Theory, vol. 44, no. 3, pp. 909-926, May 1998.

[7] J. Massey, "Coding and modulation in digital communications," in Proc.
of the 1974 International Zurich Seminar on Digital Communications, Zurich, March 1974, pp. E2(1)-E2(4).

[8] S. Howard, C. Schlegel, and L. Perez, "Differential turbo coded modulation over unsynchronized channels," in IASTED 3rd International Conference on Wireless and Optical Communications, Banff, Alberta, Canada, 2002, pp. 96-101.

[9] S. Howard and C. Schlegel, "Differentially-encoded turbo coded modulation with APP channel estimation," in Proceedings of IEEE Globecom 2003, San Francisco, USA, December 2003.

[10] C. Schlegel and A. Grant, "Differential space-time turbo codes," IEEE Trans. Inform. Theory, vol. 49, no. 9, pp. 2298-2306, September 2003.

[11] M. Peleg, S. Shamai, and S. Galan, "Iterative decoding for coded noncoherent MPSK communications over phase-noisy AWGN channel," Proceedings of the IEE, vol. 147, no. 2, pp. 87-95, April 2000.

[12] K. Narayanan and G. L. Stuber, "A serial concatenation approach to iterative demodulation and decoding," IEEE Trans. Commun., vol. 47, no. 7, pp. 956-961, July 1999.

[13] P. Hoeher and J. Lodge, "Turbo DPSK: Iterative differential PSK demodulation and channel decoding," IEEE Trans. Commun., vol. 47, no. 6, pp. 837-843, June 1999.

[14] J. Li, K. Narayanan, and C. Georghiades, "Product accumulate codes: A class of codes with near-capacity performance and low decoding complexity," IEEE Trans. Inform. Theory, vol. 50, no. 1, pp. 31-46, January 2004.

[15] H. M. Tullberg and P. H. Siegel, "Serial concatenated TCM with an inner accumulate code - Part II: Density evolution analysis," IEEE Trans. Commun., vol. 53, no. 2, pp. 252-262, February 2005.

[16] M. Franceschini, G. Ferrari, R. Raheli, and A. Curtoni, "Serial concatenation of LDPC codes and differential modulations," IEEE J. Select. Areas Commun., vol. 23, no. 9, pp. 1758-1768, September 2005.

[17] G. Ferrari, A. Anastasopoulos, G.
Colavolpe, and R. Raheli, "Adaptive iterative detection for the phase-uncertain channel: Limited-tree-search versus truncated-memory detection," IEEE Trans. Veh. Technol., vol. 53, no. 2, pp. 433-442, March 2004.

[18] J. Dauwels and H.-A. Loeliger, "Phase estimation by message passing," in Proc. 2004 IEEE Int. Conf. on Communications, June 2004, pp. 523-527.

[19] S. Howard, private correspondence.

[20] M. Peleg, I. Sason, S. Shamai, and A. Elia, "On interleaved, differentially encoded convolutional codes," IEEE Trans. Inform. Theory, vol. 45, no. 7, pp. 2572-2582, November 1999.

[21] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, pp. 284-287, March 1974.

[22] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Trans. Inform. Theory, vol. 44, no. 3, pp. 927-946, May 1998.

[23] X. Li, A. Chindapol, and J. A. Ritcey, "Bit-interleaved coded modulation with iterative decoding and 8PSK signaling," IEEE Trans. Commun., vol. 50, no. 8, pp. 1250-1257, August 2002.

[24] F. Schreckenbach, N. Gortz, J. Hagenauer, and G. Bauch, "Optimization of symbol mappings for bit-interleaved coded modulation," IEEE Commun. Lett., vol. 7, no. 12, pp. 593-595, December 2003.

[25] S. ten Brink, "Convergence of iterative decoding," Electronics Letters, vol. 35, pp. 806-808, 1999.

[26] Y. Huang and J. A. Ritcey, "Optimal constellation labeling for iteratively decoded bit-interleaved space-time coded modulation," IEEE Trans. Inform. Theory, vol. 51, no. 5, pp. 1865-1871, May 2005.

[27] R. Nuriyev and A. Anastasopoulos, "Rotationally invariant and rotationally robust codes for the AWGN and noncoherent channel," IEEE Trans. Commun., vol. 51, no. 12, pp. 2001-2010, December 2003.

[28] A. J.
Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. 13, pp. 260-269, April 1967.

[29] J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft decision outputs and its applications," in Proc. of IEEE Globecom 1989, vol. 3, 1989, pp. 1680-1686.

[30] G. Kramer, "Serially and parallel concatenated (turbo) codes," mini course at TU-Wien and FTW.

[31] P. Robertson, P. Hoeher, and E. Villebrun, "Optimal and sub-optimal maximum a posteriori algorithms suitable for turbo decoding," European Trans. Telecomm., vol. 8, March-April 1997.

[32] G. Colavolpe, G. Ferrari, and R. Raheli, "Extrinsic information in iterative decoding: A unified view," IEEE Trans. Commun., December 2001.

[33] P. Robertson, "Illuminating the structure of code and decoder of parallel concatenated recursive systematic (turbo) codes," in IEEE Global Commun. Conf. (GLOBECOM'94), San Francisco, CA, December 1994, pp. 1298-1303.

[34] C. Douillard, M. Jezequel, C. Berrou, A. Picart, P. Didier, and A. Glavieux, "Iterative correction of intersymbol interference: Turbo equalization," Europ. Trans. on Telecommun. (ETT), vol. 6, pp. 507-511, Sept./Oct. 1995.

[35] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, no. 2, pp. 429-445, March 1996.

[36] F. Kschischang and B. Frey, "Iterative decoding of compound codes by probability propagation in graphical models," IEEE J. Select. Areas Commun., vol. 16, no. 2, pp. 219-230, February 1998.

[37] N. Wiberg, H.-A. Loeliger, and R. Kotter, "Codes and iterative decoding on general graphs," Eur. Trans. Telecommun., vol. 6, pp. 513-525, Sept./Oct. 1995.

[38] R.
Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, pp. 533-547, September 1981.

[39] J. Chen, A. Dholakia, E. Eleftheriou, M. P. C. Fossorier, and X.-Y. Hu, "Reduced-complexity decoding of LDPC codes," IEEE Trans. Commun., vol. 53, no. 8, pp. 1288-1299, August 2005.

[40] G. Colavolpe, "Design and performance of turbo Gallager codes," IEEE Trans. Commun., vol. 52, no. 11, pp. 1901-1908, November 2004.

[41] M. P. C. Fossorier, M. Mihaljevic, and H. Imai, "Reduced complexity iterative decoding of low density parity check codes based on belief propagation," IEEE Trans. Commun., vol. 47, no. 5, pp. 673-680, May 1999.

[42] S. ten Brink, "Designing iterative decoding schemes with the extrinsic information transfer chart," AEU International Journal of Electronics and Communications, vol. 54, no. 6, pp. 389-398, November 2000.

[43] M. Tuchler, S. ten Brink, and J. Hagenauer, "Measures for tracing convergence of iterative decoding algorithms," in 4th IEEE/ITG Conference on Source and Channel Coding, Berlin, Germany, January 2002, pp. 53-60.

[44] P. Hoeher, "Adaptive modulation and channel coding using reliability information," in International OFDM Workshop, Hamburg, Germany, September 2000, pp. 14.1-14.4.

[45] H.-A. Loeliger, "A posteriori probabilities and performance evaluation of trellis codes," in IEEE International Symposium on Information Theory (ISIT), Trondheim, Norway, June 1994.

[46] P. Hoeher, I. Land, and U. Sorger, "Log-likelihood values and Monte Carlo simulation - some fundamental results," in International Symposium on Turbo Codes and Related Topics, Brest, France, September 2000.

[47] D. Divsalar and M.
Simon, "Maximum likelihood differential detection of uncoded and trellis coded amplitude phase modulation over AWGN and fading channels - metrics and performance," IEEE Trans. Commun., vol. 42, pp. 76-89, January 1994.

[48] L. Perez, J. Seghers, and D. Costello, "A distance spectrum interpretation of turbo codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1698-1709, November 1996.

[49] D. Divsalar and M. Simon, "Multiple symbol differential detection of MPSK," IEEE Trans. Commun., vol. 38, pp. 300-308, March 1990.

[50] R.-R. Chen, R. Koetter, U. Madhow, and D. Agrawal, "Joint noncoherent demodulation and decoding for the block fading channel: A practical framework for approaching Shannon capacity," IEEE Trans. Commun., vol. 51, no. 10, pp. 1676-1689, October 2003.

[51] G. Colavolpe, A. Barbieri, and G. Caire, "Algorithms for iterative decoding in the presence of strong phase noise," IEEE J. Select. Areas Commun., pp. 1748-1757, September 2005.

[52] F. Sun and H. Leib, "Multiple-phase codes for detection without carrier phase reference," IEEE Trans. Inform. Theory, vol. 44, pp. 1477-1491, July 1998.

[53] D. Warrier and U. Madhow, "Spectrally efficient noncoherent communication," IEEE Trans. Inform. Theory, vol. 48, pp. 651-668, March 2002.

[54] M. Peleg and S. Shamai, "On coded and interleaved noncoherent multiple symbol detected MPSK," Eur. Trans. Telecommun., vol. 10, no. 1, pp. 65-73, February 1999.

[55] V. S. Pless, W. C. Huffman, and R. Brualdi, Eds., Handbook of Coding Theory. Elsevier, Amsterdam, The Netherlands, 1998.

[56] A. Anastasopoulos and K. M. Chugg, "Adaptive iterative detection for phase tracking in turbo-coded systems," IEEE Trans. Commun., vol. 49, no. 12, pp. 2135-2144, December 2001.

[57] S. Dolinar and D.
Divsalar, "Weight distributions for turbo codes using random and nonrandom permutations," JPL Progress Report 42-122, pp. 56-65, August 1995.
- Library Home /
- Search Collections /
- Open Collections /
- Browse Collections /
- UBC Theses and Dissertations /
- Serial concatenation of simple codes and differential...
Open Collections
UBC Theses and Dissertations
Featured Collection
UBC Theses and Dissertations
Serial concatenation of simple codes and differential modulations Mitra, Jeebak 2006
pdf
Notice for Google Chrome users:
If you are having trouble viewing or searching the PDF with Google Chrome, please download it here instead.
If you are having trouble viewing or searching the PDF with Google Chrome, please download it here instead.
Page Metadata
Item Metadata
Title | Serial concatenation of simple codes and differential modulations |
Creator |
Mitra, Jeebak |
Publisher | University of British Columbia |
Date Issued | 2006 |
Description | Error-correcting codes are used in communication systems to provide reliable transmission on practical communication channels and thus reduce the number of retransmissions. The ability of a particular code to be able to detect and correct errors at the receiver depends on the individual code structure as well as on the decoding algorithm. Codes that provide strong error correcting capabilities usually involve a number of operations on the information bits and hence are not easily decodable at the receiver. Rather than relying on the use of a single high complexity code, a concatenation of two or more simple codes can be used. Serially concatenated encoding structures are very popular means to achieving close to capacity performance. Good performance of concatenated codes with practically viable decoding times is attributed to a technique known as iterative decoding. Iterative decoding can only be used in a situation where the component decoders are capable of generating soft information for the transmitted data. Soft information for symbol-by-symbol estimation is usually obtained by using the Bahl Cocke Jelinek Raviv (BCJR) algorithm or some sub-optimal version of it. Differential encoding of the data at the transmitter is regarded as an effective approach to combat phase ambiguities at the receiver, when using phase shift keying (PSK) schemes. Since the information is transmitted in difference of phases rather than the absolute phase of the transmitted symbol, there is more protection against an unknown channel phase at the receiver. The serial concatenation of an error correcting code with a differential encoder has been found to provide very good performance for various kinds of channel conditions. In this work we propose and analyse the design of a serially concatenated structure which is very simple to implement and is particularly favourable for a low power scenario such as deep space applications. 
The proposed system comprises a concatenation of simple parity check codes and a differential encoder (DE). Individually, these are very weak codes, providing only minimal error-correcting capability. We optimise the system design parameters through extensive analysis of the system structure using extrinsic information transfer (EXIT) charts. It is shown through simulations and analytical results that the proposed concatenated codes provide performance very close to capacity. Comparison of these simple parity check codes with certain other very powerful outer codes, such as Low-Density Parity-Check (LDPC) codes, shows the superior performance of the proposed codes in spite of their much lower decoding complexity. For the case where the channel phase is unknown or perfect synchronisation is not attainable at the receiver, several estimation algorithms have been proposed in the literature to combat the effects of the channel phase. These algorithms can usually be divided into two categories: the first uses pilot symbols, which increase the transmission overhead; the second dispenses with any explicit channel estimation mechanism. Here we adopt the latter and consider two approaches. One is based on quantization of the unknown phase and is computationally intensive, with complexity depending on the number of quantization levels. The other performs blind estimation based on information derived from the received symbols. Along the lines of this second approach, we propose a simple method for noncoherent decoding that uses a posteriori information to estimate the channel phase. It is shown that the method works well for low to moderate variations in the channel phase. |
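The abstract's claim that differential encoding protects against an unknown channel phase can be seen in a minimal sketch (not drawn from the thesis itself; the parameters and reference-symbol convention below are illustrative assumptions): with differential BPSK, the unknown constant rotation cancels in the product of consecutive received symbols, so the detector never needs to estimate it.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 16)

# Differential BPSK: prepend a known reference symbol, then encode
# each bit as a phase *change* of 0 or pi relative to the previous
# symbol, rather than as an absolute phase.
tx = np.empty(len(bits) + 1, dtype=complex)
tx[0] = 1.0                                   # reference symbol
for k, b in enumerate(bits, start=1):
    tx[k] = tx[k - 1] * np.exp(1j * np.pi * b)

# Channel applies an unknown constant phase rotation (noise omitted
# to isolate the phase-ambiguity point).
theta = 1.234
rx = tx * np.exp(1j * theta)

# Differential detection: the unknown rotation cancels in
# rx[k] * conj(rx[k-1]), so no phase estimate is required.
decisions = (np.real(rx[1:] * np.conj(rx[:-1])) < 0).astype(int)
```

Here `decisions` reproduces `bits` exactly for any value of `theta`, which is precisely the robustness to an unknown channel phase that motivates concatenating an outer code with a differential encoder.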
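The quantized-phase approach mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the thesis's algorithm: it tries L phase hypotheses, derotates the received block under each, and keeps the hypothesis with the smallest residual metric, making the cost grow linearly with the number of quantization levels. The known reference symbol, noise level, and block length are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# BPSK block with a known +1 reference symbol to resolve the pi ambiguity.
bits = rng.integers(0, 2, 32)
symbols = np.concatenate(([1.0], 1.0 - 2.0 * bits))   # 0 -> +1, 1 -> -1
theta = 0.9                                           # unknown channel phase
noise = 0.05 * (rng.normal(size=len(symbols))
                + 1j * rng.normal(size=len(symbols)))
rx = symbols * np.exp(1j * theta) + noise

L = 16                                                # quantization levels
best_metric, best_hard = np.inf, None
for phi in 2 * np.pi * np.arange(L) / L:
    derot = rx * np.exp(-1j * phi)                    # test this hypothesis
    hard = np.where(np.real(derot) >= 0, 1.0, -1.0)   # nearest BPSK point
    metric = np.sum(np.abs(derot - hard) ** 2)        # residual distance
    if metric < best_metric:
        best_metric, best_hard = metric, hard

# Resolve the BPSK pi ambiguity using the known reference symbol.
if best_hard[0] < 0:
    best_hard = -best_hard
decoded = (best_hard[1:] < 0).astype(int)
```

The inner loop runs once per quantization level, which is the linear-in-L complexity the abstract contrasts with the blind a posteriori estimation method it ultimately proposes.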
Genre |
Thesis/Dissertation |
Type |
Text |
Language | eng |
Date Available | 2010-01-05 |
Provider | Vancouver : University of British Columbia Library |
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. |
DOI | 10.14288/1.0065422 |
URI | http://hdl.handle.net/2429/17548 |
Degree |
Master of Applied Science - MASc |
Program |
Electrical and Computer Engineering |
Affiliation |
Applied Science, Faculty of Electrical and Computer Engineering, Department of |
Degree Grantor | University of British Columbia |
GraduationDate | 2006-05 |
Campus |
UBCV |
Scholarly Level | Graduate |
AggregatedSourceRepository | DSpace |
Download
- Media
- 831-ubc_2006-0084.pdf [ 5.65MB ]
- Metadata
- JSON: 831-1.0065422.json
- JSON-LD: 831-1.0065422-ld.json
- RDF/XML (Pretty): 831-1.0065422-rdf.xml
- RDF/JSON: 831-1.0065422-rdf.json
- Turtle: 831-1.0065422-turtle.txt
- N-Triples: 831-1.0065422-rdf-ntriples.txt
- Original Record: 831-1.0065422-source.json
- Full Text
- 831-1.0065422-fulltext.txt
- Citation
- 831-1.0065422.ris