Performance Enhancement of xDSL Systems Using LDPC Coding with DMT Modulation

by

THOMAS S. H. ZHENG
B.Sc., Peking University, 1986

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Electrical & Computer Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November 2003
© Thomas S. H. Zheng, 2003

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical & Computer Engineering
The University of British Columbia
Vancouver, Canada

Abstract

Channel coding plays a pivotal role in improving the performance of xDSL systems in terms of higher data rate and longer loop reach. The current channel coding scheme for standard DSL concatenates Reed-Solomon (RS) codes and multi-dimensional trellis coded modulation (TCM) codes. Although RS-TCM coding provides a significant coding gain for DSL systems, this gain is still far from the channel capacity. Recently rediscovered low-density parity-check (LDPC) codes, which were invented by Gallager in 1962, have become the best-known practically implementable codes for closely approaching the channel capacity. Probabilistic decoding of LDPC codes is an effective and practical technique based on the message-passing algorithm, which passes probabilistic messages on a normal graph generated from the parity-check matrix of the code. Our empirical results showed that for flat AWGN channels and various QAM modulation schemes, a performance improvement of more than 2.0 dB was obtained for DSL systems with LDPC coding instead of the RS-TCM scheme. This thesis emphasizes the study of the performance of LDPC codes applied in xDSL systems operating on frequency-selective multilevel AWGN channels. Two different frequency-selective channel models were used in our simulation studies, where bit-error rate (BER) was measured (via computer simulation) versus received signal-to-noise ratio (SNR). LDPC codes were employed together with multilevel QAM in DMT-xDSL systems. Coding gains greater than 2.8 dB were obtained at a BER of 10^-6 when compared with RS-TCM coding.

Table of Contents

Abstract ii
Table of Contents iii
List of Tables vi
List of Figures
vii Acknowledgments x Chapter 1 Introduction 1 1.1 Motivation for DSL Study 2 1.2 Basic Concepts 3 1.2.1 ADSL 3 1.2.2 VDSL 4 1.3 DSL Channel 4 1.3.1 Attenuation 4 1.3.2 Impairments 5 1.3.3 Composite channel 7 1.4 Simulink 8 1.5 Outline of Thesis 8 Chapter 2 DSL Systems 10 2.1 DMT Modulation 10 2.1.1 Adaptive multichannel transmission 10 2.1.2 Bit-loading algorithm 12 2.1.3 DMT application 13 iv 2.2 DSL Systems 15 2.2.1 T C M encoding in ADSL 17 2.2.2 T C M decoding in ADSL 21 Chapter 3 LDPC Coding and Decoding 22 3.1 Gallager LDPC Coding 22 3.2 Gallager LDPC Decoding 24 3.3 Probabilistic Decoding for LDPC Codes 25 3.3.1 Normal graph 26 3.3.2 Basic definitions and simplified notations 27 3.3.3 Message-passing algorithm ;, 29 3.3.4 Binary block codes 33 3.3.5 LDPC codes 37 Chapter 4 LDPC Codes in DSL Systems 44 4.1 Constructing an LDPC Code 44 4.2 Encoding process for DMT systems 46 4.2.1 LDPC encoding and constellation mapping 47 4.3 Decoding process for DMT systems 49 4.3.1 Ideal A W G N channel 50 4.3.2 Frequency-selective AWGN channel 58 Chapter 5 Simulation Models and Results 63 5.1 System Models 63 5.2 Basic Parameters 64 5.2.1 Bit allocation table 64 5.2.2 Data rate, bit error rate and signal to noise ratio 65 5.3 Initialization 66 5.4 Simulation Results 67 5.4.1 Results for DMT systems over ideal A W G N channel 67 5.4.2 Results for DMT systems over frequency-selective A W G N channel 70 5.4.3 Effect of using simplified LDPC decoding 74 Chapter 6 Summary and Conclusions 76 6.1 Summary 76 6.2 Conclusions 77 6.3 Future Work 78 Appendix A The Calculation of the Shannon Limit 79 Glossary of Acronyms 82 Selected Bibliography 84 VI List of Tables Table 2.1 Relation between 4-D and 2-D cosets 19 Table 5.1 Gaps and gains for different coding schemes over flat AWGN channels 69 Table 5.2 Gaps and gains of different coding schemes over two different channels (BAT-1 and BAT-2) 74 vii List of Figures Figure 1.1 ADSL reference model 3 Figure 1.2 Channel gains in a twisted pair copper line with two bridged taps 5 Figure 1.3 Crosstalk between twisted pair lines 6 Figure 1.4 Noise spectrum in the ADSL loop combining various impairments 7 Figure 2.1 Basic multicarrier transmitter 10 Figure 2.2 Basic concept of adaptive multichannel (multitone) transmission 11 Figure 2.3 ADSL frequency spectrum 12 Figure 2.4 Discrete-time representation of multichannel data transmission system 14 Figure 2.5 Block Diagram of a DSL system with DMT modulation 16 Figure 2.6 Wei's 16-state 4-D convolutional encoder 17 Figure 2.7 T C M Encoder in ADSL 18 Figure 2.8 Constituent 2-D cosets for Wei's code 18 Figure 2.9 One stage of trellis diagram of the Wei's code 20 Figure 2.10 Diagram for decoding Wei's 16-state 4-D code 21 Figure 3.1 Gallager's construction of H matrix of a regular (20, 3, 4) LDPC code 23 Figure 3.2 A example of normal graph with one node 27 Figure 3.3 A normal graph with two nodes 29 Figure 3.4 Message-passing for two nodes 30 Figure 3.5 A bipartite graph for a Hamming code 34 Figure 3.6 A normal graph with a bit node 34 Figure 3.7 A normal graph with a check node 35 via Figure 3.8 A normal graph for an LDPC decoder 38 Figure 3.9 Message-passing flows for an LDPC decoder 40 Figure 4.1 Overall LDPC encoding and symbol mapping 47 Figure 4.2 Diagram of a system with coding, mapping over an ideal AWGN channel 50 Figure 4.3 Diagram for calculation of a posteriori probabilities for 16-QAM 52 Figure 4.4 Diagram for 16-QAM with Gray coding 55 Figure 4.5 Frequency-domain diagram for a distorted AWGN channel 59 
Figure 4.6 Frequency-domain zero-forcing equalizer for a distorted AWGN channel 61 Figure 5.1 Three system models for simulations 64 Figure 5.2 Performance curves of coding schemes over AWGN channel with 2 -QAM: (A) 16-QAM; (B) 64-QAM; (C) 256-QAM; (D) 1024-QAM 68 Figure 5.3 Comparison of performance curves of different length of LDPC coding schemes with 16-QAM over AWGN channel 69 Figure 5.4 Two examples of BAT-1 and BAT-2 for 256 DMT channels 71 Figure 5.5 Frequency responses of the channel filter with different length of taps 72 Figure 5.6 Comparison of performance curves of DMT systems over the channel filter with different length of taps 72 Figure 5.7 Comparison of performance of DMT systems with different coding schemes over the channel filter with 100 taps (a) BAT-1; (b) BAT-2 73 Figure 5.8 Comparison of performance of (938, 4, 14) LDPC coding by using two different arithmetic operations for iterative message-passing decoding algorithm 75 ix Dedication To my wife Alice, my daughter Isabel, and my son Timothy Acknowledgments I would like to express my sincerest gratitude to my supervisor Professor Robert W. Donaldson, for his guidance, encouragement, and support throughout this thesis. I am extremely grateful to him for his assistance without exception and for his extensive suggestions for revision of this thesis. Thank you Dr. Donaldson for being such a nice mentor. I wish to thank Professor Robert Schober for his insightful ideas and his precious time spent with me discussing some topics related my research. His help motivated and inspired me in many ways. I would also like to thank the members of our communications group for their help and friendship, including Wujun Cao, Jian Chen, Zhibing Chen, Victor Fong, Cyril Iskander, Ying Luo, L i Ma, Xinrong Wang, Zhipeng Wang, George Wai Wong, Lijuan Wu, Kaiduan Xie, Zhanping Yin, Fei Yu, Chu Zhang, Jie Zhao, and Juan Zi. Finally, I would like to express special thanks to my wife Alice for her consistent support during my studies at UBC. Her deep love and encouragement made this thesis possible. I am especially grateful to her for taking such a good care of our children. Chapter 1 Introduction In the beginning, the telephone access network was originally constructed at a large scale for carrying 4kHz voiceband telephony. This plain old telephone system (POTS) consisted of single twisted-pair copper wires which were reliable and could be laid out to a large and widespread population at relatively low cost. Later, as data transmission needs grew and electronic hardware improved, voiceband modems became available. POTS wires could then be used to carry other types of point-to-point communications such as telegraph, fax, and computer data. More recently, as computer networks and internet traffic grew rapidly the limited capacity of POTS links became problematic. It was believed that POTS wires had reached their limits. In the mid 1980s the digital subscribe line (DSL) modems were invented, thereby enabling spectrum usage on the twisted-pair copper wires to increase to tens of megahertz. Some of the noteworthy DSL developments are as follows: • First, the integrated services digital network (ISDN) was introduced as a service in 1986. The basic rate interface (BRI) of ISDN provides a total of 144 kb/s of symmet-ric digital information (including two 64 kb/s B channels and one 16 kb/s D channel) over loops of length up to about 5.5 km. 
• In 1992, the first high bit-rate digital subscribe line (HDSL) went into service offering 1.544 or 2.048 Mb/s over loops of up to about 3.7 km using 0.5 mm (24 AWG) twisted pair without an intermediate repeater. • The first prototype modems for asymmetric digital subscriber line (ADSL) appeared in 1992 following developments at Stanford University and AT&T Bell Labs. ADSL offers bit rates of up to about 6 Mb/s downstream (towards customer) and up to about 640Kb/s upstream (towards central office) by using the spectrum up to 1.1 MHz. The reach of ADSL is about 4 km and the old POTS service is maintained. • Very high bit-rate digital subscriber line (VDSL) is an extension of ADSL technology to higher rates, up to 52 Mb/s downstream. At such high bit rates, the copper loops must be so short that optical fiber would be used for all but the last few thousand feet. 1 Chapter 1 Introduction 2 1.1 Motivation for DSL Study Given a noisy communication channel, Claude E. Shannon [28] proved that there exists codes that make it possible to transmit information over the channel with an arbitrarily low probability of error if the information rate in bits per channel use is less than the channel capacity. Knowledge of this limit on information transmission rate is useful. However, Shannon's theorems are non-constructive and don't describe means to find codes which approach the capacity limit. More importantly, even if an oracle gave us sequences of such codes, it would still be necessary to know how to encode and decode such codes efficiently. Today, suppliers of DSL technology are moving to optimize their plant usage and to operate their equipment closer to performance limits. It is evident that channel coding may play a pivotal role for improving the performance of DSL systems in terms of higher data rate and long loop reach. The current channel coding scheme for standard DSL concatenates Reed-Solomon (RS) codes and multi-dimensional Trellis coded modulation (TCM) codes. Although RS-TCM coding provides a significant coding gain for DSL systems, this gain still far away from reaching the channel capacity. Recently rediscovered low-density parity-check (LDPC) codes have become the best-known practically implementable codes for closely approaching the capacity of various channels [7, 21]. Specifically, LDPC coding is already under consideration for use in DSL systems. This coding scheme for multilevel transmission should enhance the performance of the existing ADSL standards as well as those of the evolving VDSL. In this thesis we apply LDPC coding techniques to DSL systems which operate over two types of channels, namely additive white Gaussian noise (AWGN) channels and frequency-selective multilevel AWGN channels. To study the mechanism of LDPC codes a message-passing algorithm on a normal graph exploited in the iterative probabilistic decoder is proposed. A generic scheme for implementing LDPC-coded modulation system is then introduced. Simula-tions are employed to compare this new coding scheme with the one currently used in the DSL RS-TCM standard. Our performance analysis and results show that significantly improved coding gains are obtained in DSL systems over both types of channels. Chapter 1 Introduction 1.2 Basic Concepts 3 DSL is a local loop technology that uses existing standard twisted-pair copper wire to support voice and data communications simultaneously over a single wire pair without diminish-ing either the voice or data capacity. 
1.2.1 ADSL In ADSL systems, bandwidth provided one direction is different from that in the other. To simplify, more data can be delivered downstream (toward the user) than on the return upstream channel. A POTS splitter separates the voice channel from the data channel through highpass and lowpass filters to provide coexisting service. ADSL increases transmission rates from T l (1.544 Mbps) and E l (2.048 Mbps) to 6 Mbps downstream and from 64 Kbps to 640 Kbps upstream. ADSL is a single pair implementation and includes POTS service. The initial application for ADSL was video on demand (VOD). ADSL has since become popular for access to the Internet, corporate networks, and interactive multimedia applications such as multi-player gaming, and for online education. Additionally, ADSL operates at speeds of up to 6 Mbps so that it converts a POTS line into a high-speed digital line that provides simultaneous services of phone, fax and ultra-fast Internet access, as shown in Figure 1.1. Other applications might include broadcast entertainment video, distance learning, CD-quality music on demand, home shopping, and remote L A N access. PSTN Network Packet Network Central Office POTS Splitter Local Loop Unshielded Twisted Pair A D S L Modem (Center) ATU-C User End P O T S Splitter Telephone -PC A D S L Modem (Remote) ATU-R Figure 1.1 ADSL reference model. Chapter 1 Introduction 4 1.2.2 VDSL VDSL is a short-distance, high-bandwidth solution proposed to complement optical fiber as it becomes widely deployed. The VDSL bandwidth is considerably wider than that of conven-tional DSL, but the technology is still in its infancy. There are currently few products supporting VDSL technology. One drawback for VDSL is the crosstalk (XTalk) among adjacent wire pairs, which act as antennae in the short wave range thereby interfering with radio communications. VSDL will be used primarily for loops fed from an optical network unit (ONU), which is typically located less than a kilometer from the customer. Few VDSL loops will be served directly from a central office (CO). Optical fiber connects the ONU to the CO. VDSL transmission over a twisted wire pair is used for the few thousand feet (the "last mile") from the ONU to the customer premises. 1.3 DSL Channel In 1881, Alexander Graham Bell invented the unshielded twisted pair (UTP). With a sufficiently short space between twists, the electromagnetic coupling of energy over a small segment of wire is canceled by the out-of-phase energy coupled on the next segment of wire. Modern telephone cables are designed with slightly different twist rates for each wire pair to assure minimal crosstalk. Copper conductors are used to minimize signal attenuation due to electrical resistance. 1.3.1 Attenuation The conductor and insulation are the major contributions to UTP cable attenuation. Attenuation is both frequency dependent and cable length dependent. In general, the higher the signal frequency and/or the greater length of cable, the greater the attenuation. Figure 1.2 shows a typical qualitative example. The two dips in the figure are created by bridged taps. In spite of the large attenuation, it is still possible to use the twisted pair over a large frequency range (up to a few MHz) and to achieve high transmission rates [31]. Chapter 1 Introduction 5 A H(f) Channel Gain Length L\<L2 f Frequency Figure 1.2 Channel gains in a twisted pair copper line with two bridged taps Several types of DSL services are based on exploiting the bandwidth inherent in twisted wire pairs. 
Originally intended for the transmission of speech (4 kHz voiceband) more than 100 years ago, the twisted pair copper wire has become increasingly valuable in terms of bandwidth utilization and commercial application. This has given rise to the popular saying that "DSL technology turns copper into gold".

1.3.2 Impairments

UTP cables were originally constructed to support services only in the voiceband spectrum. Several impairments become apparent when subscriber loops are used at the higher frequencies by DSL modems.

A. Crosstalk

Crosstalk noise in DSLs arises because the individual wires in the cable of twisted pairs radiate electromagnetically. The electric and magnetic fields thus created induce currents in neighboring twisted pairs, leading to undesired crosstalk signals on those other pairs. Generally there is a distinction between two different types of crosstalk, near-end crosstalk (NEXT) and far-end crosstalk (FEXT), as shown in Figure 1.3. NEXT is the leakage from one wire pair into another at the same end of the cable, whereas FEXT is the leakage between wire pairs at different ends of the cable. Normally, NEXT is most severe at the CO, where an attenuated upstream signal can be drowned by strong NEXT interference. At the customer end the individual wires are spliced from the binder of the cable, thereby reducing NEXT levels.

[Figure 1.3 Crosstalk between twisted pair lines]

The power spectral densities (PSDs) of NEXT and FEXT have been derived using transmission line theory [3]. The NEXT PSD from a source with PSD S(f) is modeled by

S_next(f) = k_n f^{1.5} S(f), where k_n = 10^{-13} (N/49)^{0.6}.    (1.1)

In (1.1), k_n is a constant determined from empirical studies [3], with N being the number of pairs in the same binder that carries the DSL service. The FEXT PSD is similarly modeled as follows:

S_fext(f) = k_f f^2 d |H(f, d)|^2 S(f), where k_f = 3.27 x 10^{-19} (N/49)^{0.6}.    (1.2)

In (1.2), H(f, d) is the channel transfer characteristic, with d denoting the transmission distance.

B. Radio frequency interference

Radio frequency interference (RFI) is the result of imbalance in the DSL pair. The imperfectly balanced UTP might emit radiation (egress) and receive outside radio-frequency signals (ingress). For DSL loops, RFI is ingress noise introduced from radio transmitters in the vicinity of UTP wires. Typical examples of RFI include amateur radio transmitters operating in the so-called HAM radio band, and AM radios [3]. Although the transmit antennas of both types of radios are normally located far from UTP cable, some ingress into the wires of xDSL loops is always possible.

C. Impulse noise

There are many potential sources of nonstationary interference caused by transient electromagnetic fields in the region of UTP cables. Such burst noise is usually referred to as "impulse noise". Such noise arises from electronic components in the home such as refrigerator motor switching, phones ringing, microwave ovens starting, electric drill motors, and other electrical appliances.

1.3.3 Composite channel

It is typical to have 50 twisted pairs bundled into one cable for several kilometers. As a result, the most dominant noise is the NEXT and FEXT interference created by services from other cables.

[Figure 1.4 Noise spectrum in the ADSL loop combining various impairments (noise PSD in dBm/Hz, roughly -35 dBm/Hz, spanning approximately 600 kHz to 1.9 MHz)]
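As a small numerical illustration of the crosstalk models (1.1) and (1.2), the sketch below evaluates both PSDs for a flat disturber spectrum. It is only a demonstration under stated assumptions: the coefficients are the empirical values quoted above, while the disturber level, the loop response magnitude |H(f, d)| and the unit convention for the distance d are hypothetical choices rather than measurements from this thesis.

```python
import numpy as np

# Illustrative sketch of the crosstalk models (1.1)-(1.2); the disturber PSD, the
# loop response and the distance units below are assumptions for demonstration only.

def next_psd(f_hz, s_disturber, n_pairs=49):
    """NEXT PSD: S_next(f) = k_n * f^1.5 * S(f), with k_n = 1e-13 * (N/49)^0.6."""
    k_n = 1e-13 * (n_pairs / 49.0) ** 0.6
    return k_n * f_hz ** 1.5 * s_disturber

def fext_psd(f_hz, s_disturber, d, h_mag, n_pairs=49):
    """FEXT PSD: S_fext(f) = k_f * f^2 * d * |H(f,d)|^2 * S(f), k_f = 3.27e-19 * (N/49)^0.6."""
    k_f = 3.27e-19 * (n_pairs / 49.0) ** 0.6
    return k_f * f_hz ** 2 * d * np.abs(h_mag) ** 2 * s_disturber

# Example: flat -40 dBm/Hz disturber PSD over the ADSL band, crude loop attenuation model.
f = np.linspace(30e3, 1.104e6, 512)                  # ADSL band (Hz)
s_tx = 10 ** (-40 / 10.0) * 1e-3 * np.ones_like(f)   # -40 dBm/Hz expressed in W/Hz
h = np.exp(-1e-6 * np.sqrt(f))                       # hypothetical |H(f, d)| shape
print(next_psd(f, s_tx)[::128])
print(fext_psd(f, s_tx, d=3000.0, h_mag=h)[::128])
```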
The statistics of these noises have been studied for many years both theoretically and by extensive measurements [29, 3]. Figure 1.4 shows typical power spectra [31] combining all impairment factors described previously in a 50-pair cable. The main point of this discussion is Chapter 1 Introduction 8 that the total noise spectrum is quite complicated and is far from being a constant or a monotone decreasing function of frequency. 1.4 Simulink This thesis involves a study of the various systems of DSL transceivers with the assistance of a Simulink environment. Simulink is a software package for modeling, simulating, and analyz-ing dynamic systems. It supports linear and nonlinear systems, modeled in continuous time, sampled time, or a hybrid of the two. Systems can also be multirate, where different system parts are sampled or updated at different rates. For modeling, Simulink provides a graphical user interface (GUI) for building models as block diagrams, using click-and-drag mouse operations. With this interface, one can draw the models just one would with pencil and paper (or as most textbooks depict them). This approach has substantial advantages over those of previous simulation packages that require formulation of differential equations and difference equations in a language or program. Simulink includes a comprehensive block library of sinks, sources, linear and nonlinear components, and connectors. One can also customize and create one's own blocks. In the last few years, Simulink has become the most widely used software package in academia and industry for modeling and simulating dynamic systems. Simulink encourages one to explore various alternatives. 1.5 Outline of Thesis This thesis proposes methods for applying LDPC codes in DSL systems over frequency-selective multilevel AWGN channels. The iterative probabilistic decoding approach utilizes a message-passing algorithm on a normal graph that is constructed by the parity-check matrix. The obvious improvement of performance is obtained by using this coding scheme in discrete multitone (DMT) systems over both flat AWGN and distorted AWGN channels. In Chapter 2, the framework of current DSL systems is presented. In Section 2.1 we first Chapter 1 Introduction 9 describe the specifications and implementation of DMT Q A M modulation. Then, we demonstrate a generic blockflow of the DSL systems with DMT modulation. After that we show processes of RS T C M encoding and Viterbi decoding of the current ADSL standard. In Chapter 3, we briefly introduce the properties of regular LDPC codes invented by Gallager and present effective means of decoding for such codes. In Section 3.3, we focus on the topic of probabilistic iterative decoding algorithms for linear binary block codes, particularly for LDPC codes. The mechanism of message-passing on a normal graph is introduced. We apply this message-passing algorithm for LDPC codes and further obtain the representations of posterior probabilities and log-likelihood posterior probabilities for corresponding messages. A simplified application for the algorithm is proposed by utilizing "min" approximations. In Chapter 4, we construct appropriate LDPC codes for DSL systems. In Section 4.1, we use a trianglized systematic sparse parity-check matrix for L D P C codes to simplify the procedures of encoding and decoding. In Section 4.2, we choose a structure that is compatible with that of DMT DSL standards. 
In Section 4.3, to simplify the computation for decoding we employ M-ary double Gray-coded pulse amplitude modulation (PAM). We propose schemes which using frequency-domain zero-forcing equalization in the channel detector for demodulation and to facilitate the decoding of LDPC codes over various frequency-selective AWGN channels. In Chapter 5, we show the results of our simulation. These results demonstrate perfor-mance improvement in the form of coding gain of around 2.5 dB at 1CT6 bit error rates over existing DSL system designs. These results indicate that received power-to-noise levels may be reduced by 44%, either by transmitting at reduced power levels (thereby also reducing NEXT and FEXT on adjacent DSL lines), or by increasing DSL line length between repeaters, or by some combination of these approaches. Finally, in Chapter 6 we provide a summary for this thesis, articulate important conclu-sions and propose future research related this topic. Chapter 2 DSL Systems Discrete multitone (DMT) modulation was introduced by Kalet [17] in 1989. DMT is based on the methods proposed by Peled and Ruiz [24] in 1980 to take advantage of digital signal processors and the discrete Fourier transform (DFT). Use of DMT for ADSL was first proposed by John M . Cioffi in 1991. In March 1993 the DMT system was chosen to be the basis of the ANSI standard for ADSL. 2.1 DMT Modulation . The DMT is a form of multicarrier modulation (MCM), the basic principle of which is shown in Figure 2.1. A block of b bits is transmitted over a set of parallel passband channels covering disjoint frequency bands. The transmitter sums TV independent waveforms and the receiver separates them with frequency selective filters. N b = i = 1 bx bits f Serial b2 bits w w b bits ^ to Parallel bN bits W * 2 w FN Figure 2.1 Basic multicarrier transmitter. 2.1.1 Adaptive multichannel transmission In data communication systems, a common and difficult problem is that of data transmis-sion over a wideband channel with severe intersymbol interference (ISI). The basic concept is illustrated in Figure 2.2, where two DSL transmission line characteristics are depicted; each 10 Chapter 2 DSL Systems 11 would have severe ISI if a single wideband signal were transmitted. An efficient method to solve this problem is DMT, whose essence is to divide a wideband channel into a large number of parallel narrowband AWGN subchannels. These subchannels usually correspond to contiguous disjoint frequency bands. Data transmission over a wideband channel is transformed into the parallel transmission over the subchannels. The overall data rate is the sum of the individual data rates of the subchannels. (a) UTP Channel Model Bits/channel Attenuation Bits/channel A A A Frequency Frequency Frequency (b) UTP Channel Model with Tap, RFI, XTalk Frequency Frequency Frequency Figure 2.2 Basic concept of adaptive multichannel (multitone) transmission If each of such subchannels has sufficiently narrow bandwidth, then, with appropriate signal design each subchannel has little or no ISI [16, 29], and each subchannel independently approximates an AWGN channel. In a DMT system, the transmission channel is partitioned into equal bandwidth subchan-nels [17]. The purpose of DMT in DSL is to effectively utilize the frequency-dependent channel characteristics. The multiple subchannels are modulated by quadrature amplitude modulation (QAM). The high-rate bit stream is thus divided into tens or hundreds of low-rate streams. 
Each Q A M modulator works in a different frequency band and transmits bit streams at different rates. Chapter 2 DSL Systems 12 An example of the frequency spectrum and modulation strategy of an ADSL system is shown in Figure 2.3. Additionally, a specific example of DMT is orthogonal frequency division multiplex-ing (OFDM) 1 , which is using in many wireless transmission applications. 16 Q A M 64 Q A M 4 Q A M Power Spectrum Frequency 4kHz 20kHz 1.104MHz Figure 2.3 ADSL frequency spectrum. 2.1.2 Bit-loading algorithm Since the frequency response of the channel in DSL systems can be considered relatively fixed, the signal to noise ratio (SNR) on each subchannel can be determined with good accuracy. Bit-loading means that different numbers of bits are assigned to each subcarrier depending on the associated subchannel SNR. Better subchannels (with high SNR) carry more information, while poor subchannels (with low SNR) carry little or no information (see Figures 2.2 and 2.3). For an AWGN channel (assume all subchannels are narrow enough that they can be taken as AWGN channels), the average number of bits on each subchannel is defined [17, 29] by bk = i log2(l + ^^j(bits)/dimension 0<k<N-l (2.1) The difference between DMT and O F D M is that DMT uses optimal bit-loading algorithm while O F D M puts a fixed and equal number of bits on all subchannels. Chapter 2 DSL Systems 13 where T is a gap that quantifies the effective loss in SNR with respect to (w.r.t.) the channel capacity. The water-filling (or water-pouring) energy distribution algorithm for subchannel loading can be approximated by a flat distribution on virtually all DSL subchannels with minuscule loss in performance, as long as the correct transmission band is used, as shown by P. Chow [6]. 2.1.3 DMT application The DMT technique is used as the standard method for A D S L [1] and VDSL [8]. It is a method by which to approximate complex filters needed to accomodate to the channel filter characteristic by simpler DFT operations. These DFT operations are applied to implement linear matrices transformation for the entire modulation/demodulation process [3, 29]. In DMT, an extended guard interval contains a cyclic prefix. The v samples in the prefix repeat those at the last of the symbol, i.e., With the existence of the cyclic prefix, much processing simplification occurs. In such case, the matrix description H of the DSL channel with frequency transfer characteristic H{f) is an N x N equivalent "circulant" matrix of the following form [29, 16]: x_i = xN_t for i - 1, 2, v. (2.2) h0 hi h2 0 h0 hY / i v _ i hv 0 ••• 0 hv_2 hv_Y hv ••• 0 H = 0 0 0 0 h0 h 0 k hv 0 0 0 •0 h V - 1 hi h2 h3 hv 0 0 0 (2.3) If the discrete-time representation of the channel is depicted as shown in Figure 2.4, then the relation of variables can be described in the compact matrix form as follows: Chapter 2 DSL Systems 14 y = Hx + w (2.4) W H • y Figure 2.4 Discrete-time representation of multichannel data transmission system In (2.4), x, w and y denote, respectively, the transmitted symbol vector , the channel noise vector and the received symbol vector. They are defined as fo l lows : X = XN_2 . . . X 0 ] . W = [\VN_l WN_2 . . . 
W 0 ] a t l d y = [V/V-I y N - 2 ••• Jo] • An important property of the circular matrix H in equation (2.3) is that it can be spectrally decomposed as follows [16]: H = Q f A Q (2.5) In (2.5), the superscript f denotes Hermitian transposition, Q is a matrix corresponding to DFT and A is a diagonal matrix containing the /V Fourier transform values denoted by ^v-i» •••> for the sequence h0, hu hv characterizing the channel. The kl-th element of NxN matrix Q starting from the bottom right at k - 0 and / = 0 and counting up and leftward is G H = -j=exp(-j2Kj^j fork, I = 0, 1, ..., N- 1 . (2.6) Thus one sees that the matrix Q is an orthonormal matrix with a property that Q Q = I (2.7) 2 When a cyclic prefix is used, the last V output samples are excluded from X, W and y. Chapter 2 DSL Systems 15 where I is the identity matrix. Therefore, Q 1 = Q f . Assume that x = Q f X where X is the frequency-domain vector of channel input sequence. The first N/2 elements of the vector X can be taken as N/2 parallel complex-valued Q A M constellation points. Set the left N/2 elements as X, = XN_t* for i = N/2, ...,N- 1 3 , then the symbol x must be real. Define the frequency-domain vector representation of the channel output vector y as Y = Q y , then Equation (2.9) indicates that N parallel independent operations can be implemented. For the A W G N channel with knowledge of the measured frequency response A,, is available for i - 0, 1, N- 1. By computing a maximum likelihood estimate of each received element 7, the frequency-domain value of each transmitted element X, is determined. It should be mentioned that the price for the computation simplification is the "wasted N energy" in the cyclic prefix. The effective power rate is thereby lowered to — . Y = Q(Hx + w) = Q ( Q f A Q Q X + w)= A X + W (2.8) where W = Qw. From equation (2.8), we obtain Y, = XiXt + W( for i = 0, 1, N- 1. (2.9) 2.2 DSL Systems DMT modulation technology is a part of standards of both ADSL [1] and VDSL [8]. 3 In DSL standards [1, 8] the elements X,- = XN_t* are defined for i = N/2 + 1, N- 1 .there is not data transmitted in the subcarriers 0 and N/ 2 . Chapter 2 DSL Systems 16 Tx data Digital Interface Scrambler RS Encoder Interleaved Path Fast Path Trellis Code Q A M Mapper Gain Scalingi IDFT Cycl ic Prefix D / A Transmitter I Channel | i (Distortion & Noise) fix data De-Scrambler RS Decoder Fast Path T Interleaved Path De- Mapper Vitcrbi Decoder Gain Scaling D F T Remove Prefix A / D Receiver Figure 2.5 Block Diagram of a DSL system with DMT modulation. Figure 2.5 is a general block diagram of a DSL system with DMT modulation. In the transmitter the data from the digital interface is scrambled, RS encoded, and interleaved or not (in the fast path). Data from interleaved and fast paths are then mapped onto a sequence of constella-tion points, each of which is for respective used tone in a DMT symbol. (In an ADSL system, the mapping function may include a Wei's 16-state 4-dimensional (4-D) trellis [32] encoder.) After that, each used tone is multiplied by a gain-scaling coefficient to control the power of the subchannel. The scaled frequency domain symbol is then transformed into the time domain by using an inverse discrete Fourier transform (IDFT). Before entering the channel following D/A conversion the time domain symbol is extended by a cyclic prefix. Following A/D conversion at the receiver and removal of the cyclic extension, the time domain symbols are converted into the frequency domain by using a DFT. 
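The diagonalization in (2.2)-(2.9) is easy to verify numerically: with a cyclic prefix at least as long as the channel memory, discarding the prefix at the receiver turns the linear channel convolution into a circular one, and the DFT then reduces the channel to a single complex gain per tone. A minimal sketch follows, assuming an arbitrary short example channel and block size that are not taken from the thesis.

```python
import numpy as np

# Numerical sketch of the cyclic-prefix/DFT diagonalization in (2.2)-(2.9).
# The channel taps h and the block size N are arbitrary choices for illustration.
rng = np.random.default_rng(0)
N, v = 16, 3                          # FFT size and cyclic prefix length (assumed)
h = np.array([1.0, 0.5, -0.2, 0.1])   # example channel impulse response, v + 1 taps

x = rng.standard_normal(N)            # one time-domain DMT symbol (real-valued)
x_cp = np.concatenate([x[-v:], x])    # prepend the cyclic prefix: x_{-i} = x_{N-i}

y_full = np.convolve(x_cp, h)         # linear convolution with the channel
y = y_full[v:v + N]                   # receiver discards the prefix samples

# Frequency domain: Y = Lambda * X (noiseless here), Lambda = DFT of the channel taps.
Y = np.fft.fft(y)
X = np.fft.fft(x)
Lam = np.fft.fft(h, N)
print(np.max(np.abs(Y - Lam * X)))    # ~1e-15: each tone sees one flat complex gain
```

The printed value is at machine precision, confirming Y_i = λ_i X_i tone by tone; channel noise would simply add the transformed term W_i as in (2.8).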
The de-mapper function makes decisions on the noisy constellation points. The corresponding Viterbi decoding [12, 32] is employed at this stage. Following that, the output byte stream is de-interleaved, the RS codewords are decoded, and the data is then de-scrambled and sent to the digital interface.

2.2.1 TCM encoding in ADSL

The simplest forward error correction (FEC) scheme provided in DSL systems is the RS code [1, 8], which is a cyclic code. RS code arithmetic executes in Galois Field 256 (GF256) and allows up to 16 erroneous bytes in a codeword of up to 255 bytes to be corrected. A further option used in the ADSL ANSI T1.413 standard is to concatenate the RS code with TCM [29, 1, 3]. The RS code is here the outer code while the trellis code is the inner code.

In ADSL, TCM uses Wei's 16-state 4-D convolutional encoder [32] to produce convolutional codes [1]. The encoding circuit is shown in Figure 2.6. The trellis encoder, which uses a set of bits u = [u1, u2, u3, ..., u(x+y-1)] as its input vector, is shown in Figure 2.7. Using a 4-D convolutional encoder, each codeword u is encoded into two binary words v = [v0, v1, ..., v(x-1)] and w = [w0, w1, ..., w(y-1)], which are modulated onto two constellation points for successive subchannels.

[Figure 2.6 Wei's 16-state 4-D convolutional encoder]

Some specifications of this encoding scheme are as follows:

• u0 = S0, and (u0, u1, u2) is used to select one of the eight 4-D subsets.
• The subset chosen is mapped to two indices that determine the least significant bits (LSBs) of v and w. The mapping is

v0 = u3        w0 = u2 ⊕ u3
v1 = u1 ⊕ u3   w1 = u0 ⊕ u1 ⊕ u2 ⊕ u3    (2.10)

• The remaining bits of u are mapped to the most significant bits (MSBs) of v and w.

[Figure 2.7 TCM encoder in ADSL: bit converter followed by Wei's 16-state 4-D convolutional encoder (rate R = 2/3)]

In a trellis coded modulation system, the expanded constellation is labeled and partitioned into subsets ("cosets") using a technique called mapping by set-partitioning. Each of the eight 4-D cosets in Wei's code is the union of two Cartesian products of a pair of 2-dimensional (2-D) cosets; the combinations are listed in Table 2.1. The four constituent 2-D cosets, denoted C2^0, C2^1, C2^2, C2^3, are shown in Figure 2.8.

[Figure 2.8 Constituent 2-D cosets for Wei's code]

The encoding algorithm ensures that the 2 LSBs of a constellation point form the index k of the 2-D coset C2^k in which the constellation point lies, for k = 0, 1, 2, 3. The bit pairs (v1, v0) and (w1, w0) are the binary representations of this index. The three bits (u2, u1, u0) determine which of the eight 4-D cosets is chosen. The eight cosets are labeled C4^n, where n is the integer with binary representation (u2, u1, u0). The bit u3 determines which of the two Cartesian products of 2-D cosets within the 4-D coset is chosen (see Figure 2.7). The bit pairs (v1, v0) and (w1, w0) are obtained from (u3, u2, u1, u0) using the linear equations (2.10). All relationships are shown in Table 2.1.

Table 2.1 Relation between 4-D and 2-D cosets.
4-D Coset I<2 U\ UQ v, v 0 Wj WQ 2-D Cosets c 0 0 0 0 0 1 1 1 1 0 0 1 1 0 0 1 1 C 2 x C 2 C 2 x C 2 c 4 0 1 0 0 1 1 0 0 0 0 1 1 1 1 0 0 C 2 x C 2 C 2 x C 2 c 2 0 0 1 0 1 0 1 0 1 0 0 1 1 0 0 1 C 2 x C 2 C 2 x C 2 c 6 0 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 C 2 x C 2 C 2 x C 2 c 1 c 4 0 0 0 1 1 0 0 1 0 0 1 1 1 0 • 0 1 C 2 x C 2 ^ 2 x ^ 2 . c 5 0 1 0 1 1 1 0 1 . 0 0 1 1 0 1 1 0 C 2 x C 2 C 2 x C 2 c 3 0 0 1 1 1 0 1 1 1 o 0 1 0 0 1 1 C 2 x C 2 C 2 x C 2 c 7 c 4 0 1 1 1 1 1 1 1 1 0 0 1 1 1 0 0 C 2 x C 2 C 2 x C 2 The trellis diagram relates to the finite state in Figure 2.6 as shown in Figure 2.9. In Figure 2.9, S = (53, 5 2, Si, S0) represents source state, while T = (T3, T2, Tu T0) represents target state in the finite state machine. Each state S, represented by its hex value in bold, is connected to four Chapter 2 DSL Systems 20 states T, similarly represented by four branches determined by the values of u2 and u{. Each branch is labeled with the 4-D coset specified by the value of u2, ux (and u0 = S0, see Figure 2.6). of 4-D cosets r 0 2 4 6 o c 13 5 7 1 c 2 0 6 4 2 C 3 17 5 3 C 4 6 0 2 4 C 5 7 13 5 C 6 4 2 0 6 C 7 5 3 1 7 C 2 0 6 4 8 C 3 17 5 9 C 0 2 4 6 A C 13 5 7 B C 6 4 2 0 c c 7 5 3 1 D C 4 6 0 2 E C 5 7 13 F C (S3, S2, Si, So) (T3, T2, Tif T0) Indices of 4-D cosets Figure 2.9 One stage of trellis diagram of the Wei's code. The branches connect each source state to those target states that can be reached from that partic-ular source state. The trellis flow is from left to right and each branch represents a 4-D coset. The eight possible indices of 4-D cosets are numbered from 0 to 7. The index numbers are ordered such that the leftmost index corresponds to the topmost branch. The indices on the left side Chapter 2 DSL Systems 21 indicate the cosets numbers of the outgoing branches from the that source state. The indices on the right side indicate the coset numbers corresponding to the incoming branch to that target state. ' 2.2.2 TCM decoding in ADSL The T C M decoder implements the Viterbi algorithm [12, 32] that performs maximum likelihood (ML) decoding. The Viterbi algorithm finds the path through a trellis that is most likely to have resulted in the sequence of received symbols. As a preliminary step, the algorithm must determine that point in each of the multidimensional subsets which is closest to the received point, and compute its associated metric (the squared Euclidean distance between the two points). Since the 4-D constellation is partitioned into eight 4-D subsets ("2-D cosets") with Ad2 as the minimum squared Euclidean distance (MSED) (see Figure 2.8), each received 4-D point can be divided into a pair of 2-D points. The closest point in each 4-D subset and its associated metric can be found based on the point in each of the 2-D subsets which is closest to the corresponding received 2-D point and its associate metric [32]. Figure 2.10 shows the process of the Viterbi decoding. Each received 4-D constellation symbol First received 2 - D point Second received 2 - D point Find closest point in each 2 - D subset and its metric Find closest point in each 2 - D subset and its metric I < Find closest point in each 4-D subset and its metric Trace back trellis diagram and obtain the estimated symbol Demodulate each pair of points into two binary words v and vv Map the 2 LSBs of v and w to (wj.M2.M3); the remaining bits of v and w to the M S B s of u. Figure 2.10 Diagram for decoding Wei's 16-state 4-D code. 
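The first stage of this decoder, finding the nearest point and its squared-distance metric within each 2-D coset, can be sketched directly. The square constellation and the coset indexing below (by the parity of the row and column indices, mimicking the checkerboard pattern of Figure 2.8) are illustrative assumptions; the exact ADSL constellation labeling defined in T1.413 is not reproduced here.

```python
import numpy as np

# Sketch of the branch-metric step of the Viterbi decoder in Section 2.2.2: for each
# received 2-D point, find the nearest constellation point in each 2-D coset and its
# squared Euclidean distance.  The constellation and the coset indexing are
# illustrative (four cosets given by row/column index parity), not the T1.413 labels.

def build_cosets(m=4):
    """m x m grid of odd-integer points, partitioned into four 2-D cosets."""
    cosets = {k: [] for k in range(4)}
    for row in range(m):
        for col in range(m):
            point = complex(2 * col - (m - 1), 2 * row - (m - 1))
            cosets[2 * (row % 2) + (col % 2)].append(point)
    return cosets

def per_coset_metrics(r, cosets):
    """Closest point and squared distance in each 2-D coset for a received point r."""
    return {k: min(((abs(r - p) ** 2, p) for p in pts), key=lambda t: t[0])
            for k, pts in cosets.items()}

cosets = build_cosets()                        # 16-point square constellation
print(per_coset_metrics(1.2 - 0.7j, cosets))   # metrics feeding the 4-D subset search
```

The resulting per-coset 2-D metrics are then added in pairs to score the eight 4-D subsets, which is what the trellis search of Figure 2.9 operates on.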
Chapter 3 LDPC Coding and Decoding Low-density parity-check (LDPC) codes were introduced along with an iterative probabi-listic decoding algorithm by Gallager [14, 15] in the early 1960's. These codes were constructed using sparse random parity check matrices. However, they were not fully utilized until the advent of turbo codes [2], when they were rediscovered and generalized by MacKay and Neal [21], who showed that they perform almost as close to capacity as turbo codes. This reality makes LDPC codes not only attractive from a theoretical point of view, but also ideal for practical applications. Recently, it has been recognized that the various message passing decoding algorithms have provided good decoding performances for LDPC codes [5, 18, 23, 33]. In this chapter we give a brief overview of the origins of LDPC codes and the methods used for their coding and decoding. 3.1 Gallager LDPC Coding LDPC codes, invented by Gallager [15] in his Ph.D. thesis, are defined by a very sparse parity check matrix H with n columns and m rows. The sparsity of H is key property that allows for the algorithmic efficiency of LDPC codes. In Gallager's original construction [14], a (n, p, y) LDPC code is a binary linear block code of length n, with matrix H having a fixed column weight p and a fixed row weight y, where p > 3 and y > P • The integers p and y should be small w.r.t. n in order for H to be sparse. Therefore, m is the total number of parity equations of the code and there are a total of 2k fc ifl 0 codewords, where the message length k - n-m. The code rate R - - = 1 = 1 - - . n n y A specific example is a 15 x 20 matrix H of the regular (n, p, y) = (20,3,4) binary LDPC code constructed by Gallager is shown in Figure 3.1. Obviously, R - 1/4 . The parity-check matrix is divided horizontally into p = 3 equal size sub-matrices, each of which has a single ' 1' in every column. We define (3o v e r,„p as the maximum number of ones in the same row position in 22 Chapter 3 LDPC Coding and Decoding 23 H = 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 n m P Y 20 15 3 4 Figure 3.1 Gallager's construction of H matrix of a regular (20, 3, 4) LDPC code any two columns. In Figure 3.1, one sees that $ o v e r l a p =1 and the code defined by this parity check matrix is a linear block code with the minimum distance dmin = 6 . Therefore, the error correcting capability [34] of the code is t - dmin 1- = 2. If we denote the rows of H matrix as hj = (hjA, hjt2, hln), and the codeword vector as c = (c l s c2, • •., c„), we obtain the parity check constraint equations as the inner product of h, and c as follows: or equivalently: hj c = 0 j = 1,2, m (3.1) (3.2) From (3.2), one sees that every bit c, of codeword c is checked by exactly p parity check Chapter 3 LDPC Coding and Decoding 24 equations. 
If there is a single bit error in the received vector v at position i (1 < i < n), then the p parity check equations for checking bit v,- will be violated, i.e., v • hj = 1 j e J,• = { j | hJti = 1}, 1 <j < m with |7,.| = p (3.3) where |/,| = p means there are p T s in column i of parity matrix H. For example, for the code given in Figure 3.1 if there is a single error bit in the received vector v at position i = 3 , then J 3 = {1, 8, 13} . Thus the 1st, 8th and 13th parity check equations will fail. If there are two error-bits in the received vector v, since (3„veW„p « p , the total number of parity check failures will increase (due to 2p - $ o v e r l a p > p), and for each erroneous bit position the number of failed parity-check equations will be greater than \_p/2j. In the example of the code given in Figure 3.1, assume two errors occur at position 1 and 6. Thus the 1st, 2nd, 6-th and 7-th parity-check equations fail. The error in position 1 is contained in the 1st and 6-th failed parity-check equations, and the error in position 6 is contained in the 2nd and 7-th failed parity-check equations. If the errors occur at positions 1 and 11, then there are 6 parity-check failures, namely the 1st, 3rd, 6th, 9th, 11th, and 12th parity-check equations. Each erroneous bit position causes 3 failed parity-check equations. 3.2 Gallager LDPC Decoding LDPC codes are linear block codes with a specific construction. In theory they can be decoded utilizing conventional schemes such as standard array and syndrome table decoding [34]. For a binary linear (n, k) block code, the standard array decoder requires a 2" size of memory for each possible binary n tuple. Syndrome table decoding needs to store each of the 2"~k error patterns of length n. These schemes have huge storage requirements as n and n-k are usually large values for LDPC codes, typically with both n and n-k being on of the order of hundreds or thousands. Ctopter 3 LDPC Coding and Decoding 25 As parity-check matrices of LDPC codes are sparse matrices with very low density of ones, iterative techniques used to decode LDPC codes become feasible. Based on specified parity-check matrices H for LDPC codes, two decoding schemes devised by Gallager were the bit-flipping decoding algorithm and the probabilistic decoding algorithm. The bit-flipping decoding algorithm is particularly simple but is usable only for the binary symmetric channel (BSC) at the rates far below channel capacity. It is an iterative decoding scheme based on a hard decision of the received bits [14, 15]. We do not consider this topic in this thesis. The probabilistic decoding algorithm utilizes the structure created by the parity-check equations more systematically than does the bit flipping algorithm. The probabilistic decoder is based on the message-passing algorithm [26], which involves passing probabilistic messages based on a graph or a network structure [18, 33] generated from the parity-check matrix H. Probabilistic decoding algorithms have become a general class of decoding algorithms for LDPC codes. The algorithm decodes directly from a posteriori probabilities at the channel output and exhibits much better performance than the bit-flipping algorithm. We will expand discussion of this algorithm in the next section. 3.3 Probabilistic Decoding for LDPC Codes Gallager provided a probabilistic decoding algorithm that works directly on the nodes and edges of the bipartite graph representing the code [15]. 
In 1981, Tanner [30] introduced a bipartite graphical model ("Tanner Graph") for LDPC codes and generalized the parity check constraints of LDPC to general linear block code constraints. Tanner proved the optimality of the sum-product and min-sum algorithms for decoding codes defined on cycle-free graphs. The belief propagation [22, 33] algorithm can be used to derive a number of iterative decoding algorithms for error control systems, including for LDPC codes, turbo codes, and others. Based on that the iterative decoding of compound codes is a instance of probability propagation algorithms that operate on a graphical model of the codes [33], the factor graphs were introduced by Kschis-chang, et al. [18]. They were shown to subsume many graphical models such as Bayesian networks, Markov random fields and the Tanner graph. The sum-product algorithm operates on Chapter 3 LDPC Coding and Decoding 26 factor graphs to compute various marginal functions by distributed message passing in the graphs. In 2001, Forney [11] proposed the normal realizations that introduced a normal graph, which has some advantage over the factor graph. There are no topological restrictions and a clean functional separation is obtained between operations in the message-passing algorithm. In following sections, the normal graph and message-passing algorithm are presented. Then we derive LDPC decoding algorithm based on them. 3.3.1 Normal graph A normal graph O is an undirected graph, consisting of nodes and edges, as introduced by Forney [11]. Each node can be connected to any number of edges, and each edge can be connected to one or two nodes. An edge connected to two nodes is called an internal edge, while an edge connected to only one node is called an external edge (a.k.a., "half-edge" or "leaf-edge"). Each edge is usually associated to a variable x, that takes value over a finite, discrete alphabet denoted by A, . The compound term "edge-variable" is used where we discuss the combination of the edge and its associated variable. The variables associated with the external edges are known as the "symbol variables" and represent the coupling of the system with its environment, while the internal edge-variables are known as the "state variables" and represent internal states of the system. To each node is associated a local constraint, which expresses a relationship between the variables associated with the edges connected to that node. As shown in Figure 3.2, node N connects to k + 1 edges that are associated with the variables xQ, xu xk, and it has a constraint set SN that combines the local constraints of k + 1 variables. There is one external edge associ-ated with the symbol variable x0 and there are k internal edges associated with the state variables xx, ..., xk. Let (JC0, xu xk) is a set of variables with a configuration space S - A 0 x Ax x ••• x Ak, and let SN be a subset of S, then the elements of SN are the valid configuration [18]. Suppose Chapter 3 LDPC Coding and Decoding 27 \ internal edges , -' node *0 * . . . external edge constraint set of node N Sj\jci^4.Q x .A^ x ""* XiA^ Figure 3.2 A example of normal graph with one node (x0, xu xk) belongs to SN, which is a constraint set (SN c S ) . If the prior probabilities of x0, xlt ...,xk are available w.r.t. 
node TV, and in addition, the variables are mutually independent (which is true for the graphs without cycles), then the joint prior probability Pn({XJ = a,-}|*=0) can be factored into the product of the individual prior probabilities* i.e., (3-4) 3.3.2 Basic definitions and simplified notations In Figure 3.2, there are several types of probabilities that can be associated with the variables x, fox i - 0, 1, k. A. Basic definitions the intrinsic probability for xt w.r.t. node TV is the prior probability denoted by PNin{xt = at) = />(*,• = «,•); (3.5) the posterior probability for x, w.r.t. node TV is the conditional probability based on node TV denoted by Chapter 3 LDPC Coding and Decoding 28 PNP" (x< = at) = P(x, = at\N); (3.6) the extrinsic probability for x, w.r.t. node TV is the new information for x, that has been obtained from node TV denoted by PNEX(XI = ai) = CXP{N\xt = ai), (3.7) where the normalization constant CX. = y ^ P(N\x- = a,)j I S used to meet the condition ^ PNex{xt- at) = 1 . "' e A' By using Bayes' rules, P(x, = at\N) = ^ ^ P ( A ^ | x , = ai)P{xt = ai), then PNP"S'(Xi = ai) = CX'PN'\XJ = ai)PNex(Xi = ai), (3.8) where CX' = ( ^ ^ A / " = ai)PNex{Xi - = (P(N)CX)~L is also a normalization constant to meet the condition P/"' s '(x, = a,) = 1. a,e A. In (3.7), PNex(Xj - a,) is a probability that describes the new information for x, that has been obtained from message-passing. The extrinsic message is an outgoing message from node N. For instance, an extrinsic probability PNex(x0 = a0) for the edge-variable x 0 can be computed at node Nbased on the intrinsic probabilities iV"(x, = ai), i = 1, ..., k for the other edge-variables connected to node N, as follows: PNex(x0 = a0) = CXO £ I I P i V ' " ( x ' - = a ' } ( 3 ' 9 ) fli ak i = i (n 0 , a , , . . . , a , ) s SN In (3.9) is a normalization constant, and summation is over all combinations of the variables au a2, ...,ak (a, e A,) such that the configurations (a0, au ak) e A0xAxx ••• xAk satisfy Chapter 3 LDPC Coding and Decoding 29 the constraints set for SN. B. Simplified notations • any variables x,, may take on a value that is omitted in the equations but can be in-ferred from context. For example, we use P(x,) to represent the probability P(x, = a,), where a, is some implicit value. • Instead of indicating the variables being summed over, the "not-sum" or summary no-tation [18] indicates those variables not being summed over. For example, ~{x2} is the notation of the not-sum over variable x2, and h is a function of three variables xx, x2, and x 3 , then the "not-sum" or "summary of h for x2" is denoted as ^ h(xi, x2, x 3) = h(xl, x2, x 3 ) . 3.3.3 Message-passing algorithm To study message-passing on graph with two nodes, the graph labeled O in Figure 3.3 has Figure 3.3 A normal graph with two nodes two nodes NL and NR connected by an edge-variable x0. Besides x0, node NL has k additional edges (labeled xrL, x2, ...,xkL) and node NR has / additional edges (labeled xxR, x2R, x,R). Let the local constraint sets for nodes 7VL and NR be denoted by 5^ and SR, i.e. (x0, x\, ...,xLk) e SL and (x 0,xf, . . . ,xf)e SR. Let the intrinsic probabilities w.r.t. NL and NR be given by Chapter 3 LDPC Coding and Decoding 30 ^PL'"(xL) = P<p'n(xL) i = 1,2, ...,k , and the intrinsic probability w.r.t. <& for the internal VPR'n(xj) = P9in(x?) j = 1,2,...,I edge x0 be set to a uniform value, i.e., P^'"(x0) - J-U for all x0e A0. \Ao\ Consider the extrinsic probability for xf w.r.t. the graph <E> (for example PRx(xi) in Figure 3.4). 
Assume the notation \IL->R(XO) indicates the message from node NL to node NR along the edge associated with the variable x 0 . The message carries the extrinsic probability of x0 from node NL to be used as intrinsic probability of x0 at the node NR, i.e., Hz.-»*(*(>) = PL"(XO) = PR"(XQ). (3.10) Figure 3.4 Message-passing for two nodes By u t i l i z ing (3.9), PL"(x0) = C, o £ I P * - ' " ^ a n d t h e n f o r ~{*o> J = 1 (jr0,jT,, ...,xt)e SL i - 1,2,...,/ can be computed as follows: pRex(x?) = cy • (3.11) Cliapter 3 LDPC Coding and Decoding 31 If all external variables (x\, ...,xLk,xR, xf) are renamed as (x u x2, ...,xk + l), and if the symbol is a constraint set for graph O in Figures 3.3 and 3.4, then constraint set will satisfy (3.12) below: [(*l> X2, • •., Xk + i) G S<j,] = [(x0, XI, Xk) G SL] O [(x0, Xi, X ; ) G SR]. (3.12) Consider the expression of the extrinsic probability for xf w.r.t. the graph O can also be derived similar as equation (3.11). Combining (3.11) and (3.12), obtains a generalized expression of the extrinsic probability for xt w.r.t. the graph O as follows: f k + l P<t> (xi) — Cx' • ^ -{",} - j = i for i = 1, 2, ...,yt + Z. (3.13) A. Graphs without cycles If a graph is always divided into two disconnected sub graphs when cut at any edge, then the graph is a cycle-free graph. Consider a graph <D is divided into two subgraphs O r and O^, which are connected by an internal edge x,. Then take these two subgraphs as two 'nodes' shown in Figure 3.4 the extrin-sic probabilities for the graph can be obtained by computing them for subgraphs O r and using (3.13). Continuously, we can take each of these subgraphs in turn (e.g., <X>r) and then choose an internal edge (e.g., xf) which connects two smaller graphs (e.g., <X>rr and O ^ ) . Computations for the appropriate messages passing in both directions along the internal edge x, enable the extrinsic probabilities for the graph O r to be obtained by calculating the extrinsic probabilities for the two smaller graphs and $>£ K using (3.13). By executing this scheme recursively, the Chapter 3 LDPC Coding and Decoding 32 extrinsic probabilities for the graph <X> can be obtained by implementing calculations for each individual node and then for messages passing between the nodes. These conclusion for the cycle-free graphs can be also found in [33, 5, 11]. As a result, the message-passing algorithm can be summarized as follows: Message-passing algorithm • stepl. Initialization. If any arbitrary node /Vin the graph <J> is connected to k nodes Ni,N2, • ••,Nk through edges xx,x2, ..., x*., then Input: the intrinsic probabilities from the edge Xj are P<b'"(xj) for; = 1, 2, k Output: the input message along the edge x} are \iN ^,N(xj) = P^'n(Xj) • step2. Sum-product update. The output message from node N to any node TV, via edge*,, Pw-^.C*,) for i = 1, 2, k, is obtained from the input messages \iN.^N(Xj) along the all other k- 1 edges (j = 1,2, ...,k and j* i), i.e., k \LN^Nl(x,) = Cx. £ J ^ p ^ ^ X ; ) for all z = 1,2, (3.14) -{*,•} L=i Equation (3.14) applies to all nodes in the graph. • step3. Message-passing. For any node in the graph, the output message from the node passed over each edge will become the input message for the receiving node. • step4. Repeat step2 and step3. Repeat step2 and step3 for all nodes in the graph in terms of the message-passing schedule until the stopping criterion has been reached. Since the expression (3.14) consist of the sum of the products, it is commonly called the sum-product update rule. 
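The sum-product update (3.14) can be written down directly by enumerating a node's valid configurations. The brute-force sketch below is meant only to illustrate the rule, since its cost grows exponentially with the node degree; the function and the example messages are hypothetical, and the parity-check example at the end reproduces the closed form (3.22) derived in Section 3.3.4.

```python
import itertools
import numpy as np

# Brute-force sketch of the sum-product update rule (3.14) at a single node: the
# outgoing message on edge i is the normalized sum, over all configurations allowed by
# the node's constraint set, of the product of incoming messages on the other edges.

def sum_product_update(constraint, incoming, i, alphabet=(0, 1)):
    """constraint: predicate on a full edge configuration (x_1, ..., x_k);
    incoming: one dict {symbol: probability} per edge (entry i is ignored);
    i: index of the edge whose outgoing message is computed."""
    out = {a: 0.0 for a in alphabet}
    for config in itertools.product(alphabet, repeat=len(incoming)):
        if constraint(config):
            others = [incoming[j][config[j]] for j in range(len(incoming)) if j != i]
            out[config[i]] += float(np.prod(others))
    z = sum(out.values())                          # normalization constant C_{x_i}
    return {a: p / z for a, p in out.items()}

# Hypothetical example: a degree-3 parity-check node, x1 + x2 + x3 = 0 over GF(2).
msgs = [{0: 0.9, 1: 0.1}, {0: 0.6, 1: 0.4}, {0: 0.8, 1: 0.2}]
print(sum_product_update(lambda c: sum(c) % 2 == 0, msgs, i=0))
# -> {0: 0.56, 1: 0.44}, matching (3.22): (1 + (1 - 0.8)(1 - 0.4)) / 2 = 0.56
```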
The sum-product algorithm has been used for the message-passing algorithm by many authors [14, 18]. The message-passing schedule determines the order in which the messages are passed. The stopping criterion is used to determine whether to stop passing messages and declare that decoding is complete or failed. By utilizing this algorithm we can obtain the extrinsic probabilities for the external edges. Chapter 3 LDPC Coding and Decoding 33 For some applications, the final required information may be the posterior probabilities, which can be derived by calculating the products of the intrinsic and extrinsic probabilities for the external edges using equation (3.8). B. Graphs with cycles A graph is said to have cycles if it is possible to trace a path from a node back to itself by moving along the edges of the graph in only one direction. The length of a cycle is the number of edges along the cycle's path. Cycles are common in many applications. If a graph includes cycles the assumption of independence for variables is invalidated, and the message-passing algorithm can only be used as an estimation approach without the guarantee of convergence. Fortunately, it has been found that for many cases of graphs with cycles, such as in the techniques of turbo and LDPC codes [13, 14, 20, 22, 30], the message-passing algorithm creates accurate estimates for the desired probabilities with much lower computation and complexity than an exact decoding. However, short cycles (e.g., of length 4) result in violation of the independence assumptions. In other words, the larger the cycle lengths the better performance of the algorithm. 3.3.4 Binary block codes Any (n, k) binary block code with k message bits and n-k check bits may be expressed by a bipartite graph. We assume a graph with n left bit nodes x (also called message nodes) and n-k right check nodes s (also called parity-check nodes). Figure 3.5 gives an example of a bipartite graph for a (7, 4) Hamming code with parity-check matrix H . In Figure 3.5 there are two basic types of nodes, the bit nodes x and check nodes s. According to the schemes of normal graphs in previous sections, and because the bit node and check node have different types of constraint sets, we will discuss the normal graph message-passing rules for each. Chapter 3 LDPC Coding and Decoding 34 X\ ~f~ X4 ~T" Xg -f~ X-j Xi H~ X4 + X5 "f~ X(, X-^ ~\~ X$ ~f* Xfi H" .X7 0 0 1 0 1 1 1 0 1 1 1 0 0 1 0 1 1 1 Figure 3.5 A bipartite graph for a (7, 4) Hamming code A. Bit node We use the model of a one node normal graph (e.g., Figure 3.2) to analyze a bit node B. The constraint set for bit node B is SB = {(x0, J C „ ...,xk)\(x0 = xx - ••• - xk)} where variables x0, xu ...,xk belong to the same alphabet A. Figure 3.6 shows a normal graph with a bit node. By using (3.9) the output along any edge-variable xt can be derived as M*->*,(*.• = 0 = C^YIILN^BIXJ = 0 for 1 = 0, 1, ...,k (3.15) j.= P where C X . is the normalization constant. Figure 3.6 A normal graph with a bit node Chapter 3 LDPC Coding and Decoding 35 Comparing (3.9) with (3.15), one sees that (3.15) simplifies the sum of products to a product of the intrinsic probabilities. B. Check node In any check node, a local constraint set specifies that all connected edge-variables sum to zero. If we use the same model of one node normal graph (e.g., Figure 3.2) to analyze the check node C and denote © as addition over GF2 , then the constraint set for the check node C is Sc -{(jt 0,x„ ...,xk)\(x0 ® Xi © ... 
® xk) = 0} where variables x0,xu ...,xk belong to the same alphabet A. A normal graph with a check node is shown in Figure 3.7, then the output along any edge-variable x,- is given by (3.16) below: VC^NW = cx. £ n ^ - c ( ^ ( 3 - i 6 ) -{*,.} j.= p ; N check node C XQ N, o : - • constraint set Sc = {(x0,xl,...,xk)\(x0®x1® ...®xk) = 0} Figure 3.7 A normal graph with a check node For binary block codes, A - {0, 1} . For a check node, the constraint set Sc constitutes 2k configurations for k + 1 variables. In equation (3.16), for a fixed x, the summation is over 2k 1 elements so that the computation appears complicated. Fortunately, we can find a way to Chapter 3 LDPC Coding and Decoding 36 simplify computation by using the binary additive law and induction rule. Suppose the input messages to check node C correspond to the intrinsic probabilities (]XN ^c(*i = 1) = Pi defined as pt - P'"(Xj = 1); then < ' . If we choose i = 0 to investigate the lm-»c(*i = 0) = l-pi solution it will have similar generalized results. Therefore, the output message along edge-variables x0 is the extrinsic probability w.r.t. C, i.e., Hc-,w0(*o = !) = © - © * * = 1) (3.17) Hc^o(*o = 0) = P(Xl®-®xk = 0) To calculate (3.17), we first consider the value of P(xx ® • • • © xk = 0) as follows: • For k - 2 , we have P(xx ® x2 = 0) = pxp2 + (1 -P\)(\-pz); then 2P(x 1 ©jc 2 = 0 ) - l = ( l - 2 P l ) ( l -2p2) (3.18) • Assume the equation (3.18) can be generalized to the case when k = K-l for AT > 3, K- 1 2P(xx ®x2 ® ••• ®xK_x = 0 ) - 1 = JJ (1 - 2pi) (3.19) 1 = 1 • Under the hypothesis of equation (3.19), when k = K, let XK = xx®x2® ••• ®xK and XK_ x = xx® x2® ••• ® xK_ x then 2P(XK = 0)-l = 2P(XK_x®xK = 0)-l = [2P(XK_l = 0)-l][l-2pk\. • By using (3.19) we obtain K 2P(xx®x2®---®xK=0)-l = J J ( l - 2 p , ) . (3.20) • According to induction rules, the expression (3.20) is valid for all K > 2 . Furthermore, (3.21) P(xx ®x2 ® ••• ®xk = 0) = -1 ( i + r j ( i - 2 A ) Chapter 3 LDPC Coding and Decoding 37 Substitution of (3.21) to (3.17) yields Hc->w.(*o = 0) = i + r j ( i -2/>,) and (3.22) IIC^N0(XO= 1) = Jl-Y[(l-2Pl) (3.23) where p, = \lN.^c(Xi= 1) are the input message probabilities. Equations (3.22) and (3.23) represent simplified update version of (3.16) when i = 0 , They can be generalized to common expressions for any i - 0, 1, k as follows: H e = 0) = o l + r j ( l - 2 p , ) j = o (3.24) \^C^>N,(Xi — 1) — o l - r j ( l - 2 ^ ) 7 = o (3.25) 3.3.5 LDPC codes LDPC codes are a specific example of linear block codes. By using the update equations for the bit nodes and check nodes described in section 3.3.4, we can exploit decoding on the graph for an LDPC code, as shown in Figure 3.8. A parity matrix H has n columns and m rows; therefore assume that the bit nodes are 5, for i - 1, 2, n and the check nodes are C,- for 7 = 1, 2, m. If there is an internal edge between the nodes 5, and C ; , then the edge is labeled by the edge-variable uitj. For an (n, p, y) LDPC code, there are py internal edges between those nodes in the graph. In addition, there is an Chapter 3 LDPC Coding and Decoding 38 Figure 3.8 A normal graph for an LDPC decoder external edge for each bit node Bt that is associated with the variable xt. The intrinsic probability w.r.t. bit nodes 5, is denoted as pt - Pl"(x,) . 
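Before turning to the message-passing schedule on this graph, the two local update rules derived in Section 3.3.4, namely (3.15) for a bit node and the simplified forms (3.24) and (3.25) for a check node, can be summarized in a short sketch. The function names and test values below are assumptions made only for this illustration:

/* Illustrative sketch of the bit-node update (3.15) and the simplified
 * check-node update (3.24)-(3.25) in the probability domain.
 * p_in[j] = P(x_j = 1) for the k incoming messages on the other edges;
 * out[b] is the outgoing message for the remaining edge.                */
#include <stdio.h>

/* Bit node: outgoing message is the normalized product of incoming ones. */
void bit_node_update(int k, const double p_in[], double out[2])
{
    double p0 = 1.0, p1 = 1.0;
    for (int j = 0; j < k; j++) { p0 *= 1.0 - p_in[j]; p1 *= p_in[j]; }
    double norm = p0 + p1;
    out[0] = p0 / norm;  out[1] = p1 / norm;
}

/* Check node: uses the product of (1 - 2*p_j) terms from (3.24)-(3.25). */
void check_node_update(int k, const double p_in[], double out[2])
{
    double prod = 1.0;
    for (int j = 0; j < k; j++) prod *= 1.0 - 2.0 * p_in[j];
    out[0] = 0.5 * (1.0 + prod);   /* P(x_i = 0) */
    out[1] = 0.5 * (1.0 - prod);   /* P(x_i = 1) */
}

int main(void)
{
    const double p_in[2] = {0.1, 0.8};
    double b[2], c[2];
    bit_node_update(2, p_in, b);
    check_node_update(2, p_in, c);
    printf("bit:   P0=%.3f P1=%.3f\n", b[0], b[1]);
    printf("check: P0=%.3f P1=%.3f\n", c[0], c[1]);
    return 0;
}

Both routines run in time linear in the node degree, which is what makes decoding of sparse parity-check codes practical.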
If we denote oc( i) as the set of parity checks in which the bit node 5, is involved and as the set of bit nodes connected to they'-th check node Cj, then oc(i) is also the set of row locations in the i-th column of the parity-check matrix that contain a "1", and is also the set of columns in the j-th row that contain a "1". A. Probabilities representations The message-passing algorithm starts with a set of incoming probabilities pt = P'n(xl) . In term of (3.15), the message from bit node 5, to check node Cj is obtained as follows: Chapter 3 LDPC Coding and Decoding 39 U„-><•(",, = 0 = cu Pin(xt = Q • TJ U . C / ^ M / = 0 for £ = 0, 1 (3.26) /s a(,)IU> where a(i) I{7} means the set a(z) except j, and c y is the normalization constant. According to (3.24) and (3.25), the message from check node Cj to bit node B, is given as \LCI-+B,{UI.I = 0) = | ( l + H [1 - 2\LBi^Cj(urj = 1)]) ; (3.27) ^ c ; - * , ( " u = !) = li1 ~ I I [ 1 - '^-cp'.-.j = 1)]) • (3-28) i'e P(/)l{i} The message passed from bit node S, to check node Cj is the probability that Bt has a certain value given the observed value of that bit node, and all the values communicated to B, in the prior round from check nodes other than Cj incident to Bt. On the other hand, the message passed from C,- to S, is the probability that S, has a certain value given all the messages passed to Cj in the previous round from bit nodes other than S,. Equations (3.26), (3.27), and (3.28) express the message-passing algorithm for the LDPC code. We use some representations for the related variables as follows where b - 0, 1: fp\ = Pin(Xi = b) 4 = M-C,->B,(".-,; = b) Vgf = Ppos\Xi = b) We can draw message-passing flows for LDPC codes as shown in Figure 3.9. The update equation (3.26) for a message from bit node B, to check node Cycan be rewritten as qba = cu pb Yl 4'forb - 0,1 (3.29) Chapter 3 LDPC Coding and Decoding 40 bit node Cj check node check nodes ' bit nodes Figure 3.9 Message-passing flows for an LDPC decoder The update equations (3.27) and (3.28) for the message from check node C,- to bit node Bt can be rewritten as follows: 4 = \{} + n h q " i'e P(/)K<} (3.30) i ' e (JO) u o (3.31) where hqtj = q°u - q)j = 1 - 2q\. Combining equations (3.29) and (3.8), the posterior probabilities are provided as follows: & - ci p * n ^ ' f o r b = ° ' 1 (3.32) / 6 <*("') After a number of iterations of the message-passing algorithm, it is expected that these estimate of the posterior probabilities converge to the ideal posterior probabilities w.r.t. the given LDPC code. For binary LDPC codes, it is sometimes advantageous to work with likelihoods, or even log-likelihoods instead of probabilities. Chapter 3 LDPC Coding and Decoding 41 B. Likelihood representations For a binary random variable u let L(p) in (3.33) be the likelihood ratio of u; L(p) = g « ^ = l - £ , (3.33) P(u =1) p where p = P(u = 1) for 0 < p < 1. Applying (3.29) to (3.33) yields L(qtj) = L(p,-)- fj L( ty ) . (3.34) / £ o(i) I{7} Combining (3.30), (3.31)and (3.32) gives 1 + Y\ 5aO L(Sij) = '•'6P(ft'f'} ; (3.35) 1- H bqej • ' e m Hi) L(qt) = L(Pi) • Y[ L(SU') • (3-36) j'e <x(i) C. Log-likelihood representations Define the log-likelihood ratio of u as LLR(p); then LL/?(p) = l n P ( M = Q ) = lnl—P. (3.37) P(w =1) p Application of (3:29) to (3.37) yields LLR(qij) = LLR(pi) + £ LLR(sif). (3.38) j'e cx(i)l<j'} e -e (I +x\ From (3.37), by using tanh(x) = — and 2atanh(x) = In I I for |x| < 1, we ex + e x V l - x / Chapter 3 LDPC Coding and Decoding 42 obtain that 1 - 2p = tanh -LLR(p) \ . 
Thus substitution of (3.30)-(3.32) into (3.38) yields In comparing these three representations for the message-passing algorithm, one sees that the log-likelihood representations as shown in equations (3.38)-(3.40) simplify the computation considerably. Equation (3.40) recalls that the intrinsic, extrinsic and posterior LLRs w.r.t. the LDPC decoder satisfy the equation (3.8). Based on this posterior probability LLRiqi), bit decisions can be made and a decoded word x is then able to be found. T The parity-check equations (Hx = 0 ) are used to evaluate the decoded word. If all parity checks are satisfied, then the iteration stops; otherwise, it continues to exploit the updated message-passing procedures until all parity checks are satisfied or the iteration counter reaches its limit and declares a decoding failure. If the multiplications in (3.39) can be replaced by additions, then the complexity of implementation for this algorithm will be greatly reduced. Any product of real numbers can be expressed as follows: >'e P0')l{') (3.39) LLR(qt) = LLR(pt)+ £ LLR(sif). (3.40) j'e <x(i) (3.41) Assume that for any x > 0 , a function f(x) is defined as follows: r(jc) = - l n [ t a n h ( j c / 2 ) ] = l n ^ L i . e - l (3.42) Chapter 3 LDPC Coding and Decoding 43 Then r(jc) = r\x) forx>0. (3.43) By using (3.41)-(3.42), equation (3.39) can be written as follows: LLR(Sij) = [ sgn(LLR(qrj)))-r\^ £ T(\LLR(qrj)\)J , (3.44) r s p y ) 10'} • ' i " 6 P 0 ' ) i { i } which consists of sign and amplitude parts. The sign part is given by jQ sgn(LLR(qn)). . . . i'e P(/) 10} The amplitude is determined from T^ ^ T(\LLR(qrj)\)j , which is dominated by a large i 'e P0')IO'} value in the summation, i.e., i f V T(\LLR(qrj)\)j = T[max(T(\LLR(qi.j)\)) \ . Note that the V J V i'e PO) 10} J i'e PO') K O function F(x) is a monotonically decreasing for x> 0, such thatrf r(\LLR(qrj)\)j ~ V <"e P0')IO) T[min(\LLR(qn)\) L Y e PO) Using (3.43), yields that r(T(x)) = x, thus (3.44) can be rewritten as LLR(Sij) = ( T7 sgn(LLR(qn))) • min(\LLR(qa)\). (3.45) ^ / (' 6 p(/) |{«} . i 'e PO) UO By using these "min" approximations, the decoding computation is reduced significantly with only a small degradation in performance. Such reduction in complexity is useful as LDPC codes find their way numerous applications, including for wireless communications and data storage channels. The unique advantages of these "min" approximations for decoding LDPC codes for DSL transmission are described in Chapter 4. Chapter 4 LDPC Codes in DSL Systems Based on the structure of DMT systems introduced in Chapter 2, and the concept of LDPC codes and the iterative message-passing algorithm for their decoding discussed in Chapter 3, we now deal with the issues related to the implementation of LDPC codes in DSL systems. Three aspects are emphasized: the construction of LDPC codes, the encoding process, and the decoding process. When the LDPC codes are designed properly, they can outperform conventional error-correction codes (ECC), such as turbo-codes. For example very long LDPC codes which are within 0.0045dB of the Shannon limit have been constructed by Chung [7]. The method was proposed by Gallager [14] to define a binary (n, p, y) LDPC code by an (m x n) sparse parity-check matrix H built by p sub-matrices of size [ — J x n, where m must be a multiple of p . Based on Gallager's work, MacKay [20] constructed binary LDPC codes without 4-cycles by randomly generating columns, each of which contains exactly p ones. 
These random construc-tions provide L D P C codes with reasonable distance properties. One criticism of randomly generated LDPC codes is that the complexity of the encoding process for such codes is quadratic in block length. Richardson et al. [27] have discussed the design of LDPC codes using a low-complexity encoding process. Alternative constructions are deterministic, which can be easy to describe using a small number of parameters. Eleftheriou et al. [9] proposed a scheme for deterministic construction of LDPC codes which relies on "array codes" [10]. Array codes are ECCs that have the capability to detect and correct burst errors using an algebraic decoder. These binary array codes have sparse parity-check matrices, and they are decodable as LDPCs, making them suitable for use in soft 4.1 Constructing an LDPC Code 44 Chapter 4 LDPC Codes in DSL Systems 45 iterative decoding schemes. Therefore, binary array codes provide the framework for determinis-tically constructing a class of LDPC codes. Let us define a regular LDPC code by three parameters: a prime number p and two integers p and y where y> p such that p, y<p . Let H, be the (pp xpy) matrix that is defined as follows: Hi = a I I I 2 ( V - 1 ) 2 ( Y - 2 ) a a a ( p - l ) ( Y - p + l ) a ( p - l ) ( Y - p + 2) I a I a I P-2 a I P - 1 2(p-3) 2 (p-2) a a a P - 1 a I Y - 2 a 2 ( T - 3 ) a ( P - 1 ) ( V - P ) (4.1) where I is the (p x p) identity matrix and a is a (p x p) permutation matrix indicating a single left/right cyclic shift, e.g., when p = 5 , a 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 or a = 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 The parameter p and y give the column weight and row weight of H[, respectively. Since there are no two rows that have overlapping ones in more than one position, matrix H[ is a sparse 4-cycle free parity-check matrix which can be decoded utilizing the message-passing algorithm. If we triangulize the matrix simply by replacing the lower-triangular elements of its (ppxpp) leftmost sub-block (dashed in (4.1)) by zeros. The resulting matrix H is created as follows: H = I I I I 0 I a a 0 0 I a 0 0 0 a I p-2 a I P-i 0 2(p-3) 2 (p-2) a a or a I ( Y - 2 ) a 2 (Y - 3 ) a ( P - 1 ) ( Y - P ) (4.2) Chapter 4 LDPC Codes in DSL Systems 46 where 0 is the (p x p) null matrix. The LDPC codes defined by H in (4.2) are irregular LDPC codes since both their row weights and column weights in H are not uniform. These codes have codeword length of n - py, number of parity checks m - pp and information block length k = p(y- p). An LDPC code with n < n is simply obtained by discarding the n-n rightmost columns of H. Then efficient encoding can be performed from H without computing the generator matrix of the code. An n-T tuple x is an LDPC code if and only if H • x = 0, where 0 is a pp x 1 null vector. Let the code T vector be x = where the pp x 1 vector p describes the parity part and p(y- p) x 1 vector s describes the systematic information part of code x. By utilizing advantages of the triangular structure and sparse density of the parity-check matrix H these LDPC codes are fast encodable. We propose to adopt this construct of LDPC codes in our simulations. 4.2 Encoding process for DMT systems As described in standard [1] of DMT-ADSL and a draft trial-use standard [8] of VDSL, the DSL systems impose the data into physical frames and superframe structure. Each frame is encoded in DMT as a single DMT symbol. 
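Returning briefly to the construction of Section 4.1, the triangularized parity-check matrix in (4.2) is simple enough to build directly. The sketch below uses the alpha^(i*j) reading of the block exponents in (4.1) and small assumed parameters (p, rho, gamma); it is an illustration of the described procedure, not the code used in our simulations.

/* Sketch of the deterministic array-code-based parity-check matrix (4.2).
 * p is prime and rho (column weight), gamma (row weight) satisfy
 * rho <= gamma <= p.  Block (i,j) of H is the p x p matrix alpha^(i*j),
 * where alpha is a single cyclic shift of the identity, except that
 * blocks below the diagonal in the leftmost rho block columns are set
 * to zero.  Sizes are fixed and small for illustration only.            */
#include <stdio.h>

#define P     7          /* prime                     */
#define RHO   3          /* column weight of H1       */
#define GAMMA 5          /* row weight of H1          */

static unsigned char H[P * RHO][P * GAMMA];

static void build_H(void)
{
    for (int i = 0; i < RHO; i++) {
        for (int j = 0; j < GAMMA; j++) {
            if (j < RHO && j < i) continue;          /* zero block, cf. (4.2) */
            int shift = (i * j) % P;                 /* block alpha^(i*j)     */
            for (int r = 0; r < P; r++) {
                int c = (r + shift) % P;             /* cyclic shift of I     */
                H[i * P + r][j * P + c] = 1;
            }
        }
    }
}

int main(void)
{
    build_H();
    for (int r = 0; r < P * RHO; r++) {
        for (int c = 0; c < P * GAMMA; c++) putchar(H[r][c] ? '1' : '.');
        putchar('\n');
    }
    return 0;
}

Shorter codes are obtained, as noted above, by discarding the rightmost columns of H, and the block-triangular parity part allows encoding by back-substitution without forming a generator matrix.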
The number of bits contained in each DMT symbol is determined during initialization and held unchanged at that value until the next initialization is implemented. The length of a DSL frame is determined by the adapted bit rate of the interface [29]. DMT symbols are encoded at the rate of 4000 bps, i.e., one complete transition per 250 ms. The actual amount of data that can be encoded in one transition is variable, depending on the line conditions (such as SNRs) which are found at initialization, the number of tones supported in the channel and the amount of data assigned to each tone. Tones located in noisy areas of the spectrum are suppressed, and the complexity of the symbols' constellations encoded in each tone needs to be set to optimize transmission over the channel. Each frame contains the data of one DMT symbol at one time. The data contained in each frame can be from both the fast data and Chapter 4 LDPC Codes in DSL Systems 47 interleaved data buffers, and can include as well as overhead bits used for error correction, administration and management of the DSL link [1, 3, 8, 29]. 4.2.1 LDPC encoding and constellation mapping In DSL systems, the data frame buffer consists of the fast and interleaved data buffer. Eleftheriou et al. [9] proposed a structure for LDPC encoding and symbol mapping which is redrawn in Figure 4.1. The data frame buffer is split into two parts to be denoted as buffers \\fu and V|/c. Before symbol mapping the bits assigned to buffer will be LDPC encoded, while the bits assigned into buffer \|/„ will remain uncoded. If the fast buffer is not present, the LSB of the interleaved data buffer will be assigned to the LSB of the buffer \\fc. Each codeword with k bits from buffer \j/ e is input to the LDPC encoder and the output of encoder is a codeword with n bits. W(b/2 - 1 ) • w Buffer Buffer Wx W 1 Outer 1 Gray a i • •• ! lbc-l ; ^ h w. 1 Inner Gray apper aved 1 r s V bincr "•Wo w apper Im LDPC S o ) a - - > Buffer • U u v(b/2-- i ) "o Encoder c o Outer Symb Ke 1 • * - * •.-'! > fc: 1 Gray Symb 3 CQ a - - > O , k) ra • ' • ' . V i ' J Q '< Inner Fast » to Gray Fast ... ^ > •—* J ECC Encoder Mapper Figure 4.1 Overall LDPC encoding and symbol mapping By using bit loading schemes, a total of b bits for each specific tone i is extracted from the buffer \\iu and from the output of LDPC encoder as follows: be bits are extracted from LDPC encoder output and denoted by {tbc_u ...,t0); Chapter 4 LDPC Codes in DSL Systems 48 •' bu = b - be bits are extracted from buffer V|/„ and denoted by (zbu-i, •••,z0), where b can be either 1 or a even integer between 0 and 14 inclusive, and 1. when b = 2,4,6,8, 10, 12, 14, • {tbc-u ••; t0) is partitioned as {tbc.u tbCv \tbc^u t0) = (fw |r v>,and • (Zbu-i, • z0) = (Zbu-i, • ••> Z ( 6 / 2 ) - f c C v | z(b/2)-bcv-1, •••> z0) = (zw\Zv) . Then the ex-tracted b = be + bu bits are input to the "Bits/Tone Combiner". The combiner creates two binary 6/2-tuples v = (v{b/2_x), . . . ,v 0) = < z v | 0 and w = ( w ( 6 / 2 _ i ) , ...,w0) = {zw | tw> that are inputted to the "symbol mapper". • be = bcv + bcw, and usually bcv - bcw 2. when b = 1, a bit r0 is extracted and assigned to v = (v0) = (t0), and word w is not used. 3. when b = 0 , no information bits are assigned to the corresponding tone (subchannel). For the first case, b is non-zero and even. 
Let L - 2b/2, then the two binary words v and w are independently assigned to two L-ary real valued symbols (PAM symbols) that are mapped to the real and imaginary components of the complex Q A M symbol, respectively. Each L-ary real PAM symbol is included in the set: C = {C, = 21-L+ 1} for / = 0, 1, . . . , L - 1. (4.3) Set C is effectively partitioned into 2bc" and 2bc" subsets, such that the minimum Euclid-ean distance between the symbols in each subset is maximized. The bcv LSBs of v and bcw LSBs of w, which are coded by LDPC coder, label the subsets of C according to a Gray coding rule ("inner Gray" as shown in Figure 4.1). The remaining MSBs, which are uncoded, label symbols within a subset following a separate Gray coding rule ("outer Gray"). The real and imaginary components of the complex symbol to be transmitted are generated by this "double Gray-coded mapper". If the length of an LDPC codeword is more than the length of one DMT symbol, then the buffers \|/„ and \|/c correspond to the contents of more than one data frame buffer. In other words, Chapter 4 LDPC Codes in DSL Systems 49 the "fast data buffer" in Figure 4.1 indicates the contents of successive fast data buffers, and "interleaved data buffer" indicates the contents of successive interleaved data buffer. 4.3 Decoding process for DMT systems In conventional DSL systems, bits are mapped to constellation points in the frequency domain. Each received DMT symbol in the frequency domain consists of a frame of complex numbers that are associated with all DMT subchannels. According to the definition of the Q A M constellation [1] used for signalling in each subchannel, we can select the constellation point closest in Euclidian distance to the received signal in each subchannel. Then, based on the constellation point being chosen the bit sequence can be determined by demapper. The detailed procedure has been described in Section 2.2.2. For LDPC decoding schemes in DSL, the decoder deploys the iterative message-passing algorithm based on a normal graph. From the received signals the probabilistic information for each bit of the bit sequence of the DMT symbol is extracted as the input message for the decoder. In our work, double Gray mapping is used for multilevel constellation mapping in the LDPC encoder. The LSBs labeled by "inner Gray" in Figure 4.1, which are all coded by LDPC, can be reliably decoded by utilizing the iterative message-passing algorithm on a normal graph. The uncoded MSBs are produced by simply applying a maximum likelihood estimate based on given values of corresponding LSBs. The study of LDPC codes in this thesis is applied mainly in DSL systems over two differ-ent channel detectors. The one is the ideal (flat) A W G N channel detector, the other is the frequency-selective AWGN channel detector. In either case, we utilize the outputs from channel detector as the initial message inputs to perform message-passing algorithm for LDPC decoding. 4.3.1 Ideal AWGN channel For an ideal AWGN channel, there is no attenuation distortion. A system with coding, mapping, and an ideal AWGN channel is shown in Figure 4.2, the LDPC encoder takes a message bit sequence u to an encoded bit sequence v; then v is mapped to a sequence of symbols x, for Chapter 4 LDPC Codes in DSL Systems 50 transmission to the channel. The received sequence of symbols y will be decoded by a channel detector. Following our discussion in Section 3.3.3, for any edge between nodes, the extrinsic probability w.r.t. 
one node gives the intrinsic probability w.r.t. the other node. In Figure 4.2, the message passing schemes are applied between the three nodes, i.e., the channel detector Nc, a demapper ND and LDPC decoder NL. u u / L D P C Encoder P A M / Q A M (Gray) Mapper w-0 A W G N Channel L D P C Decoder LLRlN[(v) •V, Demapper p x LLRNM .. Channel Detector •V, Figure 4.2 Diagram of a system with coding, mapping over an ideal AWGN channel Let us consider a transmitted DMT symbol, which corresponds to 512 uniformly spaced time samples without a cyclic prefix. Therefore, each element of transmitted signal vector x is expressed as x[i] for i = 0, 511. When this transmitted symbol passes through the AWGN channel, the noise signal vector w is added to the transmitted signal, where vector w is an i.i.d. vector of A W G N variables. Thus, each element of w, denoted as w[i] for i = 0, 511, is Gaussian noise with zero-mean and variance of a . Suppose y is the received signal vector, then we have y = x + w and y[i] = x[i] + w[i] for i = 0, ...,511. (4.4) In frequency domain, by converting (4.4) via DFT we can obtain Wk = Yk-Xk fork = 0, 511, (4.5) Chapter 4 LDPC Codes in DSL Systems 51 I = 0 . 1 = 0 „, fii w[i] ( .2nik\ 5Jt w[i] (2nik\ w[i] . (2%ik\ .. ,. 1=0 1=0 1=0 If Yk = Yl +jYIk is the received symbol associated to the &-th subchannel, then from (4.5) and (4.6) one concludes that all elements of the noise in frequency domain are also complex i.i.d. Gaussian variables. Each pair of real and imaginary components of the noise in each subchannel are independent of each other, and all has zero mean and variance of ak = a / 2 . Our purpose is to evaluate the probabilities of every bit in the coded binary tuples assigned to each subchannel. A. QAM constellation We can use a 2*-QAM modulation scheme in each subchannel, in which case each subchannel is assigned exactly b bits (called binary ^-tuples). For example, when b - 4, 16-Q A M results Figure 4.3 shows signal points used in the calculation of the a posteriori probabili-ties for the associated bits [1]. Let the binary 4-tuples associated with the 16-QAM constellation points be labeled B, = (b3, b2, bu b0) for i = 0, 1, 15. We would l ike to compute P(Z>, - 0\Yk - C,k) and P(bj - 11 Yk - C,k) for i = 0, 3 , where t,k is a received complex value. Regarding bit Z>o,P(&0 = 0| 7, = Q = £ P(b3,b2, bltb0\Yk = & ,Le., ~(b0 = 1) P(b0 = 0\Yk = C)k) = £ P(Bt\Yk = ^k). (4.7) B,\h = 0 i = 0 15 Assume that all constellation points are transmitted with equal probabilities; by using Chapter 4 LDPC Codes in DSL Systems 52 Figure 4.3 Diagram for calculation of a posteriori probabilities for 16-QAM Bayes' rule we obtain the following: £ P(Yk=^k\Bi) £ />(y*=C*l5<) p(&„ = o |y t = C*) = B,.|(6 0 = 0) i = 0, 15 fl,.|(60 = 0) i = 0, 15 fl,, 1=0 15 B„ i = 0 15 (4.8) where Bt is a binary 4-tuples associated with the i-th constellation point for 16-QAM, and function p(x\Bt) is probability density function ip.d.f.) conditioned on event B,•. Then p(fc0 = i | y t = ; t ) = i - p ( 6 0 = o |y t = Q . (4.9) Alternatively we can compute the a posteriori probabilities for each of the bits correspond-ing to any subchannel. When Yk - C,k = C,k+jC,[, the likelihoods for all valid constellation points are calculable. For any specific value of bit b,;e {0, 1}, we can always partition all constellation points into two subsets. 
For example, when b0 = 0 in Figure 4.3, an associated subset of points is Sbo = 0 = {B0, B2, B 4 , B6, B8, B 1 0 , 5 1 2 , 5 1 4} (eight points in the dash oval Chapter 4 LDPC Codes in DSL Systems 53 circles), while the other subset of points Sbo = l = {Bu B 3 , B5, B7, B 9 , Bu, B 1 3 , B 1 5 } is associated with b0 - 1. We can obtain the a posteriori probability of b0 = 0 by summing the likelihoods for the eight points in the first subset Sbo = 0 , and then dividing the sum of the likelihoods over all 16 constellation points. Consider the received symbol Yk = C,k - C,k + j^i centered on the constellation point which is associated with binary 4-tuples B, . If we take the noise signal as the two-dimensional Gaussian noise, the conditional p.d.f. of the transmitted symbol A, = AR + jA\ can be expressed as follows: f ( t f -A?) a + (Ci-4)*> 2not 2o? (4.10) Substitution of (4.10),into (4.8), yields exp P(b0 = 0\Yk = Ctk) = B;\ba = 0 i = 0, 15 (Cf -Af ) ' + (C1-A[) 2ol exp B„ i = 0, 15 2ol (4.11) Similarly, we can compute all values of P(b{ = 0 |y t = £ t ) and P{bt = l\Yk - C,k) for i = 0, 3. Returning to diagram in Figure 4.2, the channel detector can evaluate the log-likelihood ratio of extrinsic probabilities for each bit in those 4-tuples B, as follows: Chapter 4 LDPC Codes in DSL Systems 54 Z e x p 2ol UJGM = - l o g ' J ^ (4.12, fl,.|60=l j = 0, 15 2al The demapper does not change the probabilities for any bit Z?,. Thus the intrinsic probabil-ities and LLRs representations w.r.t. bit £>, inputted to LDPC decoder are given by equations (4.13) and (4.14), respectively: P*P>t = 0\Yk = Q = \ T (4-13) 1 + exp(LLRNt,(bi)) LLR^bd = LLR'fcbt) for / = 0, .... 3 . (4.14) By applying (4.12)-(4.14) for all 4-tuples associated with every subchannel we calculate the intrinsic probabilities and LLRs for each bit assigned to each tone, such that we can decode all bits by using the message-passing algorithm on the normal graph constructed by the LDPC parity-check matrix. Similarly, we can expand the application of the method described in this section to Q A M constellations with any size of 2b (such as 64-QAM, 256-QAM, 1024-QAM and so on). The number of constellation points increases exponentially with the number of bits associ-ated with said constellation; accordingly the computation of the intrinsic probabilities and LLRs for each bit of ^-tuples associated with the constellation points will involve the computation of a large number of likelihoods or log-likelihoods. Therefore, it is impractical to apply this method for large Q A M constellations. In Eleftheriou's proposal [9], the set of ^-tuples is split into two subsets of b/2 -tuples. The first subset of b/2 -tuples is mapped to the real part of the constellation point while the Chapter 4 LDPC Codes in DSL Systems 55 second subset maps to the imaginary part of the constellation point. When b>2, the two b/2 -tuples independently select two 2 -ary real symbols. We present an example to discuss this strategy in part B below. B. Gray-coded QAM constellation For b = 4, 16-QAM is applied. The figure to depict the calculation of the a posteriori probabilities for the bits associated with the constellation [9] is shown in Figure 4.4. (b3b2) A' 1 K -1 K -3 - jp -1000 9 1001 !n 1011 10 1010 1100 J 3 _ 1101 i l 5 _ 1111 _ _ J 4 _ 1T10 J 4 _ 01O0 >_ -0101 . 
_1 0111 J6 01'10 —1° — 0000 _ _ j 0001 $ _ 0011 - -0010 -3 -1 1 3 AR ^ 0 Al A3 AR (b,b0) Figure 4.4 Diagram for 16-QAM with Gray coding Since the binary 4-tuples denoted by (b3, b2, bu b0) can be split into two 2-tuples as (bu b0) and (b3, b2), we would like to compute P(bt = 0|f£ = P(b{ = l\YR = for i = 0, 1 and P(bt = 01Y[ = , P(b, = 111* = for i = 2, 3, where C,k and £'t are the real and imaginary components of the received symbol in the specific subchannel. 1 Consider bit b0, in which case P(b0 = 01 Y*k = C ) = X P { b ° = °>bl\YRk = . Assume 6, =0 next that all real components of constellation points are transmitted with equal probabilities. Chapter 4 LDPC Codes in DSL Systems 56 Using Bayes' rule then yields P(fco = 0 | l ? = # ) = ± = ± : • (4.15) = £|*0, b,) £ Z ^ = $K bx) b, b„ b, b„ Consider that the transmitted real component mapped by 2-tuple {bxb0) is Af for i - 0, ...,3 (refer to Figure 4.4), then • , Similarly, 27tO •exp k v 2a k J (4.16) Z e x p ( C f - A f ) p(fc 0 = o|y? = Cf) = 2 a Z e x p (C-Af) i' = 0 3 2 a * / (4.17) LLR(b0) = Z e x p 1 l = 0, 2 log ( C - A f ) 2a k J Z e x p R\2\ (Cf-Af) i = 1,3 2 a; (4.18) Z e x p ( C - A f ) 1 = 0, 1 2 a k J Z e x p ( C - A f ) 2 a (4.19) Chapter 4 LDPC Codes in DSL Systems 57 Z e x p ( t f -A f ) LLR(b,) = log i = 0,1 2 a? Z e x p i =2 ,3 2a? (4.20) Assume that the transmitted imaginary component mapped by 2-tuple (b3b2) is A\ for i - 0, ..., 3 (refer to Figure 4.4), then X e x P (Ci-Al) LLR(b2) = log i = 0, 2 2o; Z e x p 1 = 1,3 ' (Ci-A!)^ 2 a k J (4.21) Z e x p f (Cl-Ajf LLi?(Z73) = log i = 0, 1 2 a; Z e x p f ( C l - A l f i = 2, 3 2a k J (4.22) By applying (4.17)-(4.22) for all 4-tuples associated with every subchannel we calculate the intrinsic probabilities and LLRs for each coded bit assigned to each tone, such that we can decode all bits by using the message-passing algorithm. It is clearly evident that this 16-QAM constellation with Gray coding simplifies the computation both for the probability and the L L R of every bit associated with the received symbol without any degradation in performance. For large signal constellations of Q A M , say 1024-QAM, the length of the tuples is 6 = 1 0 . We use the strategy for LDPC encoding and constellation mapping described in Section 4.2.1. Let bcv = bcw = 2 , by applying similar methods expressed in this section for all coded LSBs {be = 4) of 10-tuples associated with every subchannel we calculate the intrinsic probabil-Chapter 4 LDPC Codes in DSL Systems 58 ities and LLRs for them, such that we can decode all LSBs by using message-passing algorithm on the normal graph constructed by parity-check matrix. Once the LSBs are produced the corresponding MSBs (bu = 6) are determined by maximum likelihood estimation. The implementation is employed independently to the real and imaginary parts of the received symbols. Thus the computation is much simplified. Until now we have discussed the detection and decoding for LDPC codes in DSL systems with a flat AWGN channel. If the channel transfer characteristic is not flat, all modulated subcar-riers will be attenuated and rotated by different amounts, but they will not affect each other (i.e., they are orthogonal). This situation we consider next. 4.3.2 Frequency-selective AWGN channel Recalling the introduction of discrete multitone modulation in Section 2.1, let the frequency-domain value of transmitted vector x be X , the value of received vector y be Y and the frequency-domain value of AWGN noise vector w be W. 
A frequency-domain diagram of Figure 2.4 is shown in Figure 4.5, where A is defined as following: 'AT- 1 0 0 A = 0 X, 'N-2 0 0 0 '0 (4.23) k = 0 impulse response of the channel. Then equation (2.9) can be rewritten as following: W, = Yi-XiXi for i = 0, 1, ...,N- 1. (4.24) Chapter 4 LDPC Codes in DSL Systems 59 w X A AWGN • W Channel Figure 4.5 Frequency-domain diagram for a distorted AWGN channel A. QAM constellation Each element of transmitted symbol in (4.24) is attenuated by a specific element of frequency-domain response of the channel. Notwithstanding, a 2* -QAM modulation scheme can still be used for this system. Detecting and decoding techniques as described in part A of Section 4.3.1 are applicable. For i = 0, 1, N- 1 let each of the frequency-domain complex values of the channel characteristic be expressed as Xt = Xf + jX\ and assume a transmitted symbol is A, - AR + jA\. Then W, = Yk-XAi = (tf-XRAR + X'iA'i)+j&'k-KA';-XKA,i). I *1AR iRAK (4.25) Thus, the p.d.f. of the two-dimensional Gaussian noise, centered on the constellation point A, can be expressed as p(Yk\At) = ; exp (ti-xfAUxW + ^-xtf-xUf^ 2 a (4.26) The corresponding formula w.r.t. equation (4.11) should be Chapter 4 LDPC Codes in DSL Systems 60 exp P(fc0 = 0 | n = Q = B,|fc 0 = 0 i = 0 15 2o exp ( ^ - ^ A f + ^ ) 2 + ( c ; - ^ r - x f A ^ (4.27) B,., i = 0 15 2al Similarly, we can compute all values of P(bt = 0\Yk = t,k) and P(fr, = l\Yk = t,k) for i - 0, ..., 3 . The channel detector can evaluate the log-likelihood ratio of extrinsic probabilities for each bit in 4-tuples associated with a subchannel as follows: X exp i a r ' iR- l^ z ^ LLR'jib,) = log fl,l£>0 = 0 ; = o is 2o f ( C - ^ A f + ^ ) 2 + (C [ - ^ A f - X f A ; ) X exp B , | A 0 = 1 ; = o 15 2c* (4.28) For all bits 6, the intrinsic probabilities and LLRs representations w.r.t. bit bt input to the LDPC decoder are given by equations (4.13) and (4.14), respectively. By applying (4.12)-(4.14) for all 4-tuples associated with every subchannel, we can calculate the intrinsic probabilities and LLRs for all bits assigned to whole DMT symbol. B. Gray-coded QAM constellation Estimating the probability or L L R of each coded bit of the b bits using the same method as in part B of Section 4.3.1 will not be directly applicable here. Recall that the set of i-tuples is split into two subsets of b/2 -tuples. The first subset of b/2 -tuples is mapped to the real part of the constellation point while the second subset of b/2 -tuples is mapped to the imaginary part of the constellation point. After the each complex element of the transmitted symbol is multiplied by a complex value A,,, a problem is created, i.e., the real and imaginary parts of the channel noise are not mutually independent, as shown in equation (4.25). The real and imaginary parts of the symbols cannot be processed independently as described previously. Chapter 4 LDPC Codes in DSL Systems 61 w X A AWGN Y h A"1 • w Channel IP -> Y' Figure 4.6 Frequency-domain zero-forcing equalizer for a distorted AWGN channel Let A - 1 be a transfer function defined below in equation (4.29): A" 1 l/XN_x 0 0 1 / V a 0 (4.29) 0 ... l/X0 the received frequency-domain signal Y can be transformed into Y' as shown in Figure 4.6, i.e., Y' = A"'Y. (4.30) Substitution of (2.9) and (4.29) into (4.30) yields Y'= X, + W' for i = 0, 1 i V - 1, (4.31) V are obtained as follows: X-Wf + X'W1, where W' - . Using X, = X? 
+ jX'j and Wt = Wf + jW'(, the real and imaginary parts of W' X: Re(Wi') Im{W') " i " i 1 , v i " i Xfw'j-X'jWf (Xff + rt)2 (4.32) Since Wf and W\ are i.i.d. Gaussian variables with same zero mean and variance Chapter 4 LDPC Codes in DSL Systems 62 o,2 = o2/2, we find from (4.32) that Re{W') and Im{W') have same zero mean and variance o',a = f/2 (4.33) Thus, under the function transformation A " 1 , which is usually called frequency-domain zero-forcing equalization (ZFE), Yk and ok are transformed to Yk' and a'k respectively. The method detailed in part B of Section 4.3.1 is applicable to obtain the posteriori probabilities and LLRs of all coded bits of b -tuples assigned to the DMT symbol. Since the set of frequency-domain values A.,- = Xf+jX'i for i - 0, 1, ...,N-l are known for the fixed DSL channel, equations (4.32) and (4.33) imply that the receiver consists of a set of independent processors operating in parallel. Chapter 5 Simulation Models and Results In this thesis, we consider the use of LDPC codes in DSL systems that conventionally use the RS-TCM FEC coding schemes. The performance of different coding schemes is derived and analyzed. The term "coding gain" is usually employed to quantify the superior performance of one FEC scheme over another with the same transmission power, channel and BER. In this chapter, we present the BER vs. Eb/N0 performance curves for the different FEC coding schemes with various bit allocation tables (BATs). As described in Chapter 2 the BAT is determined during initialization according to channel characteristics [17] and background noises. Our simulations use the Simulink platform. To simulate DSL systems which consist of many function blocks, we need to create some S-Function blocks to implement our algorithms. To reduce the run time, our S-Function blocks are programmed in C programming language. First, we assume the channel is an ideal AWGN channel, in which case the frequency spectrum is a flat over the channel passband. Next, we choose a frequency-selective channel with distortion and interference. The performance of the different coding schemes in DSL systems with 256 DMT over these channels will be determined by system simulations. 5.1 System Models To compare the performance of different FEC coding schemes, we choose three different models for DSL systems to implement simulations as shown in Figure 5.1. In Figure 5.1, in order to view different coding schemes we omitted many other blocks that work in DSL standard systems [1, 8]. As shown in Figure 5.1 (a) and (b), the randomly generated input bit sequence is split into the fast and interleaved path (refer to the Figure 2.5). The data in the two paths are independently scrambled and encoded by RS outer coder [1]. Our proposal for using LDPC codes in DSL systems as shown in Figure 5.1 (c) does not require the 63 Chapter 5 Simulation Models and Results 64 Noise RS Coding • Q A M Mod • Channel • Q A M Dem • RS Decoding (a) RS coding scheme . Noise R S + T C M Q A M Channel Q A M T C M + R S Coding W Mod • W Dem w Decoding (b) RS-TCM coding scheme adopted by current DSL standards Noise LDPC Q A M Channel Q A M LDPC Coding w (Gray) • W Dem Decoding (c) a proposed LDPC coding instead of current coding in DSL Figure 5.1 Three system models for simulations RS outer coder; the data in the fast and interleaved path are input directly to the LDPC encoder (refer to Section 4.2.1). 
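In model (c), the receiver follows Section 4.3.2: each tone is zero-forcing equalized, the noise variance is rescaled according to (4.33), and per-dimension LLRs for the Gray-mapped coded bits are formed as in (4.17) to (4.20) before LDPC decoding. A minimal single-tone sketch of this front end is given below; the channel value, noise variance, exact Gray label assignment and all names are assumptions made only for the illustration and do not reproduce the thesis code.

/* Illustrative receiver front end for one subchannel: the zero-forcing
 * step Y' = Y / lambda of (4.29)-(4.31), the noise-variance scaling of
 * (4.33), and per-dimension LLRs for a Gray-labelled 4-PAM component
 * (one axis of 16-QAM), in the spirit of (4.17)-(4.20).                 */
#include <math.h>
#include <stdio.h>
#include <complex.h>

/* LLR = ln(P(bit=0)/P(bit=1)) of one coded bit of a Gray-labelled 4-PAM
 * symbol.  levels[] are the PAM amplitudes, labels[] their 2-bit labels. */
static double pam4_bit_llr(double y, double var, int bit)
{
    const double levels[4] = {-3.0, -1.0, 1.0, 3.0};
    const int    labels[4] = {0x0, 0x1, 0x3, 0x2};    /* Gray: 00,01,11,10 */
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 4; i++) {
        double d = y - levels[i];
        double w = exp(-d * d / (2.0 * var));
        if ((labels[i] >> bit) & 1) den += w;          /* bit = 1 */
        else                        num += w;          /* bit = 0 */
    }
    return log(num / den);
}

int main(void)
{
    double complex lambda = 0.6 - 0.3 * I;             /* tone response (assumed) */
    double complex Y      = 0.4 + 1.9 * I;             /* received symbol         */
    double         var    = 0.25;                      /* noise variance per dim  */

    double complex Yeq = Y / lambda;                   /* zero-forcing, (4.30)    */
    double g      = creal(lambda) * creal(lambda) + cimag(lambda) * cimag(lambda);
    double var_eq = var / g;                           /* scaled variance, (4.33) */

    for (int b = 0; b < 2; b++) {
        printf("LLR(real bit %d) = %+.3f\n", b, pam4_bit_llr(creal(Yeq), var_eq, b));
        printf("LLR(imag bit %d) = %+.3f\n", b, pam4_bit_llr(cimag(Yeq), var_eq, b));
    }
    return 0;
}

Because the per-tone noise terms remain independent after equalization, the same routine can be applied to all 256 subchannels in parallel, and the resulting LLRs form the intrinsic messages for the LDPC decoder.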
Before presenting our simulation results, we describe some parameters such as bit alloca-tion table, code rate, SNR, and others. 5.2 Basic Parameters 5.2.1 Bit allocation table The bit allocation table (BAT) is determined primarily by channel characteristics and background noises (refer to Sections 1.3 and 2.1) and the specifications of the DMT DSL Chapter 5 Simulation Models and Results 65 standards. The BAT is formed as an initialization activity. In our simulations, we choose 256 subchannels in a DMT symbol; the BAT specifies the number of encoded bits for every subchan-nel of a DMT symbol. Our study focuses on increasing the coding gain for appropriate FEC coding schemes. Some restrictions imposed on the BAT by DSL standards will be neglected. Therefore, the BAT has two types of existence. For ideal AWGN channel, DMT becomes OFDM, all subchannels of a DMT symbol have the same bandwidth and bit rate. For a frequency-selective channel the bit rate for each subchannel may be different. 5.2.2 Data rate, bit error rate and signal to noise ratio Let Af, represent the number of information bits in a single DMT symbol, and let NT be the total number of the bits transmitted in each DMT symbol. Then the overall code rate for the FEC scheme is In our simulation system, we choose 256 subchannels for each DMT symbol; the outputs of Q A M mapper are 256 complex values in the frequency domain. After transformation of Hermitian symmetry and then IFFT, there are 5 = 512 real samples for ideal AWGN channel and S = 512 + v real samples for frequency-selective AWGN channel where v is the length of the cyclic prefix (or guard interval). Assume that Es is the average energy per subchannel symbol in time domain, N0/2 is the two-sided power spectral density of the AWGN process, and Eb is the average energy per single bit of information. The variance of the AWGN is: c 2 = NQ/2 . The relationship between Es and Eb is N, Therefore, we obtain Chapter 5 Simulation Models and Results 66 - j W l O l o ^ . (5.3) where Eb/N0 is commonly called the signal to noise ratio (SNR). The probability of a bit error is estimated using bit error rate (BER) measurements. In the following simulations we present the performance of different FEC schemes by comparing the curves of BER vs. Eb/N0. A l l simulations are implemented on the Simulink platform. Although we utilize different FEC coding schemes in the various DSL systems, the initialization procedures for all of them are almost identical. We describe some pivotal steps as follows: t. 1. Set up a system model including the channel model. 2. Load the BAT determined by SNRs for all subchannels and background noises. 3. Set appropriate total number of information bits based on the specifications of FEC schemes. • For the RS-TCM coding scheme, all information bits are split into the fast and inter-leaved paths by a pre-setting fraction. Set the parameters in every function block in both paths, these are CRC checks, RS coding, and Scrambler blocks. In addition, the parameters in the interleaving block need to be set in the interleaved path. According to the BAT and standards of DSL [1, 8], the parameters can be set for Wei's 16-state 4-D T C M coding block. • For the LDPC coding scheme, an outer coder to increase the performance is not re-quired. Set up the parameters for the LDPC parity-check matrix which is generated us-ing the construction technique by Eleftheriou and Olcer [9]. 4. 
For frequency-selective AWGN channel models, the frequency response spectrum for the channel needs to be measured for all DMT subcarriers of the channel, which can also be de-rived if the BAT is available. 5. Set the value of output power at the transmitter and the value of variance for AWGN noise. 5.3 Initialization Chapter 5 Simulation Models and Results 67 6. For LDPC coding scheme, the limit of iterations for decoding algorithm is set to be 30. 7. Al l simulations continue until the bit counter reaches 108 or at least 100 errors are detected. 5.4 Simulation Results 5.4.1 Results for DMT systems over ideal AWGN channel In this section, we present the simulation results in the form of BER vs. SNR curves. The performance of the different coding schemes for three system models as shown in Figure 5.1 will be compared. In each of the 256 subchannels of a DMT symbol, various sizes of the Q A M constellations will be used; we choose 2*-QAM constellation for b = 4, 6, 8, 10 corresponding to flat b -bits BAT. That means there are b bits assigned to each of 256 tones that are represented by 2b - Q A M symbols. In each of Figures 5.2 (A)-(D), curves (b), (c), and (d) in each figure are BER vs. Eb/N0 curves for three coding schemes (refer to Figure 5.1) which have approximately the same overall code rates in any figure. Curve (a) in each figure shows performance for corresponding uncoded 2*-QAM modulation scheme in a 256-subchannel DMT system over an A W G N channel and curve (e) in each figure presents the Shannon limit 4 for such scheme over the AWGN channel. Consider Figure 5.2 (A) with 16-QAM as an example to explain the coding schemes corresponding to the curves (b), (c), and (d). Curve (b) presents the performance of (64,48) RS code used in both fast and interleaved paths such that there are 32 redundant bytes per DMT symbol. The coding scheme corresponding to curve (c) concatenates (64, 56) RS coding used in both fast and interleaved paths with a (1024, 892) T C M coding block. There are 16 redundant bytes in RS code and 132 redundant bits in T C M code. For the curve (d), we use (1022, 4, 14) LDPC coding scheme with no outer RS code. To identify the performance of different coding schemes, we choose a common value of 4 For determination of Shannon limit see Appendix A. Chapter 5 Simulation Models and Results — * — (a) Uncoded —&~ (b) RS<64,48) (C) RS(64,56)-(-TCM(1024,892) (d) LDPC(1022,4,14) (lter=30) (e) Shannon limit (A) 16-QAM — ( a ) Uncoded (b) RS(96,80) - O - (C) RS(100,92)+TCM(1536,1404) (d) LDPC(1022,4,14) (lter=30) (e) Shannon limit (B) 64-QAM (a) Uncoded (b) RS(128,112) (c> RS<120,112)+TCM(2O48,1916) (d> LDPC(1022,4,14) (lter=30) (©) Shannon limit •• (C) 256-QAM —+— (a) Uncoded (b) RS (160,144) -e- (c) RS(152,144)+TCM(2560,2428) (d) LDPC(1022,4,14) (lter=30) (e) Shannon limit 20 25 Eb/No (dB) (D) 1024-QAM Figure 5.2 Performance curves of coding schemes over AWGN channel with 2 -QAM: (A) 16-QAM; (B) 64-QAM; (C) 256-QAM; (D) 1024-QAM Chapter 5 Simulation Models and Results 69 10 6 for bit error rate; we then observe the corresponding values of Eb/N0 from Figures 5.2 (A)-(D). The resulting difference in coding gains is summarized in Table 5.1. Table 5.1 Gaps and gains for different coding schemes over flat AWGN channels ( ^ = i o - 6 ) Shannon limit (e) (dB) Gap (a) to (e) (dB) Gap (b) to (e) (dB) Gap (c) to (e) (dB) Gap (d)to(e) (dB) Gain (c) to (a) (dB) Gain (d)to(a) (dB) Gain (d)to(c) (dB) (A) 16-QAM 3.3 11.2 8.70 7.60 5.35 3.60 5.85 2.25 (B) 64-QAM 7.5 11.5. 
8.80 7.45 5.25 4.05 6.25 2.20 (C) 256-QAM 12.1 11.6 9.15 7.75 5.20 3.85 6.40 2.55 (D) 1024-QAM 17.0 11.7 9.40 7.35 5.25 4.35 6.45 2.10 From the values in last column of Table 5.1, at a BER of 10"6, the LDPC coding scheme shows a gain of about 2.1-2.55 dB over that of the RS-TCM coding scheme, which in turn has a gain of about 4 dB over the uncoded Q A M modulation. Considering the LDPC codes with 16-QAM constellation over A W G N channel, the performance curves of two LDPC codes are shown in Figure 5.3. At the BER of 10"6, the (2044,6,28) LDPC coding obtains 1 dB more gain than the (1022,3,14) LDPC coding under the same code rate. Therefore, the longer the length of LDPC code, the better the performance. I I -e- (a) (1022,3,14) L D P C Coding 256-DMT over A W G N channel (b=4) I | Eb/No (dB) Figure 5.3 Comparison of performance curves of different length of LDPC coding schemes with 16-QAM over AWGN channel Chapter 5 Simulation Models and Results 70 5.4.2 Results for DMT systems over frequency-selective AWGN channel For the DSL channel, the channel impulse response and noise power spectral density are relatively fixed over any communication session. The direct measurement of SNRs for each of the subchannels is employed during initialization and synchronization of DSL modems. By using the water-pouring algorithm based on given SNRs for each tone we can obtain a specific BAT for a specific DMT channel. Given a BAT, we can obtain the discrete SNRs and frequency responses of the channel by inverse transformation of bit-loading as shown below in equations (5.4) and (5.5): SNRi = 10 • log((2 i ' - 1) x y) and (5.4) H, = IQSNR/W for i = 1,2,..., 256, (5.5) where Z>, denotes the number of bits assigned to the i-th subchannel, and Ht denotes the frequency response at the z'-th tone. Equations (5.4) and (5.5) apply to an ADSL system with BER = 10~7, margin5 = 6dB, and "gap" 14. 0 [3]. In this thesis, the BAT is assumed to be available for the DMT channel, such as BAT-1 or BAT-2 as shown in Figure 5.4. It is evident that the channel with BAT-2 has much more serious distortion than the channel with BAT-1. Before we compare with the performance of the systems over channels with BAT-1 and BAT-2, we first study the impact of using different cyclic prefix length on the system over the channel with BAT-1. According to the analysis for DMT techniques in Section 2.1.3, the length v of cyclic prefix is a key variable. When v is set to a large value, the accuracy of channel estimation is high but the effective energy per bit is lowered. If v is set to a small value with a high energy per bit value, the accuracy of channel model is low. Therefore, the choice of v is traded off against these To guarantee a particular service to customers, DSL service providers require that data and error rates be achieved with all anticipated crosstalk and noise levels increased by some margin m. For A D S L and V D S L 6dB (m = 4) is the accepted value. Chapter 5 Simulation Models and Results 71 (a) BAT-1 ~i 266 r V b, = 1644 V) m o c o E < two factors. 50 100 150 (b) BAT-2 100 150 256 Subchannels 200 250 Figure 5.4 Two examples of BAT-1 and BAT-2 for 256 DMT channels To avoid the degradation of ISI in a DMT system the choice of v is determined directly by the time-domain delay of the channel filter. Usually v should be chosen equal to or greater than the number of channel filter taps. In our simulations, the length of guard interval v is always set equal to the number of taps of the channel filter. 
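A small numerical sketch of the inverse bit-loading relations (5.4) and (5.5) is given below. The BAT values are made up for the example, and reading (5.5) as a relative amplitude response of the form 10^(SNR_i/20) is an assumption made only for this illustration; only the relative shape across tones matters here.

/* Sketch of the inverse bit-loading relations (5.4)-(5.5): recover the
 * per-tone SNR and a relative channel magnitude from a bit allocation
 * table, using the 14.0 dB "gap" (including the 6 dB margin).           */
#include <math.h>
#include <stdio.h>

#define NTONES 8                       /* 256 in the thesis; 8 here      */

int main(void)
{
    const int bat[NTONES] = {2, 4, 6, 8, 10, 8, 4, 0};   /* assumed BAT   */
    const double gap_db  = 14.0;
    const double gap_lin = pow(10.0, gap_db / 10.0);

    for (int i = 0; i < NTONES; i++) {
        if (bat[i] == 0) { printf("tone %d: unused\n", i); continue; }
        double snr_db = 10.0 * log10((pow(2.0, bat[i]) - 1.0) * gap_lin); /* (5.4) */
        double h_mag  = pow(10.0, snr_db / 20.0);                         /* (5.5) */
        printf("tone %d: b=%2d  SNR=%6.2f dB  |H|=%g\n", i, bat[i], snr_db, h_mag);
    }
    return 0;
}

In the simulations the same relations are evaluated for all 256 entries of BAT-1 or BAT-2 to obtain the per-tone SNRs and channel responses.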
In the pre-load function of the Simulink simulation model, we choose v = 16, 32, 64 and 100 to establish channel filters. The corresponding frequency responses of channel filters are shown in Figure 5.5. From Figure 5.5 we find it is the most accurate to describe measured SNRs by the frequency response of channel filter with 100 taps. With the same BAT-1 shown in Figure 5.4 (a) the performance curves of DMT system over the above four models of channel filters are given in Figure 5.6. In standard of A D S L the guard interval of cyclic prefix is v = 32 [1], which Chapter 5 Simulation Models and Results 72 ,i j. H _ , , u • o 1 ' • ' 1 • J 0 50 100 1'50 200 250 0 50 100 150 200 250 Subchannels (tones) Subchannels (tones) Figure 5.5 Frequency responses of the channel filter with different length of taps corresponds to the curve (b). At the BER of 1CT6 the gap is 0.38dB between (b) and (d). In later simulations we choose a 100 tap delay filter to study the performance of different coding i i i ---<>••- (a) 16-taps channel with power rate of 0.968 jO _i i i —©- (b) 32-taps channel with power rate of 0.939 1 1 1 (c) 64-taps channel with power rate of 0.887 ! —4*— (d) 100-taps channel with power rate of 0.835 15.5 16 16.5 17 17.5 18 18.5 19 19.5 Eb/No (dB) Figure 5.6 Comparison of performance curves of DMT systems over the channel filter with different length of taps Chapter 5 Simulation Models and Results 73 schemes. Consider that the DMT system operates over the channel implemented by a 100 tap digital filter. Three FEC coding schemes with approximately the same overall code rate of 0.832 are chosen. The performance curves for BAT-1 and BAT-2 are shown in Figure 5.7; curve (a) presents the performance of uncoded Q A M modulation scheme; curve (b) corresponds to the operation of (102,86) RS coding in both fast and interleaved paths; curve (c) presents behavior of concatenat-Figure 5.7 Comparison of performance of DMT systems with different coding schemes over the channel filter with 100 taps (a) BAT-1; (b) BAT-2 Chapter 5 Simulation Models and Results 74 ing (189,173) RS code through uninterleaved path to a'(1644,1512) T C M code; curve (d) expresses the action of (938,4,14) LDPC coding scheme; and curve (e) indicates the Shannon limit at this code rate for the AWGN channel6. Considering the difference in Eb/N0 in Figure 5.7, the gaps of the various coding schemes from the Shannon limit and the gains from uncoded Q A M are summarized in Table 5.2 for BER = 1(T6. From the values in last column of Table 5.2, for the BAT-1 channel, at a BER of 1(T6 the LDPC coding scheme shows a gain of about 2.8dB over the RS-TCM coding scheme which in turn has a gain of about 4.86 dB over the uncoded dynamically-sized Q A M modulation; for the BAT-2 channel, at a BER of 10"6 the LDPC coding scheme shows a gain of about 4.75dB over the RS-TCM coding scheme which in turn has a gain of about 4.5dB over the uncoded dynamically-sized Q A M modulation. Table 5.2 Gaps and gains of different coding schemes over two different channels (BAT-1 and BAT-2) Shannon limit (e) (dB) Gap (a)to(e) (dB) Gap (b) to (e) (dB) V Gap ; (c)to(e) (dB) • Gap (d)to(e) (dB) Gain (c) to (a) (dB) Gain > (d)to(a) (dB) Gain , (d)to(c) ; . (dB) : ••; BAT-1 12 14.86 11.7 10 7.2 4.86 7.66 2.8 BAT-2 12 16.75 13.5 12.25 7.5 4.50 9.25 4.75 Compare these results to those described in the Section 5.4.1 for systems over a flat AWGN channel. 
The Q A M modulation and LDPC coding strategy used with the frequency-selective channel model is effective in avoiding degradation of performance inherent in other FEC coding schemes, especially for channels like that where BAT-2 is used to overcome the channel's severe distortion. 5.4.3 Effect of using simplified LDPC decoding Based on all same conditions for the (938, 4, 14) LDPC coding scheme, we compare its performance in decoding using two different arithmetic operations in the iterative message-Consider the computational complexity of the Shannon limit for frequency-selective A W G N channel (refer to Appendix A) and comparison of performance of different coding schemes working on the same systems. The curve of Shannon limit for A W G N channel shown here is only used for a relative reference. Chapter 5 Simulation Models and Results 75 passing algorithm. One operation uses strict sum-product update operation, while the other adopts the "min" approximation operation (described in Section 3.3.5). 16.5 17.5 18 18.5 Eb/No (dB) Figure 5.8 Comparison of performance of (938, 4, 14) LDPC coding by using two different arithmetic operations for iterative message-passing decoding algorithm The simulation results are shown in Figure 5.8. When BER is less than 4 x 10" the gap between two curves is less than O.ldB; thus the performance of "min" approximation is worse by less than O.ldB in comparison with that of strict sum-product update. A degradation of 0.1 dB is negligible. However, the "min" operation greatly simplifies the computation for iterative probabilistic decoding, especially for a long LDPC codes. Therefore, for small enough of BER, the "min" approximation is usable and has significant effect in reducing computational decoding complexity. This reality is of substantial practical importance. Chapter 6 Summary and Conclusions 6.1 Summary In Chapter 2 we summarized the DMT-DSL system and the RS-TCM scheme. The princi-ple of DMT modulation and methods of coding and decoding for Wei's 16-state 4-D convolu-tional T C M codes were introduced. • Chapter 3 gave an introduction to Gallager LDPC codes and to the probabilistic decoding technique of these LDPC codes. The message-passing algorithm is expressed as probabilistic iterative decoding of binary block codes which are based on a normal graph. Three representa-tions of the messages for performing the message-passing algorithm on L D P C codes were discussed. Finally, a simplified 'min' approximation of sum-product update for iterative decoding was derived. The implementation of LDPC codes in DMT-DSL systems was discussed in Chapter 4. From LDPC encoding to symbol mapping we presented an overall LDPC encoder which was proposed by Eleftheriou [9]. Our study emphasized effective encoding and decoding of LDPC codes over channels with severe distortion. The channel model included AWGN and a linear filter was characterized by its frequency response. To simplify modulation Double Gray mapping was applied from a square Q A M to double PAMs. At the detector of receiver, a simple frequency domain zero-forcing equalizer was used. By combining corresponding demapping and probabilistic iterative decoding we created a means to use the LDPC FEC scheme on DMT-DSL systems operating on frequency-selective AWGN channels. Theoretically, almost the same performance using error correction on DMT systems is obtainable over various different channels. Our simulations in Chapter 5 were implemented using the Simulink software package. 
Chapter 6 Summary and Conclusions

6.1 Summary

In Chapter 2 we summarized the DMT-DSL system and the RS-TCM scheme. The principle of DMT modulation and the methods of coding and decoding for Wei's 16-state 4-D convolutional TCM code were introduced.

Chapter 3 gave an introduction to Gallager's LDPC codes and to the probabilistic decoding technique for these codes. The message-passing algorithm was presented as probabilistic iterative decoding of binary block codes based on a normal graph. Three representations of the messages used when running the message-passing algorithm on LDPC codes were discussed. Finally, a simplified "min" approximation of the sum-product update for iterative decoding was derived.

The implementation of LDPC codes in DMT-DSL systems was discussed in Chapter 4. From LDPC encoding to symbol mapping, we presented the overall LDPC encoder proposed by Eleftheriou [9]. Our study emphasized effective encoding and decoding of LDPC codes over channels with severe distortion. The channel model consisted of AWGN together with a linear filter characterized by its frequency response. To simplify modulation, double Gray mapping was applied to decompose a square QAM constellation into two PAM constellations. At the receiver's detector a simple frequency-domain zero-forcing equalizer was used. By combining the corresponding demapping with probabilistic iterative decoding, we obtained a way to use the LDPC FEC scheme in DMT-DSL systems operating on frequency-selective AWGN channels. In principle, nearly the same error-correction performance is obtainable on DMT systems over a variety of different channels.

Our simulations in Chapter 5 were implemented using the Simulink software package. Many existing blocks were employed, including the Bernoulli Random Binary Generator, CRC Generator/Detector, Convolutional Interleaver/Deinterleaver, Scrambler & FEC/FEC & Descrambler, FFT/IFFT, QAM Modulator/Demodulator, and AWGN Channel, among others. Some new functional blocks were created, including coding/decoding for TCM and LDPC, Gray mapping/demapping, and tone ordering. All simulations were carried out systematically.

6.2 Conclusions

In this thesis we have shown how LDPC codes can work effectively in DMT-DSL systems. The FEC scheme based on LDPC codes and the message-passing algorithm operating on the normal graph was studied. Our scheme is applicable to both ideal and frequency-selective AWGN channels. Under severe channel distortion in particular, the LDPC coding scheme gives significantly better error-correction performance than the other FEC schemes. Our experimental results support the following conclusions:

• At a BER of 10^-6, the RS-TCM scheme yielded the following coding gains over the corresponding uncoded QAM schemes: 4.0 dB for the flat AWGN channel, 4.86 dB for the BAT-1 channel, and 4.5 dB for the BAT-2 channel.

• At a BER of 10^-6, the LDPC coding scheme provided more than 2 dB of coding gain over the RS-TCM FEC scheme, which is the current standard for xDSL systems; specifically: (a) for ideal AWGN channels the coding gain is around 2.1 to 2.55 dB; (b) for the BAT-1 channel the coding gain is 2.8 dB; (c) for the BAT-2 channel the coding gain is 4.75 dB.

• At a BER of 10^-6, the coding gain of LDPC coding relative to uncoded QAM is much greater over the worse channels (such as BAT-2) than over the better channels (such as BAT-1). In other words, the performance curve of the LDPC coding scheme over the BAT-2 channel is almost the same as that over the BAT-1 channel, whereas the RS-TCM scheme suffers more than 2 dB of performance degradation when the channel changes from BAT-1 to BAT-2.

• The "min" approximation used for probabilistic decoding of longer LDPC codes greatly reduces computation with very small degradation in performance.

• Iterative decoding of LDPC codes does not produce output burst errors when the decoder fails; this behavior differs from that of the Viterbi decoder used for decoding TCM. An outer RS code can therefore be omitted when LDPC codes are used for FEC.

6.3 Future Work

We propose the following directions for future research:

• use LDPC codes of length sufficient to achieve near-capacity performance;
• design an effective equalizer to further improve the performance of LDPC codes;
• combine filtered multitone (FMT) modulation with LDPC codes in DSL systems;
• apply LDPC codes to other applications such as space-time coded (STC) systems and CDMA systems.

Appendix A The Calculation of the Shannon Limit

Let C represent the capacity of the AWGN channel under consideration, in bits per second, and let P denote the average signal power at the receiver. The information capacity theorem [16] states:

• The information capacity of a continuous channel of bandwidth B hertz, perturbed by AWGN of power spectral density N_0/2 and limited in bandwidth to B, is given by

    C = B \log_2\left(1 + \frac{P}{N_0 B}\right) \text{ bits per second.} \qquad (A1.1)

If R_b is the information transfer rate in bits per second, then

    R_b \le C. \qquad (A1.2)

The inequality (A1.2) applies to error-free transmission (i.e., P_E = 0).
If the error probability P_E is allowed to be nonzero, then the relationship between the maximum information rate and the channel capacity becomes

    R_b \le \frac{C}{1 - H(P_E)}, \qquad (A1.3)

where H(P_E) denotes the binary entropy function evaluated at the probability P_E.

A1.1 AWGN channel

For 256-tone DMT systems over the flat AWGN channel, let R_s be the sampling rate in DMT symbols per second. Substitution of (5.2) yields

    P = R_s N_b E_b. \qquad (A1.4)

Since each subchannel in a 256-tone DMT system occupies a 4 kHz bandwidth, the bandwidth of the channel is B = 256 x 4 kHz = 1.024 MHz. Therefore, combining equations (A1.1), (A1.3) and (A1.4), and replacing the inequality in (A1.3) with an equality, yields

    \frac{E_b}{N_0} = \frac{256 \times 4000}{R_s N_b}\left(2^{\,R_b(1 - H(P_E))/(1.024\ \mathrm{MHz})} - 1\right). \qquad (A1.5)

Equation (A1.5) gives the Shannon limit curve for the AWGN channel.

A1.2 Frequency-selective AWGN channel

For 256-tone DMT systems over a frequency-selective AWGN channel, let R_s again be the sampling rate in DMT symbols per second. For k = 1, 2, ..., 512, let H(f_k) be the k-th component of the channel frequency response and let P_k be the average received signal power in the k-th subchannel, which is assigned b_k bits within a bandwidth of \Delta f. The information capacity of the k-th subchannel is

    C_k = \Delta f \log_2\left(1 + \frac{P_k}{N_0 \Delta f}\right), \qquad (A1.6)

where

    P_k = |H(f_k)|^2 R_s b_k E_b. \qquad (A1.7)

The total capacity of the overall channel is approximately

    C \approx \sum_{k=1}^{N_T} C_k = \Delta f \sum_{k=1}^{N_T} \log_2\left(1 + \frac{P_k}{N_0 \Delta f}\right). \qquad (A1.8)

Substituting into (A1.3), with the inequality replaced by an equality, then yields

    R_b\left[1 - H(P_E)\right] = \Delta f \sum_{k=1}^{N_T} \log_2\left(1 + \frac{|H(f_k)|^2 R_s b_k E_b}{N_0 \Delta f}\right). \qquad (A1.9)

The definitions of N_b, N_T, E_b and E_s are given in Section 5.2.2. In this way the Shannon limit curve for DMT systems over a frequency-selective AWGN channel can, in principle, be obtained; however, evaluating it from equation (A1.9) is a rather involved process.
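Numerically, (A1.5) is a one-line evaluation and (A1.9) can be solved for E_b/N_0 by a simple one-dimensional search. The sketch below illustrates both calculations under invented assumptions: the bit loading N_b, the per-tone gains |H(f_k)|^2, and the allocation b_k are placeholders rather than the thesis's measured values, so the printed numbers are purely illustrative.

```python
import numpy as np

def binary_entropy(p):
    """H(p) in bits."""
    p = np.clip(p, 1e-300, 1 - 1e-16)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# ---- Flat AWGN channel, equation (A1.5) -----------------------------------
B   = 256 * 4000.0     # total bandwidth in Hz (256 tones of 4 kHz)
R_s = 4000.0           # DMT symbol rate in symbols per second
N_b = 1500.0           # information bits per DMT symbol (illustrative only)
R_b = R_s * N_b        # information rate in bits per second
P_e = 1e-6             # allowed bit-error probability

ebno_flat = (B / (R_s * N_b)) * (2.0 ** (R_b * (1 - binary_entropy(P_e)) / B) - 1.0)
print("Shannon-limit Eb/N0, flat AWGN:      %.2f dB" % (10 * np.log10(ebno_flat)))

# ---- Frequency-selective AWGN channel, equation (A1.9) ---------------------
df     = 4000.0
Hk_sq  = np.linspace(1.0, 0.05, 256)                        # placeholder |H(f_k)|^2
b_k    = np.clip(np.round(np.log2(1 + 50 * Hk_sq)), 0, 15)  # placeholder bit loading
R_b_fs = R_s * b_k.sum()
target = R_b_fs * (1 - binary_entropy(P_e))

def achievable_rate(ebno):
    """Right-hand side of (A1.9) for a trial Eb/N0 (linear scale)."""
    return df * np.sum(np.log2(1.0 + Hk_sq * b_k * (R_s / df) * ebno))

lo, hi = 1e-3, 1e6      # bracket the root, then bisect for the limiting Eb/N0
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if achievable_rate(mid) < target else (lo, mid)
print("Shannon-limit Eb/N0, selective AWGN: %.2f dB" % (10 * np.log10(hi)))
```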
Glossary of Acronyms

ADSL     Asymmetric Digital Subscriber Line
ANSI     American National Standards Institute
AWGN     Additive White Gaussian Noise
BAT      Bit Allocation Table
BRI      Basic Rate Interface
BSC      Binary Symmetric Channel
CO       Central Office
CDMA     Code Division Multiple Access
DFT      Discrete Fourier Transform
DMT      Discrete Multitone
DSL      Digital Subscriber Line
ECC      Error-Correction Codes
FEC      Forward Error Correction
FEXT     Far-End Crosstalk
FMT      Filtered Multitone modulation
GF       Galois Field
GUI      Graphical User Interface
HDSL     High bit-rate Digital Subscriber Line
IDFT     Inverse Discrete Fourier Transform
i.i.d.   independent and identically distributed
ISI      Intersymbol Interference
ISDN     Integrated Services Digital Network
LDPC     Low-Density Parity-Check
LLR      Log-Likelihood Ratio
LSB      Least Significant Bit
MCM      Multicarrier Modulation
ML       Maximum Likelihood
MSB      Most Significant Bit
MSED     Minimum Squared Euclidean Distance
NEXT     Near-End Crosstalk
OFDM     Orthogonal Frequency Division Multiplexing
ONU      Optical Network Unit
PAM      Pulse Amplitude Modulation
POTS     Plain Old Telephone System
PSD      Power Spectral Density
QAM      Quadrature Amplitude Modulation
RFI      Radio Frequency Interference
RS code  Reed-Solomon code
SNR      Signal to Noise Ratio
STC      Space-Time Coded
TCM      Trellis Coded Modulation
UTP      Unshielded Twisted Pair
VDSL     Very high bit-rate Digital Subscriber Line
VOD      Video On Demand
ZFE      Zero-Forcing Equalization
w.r.t.   with respect to
xDSL     A generic term which applies to all DSL technologies
XTalk    Crosstalk

Selected Bibliography

[1] American National Standard for Telecommunications - Network and Customer Installation Interfaces, "Asymmetric digital subscriber line (ADSL) metallic interface," T1E1.4/98-007R5, Dec. 1998.
[2] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," Proc. IEEE International Communications Conference, 1993, pp. 1064-1070.
[3] J. A. C. Bingham, ADSL, VDSL, and Multicarrier Modulation, Palo Alto, California: John Wiley & Sons, 2000.
[4] J. A. C. Bingham, "Multicarrier modulation for data transmission: An idea whose time has come," IEEE Communications Magazine, May 1990, pp. 5-14.
[5] D. Burshtein and G. Miller, "Expander graph arguments for message-passing algorithms," IEEE Transactions on Information Theory, Vol. 47, 2001.
[6] P. S. Chow and J. M. Cioffi, "Bandwidth optimization for high speed data transmission over channels with severe intersymbol interference," Proc. IEEE GLOBECOM '92, Vol. 1, Dec. 1992, pp. 59-63.
[7] S. Chung, G. D. Forney, and T. Richardson, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Communications Letters, Vol. 5, No. 2, Feb. 2001, pp. 58-60.
[8] Draft Trial-Use Standard for Telecommunications - Interface Between Networks and Customer Installation, "Very-high bit-rate digital subscriber line (VDSL) metallic interface," T1E1.4/2000-013R3, Nov. 2000.
[9] E. Eleftheriou and S. Olcer, "G.gen: G.dmt.bis: G.lite.bis: Proposed text on LDPC coding for inclusion in draft recommendation," ITU Telecommunication Standardization Sector, Temporary Document SC-065, Aug. 2001.
[10] J. L. Fan, "Array codes as low-density parity-check codes," Proc. 2nd Int'l Symposium on Turbo Codes and Related Topics, Brest, France, Sept. 2000, pp. 543-546.
[11] G. D. Forney, Jr., "Codes on graphs: Normal realizations," IEEE Transactions on Information Theory, Vol. 47, Feb. 2001, pp. 520-548.
[12] G. D. Forney, Jr., "The Viterbi algorithm," Proc. IEEE, Vol. 61, Mar. 1973, pp. 268-278.
[13] M. P. C. Fossorier, "Iterative reliability-based decoding of low-density parity check codes," IEEE J. Select. Areas in Communications, Vol. 19, May 2001, pp. 908-917.
[14] R. G. Gallager, "Low density parity check codes," IRE Transactions on Information Theory, 1962, pp. 21-28.
[15] R. G. Gallager, Low Density Parity Check Codes, MIT Press, Cambridge, MA, 1963.
[16] S. Haykin, Communication Systems, 4th Edition, McMaster University, ON: John Wiley & Sons, 2000.
[17] I. Kalet, "The multitone channel," IEEE Transactions on Communications, Feb. 1989, pp. 119-124.
[18] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Transactions on Information Theory, Vol. 47, Feb. 2001, pp. 498-519.
[19] M. Luby, M. Mitzenmacher, A. Shokrollahi, and D. Spielman, "Improved low-density parity-check codes using irregular graphs," IEEE Transactions on Information Theory, Vol. 47, No. 2, Feb. 2001, pp. 585-598.
[20] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, Vol. 45, March 1999, pp. 399-431.
[21] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electronics Letters, Vol. 32, Aug. 1996, pp. 1645-1655.
[22] D. J. C. MacKay, R. J. McEliece, and J. F. Cheng, "Turbo decoding as an instance of Pearl's 'belief propagation' algorithm," IEEE J. Select. Areas in Communications, 1997.
[23] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Publishers, 1988.
[24] A. Peled and A. Ruiz, "Frequency domain data transmission using reduced computational complexity algorithms," Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '80), Vol. 5, Apr. 1980, pp. 964-967.
[25] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Transactions on Information Theory, Vol. 47, No. 2, Feb. 2001, pp. 619-637.
[26] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Transactions on Information Theory, Vol. 47, No. 2, Feb. 2001, pp. 599-618.
[27] T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Transactions on Information Theory, Vol. 47, No. 2, Feb. 2001, pp. 638-656.
[28] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, Vol. 27, 1948, pp. 379-423, 623-656.
[29] T. Starr, J. M. Cioffi, and P. Silverman, Understanding Digital Subscriber Line Technology, Upper Saddle River, NJ: Prentice Hall, 1999.
[30] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Transactions on Information Theory, Vol. 27, Sept. 1981, pp. 533-547.
[31] P. P. Vaidyanathan, "Filter banks in digital communications," IEEE Circuits and Systems Magazine, Vol. 1, No. 2, Second Quarter 2001, pp. 4-25.
[32] L. F. Wei, "Trellis-coded modulation with multidimensional constellations," IEEE Transactions on Information Theory, Vol. IT-33, No. 4, July 1987, pp. 483-501.
[33] N. Wiberg, Codes and Decoding on General Graphs, Ph.D. Thesis, Linköping University, Sweden, 1996.
[34] S. B. Wicker, Error Control Systems for Digital Communication and Storage, Englewood Cliffs, NJ: Prentice Hall, 1995.
