CHANNEL ESTIMATION TECHNIQUES FOR OFDM SYSTEMS IN DISPERSIVE TIME-VARYING CHANNELS by XUEGUI SONG B.Eng., Northwestern Polytechnical University, P. R. China, 2003 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The College of Graduate Studies (Electrical and Computer Engineering) THE UNIVERSITY OF BRITISH COLUMBIA (OKANAGAN) May 2009 c© Xuegui Song, 2009 Abstract Coherent modulation is more effective than differential modulation for orthogonal frequency di- vision multiplexing (OFDM) systems requiring high data rate and spectral efficiency. Channel estimation is therefore an integral part of the receiver design. In this thesis, two iterative maximum likelihood based channel estimation algorithms are proposed for an OFDM system in dispersive time-varying channels. A multipath channel model is proposed for OFDM uplink transmission in macrocellular systems. The multipath fading channel is modeled such that the channel state can be determined by estimating the unknown channel parameters. A second-order Taylor series expansion is adopted to simplify the channel estimation problem. Based on the system model, an iterative maximum likelihood based algorithm is first proposed to estimate the discrete-time chan- nel parameters. The mean square error performance of the proposed algorithm is analyzed using a small perturbation technique. Based on a convergence rate analysis, an improved iterative maxi- mum likelihood based channel estimation algorithm is presented using a successive overrelaxation method. Numerical experiments are performed to confirm the theoretical analyses and show the improvement in convergence rate of the improved algorithm. ii Table of Contents Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi List of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Thesis Outline and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2 Wireless Channels and OFDM Technique . . . . . . . . . . . . . . . . . . . . . . . . 6 2.1 Wireless Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.1.1 Large-Scale Fading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.1.2 Small-Scale Fading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2 OFDM Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2.1 Basic Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.2.2 OFDM Implementation Using FFT . . . . . . . . . . . . . . . . . . . . . 11 2.2.3 Advantages and Drawbacks of OFDM . . . . . . . . . . . . . . . . . . . 13 iii Table of Contents 3 ML-based Channel Estimation for OFDM Systems in Dispersive Time-Varying Chan- nels . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . 15 3.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.2 Formulation of Channel Estimation Problem . . . . . . . . . . . . . . . . . . . . 16 3.3 An Iterative Channel Estimation Algorithm . . . . . . . . . . . . . . . . . . . . . 22 3.4 Performance Analysis of the Channel Estimation Algorithm . . . . . . . . . . . . 23 3.5 Numerical Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . 28 4 Convergence Study of ML-based Channel Estimation Algorithms . . . . . . . . . . 35 4.1 Convergence Performance Analysis of the Proposed Algorithm . . . . . . . . . . 35 4.2 An Improved Channel Estimation Algorithm . . . . . . . . . . . . . . . . . . . . 41 4.3 Numerical Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . 41 5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 5.1 Summary of Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 5.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 Appendices A Derivation of the Discrete CIR Model . . . . . . . . . . . . . . . . . . . . . . . . . . 56 B Derivation of (3.20) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 C Derivation of (3.30c) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 D Derivation of the Absolute Value for the Largest Eigenvalue of M−1η Nη . . . . . . . 63 E Matlab Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 iv List of Tables 3.1 Algorithm 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 4.1 Algorithm 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 v List of Figures 2.1 Signal power fluctuation versus range in wireless channels. . . . . . . . . . . . . . 7 2.2 Basic principle of OFDM modulation. . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Basic principle of OFDM demodulation. . . . . . . . . . . . . . . . . . . . . . . . 12 2.4 OFDM modulation by means of IFFT processing. . . . . . . . . . . . . . . . . . . 13 2.5 OFDM demodulation by means of FFT processing. . . . . . . . . . . . . . . . . . 14 3.1 An MSE comparison for three different approximation orders. . . . . . . . . . . . 20 3.2 An MSE performance comparison between Algorithm 1 and Moose’s estimator for f̂l when fl = 0.05, δh = 1% and δ f = 1%. . . . . . . . . . . . . . . . . . . . . . . 30 3.3 An average MSE performance comparison between the simulated results and the theoretical values for ĥl of Algorithm 1 when δh = 1% and δ f = 1%. . . . . . . . . 31 3.4 An MSE performance comparison between the simulated results and the theoreti- cal values for f̂l of Algorithm 1 when fl = 0.02, δh = 1% and δ f = 1%. . . . . . . 32 3.5 An MSE performance comparison between the simulated results and the theoreti- cal values for f̂l of Algorithm 1 when fl = 0.04, δh = 1% and δ f = 1%. . . . . . . 33 3.6 An MSE performance comparison between the simulated results and the theoreti- cal values for f̂l of Algorithm 1 when fl = 0.06, δh = 1% and δ f = 1%. . . . . . . 34 4.1 An average MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for ĥl , when δh = 1% and δ f = 1%. . . . . . . . . 
43 4.2 An MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for f̂l , when fl = 0.02, δh = 1% and δ f = 1%. . . . . . 44 vi List of Figures 4.3 An MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for f̂l , when fl = 0.04, δh = 1% and δ f = 1%. . . . . . 45 4.4 An MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for f̂l , when fl = 0.06, δh = 1% and δ f = 1%. . . . . . 46 4.5 An average number of iterations comparison between Algorithm 1 and Algorithm 2 when η = 4/3, δh = 1% and δ f = 1%. . . . . . . . . . . . . . . . . . . . . . . . 47 4.6 The average MSE performance versus number of iterations for ĥl of Algorithm 1 and Algorithm 2 when η = 4/3 and SNR = 25. . . . . . . . . . . . . . . . . . . . 48 4.7 The average MSE performance versus number of iterations for f̂l of Algorithm 1 and Algorithm 2 when η = 4/3 and SNR = 25 dB. . . . . . . . . . . . . . . . . . 49 vii List of Acronyms Acronyms Definitions 3GPP Third Generation Partnership Project BER Bit-Error Rate BPSK Binary Phase-Shift Keying CIR Channel Impulse Response CRB Cramer-Rao Bound DAB Digital Audio Broadcasting DFT Discrete Fourier Transform DVB Digital Video Broadcasting FFT Fast Fourier Transform ICI Inter-Carrier Interference IDFT Inverse Discrete Fourier Transform IFFT Inverse Fast Fourier Transform ISI Intersymbol Interference LOS Line-of-Sight LS Least Square LTE Long Term Evolution ML Maximum Likelihood MMSE Minimum Mean Square Error MSE Mean Square Error OFDM Orthogonal Frequency Division Multiplexing P/S Parallel-to-Serial PAPR Peak-to-Average Power Ratio viii List of Acronyms RF Radio Frequency S/P Serial-to-Parallel SC-FDMA Single-Carrier Frequency Division Multiple Access SNR Signal-to-Noise Ratio SOR Successive Overrelaxation WLAN Wireless Local Area Network WMAN Wireless Metropolitan Access Network ix Acknowledgments I am deeply grateful to my supervisor Dr. Julian Cheng for his enthusiasm, guidance, advice, encouragement, and support. I will continue to be influenced by his rigorous scholarship, clarity in thinking, and professional integrity. I would like to thank Dr. Zheng Du from Huawei Technology Co. and Dr. Norman C. Beaulieu from University of Alberta, Edmonton for their feedbacks, constructive comments, and valuable suggestions on my research work. I would like to express my thanks to Dr. Paramjit Gill for his willingness to serve as my external examiner. I would also like to thank Dr. Richard Klukas and Dr. Jonathan Holzman for serving on the committee. I really appreciate their valuable time and constructive comments on my thesis. I owe many people for their generosity and support during my two-year study at the University of British Columbia. I would like to thank my dear colleagues Chiun-Shen Liao, Junfeng Zhao, James Jianchen Sun, Mingbo Niu, Nick Kuan-Hsiang Huang, Ning Wang, Xian Jin, and Yeyuan Xiao for sharing their academic experiences and constructive viewpoints generously with me dur- ing our discussions. I would also like to thank my dear friends Eric Jinglong Ma, Junning Li, Pan Luo, Yi Zhang, and Yu Hui for sharing in my excitement and encouraging me when I was in frustration during this journey. I would also like to thank all my friends in Vancouver, Beijing, Chengdu, and Xi’an. Finally, I would like to thank my parents for their patience, understanding and support over all these years. 
All my achievements would not have been possible without their constant encourage- ment and support. x Chapter 1 Introduction 1.1 Background and Motivation The earliest prototype of wireless communications can be traced back to the pre-industrial age. In those years, information was transmitted over line-of-sight (LOS) distances using smoke signals, torch signaling, flashing mirrors, signal flares, or semaphore flags. However, true wireless com- munications, where information is transmitted in terms of electromagnetic waves through complex physical mediums, started from the first experiments with radio communication by M. G. Marconi in the 1890s. Since then, wireless communication systems have been developing and evolving with a rapid pace. In the intervening hundred years, many types of wireless communication systems have flourished, and often later disappeared. By far the most successful wireless communication system has been the cellular system. It is reported that the number of mobile phone subscribers is on the order of 4 billion currently and will rise to 5.6 billion in 2013 worldwide. Undoubtedly, cellular phones have radically changed the life of people and become a critical business tool in most countries. In the last two decades, with the explosive growth of wireless communications, wireless ser- vices have migrated from the conventional voice-centric services to data-centric services. There- fore, one main target for modern wireless communications is to provide the possibility for high end-user data rates. One straightforward way to meet this requirement is to increase the trans- mission bandwidth because it is well known that the achievable end-user data rates are limited by transmission bandwidth and transmitter power. However, the transmitted signal in a wireless com- munication system with wider transmission bandwidth can incur increased corruption due to time dispersion (or equivalently frequency selectivity) of the radio channel. To counteract such signal 1 1.2. Literature Review corruption, researchers have proposed techniques such as receiver-side equalization, multicarrier transmission and specific single-carrier transmission. Orthogonal frequency division multiplexing (OFDM) is a typical multicarrier transmission scheme which can increase the overall transmission bandwidth without suffering from increased signal corruption due to radio channel frequency selec- tivity. In OFDM systems, data is transmitted in parallel by modulating a number of closely-spaced orthogonal subcarriers, thereby converting a frequency-selective channel into multiple flat fading subchannels. Moreover, intersymbol interference (ISI) can be eliminated by inserting a guard in- terval between two consecutive OFDM symbols. With these attractive properties OFDM has been adopted by wireless standards such as digital audio broadcasting (DAB), digital video broadcasting (DVB), wireless local area network (WLAN), and wireless metropolitan access network (WMAN) [1–3]. There are several key challenging problems associated with OFDM systems: performance op- timization, time and frequency synchronization, channel estimation, and peak-to-average power ratio (PAPR) reduction [4]. In this thesis, we will focus on the channel estimation problem and study channel estimation techniques for an OFDM system operating in dispersive time-varying channels. 1.2 Literature Review Channel estimation is a challenging problem in wireless systems because mobile radio channels are highly dynamic. 
To avoid channel estimation, one can adopt a differential modulation technique instead of coherent modulation. However, such a system typically results in lower data rates and can incur a 3-4 dB penalty in signal-to-noise ratio (SNR) [5]. For OFDM systems which aim to provide high data rate and spectral efficiency, coherent modulation is more effective; hence, channel estimation is often required as an integral part of the receiver design. Channel estimation for OFDM systems in slow fading channels has been widely studied [6– 8]. However, those channel estimation techniques were developed under the assumption that the channel is constant over one OFDM symbol, an assumption that may not hold in some mobile applications. Channel estimation for OFDM systems in fast fading channels has been studied in 2 1.2. Literature Review the following work. In [9], Li et al. presented a minimum mean square error (MMSE) estimator by exploiting both time-domain and frequency-domain correlations of the frequency response of rapid dispersive fading channels. In a related work, Moon and Choi introduced a channel estimation algorithm by adopting a Gaussian interpolation filter or a cubic spline interpolation filter [10]. However, algorithms in both [9] and [10] require knowledge of channel statistics, which may not be available. To make the estimation algorithm independent of the channel statistics, Li discussed in [11] a robust implementation of the MMSE pilot-symbol-aided estimator, which does not depend on channel statistics. In [12], Zhao and Huang proposed a method employing low-pass filtering in a transform domain to estimate the channel transfer function for the pilot subcarriers. Then a high- resolution interpolation is adopted to obtain the channel transfer function for non-pilot subcarriers. In [13], Chang and Su discussed a pilot-aided channel estimation method for OFDM systems operating in Rayleigh fading channels. The channel responses at pilot subcarriers are estimated using a least square (LS) estimator and then interpolated to non-pilot subcarriers using a 2-D regression surface function. This estimator also does not require channel statistics. Recently, Merli and Vitetta proposed a maximum likelihood (ML) based algorithm for joint carrier frequency offset and channel impulse response (CIR) estimation for OFDM systems in [14]. The authors adopted a second-order Taylor series to simplify the estimation problem and derived an approximate closed- form solution to the estimation problem. However, it was derived under the assumption that the channel does not change over one OFDM symbol. In [14], the frequency offset is assumed to be constant for all multipaths implying that the Doppler frequency shift is neglected for individual multipath. In this thesis, we study the channel estimation problem for OFDM systems in dispersive time- varying fading channels. Two ML-based channel estimation algorithms which do not require chan- nel statistics are proposed. Unlike the existing solutions in the literature, our channel estimation algorithms are based on a unique channel model which is particularly appropriate for OFDM up- links in a macrocellular system. In this channel model, the channel state can be determined by estimating a number of unknown channel parameters which do not change over several OFDM symbols. This property of our developed channel model allows us to estimate the channel pa- 3 1.3. 
Thesis Outline and Contributions rameters once and then employ them to determine the channel state for the next several OFDM symbols. 1.3 Thesis Outline and Contributions This thesis consists of five chapters. Chapter 1 presents background knowledge of wireless com- munication developments and technologies. In modern wireless communication, a high end-user data rate is one main objective of system design and therefore wider transmission bandwidth is desired. However, an increased transmission bandwidth will increase the frequency selectivity of a radio channel and thus cause severe corruption to the transmitted signal. This problem can be solved by using multicarrier transmission techniques such as OFDM. Chapter 2 provides detailed technical background for the entire thesis. Firstly, fading character- istics of wireless propagation environments are presented and classified into large-scale fading and small-scale fading from the view of propagation terrain. Secondly, the basic concept of OFDM and its implementation are explained. Finally, we list the advantages and drawbacks of OFDM compared with a conventional single-carrier transmission. In Chapter 3, a channel estimation problem is formulated based on a discrete-time CIR model. This channel model is particularly appropriate for OFDM uplinks in a macrocellular system. The channel model developed here allows the CIR to vary within one OFDM symbol. To facilitate the channel estimation problem we adopt a truncated Taylor series expansion to approximate the inter- carrier interference (ICI) caused by Doppler frequency shifts. We find that a second-order Taylor series expansion is sufficient for our estimation problem. Then an iterative ML-based algorithm is proposed to estimate the discrete-time channel parameters. Our proposed channel estimation al- gorithm is particularly useful for an OFDM system which has already compensated the frequency offset due to local oscillator mismatch. The mean square error (MSE) performance of the pro- posed algorithm in large SNR regions is analyzed using a small perturbation technique, and then demonstrated by simulated results. In Chapter 4, the convergence performance of the proposed algorithm is analyzed. Based on the analysis, an improved fast converging iterative channel estimation algorithm is presented 4 1.3. Thesis Outline and Contributions using a successive overrelaxation (SOR) method to accelerate the proposed algorithm in Chapter 3. Simulated results are also presented to demonstrate the MSE and convergence performance for the improved algorithm. Chapter 5 summarizes the whole thesis and lists our contributions in this work. In addition, some future work related to our current research is suggested. 5 Chapter 2 Wireless Channels and OFDM Technique In this chapter, we will present some background knowledge concerning wireless channels and the OFDM technique. We first address the fading characteristics of wireless propagation envi- ronments, then introduce the basic concept and fast Fourier transform (FFT) implementation of OFDM. Finally, advantages and drawbacks of the OFDM technique are compared with conven- tional single-carrier modulation. 2.1 Wireless Channels In a wireless communication system, the transmitted signal typically undergoes attenuation and distortion over the transmission path. The overall effect on the transmitted signal caused by the transmission path is one main source of system performance degradation in any wireless system. 
To address and compensate for the attenuation and distortion caused by the transmission path, researchers have studied wireless channels extensively and proposed different channel models [5], [15–18]. These effects, which are mainly due to path loss, shadowing, scattering and reflecting effects caused by unpredictable objects between the transmitter and receiver, can primarily be categorized into large-scale fading and small-scale fading (see Fig. 2.1). In this section, we will briefly describe both types of fading. 2.1.1 Large-Scale Fading Large-scale fading, which is caused by path loss of the signal, can be characterized as a function of transmitted distance and shadowing effects of large obstacles such as buildings and hills. This phenomenon happens when the mobile moves over a large distance (of the order of the cell size), 6 2.1. Wireless Channels S i g n a l L e v e l ( d B ) Antenna Displacement Small-scale fading Pass loss Large-scale fading Figure 2.1: Signal power fluctuation versus range in wireless channels. and is typically frequency independent [18]. In an ideal LOS channel with no obstructions between the transmitter and the receiver, the received signal power Pr is given by Pr = Pt ( λ 4pid )2 GtGr (2.1) where Pt is the transmitted signal power, λ is the wavelength, d is the distance from the transmitter to the receiver, and Gt , Gr are respectively the power gains of the transmit and receive antennas. The power attenuation PL = Pr/Pt is also referred to as free space path loss and it is obvious from (2.1) that the received signal power Pr is inversely proportional to the square of the distance d between the transmit and receive antennas. However, in real transmission environments, the received signal power Pr does not obey this free space path loss model and it varies randomly due to the terrain. Usually a ray tracing method can be used to trace the signal propagation through a wireless channel. Unfortunately, the free space model and ray tracing method cannot model complex propagation environments accurately. Based on empirical measurements, some empirical models for path loss in typical wireless environments have been developed to predict the average received signal power as the transmitted distance d varies, i.e., the Okumura model, the Hata 7 2.1. Wireless Channels model, the European Cooperative for Science and Technology (COST) model, and the piecewise linear (multi-slope) model [17], [19]. In addition to path loss, the transmitted signal is also subject to shadowing, which is caused by changes in reflecting surfaces and scattering objects along the transmission path. The shadowing causes random attenuation to the transmitted signal. Typically, the log-normal shadowing model, which has been confirmed empirically to accurately model the variation in received power, is used to characterize this random attenuation. 2.1.2 Small-Scale Fading Small-scale fading is due to the constructive and destructive addition of different multipath compo- nents introduced by the channel between the transmitter and receiver. Therefore, it is also referred to as multipath fading. This phenomenon usually occurs over a distance of several signal wave- lengths and is frequency dependent. Since the transmitted signal over a multipath fading channel experiences randomness we must characterize multipath fading channels statistically. 
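As a simple illustration of this statistical behaviour, the following MATLAB sketch sums a handful of multipath components with randomly drawn complex gains and arrival angles and plots the resulting envelope against antenna displacement. The number of paths, the carrier frequency and the random gains are illustrative assumptions, not parameters taken from this thesis.

L      = 6;                                       % number of multipath components (assumed)
fc     = 5e9;  lambda = 3e8/fc;                   % carrier frequency and wavelength
gain   = (randn(L,1) + 1j*randn(L,1))/sqrt(2*L);  % random complex path gains
theta  = 2*pi*rand(L,1);                          % arrival angle of each path
d      = linspace(0, 10*lambda, 1000);            % displacement over ten wavelengths
phase  = 2*pi*cos(theta)*d/lambda;                % per-path phase shift versus displacement
r      = abs(sum(gain.*exp(-1j*phase), 1));       % envelope of the summed components
plot(d/lambda, 20*log10(r));
xlabel('displacement (\lambda)'); ylabel('envelope (dB)');

Deep fades appear within fractions of a wavelength as the components drift in and out of phase, which is the rapid fluctuation labelled small-scale fading in Fig. 2.1.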
Frequency selectivity and the time-varying nature, which depend on the relative relation between parameters of the transmitted signal (i.e., signal bandwidth and symbol duration) and parameters of multi- path fading channels (i.e., delay spread and Doppler spread), are two important characteristics of multipath fading channels. In the following, we will briefly describe these two characteristics of multipath fading channels. Depending on the relative relation between transmitted signal bandwidth and delay spread (or equivalently coherence bandwidth BC), multipath fading channels can be categorized into frequency-nonselective (flat) fading channels and frequency-selective fading channels. The pa- rameter coherence bandwidth is the reciprocal of the delay spread which is defined as the span of the delays of duplicates of the transmitted signal arriving at the receiver via different paths. When the transmitted signal bandwidth is small compared with the coherence bandwidth BC, the channel is called frequency-nonselective or flat fading channel. For a flat fading channel, the spectral com- ponents of the transmitted signal are affected in a similar manner so that the multipath components are not resolvable. Otherwise, if the transmitted signal bandwidth is large compared with the co- herence bandwidth BC, the channel is said to be frequency-selective. For a frequency-selective fad- 8 2.2. OFDM Technique ing channel, the spectral components of the transmitted signal are affected by different amplitude gains and phase shifts. In a frequency-selective fading channel, different multipath components with delay differences significantly exceeding the inverse of the transmitted signal bandwidth are resolvable. Typically, such a frequency-selective fading channel can be modeled as a tapped delay line filter with time-variant tap coefficients. In a similar way, the multipath fading channel can be categorized as slow fading or fast fading based on the relative relation between symbol duration and Doppler spread (or equivalently coher- ence time TC). The parameter coherence time, which measures the period of time over which the channel effect on the transmitted signal does not change, is defined as the reciprocal of the Doppler spread. The fading channel is said to be slow fading if the symbol duration is small compared with the channel coherence time TC; otherwise, it is considered to be fast fading. In a slow fading chan- nel, the transmitted signal is affected by the same amplitude gain and phase shift over at least one symbol duration, while the amplitude gain and phase shift vary within one symbol duration in a fast fading channel. According to the discussion above, we have four types of multipath channel, i.e., slow flat fad- ing channel, slow frequency-selective fading channel, fast flat fading channel, and fast frequency- selective fading channel. In this thesis, we focus on the channel state estimation of a fast frequency- selective fading channel for an OFDM system. The channel is characterized using the CIR as [17] h(τ, t) = L(t) ∑ l γl(t)δ (τ − τl(t)) (2.2) where L(t) is the number of resolvable multipaths, τl(t) is the delay of the lth multipath, and γl(t) is the corresponding complex amplitude. We will look into the details of this channel model in the next chapter. 2.2 OFDM Technique OFDM has become a promising multicarrier modulation technique and has received considerable interest in the past decade. In OFDM systems, data is transmitted in parallel by modulating a 9 2.2. 
OFDM Technique number of closely-spaced orthogonal subcarriers, thereby converting a frequency-selective channel into multiple flat fading subchannels [4], [20]. Moreover, ISI can be eliminated by inserting a guard interval between two consecutive OFDM symbols. With these attractive properties OFDM has been adopted by wireless standards such as DAB, DVB, WLAN, and WMAN [1–3]. In this section, we will present the basic concept and FFT implementation of OFDM and list the advantages and drawbacks of the OFDM technique compared with conventional single-carrier modulation. 2.2.1 Basic Concept OFDM is a special case of multicarrier transmission which uses parallel data transmission and frequency division multiplexing. The earliest development of this concept can be traced back to the 1950s [21] and the idea was published in the mid-1960s [22], [23]. In an OFDM system, the available bandwidth is divided into a collection of narrow sub-bands, and the data is transmitted in parallel by modulating these closely-spaced orthogonal subcarriers. Let X [k] denote the complex symbols to be transmitted by an OFDM system, such that the OFDM (modulated) signal can be expressed as s(t) = N−1 ∑ k=0 X [k]e j2pi fkt , 0 ≤ t ≤ Ts (2.3) where fk = f0 +k∆ f , j 2 =−1, and N denotes the number of subcarriers in the system. Parameters Ts and ∆ f are called symbol duration and subcarrier spacing of the OFDM system, respectively. These two parameters must satisfy the orthogonality condition (Ts∆ f = 1) to guarantee that the OFDM signal can be demodulated properly by the receiver. To demonstrate this orthogonal property, we first let ϕk(t) =  e j2pi fkt , 0 ≤ t ≤ Ts 0, otherwise, k = 0,1, · · · ,N−1 (2.4) and rewrite (2.3) as s(t) = N−1 ∑ k=0 X [k]ϕk(t). (2.5) 10 2.2. OFDM Technique Using the definition of ϕk(t), one can show 1 Ts ∫ Ts 0 ϕk(t)ϕ ∗ l (t)dt = 1 Ts ∫ Ts 0 e j2pi( fk− fl)tdt = 1 Ts ∫ Ts 0 e j2pi(k−l)∆ f tdt = δ [k− l] (2.6) where δ [k− l] is the delta function defined as δ [n] =  1, n = 00, otherwise. (2.7) From (2.6) one can see that {ϕk(t)}N−1k=0 is a set of orthogonal functions. Using this orthogonal property, the received OFDM signal r(t) (in an ideal case r(t) = s(t)) can be demodulated at the receiver by 1 Ts ∫ Ts 0 r(t)e− j2pi fktdt = 1 Ts ∫ Ts 0 s(t)e− j2pi fktdt = 1 Ts ∫ Ts 0 ( N−1 ∑ l=0 X [l]ϕl(t) ) ϕ∗k (t)dt = N−1 ∑ l=0 X [l]δ [l− k] = X [k]. (2.8) According to the discussion above, we provide an illustrative description of a basic OFDM modulator in Fig. 2.2 and the basic principle of OFDM demodulation in Fig. 2.3. 2.2.2 OFDM Implementation Using FFT The relationship between OFDM and the discrete Fourier transform (DFT) was first addressed in [24]. It has been shown that the DFT can be applied to an OFDM system as part of the modulation and demodulation processes. In practice, the DFT can be implemented with the computationally efficient FFT. Here we will briefly discuss the FFT implementation of OFDM. 11 2.2. OFDM Technique      pi    pi     − pi        Figure 2.2: Basic principle of OFDM modulation.              pi−  pi−         − − pi    Figure 2.3: Basic principle of OFDM demodulation. 12 2.2. OFDM Technique                        Figure 2.4: OFDM modulation by means of IFFT processing. From (2.3), we can sample s(t) at an interval of Tsa = Ts/N. Then the sampled OFDM signal becomes s[n] 4 = s(t)|t=nTsa = N−1 ∑ k=0 X [k]e j2pi fknTs N , n = 0,1, . . . ,N−1. 
(2.9) Without loss of generality, we can set f0 = 0 so that fkTs = k and s[n] = N−1 ∑ k=0 X [k]e j2pikn N = IDFT{X [k]}, n = 0,1, . . . ,N−1 (2.10) where IDFT denotes the inverse discrete Fourier transform. Therefore, as illustrated in Fig. 2.4, OFDM modulation can be implemented by means of IDFT (or the computationally efficient inverse fast Fourier transform (IFFT)) processing followed by digital-to-analog conversion. Similarly, the demodulation at the receiver can be carried out using efficient FFT processing, as illustrated in Fig. 2.5. 2.2.3 Advantages and Drawbacks of OFDM As a special case of multicarrier transmission scheme, OFDM has the following key advantages over conventional single-carrier transmission [20]: 13 2.2. OFDM Technique               Figure 2.5: OFDM demodulation by means of FFT processing. • OFDM is an efficient way to deal with multipath fading; for a given channel delay spread, the implementation complexity is much lower than that of a conventional single-carrier system with an equalizer. • In relatively slow time-varying channels, it is possible to significantly enhance the capacity by adapting the data rate per subcarrier according to the SNR of that particular subcarrier. • OFDM is robust against narrowband interference, because such interference only affects a small percentage of the subcarriers. • OFDM makes single-frequency networks possible, which is especially attractive for broad- casting applications. However, OFDM also has some drawbacks compared with conventional single-carrier modulation [20]: • OFDM is more sensitive to frequency offset and phase noise. The frequency offset and phase noise will destroy the orthogonality among subcarriers and hence introduces ICI. • OFDM signals have relatively large PAPR, which tends to reduce the power efficiency of the RF amplifier. • OFDM needs an adaptive or coded scheme to overcome spectral nulls in the channel. 14 Chapter 3 ML-based Channel Estimation for OFDM Systems in Dispersive Time-Varying Channels In this chapter, by taking into account the ICI component, we propose an ML-based channel es- timation algorithm (which does not require channel statistics) for OFDM systems in dispersive time-varying fading channels. A channel estimation problem is formulated based on a discrete- time CIR model. This channel model is particularly appropriate for OFDM uplinks in a macro- cellular system. The channel model developed here allows the CIR to vary within one OFDM symbol. To facilitate the channel estimation problem we adopt a truncated Taylor series expansion to approximate the ICI caused by Doppler frequency shifts. We find that a second-order Taylor series expansion is sufficient for our estimation problem. Then an iterative ML-based algorithm is proposed to estimate the discrete-time channel parameters. Using a small perturbation technique, we analyze the performance of the proposed algorithm in large SNR regions. Our computer sim- ulations demonstrate that the proposed algorithm can estimate the desired parameters accurately, and the simulated performances agree with our theoretical analysis. 3.1 System Model We consider an OFDM system with N subcarriers. For each OFDM symbol, we denote the trans- mitted symbols as X [0],X [1], . . . ,X [N−1]. After the IFFT, the time-domain OFDM signal can be 15 3.2. Formulation of Channel Estimation Problem expressed as [25] s[n] = 1 N N−1 ∑ k=0 X [k]e j2pikn N , n = 0,1, . . . ,N−1 (3.1) where j2 =−1. 
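As a quick numerical illustration of (3.1) and of the orthogonality property (2.8), the following MATLAB sketch modulates one OFDM symbol with the built-in IDFT and recovers the subcarrier symbols with the DFT over an ideal, distortion-free channel. The subcarrier count and the BPSK mapping are assumptions chosen to match the simulations reported later in the thesis.

N  = 256;                         % number of subcarriers
X  = 2*randi([0 1], N, 1) - 1;    % BPSK frequency-domain symbols X[k]
s  = ifft(X);                     % OFDM modulation; MATLAB's ifft includes the 1/N factor of (3.1)
Xr = fft(s);                      % demodulation at the receiver, cf. (2.8)
max(abs(Xr - X))                  % essentially zero: orthogonality recovers every X[k] exactly

Once a dispersive, time-varying channel sits between the IFFT and the FFT this exact recovery no longer holds, which is the situation addressed in the remainder of this chapter.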
Each OFDM symbol is extended with a cyclic prefix and transmitted after appro- priate pulse shaping. Recalling (2.2), we can describe a multipath CIR by h(τ, t) = L ∑ l=1 γl (t)δ (τ − τl) (3.2) where γl(t) is the complex amplitude of the lth multipath. The number of resolvable multipaths L(t) and the delay of the lth multipath τl(t) are assumed to be L and τl , respectively. According to [6], [9], the CIR can be well approximated by a sampled discrete-time CIR h[m,n] 4 = h(mTsa,nTsa), where Tsa is the sampling interval defined as Tsa = Ts/N and Ts is the OFDM symbol duration. In this thesis, we assume a time-varying multipath channel and express the discrete-time CIR as [12], [26] h[m,n] = L ∑ l=1 hle j2pi fl n N δ [m−nl] (3.3) where hl is the complex amplitude for the lth multipath, fl is the corresponding Doppler frequency shift normalized by 1/Ts, and nl is the delay in samples for the lth multipath. The detailed deriva- tion of this discrete-time CIR model is given in Appendix A. It should be emphasized that the discrete-time CIR in (3.3) is only valid for OFDM uplink transmission in macrocellular systems. Without loss of generality, we assume n1 ≤ n2 ≤ ·· · ≤ nL. It is seen from (3.3) that the CIR can vary within one OFDM symbol duration. 3.2 Formulation of Channel Estimation Problem Our goal is to estimate the channel parameters hl , fl , for l = 1,2, . . . ,L, in the discrete-time CIR in (3.3). We assume that the length of the cyclic prefix is longer than the delay spread of the multipath channel and that the receiver accurately removes the cyclic prefix before implementing the FFT. 16 3.2. Formulation of Channel Estimation Problem For the channel model given in (3.3), one can express the output of the FFT as Y [k] = 1 N L ∑ l=1 N−1 ∑ k′=0 N−1 ∑ n=0 X [k′]e j2pik′(n−nl ) N hle j2pi fl n N e− j2pikn N +w[k] = L ∑ l=1 N−1 ∑ k′=0 e− j2pik′nl N X [k′]hl 1− e j2pi(k′−k+ fl) N ( 1− e j2pi(k ′−k+ fl ) N ) +w[k] = L ∑ l=1 N−1 ∑ k′=0 e− j2pik′nl N X [k′]hlR(k′,k, fl)+w[k] = αkX [k]+βk +w[k], k = 0,1, . . . ,N−1 (3.4a) where αk = L ∑ l=1 e− j2piknl N hlR(k,k, fl) (3.4b) and βk = L ∑ l=1 N−1 ∑ k′ 6=k k′=0 e− j2pik′nl N X [k′]hlR(k′,k, fl). (3.4c) In (3.4a) w[k] is an additive complex Gaussian noise component with zero mean and variance σ2, i.e., CN ( 0,σ2 ) , and R(k′,k, fl) = 1 N N−1 ∑ n=0 e j2pin(k′−k+ fl ) N = 1− e j2pi(k′−k+ fl) N ( 1− e j2pi(k ′−k+ fl ) N ) (3.5) represents the interference of X [k′] on the kth subcarrier caused by a normalized Doppler fre- quency shift fl . In (3.4b) and (3.4c), αk and βk denote the multiplicative distortion at the desired subchannel and the additive ICI term, respectively. In the absence of Doppler frequency shifts ( fl = 0, l = 1,2, . . . ,L), we have R(k ′,k,0) = δ [k′− k] resulting in the conventional model without ICI among subcarriers. In the special case when fl = ε , for l = 1,2, . . . ,L, our estimation problem 17 3.2. Formulation of Channel Estimation Problem is mathematically equivalent to the joint channel and frequency offset estimation problem that has been considered in [14], [27] and [28]. In practical mobile communication scenarios, fl is typically less than 0.1. Consider an OFDM system, for example, with N = 256 subcarriers and a subcarrier spacing of 7.81 kHz. A mobile terminal moving at a speed of 84.4 km/h will result in a normalized Doppler frequency fl = 0.05 with a carrier frequency fc = 5 GHz. 
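The following MATLAB sketch works through this numerical example and also evaluates the interference coefficient in (3.5) for one pair of subcarriers; the particular indices k and k′ are assumptions made only for illustration.

c   = 3e8;  fc = 5e9;                      % speed of light and carrier frequency
v   = 84.4/3.6;                            % mobile speed in m/s
Ts  = 1/7.81e3;                            % OFDM symbol duration = 1/(subcarrier spacing)
fl  = (v*fc/c)*Ts                          % normalized Doppler frequency, approximately 0.05
N   = 256;  k = 10;  kp = 11;              % subcarrier pair chosen for illustration
n   = (0:N-1).';
R   = mean(exp(1j*2*pi*n*(kp-k+fl)/N))     % interference of X[k'] on subcarrier k, the sum in (3.5)

Setting fl = 0 in the last line returns exactly zero for k′ ≠ k, which recovers the ICI-free case noted above.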
Exploiting the fact that fl has small values, one can consider a Taylor series expansion for R(k′,k, fl) as R(k′,k, fl) =  1+ jpi N−1 N fl −pi2 2N2−3N+13N2 f 2l − jpi3 (N−1)2 3N2 f 3l + · · · , k′ = k − j2pi(1−ω)N fl +2pi2 (2−N)ω+N (1−ω)2N2 f 2 l + j4pi 3 N 2(1−ω)2+3ω(1+ω−Nω+N) 3N3(1−ω)3 f 3 l + · · · , k′ 6= k (3.6a) where ω = exp ( j2pi(k′− k) N ) . (3.6b) Applying (3.6a) to (3.4a) and collecting the terms with the same power of fl , one obtains Y [k] = L ∑ l=1 hl { Al[k]+Bl[k] fl +Cl[k] f 2 l +Dl[k] f 3 l + · · · } +w[k] (3.7) for k = 0,1, . . . ,N−1, or, in vector form, ~Y = L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l +~Dl f 3 l + · · · ) +~w. (3.8) The first three leading coefficients Al[k], Bl[k], Cl[k] in (3.7) can be shown to be Al[k] = X [k]e − j2piknl N (3.9a) and Bl[k] = jpi N−1 N X [k]e − j2piknl N − j2pi (1−ω)N N−1 ∑ k′ 6=k k′=0 X [k′]e − j2pik′nl N (3.9b) 18 3.2. Formulation of Channel Estimation Problem and Cl[k] =−pi2 2N 2−3N +1 3N2 X [k]e − j2piknl N +2pi2 (2−N)ω +N (1−ω)2N2 N−1 ∑ k′ 6=k k′=0 X [k′]e − j2pik′nl N . (3.9c) The channel estimation technique in this thesis will be based on (3.8). In (3.8), the coefficient vec- tors ~Al , ~Bl , ~Cl , . . . can be derived given the transmitted training sequence X [k], k = 0,1, . . . ,N−1. We will assume the receiver has knowledge of the training sequence X [k] and performs parameter estimation of the hl’s and fl’s based on (3.8). We note that the Taylor expansion in (3.6a) is an approximation for R(k′,k, fl) and the accuracy of the approximation depends on the order of the Taylor expansion. On the other hand, the order of the Taylor expansion also determines the computational complexity of the channel estimation algorithm. In this work we employ a second-order Taylor expansion for | fl|< 0.1. In order to assess the accuracy of the second-order Taylor series expansion, we compare the MSE of the received OFDM symbol using several approximation orders. Here the MSE is defined as the mean squared difference between the exact value of the signal at the receiver (Y [k]) and its approximation (Ŷ [k]) MSE 4 = E {∣∣∣Y [k]− Ŷ [k]∣∣∣2} (3.10) where E{x} is the ensemble average of a random sequence x. Y [k] and Ŷ [k] are calculated according to (3.4a) and (3.7) assuming w[k] = 0, respectively. From Fig. 3.1 one observes that the MSE increases with the normalized Doppler frequency fl as expected. Fig. 3.1 also shows that the first-order approximation may be inadequate. However, the difference between the second-order approximation and third-order approximation is negligible. Furthermore, with a typical value of fl , e.g. fl = 0.05, the MSE of the second-order approximation is about −50 dB and, thus, can be ignored in our estimation problem. Therefore, we can conclude that a second-order approximation 19 3.2. Formulation of Channel Estimation Problem 0.02 0.04 0.06 0.08 0.1 0.12 0.14 −100 −90 −80 −70 −60 −50 −40 −30 −20 −10 0 fl (Normalized Doppler Frequency) M SE (d B) p=1 p=2 p=3 Figure 3.1: An MSE comparison for three different approximation orders. is sufficient for our problem and modify (3.8) to be ~Y ≈ L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l ) +~w. (3.11) The likelihood function of the received symbol is then given by f ( ~Y ∣∣∣{hl, fl}Ll=1 ) = 1piNσN exp − [ ~Y − L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )]H × [ ~Y − L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )] /σ2 } (3.12) 20 3.2. Formulation of Channel Estimation Problem where (·)H denotes the Hermitian transpose of the argument matrix. 
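Before turning to the estimator itself, the accuracy of the second-order expansion can be checked numerically for the diagonal term R(k,k, fl) in (3.6a). The values of N and fl below are assumptions consistent with the example in Section 3.2.

N  = 256;  fl = 0.05;                               % illustrative values
n  = (0:N-1).';
R_exact  = mean(exp(1j*2*pi*n*fl/N));               % exact diagonal term from (3.5)
R_approx = 1 + 1j*pi*(N-1)/N*fl ...
             - pi^2*(2*N^2-3*N+1)/(3*N^2)*fl^2;     % first three terms of (3.6a)
20*log10(abs(R_exact - R_approx))                   % about -58 dB for these values

The residual is dominated by the neglected third-order term and lies far below the noise levels of interest, which supports dropping the higher-order terms in (3.11).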
One can therefore express the ML estimation for the parameters hl’s and fl’s as { ĥl, f̂l }L l=1 = arg min {hl , fl}Ll=1 ∣∣∣∣∣~Y − L∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )∣∣∣∣∣ 2 (3.13) where ĥl’s and f̂l’s are the solutions to ∂ ∂hl′ ln f ( ~Y ∣∣∣{hl, fl}Ll=1 ) = 0 (3.14) and ∂ ∂ fl′ ln f ( ~Y ∣∣∣{hl, fl}Ll=1 ) = 0, (3.15) respectively. From (3.12), we can get ln f ( ~Y ∣∣∣{hl, fl}Ll=1 ) = ln 1piNσN − 1σ2 [ ~Y − L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )]H × [ ~Y − L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )] (3.16) and show that (3.14) and (3.15) are equivalent to the following equations [ ~Y − L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )]H ( ~Al′ +~Bl′ fl′ +~Cl′ f 2 l′ ) = 0 (3.17) ℜ  [ ~Y − L ∑ l=1 hl ( ~Al +~Bl fl +~Cl f 2 l )]H hl′ ( ~Bl′ +2~Cl′ fl′ ) = 0, (3.18) respectively. Here ℜ{·} denotes the real part of the complex argument. It is worth mentioning that we need to pay attention to the properties of complex derivatives [29] when obtaining (3.17) and (3.18). Furthermore, we also need to note that (3.17) and (3.18) are non-linear in the hl’s and fl’s. A technique to solve these two equations will be presented in the next section. 21 3.3. An Iterative Channel Estimation Algorithm Table 3.1: Algorithm 1 Initialization: Initialize h (0) l and f (0) l , for l = 1,2, . . . ,L. Set the iteration counter i = 0. Iterations: Step 1: For each l = 1,2, . . . ,L, calculate h̃ (i+1) l = argmin hl L ( hl, f̃ (i) l ) f̃ (i+1) l = argmin fl L ( h̃ (i+1) l , fl ) Step 2: If max l ( ∣∣∣h̃(i+1)l −h̃(i)l ∣∣∣∣∣∣h̃(i)l ∣∣∣ ) ×100% > δh or max l ( ∣∣∣ f̃ (i+1)l − f̃ (i)l ∣∣∣∣∣∣ f̃ (i)l ∣∣∣ ) ×100% > δ f , let i = i+1 and go to Step 1, otherwise output h̃ (i+1) l ’s and f̃ (i+1) l ’s. 3.3 An Iterative Channel Estimation Algorithm In this section, we propose an iterative approach to estimate the ĥl’s and f̂l’s based on (3.13). The most straightforward approach is the coordinate-ascent algorithm [30] as shown in Table 3.1, where the cost function L(hl, fl) is defined as L(hl, fl) = ∣∣∣∣∣~Y − l−1∑ j=1 h (i+1) j ( ~A j +~B j f (i+1) j + ~C j ( f (i+1) j )2) − L ∑ j=l+1 h (i) j ( ~A j +~B j f (i) j + ~C j ( f (i) j )2) −hl ( ~Al +~Bl fl +Cl fl 2 )∣∣∣∣∣ 2 . (3.19) Note that in Algorithm 1, the first equation in Step 1 is a linear least-square problem and the second equation in Step 1 requires finding the roots for a third-order polynomial. Both equations can be solved using well-known numerical methods. As discussed in [30], the coordinate-ascent al- gorithm is a special case of the space-alternating generalized expectation-maximization algorithm and has the attractive property of increasing the likelihood value at each iteration. 22 3.4. Performance Analysis of the Channel Estimation Algorithm 3.4 Performance Analysis of the Channel Estimation Algorithm In this section, we analyze the MSE performance of the proposed algorithm in large SNR regions assuming the fl’s are small. From the discussion in Section 3.2 we can see that the ML-based estimation algorithm aims to minimize the cost function in (3.13). Using a small perturbation technique [31], one can obtain from (3.13) a linear equation for the perturbation of the estimates as  R11 R12 · · · R1L R21 R22 · · · R2L ... ... . . . ... RL1 RL2 · · · RLL  ︸ ︷︷ ︸ R  ∆~θ1 ∆~θ2 ... ∆~θL  ︸ ︷︷ ︸ ∆~θ =  ~n1 ~n2 ... 
~nL  ︸ ︷︷ ︸ ~n (3.20a) where Ri j =  ℜ { ~AHj ~Ai } ℑ { ~AHj ~Ai } ℜ { h∗j~B H j ~Ai } −ℑ { ~AHj ~Ai } ℜ { ~AHj ~Ai } −ℑ { h∗j~B H j ~Ai } ℜ { hi~A H j ~Bi } ℑ { hi~A H j ~Bi } ℜ{hih∗j~BHj ~Bi}  (3.20b) and ∆~θl =  ℜ{∆hl} ℑ{∆hl} ∆ fl  ~nl =  ℜ { ~wH~Al } −ℑ { ~wH~Al } ℜ { hl~w H~Bl }  (3.20c) where ℜ{·} and ℑ{·} respectively denote the real part and imaginary part of the argument, {·}∗ denotes the complex conjugate of the argument, ∆hl = ĥl −hl and ∆ fl1 = f̂l − fl are the estimate perturbations caused by Gaussian noise~nl . A detailed derivation of (3.20) is given in Appendix B. When the Gaussian noise is absent, we have ĥl = hl and f̂l = fl . Equation (3.20) indicates that the perturbations ∆~θl subject to the Gaussian noise components 1Here ∆ fl does not denote the subcarrier spacing. 23 3.4. Performance Analysis of the Channel Estimation Algorithm ~nl , l = 1,2, . . . ,L, are also Gaussian, and one can express ∆~θl as  ∆~θ1 ∆~θ2 ... ∆~θL  =  R11 R12 · · · R1L R21 R22 · · · R2L ... ... . . . ... RL1 RL2 · · · RLL  −1  ~n1 ~n2 ... ~nL  (3.21) or in a more compact form ∆~θ = R−1~n. (3.22) To determine the variance of ∆~θl subject to Gaussian noise components~nl , we have ~ni~n H j =  ℜ { ~wH~Ai } −ℑ { ~wH~Ai } ℜ { hi~w H~Bi }   ℜ { ~wH~A j } −ℑ { ~wH~A j } ℜ { h j~w H~B j }  H =  ℜ { ~wH~Ai } ℜ { ~wH~A j } −ℜ { ~wH~Ai } ℑ { ~wH~A j } ℜ { ~wH~Ai } ℜ { h j~w H~B j } −ℑ { ~wH~Ai } ℜ { ~wH~A j } ℑ { ~wH~Ai } ℑ { ~wH~A j } −ℑ { ~wH~Ai } ℜ { h j~w H~B j } ℜ { hi~w H~Bi } ℜ { ~wH~A j } −ℜ { hi~w H~Bi } ℑ { ~wH~A j } ℜ { hi~w H~Bi } ℜ { h j~w H~B j }  (3.23) and E { ~ni~n H j } = σ2 2 E   ℜ { ~AHj ~Ai } ℑ { ~AHj ~Ai } ℜ { h∗j~B H j ~Ai } −ℑ { ~AHj ~Ai } ℜ { ~AHj ~Ai } −ℑ { h∗j~B H j ~Ai } ℜ { hi~A H j ~Bi } ℑ { hi~A H j ~Bi } ℜ { hih ∗ j ~BHj ~Bi }   = σ2 2 E { Ri j } = σ2 2 Ri j. (3.24) In obtaining (3.24), we have used the fact that ~w is a complex Gaussian noise vector whose el- ements have independent real and imaginary parts with zero mean and variance equal to σ2/2. 24 3.4. Performance Analysis of the Channel Estimation Algorithm Therefore, using (3.20a), (3.22) and (3.24), one can show straightforwardly that E   ~n1 ... ~nL (~nH1 · · ·~nHL )  = σ2 2 R (3.25) and E   ∆~θ1 ... ∆~θL  ( ∆~θ H1 · · ·∆~θ HL )  = σ2 2 R−1. (3.26) Generally speaking, the elements in Ri j, i 6= j, are non-zero and, thus, the small perturbations for different taps, ∆~θl , are correlated. When L << N and a pseudo-random sequence X [k] is used as the training symbols, the values of Ri j, i 6= j, are generally much smaller than those in Rii and, thus, negligible [25]. For such cases, one can rewrite (3.26) as E { ∆~θ Hl ∆ ~θl } = σ2 2 R−1ll , l = 1,2, . . . ,L (3.27) where Rll =  ℜ { ~AHl ~Al } ℑ { ~AHl ~Al } ℜ { h∗l ~B H l ~Al } −ℑ { ~AHl ~Al } ℜ { ~AHl ~Al } −ℑ { h∗l ~B H l ~Al } ℜ { hl~A H l ~Bl } ℑ { hl~A H l ~Bl } ℜ{h∗l hl~BHl ~Bl}  =  ~AHl ~Al 0 ℜ { h∗l ~B H l ~Al } 0 ~AHl ~Al −ℑ { h∗l ~B H l ~Al } ℜ { hl~A H l ~Bl } ℑ { hl~A H l ~Bl } |hl|2 ~BHl ~Bl}  (3.28) 25 3.4. 
Performance Analysis of the Channel Estimation Algorithm and R−1ll is shown to be R−1ll = 1 |hl|2~AHl ~Al ( ζ − ∣∣∣~BHl ~Al∣∣∣2) ×  ζ |hl|2− ( ℑ { hl~A H l ~Bl })2 ℜ { hl~A H l ~Bl } ℑ { hl~A H l ~Bl } −~AHl ~Alℜ { hl~A H l ~Bl } ℜ { hl~A H l ~Bl } ℑ { hl~A H l ~Bl } ζ |hl|2− ( ℜ { hl~A H l ~Bl })2 −~AHl ~Alℑ { hl~A H l ~Bl } −~AHl ~Alℜ { hl~A H l ~Bl } −~AHl ~Alℑ { hl~A H l ~Bl } ( ~AHl ~Al )2  (3.29) where ζ is defined as ζ = ~AHl ~Al~B H l ~Bl for the sake of convenience. Equation (3.27) indicates that the perturbations ∆~θl for different taps can be treated independently. However, for the same l, the perturbations ∆hl = ℜ{∆hl}+ jℑ{∆hl} and ∆ fl are still correlated because the non-diagonal elements in Rll are not negligible. It is of interest to obtain some explicit results for the estimation performance. Motivated by [32], we consider the asymptotic behavior with a white training sequence. Under this condition, we have ~AHl ~Al = 1 (3.30a) ~BHl ~Al =− jpi(N−1) N (3.30b) ~BHl ~Bl = 2pi2(N−1)(2N−1) 3N2 (3.30c) where the derivations of (3.30a)-(3.30b) are trivial, and a detailed derivation of (3.30c) can be found in Appendix C. According to the definition of ∆~θl , we can obtain ∆~θl∆~θ H l =  ℜ{∆hl} ℑ{∆hl} ∆ fl   ℜ{∆hl} ℑ{∆hl} ∆ fl  H =  (ℜ{∆hl})2 ℜ{∆hl}ℑ{∆hl} ℜ{∆hl}∆ fl ℜ{∆hl}ℑ{∆hl} (ℑ{∆hl})2 ℑ{∆hl}∆ fl ℜ{∆hl}∆ fl ℑ{∆hl}∆ fl ∆ f 2l  (3.31) 26 3.4. Performance Analysis of the Channel Estimation Algorithm and ∆h∗l ∆hl = |∆hl|2 = (ℜ{∆hl})2 +(ℑ{∆hl})2 . (3.32) Therefore, we can obtain the expression for the estimation performance as E {∆h∗l ∆hl} = E { (ℜ{∆hl})2 +(ℑ{∆hl})2 } = E { (ℜ{∆hl})2 } +E { (ℑ{∆hl})2 } . (3.33) Applying (3.27), (3.29) and (3.30) to (3.33), one can obtain E {∆h∗l ∆hl} = σ2 ( 2~AHl ~Al~B H l ~Bl − ∣∣∣~BHl ~Al∣∣∣2) 2~AHl ~Al ( ~AHl ~Al~B H l ~Bl − ∣∣∣~BHl ~Al∣∣∣2) = (5N2−6N−1)σ2 2(N2−1) . (3.34) Similarly, we can obtain E { ∆ f 2l } = σ2~AHl ~Al 2 |hl|2 ( ~AHl ~Al~B H l ~Bl − ∣∣∣~BHl ~Al∣∣∣2) = 3N2σ2 2pi2(N2−1) |hl|2 . (3.35) When N → ∞, we have lim N→∞ E {∆h∗l ∆hl}= lim N→∞ (5N2−6N−1)σ2 2(N2−1) = 5σ2 2 (3.36) and lim N→∞ E { ∆ f 2l } = lim N→∞ 3N2σ2 2pi2(N2−1) |hl|2 = 3σ2 2pi2 |hl|2 . (3.37) 27 3.5. Numerical Results and Discussions In comparison, the Cramer-Rao bound (CRB) at large SNR for the classical frequency offset esti- mator of Moose is [33], [34] CRBMoose ( fl) = ( 2 2pi )2 2σ2 |hl|2~AHl ~Al = 2σ2 pi2 |hl|2 . (3.38) Thus, from (3.37) and (3.38), we can see that the estimation algorithm considered in this paper can achieve 10log10 2 3/2 = 1.25 dB gain over Moose’s estimator. In Section 3.5, it will be seen that simulations performed to test the analytical results confirm the theoretical analysis. 3.5 Numerical Results and Discussions In this section, we present some numerical results by considering a discrete baseband OFDM uplink in our simulation. The complex amplitude hl is randomly chosen, and the normalized Doppler frequency fl is chosen to be less than 0.1. The total number of subcarriers is fixed at N = 256. A binary phase-shift keying (BPSK) signaling scheme is adopted in the simulation. We set the stopping threshold values for the proposed algorithm as δh = 1% and δ f = 1%. A second-order Taylor series expansion is adopted in the simulation. Here the MSE is adopted as the performance measure. Fig. 3.2 plots the MSE performance comparison between Algorithm 1 and Moose’s estimator for fl in a fading channel with L = 1. Here hl is randomly chosen and fl is set to be fl = 0.05. 
The MSEs are obtained by averaging over 2000 trials, and the theoretical performance predictions of f̂l are obtained using (3.37) and (3.38). From Fig. 3.2, we observe that the simulated MSE performance of both algorithms and their theoretical predictions are in excellent agreement over a wide range of SNR values. The simulated performance curve for Algorithm 1 exhibits an error floor, which is caused by the approximation error when the average SNR value is greater than 25 dB. This is the cause of discrepancy between the simulated curve and the theoretical curve for SNR greater than 25 dB. However, the values of MSE in these regions are small and the error floor is negligible. Fig. 3.2 also shows that Algorithm 1 outperforms Moose’s estimator by about 1.2 dB over a wide range of SNR values. 28 3.5. Numerical Results and Discussions Fig. 3.3 plots the average MSE performance of ĥl in a multipath fading channel with L = 3 for the proposed algorithm and Figs. 3.4-3.6 plot MSE performance of f̂l for the same sys- tem with three different fl values. Here the hl’s are arbitrarily chosen to be ~hl = [0.2944 + 1.6236 j,−1.3362−0.6918 j,0.7143+0.858 j], and the fl’s are set to be ~fl = [0.04,0.02,0.06], re- spectively. The MSEs are obtained by averaging over 1500 trials, and the theoretical performance predictions of ĥl and f̂l are obtained using (3.36) and (3.37), respectively. From Figs. 3.3-3.6, we observe that the simulated MSE performance of ĥl and f̂l for Algorithm 1 and their theoretical predictions have excellent agreement over a wide range of SNR values. Figs. 3.4-3.6 also show that the MSE performance degrades with an increasing value of fl , owing to increased (determin- istic) approximation errors. From Fig. 3.4, it is seen for fl = 0.02 that the simulated result and the theoretical result agree over the entire SNR range. Figs. 3.3, 3.5 and 3.6 suggest that the simulated results overestimate the theoretical results when the average SNR value is large. This is because we adopt a second-order Taylor series expansion to simplify the estimation problem, and the error floor is caused by the approximation error. However, the values of MSE in these regions are small and the error floor is negligible. 29 3.5. Numerical Results and Discussions −10 0 10 20 30 40 −80 −70 −60 −50 −40 −30 −20 −10 Average SNR (dB) M S E o f f̂ l (d B ) Simulation result for Algorithm 1 Simulation result for Moose estimator Theoretical value for Algorithm 1 Theoretical value for Moose estimator Figure 3.2: An MSE performance comparison between Algorithm 1 and Moose’s estimator for f̂l when fl = 0.05, δh = 1% and δ f = 1%. 30 3.5. Numerical Results and Discussions −10 0 10 20 30 40 −60 −50 −40 −30 −20 −10 0 Average SNR (dB) A ve ra g e M S E o f ĥ l (d B ) Simulated result Theoretical value Figure 3.3: An average MSE performance comparison between the simulated results and the the- oretical values for ĥl of Algorithm 1 when δh = 1% and δ f = 1%. 31 3.5. Numerical Results and Discussions −10 0 10 20 30 40 −70 −60 −50 −40 −30 −20 −10 Average SNR (dB) M S E o f f̂ l (d B ) Simulated result Theoretical value Figure 3.4: An MSE performance comparison between the simulated results and the theoretical values for f̂l of Algorithm 1 when fl = 0.02, δh = 1% and δ f = 1%. 32 3.5. 
Numerical Results and Discussions −10 0 10 20 30 40 −70 −60 −50 −40 −30 −20 −10 Average SNR (dB) M S E o f f̂ l (d B ) Simulated result Theoretical value Figure 3.5: An MSE performance comparison between the simulated results and the theoretical values for f̂l of Algorithm 1 when fl = 0.04, δh = 1% and δ f = 1%. 33 3.5. Numerical Results and Discussions −10 0 10 20 30 40 −70 −60 −50 −40 −30 −20 −10 Average SNR (dB) M S E o f f̂ l (d B ) Simulated result Theoretical value Figure 3.6: An MSE performance comparison between the simulated results and the theoretical values for f̂l of Algorithm 1 when fl = 0.06, δh = 1% and δ f = 1%. 34 Chapter 4 Convergence Study of ML-based Channel Estimation Algorithms In Chapter 3, by taking into account the ICI component, we proposed an ML-based channel esti- mation algorithm (which does not require channel statistics) for OFDM systems in dispersive time- varying fading channels. Our proposed channel estimation algorithm is iterative. In this chapter, we will study the convergence performance of the iterative algorithm presented in Chapter 3 ana- lytically. Furthermore, we propose an improved fast converging channel estimation algorithm for OFDM systems in dispersive time-varying fading channels. An SOR method [35] is adopted to accelerate our proposed channel estimation algorithm. Our computer simulations demonstrate that the improved algorithm, which achieves the same MSE performance as the algorithm proposed in Chapter 3, has better convergence performance. 4.1 Convergence Performance Analysis of the Proposed Algorithm In this section we first analyze the convergence rate of Algorithm 1 proposed in Chapter 3. Based on the convergence rate analysis we propose an improved fast converging algorithm using an SOR method [35] in the next section. We first rewrite the linear expression in (3.20a) for small pertur- 35 4.1. Convergence Performance Analysis of the Proposed Algorithm bation as a linear Gauss-Seidel iteration [35, eq. (3.15), page 72] D  ∆~θ (i+1) 1 ∆~θ (i+1) 2 ... ∆~θ (i+1) L  =−L  ∆~θ (i+1) 1 ∆~θ (i+1) 2 ... ∆~θ (i+1) L  −U  ∆~θ (i) 1 ∆~θ (i) 2 ... ∆~θ (i) L  +  ~n1 ~n2 ... ~nL  (4.1) or, equivalently, D∆~θ (i+1) =−L∆~θ (i+1)−U∆~θ (i) +~n (4.2) where D is a diagonal matrix whose entries are the diagonal elements of R; L and U are the lower and upper triangular matrices of R, i.e., L+D+U = R. Applying (3.20a) to (4.1), we can obtain D  ∆~θ (i+1) 1 ∆~θ (i+1) 2 ... ∆~θ (i+1) L  =−L  ∆~θ (i+1) 1 ∆~θ (i+1) 2 ... ∆~θ (i+1) L  −U  ∆~θ (i) 1 ∆~θ (i) 2 ... ∆~θ (i) L  +R  ∆~θ1 ∆~θ2 ... ∆~θL  , (4.3) i.e., (D+L)  ∆~θ (i+1) 1 −∆~θ1 ∆~θ (i+1) 2 −∆~θ2 ... ∆~θ (i+1) L −∆~θL  =−U  ∆~θ (i) 1 −∆~θ1 ∆~θ (i) 2 −∆~θ2 ... ∆~θ (i) L −∆~θL  (4.4) or, in a more compact form as (D+L) ( ∆~θ (i+1)−∆~θ ) =−U ( ∆~θ (i)−∆~θ ) . (4.5) The convergence rate2 of (4.5) is determined by the largest (in an absolute value sense) eigenvalue of (D+L)−1U [35]. The Gauss-Seidel iteration is a simple and efficient algorithm. However, by exploiting the 2With this convention, the smaller the largest eigenvalue is, the faster the iterative algorithm will converge. 36 4.1. Convergence Performance Analysis of the Proposed Algorithm structure of the Fisher information matrix R, one may adopt an overrelaxation philosophy to accel- erate the convergence rate. 
We next introduce an SOR method [35] to achieve faster convergence. Corresponding to the linear iteration (4.2), an overrelaxed linear iteration can be expressed as [35, eq. (3.22), page 73]

D\,\Delta\vec{\theta}^{(i+1)} = \eta\left(-L\,\Delta\vec{\theta}^{(i+1)} - U\,\Delta\vec{\theta}^{(i)} + \vec{n}\right) + (1-\eta)\,D\,\Delta\vec{\theta}^{(i)}    (4.6)

where η is the relaxation factor. Applying (3.20a) to (4.6), we obtain

(D+\eta L)\left(\Delta\vec{\theta}^{(i+1)} - \Delta\vec{\theta}\right) = \left[(1-\eta)D - \eta U\right]\left(\Delta\vec{\theta}^{(i)} - \Delta\vec{\theta}\right).    (4.7)

Similarly to the Gauss-Seidel iteration in (4.5), the convergence rate of the successively overrelaxed iteration in (4.7) depends on the largest absolute value of the eigenvalues of (D+ηL)^{-1}[(1−η)D−ηU] [35]. To achieve the fastest convergence rate for (4.7), one must minimize the largest eigenvalue of (D+ηL)^{-1}[(1−η)D−ηU] by choosing the optimal value of η. To simplify the analysis, we again assume that Rij = 0 for i ≠ j. Under this assumption, D, L and U are block diagonal matrices composed of the 3×3 matrices Dl, Ll and Ul, l = 1,...,L. Thus, as a first step, we consider the optimization of the eigenvalues of (Dl+ηLl)^{-1}[(1−η)Dl−ηUl] separately. For convenience, we denote

a_l = \vec{A}_l^H\vec{A}_l,\qquad b_l = |h_l|^2\,\vec{B}_l^H\vec{B}_l,\qquad c_l = \Re\left\{h_l^*\vec{B}_l^H\vec{A}_l\right\},\qquad d_l = \Im\left\{h_l^*\vec{B}_l^H\vec{A}_l\right\},

and obtain

M_\eta \triangleq D_l + \eta L_l = \begin{bmatrix} a_l & 0 & 0\\ 0 & a_l & 0\\ \eta c_l & -\eta d_l & b_l \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ \eta c_l/a_l & -\eta d_l/a_l & 1 \end{bmatrix}\begin{bmatrix} a_l & 0 & 0\\ 0 & a_l & 0\\ 0 & 0 & b_l \end{bmatrix}    (4.8)

N_\eta \triangleq (1-\eta)D_l - \eta U_l = \begin{bmatrix} (1-\eta)a_l & 0 & -\eta c_l\\ 0 & (1-\eta)a_l & \eta d_l\\ 0 & 0 & (1-\eta)b_l \end{bmatrix}.    (4.9)

It is straightforward to show that

M_\eta^{-1}N_\eta = \begin{bmatrix} 1-\eta & 0 & -\eta c_l/a_l\\ 0 & 1-\eta & \eta d_l/a_l\\ -\eta(1-\eta)c_l/b_l & \eta(1-\eta)d_l/b_l & (1-\eta)+\eta^2(c_l^2+d_l^2)/(a_l b_l) \end{bmatrix}.    (4.10)

The eigenvalues of (4.10) are the roots of the characteristic equation

\lambda^3 - \left[3(1-\eta)+\frac{\eta^2(c_l^2+d_l^2)}{a_l b_l}\right]\lambda^2 + (1-\eta)\left[3(1-\eta)+\frac{\eta^2(c_l^2+d_l^2)}{a_l b_l}\right]\lambda - (1-\eta)^3 = 0.    (4.11)

For the sake of simplicity, we denote

k_l \triangleq \frac{c_l^2+d_l^2}{a_l b_l} = \frac{\left|\vec{B}_l^H\vec{A}_l\right|^2}{\vec{A}_l^H\vec{A}_l\,\vec{B}_l^H\vec{B}_l}    (4.12)

and simplify (4.11) to

\lambda^3 - \left[3(1-\eta)+\eta^2 k_l\right]\lambda^2 + (1-\eta)\left[3(1-\eta)+\eta^2 k_l\right]\lambda - (1-\eta)^3 = 0.    (4.13)

It is shown in Appendix D that the optimal value of η which minimizes the largest eigenvalue (in absolute value) of M_\eta^{-1}N_\eta is

\eta = \frac{2\left(1-\sqrt{1-k_l}\right)}{k_l},    (4.14)

and the corresponding largest absolute value of the eigenvalues of M_\eta^{-1}N_\eta is

|\lambda| = |1-\eta| = \left|1 - \frac{2\left(1-\sqrt{1-k_l}\right)}{k_l}\right|.    (4.15)

Generally, an SOR method requires knowledge about the structure of the Fisher information matrix. It is observed in (3.20b) that this matrix (the Fisher information matrix Rll) depends on the channel realization hl. However, (4.12) and (4.14) indicate that for a given 3×3 Fisher information matrix Rll, the optimal η value does not depend on the channel realization hl; it relies only on the training sequence. In other words, given the training sequence, the receiver can employ the η in (4.14) to achieve a fast convergence rate for the overrelaxed iteration. Convergence-accelerating algorithms typically require full knowledge of the Fisher information matrix Rll, which implies that one needs to estimate hl and renders such algorithms less useful. In contrast, the overrelaxation method proposed in this work only requires knowledge of the training sequence and is therefore more robust. This property makes the overrelaxation method an attractive approach for achieving a fast convergence rate in practical receivers.
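To illustrate this channel-independence, the following minimal MATLAB sketch (illustrative only, not part of the thesis code) computes kl from (4.12) and the relaxation factor η from (4.14) using only the training-dependent vectors ~Al and ~Bl, and then verifies (4.15) numerically; the vectors and the channel gain used below are arbitrary placeholders rather than the actual training-sequence quantities.

% Illustrative sketch: the relaxation factor in (4.14) depends only on the
% training-dependent vectors Al and Bl (placeholders below), not on hl.
N  = 256;
Al = randn(N,1) + 1j*randn(N,1);      % placeholder for the vector Al (assumption)
Bl = randn(N,1) + 1j*randn(N,1);      % placeholder for the vector Bl (assumption)
hl = 0.3 + 1.6j;                      % an arbitrary channel gain

kl  = abs(Bl'*Al)^2 / (real(Al'*Al) * real(Bl'*Bl));   % eq. (4.12); hl cancels
eta = 2*(1 - sqrt(1 - kl)) / kl;                       % eq. (4.14)

% Check (4.15): with this eta, |lambda|_max of M^{-1}N equals |1 - eta|,
% even though M and N themselves involve hl.
al = real(Al'*Al);          bl = abs(hl)^2 * real(Bl'*Bl);
cl = real(hl' * (Bl'*Al));  dl = imag(hl' * (Bl'*Al));
M  = [al 0 0; 0 al 0; eta*cl -eta*dl bl];                          % eq. (4.8)
Nm = [(1-eta)*al 0 -eta*cl; 0 (1-eta)*al eta*dl; 0 0 (1-eta)*bl];  % eq. (4.9)
fprintf('|lambda|_max = %.4f, |1 - eta| = %.4f\n', max(abs(eig(M\Nm))), abs(1-eta));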
From (4.12) and (4.14), it is observed that the η value depends on kl and may vary for different l. Substituting (3.30) into (4.12) yields

k_l = \frac{\pi^2(N-1)^2/N^2}{2\pi^2(N-1)(2N-1)/(3N^2)} = \frac{3(N-1)}{2(2N-1)}    (4.16)

which is, in fact, independent of l when a white training sequence is employed. Letting N → ∞, we obtain from (4.14) and (4.16) the limiting value

\eta = \lim_{N\to\infty}\frac{2\left(1-\sqrt{1-k_l}\right)}{k_l} = \frac{4}{3}    (4.17)

which leads to a convergence rate of |1−η| = 1/3. In comparison, for the Gauss-Seidel iteration obtained by letting η = 1, (4.10) reduces to

M_\eta^{-1}N_\eta = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & k_l \end{bmatrix}    (4.18)

so the largest eigenvalue of the iteration is kl. Thus, the largest eigenvalue for the Gauss-Seidel iteration is kl = 3/4, which is larger than the value 1/3 for the SOR method, implying faster convergence for the latter. We note that the η value in (4.17) is only asymptotically optimal; in Section 4.3, we demonstrate that this suboptimal choice already yields a significant improvement over the Gauss-Seidel iteration.

In each Gauss-Seidel iteration, the value that maximizes the likelihood function is produced as the output. In contrast, an SOR method adopts a less aggressive strategy by linearly combining the Gauss-Seidel solution with the previous value:

h_l^{(i+1)} = \eta\,\tilde{h}_l^{(i+1)} + (1-\eta)\,h_l^{(i)}    (4.19a)
f_l^{(i+1)} = \eta\,\tilde{f}_l^{(i+1)} + (1-\eta)\,f_l^{(i)}    (4.19b)

where η is the overrelaxation parameter, and \tilde{h}_l^{(i+1)} and \tilde{f}_l^{(i+1)} are the solutions of the Gauss-Seidel iteration. When η = 1, the SOR method degenerates to the Gauss-Seidel iteration.

4.2 An Improved Channel Estimation Algorithm

The above discussion is based on the linear approximation (3.20a) near the ML estimate. Algorithm 1 is, however, a nonlinear estimator. Using the overrelaxation method, we propose an improved iterative ML algorithm, summarized in Table 4.1. Note that the function L(hl, fl) in Step 1 is the cost function defined in (3.19), and η = 4/3 is the asymptotically optimal value given by (4.17). Our numerical results will show that the improved algorithm achieves good performance, both in terms of MSE and convergence rate.

Table 4.1: Algorithm 2

Initialization: Initialize h_l^{(0)} and f_l^{(0)} for l = 1,2,...,L. Set the iteration counter i = 0.

Iterations:

Step 1: For each l = 1,2,...,L, calculate
    \tilde{h}_l^{(i+1)} = \arg\min_{h_l} L\left(h_l, f_l^{(i)}\right)
    \tilde{f}_l^{(i+1)} = \arg\min_{f_l} L\left(h_l^{(i+1)}, f_l\right)
Let h_l^{(i+1)} = \eta\,\tilde{h}_l^{(i+1)} + (1-\eta)\,h_l^{(i)} and f_l^{(i+1)} = \eta\,\tilde{f}_l^{(i+1)} + (1-\eta)\,f_l^{(i)}.

Step 2: If \max_l\left(|h_l^{(i+1)}-h_l^{(i)}|/|h_l^{(i)}|\right)\times 100\% > \delta_h or \max_l\left(|f_l^{(i+1)}-f_l^{(i)}|/|f_l^{(i)}|\right)\times 100\% > \delta_f, let i = i+1 and go to Step 1; otherwise, output the h_l^{(i+1)}'s and f_l^{(i+1)}'s.

4.3 Numerical Results and Discussions

In this section, we present numerical results for the same discrete baseband OFDM uplink transmission considered in Chapter 3. All simulation parameters adopted here are the same as those considered in Section 3.5. We compare the MSE performance and the convergence behaviour of Algorithm 1 and Algorithm 2 to demonstrate the performance of the improved algorithm; a compact per-trial driver sketch based on the Appendix E routines is given below.
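The following is a simplified, single-trial driver sketch (an illustration, not the full simulation script reproduced in Appendix E) showing how one comparison trial can be run. It assumes the Appendix E routines are available on the MATLAB path, with their file names written here with underscores in place of the spacing introduced by typesetting (e.g., channel_est_A1.m, channel_est_A2.m); the parameter values mirror those listed in Section 3.5 and Appendix E.

% Single-trial comparison of Algorithm 1 and Algorithm 2 (illustrative sketch).
Nfft = 256;  L = 3;  nl = 0:L-1;  p = 2;             % second-order Taylor expansion
hl = [0.2944+1.6236j, -1.3362-0.6918j, 0.7143+0.858j];
fl = [0.02 0.04 0.06];
delta_h = 0.01;  delta_f = 0.01;  SNRdB = 25;

Xbit = round(rand(1, Nfft));                          % BPSK training bits
X    = mapper(Xbit, 1, 1);   X = X / sqrt(X*X');      % normalized training sequence
[Al, Bl, Cl, Dl] = calcu_coeff(Nfft, L, nl, X);       % Taylor-series coefficients
[Y, ~]           = calcu_corrupted(Nfft, L, fl, hl, nl, X, SNRdB);

hl0 = (1+1j)*ones(2, L);  fl0 = 0.05*ones(2, L);      % initial guesses, as in Appendix E
[h1, f1, c1] = channel_est_A1(Nfft, L, p, Al, Bl, Cl, Dl, Y, delta_h, delta_f, hl0, fl0);
[h2, f2, c2] = channel_est_A2(Nfft, L, p, Al, Bl, Cl, Dl, Y, delta_h, delta_f, hl0, fl0);
% The estimators count down from 1000, so the main script reports 1000 - count.
fprintf('Iterations: Algorithm 1 = %d, Algorithm 2 = %d\n', 1000-c1, 1000-c2);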
Fig. 4.1 plots the average MSE performance of ĥl for the two proposed algorithms, and Figs. 4.2-4.4 plot the MSE performance of f̂l for the same system with three different fl values. The MSEs are obtained by averaging over 2000 trials, and the theoretical performance predictions of ĥl and f̂l are obtained using (3.36) and (3.37), respectively. From Figs. 4.1-4.4, we observe that the simulated MSE performance of ĥl and f̂l for Algorithm 1 and Algorithm 2, along with their theoretical predictions, are in excellent agreement over a wide range of SNR values. A slight difference between the MSE performance of Algorithm 1 and Algorithm 2 can be noticed when the SNR exceeds 30 dB; however, the MSEs of both algorithms in this region are below −50 dB, so the differences are negligible. We conclude that the two algorithms have almost the same MSE performance.

Fig. 4.5 plots the average number of iterations required by the two proposed algorithms to achieve the MSE performance shown in Figs. 4.1-4.4. As expected, Fig. 4.5 shows that the average number of iterations of both algorithms decreases with increasing SNR up to 20 dB and remains essentially unchanged when the average SNR exceeds 20 dB. Fig. 4.5 also shows that Algorithm 2 converges faster than Algorithm 1, achieving about a 63% improvement in the number of iterations in the large-SNR region. From Fig. 4.5, it is also seen that the asymptotically optimal η value, obtained under a high-SNR assumption, works well when the SNR is low. Figs. 4.6 and 4.7 respectively plot the average MSE performance of ĥl and f̂l versus the number of iterations. For both figures, the average SNR is fixed at 25 dB, and we observe that the MSE of Algorithm 2 decreases faster than that of Algorithm 1. Our numerical results clearly demonstrate that Algorithm 2, which uses an SOR technique, achieves faster convergence than Algorithm 1.

Figure 4.1: An average MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for ĥl, when δh = 1% and δf = 1% (average MSE of ĥl in dB versus average SNR in dB).

Figure 4.2: An MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for f̂l, when fl = 0.02, δh = 1% and δf = 1% (MSE of f̂l in dB versus average SNR in dB).

Figure 4.3: An MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for f̂l, when fl = 0.04, δh = 1% and δf = 1% (MSE of f̂l in dB versus average SNR in dB).

Figure 4.4: An MSE performance comparison between Algorithm 1 and Algorithm 2, along with the theoretical values for f̂l, when fl = 0.06, δh = 1% and δf = 1% (MSE of f̂l in dB versus average SNR in dB).

Figure 4.5: An average number of iterations comparison between Algorithm 1 and Algorithm 2 when η = 4/3, δh = 1% and δf = 1% (average number of iterations versus average SNR in dB).
Figure 4.6: The average MSE performance versus the number of iterations for ĥl of Algorithm 1 and Algorithm 2 when η = 4/3 and SNR = 25 dB.

Figure 4.7: The average MSE performance versus the number of iterations for f̂l of Algorithm 1 and Algorithm 2 when η = 4/3 and SNR = 25 dB.

Chapter 5

Conclusions

In this chapter, we first summarize the contributions of this work and then suggest some future work.

5.1 Summary of Contributions

The performance of our proposed algorithms has been verified through numerical simulations and mathematical analyses. In the simulations, we do not compare the performance of our proposed algorithms with other solutions because of the uniqueness of our channel model. However, we do compare the simulated estimation performance with the CRB, which is the best performance one can achieve. The contributions of this thesis can be summarized as follows.

1. A discrete CIR model, which allows the CIR to vary within one OFDM symbol, has been developed for OFDM uplink transmission in macrocellular systems. The CIR is characterized by the channel parameters hl and fl, where hl is the complex amplitude of the lth multipath and fl is the corresponding Doppler frequency shift normalized to the duration of an OFDM symbol. In this case, the channel state can be determined by estimating the unknown channel parameters, so the channel estimation problem is simplified.

2. Taking into account the ICI component, we formulate the channel estimation problem based on our proposed discrete CIR model. Because fl is typically less than 0.1 in practical mobile communication scenarios, the ICI factor R(k′, k, fl) is represented by a truncated Taylor series expansion to facilitate the channel estimation problem. We also demonstrate that a second-order Taylor series approximation is sufficient for a typical channel estimation problem.

3. We propose two iterative ML-based channel estimation algorithms, which do not require channel statistics, for OFDM systems in dispersive time-varying fading channels. These two algorithms can estimate the desired parameters accurately in order to provide channel state information to the receiver. Our proposed channel estimation algorithms are particularly useful for an OFDM system that has already compensated for the frequency offset due to local oscillator mismatch. The MSE performance of the proposed algorithms in the large-SNR region has been analyzed using a small perturbation technique. Since our algorithms are iterative, their convergence behaviour is also analyzed. Based on this analysis, an SOR method is adopted to accelerate the proposed algorithm. The performance analyses are verified by our simulation results.

5.2 Future work

In this work, we have proposed two iterative ML-based channel estimation algorithms and demonstrated their performance in terms of MSE. However, channel estimation is only one part of the receiver design. Based on accurate channel estimates, we will design a receiver structure built on the proposed channel model. The bit-error rate (BER) performance of such a system will also be investigated.
Single-carrier frequency division multiple access (SC-FDMA) is an OFDM-based technique that has been adopted for the Long Term Evolution (LTE) uplink by the Third Generation Partnership Project (3GPP). We believe that our proposed channel model and channel estimation algorithms can be applied to such a system. In our future research, we will study the channel estimation problem for SC-FDMA systems using the dispersive, time-varying channel model proposed in this thesis.

Bibliography

[1] M. K. Ozdemir and H. Arslan, "Channel estimation for wireless OFDM systems," IEEE Communications Surveys and Tutorials, vol. 9, no. 2, pp. 18–48, July 2007.

[2] I. Koffman and V. Roman, "Broadband wireless access solutions based on OFDM access in IEEE 802.16," IEEE Communications Magazine, vol. 40, no. 4, pp. 96–103, Apr. 2002.

[3] I. Barhumi, G. Leus, and M. Moonen, "Optimal training design for MIMO-OFDM systems in mobile wireless channels," IEEE Transactions on Signal Processing, vol. 51, no. 6, pp. 1615–1624, June 2003.

[4] Y. Li and G. L. Stüber, Orthogonal Frequency Division Multiplexing for Wireless Communications. New York: Springer Publications, 2006.

[5] J. G. Proakis, Digital Communications, 4th ed. New York: McGraw-Hill, 2000.

[6] J.-J. van de Beek, O. Edfors, M. Sandell, S. Wilson, and P. Börjesson, "On channel estimation in OFDM systems," in Proc. IEEE Vehicular Technology Conference (VTC 1995), Chicago, IL, USA, July 25–28, 1995, vol. 2, pp. 815–819.

[7] O. Edfors, M. Sandell, J.-J. van de Beek, S. K. Wilson, and P. O. Börjesson, "OFDM channel estimation by singular value decomposition," IEEE Transactions on Communications, vol. 46, no. 7, pp. 931–939, July 1998.

[8] P. Chen and H. Kobayashi, "Maximum likelihood channel estimation and signal detection for OFDM systems," in Proc. IEEE International Conference on Communications (ICC 2002), New York, NY, USA, May 2002, vol. 3, pp. 1640–1645.

[9] Y. Li, L. J. Cimini, and N. R. Sollenberger, "Robust channel estimation for OFDM systems with rapid dispersive fading channels," IEEE Transactions on Communications, vol. 46, no. 7, pp. 902–915, July 1998.

[10] J. K. Moon and S. I. Choi, "Performance of channel estimation methods for OFDM systems in a multipath fading channels," IEEE Transactions on Consumer Electronics, vol. 46, no. 1, pp. 161–170, Feb. 2000.

[11] Y. Li, "Pilot-symbol-aided channel estimation for OFDM in wireless systems," IEEE Transactions on Vehicular Technology, vol. 49, no. 4, pp. 1207–1215, July 2000.

[12] Y. Zhao and A. Huang, "A novel channel estimation method for OFDM mobile communications systems based on pilot signals and transform-domain processing," in Proc. IEEE Vehicular Technology Conference (VTC 1997), Phoenix, AZ, USA, May 4–7, 1997, vol. 3, pp. 2089–2093.

[13] M. X. Chang and Y. T. Su, "Model-based channel estimation for OFDM signals in Rayleigh fading," IEEE Transactions on Communications, vol. 50, no. 4, pp. 540–544, Apr. 2002.

[14] F. Z. Merli and G. M. Vitetta, "Iterative ML-based estimation of carrier frequency offset, channel impulse response and data in OFDM transmissions," IEEE Transactions on Communications, vol. 56, no. 3, pp. 497–506, Mar. 2008.

[15] G. L. Stüber, Principles of Mobile Communications. Norwell, MA: Kluwer, 1996.

[16] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2001.

[17] A. Goldsmith, Wireless Communications. New York: Cambridge University Press, 2005.

[18] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. New York: Cambridge University Press, 2005.
[19] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications. Cambridge, UK: Cambridge University Press, 2003.

[20] R. van Nee and R. Prasad, OFDM for Wireless Multimedia Communications. Boston: Artech House, 2000.

[21] R. R. Mosier and R. G. Clabaugh, "Kineplex, a bandwidth efficient binary transmission system," IEEE Transactions on Power Apparatus and Systems, vol. 76, pp. 723–728, Jan. 1958.

[22] R. W. Chang, "Synthesis of band limited orthogonal signals for multichannel data transmission," Bell System Technical Journal, vol. 45, no. 12, pp. 1775–1796, Dec. 1966.

[23] B. R. Saltzberg, "Performance of an efficient parallel data transmission system," IEEE Transactions on Communication Technology, vol. COM-15, no. 6, pp. 805–813, Dec. 1967.

[24] S. B. Weinstein and P. M. Ebert, "Data transmission by frequency-division multiplexing using the discrete Fourier transform," IEEE Transactions on Communication Technology, vol. COM-19, no. 5, pp. 628–634, Oct. 1971.

[25] A. Gorokhov and J.-P. Linnartz, "Robust OFDM receivers for dispersive time-varying channels: equalization and channel acquisition," IEEE Transactions on Communications, vol. 52, no. 4, pp. 527–583, Apr. 2004.

[26] R. Steele and L. Hanzo, Mobile Radio Communications. New York: Wiley-IEEE Press, 1999.

[27] X. Ma, H. Kobayashi, and S. C. Schwartz, "Joint frequency offset and channel estimation for OFDM," in Proc. IEEE Global Telecommunications Conference (GLOBECOM 2003), San Francisco, CA, USA, Dec. 1–5, 2003, vol. 1, pp. 15–19.

[28] T. Cui and C. Tellambura, "Joint frequency offset and channel estimation for OFDM systems using pilot symbols and virtual carriers," IEEE Transactions on Wireless Communications, vol. 6, no. 4, pp. 1193–1202, Apr. 2007.

[29] K. B. Petersen and M. S. Pedersen, The Matrix Cookbook. http://matrixcookbook.com, 2008.

[30] J. A. Fessler and A. O. Hero, "Space-alternating generalized expectation-maximization algorithm," IEEE Transactions on Signal Processing, vol. 42, no. 10, pp. 2664–2677, Oct. 1994.

[31] K. Li and H. Liu, "Joint channel and carrier offset estimation in CDMA communications," IEEE Transactions on Signal Processing, vol. 47, no. 7, pp. 1811–1822, July 1999.

[32] P. Stoica and O. Besson, "Training sequence design for frequency offset and frequency-selective channel estimation," IEEE Transactions on Communications, vol. 51, no. 11, pp. 1910–1917, Nov. 2003.

[33] P. H. Moose, "A technique for orthogonal frequency division multiplexing frequency offset correction," IEEE Transactions on Communications, vol. 42, no. 10, pp. 2908–2914, Oct. 1994.

[34] T. M. Schmidl and D. C. Cox, "Robust frequency and timing synchronization for OFDM," IEEE Transactions on Communications, vol. 45, no. 12, pp. 1613–1621, Dec. 1997.

[35] D. M. Young, Iterative Solution of Large Linear Systems. New York: Academic Press, 1971.

[36] 3GPP TSG RAN, "Spatial channel model for Multiple Input Multiple Output (MIMO) simulations," Technical Report, Dec. 2008.

[37] G. Cardano and T. R. Witmer, The Great Art, or: The Rules of Algebra, 1st ed. Cambridge: MIT Press, 1968.
Appendix A

Derivation of the Discrete CIR Model

For a multipath CIR described in (3.2), each resolvable path consists of a number of non-resolvable components, so that γl(t) can be expressed as

\gamma_l(t) = \sum_i \alpha_{l,i}\, e^{-j\phi_{l,i}(t)}    (A.1)

where αl,i and φl,i(t) are, respectively, the complex amplitude and the phase shift incurred by the ith non-resolvable component of the lth resolvable path. The phase shift φl,i(t) is due to the time delay and Doppler frequency shift and is given by

\phi_{l,i}(t) = 2\pi f_c\tau_l - 2\pi f_{D_{l,i}}\,t    (A.2)

where fc is the carrier frequency and f_{D_{l,i}} is the Doppler frequency shift of the ith non-resolvable component of the lth resolvable path. In this work, we consider an uplink transmission in a macrocellular system, where the angular spread of the non-resolvable components is small [36]. Consequently, we can assume f_{D_{l,i}} = f_{D_l} and

\phi_{l,i}(t) = 2\pi f_c\tau_l - 2\pi f_{D_l}\,t.    (A.3)

Applying (A.3) to (A.1), one obtains

\gamma_l(t) = e^{j2\pi f_{D_l}t}\underbrace{\sum_i \alpha_{l,i}\, e^{-j2\pi f_c\tau_l}}_{h_l}    (A.4)

and can rewrite (3.2) as

h(\tau, t) = \sum_{l=1}^{L} h_l\, e^{j2\pi f_{D_l}t}\,\delta(\tau - \tau_l).    (A.5)

Therefore, the discrete-time CIR can be expressed as

h[m,n] \triangleq h(mT_{sa}, nT_{sa}) = \sum_{l=1}^{L} h_l\, e^{j2\pi f_{D_l}T_s\,n/N}\,\delta\!\left(mT_{sa} - T_{sa}\lfloor\tau_l/T_{sa}\rfloor\right) = \sum_{l=1}^{L} h_l\, e^{j2\pi f_l\,n/N}\,\delta[m - n_l]    (A.6)

where T_{sa} is the sampling interval defined as T_{sa} = T_s/N, T_s is the OFDM symbol duration, f_l is the Doppler frequency shift of the lth multipath normalized by 1/T_s, i.e., f_l = f_{D_l}T_s, and n_l is the corresponding delay in samples, i.e., n_l = \lfloor\tau_l/T_{sa}\rfloor.

Appendix B

Derivation of (3.20)

Recalling (3.8) and (3.13), the cost function we wish to minimize is

J(h_l, f_l) = \left|\vec{Y} - \sum_{l=1}^{L}h_l\left(\vec{A}_l + \vec{B}_l f_l + \vec{C}_l f_l^2 + \cdots\right)\right|^2 = \left[\vec{Y} - \sum_{l=1}^{L}h_l\left(\vec{A}_l + \vec{B}_l f_l + \vec{C}_l f_l^2 + \cdots\right)\right]^H\left[\vec{Y} - \sum_{l=1}^{L}h_l\left(\vec{A}_l + \vec{B}_l f_l + \vec{C}_l f_l^2 + \cdots\right)\right]    (B.1)

where (·)^H denotes the Hermitian transpose of the argument matrix. According to (30), (31), and (32) in [31], we can write

\Delta\vec{\theta}_i = \begin{bmatrix}\Re\{\Delta h_i\}\\ \Im\{\Delta h_i\}\\ \Delta f_i\end{bmatrix}    (B.2)

\vec{n}_i = -\begin{bmatrix}\partial J(h_l,f_l)/\partial\Re\{h_i\}\\ \partial J(h_l,f_l)/\partial\Im\{h_i\}\\ \partial J(h_l,f_l)/\partial f_i\end{bmatrix}    (B.3)

and

R_{ij} = \begin{bmatrix}\frac{\partial^2 J(h_l,f_l)}{\partial\Re\{h_i\}\partial\Re\{h_j\}} & \frac{\partial^2 J(h_l,f_l)}{\partial\Re\{h_i\}\partial\Im\{h_j\}} & \frac{\partial^2 J(h_l,f_l)}{\partial\Re\{h_i\}\partial f_j}\\[4pt] \frac{\partial^2 J(h_l,f_l)}{\partial\Im\{h_i\}\partial\Re\{h_j\}} & \frac{\partial^2 J(h_l,f_l)}{\partial\Im\{h_i\}\partial\Im\{h_j\}} & \frac{\partial^2 J(h_l,f_l)}{\partial\Im\{h_i\}\partial f_j}\\[4pt] \frac{\partial^2 J(h_l,f_l)}{\partial f_i\partial\Re\{h_j\}} & \frac{\partial^2 J(h_l,f_l)}{\partial f_i\partial\Im\{h_j\}} & \frac{\partial^2 J(h_l,f_l)}{\partial f_i\partial f_j}\end{bmatrix}    (B.4)

where ℜ{·} and ℑ{·} denote the real and imaginary parts of the argument, and Δhi = ĥi − hi and Δfi = f̂i − fi are the perturbations of the estimates caused by the Gaussian noise ~ni.

In the following, we calculate the elements of ~ni. Using (B.1), one obtains

\frac{\partial J(h_l,f_l)}{\partial\Re\{h_i\}} = \left[-\left(\vec{A}_i + \vec{B}_i f_i + \vec{C}_i f_i^2 + \cdots\right)\right]^H\left[\vec{Y} - \sum_{l=1}^{L}h_l\left(\vec{A}_l + \vec{B}_l f_l + \vec{C}_l f_l^2 + \cdots\right)\right] + \left[\vec{Y} - \sum_{l=1}^{L}h_l\left(\vec{A}_l + \vec{B}_l f_l + \vec{C}_l f_l^2 + \cdots\right)\right]^H\left[-\left(\vec{A}_i + \vec{B}_i f_i + \vec{C}_i f_i^2 + \cdots\right)\right].    (B.5)

Recalling from (3.8), we have

\vec{w} = \vec{Y} - \sum_{l=1}^{L}h_l\left(\vec{A}_l + \vec{B}_l f_l + \vec{C}_l f_l^2 + \cdots\right)    (B.6)

and can rewrite (B.5) as

\frac{\partial J(h_l,f_l)}{\partial\Re\{h_i\}} = \left[-\left(\vec{A}_i + \vec{B}_i f_i + \vec{C}_i f_i^2 + \cdots\right)\right]^H\vec{w} + \vec{w}^H\left[-\left(\vec{A}_i + \vec{B}_i f_i + \vec{C}_i f_i^2 + \cdots\right)\right].    (B.7)

Using the assumption that the fl's are small, we can neglect the terms containing fi in (B.7) and further simplify it as

\frac{\partial J(h_l,f_l)}{\partial\Re\{h_i\}} \approx \left(-\vec{A}_i\right)^H\vec{w} + \vec{w}^H\left(-\vec{A}_i\right) = \left(-\vec{w}^H\vec{A}_i\right)^H + \left(-\vec{w}^H\vec{A}_i\right) = -2\,\Re\left\{\vec{w}^H\vec{A}_i\right\}.    (B.8)

In a similar way, we obtain

\frac{\partial J(h_l,f_l)}{\partial\Im\{h_i\}} = 2\,\Im\left\{\vec{w}^H\vec{A}_i\right\}    (B.9)

and

\frac{\partial J(h_l,f_l)}{\partial f_i} = -2\,\Re\left\{h_i\,\vec{w}^H\vec{B}_i\right\}    (B.10)
so that we obtain

\vec{n}_i = -\begin{bmatrix}\partial J(h_l,f_l)/\partial\Re\{h_i\}\\ \partial J(h_l,f_l)/\partial\Im\{h_i\}\\ \partial J(h_l,f_l)/\partial f_i\end{bmatrix} = 2\begin{bmatrix}\Re\left\{\vec{w}^H\vec{A}_i\right\}\\ -\Im\left\{\vec{w}^H\vec{A}_i\right\}\\ \Re\left\{h_i\,\vec{w}^H\vec{B}_i\right\}\end{bmatrix}.    (B.11)

Based on the derivation above, it is straightforward to calculate the elements of R_{ij} in a similar way and obtain

R_{ij} = 2\begin{bmatrix}\Re\left\{\vec{A}_j^H\vec{A}_i\right\} & \Im\left\{\vec{A}_j^H\vec{A}_i\right\} & \Re\left\{h_j^*\vec{B}_j^H\vec{A}_i\right\}\\ -\Im\left\{\vec{A}_j^H\vec{A}_i\right\} & \Re\left\{\vec{A}_j^H\vec{A}_i\right\} & -\Im\left\{h_j^*\vec{B}_j^H\vec{A}_i\right\}\\ \Re\left\{h_i\vec{A}_j^H\vec{B}_i\right\} & \Im\left\{h_i\vec{A}_j^H\vec{B}_i\right\} & \Re\left\{h_i h_j^*\vec{B}_j^H\vec{B}_i\right\}\end{bmatrix}    (B.12)

where (·)^* denotes the complex conjugate of the argument. Applying (B.11) and (B.12) to (32) in [31], we obtain (3.20).

Appendix C

Derivation of (3.30c)

From (3.7) and (3.8), we have

\vec{B}_l^H\vec{B}_l = \sum_{k=0}^{N-1} B_l^H[k]\,B_l[k].    (C.1)

Applying (3.9b) to (C.1) and using the uncorrelatedness of the white training sequence, one obtains

\vec{B}_l^H\vec{B}_l = \frac{\pi^2(N-1)^2}{N^2}\sum_{k=0}^{N-1}|X[k]|^2 + \frac{4\pi^2}{N^2}\sum_{k=0}^{N-1}\sum_{\substack{k'=0\\ k'\neq k}}^{N-1}\frac{1}{|1-\omega|^2}\,|X[k']|^2    (C.2)

where \omega = \exp\!\left(j2\pi(k'-k)/N\right). Assuming |X[k]| = 1/\sqrt{N} for k = 0,1,\ldots,N-1, we can simplify (C.2) to

\vec{B}_l^H\vec{B}_l = \frac{\pi^2(N-1)^2}{N^2} + \frac{4\pi^2}{N^3}\sum_{k=0}^{N-1}\sum_{\substack{k'=0\\ k'\neq k}}^{N-1}\frac{1}{|1-\omega|^2}.    (C.3)

For the sake of simplicity, we denote

s[k] \triangleq \sum_{\substack{k'=0\\ k'\neq k}}^{N-1}\frac{1}{|1-\omega|^2} = \sum_{\substack{k'=0\\ k'\neq k}}^{N-1}\frac{1}{\left|1-e^{j2\pi(k'-k)/N}\right|^2}, \qquad k = 0,1,\ldots,N-1    (C.4)

and rewrite (C.3) as

\vec{B}_l^H\vec{B}_l = \frac{\pi^2(N-1)^2}{N^2} + \frac{4\pi^2}{N^3}\sum_{k=0}^{N-1}s[k].    (C.5)

For a fixed N, exploiting the fact that s[k] takes the same value for every k = 0,1,\ldots,N-1, one can write

s[k] \triangleq s = \sum_{n=1}^{N-1}\frac{1}{\left|1-e^{j2\pi n/N}\right|^2}, \qquad k = 0,1,\ldots,N-1    (C.6)

and further simplify (C.5) to

\vec{B}_l^H\vec{B}_l = \frac{\pi^2(N-1)^2}{N^2} + \frac{4\pi^2}{N^2}\,s.    (C.7)

Using Parseval's relationship, one can show that

s = \sum_{n=1}^{N-1}\frac{1}{\left|1-e^{j2\pi n/N}\right|^2} = \frac{1}{4}\sum_{n=1}^{N-1}\frac{1}{\sin^2(n\pi/N)} = \frac{(N-1)(N+1)}{12}.    (C.8)

Finally, substituting (C.8) into (C.7), one obtains (3.30c) as desired, i.e.,

\vec{B}_l^H\vec{B}_l = \frac{2\pi^2(N-1)(2N-1)}{3N^2}.    (C.9)

Appendix D

Derivation of the Absolute Value of the Largest Eigenvalue of M_\eta^{-1}N_\eta

According to algebraic theory, the roots of a cubic equation

x^3 + \alpha x^2 + \beta x + \gamma = 0    (D.1)

are given by [37]

x = v - u - \frac{\alpha}{3}    (D.2)

where

u = \sqrt[3]{\frac{q}{2} \pm \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}    (D.3a)
v = \frac{p}{3u}    (D.3b)
p = \beta - \frac{\alpha^2}{3}    (D.3c)
q = \gamma + \frac{2\alpha^3 - 9\alpha\beta}{27}.    (D.3d)

After some manipulations, one finds that for the characteristic equation (4.13) the parameters p and q are given by

p = -\frac{3(1-\eta)+\eta^2 k_l}{3}\,\eta^2 k_l    (D.4a)

and

q = -\frac{1}{27}\left[9(1-\eta)+2\eta^2 k_l\right]\eta^4 k_l^2.    (D.4b)

To minimize the largest absolute value of the roots of (4.13), we let

\delta = \frac{q^2}{4} + \frac{p^3}{27} = 0    (D.5)

which forces the cubic to have repeated roots. Applying (D.4) to (D.5), one can show that η must satisfy

\eta^2 k_l - 4\eta + 4 = 0    (D.6)

or

\eta = \frac{2\left(1-\sqrt{1-k_l}\right)}{k_l}.    (D.7)

After substituting this η into u and v, it can be shown that

u = \sqrt[3]{\frac{q}{2}} = -\frac{2}{3}(1-\eta)    (D.8a)
v = \frac{2}{3}(1-\eta)    (D.8b)
x = v - u - \frac{\alpha}{3} = 1-\eta.    (D.8c)

Thus, the absolute value of the largest eigenvalue of M_\eta^{-1}N_\eta is

|\lambda| = |1-\eta| = \left|1 - \frac{2\left(1-\sqrt{1-k_l}\right)}{k_l}\right|.    (D.9)
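As a quick numerical sanity check (not part of the thesis simulations), the following short MATLAB snippet verifies the closed-form sum (C.8) and the limiting values kl → 3/4 and η → 4/3 of (4.16)-(4.17) for a representative FFT size; N = 256 is chosen only for illustration.

% Illustrative numerical check of (C.8) and of the limits in (4.16)-(4.17).
N = 256;                                              % representative FFT size
s = sum(1 ./ abs(1 - exp(1j*2*pi*(1:N-1)/N)).^2);     % left-hand side of (C.8)
fprintf('s = %.4f, (N-1)(N+1)/12 = %.4f\n', s, (N-1)*(N+1)/12);

kl  = 3*(N-1) / (2*(2*N-1));                          % eq. (4.16)
eta = 2*(1 - sqrt(1 - kl)) / kl;                      % eq. (4.14)
fprintf('kl = %.4f (limit 3/4), eta = %.4f (limit 4/3)\n', kl, eta);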
Appendix E

Matlab Code

%% Main Function %%

clc;
clear;

%% Parameter Initialization %%
REPEAT = 2000;             % Number of repeat times, for purpose of average
Nfft = 256;                % Size of FFT
L = 3;                     % L-path channel
% fl = ones(1, L);         % Normalized Doppler frequency (1 by L vector)
% hl = zeros(1, L);        % Amplitude for the path (1 by L vector)
nl = 0 : (L - 1);          % Delay of the path (1 by L vector)
n_modu_type = 1;           % The type of modulation scheme
                           % 1=BPSK, 2=QPSK, 4=16QAM, 6=64QAM
Tx = 1;                    % Tx=1 => Transmitter; Tx=0 => Receiver
SNR = -10 : 5 : 40;
hl = [(0.2944+1.6236j) (-1.3362-0.6918j) (0.7143+0.8580j)];
fl = [0.02 0.04 0.06];
delta_h_2 = 0.01;          % Stopping threshold
delta_f_2 = 0.01;          % Stopping threshold
hl_initial = ones(2, L);
fl_initial = ones(2, L);
hl_initial = (1 + 1j) * hl_initial;   % Initial value
fl_initial = 0.05 * fl_initial;       % Initial value

%% For Algorithm 1 %%
hl_hat_ave_p2_A1 = zeros(length(SNR), L);
fl_hat_ave_p2_A1 = zeros(length(SNR), L);
hl_MSE_p2_A1 = zeros(length(SNR), L);
fl_MSE_p2_A1 = zeros(length(SNR), L);
hl_err_ave_p2_A1 = zeros(length(SNR), L);
fl_err_ave_p2_A1 = zeros(length(SNR), L);
iteration_p2_A1 = zeros(length(SNR), 1);

%% For Algorithm 2 %%
hl_hat_ave_p2_A2 = zeros(length(SNR), L);
fl_hat_ave_p2_A2 = zeros(length(SNR), L);
hl_MSE_p2_A2 = zeros(length(SNR), L);
fl_MSE_p2_A2 = zeros(length(SNR), L);
hl_err_ave_p2_A2 = zeros(length(SNR), L);
fl_err_ave_p2_A2 = zeros(length(SNR), L);
iteration_p2_A2 = zeros(length(SNR), 1);

%% For theoretical value %%
nvar = zeros(length(SNR), 1);             % Noise variance
theoretic_fl = zeros(length(SNR), (L + 1));

for snr_index = 1 : length(SNR)
    %% For Algorithm 1 %%
    hl_hat_temp_p2_A1 = zeros(1, L);
    fl_hat_temp_p2_A1 = zeros(1, L);
    hl_MSE_temp_p2_A1 = zeros(1, L);
    fl_MSE_temp_p2_A1 = zeros(1, L);
    hl_err_temp_p2_A1 = zeros(1, L);
    fl_err_temp_p2_A1 = zeros(1, L);
    Iteration_temp_p2_A1 = 0;
    %% For Algorithm 2 %%
    hl_hat_temp_p2_A2 = zeros(1, L);
    fl_hat_temp_p2_A2 = zeros(1, L);
    hl_MSE_temp_p2_A2 = zeros(1, L);
    fl_MSE_temp_p2_A2 = zeros(1, L);
    hl_err_temp_p2_A2 = zeros(1, L);
    fl_err_temp_p2_A2 = zeros(1, L);
    Iteration_temp_p2_A2 = 0;
    %% For theoretical value %%
    nvar_temp = 0;                        % Noise variance

    for repeat_index = 1 : REPEAT
        %% Generate Data %%
        Xbit = round(rand(1, (Nfft * n_modu_type)));
        % Xbit = randint(1, Nfft * n_modu_type);

        %% Modulation (Gray Mapping) %%
        X = mapper(Xbit, n_modu_type, Tx);
        X = X / sqrt((X * X'));           %%% Normalize X

        %% Calculate Coefficients %%
        [Al, Bl, Cl, Dl] = calcu_coeff(Nfft, L, nl, X);

        %% Calculate the output of FFT (Y[k]) using Eq. 4 %%
        [Y, nvar_i] = calcu_corrupted(Nfft, L, fl, hl, nl, X, SNR(snr_index));
        nvar_temp = nvar_temp + nvar_i;

        %% Channel Estimation using Algorithm 1 %%
        temp_iter_p2 = 0;
        p = 2;                            % The order of the Taylor expansion.
% I t can not work p rope r l y when p=1. [ h l ha t ave p2 A1 ( snr index , : ) , f l ha t ave p2 A1 ( snr index , : ) , i t e r a t i o n p 2 A 1 ( snr index ) ] . . . = channel est A1 ( N f f t , L , p , Al , Bl , Cl , Dl , Y, de l ta h 2 , d e l t a f 2 , h l i n i t i a l , f l i n i t i a l ) ; i t e r a t i o n p 2 A 1 ( snr index ) = 1000 − i t e r a t i o n p 2 A 1 ( snr index ) ; %% Channel Es t imat ion using Algor i thm 2 %% [ h l ha t ave p2 A2 ( snr index , : ) , f l ha t ave p2 A2 ( snr index , : ) , i t e r a t i o n p 2 A 2 ( snr index ) ] . . . = channel est A2 ( N f f t , L , p , Al , Bl , Cl , Dl , Y, de l ta h 2 , d e l t a f 2 , h l i n i t i a l , f l i n i t i a l ) ; i t e r a t i o n p 2 A 2 ( snr index ) = 1000 − i t e r a t i o n p 2 A 2 ( snr index ) ; %% Calcu la te the MSE %% %% For Algor i thm 1 %% hl hat temp p2 A1=hl hat temp p2 A1+h l ha t ave p2 A1 ( snr index , : ) ; f l ha t temp p2 A1= f l ha t temp p2 A1+ f l ha t ave p2 A1 ( snr index , : ) ; h l temp p2 A1 = abs ( ( h l ha t ave p2 A1 ( snr index , : ) − h l ) ) . ˆ 2 ; f l temp p2 A1 = abs ( ( f l ha t ave p2 A1 ( snr index , : ) − f l ) ) . ˆ 2 ; hl MSE temp p2 A1 = hl MSE temp p2 A1 + hl temp p2 A1 ; fl MSE temp p2 A1 = fl MSE temp p2 A1 + f l temp p2 A1 ; I t e ra t i on temp p2 A1 = I te ra t i on temp p2 A1 + i t e r a t i o n p 2 A 1 ( snr index ) ; %% For Algor i thm 2 %% hl hat temp p2 A2=hl hat temp p2 A2+h l ha t ave p2 A2 ( snr index , : ) ; f l ha t temp p2 A2= f l ha t temp p2 A2+ f l ha t ave p2 A2 ( snr index , : ) ; h l temp p2 A2 = abs ( ( h l ha t ave p2 A2 ( snr index , : ) − h l ) ) . ˆ 2 ; f l temp p2 A2 = abs ( ( f l ha t ave p2 A2 ( snr index , : ) − f l ) ) . ˆ 2 ; hl MSE temp p2 A2 = hl MSE temp p2 A2 + hl temp p2 A2 ; fl MSE temp p2 A2 = fl MSE temp p2 A2 + f l temp p2 A2 ; I t e ra t i on temp p2 A2 = I te ra t i on temp p2 A2 + i t e r a t i o n p 2 A 2 ( snr index ) ; end 67 Appendix E. Matlab Code %% For Algor i thm 1 %% hl hat ave p2 A1 ( snr index , : ) = h l hat temp p2 A1 / REPEAT; f l ha t ave p2 A1 ( snr index , : ) = f l ha t temp p2 A1 / REPEAT; hl MSE p2 A1 ( snr index , : ) = hl MSE temp p2 A1 / REPEAT; fl MSE p2 A1 ( snr index , : ) = fl MSE temp p2 A1 / REPEAT; h l e r r ave p2 A1 ( snr index , : ) = h l ha t ave p2 A1 ( snr index , : ) − h l ; f l e r r a v e p 2 A 1 ( snr index , : ) = f l ha t ave p2 A1 ( snr index , : ) − f l ; i t e r a t i o n p 2 A 1 ( snr index ) = I te ra t i on temp p2 A1 / REPEAT; %% For Algor i thm 2 %% hl hat ave p2 A2 ( snr index , : ) = h l hat temp p2 A2 / REPEAT; f l ha t ave p2 A2 ( snr index , : ) = f l ha t temp p2 A2 / REPEAT; hl MSE p2 A2 ( snr index , : ) = hl MSE temp p2 A2 / REPEAT; fl MSE p2 A2 ( snr index , : ) = fl MSE temp p2 A2 / REPEAT; h l e r r ave p2 A2 ( snr index , : ) = h l ha t ave p2 A2 ( snr index , : ) − h l ; f l e r r a v e p 2 A 2 ( snr index , : ) = f l ha t ave p2 A2 ( snr index , : ) − f l ; i t e r a t i o n p 2 A 2 ( snr index ) = I te ra t i on temp p2 A2 / REPEAT; %% For t h e o r e t i c a l value %% nvar ( sn r index ) = nvar temp / REPEAT; end %% Plo t F igure %% hl temp p2 A1 = zeros ( leng th (SNR) , 1 ) ; f l temp p2 A1 = zeros ( leng th (SNR) , 1 ) ; t h e o r e t i c h l = nvar ∗ 5 / 2 ; h l normal = abs ( h l ) . 
ˆ 2 ; f o r i = 1 : L t h e o r e t i c f l ( : , i ) = nvar / h l normal ( i ) ; t h e o r e t i c f l ( : , ( L + 1 ) ) = t h e o r e t i c f l ( : , ( L + 1) )+ t h e o r e t i c f l ( : , i ) ; end t h e o r e t i c f l = t h e o r e t i c f l / 2 / p i ˆ2 ∗ 3; t h e o r e t i c f l ( : , ( L + 1 ) ) = t h e o r e t i c f l ( : , ( L + 1 ) ) / L ; hl temp p2 A2 = zeros ( leng th (SNR) , 1 ) ; f l temp p2 A2 = zeros ( leng th (SNR) , 1 ) ; f o r i = 1 : L hl temp p2 A1 = hl MSE p2 A1 ( : , i ) + hl temp p2 A1 ; hl temp p2 A2 = hl MSE p2 A2 ( : , i ) + hl temp p2 A2 ; end hl ave MSE p2 A1 = hl temp p2 A1 ’ / L ; hl ave MSE p2 A2 = hl temp p2 A2 ’ / L ; hl ave MSE p2 A1 = 10 ∗ log10 ( hl ave MSE p2 A1 ) ; 68 Appendix E. Matlab Code f l ave MSE p2 A1 = 10 ∗ log10 ( fl MSE p2 A1 ) ; hl ave MSE p2 A2 = 10 ∗ log10 ( hl ave MSE p2 A2 ) ; f l ave MSE p2 A2 = 10 ∗ log10 ( fl MSE p2 A2 ) ; t h e o r e t i c h l f i n a l = 10 ∗ log10 ( t h e o r e t i c h l ) ; t h e o r e t i c f l f i n a l = 10 ∗ log10 ( t h e o r e t i c f l ) ; f o r p = 2 : 3 swi tch p case 2 p l o t (SNR, hl ave MSE p2 A1 , ’−ko ’ ) ; hold on p l o t (SNR, hl ave MSE p2 A2 , ’−mp ’ ) ; hold on p l o t (SNR, t h e o r e t i c h l f i n a l , ’−bd ’ ) ; g r i d on ; legend ( ’ A lgor i thm 1 ’ , ’ A lgor i thm 2 ’ , ’ Theo re t i ca l value ’ ) ; x l a b e l ( ’ Average SNR (dB ) ’ , ’ i n t e r p r e t e r ’ , ’ l a tex ’ ) ; y l a b e l ( ’ Average MSE of $\hat h l $ (dB ) ’ , ’ i n t e r p r e t e r ’ , ’ l a tex ’ ) ; end end f o r i = 1 : L f i g u r e p l o t (SNR, f l ave MSE p2 A1 ( : , i ) , ’−ko ’ ) ; hold on p l o t (SNR, f l ave MSE p2 A2 ( : , i ) , ’−mp ’ ) ; hold on p l o t (SNR, t h e o r e t i c f l f i n a l ( : , i ) , ’−bd ’ ) ; g r i d on ; legend ( ’ A lgor i thm 1 ’ , ’ A lgor i thm 2 ’ , ’ Theo re t i ca l value ’ ) ; x l a b e l ( ’ Average SNR (dB ) ’ , ’ i n t e r p r e t e r ’ , ’ l a tex ’ ) ; y l a b e l ( ’MSE of $\hat f l $ (dB ) ’ , ’ i n t e r p r e t e r ’ , ’ l a tex ’ ) ; end x d = SNR( 3 : end ) ; y d A1 = i t e r a t i o n p 2 A 1 ( 3 : end ) ; y d A2 = i t e r a t i o n p 2 A 2 ( 3 : end ) ; f i g u r e p l o t ( x d , y d A1 , ’−ko ’ ) ; hold on p l o t ( x d , y d A2 , ’−mp ’ ) ; hold on g r i d on legend ( ’ A lgor i thm 1 ’ , ’ A lgor i thm 2 ’ ) ; x l a b e l ( ’ Average SNR (dB ) ’ , ’ i n t e r p r e t e r ’ , ’ l a tex ’ ) ; 69 Appendix E. Matlab Code y l a b e l ( ’ Average Number o f I t e r a t i o n s ’ , ’ i n t e r p r e t e r ’ , ’ l a tex ’ ) ; % The End % 70 Appendix E. Matlab Code f u n c t i o n data mapped = mapper ( da ta inpu t , n modu type , Tx ) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : mapear .m %% %% %% %% Funct ion : Map the inpu t b i t s i n t o the modulated symbols %% %% according to the modulat ion type decided by %% %% the parameter ’ n modu type ’ a t the t r a n s m i t o r . %% %% Or demap the rece ived symbols i n t o b i t s a t the %% %% rece i ve r . %% %% %% %% Parameters : da t a i n pu t => i npu t b i t s %% %% n modu type => modulat ion type %% %% (1=BPSK, 2=QPSK, 4=16QAM, 6=64QAM) %% %% Tx => f l a g i n d i c a t e s the t r a n s m i t o r or r ece i ve r %% %% (1= t ransmi to r , 0= rece i ve r ) %% %% %% %% Outputs : A mat r i x o f the mapped symbols according to the %% %% modulat ion scheme used at the t r a n s m i t o r s ide . %% %% Or a vec to r o f the demapped b i t s according to the %% %% modulat ion scheme . 
%% %% %% %% Not ice : The leng th o f ’ da ta inpu t ’ must be a i n t e g e r m u l t i p l e %% %% of ’ n modu type ’ a t the t r a n s m i t o r s ide . %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Decide the c o n s t e l l a t i o n f i r s t %% [M, M1, M2, type mapping , c ] = pa rame te rs cons te l l a t i on ( n modu type ) ; a lphabet = b i t symbo l (M, type mapping , M1, M2) ; i f n modu type ˜= 1 c o n s t e l l a t i o n g r a y = alphabet ( : , 3 ) + 1 j ∗ alphabet ( : , 2 ) ; e lse c o n s t e l l a t i o n g r a y = [−1 1 ] ’ ; end l = leng th ( da ta i npu t ) ; %% Real ize the mapping or demapping %% i f Tx==1 mat r i x da ta = reshape ( da ta inpu t , n modu type , l / n modu type ) ; m data decimal = bi2de ( mat r i x da ta ’ , ’ l e f t−msb ’ ) ; f o r i = 1 : ( l / n modu type ) 71 Appendix E. Matlab Code v data dec imal = m data decimal ( i ) ; v encoded = genqammod( v data dec imal , c o n s t e l l a t i o n g r a y ) ; output ( : , i ) = v encoded ; end data mapped = c .∗ output ; e l s e i f Tx==0 data normal ized = da ta i np u t . / c ; f o r i = 1 : l v data mapped = data normal ized ( i ) ; v decoded = genqamdemod( v data mapped , c o n s t e l l a t i o n g r a y ) ; data dec imal ( : , i ) = v decoded ; data mapped = de2bi ( data decimal , n modu type , ’ l e f t−msb ’ ) ’ ; data mapped = data mapped ( : ) ’ ; end end 72 Appendix E. Matlab Code f u n c t i o n [M, M1, M2, type mapping , c ] = pa rame te rs cons te l l a t i on ( n modu type ) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : pa rame te rs cons te l l a t i on .m %% %% %% %% Funct ion : Decide the c o n s t e l l a t i o n parameters according %% %% to modulat ion scheme to be used . %% %% %% %% Parameters : n modu type => type o f modulat ion scheme %% %% (1=BPSK, 2=QPSK, 4=16QAM, 6=64QAM) %% %% %% %% Outputs : The parameters which are used to c a l l another %% %% f u n c t i o n to c a l c u l a t e the c o n s t e l l a t i o n using %% %% Gray l a b e l i n g . %% %% %% %% Not ice : Here we should pay a t t e n t i o n to the parameter ’ c ’ %% %% We use i t to normal ize the c o n s t e l l a t i o n . %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% swi tch n modu type case 1 % For BPSK ( As a PAM) type mapping = ’MPSK’ ; % Only take i n t o account Aic M = 2; M1 = 0; M2 = 0; c = 1; case 2 % For QPSK ( L ike 4−QAM) type mapping = ’QAM’ ; c = 1 / s q r t ( 2 ) ; case 4 % For 16−QAM type mapping = ’QAM’ ; c = 1 / s q r t ( 1 0 ) ; case 6 % For 64−QAM type mapping = ’QAM’ ; c = 1 / s q r t ( 4 2 ) ; end i f n modu type ˜= 1 M = 0; M1 = s q r t ( 2 ˆ n modu type ) ; M2 = s q r t ( 2 ˆ n modu type ) ; end 73 Appendix E. Matlab Code f u n c t i o n [ a lphabet ] = b i t symbo l ( M, type , M1, M2) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : b i t symbo l .m %% %% %% %% Funct ion : Return the alphabet o f the c o n s t e l l a t i o n according %% %% to the modulat ion scheme used . Here the encoding %% %% i s done by Gray l a b e l i n g %% %% %% %% Parameters : M => t o t a l number o f po in t s i n the c o n s t e l l a t i o n %% %% I t should equal to 2ˆk , where k i s the number o f %% %% b i t s t h a t are grouped toge ther to form a symbol , %% %% except QAM, which can come as the product o f the %% %% arguments M1 and M2. %% %% type => type o f modulat ion scheme %% %% PAM, MPSK, QAM( rec tangu la r ) . 
%% %% M1 & M2 => number o f po in t s on each ax le %% %% %% %% Outputs : the alphabet o f the c o n s t e l l a t i o n according %% %% to the modulat ion scheme used . %% %% %% %% Not ice : When using QAM, M i s the product o f M1 by M2. %% %% Here the encoding i s done by Gray l a b e l i n %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Block i n i t i a l i z a t i o n %% i f strcmp ( type , ’QAM’ ) k1 = c e i l ( log2 (M1) ) ; k2 = c e i l ( log2 (M2) ) ; M1 = 2 ˆ k1 ; M2 = 2 ˆ k2 ; M = M1 ∗ M2; Aicd = zeros (1 , k1 ) ; Aisd = zeros (1 , k2 ) ; tab le1 = zeros (M1, 2 ) ; tab le2 = zeros (M2, 2 ) ; a lphabet = zeros (M, 3 ) ; d1 = 0 : 1 : M1−1; d1 = d1 ’ ; d2 = 0 : 1 : M2−1; d2 = d2 ’ ; 74 Appendix E. Matlab Code ind1 = bi2de ( f l i p l r ( gray2b i ( f l i p l r ( de2bi ( d1 ) ) ) ) ) ; tab le1 = [ d1 , ind1 + 1 ] ; ind2 = bi2de ( f l i p l r ( gray2b i ( f l i p l r ( de2bi ( d2 ) ) ) ) ) ; tab le2 = [ d2 , ind2 + 1 ] ; e lse k = c e i l ( log2 (M) ) ; M = 2 ˆ k ; Aicd = zeros (1 , k ) ; Aisd = zeros (1 , k ) ; t ab l e = zeros (M, 2 ) ; a lphabet = zeros (M, 3 ) ; d = 0 : 1 : M−1; d = d ’ ; ind = bi2de ( f l i p l r ( gray2b i ( f l i p l r ( de2bi ( d ) ) ) ) ) ; t ab l e = [ d , ind + 1 ] ; end %% Block computat ion %% i f strcmp ( type , ’PAM’ ) Aicd = −(M−1) : 2 : M−1; Aisd = [ ] ; % Use the alphabet f o r i =1 : M index = f i n d i n d e x ( i −1, t ab l e ) ; a lphabet ( i , : ) = [ i −1, Aicd ( index ) , 0 ] ; end e l s e i f strcmp ( type , ’MPSK’ ) angle = 0:2∗ p i /M:2∗ p i ∗(M−1)/M; Aicd = cos ( angle ) ; Aisd = s in ( angle ) ; % Use the alphabet f o r i =1 : M index = f i n d i n d e x ( i −1, t ab l e ) ; a lphabet ( i , : ) = [ i −1, Aicd ( index ) , Aisd ( index ) ] ; end e l s e i f strcmp ( type , ’QAM’ ) Aicd = −(M1−1) : 2 : M1−1; Aisd = (M2−1) : −2 : −(M2−1); % Use the alphabet f o r i =1 : M1 f o r j =1 : M2 index1 = f i n d i n d e x ( i −1, tab le1 ) ; index2 = f i n d i n d e x ( j −1, tab le2 ) ; 75 Appendix E. Matlab Code l = i + M1 ∗ ( j −1); a lphabet ( l , : ) = [ l −1, Aicd ( index1 ) , Aisd ( index2 ) ] ; end end end 76 Appendix E. Matlab Code f u n c t i o n b = gray2b i ( g ) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% GRAY2BI conver ts Gray encoded sequence i n t o the b inary %% %% sequence . I t i s assumed t h a t the most s i g n i f i c a n t b i t i s %% %% the l e f t hand s ide b i t . %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % copy the msb : b ( : , 1 ) = g ( : , 1 ) ; f o r i = 2 : s ize ( g , 2 ) , b ( : , i ) = xor ( b ( : , i −1) , g ( : , i ) ) ; end r e t u r n ; 77 Appendix E. Matlab Code f u n c t i o n i n d i c e = f i n d i n d e x (num, tab le ) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : f i n d i n d e x .m %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% dim = s ize ( t ab l e ) ; num col = dim ( 1 ) ; f o r i =1: num col i f num== tab le ( i , 1 ) ; i n d i c e = tab l e ( i , 2 ) ; r e t u r n end end 78 Appendix E. Matlab Code f u n c t i o n [ Al , Bl , Cl , Dl ] = c a l c u c o e f f ( N f f t , L , n l , X) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : c a l c u c o e f f .m %% %% %% %% Funct ion : Ca lcu la te the c o e f f i c i e n t s o f Equation 8 . %% %% Then output Al [ k ] , Bl [ k ] , Cl [ k ] and Dl [ k ] . 
%% %% %% %% Parameters : N f f t => Size o f FFT %% %% L => L−path channel %% %% n l => Delay o f the path %% %% X => Tra in ing sequence %% %% %% %% Outputs : Al , Bl , Cl and Dl %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Al = zeros ( L , N f f t ) ; f o r l = 1 : L f o r k = 1 : N f f t Al ( l , k ) = X( k ) ∗ exp(−1 j ∗ 2 ∗ p i ∗ ( k − 1) ∗ n l ( l ) / N f f t ) ; w = exp (1 j ∗ 2 ∗ p i ∗ ( ( 1 : N f f t ) − k + 1e−13) / N f f t ) ; temp = X .∗ exp(−1 j ∗ 2 ∗ p i ∗ (0 : ( N f f t − 1 ) ) ∗ n l ( l ) / N f f t ) . . . .∗ (−2 ∗ 1 j ∗ p i / N f f t . / (1 − w ) ) ; temp ( k ) = Al ( l , k ) ∗ 1 j ∗ p i ∗ ( N f f t − 1) / N f f t ; Bl ( l , k ) = sum( temp ) ; temp = X .∗ exp(−1 j ∗ 2 ∗ p i ∗ (0 : ( N f f t − 1 ) ) ∗ n l ( l ) / N f f t ) . . . .∗ (2 ∗ p i ˆ2 .∗ (1 − 2 ∗ w . / (w−1) / N f f t ) . / (1−w) / N f f t ) ; temp ( k ) = −Al ( l , k ) ∗ p i ˆ2 ∗ (2 / 3 − 2 / 3 / N f f t / N f f t − ( N f f t −1) / N f f t / N f f t ) ; Cl ( l , k ) = sum( temp ) ; temp = X .∗ exp(−1 j ∗ 2 ∗ p i ∗ (0 : ( N f f t − 1 ) ) ∗ n l ( l ) / N f f t ) . . . .∗ (1 j ∗ 4 ∗ p i ˆ3 ∗ ( N f f t ˆ2 ∗ (1−w) . ˆ 2 + 3 ∗ w .∗ ( N f f t + 1 . . . + w + N f f t ∗ w) ) / 3 / N f f t ˆ3 / (1−w ) . ˆ 3 ) ; temp ( k ) = −Al ( l , k ) ∗ 1 j ∗ p i ˆ3 ∗ ( N f f t −1)ˆ2 / 3 / N f f t ˆ 2 ; Dl ( l , k ) = sum( temp ) ; end end 79 Appendix E. Matlab Code f u n c t i o n [Y, nvar ] = ca lcu co r rup ted ( N f f t , L , f l , h l , n l , X , SNR) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : ca l cu co r rup ted .m %% %% %% %% Funct ion : Ca lcu la te the cor rupted s igna l , undergoing fading , %% %% adding AWGN according to Equation ( 4 ) . %% %% %% %% Parameters : N f f t => Size o f FFT %% %% L => L−path channel %% %% f l => Normalized Doppler frequency %% %% h l => Complex valued channel gain %% %% n l => Delay o f the path %% %% X => Tra in ing sequence %% %% SNR => SNR value %% %% %% %% Outputs : The cor rupted s i g n a l a t the rece i ve r s ide . %% %% %% %% Not ice : We do not add AWGN i f we j u s t want the cor rupted %% %% s i g n a l f o r c a l c u l a t i n g the equ iva len t SNR to %% %% q u a n t i f y the approximat ion . Then we only have %% %% 6 inpu t arguments w i thou t SNR. %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% swi tch narg in case 6 Y = zeros (1 , N f f t ) ; R = zeros ( L , 2∗N f f t −1); kk = (1 − N f f t ) : ( N f f t − 1 ) ; % Calcu la te the rece ived s i g n a l w i thou t AWGN f o r l = 1 : L R( l , : ) = (1 − exp ( j ∗ 2 ∗ p i ∗ f l ( l ) ) ) . / (1 − exp ( j ∗ 2 . . . ∗ p i .∗ ( kk + f l ( l ) + 1e−13) / N f f t ) ) / N f f t ; end f o r k = 1 : N f f t f o r l = 1 : L f o r k = 1 : N f f t Y( k ) = Y( k ) + exp(− j ∗ 2 ∗ p i ∗ ( k −1) ∗ n l ( l ) / . . . N f f t ) ∗ X( k ) ∗ h l ( l ) ∗ R( l , ( N f f t−k+k ) ) ; end 80 Appendix E. Matlab Code end end nvar = 0 ; case 7 SNR l = 10 ˆ (SNR / 10 ) ; Y = zeros (1 , N f f t ) ; R = zeros ( L , 2∗N f f t −1); kk = (1 − N f f t ) : ( N f f t − 1 ) ; % Calcu la te the rece ived s i g n a l w i thou t AWGN f o r l = 1 : L R( l , : ) = (1 − exp ( j ∗ 2 ∗ p i ∗ f l ( l ) ) ) . / (1 − exp ( j ∗ 2 . . . ∗ p i .∗ ( kk + f l ( l ) + 1e−13) / N f f t ) ) / N f f t ; end f o r k = 1 : N f f t f o r l = 1 : L f o r k = 1 : N f f t Y( k ) = Y( k ) + exp(− j ∗ 2 ∗ p i ∗ ( k −1) ∗ n l ( l ) / . . . N f f t ) ∗ X( k ) ∗ h l ( l ) ∗ R( l , ( N f f t−k+k ) ) ; end end end % Add AWGN to the s i g n a l Es = mean( abs (Y) . 
ˆ 2 ) ; nvar = Es / SNR l ; AWGNoise = s q r t ( nvar / 2) .∗ ( randn (1 , N f f t ) + 1 j .∗ randn (1 , N f f t ) ) ; Y = Y + AWGNoise ; end 81 Appendix E. Matlab Code f u n c t i o n [ h l ha t , f l h a t , count ] = channel est A1 ( N f f t , L , p , Al , Bl , . . . Cl , Dl , Y, de l ta h , d e l t a f , h l i n i t i a l , f l i n i t i a l ) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : channel est A1 .m %% %% %% %% Funct ion : Est imate the channel parameters ( h l & f l ) using Algor i thm 1.%% %% %% %% Parameters : %% %% N f f t => Size o f FFT %% %% L => L−path channel %% %% p => the order o f Tay lor Ser ies Expansion %% %% Al , Bl , Cl , Dl => the c o e f f i c i e n t s o f the approx imat ion %% %% Y => the rece ived s i g n a l %% %% d e l t a h => the th resho ld o f h l ( stops the i t e r a t i o n ) %% %% d e l t a f => the th resho ld o f f l %% %% h l i n i t i a l => the i n i t i a l value o f est imate ’ h l ha t ’ %% %% f l i n i t i a l => the i n i t i a l value o f est imate ’ f l h a t ’ %% %% %% %% Outputs : The esimates ’ h l ha t ’ , ’ f l h a t ’ and the i t e r a t i o n ’ count ’ %% %% %% %% Not ice : The order o f accuracy depends on N f f t and p . We have %% %% d i f f e r e n t accuracy f o r d i f f e r e n t path . The i t e r a t i o n t ime %% %% depends on the i n i t i a l value and the th resho ld . %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% I n i t i a l i z e h l and f l %%% i f narg in == 10 YYY = Y . ’ ; AAA = Al . ’ ; BBB = Bl . ’ ; temp H = inv ( Al ∗ AAA) ∗ Al ∗ YYY; temp F = inv ( Bl ∗ BBB) ∗ Bl ∗ (YYY − AAA ∗ temp H ) . / temp H ; h l i n i t i a l = [ temp H temp H ] ; f l i n i t i a l = [ temp F temp F ] ; h l i n i t i a l = h l i n i t i a l . ’ ; f l i n i t i a l = f l i n i t i a l . ’ ; end count = 0 ; h l h a t = zeros (1 , L ) ; f l h a t = zeros (1 , L ) ; 82 Appendix E. Matlab Code swi tch p case 2 % second order approx imat ion i t e r a t i o n = 1000; wh i le ( i t e r a t i o n ) f o r l = 1 : L Alpha h = zeros (1 , N f f t ) ; A lpha f = zeros (1 , N f f t ) ; i f l == 1 f o r k = 2 : L Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) . . . + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) ) ; end e l s e i f l == L f o r k = 1 : L − 1 Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) . . . + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) ) ; end else f o r k = 1 : l − 1 Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) . . . + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) ) ; end f o r k = l + 1 : L Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) . . . 
+ f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) ) ; end end % update h l Alpha h = Y − Alpha h ; Beta h = Al ( l , : ) + f l i n i t i a l (2 , l ) .∗ Bl ( l , : ) + f l i n i t i a l (2 , l ) ˆ 2 .∗ Cl ( l , : ) ; temp = h l i n i t i a l (2 , l ) ; h l i n i t i a l (2 , l ) = Beta h ∗ Alpha h ’ / ( Beta h ∗ Beta h ’ ) ; h l i n i t i a l (2 , l ) = conj ( h l i n i t i a l (2 , l ) ) ; h l i n i t i a l (1 , l ) = temp ; % update f l A lpha f = Alpha h − h l i n i t i a l (2 , l ) .∗ Al ( l , : ) ; Be ta f = h l i n i t i a l (2 , l ) .∗ Bl ( l , : ) ; E t a f = h l i n i t i a l (2 , l ) .∗ Cl ( l , : ) ; a = E t a f ∗ Eta f ’ ∗ 4; b = ( Be ta f ∗ Eta f ’ + E t a f ∗ Beta f ’ ) ∗ 3; c = ( Be ta f ∗ Beta f ’ − Alpha f ∗ Eta f ’ − E t a f ∗ Alpha f ’ ) ∗ 2; d = −(A lpha f ∗ Beta f ’ + Be ta f ∗ Alpha f ’ ) ; ppp = [ a b c d ] ; 83 Appendix E. Matlab Code r r r = roo ts ( ppp ) ; temp = f l i n i t i a l (2 , l ) ; f l i n i t i a l (2 , l ) = r r r ( leng th ( r r r ) ) ; f l i n i t i a l (1 , l ) = temp ; end i f (max( abs ( h l i n i t i a l (2 , : ) − h l i n i t i a l (1 , : ) ) . / abs ( h l i n i t i a l (1 , : ) ) ) < de l t a h ) . . . && (max( abs ( f l i n i t i a l (2 , : ) − f l i n i t i a l (1 , : ) ) . / abs ( f l i n i t i a l (1 , : ) ) ) < d e l t a f ) count = i t e r a t i o n ; i t e r a t i o n = 0; e lse i t e r a t i o n = i t e r a t i o n − 1; end end h l h a t = h l i n i t i a l (2 , : ) ; f l h a t = f l i n i t i a l (2 , : ) ; case 3 % t h i r d order approx imat ion i t e r a t i o n = 1000; wh i le ( i t e r a t i o n ) f o r l = 1 : L Alpha h = zeros (1 , N f f t ) ; A lpha f = zeros (1 , N f f t ) ; i f l == 1 f o r k = 2 : L Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) . . . + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 3 .∗ Dl ( k , : ) ) ; end e l s e i f l == L f o r k = 1 : L − 1 Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) . . . + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 3 .∗ Dl ( k , : ) ) ; end else f o r k = 1 : l − 1 Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) . . . + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 3 .∗ Dl ( k , : ) ) ; end f o r k = l + 1 : L Alpha h = Alpha h + h l i n i t i a l (2 , k ) .∗ ( Al ( k , : ) + f l i n i t i a l (2 , k ) .∗ Bl ( k , : ) . . . + f l i n i t i a l (2 , k ) ˆ 2 .∗ Cl ( k , : ) + f l i n i t i a l (2 , k ) ˆ 3 .∗ Dl ( k , : ) ) ; end end % update h l 84 Appendix E. Matlab Code Alpha h = Y − Alpha h ; Beta h = Al ( l , : ) + f l i n i t i a l (2 , l ) .∗ Bl ( l , : ) + f l i n i t i a l (2 , l ) ˆ 2 .∗ Cl ( l , : ) . . . 
+ f l i n i t i a l (2 , l ) ˆ 3 .∗ Dl ( l , : ) ; temp = h l i n i t i a l (2 , l ) ; h l i n i t i a l (2 , l ) = Beta h ∗ Alpha h ’ / ( Beta h ∗ Beta h ’ ) ; h l i n i t i a l (2 , l ) = conj ( h l i n i t i a l (2 , l ) ) ; h l i n i t i a l (1 , l ) = temp ; % update f l A lpha f = Alpha h − h l i n i t i a l (2 , l ) .∗ Al ( l , : ) ; Be ta f = h l i n i t i a l (2 , l ) .∗ Bl ( l , : ) ; E t a f = h l i n i t i a l (2 , l ) .∗ Cl ( l , : ) ; The ta f = h l i n i t i a l (2 , l ) .∗ Dl ( l , : ) ; a = The ta f ∗ Theta f ’ ∗ 6; b = ( E t a f ∗ Theta f ’ + The ta f ∗ Eta f ’ ) ∗ 5; c = ( Be ta f ∗ Theta f ’ + E t a f ∗ Eta f ’ + The ta f ∗ Beta f ’ ) ∗ 4; d = ( Be ta f ∗ Eta f ’ + E t a f ∗ Beta f ’ − Alpha f ∗ Theta f ’ − Theta f ∗ Alpha f ’ ) ∗ 3; e = ( Be ta f ∗ Beta f ’ − Alpha f ∗ Eta f ’ − E t a f ∗ Alpha f ’ ) ∗ 2; f = −(A lpha f ∗ Beta f ’ + Be ta f ∗ Alpha f ’ ) ; ppp = [ a b c d e f ] ; r r r = roo ts ( ppp ) ; temp = f l i n i t i a l (2 , l ) ; f l i n i t i a l (2 , l ) = r r r ( leng th ( r r r ) ) ; f l i n i t i a l (1 , l ) = temp ; end i f (max( abs ( h l i n i t i a l (2 , : ) − h l i n i t i a l (1 , : ) ) . / abs ( h l i n i t i a l (1 , : ) ) ) < de l t a h ) . . . && (max( abs ( f l i n i t i a l (2 , : ) − f l i n i t i a l (1 , : ) ) . / abs ( f l i n i t i a l (1 , : ) ) ) < d e l t a f ) count = i t e r a t i o n ; i t e r a t i o n = 0; e lse i t e r a t i o n = i t e r a t i o n − 1; end end h l h a t = h l i n i t i a l (2 , : ) ; f l h a t = f l i n i t i a l (2 , : ) ; end 85 Appendix E. Matlab Code f u n c t i o n [ h l ha t , f l h a t , count ] = channel est A2 ( N f f t , L , p , Al , Bl , . . . Cl , Dl , Y, de l ta h , d e l t a f , h l i n i t i a l , f l i n i t i a l ) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% %% F i l e : channel est A2 .m %% %% %% %% Funct ion : Est imate the channel parameters ( h l & f l ) using Algor i thm 2.%% %% %% %% Parameters : %% %% N f f t => Size o f FFT %% %% L => L−path channel %% %% p => the order o f Tay lor Ser ies Expansion %% %% Al , Bl , Cl , Dl => the c o e f f i c i e n t s o f the approx imat ion %% %% Y => the rece ived s i g n a l %% %% d e l t a h => the th resho ld o f h l ( stops the i t e r a t i o n ) %% %% d e l t a f => the th resho ld o f f l %% %% h l i n i t i a l => the i n i t i a l value o f est imate ’ h l ha t ’ %% %% f l i n i t i a l => the i n i t i a l value o f est imate ’ f l h a t ’ %% %% %% %% Outputs : The esimates ’ h l ha t ’ , ’ f l h a t ’ and the i t e r a t i o n ’ count ’ %% %% %% %% Not ice : The order o f accuracy depends on N f f t and p . We have %% %% d i f f e r e n t accuracy f o r d i f f e r e n t path . The i t e r a t i o n t ime %% %% depends on the i n i t i a l value and the th resho ld . %% %% %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% I n i t i a l i z e h l and f l %%% i f narg in == 10 YYY = Y . ’ ; AAA = Al . ’ ; BBB = Bl . ’ ; temp H = inv ( Al ∗ AAA) ∗ Al ∗ YYY; temp F = inv ( Bl ∗ BBB) ∗ Bl ∗ (YYY − AAA ∗ temp H ) . / temp H ; h l i n i t i a l = [ temp H temp H ] ; f l i n i t i a l = [ temp F temp F ] ; h l i n i t i a l = h l i n i t i a l . ’ f l i n i t i a l = f l i n i t i a l . ’ end eta = 4 / 3 ; count = 0 ; h l h a t = zeros (1 , L ) ; 86 Appendix E. 
fl_hat = zeros(1, L);

switch p
    case 2 % second-order approximation
        iteration = 1000;
        while (iteration)
            for l = 1 : L
                Alpha_h = zeros(1, Nfft);
                Alpha_f = zeros(1, Nfft);
                if l == 1
                    for k = 2 : L
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) ...
                            + fl_initial(2, k) .* Bl(k, :) + fl_initial(2, k)^2 .* Cl(k, :));
                    end
                elseif l == L
                    for k = 1 : L - 1
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) ...
                            + fl_initial(2, k) .* Bl(k, :) + fl_initial(2, k)^2 .* Cl(k, :));
                    end
                else
                    for k = 1 : l - 1
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) ...
                            + fl_initial(2, k) .* Bl(k, :) + fl_initial(2, k)^2 .* Cl(k, :));
                    end
                    for k = l + 1 : L
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) ...
                            + fl_initial(2, k) .* Bl(k, :) + fl_initial(2, k)^2 .* Cl(k, :));
                    end
                end
                % update hl
                Alpha_h = Y - Alpha_h;
                Beta_h = Al(l, :) + fl_initial(2, l) .* Bl(l, :) + fl_initial(2, l)^2 .* Cl(l, :);
                temp1 = Beta_h * Alpha_h' / (Beta_h * Beta_h');      % h(i+1)
                temp1 = conj(temp1);
                temp2 = eta * temp1 + (1 - eta) * hl_initial(2, l);  % overrelaxed update of hl
                temp = hl_initial(2, l);
                hl_initial(1, l) = temp;
                hl_initial(2, l) = temp2;
                % update fl
                Alpha_f = Alpha_h - hl_initial(2, l) .* Al(l, :);
                Beta_f = hl_initial(2, l) .* Bl(l, :);
                Eta_f = hl_initial(2, l) .* Cl(l, :);
                a = Eta_f * Eta_f' * 4;
                b = (Beta_f * Eta_f' + Eta_f * Beta_f') * 3;
                c = (Beta_f * Beta_f' - Alpha_f * Eta_f' - Eta_f * Alpha_f') * 2;
                d = -(Alpha_f * Beta_f' + Beta_f * Alpha_f');
                ppp = [a b c d];
                rrr = roots(ppp);
                temp1 = rrr(length(rrr));                            % f(i+1)
                temp2 = eta * temp1 + (1 - eta) * fl_initial(2, l);  % overrelaxed update of fl
                temp = fl_initial(2, l);
                fl_initial(1, l) = temp;
                fl_initial(2, l) = temp2;
            end
            if (max(abs(hl_initial(2, :) - hl_initial(1, :)) ./ abs(hl_initial(1, :))) < delta_h) ...
                    && (max(abs(fl_initial(2, :) - fl_initial(1, :)) ./ abs(fl_initial(1, :))) < delta_f)
                count = iteration;
                iteration = 0;
            else
                iteration = iteration - 1;
            end
        end
        hl_hat = hl_initial(2, :);
        fl_hat = fl_initial(2, :);

    case 3 % third-order approximation
        iteration = 1000;
        while (iteration)
            for l = 1 : L
                Alpha_h = zeros(1, Nfft);
                Alpha_f = zeros(1, Nfft);
                if l == 1
                    for k = 2 : L
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) + fl_initial(2, k) .* Bl(k, :) ...
                            + fl_initial(2, k)^2 .* Cl(k, :) + fl_initial(2, k)^3 .* Dl(k, :));
                    end
                elseif l == L
                    for k = 1 : L - 1
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) + fl_initial(2, k) .* Bl(k, :) ...
                            + fl_initial(2, k)^2 .* Cl(k, :) + fl_initial(2, k)^3 .* Dl(k, :));
                    end
                else
                    for k = 1 : l - 1
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) + fl_initial(2, k) .* Bl(k, :) ...
                            + fl_initial(2, k)^2 .* Cl(k, :) + fl_initial(2, k)^3 .* Dl(k, :));
                    end
                    for k = l + 1 : L
                        Alpha_h = Alpha_h + hl_initial(2, k) .* (Al(k, :) + fl_initial(2, k) .* Bl(k, :) ...
                            + fl_initial(2, k)^2 .* Cl(k, :) + fl_initial(2, k)^3 .* Dl(k, :));
                    end
                end
                % update hl
                Alpha_h = Y - Alpha_h;
                Beta_h = Al(l, :) + fl_initial(2, l) .* Bl(l, :) + fl_initial(2, l)^2 .* Cl(l, :) ...
                    + fl_initial(2, l)^3 .* Dl(l, :);
                temp1 = Beta_h * Alpha_h' / (Beta_h * Beta_h');      % h(i+1)
                temp1 = conj(temp1);
                temp2 = eta * temp1 + (1 - eta) * hl_initial(2, l);  % overrelaxed update of hl
                temp = hl_initial(2, l);
                hl_initial(1, l) = temp;
                hl_initial(2, l) = temp2;
                % update fl
                Alpha_f = Alpha_h - hl_initial(2, l) .* Al(l, :);
                Beta_f = hl_initial(2, l) .* Bl(l, :);
                Eta_f = hl_initial(2, l) .* Cl(l, :);
                Theta_f = hl_initial(2, l) .* Dl(l, :);
                a = Theta_f * Theta_f' * 6;
                b = (Eta_f * Theta_f' + Theta_f * Eta_f') * 5;
                c = (Beta_f * Theta_f' + Eta_f * Eta_f' + Theta_f * Beta_f') * 4;
                d = (Beta_f * Eta_f' + Eta_f * Beta_f' - Alpha_f * Theta_f' - Theta_f * Alpha_f') * 3;
                e = (Beta_f * Beta_f' - Alpha_f * Eta_f' - Eta_f * Alpha_f') * 2;
                f = -(Alpha_f * Beta_f' + Beta_f * Alpha_f');
                ppp = [a b c d e f];
                rrr = roots(ppp);
                temp1 = rrr(length(rrr));                            % f(i+1)
                temp2 = eta * temp1 + (1 - eta) * fl_initial(2, l);  % overrelaxed update of fl
                temp = fl_initial(2, l);
                fl_initial(1, l) = temp;
                fl_initial(2, l) = temp2;
            end
            if (max(abs(hl_initial(2, :) - hl_initial(1, :)) ./ abs(hl_initial(1, :))) < delta_h) ...
                    && (max(abs(fl_initial(2, :) - fl_initial(1, :)) ./ abs(fl_initial(1, :))) < delta_f)
                count = iteration;
                iteration = 0;
            else
                iteration = iteration - 1;
            end
        end
        hl_hat = hl_initial(2, :);
        fl_hat = fl_initial(2, :);
end
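For reference, the fragment below sketches one possible way of invoking channel_est_A2. It is only an illustrative sketch: the numerical values and the randomly generated Al, Bl, Cl, Dl and Y are placeholders assumed here for self-containedness, whereas in the simulations of this thesis these quantities would be produced by the OFDM system model; the initial estimates hl_initial and fl_initial may also be supplied explicitly as the last two arguments instead of being computed internally.

% Illustrative call of channel_est_A2 (placeholder data; assumed values only).
Nfft    = 64;                               % FFT size (assumed)
L       = 4;                                % number of channel paths (assumed)
p       = 2;                                % order of the Taylor series expansion
delta_h = 1e-4;                             % stopping threshold for hl
delta_f = 1e-4;                             % stopping threshold for fl
Al = randn(L, Nfft) + 1j * randn(L, Nfft);  % placeholder approximation coefficients,
Bl = randn(L, Nfft) + 1j * randn(L, Nfft);  % one row of length Nfft per path
Cl = randn(L, Nfft) + 1j * randn(L, Nfft);
Dl = zeros(L, Nfft);                        % unused when p = 2
Y  = randn(1, Nfft) + 1j * randn(1, Nfft);  % placeholder received signal
% With ten input arguments the function computes its own initial estimates.
[hl_hat, fl_hat, count] = channel_est_A2(Nfft, L, p, Al, Bl, Cl, Dl, Y, ...
    delta_h, delta_f);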