Statistical models for agroclimate risk analysis

by

Mohamadreza Hosseini
B.Sc., Amirkabir University, 2003
M.Sc., McGill University, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Statistics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
November, 2009
© Mohamadreza Hosseini 2009

Abstract

In order to model the binary process of precipitation and the dichotomized temperature process, we use the conditional probability of the present given the past. We find necessary and sufficient conditions for a collection of functions to correspond to the conditional probabilities of a discrete-time categorical stochastic process X1, X2, .... Moreover, we find parametric representations for such processes and in particular for rth-order Markov chains. To dichotomize the temperature process, quantiles are often used in the literature. We propose using a two-state definition of the quantiles, considering the "left quantile" and "right quantile" functions instead of the traditional definition. This has various advantages, such as a symmetry relation between the quantiles of the random variables X and −X. We show that the left (right) sample quantile tends to the left (right) distribution quantile at p ∈ [0, 1] if and only if the left and right distribution quantiles are identical at p, and diverges almost surely otherwise. In order to measure the loss of estimating (or approximating) a quantile, we introduce a loss function that is invariant under strictly monotonic transformations and call it the "probability loss function." Using this loss function, we introduce measures of distance among random variables that are invariant under continuous strictly monotonic transformations. We use these distance measures to show that optimal overall fits to a random variable are not necessarily optimal in the tails. This loss function is also used to find equivariant estimators of the parameters of distribution functions. We develop an algorithm to approximate quantiles of large datasets which works by partitioning the data or using existing partitions (possibly of unequal sizes). We establish the deterministic precision of this algorithm and show how it can be adjusted to achieve a desired precision. We then develop a framework to optimally summarize very large datasets using quantiles and to combine such summaries in order to draw inferences about the original dataset. Finally, we show how these higher-order Markov models can be used to construct confidence intervals for the probability of frost-free periods.

Table of Contents

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . ii Table of Contents . . . . . . . . . . . . . . . . . . . . . iii List of Tables . . . . . . . . . . . . . . . . . . . . . viii List of Figures . . . . . . . . . . . . . . . . . . . . . xi Acknowledgements . . . . . . . . . . . . . . . . . . . . . xx Dedication . . . . . . . . . . . . . . . . . . . . . xxi 1 Thesis introduction . . . . . . . . . . . . . . . . . . . . . 1 2 Exploratory analysis of the Canadian weather data . . . 7 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . 7 2.2 Data description . . . . . . . . . . . . . . . . . . . . . 8 2.3 Temperature and precipitation . . . . . . . . . . . . . . . . . . . . . 8 2.4 Daily values, distributions . . . . . . . . . . . . . . . . . . . . . 24 2.5 Correlation . . . . . . . . . . . . . . . .
. . . . . . . . . . . . 46 2.5.1 Temporal correlation . . . . . . . . . . . . . . . . . . 48 2.5.2 Spatial correlation . . . . . . . . . . . . . . . . . . . . 56 2.6 Summary and conclusions . . . . . . . . . . . . . . . . . . . . 60 3 rth-order Markov chains . . . . . . . . . . . . . . . . . . . . . 62 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 3.2 Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . 64 3.3 Consistency of the conditional probabilities . . . . . . . . . . 65 3.4 Characterizing density functions and rth–order Markov chains 70 3.5 Functions of r variables on a finite domain . . . . . . . . . . 73 3.5.1 First representation theorem . . . . . . . . . . . . . . 74 3.5.2 Second representation theorem . . . . . . . . . . . . . 80 iii Table of Contents 3.5.3 Special cases of functions of r finite variables . . . . . 85 3.6 Generalized linear models for time series . . . . . . . . . . . 86 3.7 Simulation studies . . . . . . . . . . . . . . . . . . . . . . . . 90 3.8 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . 93 4 Binary precipitation process . . . . . . . . . . . . . . . . . . . 94 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 4.2 Models for 0-1 precipitation process . . . . . . . . . . . . . . 95 4.3 Exploratory analysis of the data . . . . . . . . . . . . . . . . 97 4.4 Comparing the models using BIC . . . . . . . . . . . . . . . 105 4.5 Changing the location and the time period . . . . . . . . . . 112 5 On the definition of “quantile” and its properties . . . . . 115 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 5.2 Definition of median and quantiles of data vectors and ran- dom samples . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 5.3 Defining quantiles of a distribution . . . . . . . . . . . . . . . 128 5.4 Left and right extreme points . . . . . . . . . . . . . . . . . . 132 5.5 The quantile functions as inverse . . . . . . . . . . . . . . . . 133 5.6 Equivariance property of quantile functions . . . . . . . . . . 135 5.7 Continuity of the left and right quantile functions . . . . . . 137 5.8 Equality of left and right quantiles . . . . . . . . . . . . . . . 144 5.9 Distribution function in terms of the quantile functions . . . 150 5.10 Two-sided continuity of lq/rq . . . . . . . . . . . . . . . . . . 152 5.11 Characterization of left/right quantile functions . . . . . . . 153 5.12 Quantile symmetries . . . . . . . . . . . . . . . . . . . . . . . 157 5.13 Quantiles from the right . . . . . . . . . . . . . . . . . . . . 163 5.14 Limit theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 5.15 Summary and discussion . . . . . . . . . . . . . . . . . . . . 177 6 Probability loss function . . . . . . . . . . . . . . . . . . . . . 181 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 6.2 Degree of separation between data vectors . . . . . . . . . . 181 6.3 “Degree of separation” for distributions: the “probability loss function” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 6.4 Limit theory for the probability loss function . . . . . . . . . 187 6.5 The probability loss function for the continuous case . . . . . 188 6.6 The supremum of δX . . . . . . . . . . . . . . . . . . . . . . 189 6.6.1 “c-probability loss” functions . . . . . . . . . . . . . . 191 iv Table of Contents 7 Approximating quantiles in large datasets . . . . . . . . . . 193 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 193 7.2 Previous work . . . . . . . . . . . . . . . . . . . . . . . . . . 194 7.3 The median of the medians . . . . . . . . . . . . . . . . . . . 196 7.4 Data coarsening and quantile approximation algorithm . . . 197 7.5 The algorithm and computations . . . . . . . . . . . . . . . . 205 8 Quantile data summaries . . . . . . . . . . . . . . . . . . . . . 212 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 8.2 Generalization to weighted vectors . . . . . . . . . . . . . . . 214 8.2.1 Partition operator . . . . . . . . . . . . . . . . . . . . 219 8.2.2 Quantile data summaries . . . . . . . . . . . . . . . . 223 8.3 Optimal probability indices for vector data summaries . . . . 225 8.4 Other loss functions . . . . . . . . . . . . . . . . . . . . . . . 231 8.4.1 Optimal index vectors for assigning quantiles to a ran- dom sample . . . . . . . . . . . . . . . . . . . . . . . 234 9 Quantile distribution distance and estimation . . . . . . . . 236 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 9.2 Quantile–specified parameter families . . . . . . . . . . . . . 237 9.2.1 Equivariance of quantile–specified families estimation 239 9.2.2 Continuous distributions with the order statistics fam- ily of estimators . . . . . . . . . . . . . . . . . . . . . 242 9.3 Probability divergence (distance) measures . . . . . . . . . . 243 9.4 Quantile distance measures . . . . . . . . . . . . . . . . . . . 248 9.4.1 Quantile distance invariance under continuous strictly monotonic transformations . . . . . . . . . . . . . . . 249 9.4.2 Quantile distance closeness of empirical distribution and the true distribution . . . . . . . . . . . . . . . . 254 9.4.3 Quantile distance and KS distance closeness . . . . . 255 9.4.4 Quantile distance for continuous variables . . . . . . . 260 9.4.5 Equivariance of estimation under monotonic transfor- mations using the quantile distance . . . . . . . . . . 267 9.4.6 Estimation using quantile distance . . . . . . . . . . . 268 10 Binary temperature processes . . . . . . . . . . . . . . . . . . 272 10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 10.2 rth–order Markov models for extreme minimum temperatures 275 10.2.1 Exploratory analysis for binary extreme minimum tem- peratures . . . . . . . . . . . . . . . . . . . . . . . . . 275 v Table of Contents 10.2.2 Model selection for extreme minimum temperature . 276 10.3 rth–order Markov models for extreme maximum tempera- tures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 10.3.1 Exploratory analysis for extreme maximum tempera- tures . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 10.3.2 Model selection for extreme maximum temperature . 286 10.4 Probability of a frost–free period for Medicine Hat . . . . . . 296 10.5 Possible applications of the models . . . . . . . . . . . . . . . 303 11 Conclusions and future research . . . . . . . . . . . . . . . . 304 11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 11.2 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 11.3 Future research . . . . . . . . . . . . . . . . . . . . . . . . . 305 11.3.1 rth-order Markov chains . . . . . . . . . . . . . . . . 305 11.3.2 Approximating quantiles and data summaries . . . . 306 11.3.3 Parameter estimation using probability loss and quan- tile distances . . . . . . . . . . . . . . . . . . . . . . . 306 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
308 Appendices A Climate review . . . . . . . . . . . . . . . . . . . . . . . . . . . 312 A.1 Organizations and resources . . . . . . . . . . . . . . . . . . 312 A.2 Definitions and climate variables . . . . . . . . . . . . . . . . 313 A.3 Climatology . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 A.3.1 General circulations . . . . . . . . . . . . . . . . . . . 318 A.3.2 Topography of Canada . . . . . . . . . . . . . . . . . 319 A.4 Some interesting facts about Canadian geography and weather 319 B Extracting Canadian Climate Data from Environment Canada dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 B.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 B.2 Using Python to extract data . . . . . . . . . . . . . . . . . . 325 B.3 New functions to write stations’ data . . . . . . . . . . . . . 330 B.4 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . 331 C Algorithms and Complexity . . . . . . . . . . . . . . . . . . . 332 vi Table of Contents D Notations and Definitions . . . . . . . . . . . . . . . . . . . . 333 vii List of Tables 2.1 The summary statistics for the mean annual maximum tem- perature, min temperature and precipitation at the Calgary site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.2 Confidence intervals for the mean annual maximum temper- ature, min temperature and precipitation at the Calgary site. 16 2.3 Lines fitted to annual mean minimum temperature and an- nual mean precipitation against annual mean maximum tem- perature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.4 The regression line parameters for the fitted lines for each variable with respect to time for the Calgary site. . . . . . . . 24 2.5 The regression line parameters for the fitted lines for each variable with respect to time for the Banff site. . . . . . . . . 24 2.6 The regression line parameters for the fitted lines for each variable with respect to time for the Medicine Hat site. . . . 24 3.1 The estimated parameters for the model Zt−1 = (1, Yt−1, cos(ωt)) with parameters β = (−1, 1,−0.5). The standard deviation for the parameters is computed once using GN (theo. sd) and once using the generated samples (sim. sd). . . . . . . . . . . 91 3.2 BIC values for several models competing for the role of the true model, where Zt−1 = (1, Y 1, COS), β = (−1, 1,−0.5). . . 92 3.3 BIC values for several models competing for the role of true model given by Zt−1 = (1, Y 1, Y 2, COS), β = (−1, 1, 1,−0.5). 93 4.1 BIC values for models including N l, the number of precipita- tion days during the past l days for the Calgary site. . . . . . 106 4.2 BIC values for models including N l, the number of wet days during the past l days and Y 1, the precipitation occurrence of the previous day for the Calgary site. . . . . . . . . . . . . 107 4.3 BIC values for models including N l, the number of wet days during the past l days and seasonal terms for the Calgary site. 108 viii List of Tables 4.4 BIC values for models including N l, the number of PN days during the past l days, Y 1, the precipitation occurrence of the previous day and seasonal terms for the Calgary site. . . 109 4.5 BIC values for Markov models of different order with small number os parameters for the Calgary site. . . . . . . . . . . 109 4.6 BIC values for Markov models with different order plus sea- sonal terms for the Calgary site. . . . . . . . . . . . . . . . . 
110 4.7 BIC values for models including seasonal terms and the occur- rence of precipitation during the previous day for the Calgary site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 4.8 BIC values for 2nd–order Markov models for precipitation at the Calgary site. . . . . . . . . . . . . . . . . . . . . . . . . . 111 4.9 BIC values for 2nd–order Markov models for precipitation at the Calgary site plus seasonal terms. . . . . . . . . . . . . . . 111 4.10 BIC values for models including several covariates as temper- ature, seasonal terms and year effect for precipitation at the Calgary site. . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 4.11 BIC values for several models for the binary process of pre- cipitation in Calgary, 1990–1994 . . . . . . . . . . . . . . . . 113 4.12 BIC values for several models for precipitation occurrence in Medicine Hat, 2000-2004 . . . . . . . . . . . . . . . . . . . . . 113 5.1 Earthquakes intensities . . . . . . . . . . . . . . . . . . . . . . 121 5.2 Rain acidity data . . . . . . . . . . . . . . . . . . . . . . . . . 122 6.1 A class marks in mathematics and physics. The third column are the raw physics marks before the physics teacher scaled them. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 7.1 The table of data . . . . . . . . . . . . . . . . . . . . . . . . . 196 7.2 Comparing the exact method with the proposed algorithm in R run on a laptop with 512 MB memory and a processor 1500 MHZ, m = 1000, d = 500. “DOS” stands for degree of separation in the original vector. “DOS bound” is the theoretical degree of separation obtained by Theorem 7.4.1. . 208 7.3 Comparing the exact method with the proposed algorithm in R (run on a laptop with 512 MB memory and processor 1500 MHZ) to compute the quantiles ofMT (daily maximum temperature) over 25 stations with data from 1940 to 2004. . 211 ix List of Tables 9.1 Comparing standard normal with various distributions using quantile distance, where U denotes the uniform distribution and χ2 the Chi-squared distribution. . . . . . . . . . . . . . . 261 9.2 Comparing standard normal on the tails with some distribu- tions using quantile distance, where U denotes the uniform distribution and χ2 the Chi-squared distribution. . . . . . . . 267 9.3 Assessment of Maximum likelihood estimation and quantile distance estimation using several measures of error for a sam- ple of size 20. In the table s.e. stands for the standard error. 269 9.4 Assessment of Maximum likelihood estimation and quantile distance estimation using several measures of error for a sam- ple of size 100. In the table s.e. stands for the standard error. 269 10.1 BIC values for models including Nk for the extreme mini- mum temperature process e(t) at the Medicine Hat site. . . . 284 10.2 BIC values for several models for the extreme minimum tem- perature e(t) at the Medicine Hat site. . . . . . . . . . . . . . 285 10.3 BIC values for models including Nk for the extremely hot process E(t). . . . . . . . . . . . . . . . . . . . . . . . . . . . 295 10.4 BIC values for several models for the extremely hot process E(t). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296 10.5 BIC values for models including Nk for the extremely cold process e(t) at the Medicine Hat site. . . . . . . . . . . . . . . 298 10.6 BIC values for several models including Nk and seasonal terms for the extremely cold process e(t) at the Medicine Hat site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
299 10.7 BIC values for several models for the extremely cold process e(t) at the Medicine Hat site. . . . . . . . . . . . . . . . . . . 299 10.8 Theoretical and simulation estimated standard deviations for extremely cold process e(t) at the Medicine Hat site. . . . . . 300 x List of Figures 2.1 Alberta site locations for temperature (deg C) data. There are 25 stations available with temperature data over Alberta. 8 2.2 Alberta site locations for precipitation (mm) data. There are 47 stations available with precipitations data over Alberta. . 9 2.3 The number of years available for sites with temperature (deg C) data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.4 The number of years available for sites with precipitation (mm) data available. . . . . . . . . . . . . . . . . . . . . . . . 10 2.5 The elevation (meters) of sites with temperature data available. 10 2.6 The elevation (meters) of the sites with precipitation data available. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.7 The time series of daily maximum temperature (deg C) at the Calgary site from 2000 to 2003. . . . . . . . . . . . . . . . 12 2.8 The time series of daily minimum temperature (deg C) at the Calgary site from 2000 to 2003. . . . . . . . . . . . . . . . . . 12 2.9 The time series of daily precipitation (mm) at the Calgary site from 2000 to 2003. . . . . . . . . . . . . . . . . . . . . . . 13 2.10 The time series of monthly maximum temperature (deg C) at the Calgary site, 1995–2005. . . . . . . . . . . . . . . . . . . . 13 2.11 The time series of monthly minimum temperature means (deg C) at the Calgary site, 1995–2005. . . . . . . . . . . . . . . . 14 2.12 The time series of monthly precipitation means (mm) at the Calgary site, 1995–2005. . . . . . . . . . . . . . . . . . . . . . 14 2.13 The annual mean maximum temperature (C) for Calgary site for all available years. . . . . . . . . . . . . . . . . . . . . . . 15 2.14 The annual mean minimum temperature (C) for Calgary site for all available years. . . . . . . . . . . . . . . . . . . . . . . 15 2.15 The annual mean precipitation (mm) for Calgary site for all available years. . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.16 The histogram of annual maximum temperature means (deg C) for Calgary with a normal curve fitted to the data. . . . . 17 xi List of Figures 2.17 The normal qq–plot for annual maximum temperature means (deg C) for Calgary. . . . . . . . . . . . . . . . . . . . . . . . 18 2.18 The histogram of annual minimum temperature means (deg C) for Calgary with normal curve fitted to the data. . . . . . 18 2.19 The normal qq–plot for annual minimum temperature means (deg C) for Calgary. . . . . . . . . . . . . . . . . . . . . . . . 19 2.20 The histogram of annual precipitation means (mm) for Cal- gary with normal curve fitted to the data. . . . . . . . . . . 19 2.21 The normal qq–plot for annual precipitation means for Calgary. 20 2.22 The time series plots of maximum temperature (deg C), min- imum temperature (deg C) and precipitation (mm) annual means for Calgary. The time series plot in the bottom is minimum temperature, the one in the middle is precipitation and the top curve is maximum temperature. . . . . . . . . . . 21 2.23 The regression line fitted to maximum temperature and min- imum temperature annual means for Calgary. . . . . . . . . . 22 2.24 The regression line fitted to maximum temperature and pre- cipitation annual means for Calgary. . . . . . . . . . . . . . . 
22 2.25 The regression line fitted to summer minimum temperature means against time for Calgary. . . . . . . . . . . . . . . . . . 23 2.26 The time series of daily maximum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 2.27 The histogram of daily maximum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.28 The normal qq–plots of of daily maximum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . 27 2.29 The time series of daily minimum temperature for Calgary for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.30 The histogram of daily minimum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.31 The normal qq-plots of daily minimum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . 30 xii List of Figures 2.32 The time series of daily precipitation at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2.33 The histogram of daily precipitation at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 2.34 The confidence intervals for the daily mean maximum tem- perature (deg C) at the Calgary site. Dashed line shows the upper bound and the solid line the lower bound of the confi- dence intervals. . . . . . . . . . . . . . . . . . . . . . . . . . . 33 2.35 The confidence intervals for the daily mean minimum tem- perature (deg C) at the Calgary site. Dashed line shows the upper bound and the solid the lower bound of the confidence intervals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 2.36 The confidence intervals for the probability of precipitation (mm) at the Calgary site for the days of the year. Dashed line shows the upper bound and the solid the lower bound of the confidence intervals. . . . . . . . . . . . . . . . . . . . . . 35 2.37 The confidence intervals for the standard deviation of each day of the year for maximum temperature (deg C) at the Calgary site. Dashed line shows the upper bound and the solid the lower bound of the confidence intervals. . . . . . . . 36 2.38 The confidence intervals for the standard deviation of each day of the year for minimum temperature (deg C) at the Calgary site. Dashed line shows the upper bound and the solid the lower bound of the confidence intervals. . . . . . . . 37 2.39 The confidence intervals for standard deviation (sd) of each day of the year for the probability of precipitation (mm) (0-1 precipitation process) at the Calgary site. Dashed line shows the upper bound and the solid the lower bound of the con- fidence intervals. Plot shows sd ≤ 1/2. This is because sd = √ p(1− p) which has a maximum value of 12 . . . . . . . 38 2.40 The distribution of each day of the year forMT (C) from Jan 1st to Dec 1st. The year has been divided to two halves. 
In each half rainbow colors are used to show the change of the distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 2.41 The distribution of each day of the year for mt (C) from Jan 1st to Dec 1st. The year has been divided to two halves. In each half rainbow colors are used to show the change of the distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 xiii List of Figures 2.42 The histogram of daily precipitation greater than 0.2 mm at the Calgary site with Gamma density curve fitted using Maximum likelihood. . . . . . . . . . . . . . . . . . . . . . . . 40 2.43 The qq–plots of daily precipitation greater than 0.2 mm at the Calgary site with Gamma curve fitted using Maximum likelihood. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 2.44 The Gamma fit of each day of 4 months for precipitation (mm). In each month rainbow colors are used to show the change of the distribution. . . . . . . . . . . . . . . . . . . . . 42 2.45 The maximum likelihood estimate for α, the shape parameter of the Gamma distribution fitted to the precipitation amounts. 43 2.46 The confidence interval for MOM estimate of the shape pa- rameter, α, of the Gamma distribution fitted to daily precip- itation amounts. The dotted line is the upper bound and the solid line the lower bound. As seen in the figure the upper bounds at the beginning and end of the year have become very large. We have not shown them because otherwise then the pattern in the rest of the year could not be seen. . . . . . 44 2.47 The 1st-order transition probabilities. The dotted line is the the probability of precipitation if it happened the day before (p̂11) and the dashed is the probability of precipitation if it did not happen the day before (p̂01). . . . . . . . . . . . . . . 45 2.48 The 2nd–order transition probabilities for the precipitation at the Calgary site: p̂111 (solid) against p̂011 (dotted). . . . . 46 2.49 The 2nd–order transition probabilities for the precipitation at the Calgary site: p̂001 (solid) against p̂101 (dotted). . . . . 47 2.50 The correlation and covariance plot for maximum tempera- ture at the Calgary site for Jan 1st and 732 consequent days. 48 2.51 The correlation plot for maximum temperature (deg C) at the Calgary site for Jan 1st and 732 consequent days. . . . . 49 2.52 The correlation plot for minimum temperature (deg C) at the Calgary site for Jan 1st and 732 consequent days. . . . . . . . 50 2.53 The correlation plot for precipitation (mm) at the Calgary site for Jan 1st and 732 consequent days. . . . . . . . . . . . 51 2.54 The correlation plot for maximum temperature (deg C) at the Calgary site for Feb 1st (solid), April 1st (dashed), July 1st (dotted) and Oct 1st (dot dash) and 30 consequent days. 52 2.55 The correlation plot for minimum temperature (deg C) at the Calgary site for Feb 1st (solid), April 1st (dashed), July 1st (dotted) and Oct 1st (dot dash) and 30 consequent days. . . . 53 xiv List of Figures 2.56 The correlation plot for precipitation (mm) at the Calgary site for Feb 1st (solid), April 1st (dashed), July 1st (dotted) and Oct 1st (dot dashed) and 30 consequent days. . . . . . . 54 2.57 The correlation plot for maximum temperature and minimum temperature (deg C) between Calgary and Medicine Hat. . . 55 2.58 The correlation plot for precipitation (mm) between Calgary and Medicine Hat. . . . . . . . . . . . . . . . . . . . . . . . . 55 2.59 The correlation plot for maximum temperature (deg C) with respect to distance (km). . . . . . . . . . . . 
. . . . . . . . . . 56 2.60 The correlation plot for minimum temperature (deg C) with respect to distance(km). . . . . . . . . . . . . . . . . . . . . . 57 2.61 The correlation plot for precipitation (mm) with respect to distance (km). . . . . . . . . . . . . . . . . . . . . . . . . . . 58 2.62 The correlation plot for precipitation (mm) 0-1 process with respect to distance (km). . . . . . . . . . . . . . . . . . . . . . 59 3.1 The distribution of parameter estimates for the model with the covariate process Zt−1 = (1, Yt−1, cos(ωt)) and parame- ters (β1 = −1, β2 = 1, β3 = −0.5). . . . . . . . . . . . . . . . . 92 4.1 The transition probabilities for the Banff site. The dotted line represents p̂11 (the estimated probability of precipitation if precipitation occurs the day before) and the dashed represents p̂01 (the estimated probability of precipitation if precipitation does not occur the day before.) . . . . . . . . . . . . . . . . . 99 4.2 The solid curve represents p̂111 (the estimated probability of precipitation if during both two previous days precipitation occurs) and the dashed curve represents p̂011 (the estimated probability that precipitation occurs if precipitation occurs the day before and does not occur two days ago) for the Banff site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.3 The solid curve represents p̂001 (the estimated probability of precipitation occurring if it does not occur during the two previous days) and the dotted curve is p̂101 (the estimated probability that precipitation occurs if precipitation does not occur the day before but occurs two days ago) for the Banff site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 4.4 Banff’s estimated mean annual probability of precipitation calculated from historical data. . . . . . . . . . . . . . . . . . 102 xv List of Figures 4.5 Calgary’s estimated mean annual probability of precipitation calculated from historical data. . . . . . . . . . . . . . . . . . 102 4.6 The logit function: logit(x) = log(x/(1 − x)). . . . . . . . . . 103 4.7 The logit of the estimated probability of precipitation in Banff for different days of the year. . . . . . . . . . . . . . . . . . . 103 5.1 An example of a distribution function with discontinuities and flat intervals. . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 5.2 The left quantile (lq) function for the distribution function given in Example 5.7. Notice that this function is left contin- uous and increasing. . . . . . . . . . . . . . . . . . . . . . . . 142 5.3 The right quantile (rq) function for the distribution function given in Example 5.7. Notice that this function is right con- tinuous and increasing. . . . . . . . . . . . . . . . . . . . . . . 142 5.4 LQ function for Example 5.7. Notice that this function is increasing and left continuous. . . . . . . . . . . . . . . . . . 143 5.5 RQ function for Example 5.7, notice that this function is in- creasing and right continuous. . . . . . . . . . . . . . . . . . . 143 5.6 For the vector x = (−2,−2, 2, 2, 4, 4, 4, 4) the left (top) and right (bottom) quantile functions are given. . . . . . . . . . . 156 5.7 The solid line is the distribution function of {Xi}. Note that for the distribution of the Xi and p = 0.5, lqFX (p) = 0, rqFX (p) = 3. Let h = rq(p) − lq(p) = 3. The dotted line is the distribution function of the {Yi} which coincides with that of {Xi} to the left of lqFX (p) and is a backward shift of 3 units for values greater than rqFX (p). 
Note that for the {Yi}, lqFY (p) = rqFY (p) = 1. . . . . . . . . . . . . . . . . . . . . . . 176 7.1 Comparing the approximated quantiles to the exact quantiles N = 107. The circles are the exact quantiles and the + are the corresponding approximated quantiles. . . . . . . . . . . . 209 7.2 Comparing the approximated quantiles to the exact quantiles for MT (daily maximum temperature) over 25 stations in Alberta 1940–2004. The circles are the exact quantiles and the + the approximated quantiles. . . . . . . . . . . . . . . . 210 9.1 The order statistics family members that estimate lqX(1/2) and lqX(P (Z ≤ 1)) for a random sample of length 25 obtained by generating samples of size 1 to 1000 from a standard nor- mal distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 244 xvi List of Figures 9.2 The order statistics family members that estimate lqX(1/2) and lqX(P (Z ≤ 1)) for a random sample of length 20 obtained by generating samples of size 1 to 1000 from a standard nor- mal distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 245 9.3 Cauchy distribution’s distance with different scale parameter (and location parameter=0) to the standard normal. In the plots QD1 = QX and QD2 = QDY and QD = QD1 +QD2, where X is the standard normal and Y is the Cauchy. . . . . 262 9.4 The distribution function of standard normal (solid) com- pared with the optimal Cauchy (and location parameter=0) picked by quantile distance minimization with scale param- eter=0.66 (dashed curve), Cauchy with scale parameter=1 (dotted) and Cauchy with scale parameter=0.5 (dot dashed). 263 9.5 Cauchy distribution’s distance with different scale parameter (and location parameter=0) to the standard normal on the tails. In the plots QD1 = QX and QD2 = QDY and QD = QD1 + QD2, where X is the standard normal and Y is the Cauchy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 9.6 The distribution function of standard normal (solid) com- pared with the optimal Cauchy picked by tail quantile dis- tance minimization with scale parameter=0.12 (dashed curve), Cauchy with scale parameter=0.65 (dotted) and Cauchy with scale parameter=0.01 (dot dashed). . . . . . . . . . . . . . . . 265 9.7 Comparing the standard normal distribution (solid) with op- timal Cauchy picked by quantile distance (dashed) and the optimal Cauchy picked by tail quantile distance minimization (dotted). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 9.8 Histograms for the parameter estimates using quantile dis- tance and maximum likelihood methods for a sample of size 20. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 9.9 Histograms for the parameter estimates using quantile dis- tance and maximum likelihood methods for a sample of size 100. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 10.1 The estimated probability of a freezing day for the Banff site for different days of a year computed using the historical data. 276 10.2 The estimated probability of a freezing day for the Medicine Hat site for different days of a year computed using the his- torical data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 xvii List of Figures 10.3 The estimated 1st–order transition probabilities for the 0- 1 process of extreme minimum temperatures for the Banff site. The dotted line represents the estimated probability of “e(t) = 1 if e(t − 1) = 1” (p̂11) and the dashed, “e(t) = 1 if e(t− 1) = 0” (p̂01). . . . . . . . . . . . . . . . . . . . . . . . . 
278 10.4 The estimated 1st–order transition probabilities for the 0-1 process of extreme minimum temperatures for the Medicine Hat site. The dotted line represents the estimated probability of “e(t) = 1 if e(t − 1) = 1” (p̂11) and the dashed, “e(t) = 1 if e(t− 1) = 0” (p̂01). . . . . . . . . . . . . . . . . . . . . . . . 279 10.5 The estimated 2nd–order transition probabilities for the 0-1 process of extreme minimum temperature for the Banff site with p̂111 (solid) compared with p̂011 (dotted) both calculated from the historical data. . . . . . . . . . . . . . . . . . . . . . 280 10.6 The estimated 2nd–order transition probabilities for the 0-1 process of extreme minimum temperatures for the Banff site with p̂001 (solid) compared with p̂101 (dotted) calculated from the historical data. . . . . . . . . . . . . . . . . . . . . . . . . 281 10.7 The estimated 2nd–order transition probabilities for the 0-1 process of extreme minimum temperatures for the Medicine Hat site with p̂111 (solid) compared with p̂011 (dotted) calcu- lated from the historical data. . . . . . . . . . . . . . . . . . . 282 10.8 The estimated 2nd–order transition probabilities for the 0-1 process of extreme minimum temperatures for the Medicine Hat site with p̂001 (solid) compared with p̂101 (dotted) calcu- lated from the historical data. . . . . . . . . . . . . . . . . . . 283 10.9 The estimated probability of a hot day (maximum tempera- ture ≥ 27 (deg C)) for different days of the year for the Banff site calculated from the historical data. . . . . . . . . . . . . 287 10.10The estimated probability of a hot day (maximum temper- ature ≥ 27 (deg C)) for different days of the year for the Medicine Hat site calculated from the historical data. . . . . 288 10.11The estimated 1st–order transition probabilities for the bi- nary process of extremely hot temperatures for the Banff site. The dotted line represent the estimated probability of “E(t) = 1 if E(t− 1) = 1” (p̂11) and the dashed, “E(t) = 1 if E(t− 1) = 0” (p̂01). . . . . . . . . . . . . . . . . . . . . . . . 289 xviii List of Figures 10.12The estimated 1st–order transition probabilities for the bi- nary process of extremely hot temperatures for the Medicine Hat site. The dotted line represents the estimated probability of “E(t) = 1 if E(t−1) = 1” (p̂11) and the dashed, “E(t) = 1 if E(t− 1) = 0” (p̂01). . . . . . . . . . . . . . . . . . . . . . . 290 10.13The estimated 2nd–order transition probabilities for the bi- nary process of extremely hot temperatures for the Banff site with p̂111 (solid) compared with p̂011 (dotted) calculated from the historical data. . . . . . . . . . . . . . . . . . . . . . . . . 291 10.14The estimated 2nd–order transition probabilities for the bi- nary process of extremely hot temperatures for the Banff site with p̂001 (solid) compared with p̂101 (dotted) calculated from the historical data. . . . . . . . . . . . . . . . . . . . . . . . . 292 10.15The estimated 2nd–order transition probabilities for the bi- nary process of extremely hot temperatures for the Medicine Hat site with p̂111 (solid) compared with p̂011 (dotted), calcu- lated from the historical data. . . . . . . . . . . . . . . . . . . 293 10.16The estimated 2nd–order transition probabilities for the bi- nary process of extremely hot temperatures for the Medicine Hat site with p̂001 (solid) compared with p̂101 (dotted) calcu- lated from the historical data. . . . . . . . . . . . . . . . . . . 
294 10.17 Medicine Hat's estimated mean annual probability of frost calculated from the historical data. . . . . . . . . . . 297 10.18 Normal curve fitted to the distribution of 50 samples of the estimated parameters. . . . . . . . . . . . . . . . . . 301 B.1 Canada site locations . . . . . . . . . . . . . . . . . . . . . . . 324

Acknowledgements

I would like to thank my supervisors, Prof. Jim Zidek and Prof. Nhu Le, for what they have taught me in statistics and much more, and for their encouragement, ideas and financial support through various RA positions during my PhD studies. I feel very grateful and lucky to have them as my supervisors. I should also thank Prof. Matias Salibian-Barrera on my supervisory committee for giving me great ideas and feedback. I would also like to thank other people in the Statistics Department at UBC, Prof. Paul Gustafson, Prof. John Petkau, Prof. Constance van Eeden and Prof. Ruben Zamar, to whom I owe much of what I know. I also thank Mike Marin (instructor at UBC) for many interesting discussions about statistics and science, and Viena Tran for helping me with many administrative issues. I would like to thank my friend Dr. Nathaniel Newlands for insightful comments and good suggestions, and Ralph Wright (Alberta Agriculture, Food and Rural Development) for useful comments about the definition of the extremes. Finally, I would like to express my deepest appreciation to all the people who have helped me learn and love statistics and mathematics throughout my life, from my grandfather, who graciously taught me mathematics when I was a child, to my mother, who has been an inspiration, and my high school teachers, who encouraged me to study mathematics as a university major.

Dedication

To my lovely parents, my amazing brother: Alireza, my sweet sister: Fatima, my best friends: Mostafa Aghajanpour, Mahmoud Sohrabi, Masoud Feizbakhsh, Behruz Khajali, Ali Mehrabian, Prof. Masoud Asgharian, Prof. Niky Kamran, Mirella Simoneova, Kiyouko Futaeda, May Yun, Yuki Ezaki, Soheil Keshmiri, Naoko Yoshimi and Mike Marin.

Chapter 1 Thesis introduction

This thesis develops a mathematical and statistical framework for modeling stochastic processes over time. In particular, it develops models for the occurrence of precipitation and of extreme (high or low) temperature events. This is important for Canada's agriculture, since agricultural production depends on weather and water availability. We study the quantiles of data and of distributions in detail and develop a framework for approximating quantiles in large datasets and for making inferences from them. We also study categorical Markov chains of higher order and apply them to precipitation and temperature processes. However, the methodologies and theories developed here are general and can be used in many other applications where such processes are encountered (such as physics, chemistry, climatology, economics and so on).

Sample quantiles and the quantile function are fundamental concepts in statistics. In the study of extreme events they are often used to pick appropriate thresholds; we use quantiles specifically to pick thresholds for the temperature process. This motivates us to study the concept of quantiles and to extend their classic definition to provide a more intuitively appealing alternative. This alternative also enables us to obtain interesting asymptotic results about their sample counterparts and gives a framework for approximating quantiles and making inference.
In fact, weather datasets (observed weather or the output of climate models) are very large in size. This makes computing the quantiles of such large datasets computationally intensive. Along with this alternative definition, we present an algorithm for computing/approximating quantiles in large datasets.

The data used in this thesis come from the climate data CD published by Environment Canada [10], which includes the daily observed precipitation and temperature data for several stations from 1895 to 2007 (the years varying with the station). The data are saved in several binary files. We have written a Python module to extract the data in desired formats; the guide to using this module is in Appendix B. For most of the analysis, however, we have used the "homogenized" dataset for Alberta. This dataset is adjusted for changes of instruments and of the locations of the stations. More information about the datasets is given in Appendix B and Chapter 2.

Chapter 2 presents results from the exploratory analysis of the dataset. We look at the daily time series, monthly mean time series and annual mean time series of the variables, and at the distributions of the daily and annual values. We also look at the relations between the variables as well as at some long-term trends, using simple techniques such as linear regression. For example, it seems that the mean summer daily minimum temperature has increased over time at some locations in Alberta. We then study the seasonal patterns of these variables over the course of the year. As expected, there is a strong seasonal component in these processes. For example, we observe that the daily temperature is more variable in the colder seasons than in the warmer ones. The daily values of the minimum and maximum temperature seem to be described fairly well by a Gaussian process. However, some deviations from the Gaussian assumption are seen in the tails. This is particularly important in modeling extreme events and will help us in later chapters to choose our approach to modeling the occurrence of such extremes. As part of the exploratory analysis, we look at precipitation occurrence. A question that has been addressed by several authors (e.g. Tong in [45] and Gabriel et al. in [18]) is the Markov order of such a chain. The exploratory analysis using the transition probability plots leads to the conjecture that a 1st-order Markov chain should be appropriate. This is studied in detail in later chapters. We also look at the spatial-temporal correlation function of these processes. Several interesting features are observed. For example, for the maximum and minimum temperature the correlation seems to be stationary over time. Also, geodesic distance seems to describe the spatial correlation of temperature well. For precipitation, on the other hand, not much spatial correlation is observed. This could be because we have only 47 precipitation stations available over Alberta and precipitation is more variable over space than temperature.

Let us denote a general weather process by Xt, where t denotes time. The main approach we take to model the process is discrete-time categorical rth-order Markov chains (r a natural number), where we make the following assumption about the conditional probabilities:

P(Xt | Xt−1, Xt−2, ...) = P(Xt | Xt−1, ..., Xt−r).

"Categorical chain" here means that Xt takes only a finite number of possible states. For example, the state space can consist of the two states (occurrence)/(non-occurrence) of precipitation. Dichotomizing the temperature process, we can consider processes such as (freezing)/(not freezing). Processes with more than two states can also be considered, for example a process with three states: (not warm)/(warm)/(hot).
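To make this concrete, here is a minimal R sketch of the kind of simulation reported later in Table 3.1: a binary chain Yt whose conditional probability of a 1 is logistic in the covariate process Zt−1 = (1, Yt−1, cos(ωt)) with β = (−1, 1, −0.5). The sample size, seed and initial state below are arbitrary choices for illustration, not the settings used in the thesis.

## Simulate a binary 1st-order Markov chain with a seasonal covariate,
## using a logistic (logit) link for the conditional probability.
set.seed(1)
n     <- 2000
omega <- 2 * pi / 365
beta  <- c(-1, 1, -0.5)                  # intercept, Y_{t-1}, cos(omega * t)
y     <- numeric(n)
y[1]  <- rbinom(1, 1, 0.5)               # arbitrary initial state
for (t in 2:n) {
  eta  <- beta[1] + beta[2] * y[t - 1] + beta[3] * cos(omega * t)
  y[t] <- rbinom(1, 1, plogis(eta))      # plogis() is the inverse logit
}
mean(y)                                   # long-run fraction of days in state 1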
Chapter 3 studies rth-order categorical Markov chains in general. We present a new representation theorem for such chains that expresses the above conditional probability as a linear combination of monomials in the past process values Xt−1, ..., Xt−r. We show the existence and uniqueness of such a representation. In the stationary case, since the conditional probability is the same for all time points, some further work on consistency shows that this representation characterizes all stationary categorical rth-order Markov chains. For the binary case the result is a corollary of a theorem stated in [6]. However, the statement of the theorem in [6] is flawed, as also pointed out by Cressie et al. [14]. We present a rigorous statement along with a constructive proof of the theorem. For discrete-time categorical chains with more than two states this theorem does not seem especially useful. We prove a new theorem for this case that gives a representation for all discrete-time categorical chains (rather than only binary ones).

In order to estimate the parameters of such a model in the binary case and make inferences about them, we use "time series following generalized linear models" as described in [27]. The inferences are similar to those for generalized linear models; however, because of the dependencies over time, some extensions of the usual theory are needed. Maximizing the "partial likelihood" gives "consistent" estimators, as shown in [48]. We apply the partial likelihood theory to our proposed rth-order Markov models. Simulations show that the partial likelihood and the representation together give satisfactory results for the binary case. We also check, through simulation studies, the performance of the Bayesian information criterion (BIC), developed in [42] and elsewhere, for picking optimal models, and we get satisfactory results. This allows us not only to pick the order of the Markov chain but also to compare several Markov chains of the same order. Another advantage of this model over existing ones is its capacity to accommodate other continuous covariates. For example, we can add seasonal processes to obtain a non-stationary chain. [In previous studies regarding the order of the chain, e.g. [45] and [18], it was assumed that the precipitation chain is stationary.] We can also add covariate processes, such as the temperature of the previous day, to the model. We then apply these techniques to the binary precipitation process in Alberta and pick appropriate models. A 1st-order non-stationary model (with one seasonal term) seems to be the most appropriate based on the BIC method of model selection.
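As a hedged illustration of this fitting step (not the thesis's own code), the sketch below uses the fact that, for a binary chain with a logit link, maximizing the partial likelihood amounts to an ordinary logistic regression on lagged covariates, so candidate orders and seasonal terms can be compared by BIC. The series y may be the one simulated above or any 0-1 precipitation record; a placeholder is generated if none exists.

## Partial-likelihood fitting of candidate binary Markov models via glm(),
## followed by BIC comparison (smaller BIC is preferred).
if (!exists("y")) { set.seed(2); y <- rbinom(2000, 1, 0.4) }   # placeholder series
n   <- length(y)
day <- 3:n
d   <- data.frame(yt = y[day],
                  y1 = y[day - 1],                   # occurrence on day t-1
                  y2 = y[day - 2],                   # occurrence on day t-2
                  cs = cos(2 * pi * day / 365),
                  sn = sin(2 * pi * day / 365))
m1 <- glm(yt ~ y1 + cs,           family = binomial, data = d)  # 1st order + season
m2 <- glm(yt ~ y1 + y2 + cs + sn, family = binomial, data = d)  # 2nd order + season
c(BIC(m1), BIC(m2))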
To apply these techniques to the temperature processes, we need a way of dichotomizing the temperature process. Usually certain quantiles are chosen in order to do so. Computing the quantiles of large datasets can be computationally challenging. Very large datasets are often encountered in climatology, either from a multiplicity of observations over time and space or from the outputs of deterministic models (sometimes in petabytes = 1 million gigabytes). Loading a large data vector and sorting it is sometimes impossible due to memory limitations or computing power. We show that a previously proposed algorithm for approximating the median, the "median of the medians", performs poorly. Instead, we propose a new algorithm that gives good approximations to the exact quantiles and is an extension of the algorithm proposed in [3]. In fact, we derive the precision of the algorithm. The algorithm partitions the data, "coarsens" the partitions at every iteration, puts the coarsened vectors together and sorts the result instead of the original vector.

While working on the quantiles, the need for theory to justify the usefulness and accuracy of the algorithm motivated us to think about the definition of the quantile function and of quantiles for data vectors. The quantile function of a random variable X with distribution function F is traditionally defined as q(p) = inf{u | F(u) ≥ p}. Applying this to the fair coin example with outcomes 0 and 1, we get q(1/2) = 0. This is counterintuitive given that the distribution places equal mass on 0 and 1. Also, a standard definition of the quantiles of a data vector does not exist. [For example, Hyndman et al. [25] point out that there are many definitions of quantiles in different packages.] For example, suppose a data vector has an even number of points; then there is no point exactly in the middle, in which case the average of the two middle values is often proposed as the median. We argue that this is not a good definition. In fact, we present an alternative way of defining quantiles that is motivated by an intuitive experiment and resolves all of the above problems. We propose using the two-state definition of right and left quantiles instead of a single quantile. The left quantile is defined as above and the right quantile is defined to be rq(p) = sup{u | F(u) ≤ p}. We also define left and right quantiles of data vectors and study the limit properties of the sample quantiles. For example, it turns out that the sample left and right quantiles converge to the distribution quantiles if and only if the left and right quantiles are equal. This again shows another interesting aspect of the definition of quantiles and confirms that considering both is not redundant. This definition is an extension of the concept of the upper and lower median in the robustness literature. Also, in some books (e.g. [41]) rq(p) is taken as the definition of the quantile. However, we do not know of any study of their properties or of a claim that considering both can lead to many interesting results. We also show that the widely claimed equivariance property of the traditional (left) quantile function under strictly increasing transformations (for example in [21] and [29]) is false. However, we show that the left (right) quantile is equivariant under left (right) continuous increasing transformations. We also provide a neat result for continuous decreasing transformations. Moreover, we show that the probability that the random variable lies between the right and left quantiles is zero and that the left and right quantiles are identical except on at most a countable subset of [0, 1].

Since our objective is to approximate the exact quantiles by our algorithm, we need a way of assessing the accuracy of such an approximation (a loss function). We introduce a new loss function that is invariant under strictly monotonic transformations of the data or the random variable. This loss function is very natural: in summary, the loss of estimating a quantile z by z′ is the probability that the random variable lies between these two values. In other words, we use the mass of the random variable itself between the two values to judge the goodness of the approximation.
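These definitions translate directly into a few lines of R. The sketch below is only an illustration for 0 < p < 1 using the empirical distribution of a data vector; the helper names lq, rq and ploss are ours, and the endpoint and tie conventions of Chapters 5 and 6 may differ in detail.

## Left and right quantiles of a data vector (its empirical distribution F):
## lq(p) = inf{u : F(u) >= p},  rq(p) = sup{u : F(u) <= p},  for 0 < p < 1.
## For a step CDF the latter sup equals the smallest data point where F exceeds p.
lq <- function(x, p) { Fx <- ecdf(x); min(x[Fx(x) >= p]) }
rq <- function(x, p) { Fx <- ecdf(x); min(x[Fx(x) >  p]) }

coin <- c(0, 1)                 # the fair coin example
lq(coin, 1/2)                   # 0
rq(coin, 1/2)                   # 1, rather than the single traditional value 0

## One possible empirical version of the probability loss of reporting z2
## for a target quantile z1: the fraction of the sample strictly between them.
ploss <- function(x, z1, z2) mean(x > min(z1, z2) & x < max(z1, z2))
ploss(coin, 0, 1)               # 0: no mass strictly between the two quantiles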
We also prove limit theorems showing that the empirical loss function tends to the loss function of the distribution. This loss function might be a useful tool in many other contexts and is an interesting topic for future research. We show by simulations and with real data that the algorithm performs well. We then apply it to the weather data to pick the 95% quantile of the daily maximum temperature. After picking the quantiles, we use the rth-order Markov techniques and partial likelihood to find appropriate models to describe the temperature. Using this loss function and the theory developed for the quantiles, we introduce measures to compute "distances" among distribution functions over the reals (random variables) that are invariant under continuous strictly monotonic transformations. We use these distance measures to show that optimal overall fits to a random variable are not necessarily optimal in the tails (and hence not appropriate for studying extremes). We also find "optimal" ways of picking a limited number of probabilities 0 ≤ p1 < · · · < pk ≤ 1 to summarize a random variable by its corresponding quantiles. Finally, we show how these higher-order Markov models can be used to construct confidence intervals for the probability of a frost-free week at the beginning of August at Medicine Hat (in Alberta). The last chapter provides a summary of the work and the conclusions. It also points out some interesting questions that are not answered in this thesis and gives a research proposal for the future.

Chapter 2 Exploratory analysis of the Canadian weather data

2.1 Introduction

This chapter presents an exploratory analysis of the homogenized climate dataset for the province of Alberta in Canada. We have access to daily maximum temperature (MT), daily minimum temperature (mt) and precipitation (PN). The temperature data have been provided to us by L.A. Vincent and the precipitation data by Eva Mekis, both of Environment Canada. This dataset has been homogenized for changes of instrument, changes of the locations of the stations and so on. More information about these data can be found in [34] and [47]. These data are a homogenized part of a larger dataset published by Environment Canada (2007), which is in binary format; a Python module for extracting the data is provided in Appendix B.

This chapter uses several graphical and analytical tools to examine the behavior of selected climate variables. Looking at the data, we will see some interesting features that suggest future research. Section 2 describes the dataset; for example, the location plots of the stations and their elevation plots are given. In Section 3, we look at the daily and annual time series of temperatures and precipitation. The normality of the distribution of the annual values and the associations between the different variables are investigated. We also investigate the seasonal patterns as well as the long-term patterns of the different variables over the course of the year. For example, the mean summer daily minimum temperature shows a significant increasing pattern over the course of the past century in Calgary and some other locations. Section 4 looks at the distribution of the daily values. For example, a normal distribution seems to describe the daily temperature values and a Gamma distribution the daily precipitation values (a small sketch of such a fit is given below).
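To illustrate the Section 4 modelling choice just mentioned, here is a minimal R sketch of a maximum-likelihood Gamma fit to wet-day precipitation amounts. The amounts below are simulated stand-ins (not the Calgary record above the 0.2 mm trace threshold), so the estimates are purely illustrative.

## Maximum-likelihood Gamma fit to positive daily precipitation amounts.
library(MASS)                                  # for fitdistr()
set.seed(3)
wet <- rgamma(500, shape = 0.8, rate = 0.4)    # stand-in for wet-day amounts (mm)
fit <- fitdistr(wet, densfun = "gamma")
fit$estimate                                   # ML estimates of shape and rate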
Confidence intervals for the mean/standard deviation in the normal case and for the shape/scale parameters in the Gamma case are given. Section 5 looks at the spatial and temporal correlation of the different variables.

Figure 2.1: Alberta site locations for temperature (deg C) data. There are 25 stations available with temperature data over Alberta.

2.2 Data description

The temperature data come from 25 stations over Alberta which operated from 1895 to 2006. The PN data involve 47 stations, also from 1895 to 2006. Different stations have different intervals of data available; for example, the PN data for Caldwell are available from 1911 to 1990. Figures 2.1 and 2.2 respectively depict the locations of the stations with temperature (both MT and mt) and PN data. The number of years available for each station is plotted against the location in Figures 2.3 and 2.4. Another available variable for the location of the stations is the elevation. Figures 2.5 and 2.6 show the elevation in meters. As seen in the plots, some stations have both temperature and precipitation data.

Figure 2.2: Alberta site locations for precipitation (mm) data. There are 47 stations available with precipitation data over Alberta.

Figure 2.3: The number of years available for sites with temperature (deg C) data.

Figure 2.4: The number of years available for sites with precipitation (mm) data.

Figure 2.5: The elevation (meters) of the sites with temperature data available.

Figure 2.6: The elevation (meters) of the sites with precipitation data available.

2.3 Temperature and precipitation

To get some initial impression of the data, we look at the time series of MT, mt, and PN at a fixed location. We use the Calgary site since it has a long period of data available and includes both temperature and precipitation. Looking at the maximum and minimum temperature, we see the periodic trend over the course of a year, as shown in Figures 2.7 and 2.8, which illustrate the MT and mt daily values from 2000 to 2003. A regular seasonal trend is seen in both processes. Looking at the PN plot in Figure 2.9, we observe a large number of zeros. Moreover, seasonal patterns are hard to see by looking at daily values. To illustrate the seasonal patterns better, we look at the monthly averages of MT, mt and PN over the period 1995 to 2005 in Figures 2.10, 2.11 and 2.12. Now the seasonal patterns for precipitation can be seen better in Figure 2.12.
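For reference, the monthly averaging behind Figures 2.10-2.12 can be sketched in a few lines of R. The daily series below is simulated with a rough seasonal cycle and merely stands in for the Calgary record.

## Collapse a daily series to monthly means to expose the seasonal cycle.
set.seed(4)
dates <- seq(as.Date("1995-01-01"), as.Date("2005-12-31"), by = "day")
doy   <- as.numeric(format(dates, "%j"))                   # day of year
mt_d  <- -10 * cos(2 * pi * doy / 365) + rnorm(length(dates), sd = 5)
monthly <- tapply(mt_d, format(dates, "%Y-%m"), mean)      # one mean per year-month
head(monthly)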
Next we look at the mean annual values of the three variables for all available years that have fewer than 10 missing days (Figures 2.13, 2.14 and 2.15). Table 2.1 gives a summary of these annual means.

Figure 2.7: The time series of daily maximum temperature (deg C) at the Calgary site from 2000 to 2003.
Figure 2.8: The time series of daily minimum temperature (deg C) at the Calgary site from 2000 to 2003.
Figure 2.9: The time series of daily precipitation (mm) at the Calgary site from 2000 to 2003.
Figure 2.10: The time series of monthly maximum temperature means (deg C) at the Calgary site, 1995–2005.
Figure 2.11: The time series of monthly minimum temperature means (deg C) at the Calgary site, 1995–2005.
Figure 2.12: The time series of monthly precipitation means (mm) at the Calgary site, 1995–2005.
Figure 2.13: The annual mean maximum temperature (deg C) for the Calgary site for all available years.
Figure 2.14: The annual mean minimum temperature (deg C) for the Calgary site for all available years.
Figure 2.15: The annual mean precipitation (mm) for the Calgary site for all available years.

Variable      min     1st quartile   median   mean    3rd quartile   max
MT (deg C)    7.59    9.64           10.37    10.36   11.19          13.46
mt (deg C)    -4.83   -3.40          -2.54    -2.66   -1.95          0.07
PN (mm)       0.68    1.12           1.28     1.29    1.39           2.51

Table 2.1: The summary statistics for the mean annual maximum temperature, minimum temperature and precipitation at the Calgary site.

Assuming normality and independence of the observations, we can obtain confidence intervals for all three variables; these are given in Table 2.2. The confidence intervals are fairly narrow.

Variable      95% confidence interval
MT (deg C)    (10.14, 10.57)
mt (deg C)    (-2.85, -2.47)
PN (mm)       (1.24, 1.35)

Table 2.2: Confidence intervals for the mean annual maximum temperature, minimum temperature and precipitation at the Calgary site.

Figure 2.16: The histogram of annual maximum temperature means (deg C) for Calgary with a normal curve fitted to the data.

To investigate the shape of the distribution of annual means, we look at the histogram of each variable with a normal curve fitted in Figures 2.16, 2.18 and 2.20. The corresponding normal qq–plots (quantile–quantile) are also given in Figures 2.17, 2.19 and 2.21 to assess the normality assumption. Both the histogram and the qq–plot for MT support the normality assumption. The histogram for mt is slightly left skewed. For PN, some deviation from the normality assumption is seen. This is expected since the daily PN process is very far from normal to start with. Hence, even averaging over the whole year has not quite given us a normal distribution.
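The following sketch, again relying on the hypothetical daily frame introduced above, shows one way the annual means of Table 2.1 and the normal-theory intervals of Table 2.2 could be produced; a t-based interval is used here, which is one common choice and not necessarily the exact procedure of the thesis.

    # A sketch, assuming the hypothetical `daily` frame (one row per calendar day,
    # NaN for a missing observation) from the earlier example.
    import numpy as np
    from scipy import stats

    def annual_means(daily, var, max_missing=9):
        """Mean of `var` for every year with fewer than 10 missing days."""
        by_year = daily[var].groupby(daily.index.year)
        means = by_year.mean()
        missing = by_year.apply(lambda s: s.isna().sum())
        return means[missing <= max_missing]

    def normal_ci(x, level=0.95):
        """t-based interval for the mean, treating the annual means as iid normal."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        half = stats.t.ppf(0.5 + level / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
        return x.mean() - half, x.mean() + half

    mt_annual = annual_means(daily, "MT")
    print(mt_annual.describe())   # compare with the MT row of Table 2.1
    print(normal_ci(mt_annual))   # compare with Table 2.2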
We plot all three variables (annual mean MT, mt and PN) in the same graph, Figure 2.22. As shown in that figure, MT and mt show the same trends over time. To get an idea of how the two variables are related, we fit a regression line, taking mt as the response and MT as the explanatory variable. As seen in Figure 2.23, the regression fit looks very good. We repeat this analysis, this time taking MT as the explanatory variable and PN as the response. As shown in Figure 2.24, the fit is still reasonable, but the association is not as strong. As shown in Table 2.3, both fits are significant. One can criticize the use of a simple regression since the independence assumption might not be satisfied. Finding more reliable and sensible relationships among the variables needs a multivariate model taking account of correlation and other aspects of the processes. Also note that these are annual averages, which are not as correlated over time as the daily values, as seen in the annual time series plots.

Figure 2.17: The normal qq–plot for annual maximum temperature means (deg C) for Calgary.
Figure 2.18: The histogram of annual minimum temperature means (deg C) for Calgary with a normal curve fitted to the data.
Figure 2.19: The normal qq–plot for annual minimum temperature means (deg C) for Calgary.
Figure 2.20: The histogram of annual precipitation means (mm) for Calgary with a normal curve fitted to the data.
Figure 2.21: The normal qq–plot for annual precipitation means for Calgary.

Variables    Intercept   Slope    p-value for intercept   p-value for slope
mt (deg C)   -10.40      0.746    2 × 10^-16               2 × 10^-16
PN (mm)      2.13        -0.082   1.49 × 10^-14            0.0005

Table 2.3: Lines fitted to annual mean minimum temperature and annual mean precipitation against annual mean maximum temperature.

Next we look at the change in the seasonal means for all three variables. As we noted above, there are missing data, particularly near the beginning of the time series; this has caused the gap at the beginning of most plots. To get a longer time series of means, we first compute the monthly means, allowing 3 missing days, and then compute the annual mean using the monthly means. This is reasonable since nearby days have similar values. We do the regression analysis for three locations: Calgary, Banff and Medicine Hat. We fit the regression line to the annual means, spring means, summer means, fall means and winter means for each of MT, mt and PN with respect to time.

Figure 2.22: The time series plots of maximum temperature (deg C), minimum temperature (deg C) and precipitation (mm) annual means for Calgary. The bottom curve is minimum temperature, the middle one is precipitation and the top curve is maximum temperature.
Figure 2.23: The regression line fitted to maximum temperature and minimum temperature annual means for Calgary.
Figure 2.24: The regression line fitted to maximum temperature and precipitation annual means for Calgary.
Figure 2.25: The regression line fitted to summer minimum temperature means against time for Calgary.

The results are given in Tables 2.4, 2.5 and 2.6. We have only included fits that turned out to be significant. Note that PN does not appear in any of the tables. The annual minimum temperature and the summer mean minimum temperature show an increase in all three locations. Figure 2.25 depicts one of the time series (the mt summer mean for Calgary) with the regression line fitted.

Variable     Season   Intercept   Slope     p-value for intercept   p-value for slope
mt (deg C)   Year     -24.72      0.112     2 × 10^-5                0.0001
mt (deg C)   Spring   -30.05      0.138     0.0008                   0.0024
mt (deg C)   Summer   -20.11      0.0144    6 × 10^-7                3 × 10^-11

Table 2.4: The regression line parameters for the fitted lines for each variable with respect to time for the Calgary site.

Variable     Season   Intercept   Slope     p-value for intercept   p-value for slope
MT (deg C)   Year     -12.99      0.0105    0.019                    0.0002
MT (deg C)   Spring   -17.0       0.0048    0.075                    0.009
MT (deg C)   Fall     -12.64      0.0106    0.19                     0.0326
mt (deg C)   Year     -37.0       0.01666   2 × 10^-10               2 × 10^-8
mt (deg C)   Spring   -49.8       0.0229    5 × 10^-9                10^-7
mt (deg C)   Summer   -36.8       0.0212    2 × 10^-15               2 × 10^-16

Table 2.5: The regression line parameters for the fitted lines for each variable with respect to time for the Banff site.

Variable     Season   Intercept   Slope     p-value for intercept   p-value for slope
MT (deg C)   Year     -24.6       0.0185    0.00102                  3 × 10^-6
MT (deg C)   Spring   -34.24      0.0235    0.009                    0.0005
mt (deg C)   Year     -39.98      0.0197    5 × 10^-10               2 × 10^-9
mt (deg C)   Spring   -39.81      0.0196    5 × 10^-5                9 × 10^-5
mt (deg C)   Summer   -10.93      0.0112    0.0199                   7 × 10^-6
mt (deg C)   Fall     -24.66      0.0122    0.0110                   0.0137

Table 2.6: The regression line parameters for the fitted lines for each variable with respect to time for the Medicine Hat site.

2.4 Daily values, distributions

This section studies the daily values for all three variables. To that end, we pick four days of the year: January 1st, April 1st, July 1st and October 1st. We look at the time series, histograms and normal qq–plots for each variable over the years; Figures 2.26 to 2.31 give the results. The plots show that a normal distribution fits the daily MT and mt data for the selected days fairly well. However, some deviations from the normal distribution are seen, particularly in the tails. We also tried the first day of each month and observed similar results.

Figure 2.26: The time series of daily maximum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.
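A sketch of how the fixed-date series behind these figures could be pulled out and screened for normality is given below; it again uses the hypothetical daily frame, and the Shapiro–Wilk test is only one possible formal check alongside the qq–plots discussed in the text.

    # A sketch, assuming the hypothetical `daily` frame from the earlier examples.
    from scipy import stats

    def same_date_series(daily, var, month, day):
        """Values of `var` on a fixed calendar date (e.g. July 1st) across all years."""
        mask = (daily.index.month == month) & (daily.index.day == day)
        return daily.loc[mask, var].dropna()

    for month, day, label in [(1, 1, "Jan 1"), (4, 1, "Apr 1"), (7, 1, "Jul 1"), (10, 1, "Oct 1")]:
        x = same_date_series(daily, "MT", month, day)
        w, p = stats.shapiro(x)
        print(f"{label}: n = {len(x)}, Shapiro-Wilk p-value = {p:.3f}")
        # stats.probplot(x, dist="norm") gives the points of a normal qq-plot.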
Figure 2.27: The histogram of daily maximum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.
Figure 2.28: The normal qq–plots of daily maximum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.
Figure 2.29: The time series of daily minimum temperature for Calgary for four given dates: January 1st, April 1st, July 1st and October 1st.
Figure 2.30: The histogram of daily minimum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.
Figure 2.31: The normal qq–plots of daily minimum temperature at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.
Figure 2.32: The time series of daily precipitation at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.

We plot the histogram for PN as well (Figure 2.33). The distribution is far from normal because of the high frequency of no-PN (dry) days. Next, we use the available years to compute confidence intervals for the mean of every given day of the year for MT and mt. For PN, we construct confidence intervals for the probability of PN. [A PN day is defined to be a day with PN > 0.2 mm, because any precipitation amount less than 0.2 mm is barely measurable.] Figures 2.34 to 2.36 give the confidence intervals for the means. The confidence intervals for the standard deviations (obtained by bootstrap techniques) are given in Figures 2.37 to 2.39. A regular seasonal pattern is seen in the means and standard deviations. For example, the maximum for MT and mt occurs around the 200th day of the year (in July) and the minimum occurs at the beginning and the end of the year.
Figure 2.33: The histogram of daily precipitation at the Calgary site for four given dates: January 1st, April 1st, July 1st and October 1st.
Figure 2.34: The confidence intervals for the daily mean maximum temperature (deg C) at the Calgary site. The dashed line shows the upper bound and the solid line the lower bound of the confidence intervals.

Comparing the plots of the means and the standard deviations, we observe that warmer days have smaller standard deviations than colder days. For example, the minimum standard deviation for the maximum and minimum temperature occurs around the 200th day of the year, which corresponds to the warmest period of the year. The plots seem to indicate that a simple periodic function suffices to model the seasonal patterns. Contrary to MT and mt, for the 0-1 PN process the standard deviation is highest in June, when the probability of precipitation is close to 1/2. As shown above, the distribution of daily PN values is far from normal. This time, after removing the zeros, we fit a Gamma distribution to PN (Figure 2.42). The Gamma qq–plots are given in Figure 2.43 and reveal a fairly good fit.

Figure 2.35: The confidence intervals for the daily mean minimum temperature (deg C) at the Calgary site. The dashed line shows the upper bound and the solid line the lower bound of the confidence intervals.
Figure 2.36: The confidence intervals for the probability of precipitation at the Calgary site for the days of the year. The dashed line shows the upper bound and the solid line the lower bound of the confidence intervals.
Figure 2.37: The confidence intervals for the standard deviation of each day of the year for maximum temperature (deg C) at the Calgary site. The dashed line shows the upper bound and the solid line the lower bound of the confidence intervals.
Figure 2.38: The confidence intervals for the standard deviation of each day of the year for minimum temperature (deg C) at the Calgary site. The dashed line shows the upper bound and the solid line the lower bound of the confidence intervals.
Figure 2.39: The confidence intervals for the standard deviation (sd) of each day of the year for the probability of precipitation (0-1 precipitation process) at the Calgary site. The dashed line shows the upper bound and the solid line the lower bound of the confidence intervals. The plot shows sd ≤ 1/2; this is because sd = √(p(1−p)), which has a maximum value of 1/2.
Figure 2.40: The distribution of each day of the year for MT (deg C) from January 1st to December 1st. The year has been divided into two halves. In each half, rainbow colors are used to show the change of the distribution.
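A sketch of the kind of Gamma fit shown in Figures 2.42 and 2.43 is given below. It reuses the hypothetical daily frame and the same_date_series helper from the earlier sketches, keeps wet days only (PN > 0.2 mm) and pins the location parameter at zero; this is a generic maximum-likelihood fit, not necessarily the exact routine used in the thesis.

    # A sketch, assuming `daily` and `same_date_series` from the earlier examples.
    import numpy as np
    from scipy import stats

    wet = same_date_series(daily, "PN", month=7, day=1)
    wet = wet[wet > 0.2]                              # measurable precipitation only

    shape, loc, scale = stats.gamma.fit(wet, floc=0)  # ML estimates of shape and scale
    print(f"alpha-hat = {shape:.2f}, scale-hat = {scale:.2f}")

    # Theoretical vs. sample quantiles for a Gamma qq-plot against the fitted curve.
    probs = (np.arange(1, len(wet) + 1) - 0.5) / len(wet)
    theo_q = stats.gamma.ppf(probs, shape, loc=0, scale=scale)
    sample_q = np.sort(wet.values)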
Figure 2.41: The distribution of each day of the year for mt (deg C) from January 1st to December 1st. The year has been divided into two halves. In each half, rainbow colors are used to show the change of the distribution.
Figure 2.42: The histogram of daily precipitation greater than 0.2 mm at the Calgary site with the Gamma density curve fitted using maximum likelihood.
Figure 2.43: The qq–plots of daily precipitation greater than 0.2 mm at the Calgary site with the Gamma curve fitted using maximum likelihood.
Figure 2.44: The Gamma fit of each day of 4 months for precipitation (mm). In each month, rainbow colors are used to show the change of the distribution.

Figures 2.40, 2.41 and 2.44 show the result of our investigation of the change in the distribution over a period of time. For MT and mt, we have done this over the course of the year. The figures show how the distribution deforms continuously over the year; we can also notice changes in the mean and standard deviation over the year. For PN, we have done the same for only 4 different months because of the high irregularity of the process.

Next, we look at the parameters of the Gamma distribution fitted to PN over the course of a year. If we use the maximum likelihood estimates (MLE), which we used above to form the Gamma curve, the confidence intervals obtained by the bootstrap method are very wide (they tend rapidly to infinity). Hence, we use the method of moments estimates (MOM) to obtain confidence intervals. The MOM confidence intervals are given in Figure 2.46. When using the MLE, since there is no closed form, we need to use Newton's method to find the maximizing values. The MOM, on the other hand, gives a closed form solution: equating the sample mean and variance to their Gamma counterparts yields x̄²/s² for the shape and s²/x̄ for the scale. This advantage might explain the better behavior of the MOM estimates in forming the confidence intervals. However, even the MOM confidence intervals do not look satisfactory; they are rather wide and irregular, especially at the beginning and end of the year.

Figure 2.45: The maximum likelihood estimate for α, the shape parameter of the Gamma distribution fitted to the precipitation amounts.

We can also consider the 0-1 process of PN (1 for wet and 0 for dry) and compute the transition probabilities for PN (Figure 2.47). The figure shows that the probability of PN changes continuously over the year and can be modeled by a simple periodic function.
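A sketch of how the transition frequencies plotted in Figure 2.47 (and their second-order versions in Figures 2.48 and 2.49) could be estimated is given below; it pools all available years and again relies on the hypothetical daily frame used throughout, with missing days simply treated as dry for the purpose of the illustration.

    # A sketch, assuming the hypothetical `daily` frame from the earlier examples.
    import pandas as pd

    wet = (daily["PN"] > 0.2).astype(int)   # 0-1 precipitation process (NaN counted as dry)
    frame = pd.DataFrame({"prev": wet.shift(1), "now": wet})
    frame["doy"] = daily.index.dayofyear
    frame = frame.dropna()

    # p01(t): P(wet today | dry yesterday); p11(t): P(wet today | wet yesterday)
    p01 = frame[frame.prev == 0].groupby("doy")["now"].mean()
    p11 = frame[frame.prev == 1].groupby("doy")["now"].mean()

    # Second-order estimates (e.g. p011 and p111) condition on lag 2 as well.
    frame2 = pd.DataFrame({"p2": wet.shift(2), "p1": wet.shift(1), "now": wet})
    frame2["doy"] = daily.index.dayofyear
    frame2 = frame2.dropna()
    p011 = frame2[(frame2.p2 == 0) & (frame2.p1 == 1)].groupby("doy")["now"].mean()
    p111 = frame2[(frame2.p2 == 1) & (frame2.p1 == 1)].groupby("doy")["now"].mean()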
Considering the 0-1 process of PN as a chain leads to the interesting question of the order of the Markov chain. Let us denote a PN occurrence by 1 and 0 otherwise: xt = 1 denotes PN on day t and xt = 0 denotes no PN, and let p_{x_{t−r}···x_t}(t) denote the probability of observing xt on day t of the year conditional on the chain (x_{t−r}, · · · , x_{t−1}). In Figure 2.47, we have plotted the estimated p̂11(t) and p̂01(t) for different days of the year.

Figure 2.46: The confidence interval for the MOM estimate of the shape parameter, α, of the Gamma distribution fitted to daily precipitation amounts. The dotted line is the upper bound and the solid line the lower bound. The upper bounds at the beginning and end of the year become very large; they are not shown, since otherwise the pattern in the rest of the year could not be seen.
Figure 2.47: The 1st-order transition probabilities. The dotted line is the probability of precipitation if it happened the day before (p̂11) and the dashed line is the probability of precipitation if it did not happen the day before (p̂01). The clear gap between these two estimated probabilities indicates that a 1st–order Markov chain should be preferred to a 0th–order one.
Figure 2.48: The 2nd–order transition probabilities for the precipitation at the Calgary site: p̂111 (solid) against p̂011 (dotted).
Figure 2.49: The 2nd–order transition probabilities for the precipitation at the Calgary site: p̂001 (solid) against p̂101 (dotted).

Figures 2.48 and 2.49 plot p̂111 against p̂011 and p̂001 against p̂101. These estimated probabilities seem to be close and overlap heavily over the course of the year. Hence a 1st–order Markov chain seems to suffice for describing the binary process of PN.

2.5 Correlation

The correlation in a spatial–temporal process can depend on time and space. This section studies the temporal and spatial patterns of the correlation function separately.

Figure 2.50: The correlation and covariance plot for maximum temperature at the Calgary site for January 1st and the 732 following days.

2.5.1 Temporal correlation

Here we look at the correlation/covariance of the variables as a function of time. The location is taken to be the Calgary site. First we look at the correlation/covariance of a given day with the days that follow it. We pick January 1st and compute the correlation/covariance with the following days: January 2nd, January 3rd, and so on. Figure 2.50 shows that the correlation and covariance have the same trends for maximum temperature. Figures 2.51 to 2.53 show a decreasing trend of the correlation over time for MT, mt and PN. The decrease is far from linear and looks roughly exponential. The plots also indicate that only a few subsequent days are appreciably correlated; in particular, two days that are one year apart can be considered independent. This assumption might be useful in building a spatial–temporal model.
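A sketch of how such lag correlations could be computed is given below: the January 1st value of each year is paired with the value d days later, and the correlation is taken across years. It again assumes the hypothetical daily frame from the earlier examples.

    # A sketch, assuming the hypothetical `daily` frame from the earlier examples.
    import numpy as np
    import pandas as pd

    def lag_correlation(daily, var, max_lag=732):
        jan1 = daily.index[(daily.index.month == 1) & (daily.index.day == 1)]
        base = daily.loc[jan1, var].to_numpy()
        corrs = []
        for d in range(1, max_lag + 1):
            later = daily.reindex(jan1 + pd.Timedelta(days=d))[var].to_numpy()
            ok = ~np.isnan(base) & ~np.isnan(later)
            corrs.append(np.corrcoef(base[ok], later[ok])[0, 1] if ok.sum() > 2 else np.nan)
        return np.array(corrs)

    mt_corr = lag_correlation(daily, "MT")   # decays roughly exponentially with the lag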
Figure 2.51: The correlation plot for maximum temperature (deg C) at the Calgary site for January 1st and the 732 following days.
Figure 2.52: The correlation plot for minimum temperature (deg C) at the Calgary site for January 1st and the 732 following days.
Figure 2.53: The correlation plot for precipitation (mm) at the Calgary site for January 1st and the 732 following days.

Next we look at the correlation of the responses on other days of the year with their 30 subsequent days. Our goal is to see if the correlation function has the same behavior over the course of a year. We pick February 1st, April 1st, July 1st and October 1st. Figures 2.54 to 2.56 show similar patterns. Finally, we look at the correlation of two fixed locations over the course of the year (by changing the day). The results are given in Figures 2.57 and 2.58. Strong correlation and clear seasonal patterns are seen for MT and mt. This seems to indicate in particular that the temperature process is not stationary. The correlation in the middle of the year, around day 200, which corresponds to the summer season, seems to be smaller than the correlation at the beginning and end of the year, which correspond to the cold season.

Figure 2.54: The correlation plot for maximum temperature (deg C) at the Calgary site for Feb 1st (solid), April 1st (dashed), July 1st (dotted) and Oct 1st (dot-dash) and the 30 following days.
Figure 2.55: The correlation plot for minimum temperature (deg C) at the Calgary site for Feb 1st (solid), April 1st (dashed), July 1st (dotted) and Oct 1st (dot-dash) and the 30 following days.
Figure 2.56: The correlation plot for precipitation (mm) at the Calgary site for Feb 1st (solid), April 1st (dashed), July 1st (dotted) and Oct 1st (dot-dash) and the 30 following days.
Figure 2.57: The correlation plot for maximum temperature and minimum temperature (deg C) between Calgary and Medicine Hat.
Figure 2.58: The correlation plot for precipitation (mm) between Calgary and Medicine Hat.
Figure 2.59: The correlation plot for maximum temperature (deg C) with respect to distance (km).

2.5.2 Spatial correlation

This subsection looks at the spatial correlation by fixing the time to a few dates distributed over the year's climate regime: January 1st, April 1st, July 1st and October 1st. We plot the correlation with respect to the geodesic distance (km) on the surface of the earth. Figures 2.59 to 2.62 show the results for MT, mt, PN and the 0-1 PN process respectively.
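A sketch of how such distance–correlation plots could be assembled is given below. The stations dictionary (mapping station names to latitude/longitude) and the per-station loader load_daily are hypothetical names introduced only for this illustration; the great-circle (haversine) distance stands in for the geodesic distance used in the text.

    # A sketch with hypothetical `stations` and `load_daily` helpers.
    import numpy as np
    from itertools import combinations

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres."""
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    def date_corr(frame_a, frame_b, var, month, day):
        """Correlation of `var` on a fixed calendar date between two stations, across years."""
        a = frame_a.loc[(frame_a.index.month == month) & (frame_a.index.day == day), var]
        b = frame_b.loc[(frame_b.index.month == month) & (frame_b.index.day == day), var]
        joined = a.to_frame("a").join(b.to_frame("b"), how="inner").dropna()
        return np.corrcoef(joined["a"], joined["b"])[0, 1]

    pairs = []
    for s1, s2 in combinations(stations, 2):
        dist = haversine_km(*stations[s1], *stations[s2])
        rho = date_corr(load_daily(s1), load_daily(s2), "MT", month=7, day=1)
        pairs.append((dist, rho))       # points of a correlation-vs-distance plot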
For MT and mt, we observe a clear decreasing trend with respect to distance. The trend for PN does not seem to be regular.

Figure 2.60: The correlation plot for minimum temperature (deg C) with respect to distance (km).
Figure 2.61: The correlation plot for precipitation (mm) with respect to distance (km).
Figure 2.62: The correlation plot for the precipitation (mm) 0-1 process with respect to distance (km).

2.6 Summary and conclusions

This section summarizes the findings of the exploratory analysis.

• There is a strong seasonal trend in the temperature and precipitation processes. See Figures 2.7, 2.8, 2.11 and 2.36.
• The summer average minimum temperature has increased at several locations over the past century. See Figure 2.25.
• mt and MT are highly correlated. See Figure 2.23.
• The distributions of daily maximum temperature and minimum temperature are rather close to the Gaussian distribution in the center, with some deviations seen in the tails. See Figures 2.27 and 2.29.
• The temperature process in Alberta is less variable in the warm seasons, and the converse holds for the precipitation process. See Figures 2.37, 2.38 and 2.39.
• The distribution of the daily temperature varies continuously over the course of the year. This could not be shown for precipitation (perhaps because more data are needed).
• The correlation between two sites depends on the time of the year; the sites are more correlated in the cold seasons. This might be because there are more (strong) global weather regimes in the cold seasons influencing the whole region.
• The correlation over time for MT, mt and PN seems stationary and decreases with a nonlinear (roughly exponential) trend with respect to the time difference.
• The spatial correlations for MT and mt are strong and decrease almost linearly with respect to the geodesic distance.
• The spatial correlation for PN is not strong. It might be that the sites are too far apart to capture the spatial correlation for PN.

The following chapters investigate some of these items. In particular, after developing some theory regarding Markov chains, we investigate the order of the binary precipitation process. Then we turn to modeling the occurrence of extreme temperatures. Instead of using a Gaussian process to model the temperature and using that to infer about the occurrence of the extremes, we use a categorical chain. This is because of the deviations from normality in the tails pointed out above.
Chapter 3
rth-order Markov chains

3.1 Introduction

This chapter studies rth–order categorical Markov chains and, more generally, categorical discrete–time stochastic processes. By "categorical", we mean chains that have a finite number of possible states at each time point. Such chains have important applications in many areas, one of which is modeling weather processes such as precipitation over time. In fact, we use these chains to model the binary process of precipitation as well as dichotomized temperature processes. In rth–order Markov chains, the conditional probability of the present given the past is modeled. Such a conditional probability is a function of the past r states, each of which takes only finitely many possible values.

It is useful and intuitively appealing to specify or model a discrete process over time by the conditional probabilities rather than the joint distribution. However, one must check the consistency of such a specification, i.e. prove that it corresponds to a full joint distribution. In the case of discrete–time categorical processes, we prove a theorem that shows the conditional probabilities can be used to specify the process. We also prove a representation theorem which states that every such conditional probability, after an appropriate transformation, can be written as a linear combination of monomials of the past states. In fact, we represent all categorical discrete–time stochastic processes, in particular rth–order Markov chains and, more particularly, stationary rth–order Markov chains. For the binary case the result is a consequence of an expansion theorem due to Besag [6]. To generalize the result to arbitrary categorical Markov chains, we prove a new expansion theorem which extends the result from the binary case to arbitrary categorical rth–order Markov chains.

The result simplifies the task of modeling categorical stochastic processes. Since we have written the conditional probability as a linear combination, we can simply add other covariates as linear terms to the model to build non-stationary chains. For example, we can add seasonal terms or geographical coordinates (longitude and latitude). The theory of "partial likelihood" allows us to estimate the parameters of such chain models for the binary case. By restricting the degree of those polynomials or by requiring that some of their coefficients be the same, we can find simpler models. Simulation studies show that the BIC criterion (Bayesian information criterion) combined with the partial likelihood works well, in that it recovers the correct simulation model. Since we are only dealing with the categorical case, all the density functions in this chapter are densities with respect to the counting measure on the real line.

Specifying a categorical chain over time (with positive joint densities) using conditional probabilities of the present given the past is quite common in statistics and probability. However, we did not find a rigorous result giving necessary and sufficient conditions for a collection of functions to correspond to the conditionals of a unique stochastic process. Such a result is proved here in Theorem 3.3.1. It is an easy consequence of Lemma 3.3.2, which states that the "ascending" joint densities uniquely determine such a stochastic process.
Another commonly used technique in statistics is transforming a discrete probability density from (0, 1) to the real numbers using a transformation such as the logarithm, as in logistic regression for example. This is done to remove the restrictions on these quantities and ease the modeling of such probabilities. Theorem 3.4.1 provides a characterization of all such density functions given any bijective transformation between the positive numbers and the reals. Hence any positive discrete density function (mass function) corresponds to a unique real–valued function, and any arbitrary real–valued function corresponds to a positive density (after fixing the transformation and one element with positive probability). We do not know of a result in this generality elsewhere. Modeling such an arbitrary real–valued function on a finite domain is clearly easier.

In order to find a parametric form for an arbitrary real–valued function on a finite domain, for the binary case we use a corollary of a result stated by Besag [6], who used such functions in modeling Markov random fields. However, Besag did not provide a rigorous proof, and the statement of the theorem is flawed, as also pointed out by Cressie et al. in [14]. They also state a correct version of the theorem without offering a proof. We provide a rigorous statement and proof in Theorem 3.5.1; the corollary can only be obtained once the flaw in the statement is fixed. In order to extend the result to stochastic processes that can have more than two states at some times, we prove a new representation theorem in Theorem 3.5.6. Some novel simplified models with fewer parameters for such processes are given in Subsection 3.5.3, and many of them are investigated in later chapters to model the occurrence of precipitation and of extreme temperature events.

3.2 Markov chains

Let $\{X_t\}_{t\in T}$ be a stochastic process on the index set $T$, where $T = \mathbb{Z}$, $T = \mathbb{N}$ (the integers or natural numbers respectively) or $T = \{0, 1, \cdots, n\}$. It is customary to call $\{X_t\}_{t\in T}$ a chain, since $T$ is countable and has a natural ordering. $\{X_t\}_{t\in T}$ is called an rth–order Markov chain if
\[ P(X_t \mid X_{t-1}, X_{t-2}, \cdots) = P(X_t \mid X_{t-1}, \cdots, X_{t-r}), \quad \forall t \text{ such that } t, t-r \in T. \]
We call the Markov chain homogeneous if
\[ P(X_t = x_t \mid X_{t-1} = x_{t-1}, \cdots, X_{t-r} = x_{t-r}) = P(X_{t'} = x_t \mid X_{t'-1} = x_{t-1}, \cdots, X_{t'-r} = x_{t-r}), \]
for all $t, t' \in T$ such that $t - r$ and $t' - r$ are also in $T$. Note that Markovness can be defined as a local property: we call $\{X_t\}_{t\in T}$ locally rth–order Markov at $t$ if $P(X_t \mid X_{t-1}, \cdots) = P(X_t \mid X_{t-1}, \cdots, X_{t-r})$. Hence we can have chains with a different Markov order at different times.

Let $X_t$ be the binary random variable for precipitation on day $t$, with 1 denoting the occurrence of precipitation and 0 non-occurrence. In particular, consider the precipitation (PN) for the Calgary site from 1895 to 2006. This process can be considered in two possible ways:

1. Let $X_1, X_2, \cdots, X_{366}$ denote the binary random variables of precipitation for the days of a year. Suppose we repeatedly observe this chain year-by-year from 1895 to 2006 and take these observed chains to be independent and identically distributed from one year to the next. With this assumption, techniques developed in [4] can be applied in order to infer the Markov order of the chain. However, this approach presents three issues. Firstly, independence of the successive chains seems questionable. In particular, the end of any one year will be autocorrelated with the beginning of the next.
Secondly, this model unrealistically assumes the 0-1 precipitation stochastic process is identically distributed over all years. Thirdly, and more technically, leap years have 366 days while non–leap years have 365. We can resolve this last issue by formally assuming a missing-data day in the non–leap years, by dropping the last day in the leap year, or by using other methods. However, none of these approaches seems completely satisfactory.

2. Alternatively, we could consider the observations of Calgary daily precipitation as coming from a single process that spans the entire time interval from 1895 to 2006. In this case, we will show below that we can still build models that bring in the seasonality effects within a year.

3.3 Consistency of the conditional probabilities

To represent a stochastic process, we only need to specify the joint probability distributions for all finite collections of states. The Kolmogorov extension theorem then guarantees the existence and uniqueness of an underlying stochastic process from which these distributions derive, provided they are consistent as described below. (See [9] for example.) To state the version of that celebrated theorem we require, let $T$ denote some interval (that can be thought of as "time"), and let $n \in \mathbb{N} = \{1, 2, \ldots\}$. For each $k \in \mathbb{N}$ and finite sequence of times $t_1, \cdots, t_k$, let $\nu_{t_1\cdots t_k}$ be a probability measure on $(\mathbb{R}^n)^k$. Suppose that these measures satisfy two consistency conditions:

1. Permutation invariance. For all permutations $\pi$ (a bijective map from a set to itself) of $1, \cdots, k$ and measurable sets $F_i \subseteq \mathbb{R}^n$,
\[ \nu_{t_{\pi(1)}\cdots t_{\pi(k)}}(F_1 \times \cdots \times F_k) = \nu_{t_1\cdots t_k}\big(F_{\pi^{-1}(1)} \times \cdots \times F_{\pi^{-1}(k)}\big). \]

2. Marginalization consistency. For all measurable sets $F_i \subseteq \mathbb{R}^n$ and $m \in \mathbb{N}$,
\[ \nu_{t_1\cdots t_k}(F_1 \times \cdots \times F_k) = \nu_{t_1\cdots t_k t_{k+1}\cdots t_{k+m}}(F_1 \times \cdots \times F_k \times \mathbb{R}^n \times \cdots \times \mathbb{R}^n). \]

Then there exists a probability space $(\Omega, \mathcal{F}, P)$ and a stochastic process $X : T \times \Omega \to \mathbb{R}^n$ such that
\[ \nu_{t_1\cdots t_k}(F_1 \times \cdots \times F_k) = P(X_{t_1} \in F_1, \ldots, X_{t_k} \in F_k), \]
for all $t_i \in T$, $k \in \mathbb{N}$ and measurable sets $F_i \subseteq \mathbb{R}^n$; i.e. $X$ has the $\nu_{t_1\cdots t_k}$ as its finite–dimensional distributions. (See [37] for more details.)

Remark. Note that Condition 1 is equivalent to
\[ \nu_{t_{\pi(1)}\cdots t_{\pi(k)}}\big(F_{\pi(1)} \times \cdots \times F_{\pi(k)}\big) = \nu_{t_1\cdots t_k}(F_1 \times \cdots \times F_k). \]
This is seen by replacing $F_1 \times \cdots \times F_k$ by $F_{\pi(1)} \times \cdots \times F_{\pi(k)}$ in the first equality.

Remark. We are only concerned with the case $n = 1$, because we consider stochastic processes, i.e. collections of random variables from the same sample space to $\mathbb{R}^1 = \mathbb{R}$.

When working with (higher order) Markov chains over the index set $\mathbb{N}$, it is natural to consider the conditional distributions of the present, time $t$, given the past instead of the finite joint distributions; in other words,
\[ P_t(x_0, \cdots, x_t) = P(X_t = x_t \mid X_{t-1} = x_{t-1}, \cdots, X_0 = x_0), \]
for $\{X_t\}_{t\in\mathbb{N}\cup\{0\}}$, plus the starting distribution $P_0(x_0) = P(X_0 = x_0)$. However, that raises a fundamental question: does there exist a stochastic process whose conditional distributions match the specified ones, and if so, is it unique? We answer this question affirmatively in this section for the case of discrete–time categorical processes, in particular higher order categorical Markov chains. We also restrict ourselves to chains for which all the joint probabilities are positive. Let $M_0, M_1, \cdots \subset \mathbb{R}$ be the state spaces for times $0, 1, \cdots$, where each one of them is of finite cardinality.
A probability measure on the finite space $M_0$ can be represented through its density function, a positive function $P_0 : M_0 \to \mathbb{R}$ satisfying the condition $\sum_{m\in M_0} P_0(m) = 1$. The following theorem ensures the consistency of our probability model.

Theorem 3.3.1 Suppose $M_0, M_1, \cdots \subset \mathbb{R}$, $|M_t| = c_t < \infty$, $t = 0, 1, \cdots$. Let $P_0 : M_0 \to \mathbb{R}$ be the density of a probability measure on $M_0$ and, more generally for $n = 1, \ldots$, let $P_n(x_0, x_1, \cdots, x_{n-1}, \cdot)$ be a positive probability density on $M_n$, $\forall (x_0, \cdots, x_{n-1}) \in M_0 \times \cdots \times M_{n-1}$. Then there exists a unique stochastic process (up to distributional equivalence) on a probability space $(\Omega, \Sigma, P)$ such that
\[ P(X_n = x_n \mid X_{n-1} = x_{n-1}, \cdots, X_0 = x_0) = P_n(x_0, x_1, \cdots, x_{n-1}, x_n). \]

To prove this theorem, we first consider a related problem whose solution is used in the proof. More precisely, we consider stochastic processes $\{X_n\}_{n\in\mathbb{N}\cup\{0\}}$, where the state space for $X_n$ is $M_n$, $n = 0, 1, 2, \cdots$, and finite. Suppose $p_n : M_0 \times M_1 \times \cdots \times M_n \to \mathbb{R}$ is the joint probability distribution (density) of the random vector $(X_0, \ldots, X_n)$, i.e.
\[ p_n(x_0, \cdots, x_n) = P(X_0 = x_0, \cdots, X_n = x_n). \]
We call such a sequence of functions, $\{p_n\}_{n\in\mathbb{N}}$, the "ascending joint distributions" of the stochastic process $\{X_n\}_{n\in\mathbb{N}\cup\{0\}}$. It is clear that, given such a family of functions $\{p_n\}_{n\in\mathbb{N}}$, other joint distributions such as $P(X_{t_1} = x_{t_1}, \cdots, X_{t_k} = x_{t_k})$ are obtainable by summing over the appropriate components. Now consider the inverse problem: given the $\{p_n\}_{n\in\mathbb{N}}$ and some type of consistency between them, is there a (unique) stochastic process that matches these joint distributions? The following lemma gives an affirmative answer.

Lemma 3.3.2 Suppose $M_t \subset \mathbb{R}$, $t = 0, 1, \cdots$ are finite, $p_0 : M_0 \to \mathbb{R}$ represents a probability density function (i.e. $\sum_{x_0\in M_0} p_0(x_0) = 1$) and the functions $p_n : M_0 \times \cdots \times M_n \to \mathbb{R}^+ \cup \{0\}$ satisfy the following (consistency) condition:
\[ \sum_{x_n\in M_n} p_n(x_0, \cdots, x_n) = p_{n-1}(x_0, \cdots, x_{n-1}). \]
Then there exists a unique stochastic process (up to distributional equivalence) $\{X_t\}_{t\in\mathbb{N}\cup\{0\}}$ such that
\[ P(X_0 = x_0, \cdots, X_n = x_n) = p_n(x_0, \cdots, x_n). \]

Proof Existence: By the Kolmogorov extension theorem quoted above, we only need to show there exists a consistent family of measures (density functions) $\{q_{t_1,\cdots,t_k} \mid k \in \mathbb{N}, (t_1, \cdots, t_k) \in \mathbb{N}^k\}$ such that $q_{1,\cdots,t} = p_t$. We define such a family of functions and prove they are measures and are consistent. For any sequence $t_1, \cdots, t_k$, let $t = \max\{t_1, \cdots, t_k\}$ and define
\[ q_{t_1,\cdots,t_k}(x_{t_1}, \cdots, x_{t_k}) = \sum_{x_u\in M_u,\ u\in\{1,\cdots,t\}-\{t_1,\cdots,t_k\}} p_t(x_1, \cdots, x_t). \]
We need to prove three things:

a) Each $q_{t_1,\cdots,t_k}$ is a density function. It suffices to show that $q_t$ is a measure, because the $q_{t_1,\cdots,t_k}$ are sums of such measures and so are measures themselves. But $p_t$ is nonnegative by assumption, so it only remains to show that $p_t$ sums to one. For $t = 1$ this is in the assumptions of the theorem. For $t > 1$, it follows by induction because of the identity
\[ \sum_{x_i\in M_i,\ i=0,1,\cdots,t} p_t(x_0, \cdots, x_t) = \sum_{x_i\in M_i,\ i=0,1,\cdots,t-1} p_{t-1}(x_0, \cdots, x_{t-1}), \]
where the right hand side is obtained by the assumption $\sum_{M_n} p_n = p_{n-1}$.

b) In order to satisfy the first condition of the Kolmogorov extension theorem, we need to show
\[ q_{t_1,\cdots,t_k}(x_{t_1}, \cdots, x_{t_k}) = q_{t_{\pi(1)},\cdots,t_{\pi(k)}}(x_{t_{\pi(1)}}, \cdots, x_{t_{\pi(k)}}), \]
for $\pi$ a permutation of $\{1, 2, \cdots, k\}$. But this is obvious since $\max\{t_1, \cdots, t_k\} = \max\{t_{\pi(1)}, \cdots, t_{\pi(k)}\}$.
c) In order to satisfy the second condition of the Kolmogorov extension theorem, we need to show
\[ \sum_{x_{t_i}\in M_{t_i}} q_{t_1,\cdots,t_i,\cdots,t_k}(x_{t_1}, \cdots, x_{t_i}, \cdots, x_{t_k}) = q_{t_1,\cdots,\hat t_i,\cdots,t_k}(x_{t_1}, \cdots, \hat x_{t_i}, \cdots, x_{t_k}), \]
where the notation $\hat{\ }$ above a component means that component is omitted. To prove this, we consider two cases.

Case I: $t = \max\{t_1, \cdots, t_k\} = \max\{t_1, \cdots, \hat t_i, \cdots, t_k\}$:
\[ \sum_{x_{t_i}\in M_{t_i}} q_{t_1,\cdots,t_i,\cdots,t_k}(x_{t_1}, \cdots, x_{t_i}, \cdots, x_{t_k}) = \sum_{x_{t_i}\in M_{t_i}} \sum_{x_u\in M_u,\ u\in\{1,\cdots,t\}-\{t_1,\cdots,t_i,\cdots,t_k\}} p_t(x_1, \cdots, x_t) \]
\[ = \sum_{x_u\in M_u,\ u\in\{1,\cdots,t\}-\{t_1,\cdots,\hat t_i,\cdots,t_k\}} p_t(x_1, \cdots, x_t) = q_{t_1,\cdots,\hat t_i,\cdots,t_k}(x_{t_1}, \cdots, \hat x_{t_i}, \cdots, x_{t_k}). \]

Case II: $\max\{t_1, \cdots, \hat t_i, \cdots, t_k\} = t' < t = t_i$:
\[ \sum_{x_{t_i}\in M_{t_i}} q_{t_1,\cdots,t_i,\cdots,t_k}(x_{t_1}, \cdots, x_{t_i}, \cdots, x_{t_k}) = \sum_{x_u\in M_u,\ u\in\{1,\cdots,t\}-\{t_1,\cdots,\hat t_i,\cdots,t_k\}} p_t(x_1, \cdots, x_t) \]
\[ = \sum_{x_u\in M_u,\ u\in\{1,\cdots,t'\}-\{t_1,\cdots,\hat t_i,\cdots,t_k\}} \ \sum_{x_v\in M_v,\ v\in\{t'+1,\cdots,t\}} p_t(x_1, \cdots, x_t) \]
\[ = \sum_{x_u\in M_u,\ u\in\{1,\cdots,t'\}-\{t_1,\cdots,\hat t_i,\cdots,t_k\}} p_{t'}(x_1, \cdots, x_{t'}) = q_{t_1,\cdots,\hat t_i,\cdots,t_k}(x_{t_1}, \cdots, \hat x_{t_i}, \cdots, x_{t_k}). \]

Uniqueness: Suppose $\{Y_t\}_{t\in\mathbb{N}\cup\{0\}}$ is another stochastic process satisfying the conditions of the theorem, with the $p'_{t_1,\cdots,t_k}$ as its joint measures. Then $p'_{1,\cdots,t} = p_t = p_{1,\cdots,t}$ by assumption. Taking the appropriate sums on the two sides, we get $p'_{t_1,\cdots,t_k} = p_{t_1,\cdots,t_k}$. The uniqueness is now a direct consequence of the Kolmogorov extension theorem.

Remark. Note that we did not impose positivity of the functions in this case.

Now we are ready to prove Theorem 3.3.1.

Proof Existence: In Lemma 3.3.2, let $p_0 = P_0$ and
\[ p_1 : M_0 \times M_1 \to \mathbb{R}, \quad p_1(x_0, x_1) = p_0(x_0) P_1(x_0, x_1), \]
\[ \vdots \]
\[ p_n : M_0 \times M_1 \times \cdots \times M_n \to \mathbb{R}, \quad p_n(x_0, \cdots, x_n) = p_{n-1}(x_0, \cdots, x_{n-1}) P_n(x_0, \cdots, x_n). \]
To see that the $\{p_i\}$ satisfy the conditions of Lemma 3.3.2, note that
\[ \sum_{x_n\in M_n} p_n(x_0, \cdots, x_n) = \sum_{x_n\in M_n} p_{n-1}(x_0, \cdots, x_{n-1}) P_n(x_0, \cdots, x_n) = p_{n-1}(x_0, \cdots, x_{n-1}) \sum_{x_n\in M_n} P_n(x_0, \cdots, x_n) = p_{n-1}(x_0, \cdots, x_{n-1}). \]
Lemma 3.3.2 shows the existence of a stochastic process with joint distributions matching the $p_i$. Furthermore, the positivity of the $\{P_i\}$ implies that of the $\{p_i\}$. Thus all the conditionals exist for such a process, and they match the $P_i$ by the definition of conditional probability.

Uniqueness: Any stochastic process satisfying the above conditions has joint distributions that match the $\{p_i\}$, and hence by the above lemma it is unique.

3.4 Characterizing density functions and rth–order Markov chains

The previous section represented discrete–time categorical processes in terms of conditional probability density functions. However, such densities on finite domains satisfy certain restrictions that can make modeling them difficult. That leads to the idea of linking them to unrestricted functions on $\mathbb{R}$, in much the same spirit as a single probability can profitably be logit-transformed in logistic regression. To begin, let $X$ be a random variable with probability density $p$ defined on a finite set $M = \{m_1, \cdots, m_n\}$. This section finds the class of all possible such $p$ with $p(m_i) > 0$, $i = 1, \cdots, n$, given a fixed bijection $g : \mathbb{R} \to \mathbb{R}^+$; for example $g(x) = \exp(x)$. The following theorem characterizes the relationship between $p$ and $g$.
While particular examples of the following theorem are commonly used in statistical modeling, we are not aware of a reference which contains this result or its proof in this generality.

Theorem 3.4.1 Let $g : \mathbb{R} \to \mathbb{R}^+$ be a bijection. For every choice of probability density $p$ on $M = \{m_1, \cdots, m_n\}$, $n \ge 2$, there exists a unique function $f : M - \{m_1\} \to \mathbb{R}$ such that
\[ p(m_1) = \frac{1}{1 + \sum_{y\in M-\{m_1\}} h(y)}, \quad (3.1) \]
\[ p(x) = \frac{h(x)}{1 + \sum_{y\in M-\{m_1\}} h(y)}, \quad x \ne m_1, \quad (3.2) \]
where $h = g \circ f$. Moreover, $h(x) = p(x)/p(m_1)$. Conversely, for an arbitrary function $f : M - \{m_1\} \to \mathbb{R}$, the $p$ defined above is a density function.

Proof Existence: Suppose $p : M \to (0, 1)$ is given. Let $h(x) = p(x)/p(m_1)$, $x \ne m_1$, and $f : M - \{m_1\} \to \mathbb{R}$, $f(x) = g^{-1} \circ h(x)$. Obviously $h = g \circ f$. Moreover,
\[ \frac{1}{1 + \sum_{y\in M-\{m_1\}} h(y)} = \frac{1}{1 + \sum_{y\in M-\{m_1\}} p(y)/p(m_1)} = \frac{1}{1 + (1 - p(m_1))/p(m_1)} = p(m_1) \]
and
\[ \frac{h(x)}{1 + \sum_{y\in M-\{m_1\}} h(y)} = \frac{p(x)/p(m_1)}{1 + (1 - p(m_1))/p(m_1)} = p(x), \]
thereby establishing the validity of equations (3.1) and (3.2).
Uniqueness: Suppose that for $f_1$ and $f_2$ we get the same $p$. Let $h_1 = g \circ f_1$, $h_2 = g \circ f_2$. Dividing (3.2) by (3.1) for $h_1$ and $h_2$, we get $h_1(x) = p(x)/p(m_1) = h_2(x)$, hence $g \circ f_1 = g \circ f_2$. Since $g$ is a bijection, $f_1 = f_2$.

Corollary 3.4.2 Fixing a bijection $g$ and $m_1 \in M$, every density function corresponds to an arbitrary vector of length $n - 1$ over $\mathbb{R}$.

Example Consider the binomial distribution with $a$ trials and probability of success $\pi$, and the transformation $g(x) = \exp(x)$. Then $M = \{0, 1, \cdots, a\}$. Let $m_1 = 0$; then for $x \ne 0$,
\[ f(x) = g^{-1}(h(x)) = \log\frac{p(x)}{p(0)} = \log\left[\binom{a}{x}\pi^x(1-\pi)^{a-x}\Big/(1-\pi)^a\right] = \log\binom{a}{x} + x\log\frac{\pi}{1-\pi}. \]

Theorem 3.4.3 Fix a bijection $g : \mathbb{R} \to \mathbb{R}^+$ and $m_1^n \in M_n$. Let $M_n$, $n = 0, 1, \cdots$ be finite subsets of $\mathbb{R}$ with cardinality at least 2 and $M'_n = M_n - \{m_1^n\}$, $\forall n$. Then every categorical stochastic process with positive joint distributions on the $M_n$, having initial density $P_0 : M_0 \to \mathbb{R}$ and conditional probabilities $P_n$ at stage $n$ given the past, can be represented by means of unique functions
\[ g_0 : M'_0 \to \mathbb{R}, \quad \ldots, \quad g_n : M_0 \times \cdots \times M_{n-1} \times M'_n \to \mathbb{R}, \quad \ldots \]
for $n = 1, \ldots$, where
\[ P_0(m_1^0) = \frac{1}{1 + \sum_{y\in M_0-\{m_1^0\}} h_0(y)}, \quad (3.3) \]
\[ P_0(x) = \frac{h_0(x)}{1 + \sum_{y\in M_0-\{m_1^0\}} h_0(y)}, \quad x \ne m_1^0 \in M_0, \quad (3.4) \]
and $h_0 = g \circ g_0$; moreover $h_0(x) = P(X_0 = x)/P(X_0 = m_1^0)$. The conditional probabilities $P_n$ are given by
\[ P_n(x_0, \cdots, x_{n-1}, m_1^n) = \frac{1}{1 + \sum_{y\in M_n-\{m_1^n\}} h_n(y)}, \quad (3.5) \]
\[ P_n(x_0, \cdots, x_{n-1}, x) = \frac{h_n(x)}{1 + \sum_{y\in M_n-\{m_1^n\}} h_n(y)}, \quad x \ne m_1^n \in M_n, \quad (3.6) \]
where $h_n = g \circ g_n$; moreover $h_n(x_0, \cdots, x) = P(X_n = x \mid X_{n-1} = x_{n-1}, \cdots, X_0 = x_0) / P(X_n = m_1^n \mid X_{n-1} = x_{n-1}, \cdots, X_0 = x_0)$. Conversely, any collection of arbitrary functions $g_0, g_1, \cdots$ gives rise to a unique stochastic process through the above relations.

Proof The result is immediate from Theorems 3.3.1 and 3.4.1.

Remark. We can view the arbitrary functions $g_0, \cdots, g_n$ on $M'_0,\ M_0 \times M'_1,\ \cdots,\ M_0 \times \cdots \times M_{n-1} \times M'_n$ as arbitrary functions $g_0$ on $M'_0$, $g_1(\cdot, x_1)$, $x_1 \ne m_1^1$, on $M_0$, and $g_n(\cdot, x_n)$, $x_n \ne m_1^n$, on $M_0 \times \cdots \times M_{n-1}$. As a check, we can compute the number of free parameters of such a stochastic process on $M_0, \cdots, M_n$. We can specify such a process by $c_0 c_1 \cdots c_n - 1$ parameters by specifying the joint distribution on $M_0 \times M_1 \times \cdots \times M_n$. If we specify the stochastic process using the above theorems and the $g_i$ functions, we need $(c_0 - 1) + c_0(c_1 - 1) + c_0 c_1 (c_2 - 1) + \cdots + c_0 c_1 \cdots c_{n-1}(c_n - 1)$, which is the same number after expanding the terms and canceling out.
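The following toy numerical illustration (not part of the thesis) shows the correspondence of Theorem 3.4.1 with g(x) = exp(x): a positive density on a finite set maps to an unrestricted real-valued function on M − {m1}, and any such function maps back to a density through equations (3.1)–(3.2). The set and the probability values are made up for the example.

    # A toy illustration of Theorem 3.4.1 with g = exp; values are made up.
    import numpy as np

    def density_to_f(p, m1):
        """f(x) = g^{-1}(p(x) / p(m1)) with g = exp, for x != m1."""
        return {x: np.log(px / p[m1]) for x, px in p.items() if x != m1}

    def f_to_density(f, m1):
        """Rebuild the density from an arbitrary real-valued f via (3.1)-(3.2)."""
        h = {x: np.exp(fx) for x, fx in f.items()}
        denom = 1.0 + sum(h.values())
        p = {x: hx / denom for x, hx in h.items()}
        p[m1] = 1.0 / denom
        return p

    p = {"a": 0.2, "b": 0.5, "c": 0.3}      # any positive density on M = {a, b, c}
    f = density_to_f(p, m1="a")             # unrestricted values on {b, c}
    print(f)                                # {'b': 0.916..., 'c': 0.405...}
    print(f_to_density(f, m1="a"))          # recovers p up to rounding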
Remark. In the case of rth–order Markov chains, $g_n(x_0, \cdots, x_n)$ only depends on the last $r + 1$ components for $n > r$.

Remark. In the case of homogeneous rth–order Markov chains, $M_i = M_0$, $\forall i$. Fix $m_0 \in M_0$ and suppose $|M_0| = c_0$. We only need to specify $g_0$ to $g_r$, which are completely arbitrary functions: $g_0$ on $M'_0$, $g_1$ on $M_0 \times M'_1$, and so on up to $g_r$ on $M_0 \times \cdots \times M_{r-1} \times M'_r$. This also shows that every homogeneous Markov chain of order at most $r$ is characterized by $(c_0 - 1)\sum_{i=0}^{r} c_0^i = c_0^{r+1} - 1$ elements of $\mathbb{R}$. We could also have counted all such Markov chains by noting that they are uniquely represented by the joint probability density $p_{r+1}$ on $M_0^{r+1}$, which has $c_0^{r+1} - 1$ free parameters (since it has to sum to 1).

To describe processes using Markov chains, we need to find appropriate parametric forms. We investigate the generality of these forms in the following section and use the concept of partial likelihood to estimate them. We find appropriate parametric representations of the $g_n$, which are functions of $n + 1$ finite variables; in the next section we study the properties of such functions. We call a variable "finite" if it only takes values in a finite subset of $\mathbb{R}$.

3.5 Functions of r variables on a finite domain

This section studies the properties of functions of $r$ variables with finite domain. First, we present a result of Besag [6], who studied such functions in the context of Markov random fields. However, the statement of the result in his paper is inaccurate, and moreover no rigorous proof is given there. We present a rigorous statement and proof of the result and a generalization of Besag's theorem.

3.5.1 First representation theorem

This subsection presents a corrected version of a theorem stated by Besag in [6] and a constructive proof. Then we generalize this theorem and apply it to stationary binary Markov chains to get a parametric representation.

Theorem 3.5.1 Suppose $f : \prod_{i=1,\cdots,r} M_i \to \mathbb{R}$, with $M_i$ finite, $|M_i| = c_i$ and $0 \in M_i$, $\forall i$, $1 \le i \le r$. Let $M'_i = M_i - \{0\}$. Then there exists a unique family of functions $\{G_{i_1,\cdots,i_k} : M'_{i_1} \times M'_{i_2} \times \cdots \times M'_{i_k} \to \mathbb{R},\ 1 \le k \le r,\ 1 \le i_1 < i_2 < \cdots < i_k \le r\}$ such that
\[ f(x_1, \cdots, x_r) = f(0, \cdots, 0) + \sum_{i=1}^{r} x_i G_i(x_i) + \sum_{1\le i_1<i_2\le r} x_{i_1} x_{i_2} G_{i_1,i_2}(x_{i_1}, x_{i_2}) + \cdots + x_1 x_2 \cdots x_r\, G_{1,2,\cdots,r}(x_1, \cdots, x_r). \]

Applied to a binary rth–order Markov chain (states 0 and 1) together with Theorem 3.4.1, this expansion yields a corollary: for a bijection $g : \mathbb{R} \to \mathbb{R}^+$, the transformed conditional probability ratio at time $t$ is a function of the previous $t$ states when $t < r$ and of the previous $r$ states when $t \ge r$. Hence there exist unique parameters $\alpha^t_0$ and $\{\alpha^t_{i_1,\cdots,i_k},\ 1 \le k \le t,\ 1 \le i_1 < \cdots < i_k \le t\}$ for $t < r$, and $\alpha^t_0$ and $\{\alpha^t_{i_1,\cdots,i_k},\ 1 \le k \le r,\ 1 \le i_1 < \cdots < i_k \le r\}$ for $t \ge r$, such that for $t < r$
\[ g^{-1}\Big\{\frac{P(X_t = 1 \mid X_{t-1}, \cdots, X_0)}{P(X_t = 0 \mid X_{t-1}, \cdots, X_0)}\Big\} = \alpha^t_0 + \sum_{i=1}^{t} \alpha^t_i X_{t-i} + \sum_{1\le i_1<i_2\le t} \alpha^t_{i_1,i_2} X_{t-i_1} X_{t-i_2} + \cdots + \alpha^t_{1,\cdots,t} X_{t-1} \cdots X_0, \]
with the analogous expansion in the lags $X_{t-1}, \cdots, X_{t-r}$ only for $t \ge r$.

The above corollary shows that the conditional probability of a Markov chain, after an appropriate transformation, can be uniquely represented as a linear combination of monomial products of previous states. One might conjecture that the same result holds for all categorical–valued Markov chains (with a finite number of states) using the above theorem. This is not true in general, since the $\{G_{i_1,\cdots,i_k}\}$ are functions. In the next subsection, we prove another representation theorem which paves the way for the categorical case. As it turns out, we need more terms in order to write the transformed conditional probability as a linear combination of the past process.

3.5.2 Second representation theorem

In this section, we prove a new representation theorem for functions of $r$ finite variables. We start with the trivial case of a one–variable function on a finite domain and then extend the result to $r$–variable functions.
The proof for the general case is non–trivial and is again done by induction.

Lemma 3.5.5 Suppose $f : M \to \mathbb{R}$, with $M \subset \mathbb{R}$ finite of cardinality $c$. Let $d = c - 1$. Then $f$ has a unique representation of the form
\[ f(x) = \sum_{0\le i\le d} \alpha_i x^i, \quad \forall x \in M. \]

Remark. The lemma states that, if we consider the vector space $V = \{f : M \to \mathbb{R}\}$, then the monomial functions $\{p_i\}_{0\le i\le d}$, where $p_i : M \to \mathbb{R}$, $p_i(x) = x^i$, form a basis for $V$.

Proof First note that the dimension of $V$ is $c$. To show this, suppose $M = \{m_1, \cdots, m_c\}$ and consider the following isomorphism of vector spaces:
\[ I : V \to \mathbb{R}^c, \quad f \mapsto (f(m_1), \cdots, f(m_c)). \]
It only remains to show that $\{p_i\}_{0\le i\le d}$ is an independent set. To prove this, suppose $\sum_{0\le i\le d} \alpha_i x^i = 0$, $\forall x \in M$. That would mean the $d$–th degree polynomial $p(x) = \sum_{0\le i\le d} \alpha_i x^i$ has at least $c = d + 1$ distinct roots, which is more than its degree. This contradicts the fundamental theorem of algebra.

Theorem 3.5.6 (Categorical Expansion Theorem) Suppose $M_i$ is a finite subset of $\mathbb{R}$ with $|M_i| = c_i$, $i = 1, 2, \cdots, r$. Let $d_i = c_i - 1$, $M = \prod_{i=1,\cdots,r} M_i$, and consider the vector space of real–valued functions $V = \{f : M \to \mathbb{R}\}$, with function addition as the addition operation and multiplication of a function by a real number as the scalar product. Then this vector space has dimension $C = \prod_{i=1,\cdots,r} c_i$, and $\{x_1^{i_1} \cdots x_r^{i_r}\}_{0\le i_1\le d_1,\cdots,0\le i_r\le d_r}$ forms a basis for it.

Proof To show that the dimension of the vector space is $C$, suppose $M = \{m_1, \cdots, m_C\}$ and consider the following isomorphism of vector spaces:
\[ I : V \to \mathbb{R}^C, \quad f \mapsto (f(m_1), \cdots, f(m_C)). \]
To show that $\{x_1^{i_1} \cdots x_r^{i_r}\}_{0\le i_1\le d_1,\cdots,0\le i_r\le d_r}$ forms a basis, we only need to show that it is an independent collection, since there are exactly $C$ elements in it. We proceed by induction on $r$. The case $r = 1$ was shown in the above lemma. Suppose we have shown the result for $r - 1$ and we want to show it for $r$. Assume a linear combination of the basis is equal to zero. We can arrange the terms based on powers of $x_r$:
\[ p_0(x_1, \cdots, x_{r-1}) + x_r\, p_1(x_1, \cdots, x_{r-1}) + \cdots + x_r^{d_r}\, p_{d_r}(x_1, \cdots, x_{r-1}) = 0, \quad (3.7) \]
$\forall (x_1, \cdots, x_r) \in M_1 \times \cdots \times M_r$. Fix values $x'_1, \cdots, x'_{r-1} \in M_1 \times \cdots \times M_{r-1}$. Then Equation (3.7) is zero for the $c_r$ values of $x_r$. Hence by Lemma 3.5.5 all the coefficients
\[ p_0(x'_1, \cdots, x'_{r-1}),\ p_1(x'_1, \cdots, x'_{r-1}),\ \cdots,\ p_{d_r}(x'_1, \cdots, x'_{r-1}) \]
are zero, and we conclude
\[ p_0(x_1, \cdots, x_{r-1}) = 0,\ p_1(x_1, \cdots, x_{r-1}) = 0,\ \cdots,\ p_{d_r}(x_1, \cdots, x_{r-1}) = 0, \quad \forall (x_1, \cdots, x_{r-1}) \in M_1 \times \cdots \times M_{r-1}. \]
Again, by the induction assumption all the coefficients in these polynomials are zero. Hence all the coefficients in the original linear combination in Equation (3.7) are zero.

Corollary 3.5.7 Suppose $X_t$ is a categorical stochastic process, where $X_t$ takes values in $M_t$, $|M_t| = c_t = d_t + 1 < \infty$. Also assume that the conditional probability $P(X_t = x_t \mid X_{t-1} = x_{t-1}, \cdots, X_0 = x_0)$ is well–defined and lies in $(0, 1)$. Fix $m_t^1 \in M_t$.
Let g : R → R+ be a bijective transformation, then there are unique parameters {αti0,··· ,it}t∈N,0≤i0≤dt−1,0≤i1≤dt−1,0≤i2≤dt−2,··· ,0≤it≤d0 , such that P (Xt = xt|Xt−1 = xt−1, · · · ,X0 = x0) = Pt(x0, · · · , xt), where Pt(x0, · · · , xt−1,mt1) = 1 1 + ∑ y∈M−{mt1} ht(y) , (3.8) Pt(x0, · · · , xt−1, x) = h(x) 1 + ∑ y∈M−{m1} ht(y) , x 6= mt1 ∈Mt, (3.9) for ht(x0, · · · , xt) = g ◦ gt(x0, · · · , xt−1, xt) and gt(x0, · · · , xt−1, xt) = ∑ 0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤it≤d0 αti0,··· ,itx i0 t−0 · · · xitt−t, (x0, · · · , xt) ∈M0 × · · · ×Mt−1 ×M ′t . On the other hand any set of arbitrary parameters αti0,··· ,it gives rise to a unique stochastic process with the above equations. Corollary 3.5.8 Suppose that {Xt} is an rth–order Markov chain where Xt takes values in Mt a finite subset of real numbers, |Mt| = ct = dt + 1 <∞, the conditional probability P (Xt = xt|Xt−1 = xt−1, · · · ,X0 = x0), is well–defined and belongs to (0, 1). Fix m1t ∈Mt, let M ′t =Mt−{m1t} and suppose g : R→ R+ is a given bijective transformation. Then gt(xt, · · · , x0) = g−1{ P (Xt = xt|Xt−1 = xt−1, · · · ,X0 = x0) P (Xt = m 1 t |Xt−1 = xt−1, · · · ,X0 = x0) }, 82 3.5. Functions of r variables on a finite domain is a function of t + 1 variables for t < r, (xt, · · · , x0) and is a function of r + 1 variables,(xt, · · · , xt−r), for t > r. Hence there exist parameters {αti0,··· ,it}0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤it≤d0 , for t < r and {αti0,··· ,ir}0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤ir≤dt−r , for t ≥ r such that for t < r: g−1{ P (Xt = xt|Xt−1 = xt−1, · · · ,X0 = x0) P (Xt = m 1 t |Xt−1 = xt−1, · · · ,X0 = x0) } =∑ 0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤it≤d0 αti0,··· ,itx i0 t−0 · · · xitt−t, (x0, · · · , xt) ∈M0 × · · ·Mt−1 ×M ′t , and for t ≥ r: g−1{ P (Xt = xt|Xt−1 = xt−1, · · · ,X0 = x0) P (Xt = m 1 t |Xt−1 = xt−1, · · · ,X0 = x0) } =∑ 0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤ir≤dt−r αti0,··· ,irx i0 t−0 · · · xirt−r (x0, · · · , xt) ∈M0 × · · ·Mt−1 ×M ′t . Moreover any collection of arbitrary parameters {αti0,··· ,it}0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤it≤d0 , for t < r, and {αti0,··· ,ir}0≤i0≤dt−1,0≤i1≤dt−1,··· ,0≤ir≤dt−r , for t ≥ r, specify a unique stochastic process (upto distribution) by the above relations. In the case of homogenous Markov chains the αti1,··· ,ir do not depend on t for t > r. One might question the usefulness of such a representation. After all we have exactly as many parameters in the model as the values of the original function. In the following, we explain the importance of linear representa- tions of such functions. 83 3.5. Functions of r variables on a finite domain 1. A vast amount of theory has been developed to deal with linear mod- els. Generalized linear models in the case of independent sequence of random variables is a powerful tool. As we will see in sequel, these ideas can be imported into time series using the concept of partial likelihood. 2. Although we have as many parameters in the model as the values of the original function, the representation gives us a convenient framework for modeling, in particular for making various model reductions by omitting some terms or assuming certain coefficients are equal. 3. Although this is a representation for stationary rth–order Markov chains (or representation for arbitrary locally rth–order chains at time t), this representation allows us to accommodate other explanatory variables simply as additive linear terms and extend the model to non–stationary cases. This cannot be done in the same way if we try to model the original values of the function. 
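To make the last two points concrete, the following R sketch (with arbitrary illustrative probabilities, not estimates from any data) shows the saturated representation for a stationary binary 2nd–order Markov chain: any table of conditional probabilities in (0, 1) is reproduced exactly by logit(p) = α0 + α1 x1 + α2 x2 + α12 x1 x2, and model reduction amounts to dropping or constraining some of these coefficients.

    ## A minimal sketch: the saturated logit representation of a stationary binary
    ## 2nd-order Markov chain.  Any table of conditional probabilities
    ## P(X_t = 1 | X_{t-1} = x1, X_{t-2} = x2) in (0,1) is reproduced exactly by
    ## logit(p) = a0 + a1*x1 + a2*x2 + a12*x1*x2.  The probabilities below are
    ## purely illustrative.

    logit  <- function(p) log(p / (1 - p))
    ilogit <- function(z) 1 / (1 + exp(-z))

    ## an arbitrary transition table, indexed by (x1, x2) in {0,1}^2
    p <- c("00" = 0.20, "10" = 0.55, "01" = 0.35, "11" = 0.80)

    ## solve for the four coefficients from the four logits
    a0  <- logit(p[["00"]])
    a1  <- logit(p[["10"]]) - a0
    a2  <- logit(p[["01"]]) - a0
    a12 <- logit(p[["11"]]) - a0 - a1 - a2

    ## check: the linear form reproduces the original probabilities exactly
    grid   <- expand.grid(x1 = 0:1, x2 = 0:1)
    fitted <- ilogit(a0 + a1 * grid$x1 + a2 * grid$x2 + a12 * grid$x1 * grid$x2)
    cbind(grid, fitted)   # matches p, up to the ordering of the rows

Dropping the interaction coefficient α12, for instance, gives the reduced additive model with only 1, x1 and x2 as covariates.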
Example As an example consider a categorical response variable Y and r categorical explanatory variables X1, · · · ,Xr, are given. Suppose the Xi takes values in the Mi which include 0. Our purpose is to model Y based on X1, · · · ,Xr. In order to do that, we consider the conditional probability P (Y = y|X1 = x1, · · · ,Xr = xr). Again, we assume that the conditional probability is well-defined everywhere and takes values in (0, 1). The above theorem shows that after applying a transformation the conditional probability can be written as a linear com- bination of multiples of powers of the Xi. Although, the theorem above shows the form of the conditional prob- ability in general and paves the way to the estimation of the conditional probabilities by estimating the parameters, the large number of parameters makes this a challenging task which might be impractical in some cases. In the next section, we introduce some classes of r variable functions that can be useful for some applications. 84 3.5. Functions of r variables on a finite domain 3.5.3 Special cases of functions of r finite variables The first class of functions we introduce are obtained by power restrictions. We simply assume that gt can be represented only by powers less than k. Suppose Xt takes values in 0, 1, · · · , ct − 1. Then for a k-restricted power stationary rth–order Markov chain, the gt, t > r is given by:∑ 0≤i1≤d1,··· ,0≤ir≤dr , ∑ j ij≤k αi1,··· ,irX i1 t−1 · · ·Xirt−r. In particular, we can let k = 1 and get β0 + ∑ i βiXt−i. This is useful especially for binary Markov chains. The second class of functions are useful in the case when relationships exist between the states in terms of a semi–metric d. Suppose {Xt} is an rth–order Markov chain and Xt takes values in the same finite set M = {1, · · · ,m}. Also let d :M ×M → R, be a semi–metric being a mapping on M that satisfies the following condi- tions: d ≥ 0; d(x, z) ≤ d(x, y) + d(y, z); d(x, x) = 0. Then we introduce the following model: g−1{P (Xt = j|Xt−1, · · · ,Xt−r) P (Xt = 1|Xt−1, · · · ,Xt−r)} = α0,j + k∑ i=1 αi,jd(j,Xt−i) for j = 2, · · · ,m. For this model P (Xt = 1|Xt−1, · · · ,Xt−r) = 1− ∑ j=2,··· ,m P (Xt = j|Xt−1, · · · ,Xt−r). Finally, we introduce a simple class for the binary Markov chain of order r. For any bijective transformation g : R→ R+ g−1{P (Xt = 1|Xt−1, · · · ,Xt−r) P (Xt = 0|Xt−1, · · · ,Xt−r)} = α0 + α1Nt−1, 85 3.6. Generalized linear models for time series where Nt−1 = ∑r j=1Xt−j . For example in the 0-1 precipitation process example seen in the Introduction, Nt−1 counts the number of the days out of r days before today that had some precipitation. 3.6 Generalized linear models for time series Generalized linear models were developed to extend ordinary linear regres- sion to the case that the response is not normal. However, that extension required the assumption of independently observed responses. The notion of partial likelihood was introduced to generalize these ideas to time series where the data are dependent. What follows in this section is a summary of the first chapter in Kedem and Fokianos [27], which we have included for completeness. Definition Let Ft, t = 1, 2, · · · be an increasing sequence of σ–fields, F0 ⊂ F1 ⊂ F2, · · · and let Y1, Y2, · · · be a sequence of random variables such that Yt is Ft–measurable. Denote the density of Yt, given Ft, by ft(yt; θ), where θ ∈ Rp is a fixed parameter. The partial likelihood (PL) is given by PL(θ; y1, · · · , yN ) = N∏ t=1 ft(yt; θ). 
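As a small illustration of both the simple binary class introduced above and of this product form, the following R sketch (assuming the logit link and made-up parameter values) simulates a binary chain with logit P(Xt = 1 | past) = α0 + α1 Nt−1, where Nt−1 counts the occurrences among the r previous states, and evaluates the corresponding log partial likelihood as a sum of conditional log densities.

    ## Sketch (logit link, illustrative parameter values): simulate the simple
    ## order-r binary model  logit P(X_t = 1 | past) = alpha0 + alpha1 * N_{t-1},
    ## where N_{t-1} is the number of 1's among the r previous states, then
    ## evaluate the log partial likelihood of the simulated series.

    set.seed(1)
    r <- 5; n <- 1000
    alpha0 <- -1; alpha1 <- 0.3
    ilogit <- function(z) 1 / (1 + exp(-z))

    x <- c(rbinom(r, 1, 0.5), numeric(n))        # arbitrary initial r states
    for (t in (r + 1):(r + n)) {
      N <- sum(x[(t - r):(t - 1)])               # N_{t-1}: 1's among the last r states
      x[t] <- rbinom(1, 1, ilogit(alpha0 + alpha1 * N))
    }

    ## log partial likelihood: the log of the product of conditional densities f_t
    logPL <- function(a, x, r) {
      t_idx <- (r + 1):length(x)
      N <- sapply(t_idx, function(t) sum(x[(t - r):(t - 1)]))
      p <- ilogit(a[1] + a[2] * N)
      sum(dbinom(x[t_idx], 1, p, log = TRUE))
    }
    logPL(c(alpha0, alpha1), x, r)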
Example As an example, suppose Yt represents the 0-1 PN process in Calgary, while MTt denotes the maximum daily temperature process. We can define Ft as follows: 1. Ft = σ{Yt−1, Yt−2, · · · }. In this case, we are assuming the information available to us is the value of the process on each of the previous days. 2. Ft = σ{Yt−1, Yt−2, · · · MTt−1,MTt−2, · · · }. In this case, we are assum- ing we have all the information regarding the 0-1 process of precipita- tion and maximum temperature for previous days. 3. Ft = σ{Yt−1, Yt−2, · · · MTt,MTt−1,MTt−2, · · · }. In this case, we add to the information in 2 the knowledge of today’s maximum tempera- ture. The vector θ that maximizes the above equation is called the maximum partial likelihood (MPLE). Wong [48] has studied its properties. Its con- sistency, asymptotic normality and efficiency can be shown under certain regularity conditions. 86 3.6. Generalized linear models for time series In this report, we are mainly interested in the case: Ft = σ{Yt−1, Yt−2, · · · }. We assume that the information Ft is given as a vector of random variables and denote it by Zt, which we call the covariate process: Zt = (Zt1, · · · , Ztp)′. Zt might also include the past values of responses Yt−1, Yt−2, · · · . Let µt = E[Yt|Ft−1], be the conditional expectation of the response given the information we have up to the time t. Kedem and Fokianos in [27] address time series following generalized linear models satisfying certain conditions about the so-called random and systematic components: • Random components: For t = 1, 2, · · · , N f(yt; θt, φ|Ft−1) = exp{ytθt − b(θt) at(φ) + c(yt;φ)}. • The parametric function αt(φ) is of the form φ/wt, where φ is the dispersion parameter, and wt is a known parameter called “weight parameter”. The parameter θt is called the natural parameter. • Systematic components: For t = 1, 2, · · · , N , g(µt) = ηt = p∑ j=1 βjZ(t−1)j = Z ′t−1β, for some known monotone function g called the link function. Example Binary time series: As an example consider {Yt}, a binary time series. Let us denote by πt the probability of success given Ft−1. Then for t = 1, 2, · · · , N , f(yt; θt, φ|Ft−1) = exp(yt log( πt 1− πt ) + log(1− πt)) with E[Yt|Ft−1] = πt, b(θt) = − log(1 − πt) = log(1 + exp(θt)), V (πt) = πt(1− πt), φ = 1, and wt = 1. The canonical link gives rise to the so–called “logistic model”: g(πt) = θt(πt) = log( πt 1− πt ) = ηt = Z ′ t−1β. 87 3.6. Generalized linear models for time series In the notation of Corollary 3.5.4, Yt = Xt, πt = P (Xt = 1|Xt−1, · · · ,Xt−r) and Z ′t−1 = (1,Xt−1, · · · ,Xt−r,Xt−1Xt−2, · · · ,Xt−1 · · ·Xt−r). We can also consider other covariate processes such as Z ′t−1 = (1,Xt−1, · · · ,Xt−r) and so on. In order to study the asymptotic behavior of the maximum likelihood estimator, we consider the conditional information matrix. To establish large sample properties, the stability of the conditional information matrix and the central limit theorem for martingales are required. Proofs may be found in Kedem and Fokianos [27]. Inference for partial likelihood The definitions of partial likelihood and exponential family of distributions imply that the log partial likelihood is given by l(β) = N∑ t=1 log f(yt; θt, φ|Ft−1) = N∑ t=1 {ytθt − b(θt) αt(φ) + c(yt, φ)} = N∑ t=1 {ytu(z ′ t−1β)− b(u(z′t−1)) αt(φ) + c(yt, φ)} = N∑ t=1 lt, where u(.) = (g◦µ(.))−1 = µ−1(g−1(.)), so that θt = u(zt−1β). We introduce the notation, ▽ = ( ∂ ∂β1 , · · · , ∂ ∂βp )′ and call ▽l(β) the partial score. 
To compute the gradient, we can use the chain rule in the following manner ∂lt ∂βj = ∂lt ∂βj ∂θt ∂µt ∂µt ∂ηt ∂ηt ∂βj . Some algebra shows SN (β) = ▽l(β) = N∑ t=1 Z(t−1) ∂µt ∂ηt Yt − µt(β) σ2t (β) , where, σ2t (β) = V ar[Yt|Ft−1]. The partial score process is defined from the partial sums as St(β) = ▽l(β) = t∑ s=1 Z(s−1) ∂µs ∂ηs Ys − µs(β) σ2s(β) . 88 3.6. Generalized linear models for time series One can show the terms in the above sums to be orthogonal: E[Z(t−1) ∂µt ∂ηt Yt − µt(β) σ2t (β) Z(s−1) ∂µs ∂ηs Ys − µs(β) σ2s(β) ] = 0, s < t. Also, E[SN (β) = 0]. The cumulative information matrix is defined by GN (β) = N∑ t=1 Cov[Z(t−1) ∂µt ∂ηt Yt − µt(β) σ2t (β) |Ft−1]. The unconditional information matrix is simply Cov(SN (β)) = FN (β) = E[GN (β)]. Next let HN (β) = −▽▽′l(β). Kedem and Fokianso [27] show that HN(β) = GN (β)−RN (β), where RN (β) = 1 αt(φ) N∑ t=1 Zt−1dt(β)Z ′t−1(Yt − µt(β)), and dt(β) = [∂ 2u(ηt)/∂η 2 t ]. St satisfies the martingale property: E[St+1(β)|Ft−1] = St(β). To prove the consistency and other properties of the estimators, we need: Assumption A: A1. The true parameter β belongs to an open set B ⊂ R. A2. The covariate vector Zt almost surely lies in a non random compact set Γ of Rp, such that P [ ∑N t=1 Zt−1Z ′ t−1 > 0] = 1. In addition, Z ′ t−1β lies almost surely in the domain H of the inverse link function h = g−1 for all Zt−1 ∈ Γ and β ∈ B. A3. The inverse link function h, defined in (A2), is twice continuously differentiable and |∂h(λ)/∂λ| 6= 0. A4. There is a probability measure ν on Rp such that ∫ Rp zz′ν(dz) is positive definite, and such that for Borel sets A ⊂ Rp, 1 N N∑ t=1 I[Zt−1∈A] → ν(A). 89 3.7. Simulation studies Theorem 3.6.1 Under assumption A the maximum likelihood estimator is almost surely unique for all sufficiently large N , and 1. the estimator is consistent and asymptotically normal, β̂ p→ β in probability, and √ N(β̂ − β) d→ Np(0, G−1(β)), in distribution as N →∞, for some matrix G. 2. The following limit holds in probability, as N →∞: √ N(β̂ − β)− 1√ N G−1(β)SN (β) p→ 0. We follow Kedem and Fokianos [27], who used similar models, to assume the above conditions for our models. However, we conjecture that the above assumptions hold for the partial likelihood of stationary rth–order Markov chains (with strictly positive joint distribution) in terms of our parametric linear form at least for the binary case. In fact assumptions A1. to A3. are easy to check and only A4. poses some challenge. We leave this for future research and use several simulation studies to check the consistency of the estimators in next section as well as Chapter 4 and Chapter 10. For more discussion regarding the assumptions and consistency see [27]. 3.7 Simulation studies This section presents the results of some simulation studies about the partial likelihood applied to categorical rth–order Markov chains. We also investi- gate the performance of the BIC to pick the appropriate (“true”) model. In particular, we generate samples from a seasonal Markov chain Yt where, Zt−1 = (1, Yt−1, cos(ωt)), ω = 2π 366 . We consider this Markov chain over 5 years from 2000 to 2005 and assume logit{P (Yt = 1|Zt−1)} = β′Zt−1, where β = (−1, 1,−0.5). 90 3.7. Simulation studies To generate samples for this chain, we need an initial value of the past two states, which we take it to be (1, 1). We denote the process Yt−k by Y k for simplicity. 
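One replicate of this simulation can be generated and refitted along the following lines; since the link is the canonical logit, maximizing the partial likelihood here is an ordinary logistic regression on the lagged covariates, so glm() can be used. The parameter values are those of the true model, and the series length is roughly five years of daily data.

    ## Sketch of one replicate of the simulation study: generate the seasonal
    ## first-order chain with Z_{t-1} = (1, Y_{t-1}, cos(omega t)) and
    ## beta = (-1, 1, -0.5), then refit it by maximum partial likelihood.

    set.seed(123)
    omega  <- 2 * pi / 366
    beta   <- c(-1, 1, -0.5)
    n      <- 5 * 365                # roughly five years of daily observations
    ilogit <- function(z) 1 / (1 + exp(-z))

    y <- numeric(n); y_prev <- 1     # initial state taken to be 1
    for (t in 1:n) {
      p      <- ilogit(beta[1] + beta[2] * y_prev + beta[3] * cos(omega * t))
      y[t]   <- rbinom(1, 1, p)
      y_prev <- y[t]
    }

    ## maximum partial likelihood fit and BIC for the true covariate process
    dat <- data.frame(y = y[-1], y1 = y[-n], cosw = cos(omega * (2:n)))
    fit <- glm(y ~ y1 + cosw, family = binomial, data = dat)
    coef(fit)   # should be close to (-1, 1, -0.5)
    BIC(fit)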
To check the performance of the partial likelihood and of the variance estimates based on GN, we generate 50 chains with this initial value and then compare the parameter estimates with the true parameters. We also compare the theoretical variances with the experimental variances. Table 3.1 shows that the parameter estimates are fairly close to the true values. Also the experimental and theoretical variances are similar.

    β̂1      β̂2     β̂3     sim. sd(β̂1)  sim. sd(β̂2)  sim. sd(β̂3)  theo. sd(β̂1)  theo. sd(β̂2)  theo. sd(β̂3)
   -0.99    1.0   -0.42       0.07         0.10         0.07          0.06          0.12          0.07

Table 3.1: The estimated parameters for the model Zt−1 = (1, Yt−1, cos(ωt)) with parameters β = (−1, 1, −0.5). The standard deviation of the parameters is computed once using GN (theo. sd) and once using the generated samples (sim. sd).

Other simulation studies checking the validity of this method have been done in Kedem and Fokianos [27]. To check the normality of the parameter estimates, we plot histograms of the three parameter estimates in Figure 3.1. The figure shows that the parameter estimates have a distribution close to Gaussian.

Next we check the performance of the BIC criterion in picking the optimal ("true") model. We use the same model as above and then compute the BIC for a few models to see if BIC picks the right one. We denote Yt−k by Y k and cos(ωt) by COS for simplicity. For an assessment, we simulate a few other chains.

Figure 3.1: The distribution of parameter estimates for the model with the covariate process Zt−1 = (1, Yt−1, cos(ωt)) and parameters (β1 = −1, β2 = 1, β3 = −0.5).

Model: Zt−1                       BIC      parameter estimates
(1)                               2380.0   (-0.605)
(1, Y1)                           2267.1   (-1.03, 1.11)
(1, Y1, Y2)                       2273.7   (-1.064, 1.091, 0.101)
(1, Y1, COS)                      2217.7   (-1.00, 0.970, -0.558)
(1, Y1, SIN)                      2274.4   (-1.037, 1.117, 0.026)
(1, Y1, COS, SIN)                 2225.1   (-1.00, 0.970, -0.559, 0.028)
(1, Y1, Y2, Y1Y2)                 2281.1   (-1.055, 1.0615, 0.0647, 0.077)
(1, Y1, Y2, Y1Y2, COS)            2232.4   (-0.985, 0.943, -0.0870, 0.0915, -0.564)
(1, Y1, Y2, Y1Y2, COS, SIN)       2239.8   (-0.981, 0.957, -0.0946, 0.0723, -0.575, 0.0232)

Table 3.2: BIC values for several models competing for the role of the true model, where Zt−1 = (1, Y1, COS), β = (−1, 1, −0.5).

As we see in Table 3.2, the true model has the smallest BIC, showing that the criterion performs well in this case. Also note that models which include the covariates of the true model give accurate estimates for the parameters associated with (1, Y1, COS), while the remaining parameters are estimated to be very small in magnitude.

Model: Zt−1                       BIC      parameter estimates
(1)                               2537.3   (0.0799)
(1, Y1)                           2329.5   (-0.649, 1.417)
(1, Y1, Y2)                       2245.5   (-1.022, 1.144, 0.998)
(1, Y1, COS)                      2265.9   (-0.553, 1.236, -0.617)
(1, Y1, SIN)                      2336.7   (-0.648, 1.415, -0.0433)
(1, Y1, COS, SIN)                 2273.0   (-0.552, 1.235, -0.617, -0.0480)
(1, Y1, Y2, Y1Y2)                 2251.3   (-1.08, 1.287, 1.140, -0.278)
(1, Y1, Y2, Y1Y2, COS)            2213.7   (-0.936, 1.11, 0.966, -0.175, -0.511)
(1, Y1, Y2, Y1Y2, COS, SIN)       2221.2   (-0.927, 1.101, 0.940, -0.160, -0.549, -0.0441)
(1, Y1, Y2, COS)                  2206.8   (-0.899, 1.0263, 0.875, -0.515)

Table 3.3: BIC values for several models competing for the role of the true model, given by Zt−1 = (1, Y1, Y2, COS), β = (−1, 1, 1, −0.5).

Table 3.3 presents the true model in the last row. Ignore that row for a moment.
The smallest “BIC” corresponds to (1, Y 1, Y 2, Y 1Y 2, COS), which has an component Y 1Y 2 added to the true model. However, the coefficients of this model are very close to the true model and the coefficient for Y 1Y 2 is relatively small in magnitude. The true model has the smallest BIC again and the parameter estimates are close to the correct values. 3.8 Concluding remarks In summary, this chapter shows that a categorical discrete–time stochastic process can be represented using a small number of ascending joint distri- butions P (X0 = x0), P (X0 = x0,X1 = x1), P (X0 = x0,X1 = x1,X2 = x2), · · · . As a corollary of the above, we showed that a categorical discrete–time stochastic process can be represented using the conditional probabilities P (X0 = x0), P (X1 = x1|X0 = x0), P (X2 = x2|X0 = x0,X1 = x1), · · · . A parametric form was found for the conditional probability distribution of categorical discrete–time stochastic processes. The parameters can be estimated for stationary binary Markov chains using partial likelihood. 93 Chapter 4 Binary precipitation process 4.1 Introduction This chapter studies the Markov order of the 0-1 precipitation process (PN from now on). Many authors such as Anderson et al. in [4] and Barlett in [5] have developed techniques to test different assumptions about the order of the Markov chain. For example in [4], Anderson et al. develop a Chi-squared test to test that a Markov chain is of a given order against a larger order. In particular, we can test the hypothesis that a chain is 0th–order Markov against a 1st–order Markov chain, which in this case is testing independence against the usual (1st–order) Markov assumption. (This reduces simply to the well–known Pearson’s Chi-squared test.) Hence, to “choose” the Markov order one might follow a strategy of testing 0th– order against 1st–order, testing 1st–order against 2nd–order and so on to rth–order against (r + 1)th–order, until the test rejects the null hypothesis and then choose the last r as the optimal order. However, some drawbacks are immediately seen with this method: 1. The choice of the significance level will affect our chosen order. 2. The method only works for chains with several independent observa- tions of the same finite chain. 3. We cannot account for some other explanatory variables in the model, for example the maximum temperature. Issues like this have led researchers to think about other methods of order selection. Akaike in [2], using the information distance and Schwartz in [42] using Bayesian methods develop the AIC and BIC, respectively. Other methods and generalizations of the above methods have been proposed by some authors such as Hannan in [20], Shibita in [44] and Haughton in [22]. Many authors have studied the order of precipitation processes at dif- ferent locations on Earth. Gabriel et al. in [18] use the test developed in Anderson et al. [4] to show that the precipitation in Tel-aviv is a 1st–order 94 4.2. Models for 0-1 precipitation process Markov chain. Tong in [45] used the AIC for Hong Kong, Honolulu and New York and showed that the process is 1st–order in Hong Kong and Honolulu but 0th–order in New York. In a later paper, [46], Tong and Gates use the same techniques for Manchester and Liverpool in England and also re- examined the Tel–aviv data. Chin in [12] studies the problem using AIC over 100 stations (separately) in the United States over 25 years. He concludes that the order depends on the season and geographical location. 
Moreover, he finds a prevalence of first order conditional dependence in summer and higher orders in winter. Other studies have been done by several authors using similar techniques over other locations. For example, Moon et al. in [35] study this issue at 14 location in South Korea. This report investigates the Markov order for a cold–climate region. The Markov order of the precipitation in this region might be different due to a large fraction of precipitation being being in the form of snowfall. The re- port also drops the homogeneity (stationarity) condition usually imposed in studying the Markov order. In fact the model proposed here can accommo- date both continuous (here time and potentially geographical location and other explanatory variables) and categorical variables (e.g. precipitation occurred/not occurred on a given day). An issue with increasing the order of a Markov chain is the exponential increase in number of parameters in the model. Here as a special case, we propose models that increase with the order of Markov chain by adding only 1 parameter. Other authors such as Raftery in [40] and Ching in [13] have proposed other methods to reduce the number of parameters. The dataset used in this study contains more than 110 years of daily precipitation for some stations. This allows us to look at some properties of the precipitation process such as stationarity more closely. 4.2 Models for 0-1 precipitation process In the light of Categorical Expansion Theorem (Theorem 3.5.6), from the previous chapter, we know all the possible forms of rth–order Markov chains for binary data. Since, this theorem gives us linear forms, time series follow- ing generalized linear models (TGLM) provides a method to estimate the parameters. For two reasons it is beneficial to study simpler models rather than a full model: 1. There are a large number of parameters to estimate in the full model. 2. There are better interpretations for the parameters in simpler models. 95 4.2. Models for 0-1 precipitation process We introduce a few processes that are useful in modeling precipitation: • Yt represents the occurrence of precipitation on day t. Here Yt is a binary process with 1 denoting precipitation and 0 denoting its absence on day t. • N lt−1 = ∑l j=1 Yt−j represents the number of PN days in the past l days. • Binary processes for modeling m years, say l1 to l2. Here, we define the binary processes Alt, l ∈ [l1, l2] by Alt = { 1, if t belongs to the year l 0, otherwsie . This is a binary deterministic process to model the year effect. • Seasonal processes (deterministic): cos(ωt) and sin(ωt), ω = 2π 366 . We can also consider higher order terms in the Fourier series cos(ωnt) and sin(ωnt), where n is a natural number. Some possibly interesting models present themselves when Zt−1 is a co- variate process. The probability of precipitation today depends on the value of that covariate process, and those processes might include: • Zt−1 = (1, N lt−1). This model assumes that the probability of PN today only depends on the number of PN days during l previous days. • Zt−1 = (1, N lt−1, Yt−1). This model assumes that the PN occurrence today depends on the PN occurrence yesterday and the number of PN occurrences during l previous days. • Zt−1 = (1, cos(ωt), sin(ωt), N lt−1, Yt−1). • Zt−1 = (1, cos(ωt), sin(ωt), Yt−1). • Zt−1 = (1, Yt−1, · · · , Yt−r). This is a special case of Markov chain of order r. No interaction between the days is assumed. 
In this model increasing the order of Markov chain by one corresponds to adding one parameter to the model. 96 4.3. Exploratory analysis of the data • Zt−1 = (1, Yt−1, · · · , Yt−r, Yt−1Yt−2). In this model, the interaction between the previous day and two days ago is included. • Zt−1 = (1, cos(ωt), sin(ωt), Yt−1, · · · , Yt−r). In this model, two sea- sonal terms are added to the previous model. • Zt−1 = (A1t , · · · , Akt , Yt−1, · · · , Yt−r). This model has a different inter- cept for various years (year effect). 4.3 Exploratory analysis of the data The data includes the daily precipitation for 48 stations over Alberta from 1895 to 2006. First, we make the plot of transition probabilities for a few locations. We pick Calgary and Banff, which have a rather long period of data avail- able for PN . We have also repeated the procedure for some other locations such as Edmonton and seen similar results. Figures 4.1 to 4.7 show the plots for Banff. For Calgary see plots in Chapter 2. Figure 4.1 plots the estimated 1st–order transition probabilities p̂11 (the probability of precip- itation if precipitation occurs the day before) and p̂01 (the probability of precipitation if it does not occur the day before). These transition probabil- ities are estimated using the observed data. For example p̂11 for January 5th is estimated by n11n1 , where n11 is the number of pairs of days (Jan. 4th, Jan. 5th) with precipitation and n1 is the number of Jan. 5th with precipitation during available years. Figures 4.2 and 4.3 show similar plots for estimated 2nd–order transition probabilities. Figures 4.4 and 4.5 give the estimated annual probability of precipitation for Banff and Calgary computed by di- viding the number of wet days of a year by the number of days in that year. The plot of the logit function and the transformed estimated probability of precipitation in Banff are shown in Figures 4.6 and 4.7. We summarize the conclusions and conjectures based on the exploratory analysis of the data as followings: • The binary PN process is not stationary. Figure 4.1 shows that the transition probabilities change over time and depend on the season. • Figure 4.1 also suggests the transition probabilities change continu- ously over time. Although a high variation is seen in the higher order probabilities, a generally continuous trend is observed. There is a pe- riodic trend for the transition probabilities over the course of the year 97 4.3. Exploratory analysis of the data and a simple periodic function should suffice modeling these probabil- ities. • Figure 4.1 suggests p11 and p01 differ over the course of the year, so a 0th–order Markov chain (independent) does not seem appropriate. • Figure 4.2 plots the curves p̂111, p̂011 and Figure 4.3 plots the curves p̂001, p̂101. They have considerable overlaps over the course of the year. Therefore a 2nd–order Markov chain does not seem necessary. • Figures 4.4 and 4.5 show the estimated probability of precipitation for different years, computed by averaging through the days of a given year. The probability of precipitation seems to differ year–to–year. It also seems that consecutive years have similar probability and hence assuming that different years are identically distributed and indepen- dent does not seem reasonable. The probability of precipitation has increased over the past century for Calgary, while for Banff the prob- ability of precipitation seems to have been changing with a more ir- regular pattern. 
• Figure 4.6 shows the plot of the logit function, while Figure 4.7 shows the result of applying the logit function to the estimated probabilities. We observe how the logit function maps values between 0 and 1 to a wider range in R. Since the logit is an increasing function, the peaks occur at the same times as in the original values.

The Categorical Expansion Theorem (Theorem 3.5.6) shows the general form for binary rth–order Markov processes. Table 4.8 compares all possible 2nd–order Markov chains (including the constant process). We discuss the implications of these possible models and use the following abbreviations: Y k = Yt−k, COS = cos(ωt), SIN = sin(ωt), COS2 = cos(2ωt) and SIN2 = sin(2ωt).

Some proposed models:

• Zt−1 = 1: The probability of PN's occurrence does not depend on the previous days. In other words, days are independent.

• Zt−1 = (1, Y1): The probability of PN today depends only on the day before and, given the latter's value, it is independent of the other previous days.

[Figures 4.1–4.7 appear here; their captions follow.]
Figure 4.1: The transition probabilities for the Banff site. The dotted line represents p̂11 (the estimated probability of precipitation if precipitation occurs the day before) and the dashed line represents p̂01 (the estimated probability of precipitation if precipitation does not occur the day before).
Figure 4.2: The solid curve represents p̂111 (the estimated probability of precipitation if precipitation occurs on both of the two previous days) and the dashed curve represents p̂011 (the estimated probability that precipitation occurs if precipitation occurs the day before and does not occur two days ago) for the Banff site.
Figure 4.3: The solid curve represents p̂001 (the estimated probability of precipitation occurring if it does not occur during the two previous days) and the dotted curve represents p̂101 (the estimated probability that precipitation occurs if precipitation does not occur the day before but occurs two days ago) for the Banff site.
Figure 4.4: Banff's estimated mean annual probability of precipitation calculated from historical data.
Figure 4.5: Calgary's estimated mean annual probability of precipitation calculated from historical data.
Figure 4.6: The logit function: logit(x) = log(x/(1 − x)).
Figure 4.7: The logit of the estimated probability of precipitation in Banff for different days of the year.

• Zt−1 = (1, Y2): The probability of PN given the information for the day before yesterday is independent of the other previous days, in particular yesterday! This does not seem reasonable.

• Zt−1 = (1, Y1, Y2): This model includes both Y1 and Y2. One might suspect that it has all the information and therefore is the most general 2nd–order Markov model.
However, note that in the model the transformed conditional probability is a linear combination of the past two states: logit{P (Y = 1|Y 1, Y 2)} = α0 + α1Y 1 + α2Y 2, which implies, logit{P (Y = 1|Y 1 = 0, Y 2 = 0)} = α0, logit{P (Y = 1|Y 1 = 1, Y 2 = 0)} = α0 + α1, logit{P (Y = 1|Y 1 = 0, Y 2 = 1)} = α0 + α2, and logit{P (Y = 1|Y 1 = 1, Y 2 = 1)} = α0 + α1 + α2. We conclude that logit{P (Y = 1|Y 1 = 1, Y 2 = 0)} − logit{P (Y = 1|Y 1 = 0, Y 2 = 0)} = logit{P (Y = 1|Y 1 = 1, Y 2 = 1)} − logit{P (Y = 1|Y 1 = 0, Y 2 = 1)} = α1. In other words, the model implies that no matter what the value Y 2 has, the differences between the conditional probabilities given Y 1 = 1 and given Y 1 = 0 (in the logit scale) are the same. • Zt−1 = (1, Y 1Y 2): Among other things, this model implies that the conditional probabil- ities given (Y 1 = 0, Y 2 = 1), (Y 1 = 1, Y 2 = 0) or (Y 1 = 0, Y 2 = 0) are the same. 104 4.4. Comparing the models using BIC • Zt−1 = (1, Y 1, Y 1Y 2): Among other things this model implies that the conditional probabil- ities given any of the pairs (Y 1 = 0, Y 2 = 0) or (Y 1 = 0, Y 2 = 0) are the same. • Zt−1 = (1, Y 2, Y 1Y 2): The interpretation is similar to the previous case. • Zt−1 = (1, Y 1, Y 2, Y 1Y 2) : This is the full 2nd–order stationary Markov model with no restrictive assumptions as shown by Categorical Expansion Theorem. The above explanations show that one must be careful about the as- sumptions made about any proposed model. Including/dropping various covariates can lead to implications that might be unrealistic. 4.4 Comparing the models using BIC This section uses the methods developed previously to find appropriate mod- els for the 0-1 PN process. We use the PN data for Calgary from 2000 to 2004. We compare several models using the BIC criterion. The partial likelihood is computed and then maximized using the “optim” function in “R”. Using “Time Series Following Generalized Linear Models” as discussed by Kedem et al. in [27], for binary time series with the canonical link function, we have: P (Yt = 1|Zt−1) = logit−1(αZt−1), and, P (Yt = 0|Zt−1) = 1− logit−1(αZt−1). We conclude that the log partial likelihood is equal to: N∑ t=1 log P (Yt|Zt−1) = ∑ 1≤t≤N,Yt=1 log(logit−1(αZt−1)) + ∑ 1≤t≤N,Yt=0 log(1− logit−1(αZt−1)). 105 4.4. Comparing the models using BIC To ensure that the maximum picked by “optim” in the R package is close to the actual maximum, several initial values were chosen randomly until stability was achieved. In order to find an optimal model to describe a binary (0-1) PN process, we can include several factors such as previous values of the process, seasonal terms, previous maximum temperature values and so on. We have done this comparison in several tables. The smallest BIC in the tables is shown by boldface. Table 4.1 shows the constant process 1 and N l, the number of wet days during l previous days, as predictors. Note that N1 = Y 1. The BIC criterion in this case picks the simplest model which includes only the previous day. Hence a 1st–order Markov chain is chosen among these particular lth–order chains. 
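Each row of the tables that follow can be produced along the following lines; the sketch below uses a placeholder 0-1 series in place of the Calgary record, takes Zt−1 = (1, Y1, N5, COS) as an example of the covariate process, maximizes the log partial likelihood with optim, and forms BIC = −2 logPL + (number of parameters) log N.

    ## Sketch of one table row.  `y` is a placeholder 0-1 series; the actual PN
    ## record would be used in its place, and `t` is simply the time index here.
    ilogit <- function(z) 1 / (1 + exp(-z))
    omega  <- 2 * pi / 366

    y <- rbinom(5 * 365, 1, 0.4)                # placeholder series
    n <- length(y); t_idx <- 6:n                # leave room for lags up to N^5

    Z <- cbind(1,                               # Z_{t-1} = (1, Y1, N5, COS)
               y[t_idx - 1],
               sapply(t_idx, function(t) sum(y[(t - 5):(t - 1)])),
               cos(omega * t_idx))
    yt <- y[t_idx]

    negLogPL <- function(alpha) {
      p <- ilogit(drop(Z %*% alpha))
      -sum(dbinom(yt, 1, p, log = TRUE))
    }

    ## in practice several starting values are tried, as described above
    fit <- optim(rep(0, ncol(Z)), negLogPL, method = "BFGS")
    alpha_hat <- fit$par
    bic <- 2 * fit$value + ncol(Z) * log(length(yt))
    alpha_hat; bic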
Model: Zt−1 BIC parameter estimates (1, N1) 2268.1 (−1.035, 1.268) (1, N2) 2294.5 (−1.097, 0.726) (1, N3) 2293.4 (−1.181, 0.559) (1, N4) 2292.7 (−1.244, 0.462) (1, N5) 2296.9 (−1.281, 0.390) (1, N6) 2305.9 (−1.292, 0.331) (1, N7) 2311.3 (−1.308, 0.291) (1, N8) 2317.2 (−1.317, 0.258) (1, N9) 2322.1 (−1.32, 0.232) (1, N10) 2325.6 (−1.34, 0.212) (1, N11) 2330.4 (−1.34, 0.193) (1, N12) 2335.7 (−1.34, 0.177) (1, N13) 2336.3 (−1.36, 0.168) (1, N14) 2340.5 (−1.35, 0.155) (1, N15) 2342.6 (−1.36, 0.146) Table 4.1: BIC values for models including N l, the number of precipitation days during the past l days for the Calgary site. Table 4.2 compares models with predictors: 1, Y l and N l, l = 1, 2, · · · , 30. Since Y 1 = N1 the first row is obviously an over–parameterized model. The smallest BIC corresponds to the model (1, Y 1, N28). Even the model (1, Y 1, N4) shows an improvement over (1, Y 1). Hence by adding the number of PN days to the simple model (1, Y 1), an improvement is achieved. 106 4.4. Comparing the models using BIC Model: Zt−1 BIC parameter estimates (1, Y 1, N1) 2275.6 (-1.04, -0.40, 1.67) (1, Y 1, N2) 2270.2 (-1.10, 0.94, 0.255) (1, Y 1, N3) 2258.3 (-1.21, 0.88, 0.279) (1, Y 1, N4) 2250.6 (-1.28, 0.88, 0.254) (1, Y 1, N5) 2247.5 (-1.32, 0.91, 0.221) (1, Y 1, N6) 2248.2 (-1.34, 0.95, 0.187) (1, Y 1, N7) 2247.1 (-1.37, 0.97, 0.167) (1, Y 1, N8) 2247.5 (-1.39, 0.99, 0.149) (1, Y 1, N9) 2247.6 (-1.40, 1.01, 0.136) (1, Y 1, N10) 2247.4 (-1.42, 1.02, 0.126) (1, Y 1, N11) 2248.3 (-1.43, 1.04, 0.115) (1, Y 1, N12) 2249.6 (-1.43, 1.05, 0.105) (1, Y 1, N13) 2248.1 (-1.46, 1.06, 0.102) (1, Y 1, N14) 2249.7 (-1.46, 1.07, 0.0945) (1, Y 1, N15) 2249.5 (-1.47, 1.07, 0.0905) (1, Y 1, N16) 2249.0 (-1.49, 1.08, 0.0872) (1, Y 1, N17) 2245.3 (-1.51, 1.08, 0.0853) (1, Y 1, N18) 2246.8 (-1.53, 1.08, 0.0831) (1, Y 1, N19) 2246.8 (-1.55, 1.08, 0.0820) (1, Y 1, N20) 2245.6 (-1.56, 1.08, 0.0787) (1, Y 1, N21) 2246.0 (-1.56, 1.08, 0.0749) (1, Y 1, N22) 2247.6 (-1.55, 1.09, 0.0703) (1, Y 1, N23) 2245.9 (-1.58, 1.09, 0.0701) (1, Y 1, N24) 2246.0 (-1.58, 1.09, 0.0678) (1, Y 1, N25) 2246.8 (-1.58, 1.10, 0.0647) (1, Y 1, N26) 2246.6 (-1.59, 1.10, 0.0632) (1, Y 1, N27) 2246.2 (-1.60, 1.10, 0.0618) (1, Y 1, N28) 2244.7 (-1.62, 1.10, 0.0615) (1, Y 1, N29) 2245.4 (-1.62, 1.10, 0.0593) (1, Y 1, N30) 2246.2 (-1.622, 1.11, 0.0571) Table 4.2: BIC values for models including N l, the number of wet days during the past l days and Y 1, the precipitation occurrence of the previous day for the Calgary site. Table 4.3 compares models with predictors (1, N l, COS, SIN). We have added (COS,SIN) to capture the seasonality in the precipitation over a year. (1, N1, COS, SIN) (which is the same as (1, Y 1, COS, SIN)) is the winner. Note that this model is better than the simpler model (1, Y 1) or the model (1, Y 1, N28). 107 4.4. 
Comparing the models using BIC Model: Zt−1 BIC parameter estimates (1, N1, COS, SIN) 2222.5 (-1.00, 1.10, -0.588, 0.0999) (1, N2, COS, SIN) 2254.6 (-1.02, 0.592, -0.564, 0.0977) (1, N3, COS, SIN) 2260.1 (-1.07, 0.443, -0.538, 0.0961) (1, N4, COS, SIN) 2264.1 (-1.11, 0.359, -0.518, 0.0959) (1, N5, COS, SIN) 2270.8 (-1.12, 0.295,-0.508, 0.0971) (1, N6, COS, SIN) 2280.5 (-1.11, 0.240, -0.510, 0.0999) (1, N7, COS, SIN) 2286.7 (-1.11, 0.205, -0.508, 0.101) (1, N8, COS, SIN) 2293.0 (-1.09, 0.176, -0.511, 0.103) (1, N9, COS, SIN) 2293.1 (-1.08, 0.153, -0.513, 0.105) (1, N10, COS, SIN) 2302.2 (-1.07, 0.136, -0.516, 0.107) Table 4.3: BIC values for models including N l, the number of wet days during the past l days and seasonal terms for the Calgary site. Table 4.4 includes Y 1, seasonal terms and N l for l = 1, 2, · · · , 10 as predictors. The model with predictors (1, Y 1, N5, COS, SIN), which includes a combination of seasonal terms and number of precipitation days has the smallest BIC so far. Note that both the seasonal terms and the number of precipitation days prior to the day we are looking at, are indica- tors of “weather conditions”. There are natural cycles throughout the year that can inform us about the weather conditions of a particular day of the year. These natural cycles are modeled by the periodic functions COS and SIN . Also by looking at a short period prior to the current day (short–term past), we might be able to determine the weather conditions. Precipitation may not follow a very regular seasonal pattern similar to temperature as shown in the exploratory analysis. Which one of these variables (seasonal or short–term past) is more important or necessary might depend on the location and other factors. 108 4.4. Comparing the models using BIC Model: Zt−1 BIC parameter estimates (1, Y 1, N1, COS, SIN) 2230.0 (-1.00, -2.31, 3.41, -0.589, 0.0999) (1, Y 1, N2, COS, SIN) 2229.2 (-1.03, 0.977, 0.0997, -0.576, 0.0985) (1, Y 1, N3, COS, SIN) 2224.8 (-1.10, 0.895, 0.156, -0.546, 0.0946) (1, Y 1, N4, COS, SIN) 2222.1 (-1.14, 0.89, 0.147, -0.525, 0.0941) (1, Y 1, N5, COS, SIN) 2221.7 (-1.16, 0.922, 0.124, -0.515, 0.0934) (1, Y 1, N6, COS, SIN) 2223.3 (-1.16, 0.959, 0.0954, -0.517, 0.0946) (1, Y 1, N7, COS, SIN) 2223.7 (-1.17, 0.978, 0.0822, -0.513, 0.0947) (1, Y 1, N8, COS, SIN) 2224.7 (-1.16, 0.997, 0.0682, -0.515, 0.0945) (1, Y 1, N9, COS, SIN) 2225.5 (-1.16, 1.0129, 0.0582, -0.515, 0.0961) (1, Y 1, N10, COS, SIN) 2226.0 (-1.16, 1.026, 0.0502, -0.517, 0.0958) Table 4.4: BIC values for models including N l, the number of PN days during the past l days, Y 1, the precipitation occurrence of the previous day and seasonal terms for the Calgary site. Table 4.5 compares models with different number of predictors from (1, Y 1) to (1, Y 1, · · · , Y 7). The first model is a 1st–order Markov chain and the last one is a 7th– order chain. The optimal model picked is: (1, Y 1, Y 2, Y 3). Comparing this table to Table 4.2, we see that (1, Y 1, N3) is superior to (1, Y 1), (1, Y 1, Y 2) and (1, Y 1, Y 2, Y 3). Note that (1, Y 1, N3) is equivalent to (1, Y 1, Y 2 + Y 3). Hence, including Y 2 and Y 3 and giving them the same weight is better than not including them, including one of them or including both of them. 
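The equivalence noted above can be checked directly: (1, Y1, N3) is simply (1, Y1, Y2, Y3) with the coefficients of Y2 and Y3 constrained to be equal, so the two parameterizations give identical fits. A small sketch (again with a placeholder 0-1 series rather than the Calgary data):

    ## Sketch: (1, Y1, N3) reparametrizes (1, Y1, Y2 + Y3), so the fits coincide.
    set.seed(7)
    y   <- rbinom(2000, 1, 0.4)                 # placeholder 0-1 series
    n   <- length(y); idx <- 4:n
    Y1  <- y[idx - 1]; Y2 <- y[idx - 2]; Y3 <- y[idx - 3]
    N3  <- Y1 + Y2 + Y3
    yt  <- y[idx]

    f1 <- glm(yt ~ Y1 + N3,          family = binomial)
    f2 <- glm(yt ~ Y1 + I(Y2 + Y3),  family = binomial)
    logLik(f1); logLik(f2)                      # identical log partial likelihoods
    ## coefficient mapping: coef(f1)["Y1"] = coef(f2)["Y1"] - coef(f2)["I(Y2 + Y3)"],
    ##                      coef(f1)["N3"] = coef(f2)["I(Y2 + Y3)"]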
Model: Zt−1 BIC parameter estimates (1, Y 1) 2268.1 (-1.034, 1.27) (1, Y 1, Y 2) 2270.2 (-1.11, 1.20, 0.23) (1, Y 1, Y 2, Y 3) 2263.3 (-1.21, 1.19, 0.140, 0.410) (1, Y 1, · · · , Y 4) 2263.9 (-1.28, 1.16, 0.133, 0.334, 0.281) (1, Y 1, · · · , Y 5) 2268.5 (-1.32, 1.15, 0.121, 0.328, 0.232, 0.192) (1, Y 1, · · · , Y 6) 2335.4 (-1.34, 1.15, 0.0837, 0.357, 0.213, 0.135, 0.115) (1, Y 1, · · · , Y 7) 2286.7 (-1.51, 1.33, -0.113, 0.378, 0.418, 0.204, -0.0050, 0.214) Table 4.5: BIC values for Markov models of different order with small num- ber os parameters for the Calgary site. Table 4.6 compares models with different Markov orders plus the seasonal terms. The model (1, Y 1, COS, SIN) is the winner. Hence, whether we include the seasonal terms or not, the model that only depends on the previous day is the winner. 109 4.4. Comparing the models using BIC Model: Zt−1 BIC parameter estimates (1, COS, SIN, Y 1) 2222.6 (-1.0, -0.5, 0.1, 1.1) (1, COS, SIN, Y 1, Y 2) 2229.1 (-1.0, -0.5, 0.1, 1.0, 0.1) (1, COS, SIN, Y 1, Y 2, Y 3) 2230.4 (-1.1, -0.5, 0.1, 1.0, 0.02, 0.3) (1, COS, SIN, Y 1, · · · , Y 4) 2247.3 (-1.1, -0.5, 0.1, 1.0, 0.03, 0.2, 0.15) (1, COS, SIN, Y 1, · · · , Y 5) 2243.4 (-1.3, -0.4, 0.2, 1.4, -0.4, -0.1, 1.0, -0.15) (1, COS, SIN, Y 1, · · · , Y 6) 2501.6 (-1.2, -1.5, 0.4, 0.2, 0.8, 0.9, 0.9, -0.6, -0.2) (1, COS, SIN, Y 1, · · · , Y 7) 2447.3 (-1.1, -0.2, 0.07, 0.8, -0.02, 0.3, 0.4, -0.07, 0.4, -0.3) Table 4.6: BIC values for Markov models with different order plus seasonal terms for the Calgary site. Table 4.7 studies seasonality more. We consider the possibility that there are more/less terms of the Fourier series of a periodic function over the year. It turns out that the model with (1, Y 1, COS) is the optimal model so far. Hence, only one term seem to suffice modeling the seasonal nature of the process. Model: Zt−1 BIC parameter estimates (1, COS) 2322.7 (-0.556, -0.717) (1, SIN) 2424.3 (-0.523, 0.115) (1, COS, SIN) 2327.3 (-0.568, -0.738, 0.119) (1, Y 1, COS) 2216.9 (-1.00 , 1.10, -0.587) (1, Y 1, SIN) 2273.9 (-1.03, 1.26, 0.0933) (1, Y 1, COS, SIN) 2222.6 (-1.004, 1.102, -0.589, 0.100) (1, Y 1, COS, SIN,COS2) 2229.7 (-1.00, 1.10, -0.586, 0.0998, 0.0247) (1, Y 1, COS, SIN, SIN2) 2230.0 (-1.00, 1.10, -0.590, 0.101, 0.0125) (1, Y 1, COS, SIN,COS2, SIN2) 2237.2 (-1.01, 1.11, -0.575, 0.0978, 0.0236, -0.0101) Table 4.7: BIC values for models including seasonal terms and the occur- rence of precipitation during the previous day for the Calgary site. Table 4.8 compares all stationary 2nd–order Markov models. The small- est BIC corresponds to (1, Y 1). 110 4.4. Comparing the models using BIC Model: Zt−1 BIC parameter estimates (1) 2419.6 (-0.528) (1, Y 1) 2268.0 (-1.04, 1.27) (1, Y 2) 2392.8 (-0.756, 0.590) (1, Y 1, Y 2) 2270.2 (-1.110, 1.197, 0.256) (1, Y 1Y 2) 2335.5 (-0.779, 1.134) (1, Y 1, Y 1Y 2) 2272.7 (-1.040, 1.113, 0.282) (1, Y 2, Y 1Y 2) 2342.3 (-0.757, -0.113, 1.225) (1, Y 1, Y 2, Y 1Y 2) 2277.7 ( -1.103, 1.177, 0.234, 0.048) Table 4.8: BIC values for 2nd–order Markov models for precipitation at the Calgary site. Table 4.9 compares all 2nd–order Markov chains with a seasonal COS term. The model (1, Y 1, COS) is the winner. 
Model: Zt−1 BIC parameter estimates (1, COS) 2322.7 (-0.567, -0.738) (1, COS, Y 1) 2216.8 (-1.005, -0.587, 1.106) (1, COS, Y 2) 2317.4 (-0.708, -0.679, 0.372) (1, COS, Y 1Y 2) 2223.5 (-0.760, -0.618, 0.905) (1, COS, Y 1, Y 2) 2276.1 (-1.033, -0.575, 1.080, 0.103) (1, COS, Y 1, Y 1Y 2) 2223.9 (-1.004, -0.580, 1.041, 0.120) (1, COS, Y 2, Y 1Y 2) 2280.9 (-0.709, -0.632, -0.244, 1.093) (1, COS, Y 1, Y 2, Y 1Y 2) 2231.0 (-1.028, -0.575, 1.065, 0.085, 0.037) Table 4.9: BIC values for 2nd–order Markov models for precipitation at the Calgary site plus seasonal terms. Table 4.10 also includes the maximum and minimum temperature of the day before, as predictors of some of the models which performed better in the above tables. We have also included the annual processes A1, · · · , A5 to one of the models. Finally, we have included the model (1, Y 1, N5, COS). This model has a combination of the seasonal term COS and the short–term past process N5 which did the best when combined with the seasonal terms and Y 1 in Table 4.4. It turns out that including MT and mt does not improve the BIC as well as does the annual terms. However, (1, Y 1, N5, COS) has the smallest BIC in all the models, which is a seasonal Markov chain of order 5 with only 4 parameters. Also the simpler model, (1, Y 1, COS), has a close BIC to (1, Y 1, N5, COS). 111 4.5. Changing the location and the time period Model: Zt−1 BIC parameter estimates (1, COS, Y 1) 2216.8 (-1.005, -0.587, 1.106) (1, Y 1, COS,MT 1) 2221.7 (-0.84, 1.0, -0.74, -0.012) (1, Y 1, COS,mt1) 2224.2 (-1.0, 1.0, -0.65, -0.0055) (1, Y 1, COS,MT 1,mt1) 2227.4 (-0.65, 0.99, -0.67, -0.025, 0.022) (1, Y 1, COS,A1, · · · , A5) 2241.2 ( 1.1, -0.5, -0.9, -1.2, -1.1, -1.0, -0.7) (1, Y 1, N5, COS,MT 1) 2297.3 (-2.13, 0.9, 0.4, 0.6, 0.2, 0.04) (1, Y 1, N5, COS, SIN,MT 1,mt1) 2516.8 (1.4, 0.04, 0.2, 0.7, 0.8, -0.2, 0.3) (1, Y 1, N5, COS,MT 1,mt1) 2393.9 ( 1.4, 0.7, -0.1, -0.5, 0.5, -0.1, 0.2) (Y 1, N5, COS,MT 1, A1, · · · , A5) 2697.1 (1.23, -0.64, -2.0, -0.10, 2.0, 1.2, 2.2, 1.2, 1.8) (Y 1, N5, COS,A1, · · · , A5) 2447.1 (0.1, 0.1, -0.7, -0.39, -0.01, -0.2, -0.9, -1) (1, Y 1,MT 1) 2251.5 (-1.2, 1.3, 0.021) (1, Y 1, N5, COS) 2215.8 (-1.1, 0.9, 0.1, -0.5) (1, Y 1, N5, COS,MT 1) 2223.8 (-1.2, 0.9, 0.1, -0.4, 0.0) Table 4.10: BIC values for models including several covariates as tempera- ture, seasonal terms and year effect for precipitation at the Calgary site. 4.5 Changing the location and the time period This section compares various models for a different time period and loca- tion. Table 4.11 compares various models for the 0-1 PN process in Calgary between 1990 and 1994 which is a 5–year period. In Table 4.12, we have compared several models for 0-1 PN process over Medicine Hat site between 2000 and 2004. Table 4.11 shows that among the compared models (1, Y 1, COS) has the smallest BIC. In particular the BIC for this model is smaller than the BIC for (1, Y 1, N5, COS) which has the smallest BIC for Calgary 2000– 2004. However (1, Y 1, COS) was the second optimal model also for Calgary 2000–2004 with a close BIC to the optimal. Including the maximum and minimum temperature to the model increases the BIC again. 112 4.5. 
Changing the location and the time period Model: Zt−1 BIC parameter estimates (1, Y 1) 2312.7 (-0.931, 1.275) (1, Y 1, Y 2) 2318.8 (-0.967, 1.238, 0.126) (1, Y 1, COS) 2228.8 (-0.858, 1.036, -0.712) (1, Y 1, N5) 2303.3 (-1.168, 1.012, 0.168) (1, Y 1, N10) 2287.9 (-1.581, 1.015, 0.132) (1, Y 1, N15) 2282.7 (-1.486, 1.045, 0.105) (1, Y 1, COS, SIN) 2231.9 (-0.855, 1.026, -0.715 , 0.152) (1, Y 1, N5, COS) 2236.4 (-0.864, 1.032, 0.004, -0.709) (1, Y 1, N5, SIN) 2307.8 (-1.160, 1.011, 0.164, 0.125) (1, Y 1, N5, COS, SIN) 2239.4 (-0.849, 1.031, -0.004, -0.718, 0.152) (1, Y 1, N10, COS) 2236.4 (-0.847, 1.030, -0.002, -0.721, 0.153) (1, Y 1, N10, COS, SIN) 2239.4 (-0.847, 1.030, -0.002, -0.721 , 0.153) (1, Y 1, N5, COS,MT 1) 2244.3 (-0.433, 1.046, -0.096, -1.078, -0.021) (1, Y 1, N5, COS,mt1) 2244.1 (-0.910, 1.011, 0.031, -0.584, 0.006) Table 4.11: BIC values for several models for the binary process of precipi- tation in Calgary, 1990–1994 Table 4.12 shows that the smallest BIC corresponds to (1, Y 1, COS). However, several models have similar BIC values. Also, including the max- imum and minimum temperature increases the BIC here. Model: Zt−1 BIC parameter estimates (1, Y 1) 2202.9 (-1.138, 1.094) (1, Y 1, Y 2) 2207.9 (-1.183, 1.051, 0.181) (1, Y 1, N5) 2203.6 (-1.275, 0.921, 0.119) (1, Y 1, N10) 2228.9 (-0.858, 1.036, -0.712) (1, Y 1, N15) 2200.5 (-1.420, 0.980, 0.065) (1, Y 1, N20) 2202.5 (-1.421, 1.008, 0.048) (1, Y 1, COS) 2201.2 (-1.134, 1.067, -0.224) (1, Y 1, COS, SIN) 2202.9 (-1.132, 1.052, -0.225, 0.177) (1, Y 1, N5, COS) 2203.9 (-1.252, 0.924, 0.101, -0.201) (1, Y 1, N5, SIN) 2206.6 (-1.263, 0.922, 0.109, 0.158) (1, Y 1, N5, COS, SIN) 2206.6 (-1.239, 0.925, 0.091, -0.204, 0.163) (1, Y 1, N10, COS) 2201.9 (-1.336, 0.958, 0.073, -0.183) (1, Y 1, N10, COS, SIN) 2205.1 (-1.311, 0.958, 0.065, -0.187, 0.151) (1, Y 1, N5, COS,MT 1) 2306.5 (-1.455, 2.099, -0.130, 0.041, 0.004) (1, Y 1, N5, COS,mt1) 2211.1 (-1.238, 0.937, 0.087, -0.267, -0.005) (1, Y 1, N15, COS) 2202.7 (-1.363, 0.981, 0.053, -0.175) Table 4.12: BIC values for several models for precipitation occurrence in Medicine Hat, 2000-2004 113 4.5. Changing the location and the time period In summary, in all the three cases (1, Y 1, COS), is either optimal or the second to the optimal (using BIC). We have also tried BIC for Calgary with a long time period of close to 100 years and surprisingly the same simple model (1, Y 1, COS) was the optimal. 114 Chapter 5 On the definition of “quantile” and its properties 5.1 Introduction This chapter points out deficiencies in the classical definition (as well as some other widely used definitions) of the median and more generally the quan- tile and the so-called quantile function. Moreover redefining it appropriately gives us a basis on which we can find necessary and sufficient conditions for the sample quantiles to converge for arbitrary distribution functions. In the next chapter, we define a “degree of separation” function to measure the goodness of the approximation (or estimation). We argue that this func- tion can be viewed as a natural loss function for assessing estimations and approximations. One characteristic of this loss function is its invariance un- der strictly monotonic transformations of the random variable, in particular re-scaling. In this chapter, we have used the terms data vector, approximation, estimation, exact and true quantiles repeatedly. To clarify what we mean by these terms, we give the following explanations: • Data vector: A vector of real numbers. 
We do not consider these values as random in general. We use the term random vector or random sample for a vector of random variables. We define the quantile for data vectors, but the same definition applies to a random sample. • Approximation and exact value: Suppose a very large data vector is given. We can compute the exact mean/median of such a vector by using all the data and the definition of mean/median. One can approximate the mean/median using various techniques. Note that both approximation and exact terms are used for data vectors of (non- random) numbers. • Estimation and true value: Estimation means finding functions of the random sample to estimate parameters of the underlying distribution. 115 5.1. Introduction The parameters are called the true values. The sample definition of quantiles varies in different text books. In [24], Hyndman et al. point out many different definitions in statistical packages for quantiles of a sample. In [17], Freund et al. point out various defini- tions for quartiles of data and propose a new definition using the concept of “hinge”. The traditional definition of quantiles for a random variable X with distribution function F , lqX(p) = inf{x|F (x) ≥ p}, appears in classic works as [38]. We call this the “left quantile function”. In some books (e.g. [41]) the quantile is defined as rqX(p) = sup{x|F (x) ≤ p}, this is what we call the “right quantile function”. Also in robustness lit- erature people talk about the upper and lower medians which are a very specific case of these definitions. However, we do not know of any work that considers both definitions, explore their relation and show that considering both has several advantages. A physical motivation is given for the right/left definition of quantiles. It is widely claimed that (e.g. Koenker in [29] or Hao and Naiman in [21]) the traditional quantile function is invariant under monotonic transforma- tions. We show that this does not hold even for strictly increasing functions. However, we prove that the traditional quantile function is invariant un- der non-decreasing left continuous transformations. We also show that the right quantile function is invariant under non-decreasing right continuous transformations. A similar neat result is found for continuous decreasing transformations using the Quantile Symmetry Theorem also proved in this chapter. Suppose we know that a data point is larger than a known number of other data points and smaller than another known number of data points. Of interest are the quantiles to which this data point corresponds. Lemma 5.2.4 gives a result about this. We will use this lemma later to establish the precision of our proposed algorithm for approximating quantiles of large datasets. Quantiles are often used as the inverse of distribution functions. In gen- eral neither the distribution function nor the quantile function are invertible. However Lemma 5.5.1 shows how quantiles can be used to characterize sets of the form {x|F (x) < p}, a case that is equivalent to (−∞, lqF (p)). 116 5.1. Introduction Lemma 5.7.1 shows the left continuity of the left quantile function and the right continuity of the right quantile function. Section 5.8 finds necessary and sufficient conditions for the left and right quantile functions to be equal at p ∈ [0, 1]. We also find out that the left and right quantile functions coincide except for at most a countable number of values in [0,1]. 
Then we characterize the image of the the left and right quantile functions and show that the image corresponds to “heavy” points (heavy point is a point that the probability of being in a neighborhood around that point is positive). Section 5.9 shows that given any of lq, rq and F uniquely determines the other two and formulas are given in order to find them. We also show that if one of lq and rq is two-sided continuous then so is the other one. Lemma 5.10.1 shows that the strict monotonicity of the distribution function F on its “real domain” {x|0 < F (x) < 1} is equivalent to two-sided continuity of lq/rq. Conversely, strict monotonicity of lq/rq corresponds to continuity of F . Section 5.12 presents the desirable “Quantile Symmetry Theorem”, a re- sult that could be only obtained by considering both left and right quantiles. This relation can help us prove several other useful results regarding quan- tiles. Also using the quantile symmetry theorem, we find a relation for the equivariance property of quantiles under non-increasing transformations. Section 5.14 studies the limit properties of left and right quantile func- tions. In Theorem 5.14.7, we show that if left and right quantiles are equal, i.e. lqF (p) = rqF (p), then both sample versions lqFn , rqFn are convergent to the common distribution value. We found an equivalent statement in Ser- fling [43] with a rather similar proof. The condition for convergence there is said to be lqF (p) being the unique solution of F (x−) < p ≤ F (x) which can be shown to be equivalent to lqF (p) = rqF (p). Note how considering both left and right quantiles has resulted in a cleaner, more comprehensible condition for the limits. In a problem Serfling asks to show with an example that this condition cannot be dropped. We show much more by proving that if lqF (p) 6= rqF (p) then both rqFn(p) and rqFn(p) diverge almost surely. The almost sure divergence result can be viewed as an extension to a well-known result in probability theory which says that if X1,X2, · · · an i.i.d sequence from a fair coin with -1 denoting tail and 1 denoting head and Zn = ∑n i=1Xi then P (Zn = 0 i.o.) = 1. The proof in [9] uses the Borel–Cantelli Lemma to get around the problem of dependence of Zn. This is equivalent to say- ing for the fair coin both lqFn(1/2) and rqFn(1/2) diverge almost surely. For the general case, we use the Borel–Cantelli Lemma again. But we also need a lemma (Lemma 5.14.10) which uses the Berry–Esseen Theorem in 117 5.2. Definition of median and quantiles of data vectors and random samples its proof to show the deviations of the sum of the random variables can become arbitrarily large, a result that is easy to show as done in [9] for the simple fair coin example. Finally, we show that even though in the case that lqF (p) 6= rqF (p), lqFn , rqFn are divergent; for large ns they will fall in (lqF (p)− ǫ, lqF (p)] ∪ [rqF (p), rqF (p) + ǫ). In fact we show that lim inf n→∞ lqFn(p) = lim infn→∞ rqFn(p) = lqF (p) and lim sup n→∞ lqFn(p) = lim sup n→∞ rqFn(p) = rqF (p). The proof is done by constructing a new random variable Y from the original random variable X with distribution function FX by shifting back all the values greater than rqX(p) to lqX(p). This makes lqY (p) = rqY (p) in the new random variable. Then we apply the convergence result to Y . 5.2 Definition of median and quantiles of data vectors and random samples This section presents a way to define quantiles of data vectors and random samples. 
We confine our discussion to data vectors, since the definition for random samples is a straightforward formal extension. Suppose we are given a very long data vector and the goal is to find its median. Let us denote the data vector by x = (x1, · · · , xn), and let y = (y1, · · · , yn) be the increasingly sorted vector of the elements of x. The median of x is then usually defined to be y_{(n+1)/2} if n is odd and (y_{n/2} + y_{(n+2)/2})/2 if n is even. Essentially the median is defined so that half the data lie below it and half lie above it. However, when n is even, any value between y_{n/2} and y_{(n+2)/2} serves this purpose, and taking the average of the two seems arbitrary. Intuitively, the quantile should have the following properties:

1. It should be a member of the data vector. In other words, if x = (x1, · · · , xn) is the data vector, then the quantile should be equal to one of the xi, i = 1, · · · , n.

2. Equivariance: if we transform the data using an increasing continuous transformation of R, find the quantile and transform back, we should get the same result as if we had found the quantile of the original data. More formally, if we denote the quantile of a data vector x for p ∈ (0, 1) by qx(p), then for any φ : R → R strictly increasing and bijective, qx(p) = y ⇔ q_{φ(x)}(p) = φ(y).

3. Symmetry: the p-th quantile of the data vector x = (x1, · · · , xn) should be the negative of the (1 − p)-th quantile of the data vector −x = (−x1, · · · , −xn): qx(p) = −q_{−x}(1 − p). In particular, the median of x should be the reflection about 0 of the median of the reflection of x about 0.

4. The "amount" of data between qx(p1) and qx(p2) should be p2 − p1 of the "data amount" of the whole vector if p1 < p2.

5. If we "cut" a sorted data vector at its p1-th quantile and compute the p2-th quantile of the new vector, we should get the p1p2-th quantile of the original vector. For example, the median of a sorted vector cut at its median should be the first quartile.

This chapter develops a definition of quantiles that satisfies the first three conditions. We will address the last two conditions in later chapters and develop a framework in which they are satisfied. Consider the example x = (0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 10). The median by the usual definition is 1.5, a value that does not appear in the observed data. Also, if we take the bijective, increasing and continuous transformation φ(x) = x^3, we see that the classic definition does not satisfy the second property. The median and quantiles can be defined both for distributions and for data vectors (and random samples). For a random variable X having a distribution function F, the p-th quantile is traditionally defined as

qF(p) = inf{x | F(x) ≥ p}. (5.1)

This can be used to define the quantiles of a data vector via the empirical (sample) distribution function Fn,

Fn(x) = (1/n) ∑_{i=1}^{n} 1_{(−∞, x]}(xi).

With this definition of the quantile, the equivariance property holds and the result is a realizable data value. This definition faces another issue, however. Consider flipping a fair coin with outcomes 0 and 1. The distribution of X is given by

FX(x) = 0 for x < 0,  1/2 for 0 ≤ x < 1,  1 for x ≥ 1.

Hence by definition (5.1), qF(p) = 0 for 0 < p ≤ 1/2 and qF(p) = 1 for p > 1/2. This all seems reasonable except for qF(1/2) = 0: based on the symmetry of the distribution, there should not be any advantage for 0 over 1 to be the median.
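As a numerical illustration of this asymmetry, the following sketch (not thesis code; the helper traditional_q and the finite grid standing in for an infimum over R are our own simplifications) checks symmetry property 3 for the fair coin: the traditional definition gives 0 as the median of X, while the mirrored computation −q_{−X}(1/2) gives 1.

```python
# Check that q(p) = inf{x : F(x) >= p} breaks the symmetry q_X(p) = -q_{-X}(1-p)
# for the fair coin with outcomes {0, 1}.

def F_coin(x):
    """Distribution function of a fair coin on {0, 1}."""
    return 0.0 if x < 0 else (0.5 if x < 1 else 1.0)

def F_negcoin(x):
    """Distribution function of -X, i.e. a fair coin on {-1, 0}."""
    return 0.0 if x < -1 else (0.5 if x < 0 else 1.0)

def traditional_q(F, p, grid):
    """inf{x in grid : F(x) >= p}, the classic quantile restricted to a grid."""
    return min(x for x in grid if F(x) >= p)

grid = [k / 100 for k in range(-200, 201)]       # fine grid on [-2, 2]
q_X    = traditional_q(F_coin, 0.5, grid)        # -> 0.0
q_negX = traditional_q(F_negcoin, 0.5, grid)     # -> -1.0
print(q_X, -q_negX)                              # 0.0 vs 1.0: symmetry fails
```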
For the quantiles of data vectors the same issue occurs. For example, consider x = (1, 2, 3, 4, 5, 6) and apply definition (5.1) to the Fn corresponding to this data vector. We get 3 as the median, but by symmetry 4 should be just as eligible. Before we get to our definition of quantiles, we provide the following motivating examples.

Example A student decided to buy a new memory chip for his computer. He needed to choose between the available RAM sizes (1 GB, 2 GB, etc.) in his favorite store. In a trade-off between price and speed, he decided to get a RAM chip at least as large as 2/3 of the RAMs bought in the store during the previous day. He could access the information on all RAMs bought the day before, in particular their sizes. He entered the size data into R, which he had recently downloaded for free. He had heard about quantiles in his elementary statistics course, so he computed the quantile of the data for p = 2/3 and got 2.666 (GB). He knew a RAM of size 2.666 GB does not exist and concluded this must be the result of an interpolation procedure in R. Since the closest integer to 2.666 is 3, he concluded that 3 GB was the size he was looking for. He went back to the store asking for a 3 GB RAM and was told that no such RAM had ever been sold in that store! He thought there must be an error in the dataset, so he looked at the data again:

1, 1, 1, 1, 2, 2, 2, 2, 4, 4, 4, 4

Surprisingly there was no 3. R had interpolated between 2 and 4 to give 2.666 and misled the student.

Example A supervisor asked two graduate students to summarize the following data on the intensity of earthquakes in a specific region:

row number   ML (Richter)   A (shaking amplitude)
 1           4.21094         1.62532 × 10^4
 2           4.69852         4.99482 × 10^4
 3           4.92185         8.35314 × 10^4
 4           5.12098        13.21235 × 10^4
 5           5.21478        16.39759 × 10^4
 6           5.28943        19.47287 × 10^4
 7           5.32558        21.16313 × 10^4
 8           5.47828        30.08015 × 10^4
 9           5.59103        38.99689 × 10^4
10           5.72736        53.37772 × 10^4

Table 5.1: Earthquake intensities

Earthquake intensity is usually measured on the ML (Richter) scale, which is related to the shaking amplitude A by ML = log10(A). In the data file handed to the students (Table 5.1), the rows are sorted in increasing order of ML from top to bottom; since ML = log10(A) is increasing, they are also in increasing order of A. The supervisor asked the two students to compute the center of the intensity of the earthquakes using this dataset. One student used A and the usual definition of the median and obtained (16.39759 × 10^4 + 19.47287 × 10^4)/2 = 17.93523 × 10^4. The second student used ML and the usual definition of the median to find (5.21478 + 5.28943)/2 = 5.252105. When the supervisor saw the results he realized that the students must have used different scales, so he tried to put them on the same scale by transforming one of the results: 10^5.252105 = 17.86920 × 10^4. To his surprise the results were not quite the same. He was bothered to notice that this definition of the median is not equivariant under a change of scale, even though the change of scale is continuous and strictly increasing.
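The supervisor's observation is easy to reproduce. The sketch below (not from the thesis) uses the ML column of Table 5.1, reconstructs A as 10^ML, and compares the usual median on the two scales; it also checks, anticipating the two-sided definition given below, that the lower and upper middle order statistics are exactly equivariant.

```python
import math

ML = [4.21094, 4.69852, 4.92185, 5.12098, 5.21478,
      5.28943, 5.32558, 5.47828, 5.59103, 5.72736]
A = [10 ** m for m in ML]            # shaking amplitudes, reconstructed from ML

def usual_median(v):
    """Average-the-two-middle-values median for even n."""
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

print(usual_median(A))               # ~1.7935e5, the first student's answer
print(10 ** usual_median(ML))        # ~1.7869e5, the second student's answer, rescaled

s_ML, s_A = sorted(ML), sorted(A)
print(math.isclose(s_A[4], 10 ** s_ML[4]))   # True: lower middle values agree exactly
print(math.isclose(s_A[5], 10 ** s_ML[5]))   # True: upper middle values agree exactly
```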
Example A scientist asked two of his assistants to summarize the following data on the acidity of rain:

row number   pH       aH
 1           4.7336   18.4672 × 10^−6
 2           4.8327   14.6994 × 10^−6
 3           4.8492   14.1514 × 10^−6
 4           5.0050    9.8855 × 10^−6
 5           5.0389    9.1432 × 10^−6
 6           5.2487    5.6403 × 10^−6
 7           5.2713    5.3543 × 10^−6
 8           5.2901    5.1274 × 10^−6
 9           5.5731    2.6724 × 10^−6
10           5.6105    2.4519 × 10^−6

Table 5.2: Rain acidity data

pH is defined as the cologarithm of the activity of dissolved hydrogen ions (H+): pH = −log10(aH). In the data file handed to the assistants (Table 5.2), the rows are sorted in increasing order of pH from top to bottom and hence, since pH = −log10(aH) is decreasing, in decreasing order of aH. The scientist asked the two assistants to compute the 20th and 80th percentiles of the data to get an idea of the variability of the acidity. The first assistant used the pH scale and the traditional definition of the quantile, qF(p) = inf{x | F(x) ≥ p}, where F is the empirical distribution of the data. He obtained

qF(0.2) = 4.8327 and qF(0.8) = 5.2901, (5.2)

values positioned in rows 2 and 8 respectively. The second assistant also used the traditional definition of the quantile, but on the aH scale, and got

qF(0.2) = 2.6724 × 10^−6 and qF(0.8) = 14.1514 × 10^−6, (5.3)

which correspond to rows 9 and 3. The scientist noticed that the assistants had used different scales. Since one scale is in the opposite order of the other, and 0.2 and 0.8 are the same distance from 0 and 1 respectively, he reasoned that he should recover one assistant's results by transforming the other's. So he transformed the second assistant's results in Equation (5.3) (or, equivalently, read off the pH values in the corresponding rows 9 and 3) to get 5.5731 and 4.8492, which are not the same as the first assistant's results in Equation (5.2). He noticed, however, that these values are only one position off from the previous ones (rows 9 and 3 instead of 8 and 2). He then tried the same comparison himself for the 25th and 75th percentiles on both scales:

pH: qF(0.25) = 4.8492 and qF(0.75) = 5.2901, positioned in rows 3 and 8;
aH: qF(0.25) = 5.1274 × 10^−6 and qF(0.75) = 14.1514 × 10^−6, positioned in rows 8 and 3.

This time he was surprised to observe exactly the symmetry he expected. He wondered when such symmetry exists and what is true in general. He conjectured that the asymmetric form of the traditional quantile definition is the reason for the earlier asymmetry, and that the symmetry property can be off by at most one position in the dataset.

To define the quantile, we perform a thought experiment and use our intuition to decide how it should be defined. Suppose a data vector x = (x1, · · · , xn) is given. Define the sort operator, which permutes the components of a vector to give a vector with non-decreasing coordinates, by sort(x) = (y1, · · · , yn). In statistics, yi defined in this way is called the i-th order statistic of x and is usually denoted by x(i) or x_{i:n}. [This definition extends to random vectors (X1, · · · , Xn) as well.] The concept of quantile should only depend on sort(x). Let z = (z1, · · · , zr) be the non-decreasing subvector of all distinct elements of x. If zi is repeated mi times, we say zi has multiplicity mi, and therefore m1 + · · · + mr = n. Now imagine a uniform bar of length 1.
Cut the bar from left to right to r parts of lengths m1n , · · · , mrn proportional to 123 5.2. Definition of median and quantiles of data vectors and random samples the multiplicity of the zi. Assign a unique color to every zi, i = 1, · · · , r and color its piece with that color. Then reassemble the stick from left to right in the original order. To define the p-th quantile measure a length p from the left hand of the bar (whose total length is one). Determine the reassembled bar’s color at that point. However, this protocol fails at the end points as well as the points where two colors meet. Since each color is an equally eligible choice, we are led to the idea in defining the quantiles of a two–state solution at these points, giving us the left and right quantiles. But proceeding with our bar analogy, the intersection points and boundary points are: 0, m1 n , m1 +m2 n , · · · , m1 + · · ·+mr−1 n , 1. By the above discussion, if p is not an intersection/boundary point both left and right quantiles, which we denote by lqx and rqx respectively should be the same and equal to lqx(p) = rqx(p) =   z1 0 < p < m1 n zi m1+···+mi−1 n < p < m1+···+mi n zr m1+···+mr−1 n < p < 1 For the intersection points, if p = m1+···+mi−1n then lqx(p) = zi−1 and rqx(p) = zi. For the boundary points we define lqx(0) = −∞, rqx(0) = z1, lqx(1) = zr, rqx(1) =∞. As a convention, for a sorted vector y of length n, we define y0 = −∞ and yn+1 =∞. Lemma 5.2.1 Suppose x is a data vector of length n and y = sort(x) = (y1, · · · , yn). Also let y0 = −∞ and yn+1 = ∞. For 0 < p < 1, let [np] denote the integer part of np. Then a) np = [np]⇒ lqx(p) = y[np], rqx(p) = y[np]+1. b) np > [np]⇒ lqx(p) = y[np]+1, rqx(p) = y[np]+1. c) y = sort(x) and pi = i/n, i = 0, 1, · · · , n, implies y = (lqx(p1), · · · , lqx(pn)) = (rqx(p0), · · · , rqx(pn−1)). 124 5.2. Definition of median and quantiles of data vectors and random samples Proof a) Let np = h ∈ N. There are four cases: 1. For h = 0 and h = n the result is trivial by the definition of y0 and yn+1. 2. 0 < h < m1 ⇒ 0 < p < m1/n and by definition lqx(p) = rqx(p) = z1. But yh = yh+1 = z1. 3. There exists 1 < i ≤ r such thatm1+· · ·+mi−1 < h < m1+· · ·+mi ⇒ m1+···+mi−1 n < p < m1+···+mi n and by definition lqx(p) = rqx(p) = zi. But yh = yh+1 = zi since m1 + · · · +mi−1 < h < m1 + · · ·+mi. 4. h = m1 + · · · +mi, i < r ⇒ p = m1+···+min , i < r. By definition since this is an intersection point lqx(p) = zi and rqx(p) = zi+1. But zi = yh and zi+1 = yh+1. b) Let h = [np] ⇒ hn < p < h+1n . Since h and h + 1 differ exactly by one unit, there exists an i such that m1 + · · ·+mi−1 n ≤ h n < p < h+ 1 n ≤ m1 + · · · +mi n . Then by definition lqx(p) = rqx(p) = zi. But since m1 + · · ·+mi−1 < h+ 1 ≤ m1 + · · ·+mi, yh+1 = zi. c) Straightforward consequence of the definition. Suppose y′ ∈ {y1, · · · , yn}, for future reference, we define some additional notations for data vectors. Definition The minimal index of y′, m(y′) and the maximal index of y′, M(y′) are defined as below: m(y′) = min{i|yi = y′}, M(y′) = max{i|yi = y′}. It is easy to see that in y = sort(x) = (y1, · · · , yn) all the coordinates between m(y′) and M(y′) are equal to y′. Also note that if y′ = zi then M(y′) −m(y′) + 1 = mi is the multiplicity of zi. We use the notation mx andMx whenever we want to emphasize that they depend on the data vector x. 125 5.2. 
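To make the construction above concrete, here is a minimal sketch (not code from the thesis; the function names and the floating-point handling are our own) of the data-vector quantiles as characterized in Lemma 5.2.1, checked against the earlier examples.

```python
import math

def lq(x, p):
    """Left quantile of a data vector per Lemma 5.2.1 (y is 1-indexed there)."""
    y = sorted(x)
    n = len(y)
    k = n * p
    i = int(math.floor(k))
    if math.isclose(k, i):                    # np is (numerically) an integer: case (a)
        return -math.inf if i == 0 else y[i - 1]
    return y[i]                               # case (b): y_{[np]+1}

def rq(x, p):
    """Right quantile of a data vector: y_{[np]+1}, with the convention y_{n+1} = +inf."""
    y = sorted(x)
    n = len(y)
    k = n * p
    i = int(math.floor(k))
    if math.isclose(k, i + 1):                # guard against np landing just below an integer
        i += 1
    return math.inf if i >= n else y[i]

print(lq((1, 2, 3, 4, 5, 6), 0.5), rq((1, 2, 3, 4, 5, 6), 0.5))   # 3 4
x = (0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 10)
print(lq(x, 0.5), rq(x, 0.5))                                     # 1 2: the two-sided median
```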
Definition of median and quantiles of data vectors and random samples Lemma 5.2.2 Suppose x = (x1, · · · , xn), y = sort(x) and z a non–decreasing vector of all distinct elements of x. Then a) m(zi+1) =M(zi) + 1, i = 0, · · · , r − 1. b) Suppose φ is a bijective increasing transformation over R, mφ(x)(φ(zi)) = mx(zi), and Mφ(x)(φ(zi)) =Mx(zi), for i = 1, · · · , r. Proof a) is straightforward. b) Note that mx(y ′) = min{i|yi = y′} = min{i|φ(yi) = φ(y′)} = mφ(x)(φ(y′)). A similar argument works for Mx. We also define the position and standardized position of an element of a data vector. Definition Let x = (x1, · · · , xn) be a vector and y = sort(x) = (y1, · · · , y n). Then for y′ ∈ {y1, · · · , yn}, we define posx(y ′) = {mx(y′),mx(y′) + 1, · · · ,Mx(y′)}, where pos stands for position. Then we define the standardized position of y′ to be sposx(y ′) = ( mx(y ′)− 1 n , Mx(y ′) n ). In the following lemma we show that for every p ∈ spos(y′) (and only p ∈ spos(y′)), we have rq(p) = lq(p) = y′. For example if 1/2 ∈ spos(y′) then y′ is the (left and right) median. Lemma 5.2.3 Suppose x = (x1, · · · , xn), y = sort(x) = (y1, · · · , yn) and y′ ∈ {y1, · · · , yn}. Then p ∈ sposx(y′)⇔ lqx(p) = rqx(p) = y′. Proof Let z = (z1, · · · , zr) be the reduced vector with multiplicitiesm1, · · · ,mr. Then y′ = mi for some i = 1, · · · , r. 126 5.2. Definition of median and quantiles of data vectors and random samples case I: If i = 2, · · · , r, then m(y′) = m1 + · · · +mi−1 + 1, and M(y′) = m1 + · · ·+mi. case II: If i = 1, then m(y′) = 1 and M(y′) = m1. In any of the above cases for p ∈ (m(y′)−1n , M(y ′) n ) and only p ∈ (m(y ′)−1 n , M(y′) n ) rqx(p) = lqx(p) = zi, by definition. Now we prove a lemma that will become useful later on. It is easy to see that if u ∈ pos(y′) then ( u− 1 n , u n ) ⊂ spos(y′). We conclude that ∪u∈pos(y′)( u− 1 n , u n ) ⊂ spos(y′). In fact spos(y′) can possibly have a few points on the edge of the intervals not in ∪u∈pos(y′)(u−1n , un). Lemma 5.2.4 Suppose x is a data vector of length n and y′ is an element of this vector. Also assume y′ ≥ xi, i ∈ I, y′ ≤ xj, j ∈ J, I ∩ J = φ, I, J ⊂ {1, 2, · · · , n}. Then there exist a p in ( |I|−1n , 1 − |J |n ) that belongs to spos(y′). In other words lq(p) = rq(p) = y′. Proof From the assumption, we conclude that pos(y′) includes a number between |I| and n−|J |. Let us call it u0. Hence (u0−1n , u0n ) ⊂ spos(y′). Since |I| ≤ u0 ≤ n− |J |, we conclude that spos(y′) intersects with ∪|I|≤u≤n−|J |( u− 1 n , u n ) ⊂ ( |I| − 1 n , 1− |J | n ). 127 5.3. Defining quantiles of a distribution 5.3 Defining quantiles of a distribution So far, we have only defined the quantile for data vectors. Now we turn to defining the quantile for distribution functions. The p-th quantile for a random variable X with distribution function F as pointed out above is traditionally defined to be q(p) = inf{u|F (u) ≥ p}. We showed by an example above the asymmetry issue to which that defini- tion can lead. We show that the issue arises due to the flatness of F in an interval. To get around this problem as the case of data vectors, we define the left and right quantile for the distribution F as follows: lqF (p) = inf{u|F (u) ≥ p}, and rqF (p) = inf{u|F (u) > p}. If there are more than one random variables in the discussion, to avoid confusion, we use the notations lqFX , rqFX . Also when there is no chance of confusion, we simply use lq, rq. The reason for this definition should become clear soon. First let us apply this definition to the fair coin example. 
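Before that walkthrough, a minimal numeric sketch of the two definitions just given (not thesis code; the grid search standing in for the infimum over R is our own simplification), applied to the fair coin:

```python
def F_coin(x):
    """Distribution function of a fair coin with outcomes 0 and 1."""
    return 0.0 if x < 0 else (0.5 if x < 1 else 1.0)

grid = [k / 100 for k in range(-200, 201)]    # fine grid on [-2, 2]

def lq(F, p):
    return min(u for u in grid if F(u) >= p)  # inf{u : F(u) >= p}, restricted to the grid

def rq(F, p):
    return min(u for u in grid if F(u) > p)   # inf{u : F(u) > p}, restricted to the grid

print(lq(F_coin, 0.5), rq(F_coin, 0.5))       # 0.0 1.0: the two medians of the coin
print(lq(F_coin, 0.3), rq(F_coin, 0.3))       # 0.0 0.0: away from p = 1/2 they agree
```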
If p 6= 1/2 then both lqF (p) and rqF (p) will be the same and give us the same value. However, lqF (1/2) = 0 and rqF (1/2) = 1. This is exactly what one would hope for. To see the consequences of this definition, we prove the following lemma: Lemma 5.3.1 (Quantile Properties Lemma) Suppose X is a random vari- able on the probability space (Ω,Σ, P ) with distribution function F : a) F (lqF (p)) ≡ P (X ≤ lqF (p)) ≥ p. b) lqF (p) ≤ rqF (p). c) p1 < p2 ⇒ rqF (p1) ≤ lqF (p2). This and (b) imply that lqF (p1) ≤ rqF (p1) ≤ lqF (p2) ≤ rqF (p2). d) rqF (p) = sup{x|F (x) ≤ p}. 128 5.3. Defining quantiles of a distribution e) P (lqF (p) < X < rqF (p)) = 0. In other words if lqF (p) < rqF (p) then F is flat in the interval (lqF (p), rqF (p)). f) P (X < rqF (p)) ≤ p. g) If lqF (p) < rqF (p) then F (lqF (p)) = p and hence P (X ≥ rqF (p)) = 1−p. h) lqF (1) > −∞, rqF (0) <∞ and P (rqF (0) ≤ X ≤ lqF (1)) = 1. i) lqF (p) and rqF (p) are non–decreasing functions of p. j) Suppose F has a jump at x, in other words P (X = x) > 0, which is equivalent to limy→x− F (y) < F (x). Then lqF (F (x)) = x. k) x < lqF (p)⇒ F (x) < p and x > rqF (p)⇒ F (x) > p. Proof a) Take a strictly decreasing sequence {xn} in R that tends to lq(p). For every xn, F (xn) ≥ p since xn > lq(p). Otherwise F (xn) < p⇒ F (y) < p, ∀y ≤ xn. Hence (−∞, xn] ∩ {y|F (y) ≥ p} = ∅. We conclude that lq(p) = inf{y|F (y) ≥ p} ≥ xn > lq(p), which is a contradiction. Now since F is right continuous lim n→∞F (xn) = F (lq(p)). But F (xn) ≥ p, ∀n ∈ N. Hence limn→∞ F (xn) ≥ p. b) Note that {u|F (u) > p} ⊂ {u|F (u) ≥ p}. c) Note that {x|F (x) ≥ p2} ⊂ {x|F (x) > p1} if p2 > p1. 129 5.3. Defining quantiles of a distribution d) Suppose p ∈ [0, 1] is given. Let A = {x|F (x) > p} and B = {x|F (x) ≤ p}. We want to show that inf A = supB. Consider two cases: 1) Suppose inf A < supB. Then pick inf A < y < supB. We get a contradiction as follows: inf A < y ⇒ F (y) > p. Otherwise, since F is increasing F (y) ≤ p ⇒ y < x, ∀x ∈ A⇒ y ≤ inf A. y < supB ⇒ F (y) ≤ p. Otherwise, since F is increasing F (y) > p⇒ y > x, ∀x ∈ B ⇒ y ≥ supB. We conclude F (y) > p and F (y) ≤ p, a contradiction. 2) Suppose supB < inf A. Take supB < y < inf A. supB < y ⇒ F (y) > p. Otherwise, F (y) ≤ p⇒ y ∈ B ⇒ y ≤ supB. y < inf A⇒ F (y) ≤ p. Otherwise F (y) > p⇒ y ∈ A⇒ y ≥ inf A. Once more F (y) > p and F (y) ≤ p which is a contradiction. e) Suppose F is not flat in that interval. ∃v1 < v2 ∈ (lq(p), rq(p)) such that F (v2) > F (v1). F (v2) > F (v1) ≥ F (lq(p)) ≥ p. This is a contradiction since v2 < rq(p). f) Take an increasing sequence xn ↑ rqF (p), then note that P (X ≤ xn) ≤ p since xn < rqF (p). Let An = {X ≤ xn} and A = {X < rqF (p)} then limn→∞An = A, by continuity of the probability (See [9]): P (X < rqF (p)) = P ( lim n→∞An) = limn→∞P (An) ≤ p. g) By a) F (lqF (p)) = P (X ≤ lqF (p)) ≥ p. Suppose P (X ≤ lqF (p)) > p. This implies that lqF (p) ≥ rqF (p). By b) we get lqF (p) = rqF (p), which is a contradiction. h) Note that lqF (0) = inf{x|F (x) ≥ 0} = inf R = −∞. Suppose rqF (0) =∞. Then {x|F (x) > 0} = ∅ ⇒ ∀x ∈ R, F (x) = 0, a contradiction to the properties of a distribution function F . 130 5.3. Defining quantiles of a distribution Also note that rqF (1) = inf{x|F (x) > 1} = inf ∅ =∞. Suppose lqF (1) = −∞. Then inf{x|F (x) ≥ 1} = −∞⇒ ∀x ∈ R, F (x) ≥ 1⇒ ∀x ∈ R, F (x) = 1, a contradiction. For the second part note that rqF (0) ≤ lqF (1) by (c). Then P (rqF (0) ≤ X ≤ lqF (1)) = 1− P (lqF (1) < X < rqF (1)) − P (lqF (0) < X < rqF (0)) = 1− 0− 0, by part (e). i) Trivial. 
j) Suppose P (X = x) > 0 then limy→x− F (y) = P (X < x) < P (X < x) + P (X = x) = F (x). Now assume that limy→x− F (y) < F (x), then P (X < x) < F (x)⇒ P (X = x) > 0. To prove that in this case lqF (F (x)) = x, let p = F (x) we want to show lqF (p) = x. Note that F (x) = p gives lqF (p) ≤ x. On other hand for any y < x, we know that F (y) < p, by a) y cannot be lqF (p). Hence x = lqF (F (x)). k) First part follows from the definition of lq and the second part from part (d). The following lemma is useful in proving that a specific value is the left or right quantile for a given p. Lemma 5.3.2 (Quantile value criterion) a) lqF (p) is the only a satisfying (i) and (ii), where (i) F (a) ≥ p, (ii) x < a⇒ F (x) < p. 131 5.4. Left and right extreme points b) rqF (p) is the only a satisfying (i) and (ii), where (i) x < a⇒ F (x) ≤ p, (ii) x > a⇒ F (x) > p. Proof a) Both properties hold for lqF (p) by previous lemma. If both a < b satisfy them, then F (a) ≥ p by (i). But since b satisfies the properties and a < b, by (ii), F (a) < p which is a contradiction. b) Both properties hold for rqF (p) by previous lemma. If both a < b satisfy them, then we can get a contradiction similar to above. 5.4 Left and right extreme points In Lemma 5.3.1, we showed these properties about rqX(0) and lqX(1): rqX(0) <∞, lqX(1) > −∞, rqX(0) ≤ lqX(1), and P (rqX(0) ≤ X ≤ lqX(1)) = 1. The above states that all the mass is between these two values. We will show in the next lemma that these values are also the minimal values to satisfy this property. This is the motivation for the following definition. Definition We call rqF (0) the “left extreme” and lqF (1) the “right ex- treme” of the distribution function F . Lemma 5.4.1 (Left and right extreme points property) Suppose X is a random variable with distribution function F . a) The right extreme lqF (1) is the smallest a satisfying P (X ≤ a) = 1. In other words min a {P (X ≤ a) = 1} = lqF (1). 132 5.5. The quantile functions as inverse b) The left extreme rqF (0) is the biggest a satisfying P (X ≥ a) = 1. max a {P (X ≥ a) = 1} = rqF (0). c) Consider the following subset of R2 I2 = {(a, b) ∈ R2|P (X ∈ [a, b]) = 1}. Then ∩(a,b)∈I2 [a, b] = [rqX(0), lqX(1)]. Proof a) In Lemma 5.3.1, we showed F (lqF (1)) = 1. Also F (a) < 1 for a < lqF (1) by the definition of lqF . b) In Lemma 5.3.1, we showed P (X ≥ rqX(0)) = 1. Suppose a > rqX(0). Then since rqX(p) = inf{x|F (x) > 0}, ∃c ∈ {x|F (x) > 0}, c < a ⇒ ∃c < a, F (c) > 0 ⇒ ∃c, P (X < a) ≥ F (c) > 0 ⇒ P (X ≥ a) = 1− P (X < a) < 1. c) This is straightforward from a) and b). 5.5 The quantile functions as inverse The following lemma shows that lqX and rqX can be considered as the inverse of the distribution function in some sense. Lemma 5.5.1 (Quantile functions as inverse of the distribution function) a) F (x) < p⇔ x < lqX(p). (i.e. {x|F (x) < p} = (−∞, lqF (p)).) b) {x|F (x) ≤ p} = (−∞, rqX(p)] or (−∞, rqX(p)). c) If F is continuous at rqX(p) then {x|F (x) ≤ p} = (−∞, rqX(p)]. d) {x|F (x) ≥ p} = [lqX(p),∞). e) {x|F (x) > p} = (rqX(p),∞) or [rqX(p),∞). f) If F is continuous then {x|F (x) > p} = (rqX(p),∞). Proof 133 5.5. The quantile functions as inverse a) (⇒) is true because otherwise if x ≥ lqX(p) ⇒ F (x) ≥ F (lqX(p)) ≥ p, which is a contradiction. To show (⇐) note that by the definition of lqX(p), if F (x) ≥ p then x ≤ lqX(p). b) We need to show that (1) (−∞, rqX(p)) ⊂ {x|F (x) ≤ p} and (2) {x|F (x) ≤ p} ⊂ (−∞, rqX(p)]. For (1), suppose x < rqX(p). We claim F (x) ≤ p. Otherwise if F (x) > p by the definition of rqX(p), rqX(p) ≤ x. 
For (2), suppose F (x) ≤ p. Then since rqX(p) = sup{x|F (x) ≤ p}, we conclude x ≤ rqX(p). c) By Part (b), it suffices to show F (rqX(p)) = p. This is shown in the next lemma. d) R.H.S ⊂ L.H.S by Lemma 5.3.1 part (a). L.H.S ⊂ R.H.S by the definition of lq. e) Note that x > rqF (p) then F (x) > p by Lemma 5.3.1 part (k). Also F (x) > p⇒ rqF (p) ≤ x by definition of rq. f) This is a consequence of part (e) and next lemma. For the continuous distribution functions, we have the following lemma. Lemma 5.5.2 (Continuous distributions inverse) If F is continuous F (x) = p⇔ x ∈ [lqX(p), rqX(p)]. Proof If x < lqX(p) then we already showed that F (x) < p. Also if x > lqX(p) then rqX(p) = sup{x|F (x) ≤ p} ⇒ F (x) > p. (Because otherwise if F (x) ≤ p⇒ rqX(p) ≥ p.) It remains to show that F (lqX(p)) = F (rqX(p)) = p. But by Lemma 5.3.1, we have F (lqX(p)) ≥ p. Hence it suffices to show that F (rqX(p)) ≤ p. But by Part (f) of Lemma 5.3.1 and continuity of F F (rqF (x)) = P (X ≤ rqF (x)) = P (X < rqF (x)) ≤ p. 134 5.6. Equivariance property of quantile functions 5.6 Equivariance property of quantile functions Example (Counter example for Koenker–Hao claim) Suppose X is dis- tributed uniformly on [0,1]. Then lqX(1/2) = 1/2. Now consider the follow- ing strictly increasing transformations φ(x) = { x −∞ < x < 1/2 x+ 5 x ≥ 1/2 . Let T = φ(X) then the distribution of T is given by P (T ≤ t) =   0 t ≤ 0 t 0 < t ≤ 1/2 1/2 1/2 < t ≤ 5 + 1/2 t− 5 5 + 1/2 < t ≤ 5 + 1 1 t > 5 + 1 . It is clear form above that lqT (1/2) = 1/2 6= φ(lqX(1/2)) = φ(1/2) = 5 + 1/2. We start by defining φ≤(y) = {x|φ(x) ≤ y}, φ⋆(y) = supφ≤(y), and φ≥(y) = {x|φ(x) ≥ y}, φ⋆(y) = inf φ≥(y). Then we have the following lemma. Lemma 5.6.1 Suppose φ is non-decreasing. a) If φ is left continuous then φ(φ⋆(y)) ≤ y. b) If φ is right continuous then φ(φ⋆(y)) ≥ y. Proof 135 5.6. Equivariance property of quantile functions a) Suppose xn ↑ φ⋆(y) a strictly increasing sequence. Then since xn < φ⋆(y), we conclude xn ∈ φ≤(y)⇒ φ(xn) ≤ y. Hence limn→∞ φ(xn) ≤ y. But by left continuity φ(xn) ↑ φ(φ⋆(y)). b) Suppose xn ↓ φ⋆(y) a strictly decreasing sequence. Then since xn > φ⋆(y), we conclude xn ∈ φ≥(y)⇒ φ(xn) ≥ y. Hence limn→∞ φ(xn) ≥ y. But by right continuity φ(xn) ↓ φ(φ⋆(y)). Theorem 5.6.2 (Quantile Equivariance Theorem) Suppose φ : R → R is non-decreasing. a) If φ is left continuous then lqφ(X)(p) = φ(lqX(p)). b) If φ is right continuous then rqφ(X)(p) = φ(rqX(p)). Proof a) We use Lemma 5.3.2 to prove this. We need to show (i) and (ii) in that lemma for φ(lqX(p)). First note that (i) holds since Fφ(X)(φ(lqX(p))) = P (φ(X) ≤ φ(lqX(p))) ≤ P (X ≤ lqX(p)) ≥ p. For (ii) let y < φ(lqX(p)). Then we want to show that Fφ(X)(y) < p. It is sufficient to show φ⋆(y) < lqX(p). Because then P (φ(X) ≤ y) ≤ P (X ≤ φ⋆(y)) < p. To prove φ⋆(y) < lqX(p), note that by the previous lemma φ(φ⋆(y)) ≤ y < φ(lqX(p)). b) We use Lemma 5.3.2 to prove this. We need to show (i) and (ii) in that lemma for φ(rqX(p)). To show (i) note that if y < φ(rqX(p)), P (φ(X) ≤ y) ≤ P (φ(X) < φ(rqX(p))) ≤ P (X < rqX(p)) ≤ p. 136 5.7. Continuity of the left and right quantile functions To show (ii), suppose y > φ(rqX(p)). We only need to show φ⋆(y) > rqX(p) because then P (φ(X) ≤ y) ≥ P (X < φ⋆(y)) > p. But by previous lemma φ(φ⋆(y)) ≥ y > φ(rqX(p)). Hence φ⋆(y) > rqX(p). 5.7 Continuity of the left and right quantile functions Lemma 5.7.1 (Continuity of quantile functions) Suppose F is a distribu- tion function. Then a) lqF is left continuous. b) rqF is right continuous. 
Proof a) Suppose pn ↑ p be a strictly increasing sequence in [0,1]. Then since lqF is increasing, lqF (pn) is increasing and hence has a limit we call y. We need to show y = lqF (p). We show this in two steps: 1. y ≤ lqF (p): Let A = {x|F (x) ≥ p}. Then for any x ∈ A: F (x) ≥ p⇒ F (x) ≥ pn ⇒ x ≥ lqF (pn)⇒ x ≥ sup n∈N lqF (pn)⇒ x ≥ y. Hence lqF (p) = inf A ≥ y. 2. y ≥ lqF (p): We only need to show that F (y) ≥ p. But y ≥ lqF (pn), ∀n⇒ F (y) ≥ F (lqF (pn)) ≥ pn, ∀n⇒ F (y) ≥ p. b) Take a strictly decreasing sequence pn ↓ p, we need to show rqF (pn) → rq(p). The limit of rqF (pn) exists since rq is non–decreasing. Let y = infn∈N rqF (pn). We proceed in two steps: 1. rqF (p) ≤ y: rqF (p) ≤ rqF (pn), ∀n ∈ N⇒ rqF (p) ≤ inf n∈N rqF (pn) = y. 137 5.7. Continuity of the left and right quantile functions 2. rqF (p) ≥ y: Since rqF (p) = sup{x|F (x) ≤ p} by Lemma 5.3.1, we only need to show z < y ⇒ F (z) ≤ p. But if F (z) > p then F (z) > pn for some n ∈ N⇒ z ≥ rqF (pn) for some n ∈ N. Hence, y > z ≥ rq(pn) for some n ∈ N, which is a contradiction to y = infn∈N rq(pn). FX is a function that ranges over [0, 1]. Once F hits 1 it will remain one. Similarly before F becomes positive it is always zero. This is the motivation for the following definition. Definition Suppose F is a distribution function. We define the real domain of F to be RD(F ) = {x|0 < F (x) < 1}. Lemma 5.7.2 Suppose F is a distribution function. Then RD(F ) = (rq(0), lq(1)) or RD(F ) = [rq(0), lq(1)). Proof We proceed in two steps (a),(b). (a) RD(F ) ⊂ [rq(0), lq(1)): Note that (a) ⇔ [rq(0), lq(1))c ⊂ RD(F )c, where c stands for taking the compliment of a set in R. If x ∈ [rq(0), lq(1))c then x < rq(0) or x ≥ lq(1). x < rq(0) then F (x) = 0 by the definition of rq(0). x ≥ lq(1) then F (x) ≥ F (lq(1)) ≥ 1⇒ F (x) = 1. (b) (rq(0), lq(1)) ⊂ RD(F ): x > rq(0)⇒ F (x) > 0. (This is because rq(0) = sup{x|F (x) ≤ 0}.) x < lq(1)⇒ F (x) < 1. (This is because lq(1) = inf{x| F (x) = 1}.) Definition For a random variableX with distribution function F , we define the L-quantile and R-quantile functions on R: LQF : R→ R, LQF = lqF ◦ F, RQF : R→ R, RQF = rqF ◦ F. 138 5.7. Continuity of the left and right quantile functions Lemma 5.7.3 (Properties of LQ and RQ) a) LQF , RQF are non–decreasing. b)LQF (x) ≤ x ≤ RQF (x). c) LQF , RQF are left continuous and right continuous, respectively. d) lqF (F (x)) = rqF (F (x))⇒ LQF (x) = RQF (x) = x. e) We have the following equalities: LQF (v) = inf{u|F (u) = F (v)}, RQF (v) = sup{u|F (u) = F (v)}. f) P (LQF (x) < X < RQF (x)) = 0. Proof a) This result follows from the fact that lqF , rqF and F are non–decreasing. b) LQF (x) = inf{y|F (y) ≥ F (x)}. Since x ∈ {y|F (y) ≥ F (x)}, x ≥ LQF (x). RQF (x) = sup{y|F (y) ≤ F (x)}. Since x ∈ {y|F (y) ≤ F (x)}, RQF (x) ≥ x. c) Suppose xn ↓ x is a strictly decreasing sequence, then F (xn) ↓ F (x) since F is right continuous. Hence rqF (F (xn)) ↓ rqF (F (x)) since rqF is right continuous by Lemma 5.7.1. To prove LQF is left continuous, let xn ↑ x be a strictly increasing sequence and let pn = F (xn). Then since {pn} is an increasing and bounded sequence, pn → p′. Also let F (x) = p. We consider two cases: 1. p = p′. In this case pn ↑ p is a strictly increasing sequence. Since lqF is left continuous, limn→∞LQF (xn) = limn→∞ lqF (pn) = lqF (p) = LQF (x). 2. p′ < p. This means F has a jump at x. By Lemma 5.3.1 j), LQF (x) = lqF (F (x)) = x. Let y = limn→∞ lqF (F (xn)). We claim y ≥ x. Otherwise since F (x) = p and F has a jump at p, F (y) < p ⇒ F (y) < pn, for some n ∈ N. 
But y = supn∈N lq(F (xn)). Hence y ≥ lq(F (xn)) and F (y) ≥ F (lq(pn)) ≥ pn > p a contradiction. Thus y = limn→∞ lqF (F (xn)) ≥ x. Also note that lqF (pn) ≤ lqF (F (x)) = x, ∀n⇒ y = supn∈N lqF (pn) ≤ lqF (F (x)) = x. We conclude y = x. In other words y = limn→∞LQF (xn) = LQF (x). d) This result is a straightforward consequence of b). e) This result follows immediately from the definition of these quantiles. 139 5.7. Continuity of the left and right quantile functions f) P (LQF (x) < X < RQF (x)) = P (lqF (F (x)) < X < rqF (F (x))) = 0, by Lemma 5.3.1. Example Suppose the distribution function F depicted in Figure 5.1 is given as follows F (x) =   2 π arctan(x)+1 5 x ≤ 0 1/5 0 ≤ x ≤ 1 x/5 1 ≤ x < 2 3/5 2 ≤ x < 3 2 π arctan(x−3)+4 5 x ≥ 3 . Then lqF (0.2) = 0, rqF (0.2) = 1, lqF (0.5) = rqF (0.5) = 2 and lqF (0.55) = rqF (0.55) = 2. We have also plotted lq, rq, LQ,RQ in Figures 5.2 to 5.5. If we are given a data vector, we can compute the sample distribution and then compute the left and right quantile functions. In the sequel, we show that we get the same definition as we gave for left and right quantile for a vector. Lemma 5.7.4 Suppose a data vector x is given and Fn is its sample distri- bution. Then lqx(p) = lqFn(p) and rqx(p) = rqFn(p). Proof We show this for non–intersection points. Similar arguments work for intersection points. If p is not an intersection point, then m1+···+mi−1 n < p < m1+···+mi n and rqx(p) = lqx(p) = zi. We want to show that inf{u|Fn(u) ≥ p} is also zi, where Fn(u) = n∑ i=1 I(−∞,xi](u). But it follows that: Fn(zi) = m1 + · · ·+mi n ; lqFn(p) = inf{u|Fn(u) ≥ p}; rqFn(p) = inf{u|Fn(u) > p}. 140 5.7. Continuity of the left and right quantile functions −2 −1 0 1 2 3 4 5 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 x F Figure 5.1: An example of a distribution function with discontinuities and flat intervals. 141 5.7. Continuity of the left and right quantile functions 0.0 0.2 0.4 0.6 0.8 1.0 − 2 − 1 0 1 2 3 4 5 p lq (p) Figure 5.2: The left quantile (lq) function for the distribution function given in Example 5.7. Notice that this function is left continuous and increasing. 0.0 0.2 0.4 0.6 0.8 1.0 − 2 − 1 0 1 2 3 4 5 p rq (p) Figure 5.3: The right quantile (rq) function for the distribution function given in Example 5.7. Notice that this function is right continuous and increasing. 142 5.7. Continuity of the left and right quantile functions −4 −2 0 2 4 6 8 − 2 − 1 0 1 2 3 4 5 x LQ (x) Figure 5.4: LQ function for Example 5.7. Notice that this function is in- creasing and left continuous. −4 −2 0 2 4 6 8 − 2 − 1 0 1 2 3 4 5 x R Q( x) Figure 5.5: RQ function for Example 5.7, notice that this function is in- creasing and right continuous. 143 5.8. Equality of left and right quantiles Since Fn is a step function the right hand side of the two above equations can only be one of −∞, z1, · · · , zr,∞. The first u that makes Fn greater than or equal to p is zi, proving the assertion. Lemma 5.7.4 guarantees that our definition of quantile for data vectors is consistent with the definition for distributions. Lemma 5.3.1 shows that if a distribution function F is flat then rq and lq might differ. To study this further when rq and lq are equal, we define the concept of heavy and weightless points in the next section. 5.8 Equality of left and right quantiles This section finds necessary and sufficient conditions for the left and right quantiles to be equal. We start with some definitions. Definition Suppose X is a random variable with the distribution function F . 
x ∈ R is called a weightless point of a distribution function F if there exist a neighborhood (an open interval) around x such that F is flat in that neighborhood. We call a point heavy if it is not weightless. Denote the set of all heavy points by H. Definition A point x ∈ R is called a super heavy point if P (X ∈ (x− ǫ, x]) > 0, P (X ∈ [x, x+ ǫ)) > 0, ∀ǫ > 0. We denote the set of super heavy points by SH. Obviously any super heavy point is heavy. We can also define right heavy points and left heavy points. Definition A point x ∈ R is called a right heavy point if P (X ∈ [x, x+ ǫ)) > 0, ∀ǫ > 0. We show the set of all right heavy points by RH. A point x ∈ R is called a left heavy point if P (X ∈ (x− ǫ, x]) > 0, ∀ǫ > 0. We denote the set of all such points by LH. Obviously any heavy point is either right heavy or left heavy. Also a super heavy point is both right heavy and left heavy. 144 5.8. Equality of left and right quantiles Lemma 5.8.1 Suppose X is a random variable with distribution function F . Also suppose that u1 < u2 are heavy points and F is flat on [u1, u2] i.e. F (u1) = F (u2). Then lq(p) = u1 and rq(p) = u2, where p = F (u1) = P (X ≤ u1). Proof 1. lq(p) = u1: Since F (u1) = p, lq(p) ≤ u1. Suppose lq(p) < u1. Then P (lq(p) < X < u2) > 0, since u1 is a heavy point. We can rewrite above as P (lq(p) < X ≤ u1) + P (u1 < X < u2) > 0, the second term is zero by the flatness assumption. Hence P (lq(p) < X ≤ u1) > 0. But then P (X ≤ lq(p)) = p(X ≤ u1)− P (lq(p) < X ≤ u1) < p, which is a contradiction to Lemma 5.3.1 a). 2. rq(p) = u2: From F (u) = p for all u1 ≤ u < u2, we conclude rq(p) ≥ u2. To prove the inverse, note that for any u1 < u3 < u2, F (u3) = p since F is flat on [u1, u2]. Since rq(p) = sup{x|F (x) ≤ p} by Lemma 5.3.1, rq(p) ≥ u2. Now note that since u2 is heavy, for any u3 > u2, P (u1 < X < u3) > 0⇒ F (u3) = F (u1) + P (u1 < X < u3) > p. Hence only values less than or equal to u2 are in {x|F (x) ≤ p}. We conclude the sup is at most u2. In other words rq(p) ≤ u2. Lemma 5.8.2 Suppose X is a random variable with distribution function F . Then v is a weightless point ⇔ v ∈ (LQF (v), RQF (v)). 145 5.8. Equality of left and right quantiles Proof (⇐): This is trivial by Lemma 5.7.3 part (f). (⇒): If v /∈ (LQF (v), RQF (v))⇒ LQF (v) = RQF (v) = v by Lemma 5.7.3. RQF (v) = v ⇒ inf{x| F (x) > F (v)} = v ⇒ F (x) > F (v), ∀x > v ⇒ P (v < X ≤ x) > 0, ∀x > v ⇒ P (v < X < x) > 0,∀x > v, where the last (⇒) is because for any x > v, we can take v < x′ < x and note that P (v < X < x) ≥ P (x < X ≤ x′) > 0. We conclude v is a right heavy point which is a contradiction. For a weightless point v, there is an interval (a, b) such that v ∈ (a, b) and F is flat in that interval. It is useful to consider the flat interval around v. This is the motivation for the following definition. Definition Suppose X is a random variable with distribution function F and v is a weightless point of F . Then we define the weightless interval of v, I(v) by I(v) = ∪a p while a′ < z gives F (a′) ≤ p. Hence P (a′ ≤ X ≤ b′) > 0, a contradiction. c) Assume x is right heavy. Then let p = F (x). We claim that rqF (p) = x. Suppose rqF (p) = x ′ < x then F (x) = p is a contradiction to rqF (p) = sup{y|F (y) ≤ p}. On the other hand for any x′ > x, pick x < x′′ < x′. We have F (x′′) > p since x is right heavy. Since rqF (p) = inf{y|F (y) > p} and F (x′′) > p then x′ > rqF (p). We conclude that rqF (p) = x. Now suppose x is left heavy. Let p = F (x). We claim lqF (p) = x. 
First note that for any x′ < x, F (x′) < F (x) = p since x is left heavy. Hence lqF (p) ≥ x. But F (x) = p and since lqF (p) = inf{y|F (y) ≥ p} we are done. d) The necessary and sufficient conditions follow immediately from c). To show that H − SH is countable, we prove LH − SH and RH − SH are countable. To that end, for any x ∈ LH−SH consider Ix = (LQ(x), RQ(x)). Since x is not super heavy this interval has positive length. Also note that x < y, x, y ∈ H implies Ix ∩ Iy = ∅. To prove this, note that since x, y are left heavy, LQ(x) = x and LQ(y) = y. We conclude Ix = (x,RQ(x)) Iy = (y,RQ(y)). If Ix ∩ Iy is nonempty then we conclude x < y < RQ(x). Then 0 = P (X ∈ (x,RQ(x))) ≤ P (X ∈ (x, y)) > 0. 148 5.8. Equality of left and right quantiles (P (X ∈ (x, y)) > 0 since y is left heavy.) This is a contradiction and hence Ix ∩ Iy = ∅. Now pick a rational number qx ∈ Ix. Then Ix ∩ Iy = ∅ ⇒ qx 6= qy. Since the set of rational numbers is countable LH − SH is countable. A similar argument works for RH − SH. Lemma 5.8.7 Suppose X is a random variable with distribution function F . Then the set A = {p| p ∈ [0, 1], lqF (p) 6= rqF (p)} is countable. Proof For every p ∈ A let J(p) = (lqF (p), rqF (p)). Then for every x ∈ J(p), F (x) = p. (F (x) ≥ F (lqF (p)) ≥ p. Now if F (x) > p, we get a contradiction to x < lqX(p).) We conclude p, p′ ∈ A, p 6= p′ ⇒ J(p) ∩ J(p′) = ∅. The intervals are disjoint, every interval has a positive length and their union is a subset of [0, 1]. Hence there are only countable number of such intervals. We conclude A is countable. The following lemma gives sufficient and necessary conditions for lqX = rqX , ∀p ∈ (0, 1). Lemma 5.8.8 lqX(p) = rqX(p), p ∈ (0, 1) iff FX is strictly increasing. Proof (⇒) lqX(p) = inf{x|FX (x) ≥ p} = inf{x|x ≥ F−1X (p)} = inf{x|x > F−1X (p)} = rqX(p) . (⇐): If Fx is not strictly increasing then ∃x2 < x1 s.t FX(x1) = FX(x2). Then let p = FX(x1). We also have p = FX(x2). Hence lqX(p) = inf{FX(x) ≥ p} ≤ x1, and rqX(p) = sup{FX(x) ≤ p} ≥ x2, which is a contradiction. 149 5.9. Distribution function in terms of the quantile functions 5.9 Distribution function in terms of the quantile functions It is interesting to understand the connections amongst lq, rq and F . We answer the following question: Question: Given one of lq, rq or F , are the other two uniquely determined? The answer to this question is affirmative and the following theorem says much more. Theorem 5.9.1 Suppose F is a distribution function. Then a) For p0 ∈ (0, 1), lq(p0) = limp→p−0 rq(p0). Hence, the function rq uniquely determines lq. b) For p0 ∈ (0, 1), rq(p0) = limp→p+0 lq(p0). Hence lq uniquely determines rq. c) lq or rq continuous at p0 ∈ (0, 1) ⇒ lq(p0) = rq(p0). d) lq(p0) = rq(p0)⇒ lq and rq are continuous at p0. e) lq is continuous at p⇔ rq is continuous at p. f) F (x) = inf{p|lq(p) > x}. g) F (x) = inf{p|rq(p) > x}. Proof a) Take a strictly increasing sequence pn ↑ p0 in [0, 1]. Then pn−1 < pn < pn+1 ⇒ lq(pn−1) < rq(pn) < lq(pn+1), (5.4) by Lemma 5.3.1, part (c). By the left continuity of lq, lq(pn) → lq(p0). Applying the Sandwich Theorem about the limits from elementary cal- culus to the Equation (5.4), we conclude that rq(pn)→ lq(p0). 150 5.9. Distribution function in terms of the quantile functions b) Take a strictly decreasing sequence pn ↓ p0 in [0, 1]. Then pn−1 > pn > pn+1 ⇒ rq(pn−1) > lq(pn) > rq(pn+1), (5.5) again by Lemma 5.3.1, part (c). By the right continuity of rq, rq(pn)→ rq(p0). 
Applying the Sandwich Theorem for limits to Equation (5.5), we conclude that lq(pn)→ rq(p0). c) Suppose lq is continuous at p0. Then limp→p+0 lq(p) = lq(p0). But by the previous parts of this theorem, we also have limp→p+0 = rq(p0). Similar arguments work if rq is continuous at p0. d) To prove lq is continuous at p0 note that lim p→p−0 lq(p0) = lq(p0) = rq(p0) = lim p→p+0 lq(p0), where the first equality comes from the left continuity of lq and the last one comes from (b). Similar arguments work for rq. e) This result follows immediately from the previous two parts. f) Let A = {p|lq(p) > x}. We want to show that F (x) = inf A. To do that we first show that F (x) ≤ inf A. By Lemma 5.7.3, lq(F (x)) ≤ x⇒ F (x) ≤ a, ∀a ∈ A⇒ F (x) ≤ inf A. It remains to show that inf A ≤ F (x). Suppose to the contrary that F (x) < inf A. Then take F (x) < p0 < inf A to get lq(p0) ≤ x, p0 > F (x) ⇒ F (lq(p0)) ≤ F (x), p0 > F (x). But by Lemma 5.3.1 part (a), p0 ≤ F (lq(p0)). Hence p0 ≤ F (lq(p0)) ≤ F (x), p0 > F (x), which is a contradiction. 151 5.10. Two-sided continuity of lq/rq g) Let B = {p|rq(p) > x} and A be as the previous part. Then F (x) = inf A ≤ inf B. It only remains to show that inf B ≤ F (x). Otherwise, we can pick p0, F (x) < p0 < inf B so that rq(p0) ≤ x, p0 > F (x)⇒ p0 ≤ F (rq(p0)) ≤ F (x), p0 > F (x), which is a contradiction. 5.10 Two-sided continuity of lq/rq Lemma 5.10.1 Suppose F is a distribution function for the random vari- able X and lq, rq are its corresponding left and right quantile functions. Then a) F is continuous ⇔ lq is strictly increasing on (0, 1). b) F is strictly increasing on RD(F ) = {x|0 < F (x) < 1} = (rq(0), lq(1)) or [rq(0), lq(1)) ⇔ lq is continuous on (0, 1). Proof a) (⇒): F is continuous iff P (X = x) = 0, ∀x ∈ R. If the R.H.S does not hold then x = lq(p1) = lq(p2), p1 < p2. Then for every y < x, we have F (y) < p1. Hence P (X < x) = lim y→x− P (X ≤ y) ≤ p1 < p2. But F (x) ≥ p2 since lq(p2) = x and we conclude P (X = x) ≥ p2 − p1, a contradiction. (⇐): If F is not continuous then P (X = x) = ǫ > 0 for some x ∈ R. Let p = F (x) then P (X < x) = p− ǫ. Pick p1 < p2 in the interval (p− ǫ, p) then lq(p1) = lq(p2) = x. b) (⇒): lq is left continuous. Hence if it is not continuous then lim p→p+0 lq(p) = rq(p0) 6= lq(p0). Hence F is flat on (lq(p0), rq(p0)) 6= ∅, which is a contradiction to F being increasing. 152 5.11. Characterization of left/right quantile functions (⇐): Suppose F is not continuous on RD(F ), then there exist a, b ∈ R such that F is flat on [a, b]: F (a) = F (b) = p ∈ (0, 1). But then lq(p) ≤ a and rq(p) ≥ b. Hence lq(p) 6= rq(p), which means lq is not continuous. Remark. We can replace lq is the above lemma by rq. A similar argument can be done for the proof. 5.11 Characterization of left/right quantile functions The characterization of the distribution function is a well–known result in probability. Here we characterize the left and right quantile functions of a distribution. We start by some simple lemmas which we need in the proof. Lemma 5.11.1 Suppose An ⊂ R, n ∈ N . Then inf ∪n∈NAn = inf n∈N (inf An) Proof a) inf ∪n∈NAn ≥ infn∈N(inf An): a ∈ ∪n∈NAn ⇒ ∃m ∈ N, a ∈ Am ⇒ ∃m ∈ N, a ≥ inf Am ⇒ a ≥ inf n∈N (inf An). Hence, inf ∪n∈NAn ≥ infn∈N(inf An). b) inf ∪n∈NAn ≤ infn∈N(inf An): inf ∪n∈NAn ≤ inf Am, ∀m ∈ N⇒ inf ∪n∈NAn ≤ inf n∈N (inf An). Lemma 5.11.2 Suppose h : (0, 1)→ R is a non–decreasing function. Then G(x) = inf{p ∈ (0, 1)|h(p) > x} is a distribution function. Proof a) We claim G is non–decreasing. 
Suppose x1 < x2 then let A = {p|h(p) > x1} and B = {p|h(p) > x1}. Then G(x1) = inf A and G(x2) = inf B. But clearly B ⊂ A hence G(x1) ≤ G(x2). 153 5.11. Characterization of left/right quantile functions b) limx→∞G(x) = 1: First note that such a limit exist and is bounded by 1. (Because the domain of h is (0,1)). Assume limx→∞G(x) = q < 1, take q < q′ < 1 then take x0 > h(q′). Let A = inf{p|h(p) > x0} such that G(x0) = inf A. Then (p ∈ A⇒ h(p) > x0 > h(q′)⇒ p > q′)⇒ G(x0) = inf A ≥ q′ > q. We have shown there is an x0 such that G(x0) > q this is a contradiction to limx→∞G(x) = q since G is non-decreasing. c) Suppose that limx→−∞G(x) = q > 0 then take 0 < q′ < q and x0 < h(q′). Let A = inf{p|h(p) > x0} such that G(x0) = inf A. We have h(q′) > x0 ⇒ q′ ∈ A⇒ inf A ≤ q′ ⇒ G(x0) ≤ q′ < q This contradicts limx→−∞G(x) = q > 0 since G is non–decreasing. d) G is right continuous: limx→x+0 G(x) = x0. Suppose xn ↓ x0. In the previous lemma, let An = {p|h(p) > xn} and A = ∪n∈NAn = {p|h(p) > x0}. Then G(x0) = inf A = inf ∪n∈NAn = inf n∈N (inf An) = inf n∈N G(xn) = lim x→x+0 G(x). Theorem 5.11.3 (Quantile function characterization theorem) Suppose a function h : (0, 1) → R is given. Then (a) h is a left quantile function for some random variable X iff h is left continuous and non–decreasing. (b) h is a right quantile function for some random variable X iff h is right continuous and non–decreasing. Proof If h is a left quantile function, then h is left continuous and non– decreasing as we showed in previous sections. Also if h is right continuous function then h is non–decreasing and right continuous. For the inverse of both a) and b) define G as in the above lemma. We will prove that h is lqG in a) and rqG in b). (a) Let A = {x|G(x) ≥ p0}, we want to show h(p0) = inf A. 154 5.11. Characterization of left/right quantile functions (i) inf A ≤ h(p0): Otherwise if inf A > y > h(p0), then: inf A > y ⇒ inf{x|G(x) ≥ p0} > y ⇒ G(y) < p0 ⇒ inf{p ∈ (0, 1)|h(p) > y} < p0 ⇒ ∃p ∈ (0, 1), h(p) > y, p < p0 ⇒ ∃p ∈ (0, 1)h(p0) ≥ h(p) > y, which is a contradiction. (ii) inf A ≥ h(p0) : x ∈ A⇒ G(x) ≥ p0 ⇒ inf{p ∈ (0, 1)|h(p) > x} ≥ p0. Hence, ∀p < p0, h(p) ≤ x⇒ lim p→p−0 h(p) ≤ x⇒ h(p0) ≤ x, by left continuity of h. Hence ∀x ∈ A, h(p0) ≤ x⇒ h(p0) ≤ inf A. (b) Let A = {x|G(x) > p0}, we want to show h(p0) = inf A. (i) inf A ≤ h(p0): Otherwise if inf A > y > h(p0), then inf A > y ⇒ y /∈ A⇒ G(y) ≤ p0 ⇒ inf{p′ ∈ (0, 1)|h(p′) > y} ≤ p0 ⇒ ∀p > p0, inf{p′|h(p′) > y} < p⇒ ∀p > p0, ∃p′ ∈ (0, 1), h(p′) > y, p′ < p⇒ ∀p > p0, ∃p′ ∈ (0, 1), h(p) ≥ h(p′) > y ⇒ h(p0) ≥ y which is a contradiction. (ii) inf A ≥ h(p0) : x ∈ A⇒ G(x) > p0 ⇒ inf{p ∈ (0, 1)|h(p) > x} > p0 ⇒ p0 /∈ {p ∈ (0, 1)|h(p) > x} ⇒ h(p0) ≤ x. Hence h(p0) ≤ inf A. Now we characterize the quantile functions of data vectors. See Figure 5.6 for an example of quantile functions for the vector x = (−2,−2, 2, 2, 2, 2, 4, 4, 4, 4). 155 5.11. Characterization of left/right quantile functions 0.0 0.2 0.4 0.6 0.8 1.0 − 4 − 2 0 2 4 lq (p) 0.0 0.2 0.4 0.6 0.8 1.0 − 4 − 2 0 2 4 p rq (p) Figure 5.6: For the vector x = (−2,−2, 2, 2, 4, 4, 4, 4) the left (top) and right (bottom) quantile functions are given. 156 5.12. Quantile symmetries Theorem 5.11.4 (Data vector quantile function characterization theorem) a) h : (0, 1)→ R is a left quantile function for a data vector x iff h is a left continuous step function with no steps (jumps) or a finite number of steps (jumps) at some points 0 < a1 < a2 < · · · < ak < 1 where ai = 1nni, for some n, ni ∈ N. 
b) h : (0, 1) → R is a right quantile function for a data vector x iff h is a right continuous step function with no steps (jumps) or finite number of steps (jumps) at some points 0 < a1 < a2 < · · · < ak < 1 where ai = 1nni for some n, ni ∈ N. Proof We only prove a) and b) is obtained either by repeating a similar argu- ment or using the Quantiles Symmetry Theorem (Theorem 5.12.3), which we prove in next sections. a) (⇒) For x = (x1, · · · , xn), it is clear that lqx is a step function with jumps at points proportional to 1/n and we proved the left continuity before. a) (⇐) The result is easy to show if h has no jumps. Let h′ = limx→+∞ h(x) and suppose h is given with jumps at a1 < a2 < · · · < ak, a1 = n1(1/n), · · · , ak = nk(1/n). Let b1 = a1, b2 = a2 − a1, · · · , bk = ak − ak−1, bk+1 = 1− ak. Then bi = 1 nmi, i = 1, 2, · · · , k+1 withm1 = n1,m2 = n2−n1, · · · ,mk = nk−nk−1 and finally mk+1 = n − ∑k i=1mi. Then let x be a data vector with h(ai) repeated mi times. We claim that h = lqx. First note that x is of length n. For 0 < p ≤ a1, we have lqx(p) = h(a1) = h(p). For ai−1 < p ≤ ai, i ≤ k, we have ni−1n = ∑i−1 j=1mj n < p ≤ ∑i j=1mj n = ni n . Hence lqx(p) = h(ai) = h(p), ai−1 < p ≤ ai, i ≤ k. For ak < p < 1, we have nk n = ∑k j=1 mj n < p < 1, lqx(p) = h ′ = h(p), ak < p < 1. 5.12 Quantile symmetries This section studies the symmetry properties of distribution functions and quantile functions. Symmetry is in the sense that if X is a random vari- able with left/right quantile function, some sort of symmetry between the 157 5.12. Quantile symmetries quantile functions of X and −X should exist. We only treat the quantile functions for distributions here but the results can readily be applied to data vectors by considering their empirical distribution functions. Here consider different forms of distribution functions. The usual one is defined to be F cX(x) = P (X ≤ x). But clearly one could have also considered F oX(x) = P (X < x), G c X(x) = P (X ≥ x) or GoX(x) = P (X > x) to characterize the distribution of a random variable. We call F c the left– closed distribution function, F o the left–open distribution function, Gc the right–closed and Go the right–open distribution function. Like the usual distribution function these functions can be characterized by their limits in infinity, monotonicity and right continuity. First note that F c−X(x) = P (−X ≤ x) = P (X ≥ −x) = GcX(−x). Since the left hand side is right continuous, GcX is left continuous. Also note that F cX(x) +G o X(x) = 1⇒ GoX(x) = 1− F cX(x), F oX(x) +G c X(x) = 1⇒ F oX(x) = 1−GcX(x). The above equations imply the following: a) Go and F c are right continuous. b) F o and Gc are left continuous. c) Go and Gc are non–decreasing. d) limx→∞F (x) = 1 and limx→−∞F (x) = 0 for F = F o, F c. e) limx→∞G(x) = 0 and limx→−∞G(x) = 1 for G = Go, Gc. It is easy to see that the above given properties for F o, Go, Gc character- ize all such functions. The proof can be given directly using the properties of the probability measure (such as continuity) or by using arguments similar to the above. Another lemma about the relation of F c, F o, Go, Gc is given below. Lemma 5.12.1 Suppose F o, F c, Go, Gc are defined as above. Then a) if any of F c, F o, Go, Gc are continuous, all of other are continuous too. b) F c being strictly increasing is equivalent to F o being strictly increasing. c) if F c is strictly increasing, Go is strictly decreasing. d) Gc being strictly decreasing is equivalent to Go being strictly increasing. 158 5.12. 
Quantile symmetries Proof a) Note that limy→x− F c(x) = limy→x− F o(x) and limy→x+ F c(x) = limy→x+ F o(x). If these two limits are equal for either F c or F o they are equal for the others as well. b) If either F c or F o are not strictly increasing then they are constant on [x1, x2], x1 < x2. Take x1 < y1 < y2 < x2. Then F o(x1) = F o(x2)⇒ P (y1 ≤ X ≤ y2) = 0⇒ F c(y1) = F c(y2). Also we have F c(x1) = F c(x2)⇒ P (y1 ≤ X ≤ y2) = 0⇒ F o(y1) = F o(y2). c) This is trivial since Go = 1− F c. d) If Gc is strictly decreasing then F o is strictly increasing since Gc = 1−F o. By part b), F c strictly is increasing. Hence Go = 1− F c is strictly decreas- ing. The relationship between these distribution functions and the quantile functions are interesting and have interesting implications. It turns out that we can replace F c by F o in some definitions. Lemma 5.12.2 Suppose X is a random variable with open and closed left distributions F o, F c as well as open and closed right distributions Go, Gc. Then a) lqX(p) = inf{x|F oX (x) ≥ p}. In other words, we can replace F c by F o in the left quantile definition. b) rqX(p) = inf{x|F oX (x) > p}. In other words, we can replace F c by F o in the right quantile definition. Proof a) Let A = {x|F oX (x) ≥ p} and B = {x|F cX (x) ≥ p}. We want to show that inf A = inf B. Now A ⊂ B ⇒ inf A ≥ inf B. But inf B < inf A⇒ ∃x0, y0, inf B < x0 < y0 < inf A. Then 159 5.12. Quantile symmetries inf B < x0 ⇒ ∃b ∈ B, b < x0 ⇒ ∃b ∈ R, p ≤ P (X ≤ b) ≤ P (X ≤ x0) ⇒ P (X ≤ x0) ≥ p⇒ P (X < y0) ≥ p. On the other hand y0 < inf A⇒ y0 /∈ A⇒ P (X < y0) < p, which is a contradiction, thus proving a). b) Let A = {x|F oX(x) > p} and B = {x|F cX(x) > p}. We want to show inf A = inf B. Again, A ⊂ B ⇒ inf A ≥ inf B. But inf B < inf A⇒ ∃x0, y0, inf B < x0 < y0 < inf A. Then inf B < x0 ⇒ ∃b ∈ B, b < x0 ⇒ ∃b ∈ R, p < P (X ≤ b) ≤ P (X ≤ x0) ⇒ P (X ≤ x0) > p⇒ P (X < y0) > p. On the other hand, y0 < inf A⇒ y0 /∈ A⇒ P (X < y0) ≤ p, which is a contradiction. Using the above results, we establish the main theorem of this section which states the symmetry property of the left and right quantiles. Theorem 5.12.3 (Quantile Symmetry Theorem) Suppose X is a random variable and p ∈ [0, 1]. Then lqX(p) = −rq−X(1− p). Remark. We immediately conclude rqX(p) = −lq−X(1− p), by replacing X by −X and p by 1− p. 160 5.12. Quantile symmetries Proof R.H.S = − sup{x|P (−X ≤ x) ≤ 1− p} = inf{−x|P (X ≥ −x) ≤ 1− p} = inf{x|P (X ≥ x) ≤ 1− p} = inf{x|1 − P (X ≥ x) ≥ p} = inf{x|1−Gc(x) ≥ p} = inf{x|F o(x) ≥ p} = lqX(p). Now we show how these symmetries can become useful to derive other relationships/definitions for quantiles. Lemma 5.12.4 Suppose X is a random variable with distribution function F . Then lqX(p) = sup{x|F c(x) < p}. Proof lqX(p) = −rq−X(1− p) = − inf{x|F o−X (x) > 1− p} = − inf{x|1 −Gc−X(x) > 1− p} = sup{−x|Gc−X(x) < p} = sup{−x|P (−X ≥ x) < p} = sup{x|P (X ≤ x) < p} = sup{x|F c(x) < p}. In the previous sections, we showed that both lqX and rqX are equivari- ant under non-decreasing continuous transformations: lqφ(X)(p) = φ(lqX(p)), where φ is non-decreasing left continuous. Also rqφ(X)(p) = φ(rqX(p)), for φ : R→ R non-decreasing right continuous. However, we did not provide any results for decreasing transformations. Now we are ready to offer a result for this case. 161 5.12. Quantile symmetries Theorem 5.12.5 (Decreasing transformation equivariance) a) Suppose φ is non-increasing and right continuous on R. Then lqφ(X)(p) = φ(rqX(1− p)). b) Suppose φ is non-increasing and left continuous on R. 
Then rqφ(X)(p) = φ(lqX(1− p)). Proof a) By the Quantile Symmetry Theorem, we have lqφ(X)(p) = −rq−φ(X)(1− p). But −φ is non-decreasing right continuous, hence the above is equivalent to −(−φ(rqX(1− p))) = φ(rqX(1− p)). b) By the Quantile symmetry Theorem rqφ(X)(p) = −lq−φ(X)(1−p) = −− φ(lqX(1− p)) = φ(lqX(p)), since −φ is non-decreasing and left continuous. Lemma 5.12.6 Suppose X is a random variable and F c, F o, Gc, Go are the corresponding distribution functions. Then we have the following inequali- ties: a) F c(lq(p)) ≥ p. (Hence F c(rq(p)) ≥ p.) b) F o(rq(p)) ≤ p. (Hence F o(lq(p)) ≤ p.) c) Go(lq(p)) ≤ 1− p. (Hence Go(rq(p)) ≤ 1− p.) d) Gc(rq(p)) ≥ 1− p. (Hence Gc(lq(p)) ≥ 1− p.) Proof We already showed a). b) Suppose there F o(rq(p)) = p+ ǫ for some positive ǫ. Then since F o is left continuous lim x→rq(p)+ F o(x) = p+ ǫ. Hence there exist x0 < rq(p) such that F (x0) ≥ F o(x0) > p+ ǫ/2. This is a contradiction to rq(p) being the inf of the set {x|F (x) > p}. c) and d) are straightforward consequence of a) and b) since F c + Go = 1 and F o +Gc = 1. 162 5.13. Quantiles from the right The quantile functions as the inverse of an open distribution function Lemma 5.12.7 Suppose X is a random variable with distribution function F and open distribution function F o. a) {x|F o(x) < p} = (−∞, lqF (p)) or (−∞, lqF (p)]. b) {x|F o(x) ≤ p} = (−∞, rqF (p)]. c) If F o is continuous then {x|F o(x) < p} = (−∞, lqF (p)]. d) {x|F o(x) > p} = (rqF (p),∞). e) {x|F o(x) ≥ p} = (lqF (p),∞) or [lqF (p),∞) Proof The proof is very similar to Lemma 5.5.1 and we skip the details. 5.13 Quantiles from the right So far, we have defined left/right quantiles using the classic distribution function F c. We also showed that in quantile definitions F c can be replaced by F o. F cX(x) = P (−∞ < X ≤ x) measures the probability from minus infinity. When we define left/right quantiles, we seek to find points where this probability from minus infinity reaches (passes) a certain value. One could also consider GcX(x) = P (x ≤ X < ∞) and define another version of quantile functions which seek points where the probability from plus infinity reaches or passes a point. This is a motivation to define the “left/right quantile functions from the right”. By indicating from the right we clarify that the probability is compute from the right hand side i.e. plus infinity. The previously defined left and right quantile functions should be called “left/right quantile functions from the left”. Definition Suppose X is a random variable with closed right distribution function GcX(x) = P (X ≥ x). Then we define the “left quantile function from the right” as follows lqfrX(p) = sup{x|GcX(x) > p}. Definition Suppose X is a random variable with closed right distribution function GcX(x) = P (X ≥ x). Then we define the right quantile function from the right as follows 163 5.13. Quantiles from the right rqfrX(p) = sup{x|GcX (x) ≥ p}. Using the symmetries in the definition of these quantities, we will show that we have already characterized left/right from the right quantile func- tions. We need the following lemma. Lemma 5.13.1 Suppose X is a random variable with quantile functions lqX , rqX . Then a) rqX(p) = sup{x|F o(x) ≤ p}. b) lqX(p) = sup{x|F o(x) < p} Proof a) Let A = {x|F c(x) ≤ p} and B = {x|F o(x) ≤ p}. First note that A ⊂ B ⇒ supA ≤ supB. To show that the sups are indeed equal, note supA < supB ⇒ ∃x0, y0, supA < x0 < y0 < supB. Then supA < x0 ⇒ F c(x0) > p, and y0 < supB ⇒ ∃b ∈ B, y0 < b⇒ ∃b, F o(b) ≤ p, y0 < b⇒ F o(y0) ≤ p. 
But F c(x0) > p,F o(y0) ≤ p, which is a contradiction. b) Let A = {x|F c(x) < p} and B = {x|F o(x) < p}. First note that A ⊂ B ⇒ supA ≤ supB. To show that the sups are indeed equal, note supA < supB ⇒ ∃x0, y0, supA < x0 < y0 < supB. Then supA < x0 ⇒ F c(x0) ≥ p, and y0 < supB ⇒ ∃b ∈ B, y0 < b⇒ ∃b, F o(b) < p, y0 < b⇒ F o(y0) < p. 164 5.14. Limit theory But F c(x0) ≥ p, F o(y0) < p, which is a contradiction. Lemma 5.13.2 (Quantile functions from the right) a) lqfrX(p) = rqX(1− p). b) rqrfX(p) = lqX(1− p). Proof a) lqrfX(p) = sup{x|GcX (x) > p} = sup{x|F oX (x) ≤ p} = rqX(1− p). b) rqrfX(p) = sup{x|GcX(x) ≥ p} = sup{x|F oX (x) < 1− p} = lqX(1− p). 5.14 Limit theory To prove limit results, we need some limit theorems from probability theory that we include here for completeness and without proof. Their proofs can be found in standard probability textbooks and appropriate references are given below. If we are dealing with two samples, X1, · · · ,Xn and Y1, · · · , Yn, to avoid confusion we use the notation Fn,X and Fn,Y to denote their empirical distribution functions respectively. Definition Suppose X1,X2, · · · , is a discrete–time stochastic process. Let F(X) be the σ-algebra generated by the process and F(Xn,Xn+1, · · · ) the σ-algebra generated by Xn,Xn+1, · · · . Any E ∈ F(X) is called a tail event if E ∈ F(Xn,Xn+1, · · · ) for any n ∈ N. Definition Let {An}n∈N be any collection of sets. Then {An i.o.}, read as An happens infinitely often is defined by: {An i.o.} = ∩i∈N ∪∞j=i Aj . 165 5.14. Limit theory Theorem 5.14.1 (Kolmogorov 0–1 law): E being a tail event implies that P (E) is either 0 or 1. Proof See [9]. Theorem 5.14.2 (Glivenko–Cantelli Theorem): Suppose, X1,X2, · · · , i.i.d, has the sample distribution function Fn. Then lim n→∞ supx∈R |Fn(x)− F (x)| → 0, a.s.. Proof See [7]. Here, we extend the Glivenko–Cantelli Theorem to F o, Go and Gc. Lemma 5.14.3 Suppose X is a random variable and consider the associated distribution functions F oX , G o X and G c X with corresponding sample distribu- tion functions F oX,n, G o X,n and G c X,n. Then sup x∈R |GoX,n −GoX | → 0, a.s., sup x∈R |F oX,n − F oX | → 0, a.s., and sup x∈R |GcX,n −GcX | → 0, a.s.. Proof Note that F cX +G o X = 1⇒ GoX = 1− F cX , and F cX,n +G o X,n = 1⇒ GoX,n = 1− F cX,n. Since Glivenko–Cantelli Theorem holds for F cX it also holds for G o X . To show the result for F oX , note that F o X(x) = G o −X(−x) and F oX,n(x) = Go−X,n(−x). Also to show the result for GcX note that GcX = 1 − F oX and GcX,n = 1− F oX,n. 166 5.14. Limit theory Theorem 5.14.4 (Borel–Cantelli lemma): Suppose (Ω,F , P ) is a probability space. Then 1. An ∈ F and ∑∞ 1 P (An) <∞⇒ P (An i.o) = 0. 2. An ∈ F independent events with ∑∞ 1 P (An) = ∞ ⇒ P (An i.o) = 1, where i.o. stands for infinitely often. Proof See [9]. Theorem 5.14.5 (Berry–Esseen bound): Let X1,X2, · · · , be i.i.d with E(Xi) = 0 <∞, E(X2i ) = σ and E(|Xi|3) = ρ. If Gn is the distribution of X1 + · · · +Xn/σ √ n and Φ(x) is the distribution function of a standard normal random variables then |Gn(x)− Φ(x)| ≤ 3ρ/σ3 √ n. Corollary 5.14.6 Let X1,X2, · · · , be i.i.d with E(Xi) = µ < ∞, E(|Xi − µ|2) = σ and E(|Xi−µ|3) = ρ. If Gn is the distribution of (X1+ · · ·+Xn− nµ)/σ √ n = √ n( X̄n−µσ ) and Φ(x) is the distribution function of a standard normal random variable then |Gn(x)− Φ(x)| ≤ 3ρ/σ3 √ n. Proof This corollary is obtained by applying the theorem to Yi = Xi − µ. Now let An = (X1 + · · · +Xn − nµ)/σ √ n. Then |P (An > x)−(1−Φ(x))| = |P (An ≤ x)−Φ(x)| = |Gn(x)−Φ(x)| < 3ρ/σ3 √ n. 
Also |P (x < An ≤ y)−(Φ(y)−Φ(x)))| ≤ |Gn(y)−Φ(y)|+|Gn(x)−Φ(x)| ≤ 6ρ/σ3 √ n. These inequalities show that for any ǫ > 0 there exist N such that n > N, Φ(z2)−Φ(z1)− ǫ < P (z1 < √ n( X̄n − µ σ ) ≤ z2) < Φ(z2)− Φ(z1) + ǫ, for z1 < z2 ∈ R ∪ {−∞,∞}. It is interesting to ask under what conditions lqFn and rqFn tend to lqF and rqF as n→∞. Theorem 5.14.7 gives a complete answer to this question. 167 5.14. Limit theory Theorem 5.14.7 (Quantile Convergence/Divergence Theorem) a) Suppose rqF (p) = lqF (p) then rqFn(p)→ rqF (p), a.s., and lqFn(p)→ lqF (p), a.s.. b) When lqF (p) < rqF (p) then both rqFn(p), lqFn(p) diverge almost surely. c) Suppose lqF (p) < rqF (p). Then for every ǫ > 0 there exists N such that n > N, lqFn(p), rqFn(p) ∈ (lqF (p)− ǫ, lqF (p)] ∪ [rqF (p), rqF (p) + ǫ). d) lim sup n→∞ lqFn(p) = lim sup n→∞ rqFn(p) = rqF (p), a.s., and lim inf n→∞ lqFn(p) = lim infn→∞ rqFn(p) = rqF (p), a.s.. Proof a) Since, lqF (p) = rqF (p), we use qF (p) to denote both. Suppose ǫ > 0 is given. Then F (qF (p)− ǫ) < p⇒ F (qF (p)− ǫ) = p− δ1, δ1 > 0, and F (qF (p) + ǫ) > p⇒ F (qF (p) + ǫ) = p+ δ2, δ2 > 0. By the Glivenko–Cantelli Theorem, Fn(u)→ F (u) a.s., uniformly over R. We conclude that Fn(qF (p)− ǫ)→ F (qF (p)− ǫ) = p− δ1, a.s., 168 5.14. Limit theory and Fn(qF (p) + ǫ)→ F (qF (p) + ǫ) = p+ δ2, a.s.. Let ǫ′ = min(δ1,δ2)2 . Pick N such that for n > N : p− δ1 − ǫ′ < Fn(qF (p)− ǫ) < p− δ1 + ǫ′, p+ δ2 − ǫ′ < Fn(qF (p) + ǫ) < p+ δ2 + ǫ′. Then Fn(qF (p)− ǫ) < p− δ1 + ǫ′ < p ⇒ lqFn(p) ≥ qF (p)− ǫ and rqFn(p) ≥ qF (p)− ǫ. Also p < p+ δ2 − ǫ′ < Fn(qF (p) + ǫ) ⇒ lqFn(p) ≤ qF (p) + ǫ and rqFn(p) ≤ qF (p) + ǫ. Re-arranging these inequalities we get: qF (p)− ǫ ≤ lqFn(p) ≤ qF (p) + ǫ, and qF (p)− ǫ ≤ rqFn(p) ≤ qF (p) + ǫ. b) This needs more development in the sequel and the proof follows. c) This also needs more development in the sequel and the proof follows. d) If lqF (p) = rqF (p) the result follows immediately from (a). Other- wise suppose lqF (p) < rqF (p). Then by (b) lqFn(p) diverges almost surely. Hence lim sup lqFn(p) 6= lim inf lqFn(p), a.s. . But by (c), ∀ǫ > 0, ∃N, n > N lqFn(p) ∈ (lqF (p)− ǫ, lqF (p)] ∪ [rqF (p), rqF (p) + ǫ). This means that every convergent subsequence of lqFn(p) has either limit lqF (p) or rqF (p), a.s.. Since lim sup lqFn(p) 6= lim inf lqFn(p), a.s., we conclude lim sup lqFn(p) = rqF (p) and lim inf lqFn(p) = lqF (p), a.s.. A similar argument works for rqFn(p). 169 5.14. Limit theory To investigate the case lqF (p) 6= rqF (p) more, we start with the simplest example namely a fair coin. Suppose X1,X2, · · · an i.i.d sequence with P (Xi = −1) = P (Xi = 1) = 12 and let Zn = ∑n i=1Xi. Note that Zn ≤ 0⇔ lqFn(1/2) = −1, Zn > 0⇔ lqFn(1/2) = 1, and Zn < 0⇔ rqFn(1/2) = −1, Zn ≥ 0⇔ rqFn(1/2) = 1. Hence in order to show that lqFn(1/2) and lqFn(1/2) diverge almost surely, we only need to show that P ((Zn < 0 i.o.) ∩ (Zn > 0 i.o.)) = 1. We start with a theorem from [9]. Theorem 5.14.8 Suppose Xi is as above. Then P (Zn = 0 i.o.) = 1. Proof The proof of this theorem in [9] uses the Borel–Cantelli Lemma part 2. Theorem 5.14.9 Suppose, X1,X2, · · · i.i.d. and P (Xi = −1) = P (Xi = 1) = 1/2. Then lqFn(1/2) and rqFn(1/2) diverge almost surely. Proof Suppose, A = {Zn = −1 i.o.} and B = {Zn = 1 i.o.}. It suffices to show that P (A ∩B) = 1. But ω ∈ A ∩ B ⇒ lqFn(p)(ω) = −1, i.o. and lqFn(p)(ω) = 1, i.o. Hence lqFn(p)(ω) diverges. Note that P (A) = P (B) by the symmetry of the distribution. Also it is obvious that both A and B are tail events and so have probability either zero or one. 
To prove P (A ∩B) = 1, it only suffices to show that P (A ∪B) > 0. Because then at least one of A and B has a positive probability, say A. P (A) > 0⇒ P (A) = 1⇒ P (B) = P (A) = 1⇒ P (A ∩B) = 1. Now let C = {Zn = 0, i.o.}. Then P (C) = 1 by Theorem 5.14.8. If Zn(ω) = 0 then either Zn+1(ω) = 1 or Zn+1(ω) = −1. Hence if Zn(ω) = 0, i.o. then at least for one of a = 1 or a = −1, Zn(ω) = a, i.o.. We conclude that 170 5.14. Limit theory ω ∈ A ∪B. This shows C ⊂ A ∪B ⇒ P (A ∪B) = 1. To generalize this theorem, suppose X1,X2, · · · , arbitrary i.i.d process and lqF (p) < rqF (p). Define the process Yi = { 1 Xi ≥ rqF (p) 0 Xi ≤ lqF (p). (Note that P (lqX(p) < X < rqX(p)) = 0.) Then the sequence Y1, Y2, · · · is i.i.d., P (Yi = 0) = p and P (Yi = 1) = 1− p. Also note that lqFn,Y (p) diverges a.s. ⇒ lqFn,X (p) diverges a.s. Hence to prove the theorem in general it suffices to prove the theorem for the Yi process. However, we first prove a lemma that we need in the proof. Lemma 5.14.10 Let Y1, Y2, · · · i.i.d with P (Yi = 0) = p = 1 − q > 0 and P (Yi = 1) = 1 − p = q > 0. Let Sn = ∑n i=1 Yi, 0 < α, k ∈ N. Then there exists a transformation φ(k) (to N) such that P (Sφ(k) − φ(k)q < −k) > 1/2 − α, P (Sφ(k) − φ(k)q > k) > 1/2− α. Remark. For α = 1/4, we get P (Sφ(k) − φ(k)q < −k) > 1/4, P (Sφ(k) − φ(k)q > k) > 1/4. Proof Since the first three moments of Yi are finite (E(Yi) = q,E(|Yi − q|2) = q(1− q) = σ,E(|Yi − q|3) = q3(1− q) + (1− q)3q = ρ), we can apply the Berry-Esseen theorem to √ n Ȳn−µσ . By a corollary of that theorem, for α 2 > 0 there exists an N1 such that 1− Φ(z)− α 2 < P ( √ n Ȳn − µ σ > z) < 1− Φ(z) + α 2 , and Φ(z)− α 2 < P ( √ n Ȳn − µ σ < −z) < Φ(z) + α 2 , for all z ∈ R and n > N1. Now for the given integer k pick N2 such that 171 5.14. Limit theory 1 2 − α 2 < Φ( k σ √ N2 ) < 1 2 + α 2 . This is possible because Φ is continuous and Φ(0) = 1/2. Now let φ(k) = max{N1, N2}, z = k σ √ φ(k) . Then since φ(k) ≥ N1 P ( √ φ(k) Ȳφ(k) − µ σ > z) > 1− Φ(z)− α 2 > 1/2 − α, and P ( √ φ(k) Ȳφ(k) − µ σ < −z) > Φ(z)− α 2 > 1/2 − α. These two inequalities are equivalent to P ((Sφ(k) − φ(k)q) < −k) > 1/2− α, and P ((Sφ(k) − φ(k)q) > k) > 1/2 − α. If we put α = 1/4, we get P ((Sφ(k) − φ(k)q) < −k) > 1/4, and P ((Sφ(k) − φ(k))q > k) > 1/4. We are now ready to prove Part b) of Theorem 5.14.7. Proof [Theorem 5.14.7, Part b)] For the process {Yi} as defined above, let n1 = 1,mk = nk + φ(nk) and nk+1 = mk + φ(mk). Then define Dk = (Ynk+1 + · · ·+ Ymk − (mk − nk)q < −nk), Ek = (Ymk+1 + · · ·+ Ynk+1 − (nk+1 −mk)q > mk), CK = Dk ∩ Ek. Since {Ck} involve non–overlapping subsequences of Ys, they are indepen- dent events. Also Dk and Ek are independent. Now note that 172 5.14. Limit theory Ynk+1 + · · · + Ymk − (mk − nk)q < −nk ⇒ Y1 + · · ·+ Ymk < −nk + (mk − nk)q + nk ⇒ Ȳmk < mk − nk mk q < q ⇒ lqFn,Y (p) = rqFn,Y = 0⇒ {Ck, i.o.} ⊂ {lqFn,Y (p) = rqFn,Y = 0, i.o.}. Similarly, Ymk+1 + · · ·+ Ynk+1 − (nk+1 −mk)q > mk ⇒ Y1 + · · ·+ Ynk+1 > (nk+1 −mk)q +mk ⇒ Ȳnk+1 > mk + (nk+1 −mk)q nk+1 > q = 1− p ⇒ lqFn,Y (p) = rqFn,Y (p) = 1 ⇒ {Ck, i.o.} ⊂ {lqFn,Y (p) = rqFn,Y (p) = 1, i.o.}. Let us compute the probability of Ck: P (Ck) = P (Ynk+1 + · · · + Ymk − (mk − nk)q < −nk)× P (Ymk+1 + · · ·+ Ynk+1 − (nk+1 −mk)q > mk) = P (Y1 + · · ·+ Yφ(nk) − φ(nk)q < −nk)× P (Y1 + · · · + Yφ(mk) − φ(mk)q > mk) > 1/4.1/4 = 1/16. We conclude that ∞∑ k=1 P (Ck) =∞. By the Borel–Cantelli Lemma, P (Ck, i.o.) = 1. We conclude that P (lqFn,Y (p) = rqFn,Y (p) = 0, i.o.) = 1, and P (lqFn,Y (p) = rqFn,Y (p) = 1, i.o.) = 1. 173 5.14. 
Limit theory Hence, P ({lqFn,Y (p) = rqFn,Y (p) = 0, i.o.}∩{lqFn,Y (p) = rqFn,Y (p) = 1, i.o.}) = 1. Proof (Theorem 5.14.7, part (c)) Suppose that rqF (p) = x1 6= lqF (p) = x2 and a is an arbitrary real number. Let h = x2 − x1. We define a new chain Y as follows: Yi = { Xi Xi ≤ lqFX (p) Xi − h Xi ≥ rqFX (p). (See Figure 5.7.) Then Y1, Y2, · · · is an i.i.d sample. We drop the index i from Yi and Xi in the following for simplicity and since the Yi (as well as the Xi) are identically distributed. We claim lqFY Y (p) = rqFY (p) = lqFX (p). To prove lqFY (p) = lqFX (p), note that FY (lqFX (p)) = P (Y ≤ lqFX (p)) ≥ P (X ≤ lqFX (p)) ≥ p⇒ lqFY (p) ≤ lqFX (p). (The first inequality is because Y ≤ X.) Moreover for any y < lqFX (p), FY (y) = FX(y) < p. (Since X,Y < lqFX (p) ⇒ X = Y .) Hence lqFY (p) ≥ lqFX (p) and we are done. To show rqFY (p) = lqFX (p), note that rqFY (p) ≥ lqFY (p) = lqFX (p). It only remains to show that rqFY (p) ≤ lqFX (p). Suppose y > lqFX (p) and let δ = y − lqFX (p) > 0. First note that P ({Y ≤ lqFX (p) + δ}) = P ({Y ≤ lqFX (p) + δ and X ≥ rqFX (p)} ∪ {Y ≤ lqFX (p) + δ and X ≤ lqFX (p)}) = P ({X − h ≤ lqFX (p) + δ and X ≥ rqFX (p)} ∪ {X ≤ lqFX (p) + δ and X ≤ lqFX (p)}) = P ({rqFX (p) ≤ X ≤ rqFX (p) + δ} ∪ {X ≤ lqFX (p)}) = P ({X ≤ rqFX(p) + δ}). Hence, FY (y) = P (Y ≤ lqFX (p) + δ) = P (X ≤ rqFX (p) + δ) > p⇒ 174 5.14. Limit theory rqFY (p) ≤ y, ∀y > lqFX (p). We conclude that rqFY (p) ≤ lqFY (p). To complete the proof of part (c) observe that for every ǫ > 0, we may suppose that lqFn,Y (p) ∈ (qFY (p)− ǫ, qFY (p) + ǫ). Then lqFn,X(p), rqFn,X (p) ∈ (lqFX (p)− ǫ, rqFX (p) + ǫ). (5.6) This is because from lqFn,Y (p) ∈ (qFY (p) − ǫ, qFY (p) + ǫ), we may conclude that Fn,Y (qFY (p) + ǫ) > p⇒ Fn,X(rqFX (p) + ǫ) > p⇒ lqFn,X (p), rqFn,X (p) < rqFX (p) + ǫ, and Fn,Y (qFY (p)− ǫ) < p⇒ FnX (lqFX (p)− ǫ) < p⇒ lqFn,X (p), rqFn,X (p) > lqFX (p)− ǫ. But by part (a) of Theorem 5.14.7, lqFn,Y (p) → qFY (p) and rqFn,Y (p) → qFY (p). Hence for given ǫ > 0 there exists an integer N such that for any n > N, lqFn,Y (p) ∈ (qFY (p) − ǫ, qF,Y (p) + ǫ). By (5.6), we have shown that for every ǫ > 0 there exists N such that for every n > N qFn,X (p), rqFn,X (p) ∈ (lqFX (p)− ǫ, rqFX (p) + ǫ), since P (Xi ∈ (lqFX (p), rqFX (p)) for some i ∈ N) = 0. We can conclude that P (lqFn,X (p) ∈ (lqFX (p), rqFX (p)) for some i ∈ N) = 0 and P (rqFn,X (p) ∈ (lqFX (p), rqFX (p)) for some i ∈ N) = 0. Hence with probability 1 qFn,X (p), rqFn,X (p) ∈ (lqFX (p)− ǫ, lqFX (p)] ∪ [rqFX (p), rqFX (p) + ǫ). 175 5.14. Limit theory −2 −1 0 1 2 3 4 5 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 x F Figure 5.7: The solid line is the distribution function of {Xi}. Note that for the distribution of the Xi and p = 0.5, lqFX (p) = 0, rqFX (p) = 3. Let h = rq(p)−lq(p) = 3. The dotted line is the distribution function of the {Yi} which coincides with that of {Xi} to the left of lqFX (p) and is a backward shift of 3 units for values greater than rqFX (p). Note that for the {Yi}, lqFY (p) = rqFY (p) = 1. 176 5.15. Summary and discussion 5.15 Summary and discussion This section highlights the results obtained for a two state-definition for quantiles and discuss why these results show such a consideration is useful. Justifications and consequences of using left and right quantile functions 1. The equivariance property (under non-decreasing continuous transfor- mations) of lqX and rqX makes them equivariant under the change of scale. This is a nice theoretical property. 
Also, from a practical point of view, it means that a quantile computed on one scale can easily be converted to another scale.
2. Considering lqX and rqX allowed us to find a symmetry relation for quantiles: lqX(p) = −rq−X(1 − p).
3. We found a clean formula for continuous non-increasing transformations: lqφ(X)(p) = φ(rqX(1 − p)).
4. We showed that lqFn(p), the traditional sample quantile function, and rqFn(p) tend to the distribution version if and only if lqF(p) = rqF(p), thereby obtaining a necessary and sufficient condition that is easy to formulate in terms of lqF and rqF.
5. If we start with only the traditional quantile function lqF, then rqF(p) still arises in the limit lim sup_{n→∞} lqFn(p) = rqF(p).
6. It is widely claimed that the “median” minimizes the absolute error E|X − a|. In the next chapters, we show that argmin_a E|X − a| = [lqX(1/2), rqX(1/2)]. Both lqX(p) and rqX(p) would therefore arise if we used this property as a way of defining quantiles. A generalization from 1/2 to arbitrary p is left for future research.
7. We offered a physical motivation, using a uniform bar, to define quantiles for data vectors, which resulted in a definition that coincides with lqX, rqX.
8. If we only use the traditional quantile function, for p = 0 we get lqX(0) = −∞ in general. However rqX(0) < ∞ is a useful value in the sense that it is the maximum a satisfying P(X ≥ a) = 1. Also rqX(1) = +∞ in general. However lqX(1) > −∞ in general and is a useful value since it is the minimum a satisfying P(X ≤ a) = 1.
9. Middle values of lqX(p), rqX(p) (for example a specific weighted combination of the two) or the whole interval [lqX(p), rqX(p)] are not preferable as a definition. This is because we showed that the range of lqX and rqX is exactly the set of heavy points, i.e. points for which every neighbourhood of positive radius around them has positive probability.
10. From a practical point of view, by reporting a value that has actually occurred as the quantile, we can expect the same or a nearby value to occur again in the future. More formally, suppose a random sample X1, · · · , Xn is given and we want to compute the sample quantile. Then lqFn(p) and rqFn(p) are each one of the Xi by definition. If XF denotes a future value, meaning that XF is identically distributed as, and independent of, X1, · · · , Xn, then P(XF ∈ (Xi − ε, Xi + ε)) > 0. A middle value might not satisfy such a property.
11. We found a clean way to show in exactly what sense lqX and rqX are close: we showed P(lqX(p) < X < rqX(p)) = 0. For data vectors this means the two values are side by side in the sorted vector.
12. We showed that lqX(p) and rqX(p) coincide except for at most countably many values of p.
13. We showed that even though lqX(p) ≤ rqX(p) in general, they are not too far apart, since for every (arbitrarily small) positive value ε, lqX(p) ≤ rqX(p) ≤ lqX(p + ε).
14. Given one of lqF or rqF, the other can be obtained by taking limits: lqF(p0) = lim_{p↑p0} rqF(p), and rqF(p0) = lim_{p↓p0} lqF(p).
15. In order to invert F, lqF and rqF give us clean expressions for sets such as {x | F(x) > p}, which equals (rqX(p), ∞) if F is continuous at rqX(p).
16. For a continuous distribution function, we have a simple formula for the inverse based on lqF and rqF: F⁻¹(p) = [lqX(p), rqX(p)].
17. The left (right) quantile function at a given probability p can simply be described as the minimal value at which the distribution function reaches (passes) p. In some practices, fixing only one of lq or rq might be sufficient.
This is because lq and rq are close in terms of the probability of the underlying random variable. For example, in data vectors lq and rq will be at most one element apart in terms of their position in the sorted data vector.
In most elementary statistics textbooks and statistical software packages, quantiles are given as a one-state solution, generally a weighted combination of the left and right quantiles. In order to teach the right and left quantile functions, we suggest using a simple example such as x = (1, 2, 3, 4) to show that there is no value in the middle and that the left median (2) and the right median (3) are natural to consider. Then one can point out that this generalizes from p = 1/2 to any p, without getting into details. It can also be pointed out that the left (right) quantile function at a given probability p is simply the minimal value at which the distribution function reaches (passes) p. In more advanced courses, perhaps for mathematics, statistics or science students, the teacher might like to show how the quantiles can be defined using the bar of length 1. Finally, the mathematical formulas can be given to students with the appropriate mathematical background (i.e. familiarity with the definitions of sup and inf and their existence property for the real numbers).
In case an interpolation procedure is to be used, we suggest that the interpolation be between lqX(p) and rqX(p). Surprisingly, this is not what is done in practice. For example, for x = (0, 0, 0, 0, 0, 1, 1, 1, 1, 1), the R package returns 0.32 as the quantile for p = 0.48. But in this vector the 0s cover 50 percent of the data, and since 0.48 is strictly less than 0.50, we expect 0 to be reported as the quantile. The value 0.32 is greater than both lqx(0.48) and rqx(0.48), which both equal 0, and hence is not an interpolation between them.

Chapter 6
Probability loss function

6.1 Introduction
This chapter develops a “loss function” to assess the goodness of an approximation or an estimator of quantiles of a distribution (or a data vector). Suppose a quantile q of a very large data vector is approximated by q̂. Several classic losses can be considered, for example the absolute error L(q, q̂) = |q − q̂| or the squared error L(q, q̂) = (q − q̂)², which was proposed by Gauss. Quoting from [30]: “Gauss proposed the square of the error as a measure of loss or inaccuracy. Should someone object to this specification as arbitrary, he writes, he is in complete agreement. He defends his choice by an appeal to mathematical simplicity and convenience.” An obvious problem with this loss is its lack of invariance under re-scaling of the data. We propose a loss function that is invariant under strictly monotonic transformations. We also show that the sample version of this loss function tends uniformly to the distributional version. This loss function can also be used to find optimal ways to summarize a data vector and to define a measure of distance among random variables, as shown in the next chapters.
We define the loss of estimating/approximating q by q̂ to be the probability that the random variable falls between the two values. A limited version of this concept, only for data vectors, can be found in the computer science literature, where ε-approximations are used to approximate quantiles of large datasets (see for example [32]). However, this concept has not been introduced there as a measure of loss, and the definition is limited to data vectors rather than arbitrary distributions.
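As a small illustration of this loss and of its invariance, the following R sketch compares the absolute error and the probability loss of an approximate median before and after a log rescaling of the data. The helper name prob_loss, the exponential sample and the offset 0.1 are illustrative choices of ours, and quantile(..., type = 1) is used only as a stand-in for a left-quantile-style sample quantile; the absolute error changes under the rescaling while the probability loss does not.

```r
## Illustrative sketch: probability loss vs. absolute error under a monotone rescaling.
set.seed(1)
x <- rexp(10000, rate = 1)                  # a sample from a skewed distribution

## empirical probability loss: fraction of the sample strictly between q and q_hat
prob_loss <- function(x, q, q_hat) {
  lo <- min(q, q_hat); hi <- max(q, q_hat)
  mean(x > lo & x < hi)
}

q_true <- unname(quantile(x, 0.5, type = 1))  # a sample median (left-quantile style)
q_hat  <- q_true + 0.1                        # a slightly off approximation

c(abs_err       = abs(q_hat - q_true),                       # changes under log()
  abs_err_log   = abs(log(q_hat) - log(q_true)),
  prob_loss     = prob_loss(x, q_true, q_hat),               # unchanged under log()
  prob_loss_log = prob_loss(log(x), log(q_true), log(q_hat)))
```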
6.2 Degree of separation between data vectors Our purpose is to find good approximations to the median and other quan- tiles. It is not clear how such approximations should be assessed. We con- tend that such a method should not depend on the scale of the data. In other words it should be invariant under monotonic transformations. We 181 6.2. Degree of separation between data vectors define a function δ that measures a natural “degree of separation” between data points of a data vector x. For the sake of illustration, consider the example sort(x) = (1, 2, 3, 3, 4, 4, 4, 5, 6, 6, 7). Now suppose, we want to de- fine the degree of separation of 3,4 and 7 in this example. Since 4 comes right after 3, we consider their degree of separation to be zero. There are 3 elements between 4 and 7 so it is appealing to measure their degree of separation as 3 but since the degree of separation should be relative, we cab also divide by n = 11, the length of the vector, and get: δ(4, 7) = 3/11. We can generalize this idea to get a definition for all pairs in R. With the same example, suppose we want to compute the degree of separation between 2.5 and 4.5 that are not members of the data vector. Then since there are 5 elements of the data vector between these two values, we define their degree of separation as 5/11. More formally, we give the following definition. Definition Suppose z < z′ let ∆x(z, z′) = {i|z < xi < z′}. Then we define δx(z, z ′) = |∆x(z, z′)| n , and δx(z, z) = 0. We call δx the “degree of separation” (DOS) or the “prob- ability loss function” associated with x. We then have the following lemma about the properties of δ. Lemma 6.2.1 The degree of separation δx has the following properties: a) δx ≥ 0. b) y < y′ < y′′ ⇒ δx(y, y′′) ≥ δx(y, y′). c) If z < z′ and z, z′ are elements of x, δx(z, z′) = mx(z)−Mx(z′)−1 n . [For the definition of m(z) and M(z) see Chapter 5.] d) δφ(x)(φ(z), φ(z ′)) = δx(z, z′) if φ is a strictly monotonic transformation. e) y = sort(x) and y′ = yi < y′′ = yj ⇒ δx(y′, y′′) ≤ (j − i− 1)/n. Proof Both a) and b) are straightforward. We obtain c) as a straightforward consequence of the definition of mx(y ′) and Mx(y′). To show (d), suppose z < z′ and φ is strictly decreasing. (The strictly increasing case is similar.) Then φ(z′) < φ(z) and hence ∆φ(x)(φ(z), φ(z ′)) = {i|φ(z′) < φ(xi) < φ(z)} = {i|z < xi < z′} = ∆x(z, z′). 182 6.3. “Degree of separation” for distributions: the “probability loss function” Finally e) is true because |∆x(y′, y′′)| = |{l|yi < xl < yj}| ≤ j − i− 1. All the definitions and results above can be applied to random vectors X = (X1, · · · ,Xn) as well. In that case, lqX(p) and rqX(p) and δX(z, z′) are random. To develop our theory, we need to study the asymptotic behavior of these statistics. We do so in later sections. 6.3 “Degree of separation” for distributions: the “probability loss function” We define a degree of separation for distributions which corresponds to the notion of “degree of separation” defined for data vectors to measure separa- tion between data points. Definition Suppose X has a distribution function F . Let δF (z ′, z) = δF (z, z′) = lim u→z− F (u) − F (z′) = P (z′ < X < z), z > z′, and δF (z, z) = 0, z ∈ R. We also denote this by δX whenever a random variable X with distribution F is specified. We call δX the “degree of sepa- ration” or the “probability loss function” associated with X. The following lemma is a straightforward consequence of the definition. 
Lemma 6.3.1 Suppose x = (x1, · · · , xn) is a data vector with the empirical distribution Fn. Then δFn(z, z ′) = δx(z, z′), z, z′ ∈ R. This lemma implies that to prove a result about the degree of separation of data vectors, it suffices to show the result for the degree of separation of random variables. Theorem 6.3.2 Let X,Y be random variables and FX , FY , their corre- sponding distribution functions. a) Assume Y = φ(X), for a strictly increasing or decreasing function φ : R→ R. Then δFX (z, z′) = δFY (φ(z), φ(z′)), z < z′ ∈ R. b) δF (z, z ′) ≤ δF (z, z′′), z ≤ z′ ≤ z′′. c) δF (z1, z3) ≤ δF (z1, z2) + δF (z2, z3) + P (X = z2). 183 6.3. “Degree of separation” for distributions: the “probability loss function” d) Suppose, p ∈ [0, 1]. Then δF (lqF (p), rqF (p)) = 0. e) Suppose, p1 < p2 ∈ [0, 1]. Then δF (lqF (p1), rqF (p2)) ≤ p2 − p1. This im- mediately implies δF (lqF (p1), lqF (p2)) ≤ p2 − p1 and δF (rqF (p1), lqF (p2)) ≤ p2 − p1 by b). Remark. We may restate Part (c), for data vectors: Suppose x has length n and z2 is of multiplicity m, (which can be zero). Then the inequality in (c) is equivalent to δx(z1, z3) ≤ δx(z1, z2) + δx(z2, z3) +m/n. Proof a) Note that for a strictly increasing function φ, we have P (z < X < z′) = P (φ(z) < φ(X) < φ(z′)). Now suppose φ is strictly decreasing. Then z < z′ ⇒ φ(z′) < φ(z). Let Y = φ(X). Then δX(z, z ′) = P (z < X < z′) = P (φ(z′) < φ(X) < φ(z)) = δY (φ(z), φ(z′)). b) This is trivial. c) Consider the case z1 < z2 < z3. (The other cases are easier to show.) Then δF (z1, z3) = P (z1 < X < z3) = P (z1 < X < z2)+P (X = z2)+P (z2 < X < z3) = δ(z1, z2) + δ(z2, z3) + P (X = z2). d) This result is a straightforward consequence of Lemma 5.3.1 b) and c). e) This result follows from δF (lq(p1), rq(p2)) = P (lq(p1) < X < rq(p2)) = P (X < rq(p2))− P (X ≤ lq(p1)) ≤ p2 − p1. The last inequality being a result of Lemma 5.3.1 a) and d). Remark. We call part c) of the above theorem the pseudo–triangle inequal- ity. Here we give two examples about using the probability loss function and its interpretation. 184 6.3. “Degree of separation” for distributions: the “probability loss function” Example We showed above that the triangle property does not hold for the probability loss function and that might lead to the criticism that this definition is not intuitively appealing. By an example, we now show why it makes sense that the triangle property should not hold for such a situation. Suppose a few mathematicians are standing in a line Euclid, Khawarzmi, Khayyam, Gauss, Von Neumann. If we were to ask Khwarzmi about his distance from Euclid, he would an- swer: “0, since I am right beside him.” If we ask Khwarazmi again about his distance to Khayyam, he will say that “my distance is 0 since I am right be- side him.” However if we were to ask Euclid about his distance to Khayyam he would answer: “One unit (person) since Khwarzmi is in the middle.” We observe that this distance does not satisfy the triangle property as well. In this example the people sitting in the middle are the relevant factors. If we deal with a vector of sorted observations, then observations in the middle are the relevant factors. Example A student is told that he will receive a scholarship if he ranks first in an exam in his class in either of the subjects mathematics and physics. The teacher of the courses differ and take a practice exam in each subject. They return the students back their marks out of 100. 
They also publish the lists of all the marks after removing the names, to give the students a feeling of how they did in the class. Table 6.1 shows the marks in mathematics and physics. 185 6.3. “Degree of separation” for distributions: the “probability loss function” Mathematics Physics Physics before scaling 80 90 81.0 65 89 79.2 63 86 74.0 61 85 72.2 54 83 68.9 54 82 67.2 53 79 62.4 50 79 62.4 49 76 57.8 48 75 56.2 47 72 51.8 47 72 51.8 46 69 47.6 44 68 46.2 30 55 30.2 Table 6.1: A class marks in mathematics and physics. The third column are the raw physics marks before the physics teacher scaled them. Reza got 63 in math and 75 in physics. He decided to focus on just one subject that gives him a better chance in order to win the scholarship. He compared his mark in math with the best student in math: 63 against 80. So he needed |best mark− Reza’s mark| = 80− 63 = 17 more marks to be as good as the best student. Then he compared his physics mark to the best student in physics. He found he needs 90-75=15 marks to be as good as him. So he thought it’s better to focus on physics. But then he realized that different teachers use different exam and scoring methods. He had heard that the physics teacher scales the marks upward by the formula new mark = √ 100× old mark. So the student calculated the untransformed values and put the result in the third column. Now he noticed that his new mark is 56.2 while the best mark is 91. The difference this time is 24.8 which is a larger difference than before. According to his “decision-making tool”, the absolute difference, he should focus on math since the absolute difference for math was only 17. But what if the mathematics teacher had used another transformation to re-scale the marks without him knowing it? This made him see a disadvantage to using the absolute value difference. Instead he realized, he can use the number 186 6.4. Limit theory for the probability loss function of the students between himself and the best student as a measure of the difficulty of getting the best mark. He noticed his decision in this case will be independent of how the teachers re-scaled the marks. In the math case there is only one and for physics there are 8 students between him and the best student. Hence he decided that he should focus on math. This example was under the assumption that other students do not change their study habits or do not have access to the marks. If the other students had access to their marks or were ready to change their study fo- cus, we need to take into account other possible actions of the other students and the problem will become game-theoretical in nature, a very interesting problem on its own right. The solution for that problem we conjecture to be the same. 6.4 Limit theory for the probability loss function Theorem 6.4.1 Suppose X1,X2, · · · , is a sequence of i.i.d random vari- ables with distribution function F . Then as n→∞, δFn(z, z ′)→ δF (z, z′), a.s., uniformly in z, z′ ∈ R. In other words sup z>z′∈R |δFn(z, z′)− δF (z, z′)| → 0, a.s.. Proof If z = z′, the result is trivial. Suppose z > z′. We need to show that lim u→z− Fn(u)− Fn(z′) → a.s. lim u→z− F (u)− F (z′), (6.1) as n → ∞, uniformly in z > z′ ∈ R. Suppose ǫ > 0 is given. By Glivenko- Cantelli Theorem there exist N ∈ N such that for every n > N : |Fn(u)− F (u)| < ǫ 2 , a.s., ∀u ∈ R. Now for n > N , |( lim u→z− Fn(u)− Fn(z′))− ( lim u→z− F (u)− F (z′))| ≤ | lim u→z− (Fn(u)−F (u))|+|Fn(z′)−F (z′)| = lim u→z− |Fn(u)−F (u)|+|Fn(z′)−F (z′)|. 187 6.5. 
The probability loss function for the continuous case But since |Fn(u) − F (u)| < ǫ2 , limu→z− |Fn(u) − F (u)| ≤ ǫ2 . Also |Fn(z′) − F (z′)| < ǫ2 . Hence |( lim u→z− Fn(u)− Fn(z′))− ( lim u→z− F (u)− F (z′))| < ǫ. 6.5 The probability loss function for the continuous case This section studies the probability loss function when the distribution func- tion is continuous. The results are given in the following lemmas, which show some of its desirable properties in the continuous case. Lemma 6.5.1 (Probability loss for continuous distributions) Suppose X is a random variable with distribution function FX . Then δX(lqX(p1), rqX(p2)) = p2 − p1, p2 > p1, ∀p1, p2 ∈ [0, 1] iff FX is continuous. Proof If FX is continuous then for p1 < p2 and by Lemma 5.5.2, δ(lqX(p1), rqX(p2)) = P (lqX(p1) < X < rqX(p2)) = P (X < rqX(p2))− P (X ≤ lqX(p)) = F (rqX(p2))− F (lqX(p2)) = p2 − p1. If F is not continuous then there exists an x0 such that a = PX(X = x0) > 0. Let p1 = P (X < x0)+a/3 and p2 = P (X < x0)+a/2. Clearly lqX(p1) = x0 and rqX(p2) = x0. Hence δ(lqX(p1), rqX(p2)) = 0 6= p2 − p1. Lemma 6.5.2 Suppose δ(lqX(p1), rqX(p2)) = δ(rqX(p1), lqX(p2)) = a, p1 < p2. Then also a = δ(lqX(p1), lqX(p2)) = δ(rqX(p1), lqX(p2)) = δ(rqX(p1), rqX(p2)). Moreover, if X is continuous, all the above are equal to p2 − p1. 188 6.6. The supremum of δX Proof The result follows immediately from the fact that all the three quan- tities are greater than or equal to δ(rqX(p1), lqX(p2)) = a and smaller than or equal to δ(lqX(p1), rqX(p2)) = a. The second part is straightforward us- ing the previous lemma. 6.6 The supremum of δX This section investigates how large the probability loss can become under various scenarios. The results are given in the following lemmas. Lemma 6.6.1 Let Dist be the set of all distribution functions. Then sup F∈Dist δF (lqF (p1), lqF (p2)) = p2 − p1, p2 > p1, p1, p2 ∈ (0, 1). Proof This follows from the fact that δF (lqF (p1), lqF (p2)) ≤ p2 − p1 in general, as shown in Lemma 6.3.2 and δF (lqF (p1), lqF (p2)) = p2 − p1 for continuous variables. The same is true for data vectors as shown in the following lemma. Lemma 6.6.2 Suppose the supremum in the following is taken over all data vectors, then sup x δx(lqx(p1), lqx(p2)) = p2 − p1, p2 > p1, p1, p2 ∈ (0, 1). Proof We know that δx(lqx(p1), lqx(p2)) ≤ p2 − p1. To show that the supremum attains the upper bound, let xn = (1, · · · , n). Then lqxn(p1) = [np1] or [np1] + 1. Also lqxn(p2) = [np2] or [np2] + 1. Then ∆, the number of elements of x between lqxn(p1) and lqxn(p2) satisfies: [np2]− [np1]− 1 ≤ ∆ ≤ [np2]− [np1] + 1⇒ np2 − 1− np1 − 1− 1 ≤ ∆ ≤ np2 − np1 + 1⇒ −3/n ≤ δxn(p1, p2)− (p2 − p1) ≤ 1/n. This shows that δxn(p1, p2) tends to p2−p1 uniformly for all p1 < p2 ∈ [0, 1]. 189 6.6. The supremum of δX Lemma 6.6.3 Suppose p1, p2, · · · , pm ∈ [0, 1] and m = 2k. Then sup x max{δx(lqx(p1), lqx(p2)), δx(lqx(p3), lqx(p4)), · · · , δx(lqx(pm−1), lqx(pm))} = max{|p2 − p1|, · · · , |pm − pm−1|}. Proof The supremum is less than or equal to the left hand side by Lemma 5.3.1. Let xn = (1, 2, · · · , n). Without loss of generality suppose p1 < p2, p3 < p4, · · · , p2k−1 < p2k. By the properties of quantiles of data vectors: lqxn(pi) = x[npi] = [npi] or lqxn(pi) = x[npi]+1 = [npi] + 1. Also, lqxn(pi+1) = x[npi+1] = [npi+1] or lqxn(pi+1) = x[npi+1]+1 = [npi+1] + 1. Then, δxn(lqxn(pi), lqxn(pi+1)) ≥ 1n([npi+1]−[npi]−1) ≥ 1n(npi+1−npi−2) = (pi+1 − pi)− 2n . Hence δxn(lqxn(pi), lqxn(pi+1)) > |pi+1 − pi| − 2 n , i = 1, · · · ,m− 1. 
The inequality shows the supremum is greater than = max{|p2 − p1| − 2 n , · · · , |pm − pm−1| − 2 n }, for all n ∈ N. Now let n→ +∞ to get the conclusion. Lemma 6.6.4 Suppose p1, p2, · · · , pm ∈ [0, 1] and a1, a1, · · · , a2m ∈ [0, 1]. Then sup x [ ∫ a2 a1 δx(lqx(p1), lqx(p))dp + ∫ a4 a3 δx(lqx(p2), lqx(p))dp+ · · ·+ ∫ a2m a2m−1 δx(lqx(pm), lqx(p))dp] = ∫ a2 a1 |p− p1|dp + ∫ a4 a3 |p− p2|dp + · · ·+ ∫ a2m a2m−1 |p− pm|dp. Proof The proof is similar to the previous lemmas and we skip the details. 190 6.6. The supremum of δX 6.6.1 “c-probability loss” functions This section introduces a family of loss functions that are very similar to the probability loss function but might be more useful in some contexts, particularly when the distribution function is not continuous. A defect of the probability loss function is: it can be equal to zero even if a 6= b, a, b ∈ R. Also we noted that even though it resembles a metric it is not one. For example the triangle inequality does not hold. We introduce the “c- probability loss function” to solve these problems. Definition Suppose X is a random variable, δX its associated probability loss function and c ≥ 0. Then let δcX(a, b) = δX(a, b) + c(1− 1{0}(a− b)), where 1{0} is the indicator function at zero. Note that the c-probability loss is the sum of two losses. The first, δX(a, b), is the probability of being between the two values (a and b), the second, c(1 − 1{0}(a − b)), is the penalty for a and b not being equal. One question is what value of c should be chosen as the “penalty” of not being equal to the true value. It turns out that the value of c is not very important for many purposes as shown in the following lemma. Lemma 6.6.5 (Properties of the c-probability loss functions) a) δcX(a, b) = c⇔ a 6= b and δX(a, b) = 0. b) δcX(a, b) = 0 or δ c X(a, b) ≥ c. c) δcX is invariant under strictly monotonic transformations. d) Let d = sup x0∈R P (X = x0). Then if c ≥ d, δc satisfies the triangle inequality. e) δcX(lqX(p), rqX(p)) ≤ c. (It is either zero or c.) f) Suppose δcX is given for any c > 0. Then we can obtain any other δ d X for d ≥ 0. Proof a) and b) are trivial. c) Both δX and c(1 − 1{0}(a − b)) are invariant under monotonic transfor- mations. d) We use the pseudo–triangle inequality for the probability loss function. Take z1, z2, z3 ∈ R. We need to show δcX(z1, z3) ≤ δcX(z1, z2) + δcX(z2, z3) . If z1 = z3, the result is trivial. Otherwise c(1− 1{0}(z1 − z3)) = c and δcX(z1, z3) = δX(z1, z3) + c ≤ δX(z1, z2) + δX(z2, z3) + P (X = z2) + c 191 6.6. The supremum of δX ≤ δX(z1, z2) + δX(z2, z3) + c(1 − 1{0}(z1 − z2)) + c(1− 1{0}(z2 − z3)) = δcX(z1, z2) + δ c X(z2, z3). e) Trivial by properties of lq, rq and δX as shown in Lemma 5.3.1. f) Suppose δcX is given. If δ c X(a, b) = 0 then a = b and hence δ d X(a, b) = 0. If a 6= b then δcX(a, b) = δX(a, b) + c. From this we can obtain δX(a, b) = δcX(a, b)− c and hence δdX(a, b) = δcX(a, b) − c+ d. δX(X1,X2) (or δ c X(X1,X2)), if X1,X2 i.i.d∼ X can be considered as a measure of disparity of the common distribution. The following lemma shows that the expectation of this quantity is constant for all continuous random variables! Lemma 6.6.6 Suppose X is a continuous random variable, then E(δX(X1,X2)) = 2/3, where X1,X2 i.i.d∼ X. Also E(δcX (X1,X2)) = 2/3 + c. Proof We know that FX(X1) and FX(X2) are both uniformly distributed on (0,1) and independent. Hence E(δX (X1,X2)) = E(|F (X1)− F (X2)|) =∫ 1 0 ∫ 1 0 |p1 − p2|dp1dp2 = 2 ∫ 1 0 ∫ 1 p2 (p1 − p2)dp1dp2 = 2 ∫ 1 0 (1− 2p2 + p22)dp2 = 2/3. 
E(δcX (X1,X2)) = 2/3 + c is obtained by noting that P (X1 = X2) = 0 for continuous random variables. 192 Chapter 7 Approximating quantiles in large datasets 7.1 Introduction This chapter develops an algorithm for approximating the quantiles in petas- cale (petabyte= one million gigabytes) datasets and uses the “probability loss function” to assess the quality of the approximation. The need for such an approximation does not arise for the sample average, another common data summary. That is because if we break down the data to equal parti- tions and calculate the mean for every partition, the mean of the obtained means is equal to the total mean. It is also easy to recover the total mean from the means of unequal partitions if their length is known. However computer memories, several gigabytes (GBs) in size, cannot handle large datasets that can be petabytes (PBs) in size. For example, a laptop with 2 GBs of memory, using the well–known R package, could find the median of a data file of about 150 megabytes (MBs) in size. However, it crashed for files larger than this. Since large datasets are commonly assem- bled in blocks, say by day or by district, that need not be a serious limitation except insofar as the quantiles computed in that way cannot be used to find the overall quantile. Nor would it help to sub–sample these blocks, unless these (possibly dependent) sub–samples could be combined into a grand sub-sample whose quantile could be computed. That will not usually be possible in practice. The algorithm proposed here is a “worst–case” algo- rithm in the sense that no matter how the data are arranged, we will reach the desired precision. This is of course not true if we sample from the data because there is a (perhaps small) probability that the approximation could be poor. We also address the following question: Question: If we partition the data–file into a number of sub–files and compute the medians of these, is the median of the medians a good approximation to the median of the data–file? 193 7.2. Previous work We first show that the median of the medians does not approximate the exact median well in general, even after imposing conditions on the number of partitions or their length. However for our proposed algorithm, we show how the partitioning idea can be employed differently to get good approximations. “Coarsening” is introduced to summarize data vector with the purpose of inferring about the quantiles of the original vector using the summaries. Then the “d-coarsening” quantile algorithm which works by partitioning the data (or use previously defined partitions) to possibly non- equal partitions, summarizing them using coarsening and inferring about the quantiles of the original data vector using the summaries. Then we show the deterministic accuracy of the algorithm in Theorem 7.4.1. The accuracy is measured in terms of the probability loss function of the original data vector. This is an extension of the work of Albasti et al. in [3] to non-equal size partition case. Theorem 7.4.1 still requires the partition sizes to be divisible by d the coarsening factor. In order to extend the results further to the case where the partitions are nit divisible by d, we investigate how quantiles of a data vector with missing data or contaminated data relate to the quantiles of the original data in Lemma 7.4.3 and Lemma 7.4.4. Also in Lemma 7.5.1, we show if the quantiles of a coarsened vector are used in place of the quantiles of the original data vector how much accuracy will be lost. 
Finally we investigate the performance of the algorithm using both simulations and real climate datasets. 7.2 Previous work Finding quantiles and using them to summarize data is of great importance in many fields. One example is the climate studies where we have very large datasets. For example the datasets created by computer climate models are larger than PBs in size. In NCAR (National Center for Atmospheric sciences at Boulder, Colorado), the climate data (outputs of compute models) are saved on several disks. To access different parts of these data a robot needs to change disks form a very large storage space. Another case where we confront large datasets is in dealing with data streams which arise in many different applications such as finance and high–speed networking. For many applications, approximate answers suffice. In computer science, quantiles are important to both data base implementers and data base users. They can also be used by business intelligence applications to drive summary information from huge datasets. As pointed out by Gurmeet et al. in [32], a good quantile approximation 194 7.2. Previous work algorithm should 1. not require prior knowledge of the arrival or value distribution of its inputs. 2. provide explicit and tunable approximation guarantees. 3. compute results in a single pass. 4. produce multiple quantiles at no extra cost. 5. use as little memory as possible. 6. be simple to code and understand. Finding quantiles of data vectors and sorting them are parallel problems since once we sort a vector finding any given quantile can be done instantly. A good account of early work in sorting algorithms can be found in [28]. Munero et al. in [36] showed for P–pass algorithms (algorithms that scan the data P times) Θ(N/P ) storage locations are necessary and sufficient, where N is the length of the dataset. (See Appendix C for the definitions of complexity functions such as Θ.) It is well–known that the worst-case complexity of sorting is n log2 n + O(1) as shown in [33]. In [39], Paterson discusses the progress made in the so–called “selection” problem. He lets Vk(n) be the worst–case minimum number of pairwise comparisons required to find the k–th largest out of n “distinct elements”. In particular M(n) = Vk(n) for k = ⌈n/2⌉. In [8], it is shown that the lower bound for Vk(n) is n+min{k−1, n−k}−1, an achieved upper bound by Blum is 5.43n. Better upper bounds have been achieved through the years. The best upper bound so far is 2.9423N and the lower bound is (2+α)N where α is of order 2−40. Yao in [49], showed that finding approximate median needs Ω(N) com- parisons in deterministic algorithms. Using sampling this can be reduced to O( 1 ǫ2 log(δ−1)) independent of N , where ǫ is the accuracy of the approxima- tion in terms of the “probability loss” in our notation. In [36], Munero et al. showed that O(N1/p) is necessary and sufficient to find an exact φ–quantile in p passes. Often an exact quantile is not needed. A related problem is finding space–efficient one–pass algorithms to find approximate quantiles. A sum- mary of the work done in this subject and a new method is given in [1]. Two approximate quantile algorithms using only a constant amount of memory were given by Jain [26] Agrawal et al. in [1]. No guarantee for the error was given. Alsabti et al. in [3], provide an algorithm and guaranteed error 195 7.3. The median of the medians in one pass. 
This algorithm works by partitioning the data into subsets, summarizing each partition and then finding the final quantiles using the summarized partitions. The algorithm in this chapter is an extension of this algorithm to the case of partitions of unequal length. 7.3 The median of the medians A proposed algorithm to approximate the median of a very large data vector partitions the data into subsets of equal length, computes the median for each partition and then computes the median of the medians. For example, suppose n = lm and break the data to m vectors of size l. One might conjecture that by picking l orm sufficiently large the median of the medians would ensure close proximity to the exact median. We show by an example that taking l and m very large will not help to get close to the exact median. Let l = 2b+ 1 and m = 2a+ 1. partition number Partition Median of the partition 1 (1, 2, · · · , b, b+ 1, 10b, · · · , 10b) b+ 1 2 (1, 2, · · · , b, b+ 1, 10b, · · · , 10b) b+ 1 . . . . . . . . . a (1, 2, · · · , b, b+ 1, 10b, · · · , 10b) b+ 1 a+1 (1, 2, · · · , b, b+ 1, 10b, · · · , 10b) 10b a+2 (10b, 10b, · · · , 10b) 10b . . . . . . . . . 2a+1 (10b, 10b, · · · , 10b) 10b Table 7.1: The table of data Example Table 7.1 shows the dataset partitioned into m = 2a+1 vectors of equal length. Every vector is of length l = 2b + 1. The first a + 1 vectors are identical and 10b is repeated b times in them. The last a vectors are also identical with all components equal to 10b. The median of the medians turns out to be b + 1. However, the median of the dataset is 10b. We show that b+1 is in fact “almost” the first quantile. This is because (b+1) is smaller 196 7.4. Data coarsening and quantile approximation algorithm than all 10b’s. There are (a+1)b+a(2b+1) data points equal to 10b. Hence b+ 1 is smaller than this fraction of the data points: (a+ 1)b+ a(2b+ 1) (2a+ 1)(2b + 1) = 2a+ 2 2a+ 1 b 4b+ 2 + a 2a+ 1 ≈ 1× 1 4 + 1 2 ≈ 3 4 . With a similar argument, we can show that b + 1 is greater than almost a quarter of the data points (the ones equal to 1, 2, · · · , b). Hence b + 1 is “almost” the first quantile. One can prove a rigorous version of the the following statement. The median of the medians is “almost” between the first and the third quartile. We only give a heuristic argument for simplicity. To that end, let n = lm and m = 2a + 1 and l = 2b + 1. Let M be the exact median and M ′ be the median of the medians. Order the obtained medians of each partition and denote them by M1, · · · ,Mm. By definition M ′ ≥ Mj , j ≤ a and M ′ ≤Mj , j ≥ a+ 1. Each Mj , j ≤ a is less than or equal to b data points in its partition. Hence, we conclude that M ′ is less than or equal to ab data points. Similarly M ′ is greater than or equal to ab data points (which are disjoint for the data points used before). But abn = ab (2a+1)(2b+1) ≈ 14 . Hence, M ′ is greater than or equal to 1/4 data points and less than or equal to 1/4 data points. 7.4 Data coarsening and quantile approximation algorithm This section introduces an algorithm to approximate quantiles in very large data vectors. As we demonstrated in the previous section the median of medians algorithm is not necessarily a good approximation to the exact median of a data vector even if we have a large number of partitions and large length of the partitions. The algorithm is based on the idea of “data coarsening” which we will discuss shortly. 
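Before turning to the details, a quick numerical check of the previous section's counterexample may be helpful. The following R sketch builds the dataset of Table 7.1 (the values a = b = 100 are arbitrary illustrative choices of ours) and confirms that the median of the medians lands near the first quartile, while the exact median is 10b.

```r
## Numerical check of the Table 7.1 construction (a, b chosen arbitrarily here).
a <- 100; b <- 100                        # m = 2a+1 partitions, each of length l = 2b+1
part1 <- c(1:(b + 1), rep(10 * b, b))     # rows 1 .. a+1 of Table 7.1
part2 <- rep(10 * b, 2 * b + 1)           # rows a+2 .. 2a+1

partitions <- c(rep(list(part1), a + 1), rep(list(part2), a))
x <- unlist(partitions)

median_of_medians <- median(sapply(partitions, median))   # equals b + 1
exact_median      <- median(x)                            # equals 10b

c(median_of_medians = median_of_medians,
  exact_median      = exact_median,
  frac_below        = mean(x < median_of_medians))         # about one quarter
```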
The proposed algorithm can give us approximations to the exact quantile of known precisions in terms of degree of separation. After stating the algorithm, we prove some theorems that give us the precision of the algorithm. The results hold for partitions of non–equal length. 197 7.4. Data coarsening and quantile approximation algorithm Definition Suppose a data vector x of length n = n1n2 is given, n1, n2 > 1 ∈ N. Also let sort(x) = y = (y1, · · · , yn). Then the n2–coarsening of x, Cn2(x) is defined to be (yn2 , y2n2 , · · · , y(n1−1)n2). Note that Cn2(x) has length n1 − 1. Let pi = i/n1, i = 1, 2, · · · , (n1 − 1). Then Cn2(x) = (lqx(p1), · · · , lqx(pn1−1)). We can immediately generalize the coarsening operator. Suppose sort(x) = (y1, · · · , yn), and n2 < n is given. Then by The Quotient–Remainder Theorem from elementary number theory, there exist n1 ∈ N ∪ {0} and r < n2 such that n = n1n2 + r. Define Cn2(x) = (yn2 , · · · , yn2(n1−1)). The expression is similar to before. However, there are n2 + r elements after yn2(n1−1) in the sorted vector y. In this sense this coarsening is not fully symmetric. We show that if n2 is small compared to n this lack of symmetry has a small effect on the approximation of quantiles. Suppose x is a data vector of length n = ∑m i=1 li. We introduce the coarsening algorithm to find approximations to the large data vectors. d–Coarsening quantiles algorithm: 1. Partition x into vectors of length l1, · · · , lm. (Or use pre–existing partitions, e.g. partitions of data saved in various files on the hard disk of a computer.) x1 = (x1, · · · , xl1), x2 = (xl1+1, · · · , xl1+l2), · · · , xm = (x∑m−1 j=1 lj+1 , · · · , xn) 2. Sort each xl, l = 1, 2, · · · ,m and let yl = sort(xl), l = 1, · · · ,m: y1 = (y11 , · · · , y1l1), · · · , ym = (ym1 , · · · , ymlm). 3. d–Coarsen every vector: (y1d, · · · , y1(c1−1)d), · · · , (ymd , · · · , ym(cm−1)d), and for simplicity drop d and use the notation wji = y j id. w1 = (w11, · · · , w1(c1−1)), · · · , wm = (wm1 , · · · , wm(cm−1)). 198 7.4. Data coarsening and quantile approximation algorithm 4. Stack all the above vectors into a single vector and call it w. Find rqw(p) (or lqw(p)) and call it µ. Then µ is our approximation to rqx(p) (or lqx(p)). Theorem 7.4.1 Suppose x is of length n = ∑m i=1 li, m ≥ 2 and li = cid. Let C = ∑m i=1 ci. Apply the coarsening algorithm to x and find µ to approximate rqx(p) (or lqx(p)). Then µ is a (left and right) quantile in the interval [p− ǫ, p+ ǫ], where ǫ = m+1C−m . In other words δx(µ, rqx(p)) ≤ ǫ and δx(µ, lqx(p)) ≤ ǫ. When li = cd, i = 1, · · · ,m, ǫ = m+1m−1 1c−1 ≤ 3c−1 . We need an elementary lemma in the proof of this theorem. Lemma 7.4.2 (Two interval distance lemma) Suppose two intervals I = [a, b] and J = [c, d] subsets of R are given. Then sup{|p− q|, p ∈ I, q ∈ J} = max{|a− d|, |b− c|}. Proof sup{|p − q|, p ∈ I, q ∈ J} ≥ max{|a − d|, |b − c|} is trivial because a, b ∈ I and c, d ∈ J . To show the converse note that |p− q| = p− q or q − p, p ∈ I, q ∈ J . But p− q ≤ b− c, and q − p ≤ d− a. Hence |p − q| ≤ max{b− c, d− a} ≤ max{|b− c|, |a − d|}. This completes the proof. Proof of Theorem 7.4.1. Let n′ = ∑m i=1(ci − 1) = ∑m i=1 ci − m = C − m and MC = {(i, j)|i = 1, 2 · · · ,m, j = 1, · · · , ci−1}, the index set of w. Also let c = max{c1, · · · , cm}. Suppose, h−1n′ ≤ p < hn′ , h = 1, · · · , n′. Then since µ = rqw(p), there are disjoint subsets of MC , K and K ′ such that |K| = h, |K ′| = n′ − h, µ ≥ wij, (i, j) ∈ K and µ ≤ wij, (i, j) ∈ K ′. 
(This is because if we let v = sort(w), rqw(p) = vh since [n ′p] = h− 1.) 199 7.4. Data coarsening and quantile approximation algorithm K,K ′ are not necessarily unique because of possible repetitions among the wit. Hence we impose another condition on K and K ′. If (i, t) ∈ K then (i, u) /∈ K ′, u < t. It is always possible to arrange for this condition. For suppose, (i, t) ∈ K and (i, u) ∈ K ′, u < t. Then µ ≥ wti and µ ≤ wiu, hence wit ≤ wui . But since u < t we have wit ≤ wui by the definition of wi. We conclude that wit = w u i . Now we can simply exchange (i, t) and (i, u) between K and K ′. If we continue this procedure after finite number of steps we will get K and K ′ with the desired property. Now define • K1 = {(i, 1)|(i, 1) ∈ K}, with |K1| = k1 and I1 = {(i, j)|j ≤ d, (i, 1) ∈ K}, Then |I1| = k1d. Also note that if (i, j) ∈ I1, µ ≥ wi1 ≥ yij. • Let K2 = {(i, 2)|, (i, 2) ∈ K}, with |K2| = k2 and I2 = {(i, j)|d < j ≤ 2d, (i, 2) ∈ K}. Then |I2| = k2d. Also note that if (i, j) ∈ I2, µ ≥ wi2 ≥ yij. • Let Kt = {(i, t)|(i, t) ∈ K}, with |Kt| = kt and It = {(i, j)|(t − 1)d < j ≤ td, (i, t) ∈ K}. Then |It| = ktd. Also note that if (i, j) ∈ It, µ ≥ wit ≥ yij. • Let Kc−1 = {(i, (c − 1))|(i, c − 1) ∈ K}, with |Kc−1| = kc−1 and I(c−1) = {(i, j)|(c − 2)d < j ≤ (c− 1)d, (i, c − 1) ∈ K}. Then |Ic−1| = kc−1d. Also note that if (i, j) ∈ I(c−1), µ ≥ wi(c−1) ≥ yij. 200 7.4. Data coarsening and quantile approximation algorithm Note that K = ∪c−1t=1Kt, |K| = k1,+ · · ·+kc−1. Since the Kt are disjoint the It are also disjoint. Let I = ∪c−1t=1It then |I| = d(k1+ · · ·+ kc−1) = d|K|. Also note that (i, j) ∈ I ⇒ µ ≥ yij. Similarly define, • K ′1 = {(i, 1)|(i, 1) ∈ K ′}, |K ′1| = k′1, and I ′1 = {(i, j)|d < j ≤ 2d, (i, 1) ∈ K ′}. Then |I ′1| = k′1d. Also note that if (i, j) ∈ I ′1, µ ≤ wi1 ≤ yij. • Let K ′2 = {(i, 2)|(i, 2) ∈ K ′}, |K ′2| = k′2, and I ′2 = {(i, j)|2d < j ≤ 3d, (i, 2) ∈ K ′}. Then |I ′2| = k′2d. Also note that if (i, j) ∈ I ′2, µ ≤ wi2 ≤ yij. • Let K ′t = {(i, t)|(i, t) ∈ K ′}, |K ′t| = k′t, and I ′t = {(i, j)|td < j ≤ (t+ 1)d, (i, t) ∈ K ′}. Then |I ′t| = k′td. Also note that if (i, j) ∈ I ′t then µ ≤ wit ≤ yij. • K ′c−1 = {(i, (c − 1))|(i, c − 1) ∈ K ′}, |K ′c−1| = k′c−1, and I ′c−1 = {(i, j)|j > (c− 1)d, (i, c − 1) ∈ K ′}. Then |I ′c−1| = k′c−1d. Also note that if (i, j) ∈ I ′c−1 ⇒ µ ≤ wi(c−1) ≤ yij. Then |I| = |K|d and |I ′| = |K ′|d. We claim that I ∩ I ′ = ∅. To see this note that because of how the second components in It and I ′ t are defined, it is only possible that It+1 = {(i, j)|td < j ≤ (t + 1)d, (i, t + 1) ∈ K} and I ′t = {(i, j)|td < j ≤ (t+ 1)d, (i, t) ∈ K ′} intersect for some t = 1, · · · , c− 2. But if they intersect then there exist i, t such that (i, t+1) ∈ K and (i, t) ∈ K ′ which is against our assumption regarding K and K ′. Hence by Lemma 5.2.4, µ is a quantile between 201 7.4. Data coarsening and quantile approximation algorithm [ |K|d n , n− |K ′|d n ] = [ hd∑m i=1 cid , n− (n′ − h)d∑m i=1 cid ] = [ h C , m+ h C ]. But we know that p ∈ [ h− 1 C −m, h C −m). We are dealing with two interval in one of them µ is a quantile and the other contains p. We showed in Lemma 7.4.2 if two intervals [a, b] and [c, d] are given, the sup distance between two elements of the two intervals is max{|a− d|, |b − c|}. Applying this to the above two intervals we get, max{|m+ h C − h− 1 C −m |, | h− 1 C −m − h C |}, which is equal to, max{|mC −m 2 − hm+ C C(C −m) |, | C − hm C(C −m) |}. But m2 + hm ≤ m2 + (C −m)m = mC. 
Hence |mC −m 2 − hm+ C C(C −m) | = mC −m2 − hm+ C C(C −m) ≤ mC + C C(C −m) = m+ 1 C −m. Also | C − hm C(C −m) | ≤ C +mC C(C −m) ≤ m+ 1 C −m. Hence the max is smaller than ǫ = m+1C−m and we conclude that µ is a quantile for p′ which is at most as far as ǫ to p. The case li = cd is easily obtained by replacing C = mc and noting that m+1 m−1 ≤ 3 m ≥ 2. In most applications, usually the data partitions are not divisible by d. For example the data might be stored in files of different length with common factors. Another situation involves a very large file that is needed to be read in successive stages because of memory limitations. Suppose that 202 7.4. Data coarsening and quantile approximation algorithm we need a precision ǫ (in terms of degree of separation) and based on that we find an appropriate c and m. Note that n might not be divisible by mc. First we prove two lemmas. These lemmas show what happens to the quantiles if we throw away a small portion of the data vector or add some more data to it. The first lemma is for a situation that we have thrown away or ignored a small part of the data. The second lemma is for a situation that a small part of the data are contaminated or includes outliers. In both cases, we show how the quantiles computed in the “imperfect” vectors correspond to the quantiles of the original vector. In both case x stands for the imperfect vector and w is the complete/clean data. Lemma 7.4.3 (Missing data quantile summary lemma) Suppose x = (x1, · · · , xn), sort(x) = (y1, · · · , yn) and y′ = lqx(p), p ∈ [0, 1]. Consider a vector x⋆ of length n⋆ and let w = stack(x, x⋆). Then y′ = lqw(p ′), where p′ ∈ [p − ǫ, p + ǫ] and ǫ = n⋆n+n⋆ . Similarly if y′ = rqx(p) and p ∈ [0, 1], y′ = rqw(p′), where p′ ∈ [p−ǫ, p+ǫ] and ǫ = n ⋆ n+n⋆ . Proof We prove the result for lqx only and a similar argument works for rqx. Let z = sort(w) then lqz = lqw. For p = 1 the result is easy to see. Otherwise, in ≤ p < i+1n for some i = 0, · · · , n−1. But then y′ = lqx(p) = yi. In the new vector z since we have added n⋆ elements y′ = zj for some j, i ≤ j < i+ n⋆. Hence y′ = lqz( jn+n⋆ ). From np− 1 < i ≤ np, we conclude np− 1 n+ n⋆ < i n+ n⋆ ≤ j n+ n⋆ < i+ n⋆ n+ n⋆ ≤ np+ n ⋆ n+ n⋆ . Hence, n⋆(1− p)− 1 n+ n⋆ < j n+ n⋆ − p < n ⋆(1− p) n+ n⋆ ⇒ | j n+ n⋆ − p| < max{|n ⋆(1− p)− 1 n+ n⋆ |, |n ⋆(1− p) n+ n⋆ |}. But |n⋆(1−p)n+n⋆ | ≤ n ⋆ n+n⋆ and |n ⋆(1−p)−1 n+n⋆ | ≤ max{ n ⋆−1 n+n⋆ , 1 n+n⋆} since p ranges in [0, 1]. We conclude that that | j n+ n⋆ − p| < n ⋆ n+ n⋆ . 203 7.4. Data coarsening and quantile approximation algorithm Lemma 7.4.4 (Contaminated data quantile summary lemma) Suppose x = (x1, · · · , xn), sort(x) = (y1, · · · , yn) and y′ = lqx(p), p ∈ [0, 1]. Consider the vector w = (x1, x2, · · · , xn−n⋆) then y′ = lqw(p′), where p′ ∈ [p − ǫ, p + ǫ] and ǫ = n⋆n−n⋆ . Similarly if y′ = rqx(p) and p ∈ [0, 1], y′ = rqw(p′), where p′ ∈ [p−ǫ, p+ǫ] and ǫ = n ⋆ n−n⋆ . Proof We only show the case for lqx and a similar argument works for rqx. Let z = sort(w). Then lqz = lqw. If p = 1 the result is easy to see. Otherwise, i n ≤ p < i+1n for some i = 0, · · · , n − 1. But then y′ = lqx(p) = yi. In the new vector z since we have removed n⋆ elements y′ = zj for some j, i − n⋆ ≤ j ≤ i. Hence y′ = lqz( jn−n⋆ ). From np− 1 < i ≤ np, we conclude np− 1− n⋆ < j ≤ np⇒ np− n⋆ ≤ j ≤ np. Hence −n⋆ + n⋆p n− n⋆ ≤ j n− n⋆ − p ≤ n⋆p n− n⋆ ⇒ | j n− n⋆ − p| ≤ n⋆ n− n⋆ . In the case that the partitions are not divisible by d, we can use the same algorithm with generalized coarsening. 
The error will increase obviously and the next two lemmas say by how much. Lemma 7.4.5 Suppose x has length n = lm + r, 0 ≤ r < l and m = cd. To find lqx(p), apply the algorithm in the previous theorems to a sub–vector of x of length lm. Then the obtained quantile is a quantile for a number in [p − ǫ, p + ǫ], where ǫ = m+1m−1 1c−1 + rlm+r . Proof The result is a straightforward consequence of the Theorem 7.4.1 and the Lemma 7.4.3. Lemma 7.4.6 Suppose x has length n = ∑m i=1 li and li = cid+ ri, ri < d. Let R = ∑m i=1 ri. Then apply the algorithm above to x to find lqx(p), using the generalized coarsening. The obtained quantile is a quantile for a number in [p− ǫ, p+ ǫ] where ǫ = m+1C−m + RR+Cd . Proof Let l′i = cid. Consider x ′ a sub–vector of x consisting of (y11 , · · · , y1l′1), (y 2 1 , · · · , y2l′2), · · · , (y m 1 , · · · , yml′m). 204 7.5. The algorithm and computations Then x′ has length ∑m i=1 l ′ i. By Lemma 7.4.3 p-th quantile found by the al- gorithm is a quantile in [p− ǫ1, p+ ǫ1], ǫ1 = m+1C−m for x′. x has R = ∑m i=1 ri elements more than x′. Hence the obtained quantile is a quantile for x for a number in [p− ǫ, p + ǫ], ǫ = ǫ1 + RR+Cd . 7.5 The algorithm and computations Suppose a data vector x has length n. To find the quantiles of this vector, we only need to sort x. Since then for any p ∈ (0, 1), we can find the first h such that p ≥ h/n. Note that sort(x) = (lqx(1/n), lqx(2/n), · · · , lqx(1)) = (rqx(0), rqx(1/n), · · · , rqx(n− 1 n )). We only focus on left quantiles here. Similar arguments hold for the right quantile. Obviously, the longer the vector x, the finer the resulting quantiles are. Now imagine that we are given a very long data vector which cannot even be loaded on the computer memory. Firstly, sorting this data is a challenge and secondly, reporting the whole sorted vector is not feasible. Assume that we are given the sorted data vector so that we do not need to sort it. What would be an appropriate summary to report as the quantiles? As we noted also the sorted vector itself although appropriate, maybe of such length as to make further computation and file transfer impossible. The natural alternative would be to coarsen the data vector and report the resulting coarsened vector. To be more precise, suppose, length(x) = n = n1n2 and y = sort(x) = (y1, · · · , yn). Then we can report y′ = Cn2(y) = (yn2 , · · · , y(n1−1)n2). This corresponds to (lqy′(1/n2), · · · , lqy′(1)). How much will be lost by this coarsening? Suppose, we require the left quantile corresponding to (h − 1)/n < p ≤ h/n, h = 1, · · · , n. Then x would give us yh. But since (h− 1)/n < p ≤ h/n np < h ≤ np+ 1. 205 7.5. The algorithm and computations Also suppose for some h′ = 1, · · · , n1, (h′ − 1)/(n1 − 1) < p ≤ (h′)/(n1 − 1)⇒ (h′ − 1) < p(n1 − 1) ≤ h′ ⇒ (n1 − 1)p ≤ h′ < p(n1 − 1) + 1. Then (h− 1)(n1 − 1)/n < h′ < h(n1 − 1)/n + 1, and (h− 1)(n1 − 1)n2/n < h′n2 < h(n1 − 1)n2/n+ n2. (7.1) Using the coarsened vector, we would report yh′(n2) as the approximated quantile for p. The degree of separation between this element and the exact quantile using Equation 7.1 is less than or equal to max{|h− (h− 1)(n1 − 1)n2/n| n , |h(n1 − 1)n2/n+ n2 − h| n }. This equals max{|−hn2 − n1n2 + n2 n2 |, |−hn2 + nn2 n2 |}. But |−hn2 − n1n2 + n2 n2 | = n2(n1 + n− 1) n2 < n2(n1 + n) n2 = 1 n + n2 n , and |−hn2 + nn2 n2 | < n2 n . Hence the degree of separation is less than 1/n+1/n1. We have proved the following lemma. Lemma 7.5.1 Suppose x is a data vector of the length n = n1n2 and y = sort(x), y′ = Cn2(y). 
Then if we use the quantiles of y′ in place of x, the accuracy lost in terms of the probability loss of x (δx) is less than 1/n + 1/n1.

The algorithm proposes that, instead of sorting the whole vector and then coarsening it, we coarsen partitions of the data. The accuracy of the quantiles obtained in this way is given in the theorems of the previous section. This allows us to load the data into memory in stages and avoid program failure due to the length of the data vector. We are also interested in the performance of the method in terms of speed, and we carry out a simulation study in R (a well-known statistical software package) to assess this. For theoretical results regarding the complexity of the special case of the algorithm with equal partitions, see [3].

For the simulation study, we create a vector x of length n = 10^7 and apply the algorithm with m = 1000, c = 20, d = 500. We create this vector in a loop of length 1000. During each iteration of the loop, we generate a random mean for a normal distribution by sampling from N(0, 100), and then sample 10,000 points from a normal distribution with this mean and standard deviation 1. We compare two scenarios:

1. Start with a NULL vector x and in each iteration append the full generated vector of length 10,000 to x. After the loop has completed, sort the data vector, which now has length 10^7, with the command sort in R and use it to find the quantiles.

2. Start with a NULL vector w. During each iteration, after generating the random vector, d-coarsen the data with d = 500 (hence m = 1000, c = 20): first apply the sort command to the data and then d-coarsen the resulting sorted vector. During each iteration, append the coarsened vector to w. After all the iterations, sort w and use it to approximate the quantiles.

Remark. The first scenario corresponds to the straightforward quantile calculation and the second to our algorithm. Note that in real examples, instead of the loop, we could have a list of 1000 data files; the example still serves as a way of comparing the straightforward method with our algorithm.

Remark. If we wanted to create an even longer vector, say of length 10^10, the first method would not even complete, because the computer would run out of memory storing the whole vector x.

Remark. The final stage of the algorithm could exploit the fact that w is built from ordered vectors to make the algorithm even faster. We leave that as a problem to be investigated in the future.

We repeated the same procedure for n = 2 × 10^7, m = 1000, d = 500 and for n = 10^8, m = 1000, d = 500. The results of the simulation are given in Table 7.2, in which "DOS" stands for the degree of separation between the exact median and the approximated median, and the "DOS bound" is the bound on the degree of separation obtained from the theorems of the previous section. For n = 10^7 and n = 2 × 10^7, significant time savings accrue from using the algorithm. For a vector of length 10^8, R crashed when we tried to sort the original vector, and only the algorithm could provide results. In all cases the exact and approximated quantiles are close; in fact the DOS is significantly smaller than the DOS bound, because the bound is a worst-case bound. The exact and approximated quantiles for n = 10^7 are plotted in Figure 7.1.
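To make the comparison concrete, the following minimal R sketch mirrors the two scenarios just described. It is an illustration only, not the code used to produce the timings in Table 7.2, and the helper names d_coarsen and lq are ours:

    d_coarsen <- function(x, d) {
      y <- sort(x)
      # keep every d-th order statistic: y[d], y[2d], ..., y[length(y) - d]
      y[seq(d, length(y) - d, by = d)]
    }
    lq <- function(v, p) {            # left p-th quantile of a data vector, p in (0, 1]
      y <- sort(v)
      y[ceiling(p * length(y))]
    }
    set.seed(1)
    m <- 1000; cc <- 20; d <- 500     # m blocks, each of length cc * d = 10,000
    x <- NULL                         # scenario 1: the full data vector
    w <- NULL                         # scenario 2: stacked coarsened blocks
    for (i in 1:m) {
      mu    <- rnorm(1, mean = 0, sd = 100)    # random block mean from N(0, 100)
      block <- rnorm(cc * d, mean = mu, sd = 1)
      x <- c(x, block)                # keep everything (feasible only for moderate n)
      w <- c(w, d_coarsen(block, d))  # keep only the 500-coarsened sorted block
    }
    lq(x, 0.5)                        # exact median
    lq(w, 0.5)                        # approximated median from the summary w

In the second scenario only the c − 1 = 19 retained order statistics per block (19,000 values in total here) are ever kept, which is why the approach scales to vectors that cannot be sorted, or even stored, whole.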
Length                      n = 10^7        n = 2 × 10^7      n = 10^8
Exact median value          1.847120        1.857168          NA
Algorithm median value      1.866882        1.846463          1.846027
DOS                         0.00012         −6.475 × 10^−5    NA
DOS bound                   0.05268421      0.02566667        0.005030151
Time for exact median       186 s           461 s             NA
Time for the algorithm      6 s             18 s              98 s

Table 7.2: Comparing the exact method with the proposed algorithm in R, run on a laptop with 512 MB of memory and a 1500 MHz processor; m = 1000, d = 500. "DOS" stands for the degree of separation in the original vector. "DOS bound" is the theoretical degree of separation obtained by Theorem 7.4.1.

Next, we apply the algorithm to a real dataset consisting of the daily maximum temperature for 25 stations over Alberta during the period 1940–2004. We focus on the 95th percentile. The results are given in Table 7.3. The algorithm finds the percentile more quickly, but the time difference is not as large as in the simulation, because most of the time of both the algorithm and the exact computation is spent reading the files from the hard drive. The DOS bound is about 0.01 (on the 0–1 probability scale), while the true degree of separation is about 0.001. The estimated quantiles and the exact quantiles are plotted in Figure 7.2. The exact and approximated values match except very near 0 and very near 1, where the circles (the exact quantiles) and the +'s (the approximated quantiles) do not completely coincide; even there the difference is at most 0.01 in terms of DOS.

Figure 7.1: Comparing the approximated quantiles to the exact quantiles for n = 10^7; lq(p) is plotted against p. The circles are the exact quantiles and the +'s are the corresponding approximated quantiles.

Figure 7.2: Comparing the approximated quantiles to the exact quantiles for MT (daily maximum temperature) over 25 stations in Alberta, 1940–2004; lq(p) is plotted against p. The circles are the exact quantiles and the +'s the approximated quantiles.

Exact 95th percentile        27 °C
Algorithm 95th percentile    26.7 °C
DOS                          0.001278726
DOS bound                    0.01052189
Time for exact computation   8 min 6 s
Time for the algorithm       7 min 29 s

Table 7.3: Comparing the exact method with the proposed algorithm in R (run on a laptop with 512 MB of memory and a 1500 MHz processor) to compute the quantiles of MT (daily maximum temperature) over 25 stations with data from 1940 to 2004.

Chapter 8

Quantile data summaries

8.1 Introduction

This chapter introduces techniques to summarize data (using quantiles) and to manipulate and combine such summaries. "Weighted data vectors", which are an extension of data vectors, are introduced. The operators sort and stack are extended to weighted data vectors, and the operator comp (compress) is introduced to compress a data vector as much as possible with no loss of information. In the quantile definition chapter, we stated a few appealing properties that quantiles should satisfy. We established the equivariance and symmetry properties and left the following to later:

1. The "amount" of data between qx(p1) and qx(p2) should be a p2 − p1 (p1 < p2) fraction of the "data amount" of the whole data vector.

2. If we cut a sorted data vector up to the p1-th quantile and compute the p2-th quantile of the new vector, we should get the p1p2-th quantile of the original vector.
For example the median of a sorted vector upto its median should be the first quartile of the original vector. A natural definition for the “amount of data” between a, b would be the number of data points between a, b divided by the length of the whole vector. However, by this definition there is no hope of establishing property (1) knowing that p2−p1 can be irrational. Also for the second property one might conjecture that if we define the cut operator to be the sorted vector from left to lqx(p1) (or rqx(p1)) then this property holds. However, consider x = (1, 2) and a cut of length 0.6. Then we get the same vector x′ = (1, 2) after the cut using this definition since lqx(0.6) = 2. Now the 0.7th left (or right) quantile of the cut vector x′ is lqx′(0.7) = 2. However, lqx(0.6 × 0.7) = lqx(0.42) = 1. 212 8.1. Introduction In the following, we define the cut operator for p ∈ (0, 1) in a way that it ends with lqx(p) but satisfies property (2). The idea can be explained in the example by considering the vector x = (1, 2) as a weighted vector with weights (1/2, 1/2) and give 2 less “weight” than 1 after the cut. In summary, this chapter provides a framework to establish these properties, using the “partition” operator and the “cut” operator. When dealing with summarized data the following general question is a fundamental one: Question: Suppose x is a data vector which consists of m subvectors x1, · · · , xm. In other words x = stack(x1, · · · , xm). Assume we do not have access to the xi but to the wi, their summaries (possibly a result of coarsening of the xi). Then how can we approximate the quantiles of the original data vector x and assess how good this approximation is? We have already encountered such a problem in Chapter 7, where we an- swered the question in some specific cases. We do not answer the question in general in this chapter but provide a framework to formalize and answer these type of questions. In computer science quantiles are sometimes used to summarize large datasets. A good summary of the work for creating quantile summaries of datasets in a single pass is given in [19]. In order to make a summary (of length k) of a data vector using the quantiles, one has various choices to pick certain probability indices p1 ≤ p2 ≤ · · · ≤ pk, and save the corresponding quantiles. Using the probability loss function, we find an optimal way of doing this. Then we consider the problem of finding argmin a E(L(X, a)), for various L (loss) functions. It is widely claimed that if L is the absolute value function, the argmin is the median of X. We show that the argmin is in fact [lqX(1/2), rqX (1/2)]. We also find the argmin a E(δX (X, a)). Finally, we find optimal “probability index vectors” to assign quantiles to a random sample X1, · · · ,Xn, which can be used to make a quantile–quantile plot. Some previous techniques to make a q–q plot are discussed in [24]. 213 8.2. Generalization to weighted vectors 8.2 Generalization to weighted vectors This section extends the definitions and ideas developed before (quantiles, probability loss function, sorting, stacking etc.) from ordinary data vectors to weighted vectors. A weighted vector has two extra components compared to an ordinary vector: a weight allocation and a data amount. This allows us to summarize information in some cases. For example, consider the vector (1, 1, 1, 1, 1, 1, 1, 1, 1, 2). We observe that 1 is repeated 9 times and 2 only one time. 
We can summarize this by giving the elements (1, 2) a weight allocation (0.9, 0.1) and a data amount 10 which is the length of the vector in this case. Weighted vectors also enable us to define the “cut” operator to cut data vectors. Definition We call a triple χ = (x,wχ, nχ) a weighted vector if length(x) = length(wχ) = lx, x = (x1, · · · , xl), wχ = (wχ1 , · · · , wχl ), ∑l i=1 w χ i = 1 and n χ a positive real number. Note that nχ is not necessarily equal to the length of x. We call wχ the “weight vector” of χ and nχ the “data amount” of χ. Remark. Note that in order to specify a weight vector w, we do not need to specify the last component since the weights must sum up to one. Examples: 1. χ = ((1, 2, 3), (1/3, 1/3, 1/3), 3). This is equivalent to an ordinary vector of length 3 in a sense we make clear soon. 2. χ = ((1, 2, 3), (1/3, 1/3, 1/3), 6). Notice this weighted vector has the same elements as before with a data amount of 6 which is two times the previous vector. This vector is equivalent to the ordinary vector x = (1, 1, 2, 2, 3, 3). 3. χ = ((1, 1, 2, 3), (1/6, 1/6, 1/3, 1/3), 3). This is equivalent to vector given in 1. Note that one is repeated two times here. However, the sum of the weights for 1 is 1/6+1/6=1/3 which is the same as the vector defined in 1. 4. χ = ((1), (1), 1/2). Here we only have 1/2 data amount. i.e. we have less than one observation! (1/2 of an observation to be precise.) 5. χ = ((1, 2), (1/2, 1/2), √ 3). 214 8.2. Generalization to weighted vectors The first vector, x, in the definition χ = (x,wχ, nχ), is the vector of possible values, the second one, wχ, is the corresponding weights for elements of x and the third component, nχ, is a measure of how fine the vector is. A vector is called an ordinary vector if the length of x, lx, is equal to n χ and wχi = w χ j , i, j ∈ 1, · · · , lx. The ordinary vector corresponds to the usual data vectors. Denote the space of all weighted vectors by Υ. We define some operations and an equivalence relation on Υ. Definition Suppose χ = (x,wχ, nχ) then comp(χ) = ξ = (y,wξ , nξ), where y = (y1, · · · , yr) is a non-decreasing vector of all disjoint elements of x, wξi = ∑ xj=yi wχj and n ξ = nχ. It is clear that comp (compress operator) is an operator from Υ to Υ. Then we define an equivalence relation on Υ. Definition χ ∼ ξ in Υ iff comp(χ) = comp(ξ). Clearly, ∼ is an equivalence relation. Let us define a transformation of a weighted vector. Definition Suppose χ = (x,wχ, nχ) is a weighted vector and φ a trans- formation of R (not necessarily increasing). Then φ(χ) = ζ = (z, wζ = wχ, nζ = nχ), where zi = φ(xi), i = 1, 2, · · · , lx. For ordinary vectors x, y, comp(x) = comp(y) iff sort(x) = sort(y). Also comp leaves the last component of a weighted vector (the data amount) unchanged. Since x and wχ have the same length, we can show an element of Υ by pair consisting of a matrix of dimension 2× lx and a number nχ: χ = ( ( x1 · · · xlx wχ1 · · · wχlx ) , nχ) Given a weighted vector χ = (x,wχ, nχ), we can naturally define a dis- tribution function as follows. Definition Suppose χ = (x,wχ, nχ) is a weighted vector. The the empirical distribution of χ is defined as Fχ(a) = ∑ i, xi≤a wχi . 215 8.2. Generalization to weighted vectors Remark. If χ is an ordinary vector then Fχ is the usual empirical function. Then we extend the definition of the stack operator to weighted vectors. 
Definition Suppose χ = (x,wχ, nχ) and ξ = (y,wξ, nξ) are given then stack : Υ×Υ→ Υ, (χ, ξ) 7→ ζ = (z, wζ , nχ + nξ), where (z, wζ) in the matrix notation is given by ( x1 · · · xlx y1 · · · yly wχ1 nχ nχ+nξ · · · wχlx n χ nχ+nξ wξ1 nξ nχ+nξ · · · wξly n ξ nχ+nξ ) . Remark. In the definition, notice how the data amounts are used to adjust the weights. Remark. For ordinary vectors x, y the stack operator coincide to concate- nating x and y. Lemma 8.2.1 (Stack operator properties) a) The stack operator preserves the equivalence relation defined above, i.e. χ1 ∼ ξ1, χ2 ∼ ξ2, then stack(χ1, χ2) ∼ stack(ξ1, ξ2) b) stack(χ1, stack(χ2, χ3)) ∼ stack(stack(χ1, χ2), χ3) Proof a) Suppose χi = (x i, wχi , nχi), ξi = (y i, wξi , nξi) and χi ∼ ξi for i = 1, 2. Let χ = comp(stack(χ1, χ2)), comp(stack(ξ1, ξ2)) = ξ. We need to show χ = ξ. Let χ = (x,wχ, nχ) and ξ = (y,wξ , nξ). From χi = ξi for i = 1, 2, we conclude n χi = nξi , i = 1, 2, which in turn gives nχ = nχ1 + nχ2 = nξ1 + nξ2 = nξ. Also x = y since both x and y are increasingly sorted and every element in x is an element of x1 or x2 which have the same elements as y1 or y2. Now to show wχi = w ξ i , i = 1, 2, · · · , lx, suppose xi = yi be the corresponding element in x = y. Assume that the corresponding weight for xi in χ1 is w and in χ2 is w ′. Then the corresponding weight in ξ1 and ξ2 must be w and 216 8.2. Generalization to weighted vectors w′ respectively by the assumed equivalence relations. Hence wχi and w ξ i are equal to w. nχ1 nχ1 + nχ2 + w′. nχ2 nχ1 + nχ2 , and w. nξ1 nξ1 + nξ2 + w′. nξ2 nξ1 + nξ2 , which are equal. b) Let χ = (x,wχ, nχ) = comp[stack(χ1, stack(χ2, χ3))] and χ′ = (x′, wχ ′ , nχ ′ ) = comp[stack(stack(χ1, χ2), χ3))]. We show χ = χ′. Firstly, note that nχ = nχ1 + (nχ2 + nχ3) = (nχ1 + nχ2) + nχ3 = nχ ′ . x = x′ is trivial. Fix xi = x′i in x = x ′. Suppose its corresponding weight in χj is equal to wj, j = 1, 2, 3. To show that the corresponding weights wχi and w χ′ i are equal, note that the corresponding weight of xi in χ is a combination of its weights in χ1 and stack(χ2, χ3): wχi = w1 nχ1 nχ1 + (nχ2 + nχ3) +[w2 nχ2 nχ2 + nχ3 +w3 nχ3 nχ2 + nχ3 ] nχ2 + nχ 3 nχ1 + (nχ2 + nχ3) and the corresponding weight of xi in χ ′ is a combination of its weights in stack(χ1, χ2) and χ3: wχ ′ i = [w1 nχ1 nχ1 + nχ2 +w2 nχ2 nχ1 + nχ2 ] nχ1 + nχ2 (nχ1 + nχ2) + nχ3 +w3 nχ3 (nχ1 + nχ2) + nχ3 . But the previous two expressions are equal and the proof is complete. This lemma implies that we can use the notation stack(χ1, · · · , χm). Definition of quantiles and DOS for weighted vectors Now let us get to the definition of quantiles. We can proceed exactly in the same way as we did before by having in mind a bar of length one. Or alternatively, we can apply the quantile function definition for usual distributions to the empirical distribution of a weighted vector Fχ. This time, we proceed in a slightly different fashion which is equivalent to these 217 8.2. Generalization to weighted vectors methods. Suppose χ = (x,wχ, nχ) is given and ζ = comp(χ) = (z, wζ , nχ). We assume z has length lz. First, we define lqindχ : (0, 1]→ {1, 2, · · · , lz}, and rqindχ : [0, 1) → {1, 2, · · · , lz}, the “left quantile index” and “right quantile index” functions and then define the left and right quantile functions using the index functions. If ζ = comp(χ) = (z, wζ , nx) then we define lqχ(p) = zlqindχ(p), p ∈ (0, 1], lqχ(p) = −∞, p = 0, and rqχ(p) = zrqindχ(p), p ∈ [0, 1), rqχ(p) =∞, p = 1. Let ζ = comp(χ). 
lqindχ and rqindχ are defined as follows: • p = 0 then lqindχ(p) not defined and rqindχ(p) = 1. • 0 < p < wζ1 then lqindχ(p) = rqindχ(p) = 1. • p = wζ1 then lqindχ(p) = 1 and rqindχ(p) = 2. ... • wζ1 + · · ·+ wζi−1 < p < wζ1 + · · ·+ wζi then lqindχ(p) = rqindχ(p) = i. • p = wζ1 + · · ·+ wζi then lqindχ(p) = i, rqindχ(p) = i+ 1. ... • p = 1 then lqindχ(p) = lz and rqindχ is not defined. Remark. It is easy to see that χ ∼ ξ then lqχ = lqξ, rqχ = rqξ. Remark. For ordinary vectors, this is equivalent to the definition given in the previous sections. 218 8.2. Generalization to weighted vectors Remark. Consider the natural distribution function Fχ corresponding to a weighted vector χ then lqχ = lqFχ and rqχ = rqFχ. Hence, lqχ, rqχ satisfy all the properties proved for left and right quantile functions of a distribution function. Definition We generalize the degree of separation (probability loss func- tion) δχ on the set of weighted vectors as follows: δχ : R× R→ R+ ∪ {0}, δχ(z ′, z) = δχ(z, z′) = ∑ z p1 ... k. xk = (xsk , · · · , xtk), vk = ∑ 1≤j≤tk wj − ∑k−1 j=1 pj ≥ pk, ∑ 1≤j pk−1 . ... The corresponding weight vectors and data amounts are defined as: 1. wχ 1 = 1p1 (w χ s1 , w χ s2 , · · · , wχt1 − (v1 − p1)), ... k. wχ k = { 1 pk (wχsk , w χ sk+1 , · · · , wχtk − (vk − pk)) vk−1 = pk−1 1 pk (vk−1 − pk−1, wχsk+1, · · · , w χ tk − (vk − pk)) vk−1 > pk−1 . ... Lemma 8.2.3 If χ = (x,wχ, nχ) is an ordinary vector and lx = n χ = n1 + · · · + nm. Let P = ( n1nχ , · · · , nmnχ ) then the P-partition of χ is simply obtained by starting from the left and partitioning x to vectors of length n1, n2, · · · , nm. Proof This is a straightforward conclusion of the definition. Lemma 8.2.4 Suppose χ = (x,wχ, nχ) is partitioned by some P = (p1, · · · , pm) to χ1, · · · , χm then stack(χ1, · · · , χm) ∼ χ. Proof Let χ′ = stack(χ1, · · · , χm) and suppose χ′ = (x′, wχ′ , nχ′). Then clearly x′ and x have the same distinct elements. (Although it might be the 220 8.2. Generalization to weighted vectors case that x′ 6= x since some elements of x are repeated more than once in x.) Also nχ ′ = m∑ i=1 pin χ = nχ. In order to show that for z an element of the vector x, its corresponding weight is equal in χ and χ′, suppose z is equal to xi1 , · · · , xir in x with corresponding weights wχi1 , · · · , w χ ir . Then the weight corresponding to z in χ is equal to ∑r k=1w χ ik . Now note that any of xik , k = 1, · · · , r, corresponds to one or two elements in stack(χ1, · · · , χm) by the definition of the partitions operator. It can be the case that xik only appears in χ s or in χs, χs+1 if xik is at the end of the partition χs and at the beginning of the next. In the first case when xik only appears in χ s, its weight in χs will be 1psw χ ik and hence its weight contribution in stack(χ1, · · · , χm) will be nχ.psnχ 1psw χ ik = wχik . In the second case its weight in χs will be 1ps (w χ ik − (vs − ps)) and in χs+1 will be 1ps+1 (vs − ps). Hence its weight contribution in stack(χ1, · · · , χm) coming from χs, χs+1 is n χps nχ 1 ps (wχik − (vs− ps))+ nχps+1 nχ 1 ps+1 (vs− ps) = wχik . Summing up all the weights in stack(χ1, · · · , χm), we get the same value of∑r k=1w χ ik . Using the partition operator, we can easily define the cut operator as follows. Definition Let D = {(a, b)| a, b ∈ (0, 1), a < b}. Then cut : Υ×D → Υ is defined to be cut(χ, p1, p2) = χ 2, where χ2 is the second component of part(P, comp(χ)) = (χ1, χ2, χ3), the result of applying a partition operator with weights P = (p1, p2− p1, 1− p2) to comp(χ). 
We also define left cut and right cuts, lcut, rcut : (0, 1)→ R, lcut(χ, p) = χ1, rcut(χ, 1 − p) = χ2, where χ1 and χ2 are the first and second component of the partition of χ by P = (p, 1− p). Lemma 8.2.5 Suppose χ = (x,wχ, nχ) is a weighted vector and (p1, p2) in D. Then 221 8.2. Generalization to weighted vectors a) The amount of data in cut(χ, p1, p2) is n χ(p2 − p1). b) cut(χ, p1, p2) starts with rqχ(p1) and ends with lqχ(p2). c) The vector of lcut(χ, p) ends with lqχ(p). d) The vector of rcut(χ, p) starts with rqχ(1− p). e) Suppose p1, p2 ∈ (0, 1) then lcut(lcut(χ, p1), p2) = lcut(χ, p1p2). f) Suppose p1, p2 ∈ (0, 1) then rcut(rcut(χ, p1), p2) = rcut(χ, p1p2). Proof a) is trivial. To prove b), consider the definition of the partition operator as given in Definition 8.2.1 for arbitrary P = (p′1, · · · , p′m). For the first partition, xs1 = x1 = lqχ(p ′ 1) and for xt1 , we have∑ 1≤j≤t1 wj ≥ p′1, and ∑ 1≤j p′k−1 . If vk−1 = p′k−1, then ∑ 1≤j≤tk−1 wj = ∑k−1 i=1 p ′ i. Hence rqχ( ∑k−1 i=1 p ′ i) = xtk−1+1 = xsk . For tk, we have ∑ 1≤ja (X − a)dP + ∫ X E|X − lqX(1/2)|. 2. If a > rqX(1/2) then E|X − a| > E|X − rqX(1/2)|. 3. If lqX(1/2) ≤ a, b ≤ rqX(1/2) then E|X − a| = E|X − b|. Step 1. Let b = lqX(1/2) and ǫ = b− a > 0. Then 231 8.4. Other loss functions E|X − b| = ∫ X≥b (X − b)dP + ∫ X rqX(1/2) = c one can either repeat a similar argument to that in Step 1 or use the Quantile Symmetry Theorem as we do here. Consider the random variable −X. Then a > rqX(1/2) ⇒ −a < −rqX(1/2) = lq−X(1/2) Now since −a < −c = lq−X(1/2) by applying Step 1 to −X, we get E| −X − (−c)| < E| −X − (−a)| ⇒ E|X − a| < E|X − c|. Step 3. If lqX(1/2) = rqX(1/2) the result is trivial. Otherwise let b = lqX(1/2) < rqX(1/2) = c and a < a ′ ∈ [b, c]. By Lemma 5.3.1 if lqX(p) < rqX(p). So P (X ≤ lqX(p)) = p and P (X ≥ rqX(p)) = 1 − p. Hence P (X ≤ b) = P (X ≥ c) = 1/2. Let ǫ = a′ − a. Then 232 8.4. Other loss functions E|X − a| = ∫ b 0. Then, we can express this family as the quantile–specified family {Xθ}θ∈R+ with P = (1/2). The reason is if Xθ ∼ U(0, 2a) then θ = lqXθ(1/2) = a. 237 9.2. Quantile–specified parameter families Example Consider the family N = {N(µ, σ2)| − ∞ < µ < +∞, σ2 > 0}. Then we claim this is a quantile–specified family. To verify that claim let P = (1/2, p2) where p2 = P (Z ≤ 1) and Z has the standard normal distribution. Let µ = lqX(1/2) = θ1, and µ+ σ2 = lqX(p2) = θ2. Then we can equivalently represent N by {Xθ}θ=(θ1,θ2)∈Θ, where Θ = {(θ1, θ2)|θ1 < θ2}. Because (µ, σ2) is in 1:1 correspondence with θ = (θ1, θ2) as defined above, where P (X ≤ µ+ σ2) = P (Z ≤ 1) = p2. Note that this representation is not unique. For example, we can take P = (1/2, p2) with p2 = P (Z ≤ 2). Then the alternate re-parametrization in terms of variables is µ = lqX(1/2) = θ1, and µ+ 2σ2 = lqX(p2) = θ2. It should be clear that if the goal is to infer the parameters of the original family, i.e. a in U(0, 2a) and (µ, σ2) then it is desirable that the θi are simple functions of the original parameters and the original parameters be easily obtainable from the θi. Linear combinations seem to be the easiest to handle. We suggest the following framework to estimate the parameters: • Express the original parameterized family Xβ as a quantile specified family Xθ with P = (p1, · · · , pk). 
• Use argminDi∈FE[L(θi,Di(input)], i = 1, · · · , k where input is the information available to us, usually a random sam- ple, (X1, · · · ,Xn), Di is an estimator of θi = lqX(pi) (a function of the random sample), L is a loss function and F is the class of the estimators. The loss functions of our interest are L = δXθ and L = δ c Xθ , c > 0. 238 9.2. Quantile–specified parameter families • Using the estimated parameters solve for the original parameters, the βi. Note that δcXθ , c > 0 depends on the unknown distribution function Xθ. Many issues in the above framework need to be addressed including: the existence and uniqueness of the argmin, properties of the estimators and so on which we leave for future research. In next subsections we show the Equivariance property of the method and apply it to a particular class of estimators using simulations. 9.2.1 Equivariance of quantile–specified families estimation Here, we show the equivariance property of estimation using quantile–specified families in the following lemmas. Lemma 9.2.1 Suppose {Xθ}θ∈Θ is left–quantile–specified with P = (p1, · · · , pk), and φ is a continuous strictly increasing transformation which induces a map on Rk: Φ : Rk → Rk, (θ1, · · · , θk) 7→ (φ(θ1), · · · , φ(θk)). Let Θ′ = Φ(Θ), θ′ = Φ(θ) for θ ∈ Θ and consider the family of distributions Yθ′ = φ(Xθ). Then {Yθ′}θ′∈Θ′ is also a left–quantile–specified family with the same index vector P = (p1, · · · , pk). Proof Suppose the distribution of Xθ is specified by Fθ. Then P (Yθ′ ≤ a) = P (φ(Xθ) ≤ a) = Fθ(φ −1(a)) = FΦ−1(θ′)(φ −1(a)). Hence the distribution of Yθ′ is known given θ ′. It remains to show that for θ′ ∈ Θ′, (θ′1, · · · , θ′k) = (lqYθ′ (p1), · · · , lqYθ′ (pk)). But (lqYθ′ (p1), · · · , lqYθ′ (pk)) = (lqφ(Xθ)(p1), · · · , lqφ(Xθ)(pk)) = (φ(lqXθ(p1)), · · · , φ(lqXθ (pk))) = (φ(θ1), · · · , φ(θk)) = (θ′1, · · · , θ′k) . 239 9.2. Quantile–specified parameter families Lemma 9.2.2 Suppose {Xθ}θ∈Θ is left–quantile–specified with P = (p1, · · · , pk), and φ is a continuous strictly decreasing transformation which induces a map on Rk: Φ : Rk → Rk, (θ1, · · · , θk) 7→ (φ(θk), · · · , φ(θ1)). Let Θ′ = Φ(Θ), θ′ = Φ(θ) for θ ∈ Θ and consider the family of distributions Yθ′ = φ(Xθ). Then {Yθ′}θ′∈Θ′ is a right–quantile–specified family with the index vector P = (1− pk, · · · , 1− p1). Proof Suppose the distribution of Xθ is specified by Fθ. Then since Fθ the left closed distribution of Xθ is known, the right closed distribution of Xθ, GcX(Xθ) is also known. Then P (Yθ′ ≤ a) = P (φ(Xθ) ≤ a) = P (Xθ ≥ φ−1(a)) = Gcθ(φ −1(a)) = GcΦ−1(θ′)(φ −1(a)), where Gcθ is the right closed distribution function. Hence the distribution of Yθ′ is known given θ ′. It remains to show that for θ′ ∈ Θ′, (θ′1, · · · , θ′k) = (rqYθ′ (1− pk), · · · , rqYθ′ (1− p1)). But (rqYθ′ (1− pk), · · · , rqYθ′ (1− p1)) = (rqφ(Xθ)(1− pk), · · · , rqφ(Xθ)(1− p1)) = (φ(lqXθ (pk)), · · · , φ(lqXθ (p1))) = (φ(θk), · · · , φ(θ1)) = (θ′1, · · · , θ′k). For a parameter θ, we want to find argminD∈FE(δX (lqX(p),D)) 240 9.2. Quantile–specified parameter families where F is a family of estimators for θ and D ∈ F is a function D : Rn → R, where n is the size of the sample and D(X1, · · · ,Xn) is the estimator of θ = lqX(p). Lemma 9.2.3 Suppose a random sample X1, · · · ,Xn is given, Xθ is a left– quantile–specified family with θ = lqX(p), φ a strictly monotonic continuous transformation on R, F is a family of estimators to estimate θ and the following argmin is nonempty argminD∈FE(δX(lqX(θ),D)), and let F ′ = φ(F). 
Then a) if φ is strictly increasing argminD′∈F ′E(δφ(X)(lqφ(X)(p),D′)) = φ(argminD∈FE(δX (lqX(p),D))) b) if φ is strictly decreasing argminD′∈F ′E(δφ(X)(lqφ(X)(p),D′)) = φ(argminD∈FE(δX(rqX(1−p),D))) Proof We only prove a) and b) is similar. min D′∈F ′ E(δφ(X)(lqφ(X)(p),D ′)) = min D∈F E(δφ(X)(φ(lqX)(p), φ(D))) = min D∈F E(δX(lqX(p),D)) Note that for a general family of estimators, F argminD∈FE(δX (lqX(p),D)) depends on the unknown distribution X by δX . We suggest two possible ways to get around this issue: • Restrict to a family F that argminD∈FE(δX(lqX(p),D)) does not depend on the distribution. 241 9.2. Quantile–specified parameter families • Use the empirical distribution to approximate the expression E(δX (lqX(p),D)). We will not explore the second method here and leave it for future re- search. Next subsection shows an important instance of the first method. 9.2.2 Continuous distributions with the order statistics family of estimators Suppose that the desired distribution X is continuous then E(δX (lqX(p),D)) = E|FX(lqX(p))− FX(D)| = E|p− FX(D)|. Now suppose a random sample X1, · · · ,Xn is given and we want to estimate lqX(p). We restrict to an important family of estimators, order statistics: F = {X1:n, · · · ,Xn:n}. Then for i = 1, · · · , n: E|p − FX(Xi:n)|, does not depend on FX . This is because the distribution of FX(Xi:n) does not depend on FX . It can be obtained as shown below: Gi(y) = P (FX(Xi:n) ≤ y) = P (Xi:n ≤ lqX(y)) = n∑ j=i ( n j ) P (X1, · · · ,Xj ≤ lqX(y) and Xj+1, · · · ,Xn > lqX(y)) = n∑ j=i ( n j ) P (X ≤ lqX(y))jP (X > lqX(y))n−j = n∑ j=i ( n j ) yj(1− y)n−j . By taking the derivative of the above expression we can find the density function gi(p) and conclude: E|p − FX(Xi:n)| = ∫ 1 0 |p − y|gi(y)dy. For a given p we want to find the i that minimize above which does not on FX . We can approach this problem theoretically to find such an i. Or we could try to estimate these integral using numerical methods. However, here we use simulation for two examples and leave the general case for future research. 242 9.3. Probability divergence (distance) measures Example Consider a family of continuous variables, quantile–specified by P = (1/2, P (Z ≤ 1)) where Z is the standard normal. Suppose a ran- dom sample X1, · · · ,Xn is given and we want to estimate lqX(1/2) and lqX(P (Z ≤ 1)) using the family of estimators, order statistics: F = {X1:n, · · · ,Xn:n}. We estimate the parameters for n = 25 and n = 20. In order to minimize the loss we can approximate the loss by approximating the integral in Equation 9.2.2 or approximating E|p − FX(Xi:n)|, using an arbitrary continuous distribution such as standard normal to do the simulations. For a large number M , we create M samples of length n from normal and for every sample we find the i that minimize the loss. Then for every i, we compute the mean of such losses and find out which has the smallest mean loss. We do that for M = 1, · · · , 1000. The results for n = 25 are given in Figure 9.1. We see that for large M the estimator for lqX(1/2) is X13:25 and for lqX(P (Z < 1)) it is X22:25. The results for n = 20 are given in Figure 9.2. The estimator for lqX(1/2) has changed between X10:20 and X11:21 and it is X18:20 for lqX(P (Z ≤ 1)). This shows that the argmin is not necessarily unique. 9.3 Probability divergence (distance) measures In probability theory, physics and statistics several measures have been intro- duced as the “distance” of two probability measures (or random variables). 
These measures have several applications, one of which is parameter estima- tion. We list some of these measures in this section. The next section then introduces new measures of distance among probability measures using the c-probability loss functions (c ≥ 0). • The Kullback-Leibler (KL) distance: Suppose P,Q are probability measures and P is absolutely continuous with respect to Q. Then consider the Radon-Nikodym derivative of P with respect to Q, dPdQ [See [9]]. Then we define: DKL(P,Q) = ∫ Ω log dP dQ dP. If P and Q have density functions over R, p(x), q(x) then 243 9.3. Probability divergence (distance) measures 0 200 400 600 800 1000 11 13 15 P(Z<0) o pt im al o rd er 0 200 400 600 800 1000 20 21 22 23 24 P(Z<1) simulations number o pt im al o rd er Figure 9.1: The order statistics family members that estimate lqX(1/2) and lqX(P (Z ≤ 1)) for a random sample of length 25 obtained by generating samples of size 1 to 1000 from a standard normal distribution 244 9.3. Probability divergence (distance) measures 0 200 400 600 800 1000 8 9 10 11 12 P(Z<0) o pt im al o rd er 0 200 400 600 800 1000 15 .0 16 .0 17 .0 18 .0 P(Z<1) simulations number o pt im al o rd er Figure 9.2: The order statistics family members that estimate lqX(1/2) and lqX(P (Z ≤ 1)) for a random sample of length 20 obtained by generating samples of size 1 to 1000 from a standard normal distribution 245 9.3. Probability divergence (distance) measures ∫ R p(x)log( p(x) q(x) )dx. The symmetric version of this distance is called Kullback-Jeffreys DKJ(P,Q) = DKL(P,Q) +DKL(Q,P ). We show that the Kullback-Leibler distance is invariant under bijec- tive differentiable monotonic transformations when the density func- tions exists and are positive everywhere on the real line. Let g be a monotonic, bijective and differentiable (bijective and differentiable will automatically imply strictly monotonic) transformation and X,Y random variables with density functions fX(x) and fY (x), positive on R. Then the density functions of g(X) and g(Y ) are respectively (g−1)′(x)fX(g−1(x)) and (g−1)′(x)fY (g−1(x)). Hence DKL(φ(X), φ(Y )) =∫∞ −∞(g −1)′fX(g−1(x)) log (g−1)′fX(g −1(x)) (g−1)′fY (g−1(x)) dx =∫∞ −∞(g −1)′fX(g−1(x)) log (fX(g −1(x)) fY (g−1(x)) dx. We use the change of variable x = g(y). Then dx = (g−1)′dy and the proof is complete. For the strictly decreasing case note that the density function of g(X) and g(Y ) are respectively −(g−1)′(x)fX(g−1(x)) and −(g−1)(x)′fY (g−1(x)) and a similar argument works. We leave the general case (where the density function does not exist or is not positive over all the real line) as an open(?) problem. • Let P and Q be two probability distributions over a space Ω such that P is absolutely continuous with respect to Q. Then, for a convex function f such that f(1) = 0, the f -divergence of Q from P is If (P,Q) = ∫ Ω f ( dP dQ ) dQ. Note that the same argument as the one for KL distance shows that this distance is invariant for monotonic differentiable bijective trans- formations when the density functions exist and are positive. 246 9.3. Probability divergence (distance) measures • The Kolmogorov-Smirnov distance: Suppose X,Y are random vari- ables on R with distribution functions FX and FY . Then KS(X,Y ) = sup x∈R |FX (x)− FY (x)|. The Gilvenko-Cantelli Theorem states that if X1, · · · ,Xn is a random sample drawn from the distribution Fθ0 and Fn, the empirical distri- bution function lim n→∞KS(Fθ0 , Fn) > ǫ = 0, a.s.. Note that the KS metric is invariant under monotonic transforma- tions. 
Take φ to be strictly monotonic on R. Then sup x∈R |Fφ(X)(x)− Fφ(Y )(x)| = sup x∈R |FX (φ−1(x)) − FY (φ−1(x))| = sup φ−1(x)∈R |FX (φ−1(x)) − FY (φ−1(x))| = sup x∈R |FX(x)− FY (x)|. Although theKS metric is invariant under strictly monotonic transfor- mations, it is not intuitively very appealing as we show in the following example. Example Consider X ∼ U(0, 1), Y ∼ U(1/2, 3/2) and let Z be dis- tributed as FZ : FZ(z) =   0 z < 0 1/2 0 ≤ z ≤ 1/2 z 1/2 < z < 1 1 z ≥ 1 . Then we have KS(X,Y ) = KS(X,Z) = 1/2. But we observe that FZ matches FX on (1/2, 1) while FX and FY differ by 1/2 on (0, 1). Another way to see the defect is the quantiles of Z and X match half of the time but the quantiles of X and Y are off as much as one half of a unit at all times. 247 9.4. Quantile distance measures To overcome the above problem one might (naively) suggest using an integral version IKS(X,Y ) = ∫ x∈R |FX(x)− FY (x)|dx. However, this definition is not well-defined. To see that consider FX(x) = 1 − 8/x, x > 8 and FY (x) = 1 − 9/x, x > 9. Then |FX(x) − FY (x)| = 1/x on [8,∞], which does not have finite inte- gral. It is also not invariant under strictly monotonic transformations for if φ is strictly monotonic and differentiable, IKS(φ(X), φ(Y )) = ∫ x∈R |FX(φ−1(x))− FY (φ−1(x))|dx. In the right hand side of the above equation the factor (φ−1)′, that would make the distance invariant under transformations, is missing. • Lévy distance: Suppose (Ω,Σ, Pθ)θ∈Θ be a statistical space, where the Pθ are probability measures on Ω with σ-field Σ. Then we define Lev(Fθ1 , Fθ2) = inf{ǫ > 0|Fθ1(x− ǫ) < Fθ2(x) < Fθ1(x+ ǫ), ∀x ∈ R}. It can be shown that convergence in the Lévy metric implies weak convergence for distribution function in R [31]. It is shift invariant but not scale invariant as discussed in [31]. 9.4 Quantile distance measures This section introduces the quantile distance measure to measure the dis- tance among distribution functions on R (or random variables). We begin with a general definition using the quantiles and then consider interesting particular cases. The intuition behind all these metrics lies in their capabil- ity to measure the separation in the quantiles of two random variables. Definition Suppose a statistical space (Ω, P, {Xθ}θ∈Θ) and a loss function L defined over the extended real numbers R ∪ {−∞,+∞} are given. Also let E be a measurable subset of (0,1) and dµE is a measure on E. Then we can define the following two measures of distance between Xθ1 and Xθ2 , 248 9.4. Quantile distance measures SQDEL (Xθ1 ,Xθ2) = sup p∈E L(lqXθ1 (p), lqXθ2 (p)), and IQDEL (Xθ1 ,Xθ2) = ∫ p∈E L(lqXθ1 (p), lqXθ2 (p))dµE , which we call the sup quantile distance and integral quantile distance re- spectively. Remark. Note that in general SQDEL and IQD E L are neither well-defined nor metrics on the space of random variables.. Remark. We can also take L(rqXθ1 (p), rqXθ2 (p)) in the above definitions. Remark. The natural choice for E is (0, 1) and the measure µ = L, where L is the Lebègues measure on (0, 1). However, one might choose another E depending on the purpose. For example E = (0.8, 1) might be more appropriate if the purpose is modeling the high extremes. Remark. Interesting choices for L are δXθ1 , δ c Xθ1 , δXθ1 + δXθ2 and δ c Xθ1 + δcXθ2 . Note that in all these cases the quantile distance is defined since these quantities are bounded respectively by 1, 1 + c, 2, 2 + 2c. The rest of this report focuses on quantile distances obtained from c- probability losses (c ≥ 0). 
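As a concrete illustration of these definitions for continuous random variables and the choice L = δ_X with c = 0, the sup and integral quantile distances can be computed numerically; for a continuous X the loss reduces to |p − F_X(lq_Y(p))|, the form used in Section 9.4.4 below. The following R sketch is ours (the function names sqd and iqd are illustrative, not from any package):

    sqd <- function(FX, qY, lower = 0, upper = 1) {
      # sup quantile distance over E = (lower, upper), evaluated on a fine grid
      p <- seq(lower, upper, length.out = 1e5 + 1)
      max(abs(p - FX(qY(p))))
    }
    iqd <- function(FX, qY, lower = 0, upper = 1) {
      # integral quantile distance over E = (lower, upper), Lebesgue measure
      integrate(function(p) abs(p - FX(qY(p))), lower, upper)$value
    }
    # standard normal X versus Cauchy(0, 1) Y, overall and on the upper tail E = (0.8, 1)
    iqd(pnorm, qcauchy)
    iqd(pnorm, qcauchy, lower = 0.8)
    sqd(pnorm, qcauchy)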
(Note that c = 0 corresponds to the usual prob- ability loss.) 9.4.1 Quantile distance invariance under continuous strictly monotonic transformations This subsection show the invariance of quantile distance under strictly mono- tonic tranformations in the following lemmas. Lemma 9.4.1 (Quantile distance invariance under continuous strictly in- creasing transformations) Suppose X,Y are random variables, let IQDEδcX (X,Y ) = ∫ E L(lqX(p), lqY (p))dµE , and SQDEδcX (X,Y ) = sup p∈E L(lqX(p), lqY (p)), where E ⊂ (0, 1), c ≥ 0 and µE is a measure on E. Then IQDEδcX (X,Y ) = IQDEδc φ(X) (φ(X), φ(Y )), 249 9.4. Quantile distance measures and SQDEδcX (X,Y ) = SQDEδc φ(X) (φ(X), φ(Y )), for all φ : R→ R continuous and strictly increasing transformations. Proof The proof attains from noting that δφ(X)(lqφ(X)(p), lqφ(Y )(p)) = δφ(X)(lqφ(X)(p), lqφ(Y )(p)) + c(1− 1{0}(lqφ(X)(p)− lqφ(Y )(p))) = δφ(X)(φ(lqX(p)), φ(lqY (p))) + c(1 − 1{0}(lqX(p)− lqY (p))) = δX(lqX(p), lqY (p)) + c(1− 10(lqX(p)− lqY (p))) = δcX(lqX(p), lqY (p)). Remark. The above lemma is also true for δcX + δ c Y , which follows imme- diately. Lemma 9.4.2 If E a measurable subset of [0,1] then the two following dis- tance measures are equal: LQDEδX (X,Y ) = ∫ E δX(lqX(p), lqY (p))dp, and RQDEδX (X,Y ) = ∫ E δX(rqX(p), rqY (p))dp. The following two measures are also equal: LQDEδX+δY (X,Y ) = ∫ E (δX + δY )(lqX(p), lqY (p))dp, and RQDEδX+δY (X,Y ) = ∫ E (δX + δY )(rqX(p), rqY (p))dp. Proof We prove the first part part of the lemma and the second part is deduced from the first. We showed in the quantile definition section that the set {p|lqX(p) 6= rqX(p)} is countable. Hence, {p|lqX(p) 6= rqX(p)} ∪ {p|lqY (p) 6= rqY (p)}, 250 9.4. Quantile distance measures is also countable. In the complement of this set δX(lqX(p), lqY (p)) = δX(rqX(p), rqY (p)). Hence the integral values are the same. Remark. Note that the above theorem also holds for any measure µ on any E ⊂ (0, 1) which is continuous with respect to the Lebègue measure. Because of this lemma we will not worry about the left or right quantile in the definitions. The following lemma establishes a relationship between LQDδX and LQDδcX . Lemma 9.4.3 Let E be a measurable subset of [0,1] and kE = L{p ∈ E|lqX(p) 6= lqY (p)}, where L is the Lebègue measure. Let LQDEδcX (X,Y ) = ∫ E δcX(lqX(p), lqY (p))dp, and LQDEδX (X,Y ) = ∫ E δX(lqX(p), lqY (p))dp. Then LQDEδcX (X,Y ) = LQDEδX (X,Y ) + ckE . Proof LQDδcX (X,Y ) =∫ E δcX(lqX(p), lqY (p))dp =∫ lqX(p)=lqY (p),p∈E δcX(lqX(p), lqY (p))dp +∫ lqX(p)6=lqY (p),p∈E δcX(lqX(p), lqY (p))dp =∫ lqX(p)=lqY (p),p∈E δX(lqX(p), lqY (p))dp +∫ lqX(p)6=lqY (p),p∈E [δX(lqX(p), lqY (p)) + c(1− 1{0})(lqX(p)− lqY (p))]dp = LQDEδX (X,Y ) + ckE . 251 9.4. Quantile distance measures Remark. Note that the same is true for RQDEδcX and RQDEδX . Also L{p ∈ E|lqX(p) 6= lqY (p)} = L{p, p ∈ E|rqX(p) 6= rqY (p)}, because lqX , rqX and lqY , rqY are unequal only on a measure zero set. Hence the constant kE is the same as before and RQDEδcX (X,Y ) = RQDEδX (X,Y ) + ckE . Lemma 9.4.4 Suppose E a measurable subset of [0,1] then the two follow- ing distance measures are equal LQDEδcX (X,Y ) = ∫ p∈E δcX(lqX(p), lqY (p))dp, and RQDEδcX (X,Y ) = ∫ p∈E δcX(rqX(p), rqY (p))dp. Also these two measures are equal LQDEδcX+δ c Y (X,Y ) = ∫ p∈E (δcX + δ c Y )(lqX(p), lqY (p))dp, and RQDEδcX+δ c Y (X,Y ) = ∫ p∈E (δcX + δ c Y )(rqX(p), rqY (p))dp. Proof This is a straightforward consequence of the previous two lemmas. Remark. 
Note that the above theorem also holds for any measure µ on any E ⊂ (0, 1) which is continuous with respect to the Lebègue measure. Lemma 9.4.5 (Quantile distance invariance under continuous strictly mono- tonic transformations) Suppose X,Y are random variables and let QDE(X,Y ) = LQDEδX (X,Y ), (9.1) QDEc (X,Y ) = LQD E δcX (X,Y ), (9.2) 252 9.4. Quantile distance measures where, E ⊂ (0, 1) symmetric, meaning p ∈ E ⇔ (1− p) ∈ E, and µ is abso- lutely continuous with respect to the Lebègue measure and symmetric on E in the sense that if A is measurable then so is 1−A while µ(A) = µ(1−A). Then 9.1 and 9.2 are invariant under continuous strictly monotonic trans- formations, i.e. a) QDE(φ(X), φ(Y )) = LQDEδφ(X)(φ(X), φ(Y )) = QD E(X,Y ) = QDEδX (X,Y ), b) QDEc (φ(X), φ(Y )) = LQD E δc φ(X) (φ(X), φ(Y )) = QDEc (X,Y ) = QD E δcX (X,Y ). Proof For φ continuous and strictly increasing transformations, we have shown the result in Lemma 9.4.1. Suppose φ is continuous and strictly decreasing. a) We use lqφ(X)(p) = φ(rqX(1− p)) which we proved above using quantile symmetries: δφ(X)(lqφ(X)(p), lqφ(Y )(p)) = δφ(X)(φ(rqX(1− p)), φ(rqY (1− p))) = δ−φ(X)(−φ(rqX(1− p)),−φ(rqY (1 − p))), where the last equality is because δX(a, b) = δ−X(−a,−b). Now since −φ is continuous and increasing, the above is equal to δX(rqX(1− p), rqY (1− p)). We use this result in the following: QDE(X,Y ) = ∫ E δX(lqX(p), lqY (p))dµE = ∫ E δX(rqX(1− p), rqY (1− p))dµE . Then we do a change of variable p→ (1− p) and by symmetry of µ, we find that the above is equal to∫ E δX(rqX(p), rqY (p))dµE . But by the previous lemmas and since µ is continuous with respect to the Lebègue measure, this is equal to∫ E δX(lqX(p), lqY (p))dµE . 253 9.4. Quantile distance measures b) We only consider continuous and strictly decreasing functions φ: LQDEδc φ(X) (φ(X), φ(Y )) =∫ E c(1 − 1{0}(lqφ(X)(p)− lqφ(Y )(p)))dp + LDQδφ(X)(X,Y ) = ckE + LDQ E δφ(X) (φ(X), φ(Y )), where, kE = µ{p ∈ E|lqφ(X)(p) 6= lqφ(Y )(p)} = µ{p ∈ E|φ(rqX(1− p)) 6= φ(rqY (1− p))} = µ{p ∈ E|rqX(1− p) 6= rqY (1− p)} = µ{p ∈ E|rqX(p) 6= rqY (p)} = µ{p ∈ E|lqX(p) 6= lqY (p)}. We showed in a) that LDQEδφ(X)(φ(X), φ(Y )) = LDQ E δX (X,Y ) and because we just showed that kE = µ{p ∈ E, |(lqX(p)) 6= (lqY )(p)}, we conclude DQEδc φ(X) = ckE+LDQ E δφ(X) (φ(X), φ(Y )) = ckE+LDQ E δX (X,Y ) = LQDEδcX (X,Y ). 9.4.2 Quantile distance closeness of empirical distribution and the true distribution The next theorem shows that the quantile distance between the sample distribution and the true distribution tends to zero when the sample size becomes large. Theorem 9.4.6 Let X1,X2, · · · be an i.i.d. random sample drawn from an arbitrary distribution function F . Then (a) SQDδX (F,Fn) = sup p∈(0,1) δF (lqFn(p), lqF (p))→ 0., a.s., 254 9.4. Quantile distance measures and (b) IQDδX (F,Fn) = ∫ p∈(0,1) δF (lqFn(p), lqF (p))→ 0., a.s.. Proof We only need to prove (a) since (b) is a straightforward consequence of (a). Clearly lqFn(p) = Xi:n for p ∈ ((i − 1)/n, i/n], i = 1, 2, · · · , n. Also F cn(Xi:n) ≥ i/n and F on(Xi:n) ≤ (i − 1)/n. Pick an N large enough in the Glivenko-Cantelli Theorem such that n > N ⇒ |Fn(x)− F (x)| < ǫ, and |F on(x)− F o(x)| < ǫ, uniformly in x. Consider two cases: Case I: Xi:n < lqF (p). Then δF (lqFn(p), lqF (p)) = δF (Xi:n, lqF (p)) = F o(lqF (p))− F c(Xi:n) ≤ F o(lqF (p))− F cn(Xi:n) + ǫ ≤ p− i/n+ ǫ ≤ ǫ. Case II: Xi:n > lqF (p). Then δF (lqFn(p), lqF (p)) = δF (Xi:n, lqF (p)) = F o(Xi:n)− F c(lqF (p)) ≤ F on(Xi:n) + ǫ− p ≤ (i− 1)/n + ǫ− p ≤ ǫ. 
Since this holds for i = 1, 2, · · · , n and (0, 1) = ∪i=1,2,··· ,n( i−1n , in ], the supre- mum is also less than ǫ. 9.4.3 Quantile distance and KS distance closeness Clearly ifX ∼ Y , then LQDEL (X,Y ) = 0. In the following theorem we study the inverse question for L = δcX , c ≥ 0 and E = [0, 1]. The Kolmogorov Smirnoff distance was defined to be KS(X,Y ) = sup x∈R |FX(x)− FY (x)|. We also define the “open Kolmogorov Smirnoff” distance as KSo(X,Y ) = sup x∈R |F oX(x)− F oY (x)|. 255 9.4. Quantile distance measures Lemma 9.4.7 Suppose X,Y are random variables, then KSo(X,Y ) = KS(X,Y ). To prove the lemma, we show that KS(X,Y ) ≤ ǫ⇔ KSo(X,Y ) ≤ ǫ. Suppose KS(X,Y ) ≤ ǫ. If the R.H.S does not hold then there exist x ∈ R such that F oX(x) > F o Y (x) + ǫ. Since F oX is left continuous, we conclude there is a y < x such that F oX(y) > F o Y (x) + ǫ. Hence, F cX(y) ≥ F oX(y) > F oY (x) + ǫ ≥ F cY (y) + ǫ, which is a contradiction. Inversely, suppose KSo(X,Y ) ≤ ǫ. If the L.H.S does not hold then there exist x ∈ R such that F cX(x) > F c Y (x) + ǫ. Since F cY is right continuous, we conclude there is y > x such that F cX(x) > F c Y (y) + ǫ. Hence, F oX(y) ≥ F cX(x) > F cY (x) + ǫ ≥ F oY (y) + ǫ, which is a contradiction. Lemma 9.4.8 Kolmogorov Smirnoff closeness implies Quantile distance close- ness. More formally if for two random variables X,Y , KS(X,Y ) ≤ ǫ then SQDδX (X,Y ) = sup p∈(0,1) δX(lqX(p), lqY (p)) ≤ ǫ. Proof For p ∈ (0, 1), suppose lqX(p) < lqY (p). Then δ(lqX(p), lqY (p)) = F o X(lqY (p))− F cX(lqY (p)) ≤ 256 9.4. Quantile distance measures F o(lqY (p)) + ǫ− p ≤ p+ ǫ− p = ǫ. The discussion for lqY (p) < lqX(p) is similar. Remark. By symmetry also KS(X,Y ) ≤ ǫ⇒ SQDδY (X,Y ) ≤ ǫ. The converse needs the continuity assumption: Lemma 9.4.9 Suppose X,Y are continuous random variables. Then quan- tile distance closeness implies Kolmogorov Smirnoff distance closeness. More formally, suppose SQDδX (X,Y ) = sup p∈(0,1) δX(lqX(p), lqY (p)) ≤ ǫ and SQDδY (X,Y ) = sup p∈(0,1) δY (lqX(p), lqY (p)) ≤ ǫ. Then KS(X,Y ) ≤ ǫ. Proof Suppose the result is not true and there exists x such that |FX(x)− FY (x)| ≥ ǫ. Then let p1 = FX(x) and p2 = FY (x) and without loss of generality assume p2 > p1. Since FY (x) = p2, lqY (p2) ≤ x. But lqX(p2) = y > x. Otherwise p2 ≤ FX(lqX(p2)) = FX(x) = p1 which is a contradiction. δX(lqX(p2), lqY (p2)) = F o X(y)− FY (x) = FX(y)− FY (x) ≥ p2 − p1 > ǫ, which is a contradiction. Note that we have used continuity of X in the second equality. Remark. This is not true in general. Consider X with P (X = 0) = 1 and Y with P (Y = 1) = 1. Then FX(1/2) − FY (1/2) = 1 and SQDδX (X,Y ) + SQDδY (X,Y ) = 0. In the next theorem we show that if the quantile distance between two variables are zero and one of them is continuous then they are identically distributed. Theorem 9.4.10 Suppose F1, F2 distribution functions, F1 continuous and their quantile distance is zero. In other words, sup p∈(0,1) δF1(lqF1(p), lqF2(p)) = 0. Then F1 = F2. 257 9.4. Quantile distance measures Proof Suppose the result does not hold. Then we have two cases. Case I: ∃x, p1 = F1(x) < F2(x) = p2. F1(x) = p1 ⇒ lqF1(p2) = y > x, and F2(x) = p2 ⇒ lqF2(p2) = z ≤ x. Hence δF1(lqF1(p2), lqF2(p2)) = F1(y)− F1(z) ≥ F1(y)− F1(x) ≥ p2 − p1. Case II: ∃x, p1 = F1(x) > F2(x) = p2. Take p3 ∈ (p2, p1). Then F1(x) = p1 ⇒ lqF1(p3) = y ≤ x. However if lqF1(p3) = x, we conclude F1(lqF1(p3)) = F1(x)⇒ p3 = p1, which is a contradiction. Note that we have used the continuity of F1 in F1(lqF1(p3)) = p3. 
Also F2(x) = p2 ⇒ lqF2(p3) = z > x. Hence δF1(lqF1(p3), lqF2(p3)) = δF1(y, z) = F1(z)−F1(y) ≥ F1(x)−F1(y) ≥ p1−p3. Here we prove an easy lemma regarding the continuity of δ. Lemma 9.4.11 Suppose F is a continuous distribution function. For any fixed b ∈ R, δF (a, b) is a continuous function in a. Proof Note that δF (a, b) = |F (b) − F (a)| because F is a continuous func- tion. Lemma 9.4.12 Suppose F1, F2 are distribution functions, F1 is continuous and δF1(lqF1(p0), lqF2(p0)) = ∆ > 0, for some p0 ∈ (0, 1) then there exist 0 < ǫ < p0 such that δF1(lqF1(p), lqF2(p)) > ∆/3, p ∈ (p0 − ǫ, p0). 258 9.4. Quantile distance measures Proof Since F1 is continuous δF1(lqF1(p), lqF2(p)) = |p− F1(lqF2(p))|. Let lqF2(p0) = x1 and F1(x1) = p1. Then |p0 − p1| = ∆. By continuity of F1 there exist ǫ ′ > 0 such that x ∈ (x1 − ǫ′, x1 + ǫ′)⇒ F1(x) ∈ (p1 − ∆ 3 , p1 + ∆ 3 ). By left continuity of lqF2 for ǫ ′ positive, there exists an 0 < ǫ < min(∆/3, p0) such that p ∈ (p0 − ǫ, p0)⇒ lqF2(p) ∈ (x1 − ǫ′, x1). Hence for p ∈ (p0−ǫ, p0), we have F1(lqF2(p)) ∈ (p1−∆/3, p1+∆/3). Hence δF1(lqF1(p), lqF2(p)) = |p − F1(lqF2(p))| ≥ |p0 − p1| − ǫ− ∆ 3 ≥ ∆/3. Lemma 9.4.13 Suppose F1, F2 are distribution functions and F1 is contin- uous. Also assume IDQδF1 (F1, F2) = ∫ 1 0 δF1(lqF1(p), lqF2(p)) = 0. Then F1 = F2. Proof The assumption implies that δF1(lqF1(p), lqF2(p)) = 0, ∀p ∈ (0, 1). For otherwise if δF1(lqF1(p0), lqF2(p0)) = ∆ > 0, for some p0. By the previous lemma there exist 0 < ǫ < p0 such that δF1(lqF1(p), lqF2(p)) > ∆/3, p ∈ (p0 − ǫ, p0). This implies that ∫ 1 0 δF1(lqF1(p), lqF2(p)) ≥ ǫ∆, which is a contradiction. Now we can use Lemma 9.4.10 to conclude F1 = F2. 259 9.4. Quantile distance measures 9.4.4 Quantile distance for continuous variables From now on we only consider continuous variables and the probability loss function with c = 0, δX . Some results can be generalized to the general distributions but we leave that for future research. We use the simpler notations: QDX(X,Xθ) = LQDδX (X,Xθ) = ∫ 1 0 δX(lqX(p), lqXθ (p))dp. Also QD(X,Xθ) = QDX(X,Xθ) +QDXθ(X,Xθ). Quantile distance in the continuous case can be obtained by: QDX(X,Xθ) = ∫ 1 0 δX(lqX(p), lqXθ (p))dp =∫ 1 0 |FX ◦ lqX(p)− FX ◦ lqXθ(p)|dp = ∫ 1 0 |p− FX ◦ lqXθ(p)|dp. We can also consider the quantile distance closeness in the tails. Consider the tails to correspond to probabilities E = (0, 0.025) ∪ (0.0975, 1). Then L(E) = 0.05 (L being the Lèbegue measure) and we can define QDtailX (X,Xθ) = ∫ E δX(lqX(p), lqXθ(p))dp/0.05 =∫ E |FX ◦ lqX(p)− FX ◦ lqXθ(p)|dp/0.05 = ∫ E |p− FX ◦ lqXθ(p)|dp/0.05. We have divided the integral by 0.05 the length of E to make this measure comparable to the overall measure over [0,1], which has length 1. Then we compute the quantile distance of the standard normal to some known distributions. Both the overall quantile distance and the tail quantile distance are calculated (by approximating the integrals) and the results are given in Table 9.1 and 9.2. For the overall quantile distance we observe that QDX and QDY have almost the same value. A theoretical result regarding this observation is desirable and we leave this for future research. This is not true in general for the tail distance. Then we find the closest Cauchy with scale parameter in (0,4) (and loca- tion parameter=0) to the standard normal. Once using the quantile distance and once using the tail quantile distance. We find the quantile distance of 260 9.4. 
the standard normal to all Cauchy distributions with scale parameters on the grid (0.01, 0.02, · · · , 4.00) (and location parameter 0). The results are given in Figures 9.3 and 9.5, respectively. For the overall quantile distance the optimal Cauchy is the one with scale parameter 0.66, and for the tail quantile distance the optimal Cauchy is the one with scale parameter 0.12. Figure 9.4 compares the standard normal distribution function with a few Cauchy distributions, including the optimal one, and Figure 9.6 makes the same comparison in the upper tail, where the optimal Cauchy for the tails has scale parameter 0.12. Figure 9.7 compares the standard normal distribution with the optimal Cauchy for the overall quantile distance and the optimal Cauchy for the tail quantile distance. We conclude that a fit that is optimal overall might not be optimal in the tails. We use this fact later in choosing our method to model extreme temperature events.

Distribution              QD_X(X,Y)       QD_Y(X,Y)        QD
Y = N(1, 1)               0.2605080       0.2605080        0.5210159
Y = N(0.5, 1)             0.138301        0.138301         0.276602
Y = N(0, 2)               0.1024215       0.1024207        0.2048422
Y = t(1)                  0.06382985      0.0637436        0.1275734
Y = t(10)                 0.0078747       0.007872528      0.01574723
Y = t(100)                0.000795163     0.0007951621     0.001590325
Y = Cauchy(scale = 1)     0.06376941      0.06376579       0.1275352
Y = χ2(1)                 0.2190132       0.2190249        0.4380381
Y = U(−0.5, 0.5)          0.1522836       0.1522991        0.3045827
Y = U(−1, 1)              0.06562216      0.06563009       0.1312522
Y = U(−2, 2)              0.05612716      0.0561283        0.1122555
Y = U(−3, 3)              0.1171562       0.1171562        0.2343124

Table 9.1: Comparing the standard normal with various distributions using the quantile distance, where U denotes the uniform distribution and χ2 the chi-squared distribution.

Figure 9.3: The quantile distance between Cauchy distributions with different scale parameters (location parameter 0) and the standard normal. In the plots QD1 = QD_X, QD2 = QD_Y and QD = QD1 + QD2, where X is the standard normal and Y is the Cauchy.

Figure 9.4: The distribution function of the standard normal (solid) compared with the optimal Cauchy (location parameter 0) picked by quantile distance minimization, with scale parameter 0.66 (dashed), a Cauchy with scale parameter 1 (dotted) and a Cauchy with scale parameter 0.5 (dot-dashed).

Figure 9.5: The tail quantile distance between Cauchy distributions with different scale parameters (location parameter 0) and the standard normal. In the plots QD1 = QD_X, QD2 = QD_Y and QD = QD1 + QD2, where X is the standard normal and Y is the Cauchy.

Figure 9.6: The distribution function of the standard normal (solid) in the upper tail compared with the optimal Cauchy picked by tail quantile distance minimization, with scale parameter 0.12 (dashed), a Cauchy with scale parameter 0.65 (dotted) and a Cauchy with scale parameter 0.01 (dot-dashed).

Figure 9.7: Comparing the standard normal distribution (solid) with the optimal Cauchy picked by the overall quantile distance (dashed) and the optimal Cauchy picked by the tail quantile distance minimization (dotted).
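The grid search over scale parameters described above can be sketched in R as follows, using the continuous-case formula QD_X(X, Y) = ∫ |p − F_X(lq_Y(p))| dp from this section; the variable and function names are ours, and this is an illustration rather than the original code behind Figures 9.3 and 9.5:

    qd_one_sided <- function(FX, qY) {
      # QD_X(X, Y): integral over (0, 1) of |p - F_X(lq_Y(p))| dp
      integrate(function(p) abs(p - FX(qY(p))), 0, 1)$value
    }
    qd_normal_vs_cauchy <- function(s) {
      # QD = QD_X + QD_Y for X = N(0, 1) and Y = Cauchy(location 0, scale s)
      qd_one_sided(pnorm, function(p) qcauchy(p, scale = s)) +
        qd_one_sided(function(x) pcauchy(x, scale = s), qnorm)
    }
    scales  <- seq(0.01, 4, by = 0.01)
    qd_vals <- sapply(scales, qd_normal_vs_cauchy)
    scales[which.min(qd_vals)]   # scale of the Cauchy closest to N(0, 1) in overall QD

Restricting the integration to the tail set E defined above (and dividing by its total length) gives the tail version of the same search.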
Distribution             QD^tail_X(X,Y)   QD^tail_Y(X,Y)   QD^tail(X,Y)
Y = N(1, 1)              0.05075276       0.05075276       0.10150552
Y = N(0.5, 1)            0.01824013       0.01824013       0.03648026
Y = N(0, 2)              0.01249034       0.11206984       0.12456018
Y = t(1)                 0.0125000        0.1184949        0.1309949
Y = t(10)                0.007631262      0.011192379      0.018823642
Y = t(100)               0.0009740074     0.0010122519     0.0019862594
Y = Cauchy(scale = 1)    0.0125000        0.1180231        0.1305231
Y = χ2(1)                0.25006521       0.06467072       0.31473593
Y = U(−0.5, 0.5)         0.3004565        0.0125000        0.3129565
Y = U(−1, 1)             0.1523052        0.0125000        0.1648052
Y = U(−2, 2)             0.01313629       0.01205279       0.02518908
Y = U(−3, 3)             0.01083494       0.10054194       0.11137688

Table 9.2: Comparing the standard normal on the tails with some distributions using the quantile distance, where U denotes the uniform distribution and χ2 the chi-squared distribution.

9.4.5 Equivariance of estimation under monotonic transformations using the quantile distance

Suppose a family of distributions {Xθ}_{θ∈Θ}, Θ ⊂ R^k, is given. Also assume φ is a continuous and strictly monotonic transformation on R. Consider the family of distributions {Yθ = φ(Xθ)}_{θ∈Θ}. Then the family {Yθ}_{θ∈Θ} is parameterized by the same parameters, since

P(Yθ < a) = P(φ(Xθ) < a) = P(Xθ < φ^{-1}(a)).

The following lemma shows the equivariance property of quantile distance estimation.

Lemma 9.4.14 Suppose a random variable X and a family of distributions {Xθ}_{θ∈Θ} are given,

A = argmin_{θ∈Θ} ∫_0^1 δX(lqX(p), lqXθ(p)) dp

is nonempty, and φ is a continuous and strictly monotonic transformation. Let

B = argmin_{θ∈Θ} ∫_0^1 δφ(X)(lqφ(X)(p), lqφ(Xθ)(p)) dp.

Then A = B. In other words, if Xθ is an optimal estimator of X, then φ(Xθ) is an optimal estimator of φ(X).

Proof This follows immediately from the invariance of the quantile distance under continuous strictly monotonic transformations.

Remark. The above also holds if we replace the integral quantile distance by the sup quantile distance.

9.4.6 Estimation using quantile distance

Here we only consider estimation using the integral quantile distance. In order to estimate a distribution X using a parameterized family {Xθ}_{θ∈Θ}, one can try to find

argmin_{θ∈Θ} ∫_0^1 δX(lqX(p), lqXθ(p)) dp.

However, this expression depends on δX, which is unknown. The information usually available to us is a random sample X1, ..., Xn.

Remark. If we use the empirical distribution Fn instead of the distribution of X above, we get

argmin_{θ∈Θ} ∫_0^1 δFn(lqFn(p), lqXθ(p)) dp.

This argmin can again be checked to be equivariant under continuous and strictly monotonic transformations.

Tables 9.3 and 9.4 compare maximum likelihood estimation with quantile distance estimation for samples of size N = 20 and N = 100 respectively. In each case we generate 50 samples of length N and estimate the parameters using both methods. We then assess the performance by a few measures: mean absolute error, mean square error, mean probability loss error and mean quantile distance. In both cases maximum likelihood does slightly better in terms of all errors except the quantile distance error, for which quantile distance estimation does significantly better. Histograms of both estimators for N = 20 and N = 100 are given in Figures 9.8 and 9.9 respectively. For both maximum likelihood and quantile distance estimation with N = 100, the parameter estimates have a symmetric (close to normal) distribution.
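As an illustration of the empirical-distribution version of this argmin, the sketch below fits a normal distribution to a small sample both by maximum likelihood and by minimizing the integral quantile distance, again assuming δFn(a, b) = |Fn(b) − Fn(a)|. The grid search, the grid limits and the p-grid are arbitrary choices made here for illustration, and whether this exactly matches the estimator behind Tables 9.3 and 9.4 depends on the precise definitions in Chapter 9.

    import numpy as np
    from scipy import stats

    def qd_to_normal(sample, mu, sigma, p):
        """Integral quantile distance between the empirical distribution of `sample`
        and N(mu, sigma^2), assuming delta_{F_n}(a, b) = |F_n(b) - F_n(a)|."""
        srt = np.sort(sample)
        ecdf = lambda x: np.searchsorted(srt, x, side="right") / len(srt)
        lq_emp = np.quantile(sample, p)                    # (left) sample quantiles
        lq_model = stats.norm.ppf(p, loc=mu, scale=sigma)  # model quantiles
        return np.mean(np.abs(ecdf(lq_model) - ecdf(lq_emp)))

    rng = np.random.default_rng(1)
    sample = rng.normal(0.0, 1.0, size=20)
    p = np.linspace(0.001, 0.999, 999)

    mu_ml, sigma_ml = sample.mean(), sample.std()          # Gaussian MLE

    # Quantile distance estimates by a coarse grid search; the objective is a
    # step function of (mu, sigma), so a derivative-free search is the simplest choice.
    mus = np.linspace(mu_ml - 1.0, mu_ml + 1.0, 81)
    sigmas = np.linspace(0.3 * sigma_ml, 3.0 * sigma_ml, 81)
    obj = np.array([[qd_to_normal(sample, m, s, p) for s in sigmas] for m in mus])
    i, j = np.unravel_index(obj.argmin(), obj.shape)
    mu_qd, sigma_qd = mus[i], sigmas[j]
    print(mu_ml, sigma_ml, mu_qd, sigma_qd)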
Error type                                       QD error   s.e. of QD error   ML error   s.e. of ML error
Mean probability loss error for
  µ = lq_{N(µ,σ2)}(1/2)                           0.077      0.061              0.077      0.055
Mean probability loss error for
  σ + µ = lq_{N(µ,σ2)}(P(Z < 1))                  0.185      0.114              0.176      0.096
Mean abs. error for µ                             0.198      0.160              0.196      0.143
Mean abs. error for σ                             0.159      0.127              0.132      0.085
Mean square error for µ                           0.064      0.089              0.058      0.077
Mean square error for σ                           0.041      0.065              0.025      0.028
Mean QD error                                     0.035      0.009              0.122      0.073

Table 9.3: Assessment of maximum likelihood estimation and quantile distance estimation using several measures of error, for a sample of size 20. In the table, s.e. stands for standard error.

Error type                                       QD error   s.e. of QD error   ML error   s.e. of ML error
Mean probability loss error for
  µ = lq_{N(µ,σ2)}(1/2)                           0.028      0.020              0.027      0.020
Mean probability loss error for
  σ + µ = lq_{N(µ,σ2)}(P(Z < 1))                  0.157      0.046              0.165      0.038
Mean abs. error for µ                             0.070      0.051              0.068      0.051
Mean abs. error for σ                             0.079      0.052              0.061      0.039
Mean square error for µ                           0.007      0.009              0.007      0.009
Mean square error for σ                           0.009      0.011              0.005      0.005
Mean QD error                                     0.014      0.003              0.045      0.026

Table 9.4: Assessment of maximum likelihood estimation and quantile distance estimation using several measures of error, for a sample of size 100. In the table, s.e. stands for standard error.

Figure 9.8: Histograms of the parameter estimates (mean and standard deviation) from the quantile distance and maximum likelihood methods for a sample of size 20.

Figure 9.9: Histograms of the parameter estimates (mean and standard deviation) from the quantile distance and maximum likelihood methods for a sample of size 100.

Chapter 10

Binary temperature processes

10.1 Introduction

This chapter uses the theory developed in previous chapters to find appropriate models for extreme temperature events. We consider both low and high temperatures. The temperature is measured in degrees Celsius (deg C). We define a day with minimum temperature (mt) of at most zero as extremely cold and denote it by e:

    e(t) = 1 if mt(t) ≤ 0 (deg C),
    e(t) = 0 if mt(t) > 0 (deg C).

Taking 0 (deg C) as the cut-off for low temperature seems reasonable in the absence of any other considerations, since it is the usual definition of a frost. In agriculture, where most plants contain a lot of water, this is an important cut-off. No similarly natural cut-off exists for extremely high temperature. To define extreme events, we ask the following questions:

1. Should the definition of an extreme event depend on the purpose of our model?
2. Should it depend on the time of the year and the location?
3. What should be the cut-off (threshold) defining an extreme event?
4. Should we use a certain quantile as the cut-off? In that case, which quantile should be used?

We provide some answers in the following:

1. The answer to the first question is clearly affirmative. For example, a high-temperature day for agricultural purposes is different from one for energy-provision purposes. Even for the farmer, different crops may have different tolerances to hot or cold weather.
2. The answer to the second question depends on the model's purpose. For some purposes we may want to vary the definition over time and space.

3. We do not know of any natural cut-off for high temperatures like the one for low temperatures.

4. Quantiles have long been used to define extreme events. Choosing the level of the quantile depends on the purpose. Some extreme-value modelers pick the quantile high enough to ensure the validity of the assumptions underlying their models, as Embrechts et al. discuss in [16]. For example, a well-known result asserts that P(X − u < v | X > u) approximately follows a known distribution (an extreme value distribution, e.g. Pareto) when u is large. [See [16].] We do not favor such methods of choosing the threshold. The threshold should be picked primarily to reflect our needs in the real problem rather than to satisfy the assumptions of the models. If the models do not satisfy the conditions, we should find other models rather than move the threshold up.

Based on the above discussion, the statistician's knowledge alone cannot define the extreme events. Ralph Wright (personal communication) of AAFRD (Alberta Agriculture, Food and Rural Development, Canada) raises similar points. In particular, he said the following about droughts:

"Drought is really defined by the impact that the moisture deficit has on a specific use or uses. Its definition can vary both with time of year and from place-to-place. Drought can be short-term or long-term. For example, one month of hot dry weather can significantly reduce crop yields, despite the fact that normal amounts of precipitation have been received over the past year. On the other hand, crops may do fine in dry weather conditions if precipitation has been received in a timely manner and temperatures have been favorable. However under the same conditions, a dam operator in the same area may have severe shortages in the reservoir and declare drought like conditions (e.g. with low winter snow-fall and poor spring run-off). You will need to define your drought based on whom or what is being impacted by the water shortage."

Since we do not have any standard definition of an extremely hot day, we use the data. In our example, to define a binary (hot)/(not hot) process for temperature, we pick the global spatial/temporal 95th percentile using the data from 25 stations over Alberta that had daily maximum temperature (MT) data from 1940 to 2004. The 95th percentile computed with the quantile algorithm developed in previous chapters turned out to be 26.7; the exact value was also found and turned out to be q = 27 (deg C). We then define the binary process of extremely hot temperature as:

    E(t) = 1 if MT(t) ≥ q,
    E(t) = 0 if MT(t) < q,

where q = 27 (deg C) here.

In order to study extreme events (e.g. for MT), three approaches come to mind:

1. Model the whole daily MT process and use that to infer about the extremes. For MT, we have shown that a Gaussian distribution fits the daily values fairly well. However, in the tails, usually of paramount concern, the fit does not do well, as shown in the qq-plots in Chapter 2. Another difficulty with this approach is picking a covariance function to model the covariance over time. Also, in Chapter 9 we showed that even though two distributions may be very close in terms of overall quantile distance, they might not be very close in terms of tail quantile distance (Figure 9.7).
This shows that if we use a good overall fit to study extremes (for example, extremely hot temperatures), our results might not be reliable.

2. Use a specified threshold and model the values exceeding the threshold. This approach has several drawbacks. Firstly, we cannot answer the question of how often, or in what periods of the year, the extremes happen, because we model only the actual extreme values and ignore the non-extreme values. Secondly, a strong assumption of independence is needed for this method. Thirdly, we need to pick the threshold high enough to make the model reasonable, as mentioned before; this might not be an optimal threshold from a practical point of view.

3. Based on a real problem, use a threshold to define a new binary process of (extreme)/(not extreme) values and then model that binary process. This is the method we use. It does not have the issues mentioned in 1 and 2, because the threshold is not chosen to satisfy some statistical property and we make few assumptions about the binary chain.

10.2 rth-order Markov models for extreme minimum temperatures

This section looks for appropriate models for the binary process e(t) of cold/not cold days. Since this is a binary process, the Categorical Expansion Theorem (Theorem 3.5.6) gives the form of all rth-order Markov chains for it. Here we also consider other covariates, such as the minimum temperature of the previous day and of two days ago, as well as (deterministic) seasonal covariates. The next subsection uses graphical tools and exploratory techniques to investigate the properties the model should have. We then use the BIC criterion to compare several proposed models, and we use partial likelihood techniques to estimate parameters, as proposed by Kedem et al. in [27].

10.2.1 Exploratory analysis for binary extreme minimum temperatures

Here we perform an exploratory analysis of the binary process e(t), using two stations for this purpose, Banff and Medicine Hat, which have data from 1895 to 2006. The transition probabilities are computed from the historical data, considering years as independent observations. The results are summarized as follows:

• Figures 10.1 and 10.2 plot the probability of a freezing day over the course of a year for the Banff and Medicine Hat stations, respectively. A regular seasonal pattern is seen. Medicine Hat seems to have a much longer frost-free period.

• Figures 10.3 and 10.4 plot the estimated transition probabilities, p̂01 and p̂11, for the Banff and Medicine Hat stations. If the chain were a 0th-order Markov chain, these two curves would overlap. This is not the case, so a Markov chain of at least 1st order seems necessary. In the p̂01 curve for both Banff and Medicine Hat, high fluctuations are seen at the beginning and end of the year, which correspond to the cold season. This is not surprising, because there are very few pairs in the data with a freezing day followed by a non-freezing day in a cold season in Alberta.

• In Figure 10.4, p̂11 is missing for a period over the summer. This is because no freezing day is observed over this period in the summer, and hence p̂11 could not be estimated.

Figure 10.1: The estimated probability of a freezing day for the Banff site for different days of the year, computed from the historical data.
• Figures 10.5 and 10.6 give the plots for the 2nd-order transition probabilities. They overlap substantially, and hence a 2nd-order Markov chain does not seem to be necessary.

10.2.2 Model selection for extreme minimum temperature

This section finds models for the extreme minimum temperature process e(t). Here Zt−1 denotes the covariate process. We investigate the following predictors:

• ek(t) ≡ e(t − k). Was it an extremely cold day k days ago?
• mtk(t) ≡ mt(t − k), the actual minimum temperature k days ago.
• Nk, the number of freezing days during the k previous days.
• SIN, COS, SIN2 and COS2, which are abbreviations for sin(ωt), cos(ωt), sin(2ωt) and cos(2ωt), respectively (with ω = 2π/366).

Figure 10.2: The estimated probability of a freezing day for the Medicine Hat site for different days of the year, computed from the historical data.

Figure 10.3: The estimated 1st-order transition probabilities for the 0-1 process of extreme minimum temperatures for the Banff site. The dotted line represents the estimated probability of "e(t) = 1 if e(t−1) = 1" (p̂11) and the dashed line, "e(t) = 1 if e(t−1) = 0" (p̂01).

Figure 10.4: The estimated 1st-order transition probabilities for the 0-1 process of extreme minimum temperatures for the Medicine Hat site. The dotted line represents the estimated probability of "e(t) = 1 if e(t−1) = 1" (p̂11) and the dashed line, "e(t) = 1 if e(t−1) = 0" (p̂01).

Figure 10.5: The estimated 2nd-order transition probabilities for the 0-1 process of extreme minimum temperatures for the Banff site, with p̂111 (solid) compared with p̂011 (dotted), both calculated from the historical data.

Figure 10.6: The estimated 2nd-order transition probabilities for the 0-1 process of extreme minimum temperatures for the Banff site, with p̂001 (solid) compared with p̂101 (dotted), calculated from the historical data.

Figure 10.7: The estimated 2nd-order transition probabilities for the 0-1 process of extreme minimum temperatures for the Medicine Hat site, with p̂111 (solid) compared with p̂011 (dotted), calculated from the historical data.

Figure 10.8: The estimated 2nd-order transition probabilities for the 0-1 process of extreme minimum temperatures for the Medicine Hat site, with p̂001 (solid) compared with p̂101 (dotted), calculated from the historical data.
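Before the model comparisons below, here is a minimal sketch of how covariates of this kind can be assembled from a daily 0-1 frost series and one candidate model fitted. For a binary chain, the partial likelihood of Kedem and Fokianos reduces to a logistic regression of e(t) on Zt−1; the model Zt−1 = (1, N11, COS, SIN) shown here is just one of the candidates considered in the tables. The array e, the synthetic data and the helper make_covariates are illustrative assumptions, not the code used in the thesis; statsmodels is used for the fit and BIC is computed as −2 log L + k log n.

    import numpy as np
    import statsmodels.api as sm

    def make_covariates(e, k=11):
        """Build Z_{t-1} = (1, N_k, cos(wt), sin(wt)) from a daily 0/1 frost series e.
        N_k is the number of freezing days among the k previous days."""
        w = 2 * np.pi / 366
        t = np.arange(len(e))
        N_k = np.convolve(e, np.ones(k))[:len(e)]   # running sum of e over a k-day window ending at t
        N_k = np.roll(N_k, 1)                       # shift so day t uses days t-1, ..., t-k
        X = np.column_stack([np.ones(len(e)), N_k, np.cos(w * t), np.sin(w * t)])
        return X[k:], e[k:]                         # drop the first k days, which lack a full history

    # toy seasonal frost series; real data would come from station records
    rng = np.random.default_rng(0)
    t = np.arange(30 * 366)
    p_frost = 1.0 / (1.0 + np.exp(-(3.0 * np.cos(2 * np.pi * t / 366) - 0.5)))
    e = rng.binomial(1, p_frost)

    X, y = make_covariates(e, k=11)
    fit = sm.Logit(y, X).fit(disp=0)                # partial likelihood = logistic regression here
    bic = -2 * fit.llf + X.shape[1] * np.log(len(y))
    print(fit.params, bic)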
Table 10.1 compares models with a constant and Nk as the covariate process. The optimal model picked by the BIC criterion is the model with covariates Zt−1 = (1, N11).

Model: Zt−1    BIC      parameter estimates
(1, N1)        1251.7   (−2.144, 4.260)
(1, N2)        1166.5   (−2.501, 2.490)
(1, N3)        1142.9   (−2.653, 1.755)
(1, N4)        1121.6   (−2.773, 1.371)
(1, N5)        1111.2   (−2.852, 1.125)
(1, N6)        1093.1   (−2.932, 0.961)
(1, N7)        1087.4   (−2.977, 0.835)
(1, N8)        1081.7   (−3.015, 0.739)
(1, N9)        1077.1   (−3.047, 0.663)
(1, N10)       1066.5   (−3.089, 0.605)
(1, N11)       1056.4   (−3.130, 0.557)
(1, N12)       1059.5   (−3.135, 0.511)
(1, N13)       1062.3   (−3.140, 0.472)
(1, N14)       1072.8   (−3.126, 0.437)
(1, N15)       1080.9   (−3.118, 0.406)
(1, N16)       1091.9   (−3.102, 0.379)
(1, N17)       1104.2   (−3.083, 0.354)
(1, N18)       1112.1   (−3.075, 0.334)
(1, N19)       1118.6   (−3.068, 0.315)
(1, N20)       1126.5   (−3.058, 0.299)

Table 10.1: BIC values for models including Nk for the extreme minimum temperature process e(t) at the Medicine Hat site.

Model: Zt−1                        BIC      parameter estimates
(1)                                2539.9   (−0.0251)
(1, e1)                            1251.7   (−2.144, 4.260)
(1, e2)                            1473.6   (−1.856, 3.683)
(1, e1, e2)                        1157.7   (−2.501, 3.085, 1.896)
(1, e1, e2, e1e2)                  1162.4   (−2.586, 3.389, 2.190, −0.593)
(1, mt1)                           963.7    (0.109, −0.400)
(1, mt1, mt2)                      954.0    (0.091, −0.329, −0.082)
(1, COS, SIN)                      984.0    (−0.070, 4.292, 1.324)
(1, COS, SIN, COS2, SIN2)          984.2    (−0.502, 4.505, 1.399, −0.464, −0.493)
(1, COS, SIN, COS2)                986.7    (−0.258, 4.359, 1.335, −0.353)
(1, COS, SIN, SIN2)                984.4    (−0.217, 4.365, 1.360, −0.402)
(1, mt1, mt2, mt3)                 940.7    (0.062, −0.319, −0.009, −0.094)
(1, mt1, mt2, mt1mt2)              943.4    (0.211, −0.339, −0.084, −0.0091)
(1, e1, COS, SIN)                  901.5    (−1.008, 1.840, 3.325, 1.013)
(1, mt1, COS, SIN)                 855.3    (−0.074, −0.234, 2.394, 0.746)
(1, mt1, mt2, COS, SIN)            861.9    (−0.076, −0.247, 0.023, 2.504, 0.785)

Table 10.2: BIC values for several models for the extreme minimum temperature process e(t) at the Medicine Hat site.

Table 10.2 compares several models, some of which include seasonal terms and continuous variables. The optimal model is (1, mt1, COS, SIN), which includes the minimum temperature of the previous day and seasonal terms. The model (1, e1, COS, SIN) has a larger BIC but is preferable to all models other than (1, mt1, COS, SIN) and (1, mt1, mt2, COS, SIN). Note that it is not possible to compute the probability of events in the long-term future using (1, mt1, COS, SIN), since we do not know mt except perhaps at the present time. Hence the optimal applicable model seems to be (1, e1, COS, SIN).

10.3 rth-order Markov models for extreme maximum temperatures

This section finds appropriate models for the binary process of extremely hot temperatures E(t) as defined above. To define a hot day, we use the 95th percentile of data from 25 stations over Alberta that had daily MT data from 1940 to 2004; this percentile turns out to be q = 27 (deg C). We computed it once with the fast algorithm developed in Chapter 7 and once with an exact method; the algorithm gave the approximate value q = 26.7, which is very close to the exact value. (See Table ?? for more details on the computation.)

10.3.1 Exploratory analysis for extreme maximum temperatures

This section uses exploratory data analysis techniques to study the binary process E(t). Again we use two stations for this purpose, the Banff and Medicine Hat sites, which have data from 1895 to 2006. The transition probabilities are computed from the historical data, considering years as independent observations.
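A minimal sketch of how such day-of-year transition probabilities can be estimated, treating years as independent replicates, is given below. It assumes the binary series has been arranged as a (years × days) 0-1 array E, a layout chosen here only for illustration; days with no observed transition of a given type are returned as NaN, matching the gaps visible in the figures.

    import numpy as np

    def transition_probs(E):
        """Estimate day-of-year transition probabilities from a (n_years x n_days)
        0/1 array E, treating years as independent replicates.
        p01[d] estimates P(E = 1 on day d+1 | E = 0 on day d), and p11[d] the
        analogous probability starting from E = 1; NaN where never observed."""
        prev, curr = E[:, :-1], E[:, 1:]
        n0 = (prev == 0).sum(axis=0)
        n1 = (prev == 1).sum(axis=0)
        n01 = ((prev == 0) & (curr == 1)).sum(axis=0)
        n11 = ((prev == 1) & (curr == 1)).sum(axis=0)
        p01 = np.where(n0 > 0, n01 / np.maximum(n0, 1), np.nan)
        p11 = np.where(n1 > 0, n11 / np.maximum(n1, 1), np.nan)
        return p01, p11

    # toy example with a seasonal pattern; real data would be station records
    rng = np.random.default_rng(0)
    d = np.arange(366)
    p_hot = 1.0 / (1.0 + np.exp(-(6.0 * np.cos(2 * np.pi * (d - 200) / 366) - 3.0)))
    E = (rng.random((100, 366)) < p_hot).astype(int)
    p01, p11 = transition_probs(E)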
The results are summarized as follows:

• Figures 10.9 and 10.10 plot the probability of a hot day over the course of a year for the Banff and Medicine Hat stations, respectively. A regular seasonal pattern is seen. Medicine Hat seems to have a much longer period of hot days.

• Figures 10.11 and 10.12 plot the estimated transition probabilities, p̂01 and p̂11, for Banff and Medicine Hat. If the chain were a 0th-order Markov chain, these two curves would overlap. This is not the case, so a Markov chain of at least 1st order seems necessary. In the p̂01 curve for both Banff and Medicine Hat, large fluctuations are seen in the middle of the year, which corresponds to the warm season. This is not surprising, because there are very few pairs in the data with a hot day followed by a not-hot day in the warm season in Alberta.

• In Figure 10.12, p̂11 is missing for a period over the cold season. This is because no hot day is observed during this period in the cold season, and hence p̂11 could not be estimated.

• Figures 10.13 and 10.14 give the plots for the 2nd-order transition probabilities. They overlap heavily, and hence a 2nd-order Markov chain does not seem to be necessary.

10.3.2 Model selection for extreme maximum temperature

Here we use the following abbreviations:

• Ek(t) = E(t − k). Was it an extremely hot day k days ago?
• MTk(t) = MT(t − k), the actual maximum temperature k days ago.
• Nk, COS, SIN, COS2 and SIN2, as in the previous sections.

Figure 10.9: The estimated probability of a hot day (maximum temperature ≥ 27 (deg C)) for different days of the year for the Banff site, calculated from the historical data.

Figure 10.10: The estimated probability of a hot day (maximum temperature ≥ 27 (deg C)) for different days of the year for the Medicine Hat site, calculated from the historical data.

Figure 10.11: The estimated 1st-order transition probabilities for the binary process of extremely hot temperatures for the Banff site. The dotted line represents the estimated probability of "E(t) = 1 if E(t−1) = 1" (p̂11) and the dashed line, "E(t) = 1 if E(t−1) = 0" (p̂01).

Figure 10.12: The estimated 1st-order transition probabilities for the binary process of extremely hot temperatures for the Medicine Hat site. The dotted line represents the estimated probability of "E(t) = 1 if E(t−1) = 1" (p̂11) and the dashed line, "E(t) = 1 if E(t−1) = 0" (p̂01).

Figure 10.13: The estimated 2nd-order transition probabilities for the binary process of extremely hot temperatures for the Banff site, with p̂111 (solid) compared with p̂011 (dotted), calculated from the historical data.
Figure 10.14: The estimated 2nd-order transition probabilities for the binary process of extremely hot temperatures for the Banff site, with p̂001 (solid) compared with p̂101 (dotted), calculated from the historical data.

Figure 10.15: The estimated 2nd-order transition probabilities for the binary process of extremely hot temperatures for the Medicine Hat site, with p̂111 (solid) compared with p̂011 (dotted), calculated from the historical data.

Figure 10.16: The estimated 2nd-order transition probabilities for the binary process of extremely hot temperatures for the Medicine Hat site, with p̂001 (solid) compared with p̂101 (dotted), calculated from the historical data.

Table 10.3 compares several models containing Nk. The optimal model turns out to be (1, N11), the same as for the extreme minimum temperature process e(t).

Model: Zt−1    BIC      parameter estimates
(1, N1)        955.7    (−2.95, 3.82)
(1, N2)        965.9    (−3.00, 2.16)
(1, N3)        942.5    (−3.11, 1.60)
(1, N4)        921.8    (−3.20, 1.29)
(1, N5)        926.8    (−3.23, 1.05)
(1, N6)        931.6    (−3.24, 0.89)
(1, N7)        932.5    (−3.26, 0.78)
(1, N8)        939.0    (−3.26, 0.69)
(1, N9)        931.6    (−3.29, 0.63)
(1, N10)       925.9    (−3.31, 0.57)
(1, N11)       911.7    (−3.35, 0.49)
(1, N12)       917.5    (−3.34, 0.46)
(1, N13)       922.8    (−3.33, 0.42)
(1, N14)       926.0    (−3.32, 0.39)
(1, N15)       932.1    (−3.31, 0.37)
(1, N16)       941.7    (−3.29, 0.34)
(1, N17)       951.5    (−3.28, 0.31)
(1, N18)       955.3    (−3.27, 0.29)
(1, N19)       960.6    (−3.26, 0.28)
(1, N20)       968.3    (−3.25, 0.26)
(1, N21)       975.3    (−3.23, 0.25)
(1, N22)       981.8    (−3.22, 0.24)
(1, N23)       986.0    (−3.22, 0.23)
(1, N24)       991.6    (−3.21, 0.22)
(1, N25)       997.0    (−3.21, 0.21)
(1, N26)       1002.8   (−3.20, 0.20)
(1, N27)       1009.5   (−3.19, 0.19)
(1, N28)       1014.4   (−3.18, 0.19)

Table 10.3: BIC values for models including Nk for the extremely hot process E(t).

Table 10.4 compares several models. We observe that major reductions in BIC occur if we use MTk instead of Ek. The optimal model turns out to be (1, MT1, COS, SIN), which combines seasonal terms with the maximum temperature of the day before.

Model: Zt−1                        BIC      parameter estimates
(1)                                1520.3   (−1.774)
(1, E1)                            955.8    (−2.95, 3.82)
(1, E2)                            1170.5   (−2.581, 2.924)
(1, E1, E2)                        941.3    (−3.034, 3.179, 1.099)
(1, E1, E2, E1E2)                  929.0    (−3.202, 3.895, 2.137, −1.877)
(1, MT1)                           683.8    (−10.040, 0.362)
(1, MT1, MT2)                      689.1    (−10.135, 0.333, 0.034)
(1, COS, SIN)                      830.8    (−5.484, −5.616, −2.452)
(1, COS, SIN, COS2, SIN2)          837.5    (−4.343, −4.255, −0.993, 0.113, 1.016)
(1, COS, SIN, COS2)                837.9    (−5.850, −6.231, −2.406, −0.292)
(1, COS, SIN, SIN2)                830.0    (−4.481, −4.492, −0.978, 1.011)
(1, MT1, MT2, MT3)                 669.2    (−10.885, 0.338, −0.061, 0.120)
(1, MT1, MT2, MT1MT2)              681.9    (−21.003, 0.763, 0.452, −0.0162)
(1, E1, COS, SIN)                  731.3    (−4.963, 2.005, −4.096, −1.685)
(1, MT1, COS, SIN)                 649.9    (−10.281, 0.283, −2.829, −1.079)
(1, MT1, MT2, COS, SIN)            657.3    (−10.109, 0.294, −0.011, −2.609, −1.072)

Table 10.4: BIC values for several models for the extremely hot process E(t).

10.4 Probability of a frost-free period for Medicine Hat

This section shows how the approach developed above can be used in applications.
We use the developed methodology to compute two probabilities:

• π1: the probability of no frost in the first week of October at the Medicine Hat site.
• π2: the probability of at least 5 days without frost in the first week of October at the Medicine Hat site.

The first day of October is the 275th day of the year in a leap year and the 274th day in a non-leap year. We compute the probabilities for the week between the 274th day and the 281st day, which corresponds to the first week of October in a non-leap year. We prefer this option to computing the probability for the actual first week of October, since it corresponds better to the natural cycles. Of course, with a little modification one could compute the probability for the calendar first week of October, for example by introducing a probability of 1/4 of being in a leap year.

Figure 10.17 plots Medicine Hat's estimated mean annual probability of frost for each year since 1895; only years with more than 355 days of data are considered.

Figure 10.17: Medicine Hat's estimated mean annual probability of frost, calculated from the historical data.

The figure shows that the probability of a frost is fairly consistent over the years, so we assume a constant probability of frost for all years.

Table 10.5 compares models with various Nk; the optimal model is (1, N11). Table 10.6 includes two seasonal terms as well as Nk; the optimum this time is (1, N1, COS, SIN), showing that in the presence of seasonal terms the short-term past modeled by Nk is not necessary.

Model: Zt−1    BIC
(1, N1)        5072.2
(1, N2)        4634.8
(1, N3)        4465.9
(1, N4)        4407.4
(1, N5)        4366.0
(1, N6)        4357.4
(1, N7)        4356.2
(1, N8)        4342.6
(1, N9)        4330.5
(1, N10)       4329.1
(1, N11)       4328.4
(1, N12)       4332.4
(1, N13)       4330.8
(1, N14)       4345.1
(1, N15)       4362.9
(1, N16)       4385.7
(1, N17)       4407.1
(1, N18)       4420.1
(1, N19)       4440.1
(1, N20)       4463.7

Table 10.5: BIC values for models including Nk for the extremely cold process e(t) at the Medicine Hat site.

Model: Zt−1              BIC
(1, N1, COS, SIN)        3601.3
(1, N2, COS, SIN)        3654.8
(1, N3, COS, SIN)        3693.9
(1, N4, COS, SIN)        3735.2
(1, N5, COS, SIN)        3763.1
(1, N6, COS, SIN)        3791.0
(1, N7, COS, SIN)        3813.5
(1, N8, COS, SIN)        3826.2
(1, N9, COS, SIN)        3834.9
(1, N10, COS, SIN)       3843.6
(1, N11, COS, SIN)       3849.8
(1, N12, COS, SIN)       3855.5
(1, N13, COS, SIN)       3857.4
(1, N14, COS, SIN)       3862.9
(1, N15, COS, SIN)       3868.1
(1, N16, COS, SIN)       3873.7
(1, N17, COS, SIN)       3877.9
(1, N18, COS, SIN)       3878.6
(1, N19, COS, SIN)       3880.5
(1, N20, COS, SIN)       3882.8

Table 10.6: BIC values for several models including Nk and seasonal terms for the extremely cold process e(t) at the Medicine Hat site.
Model: Zt−1                                      BIC       parameter estimates
(1)                                              10122.4   (−0.0858)
(1, e1)                                          5072.2    (−2.13, 4.18)
(1, e1, e2)                                      4598.2    (−2.530, 2.977, 2.00)
(1, e1, e2, e1e2)                                4582.8    (−2.65, 3.41, 2.43, −0.855)
(1, COS, SIN)                                    3916.9    (−0.3, 4.301, 1.139)
(1, COS, SIN, COS2, SIN2)                        3865.6    (−0.746, 4.643, 1.253, −0.550, −0.504)
(1, e1, COS, SIN)                                3601.3    (−1.116, 1.760, 3.332, 0.856)
(1, e1, COS, SIN, COS2, SIN2)                    3566.7    (−1.49, 1.71, 3.65, 0.96, −0.48, −0.42)
(1, e1, e2, COS, SIN)                            3601.6    (−1.22, 1.66, 0.33, 3.19, 0.810)
(1, e1, e2, COS, SIN, COS2, SIN2, COS3, SIN3)    3571.7    (−1.8, 1.7, 4.4, 1.3, −0.78, −0.74, 0.2, 0.4)
(1, mt1, COS, SIN, COS2, SIN2)                   3356.4    (−0.66, −0.22, 2.85, 0.73, −0.56, −0.42)

Table 10.7: BIC values for several models for the extremely cold process e(t) at the Medicine Hat site.

Covariate   Theoretical sd   Experimental sd
1           0.090            0.093
e1          0.097            0.100
COS         0.125            0.139
SIN         0.060            0.059
COS2        0.089            0.094
SIN2        0.081            0.077

Table 10.8: Theoretical and simulation-estimated standard deviations for the extremely cold process e(t) at the Medicine Hat site.

Table 10.7 compares various models. The winner is (1, mt1, COS, SIN, COS2, SIN2). However, it is not possible to compute the desired probabilities using this model, since we do not know mt1 (except perhaps at the start of the chain). Among the remaining models, the optimal is (1, e1, COS, SIN, COS2, SIN2), which we use to compute the probabilities.

We compute the standard deviations once using simulations, by generating chains from the fitted model with the above covariates, and once by computing the partial information matrix GN. The results are given in Table 10.8. The variance-covariance matrix calculated using partial likelihood theory is:

     0.0082  −0.0043  −0.0038  −0.0011   0.0050   0.0030
    −0.0043   0.0094  −0.0042  −0.0013   0.0002   0.0003
    −0.0038  −0.0042   0.0158   0.0038  −0.0052  −0.0037
    −0.0011  −0.0013   0.0038   0.0037  −0.0011  −0.0017
     0.0050   0.0002  −0.0052  −0.0011   0.0079   0.0015
     0.0030   0.0003  −0.0037  −0.0017   0.0015   0.0066

We also estimate the variance-covariance matrix by simulation: we generate 50 chains over time using the estimated parameters. The simulated variance-covariance matrix is:

     0.0087  −0.0035  −0.0054  −0.0012   0.0047   0.0021
    −0.0035   0.0101  −0.0058  −0.0009   0.0026   0.0012
    −0.0054  −0.0058   0.0194   0.0032  −0.0086  −0.0032
    −0.0012  −0.0009   0.0032   0.0035  −0.0011  −0.0018
     0.0047   0.0026  −0.0086  −0.0011   0.0089   0.0016
     0.0021   0.0012  −0.0032  −0.0018   0.0016   0.0059

Figure 10.18: Normal curves fitted to the distributions of the 50 samples of the estimated parameters.

We see that the simulated variance-covariance matrix is close to the partial likelihood one, with all entries having the same sign. We also look at the distribution of the estimators using the 50 samples: Figure 10.18 shows that the parameter estimates approximately follow a normal distribution. To estimate the desired probabilities, we generate 10000 samples of the parameter vector from a multivariate normal with the estimated parameter means and variance-covariance matrix.
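The next paragraphs spell out the probabilities involved. The sketch below only illustrates the sampling step just described for a frost-free stretch, assuming the fitted model (1, e1, COS, SIN, COS2, SIN2) with the point estimates reported in Table 10.7; the variance-covariance matrix is abbreviated to a diagonal placeholder, whereas in practice the full matrix given above would be used, and the day-of-year indexing inside the seasonal terms is a convention chosen here for illustration.

    import numpy as np

    def p_frost(beta, t, e_prev):
        """P(e(t) = 1 | e(t-1) = e_prev) under the logistic model with
        covariates (1, e(t-1), cos(wt), sin(wt), cos(2wt), sin(2wt))."""
        w = 2 * np.pi / 366
        z = np.array([1.0, e_prev, np.cos(w * t), np.sin(w * t),
                      np.cos(2 * w * t), np.sin(2 * w * t)])
        return 1.0 / (1.0 + np.exp(-z @ beta))

    def p_frost_free(beta, days, e_start):
        """P(e = 0 on all of `days` | e = e_start on the previous day); the chain
        is 1st order, so the joint probability factorizes over consecutive days."""
        prob, e_prev = 1.0, e_start
        for t in days:
            prob *= 1.0 - p_frost(beta, t, e_prev)
            e_prev = 0   # all days in the event are frost-free
        return prob

    rng = np.random.default_rng(0)
    days = np.arange(274, 281)                 # days 274, ..., 280
    p0 = 0.2432432                             # historical estimate of P(e(273) = 1)
    beta_hat = np.array([-1.49, 1.71, 3.65, 0.96, -0.48, -0.42])     # Table 10.7
    Sigma_hat = np.diag([0.0082, 0.0094, 0.0158, 0.0037, 0.0079, 0.0066])  # placeholder
    draws = rng.multivariate_normal(beta_hat, Sigma_hat, size=10000)
    pi1 = np.array([p0 * p_frost_free(b, days, 1) +
                    (1 - p0) * p_frost_free(b, days, 0) for b in draws])
    print(np.quantile(pi1, [0.025, 0.975]))    # 95% interval for pi1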
To fix ideas, suppose we want to compute the probability of no frost between (and including) the 274th and the 280th days of the year. For every vector of parameters, we compute the probability of observing (0, 0, 0, 0, 0, 0, 0) on these seven days, once given that the temperature was below zero on the 273rd day and once given that it was above zero. In other words, we compute

P(e(274) = 0, ..., e(280) = 0 | e(273) = 1), and
P(e(274) = 0, ..., e(280) = 0 | e(273) = 0).

We also use the historical data to estimate p0 = P(e(273) = 1). The desired probability is then

P(e(274) = 0, ..., e(280) = 0) = p0 P(e(274) = 0, ..., e(280) = 0 | e(273) = 1)
                                 + (1 − p0) P(e(274) = 0, ..., e(280) = 0 | e(273) = 0).

To get a 95% confidence interval we use (q(0.025), q(1 − 0.025)), where q is the (left) quantile function of the vector of computed probabilities. Using the historical data, we obtain p0 = P(e(273) = 1) = 0.2432432. Then for every parameter vector generated from the multivariate normal with the estimated mean and the above variance-covariance matrix, we can estimate the two probabilities π1 and π2. We sample 10000 times from the multivariate normal, compute 10000 probabilities and take the 0.025 and 0.975 (left) quantiles, obtaining the following confidence intervals for π1 and π2, respectively:

(0.28, 0.40), and (0.74, 0.85).

If we use the simulated variance-covariance matrix instead, we get the confidence intervals

(0.28, 0.40), and (0.75, 0.85),

which are very similar to the intervals above.

10.5 Possible applications of the models

To understand the potential applications of these models and results, I contacted Dr. Nathaniel Newlands of AAFC (Agriculture and Agri-Food Canada). He gave the following insightful comments:

"Forecasted (probability of precipitation) is a leading indicator used by crop insurance companies. Probabilities of this kind (agroclimate) are typically most useful in early growing season by farmers in deciding planting dates and deciding on irrigation scheduling and ordering fertilizer and other kinds of inputs. Frost probability in latter growing season is critically important in deciding when to harvest crops before they have a higher potential for weather damage. So, essentially at the start and end of growing season, frost, precipitation (sometimes as a water stress index) and temp extremes are all informative for farmers and other decision makers in ag industry. I would generally say that a broader set of probabilities like these are of special interest to the government side as they look for improving and/or developing new models, web portals and other tools to aid a wide array of the decision makers in the agricultural industry with their business decisions. Farmers (depending on what region of Canada they are in) are used to dealing with reoccurring weather and now climate change events, so often their viewpoint and decision needs are far more regionally specific than government which tries to balance regional with national needs and levels of risk to changing agroclimate. The crop insurance industry is probably the most specific user of such information.
For example, they base their insurance quotes for the event of precipitation on some specific times of the year."

Chapter 11

Conclusions and future research

11.1 Introduction

This chapter summarizes the work and draws conclusions from the statistical analysis and the theory developed in the previous chapters. We also point out a few topics for future research as a continuation of the work done in this thesis.

11.2 Summary

This thesis has presented statistical techniques we have developed to model precipitation and temperature over time. The dataset we use is the historical weather data published by Environment Canada [10]. Python code was written to extract the data from the binary format, and the Python module is available in [23]. [See the appendices for more information regarding the dataset, the Python module and other resources.] We then performed an exploratory analysis of the data; see the conclusions section of Chapter 2 for details.

In order to model the 0-1 precipitation process over time, rth-order Markov chains are a natural choice. We found a representation theorem for such chains in terms of their conditional probabilities and used it to pick appropriate models for precipitation and dichotomized temperatures in the subsequent chapters. In order to dichotomize a continuous process (temperature), one can use quantiles as thresholds. Climate datasets are often very large, so computing quantiles directly may not be possible due to memory or storage limitations. We proposed an algorithm that uses smaller partitions of the data to approximate the quantiles and provides a measure of the goodness of such approximations. Thinking about quantiles led us to extend the traditional definition of "quantiles" to the "left-" and "right-quantiles", and we showed through various theorems that this definition is more intuitively appealing and practically useful; for example, a symmetry relation holds with the new definition, which we used in several applications. In order to assess the goodness of approximating quantiles, we introduced the "probability loss function", which we showed is invariant under monotonic transformations. We used this loss function in various applications, such as picking optimal probability index vectors to summarize data vectors or assigning quantiles to a random sample in order to make a quantile-quantile plot. We then used this loss function to define a distance between random variables and showed that this distance is also invariant under monotonic transformations. We also pointed out how the probability loss function, and the distance defined by it, can be used to estimate parameters of a distribution. Chapter 10 used the above methods to find appropriate models for extremely high and low temperatures; for example, we showed how these models can be used to build confidence intervals for the probability of a frost-free period.

11.3 Future research

In this section, we suggest a few lines of research that are continuations of this thesis work.

11.3.1 rth-order Markov chains

Chapter 3 developed a consistency theorem for the conditional probabilities of a discrete-time categorical stochastic process and a representation theorem for rth-order Markov chains. We expressed the conditional probabilities of such chains as linear combinations of monomials of past times and used partial likelihood to estimate the parameters in the binary case.
We propose the following extensions to this work:

• Find a similar consistency theorem for general (not only categorical) discrete-time processes and a corresponding representation theorem for rth-order Markov chains.

• We used partial likelihood only to estimate the parameters in the binary case; an extension is needed for chains with a larger number of states.

• We pointed out in Chapter 3 that we can add other covariates to the linear terms to obtain non-stationary chains. We can also add spatial components to build spatial-temporal models. However, estimating the parameters in this case needs an extension of the theory, due to the possible dependence over space.

• A Bayesian method could be deployed to estimate the parameters of these models.

11.3.2 Approximating quantiles and data summaries

We provided a general framework for summarizing data, combining summaries and making inference about the original data. We propose the following research topics:

• Suppose a data vector x is given, partitioned into x1, ..., xm of lengths n1, ..., nm. We are allowed to read the partitions separately and save k1, ..., km data points from these partitions.

  1. What information regarding x1, ..., xm (of lengths k1, ..., km) should be saved to optimally approximate lqx(p) for a fixed p?
  2. What information regarding x1, ..., xm (of lengths k1, ..., km) should be saved to optimally approximate lqx(p) for all p ∈ E ⊂ [0, 1]?
  3. Suppose pre-defined summaries of x1, ..., xm are given which are not necessarily optimal. How can we optimally infer about lqx(p), or about lqx(p) for all p ∈ E ⊂ [0, 1]?
  4. Suppose a fixed memory space is given. Find an optimal (fastest) algorithm that gives approximations of accuracy ε (in the probability loss sense).

• Suppose a random sample X1, ..., Xn is given. We can build distribution-free confidence intervals for quantiles of the underlying distribution. (See [15].) Now suppose we have created a summary of this random sample in a certain way. Build confidence intervals based on these summaries.

11.3.3 Parameter estimation using probability loss and quantile distances

Chapter 9 developed a framework to estimate parameters of distributions. We also introduced quantile distances in order to measure the distance between random variables and showed their invariance under monotonic transformations. We propose the following extensions:

• Given a random sample X1, ..., Xn, what is the best estimate of lqX(p) using the probability loss function? What are the properties of that estimator? Is it consistent?

• What are the suprema of LQD_{δX}(X,Y) and LQD_{δX+δY}(X,Y) over the space of all random variables?

• What is the relation between LQD_{δX}(X,Y) and LQD_{δY}(X,Y)?

• Do LQD1(X,Y) = LQD_{δX}(X,Y) or LQD(X,Y) = LQD_{δX+δY}(X,Y) satisfy the triangle inequality?

• Chapter 9 was a theoretical chapter. Extensive simulation studies and analyses of real data are needed to support the theory and generate new ideas.

Bibliography

[1] R. Agrawal and A. Swami. A one-pass space-efficient algorithm for finding quantiles. In Proc. 7th Intl. Conf. on Management of Data (COMAD-95), 1995.

[2] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, pages 716–723, 1974.
[3] K. Alsabti, S. Ranka, and V. Singh. A one-pass algorithm for accurately estimating quantiles for disk-resident data. In VLDB '97: Proceedings of the 23rd International Conference on Very Large Data Bases, pages 346–355, San Francisco, CA, USA, 1997. Morgan Kaufmann Publishers Inc.

[4] T. W. Anderson and L. A. Goodman. Statistical inference about Markov chains. Ann. Math. Statist., pages 89–110, 1957.

[5] M. S. Bartlett. The frequency goodness of fit test for probability chains. Proc. Cambridge Philos. Soc., pages 86–95, 1951.

[6] J. Besag. Spatial interactions and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B, pages 192–225, 1974.

[7] P. Billingsley. Probability and Measure. John Wiley and Sons, 1985.

[8] R. W. Blum and J. W. John. Time bounds for selection. J. Comput. Sys. Sci., 7:448–461, 1973.

[9] L. Breiman. Probability. SIAM, 1992.

[10] Environment Canada. The climate CDs. http://www.weatheroffice.ec.gc.ca, 2007.

[11] G. Casella and R. L. Berger. Statistical Inference. Duxbury, 2001.

[12] E. H. Chin. Modeling daily precipitation process with Markov chain. Water Resources Research, (6):949–956, 1977.

[13] W. K. Ching, E. S. Fung, and K. M. Ng. Higher-order Markov chain models for categorical data sequences. Naval Research Logistics, pages 557–574, 2004.

[14] N. Cressie and L. Subash. New models for Markov random fields. Journal of Applied Probability, pages 877–884, 1992.

[15] H. A. David and H. N. Nagaraja. Order Statistics (3rd edition). Wiley, 2003.

[16] P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events for Insurance and Finance. Springer, 2001.

[17] J. E. Freund and B. M. Perles. A new look at quartiles of ungrouped data. The American Statistician, pages 200–203, 1987.

[18] K. R. Gabriel and J. Neumann. A Markov chain model for daily rainfall occurrence at Tel Aviv. Quart. J. Roy. Met. Soc., pages 90–95, 1962.

[19] M. Greenwald and S. Khanna. Space-efficient online computation of quantile summaries. In SIGMOD, pages 58–66, 2001.

[20] E. J. Hannan. The estimation of the order of an ARMA process. Ann. Statist., pages 1071–1081, 1980.

[21] L. Hao and D. Q. Naiman. Quantile Regression. Quantitative Applications in the Social Sciences Series. SAGE Publications, 2007.

[22] D. M. A. Haughton. On the choice of a model to fit data from an exponential family. The Annals of Statistics, (1):342–355, 1988.

[23] R. Hosseini. Python module for Canadian climate data. http://bayes.stat.ubc.ca/~reza/python, 2009.

[24] R. Hyndman and Y. Fan. Sample quantiles in statistical packages. The American Statistician, 1996.

[25] R. J. Hyndman and Y. Fan. Sample quantiles in statistical packages. The American Statistician, pages 361–365, 1996.

[26] R. Jain and I. Chlamtac. The P2 algorithm for dynamic calculation of quantiles and histograms without storing observations. Commun. ACM, 28(10):1076–1085, 1985.

[27] B. Kedem and K. Fokianos. Regression Models for Time Series Analysis. Wiley Series in Probability and Statistics, 2002.

[28] D. E. Knuth. Sorting and Searching, volume 3. Addison-Wesley, 1973.

[29] R. Koenker. Quantile Regression. Cambridge University Press, 2005.

[30] E. L. Lehmann and G. Casella. The Theory of Point Estimation. Springer-Verlag, 1998.

[31] L. P. Llorente. Statistical Inference Based on Divergence Measures. CRC Press, 2006.

[32] G. S. Manku, S. Rajagopalan, and B. G. Lindsay. Approximate medians and other quantiles in one pass and with limited memory. pages 426–435, 1998.
[33] G. S. Manku, S. Rajagopalan, and B. G. Lindsay. Random sampling techniques for space efficient online computation of order statistics of large datasets. In SIGMOD, pages 251–262, 1999.

[34] E. Mekis and W. D. Hogg. Rehabilitation and analysis of Canadian daily precipitation time series. Atmosphere-Ocean, pages 53–85, 1999.

[35] S. E. Moon, S. Ryo, and J. Kwon. International Journal of Climatology, pages 1009–116, 1993.

[36] J. I. Munro and M. S. Paterson. Selection and sorting with limited storage. Theoretical Computer Science, 12:253–258, 1980.

[37] B. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Springer, 2003.

[38] E. Parzen. Nonparametric statistical data modeling. Journal of the American Statistical Association, 74:105–121, 1979.

[39] M. Paterson. Progress in selection. pages 368–379, 1997.

[40] A. E. Raftery. A model for higher order Markov chains. J. R. Statist. Soc. B, (3):528–539, 1985.

[41] T. Rychlik. Projecting Statistical Functionals. Springer, 2001.

[42] G. Schwarz. Estimating the dimension of a model. Ann. Statist., pages 461–464, 1978.

[43] R. J. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, 1980.

[44] R. Shibata. Selection of the order of an autoregressive model by Akaike's information criterion. Biometrika, pages 117–126, 1976.

[45] H. Tong. Determination of the order of a Markov chain by Akaike's information criterion. J. Appl. Prob., pages 488–497, 1975.

[46] H. Tong and P. Gates. On Markov chain modelling to some weather data. Journal of Applied Meteorology, pages 1145–1151, 1976.

[47] L. A. Vincent, X. Zhang, B. R. Bonsal, and W. D. Hogg. Homogenization of daily temperatures over Canada. Journal of Climate, pages 1322–1334, 2002.

[48] W. Wong. Theory of partial likelihood. The Annals of Statistics, (1):88–123, 1986.

[49] F. F. Yao. On lower bounds for selection problems. Technical report, Cambridge, MA, USA, 1974.

Appendix A

Climate review

A.1 Organizations and resources

• WMO: The World Meteorological Organization (WMO) is a specialized agency of the United Nations. It is the UN system's authoritative voice on the state and behavior of the Earth's atmosphere, its interaction with the oceans, the climate it produces and the resulting distribution of water resources.

• Environment Canada: Environment Canada's mandate is to preserve and enhance the quality of the natural environment; conserve Canada's renewable resources; conserve and protect Canada's water resources; forecast weather and environmental change; enforce rules relating to boundary waters; and coordinate environmental policies and programs for the federal government.

• The Meteorological Service of Canada: The Meteorological Service of Canada is Canada's source for meteorological information. The Service monitors water quantities, provides information and conducts research on climate, atmospheric science, air quality, ice and other environmental issues, making it an important source of expertise in these areas.

• Natural Resources Canada

• Agriculture and Agri-Food Canada: Agriculture and Agri-Food Canada (AAFC) provides information, research and technology, and policies and programs to achieve security of the food system, health of the environment and innovation for growth. AAFC, along with its portfolio partners, reports to Parliament and Canadians through the Minister of Agriculture and Agri-Food and Minister for the Canadian Wheat Board.

• Alberta Agriculture, Food and Rural Development
• Statistics Canada

• AMS: The American Meteorological Society promotes the development and dissemination of information and education on the atmospheric and related oceanic and hydrologic sciences and the advancement of their professional applications. Founded in 1919, AMS has a membership of more than 11,000 professionals, professors, students, and weather enthusiasts. AMS publishes nine atmospheric and related oceanic and hydrologic journals (in print and online), sponsors more than 12 conferences annually, and offers numerous programs and services.

• GeoBase: GeoBase is a federal, provincial and territorial government initiative that is overseen by the Canadian Council on Geomatics (CCOG). It is undertaken to ensure the provision of, and access to, a common, up-to-date and maintained base of quality geospatial data for all of Canada. Through the GeoBase portal, users with an interest in the field of geomatics have access to quality geospatial information at no cost and with unrestricted use.

A.2 Definitions and climate variables

• Atmosphere: Gaseous envelope which surrounds the Earth. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Troposphere: Lower part of the terrestrial atmosphere, extending from the surface up to a height varying from about 9 km at the poles to about 17 km at the equator, in which the temperature decreases fairly uniformly with height. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Meteorology: Study of the atmosphere and its phenomena. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Climatology: Study of the mean physical state of the atmosphere together with its statistical variations in both space and time as reflected in the weather behavior over a period of many years. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Hydrology: (1) Science that deals with the waters above and below the land surfaces of the Earth, their occurrence, circulation and distribution, both in time and space, their biological, chemical and physical properties, and their reaction with their environment, including their relation to living beings. (2) Science that deals with the processes governing the depletion and replenishment of the water resources of the land areas, and treats the various phases of the hydrological cycle. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Basic topography: General geometrical configuration of the distribution of geopotential height on an isobaric surface or on a thickness chart, or of atmospheric pressure on a constant-height chart (e.g., mean sea-level surface chart). Definition source: International Meteorological Vocabulary, WMO - No. 182

• Weather: State of the atmosphere at a particular time, as defined by the various meteorological elements. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Climate: Synthesis of weather conditions in a given area, characterized by long-term statistics (mean values, variances, probabilities of extreme values, etc.) of the meteorological elements in that area. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Paleoclimate: Climate of a prehistoric period whose main characteristics may be inferred, for example, from geological and paleobiological (fossil) evidence. Definition source: International Meteorological Vocabulary, WMO - No. 182
• Climate change: (1) In the most general sense, the term "climate change" encompasses all forms of climatic inconstancy (i.e., any differences between long-term statistics of the meteorological elements calculated for different periods but relating to the same area), regardless of their statistical nature or physical causes. Climate changes may result from such factors as changes in solar emission, long-period changes in the Earth's orbital elements (eccentricity, obliquity of the ecliptic, precession of the equinoxes), natural internal processes of the climate system, or anthropogenic forcing (e.g. increasing atmospheric concentrations of carbon dioxide and other greenhouse gases). (2) The term "climate change" is often used in a more restricted sense, to denote a significant change (i.e., a change having important economic, environmental and social effects) in the mean values of a meteorological element (in particular temperature or amount of precipitation) in the course of a certain period of time, where the means are taken over periods of the order of a decade or longer. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Climate model: Representation of the climate system based on the mathematical equations governing the behavior of the various components of the system, including treatments of key physical processes and interactions, cast in a form suitable for numerical approximation (generally now making use of electronic computers). Definition source: International Meteorological Vocabulary, WMO - No. 182

• Precipitation: Hydrometeor consisting of a fall of an ensemble of particles. The forms of precipitation are: rain, drizzle, snow, snow grains, snow pellets, diamond dust, hail and ice pellets. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Rainfall: Amount of precipitation which is measured by means of a rain gauge. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Atmospheric pressure: Pressure (force per unit area) exerted by the atmosphere on any surface by virtue of its weight; it is equivalent to the weight of a vertical column of air extending above a surface of unit area to the outer limit of the atmosphere. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Humidity: Water vapor content of the air. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Climatic season: A long spell of weather which characterizes part of the year and which occurs with some approach to regularity, especially in low latitudes. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Growing season: Season during which meteorological conditions are favorable to the growth of plants. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Dry season: Period of the year characterized by the (almost) complete absence of rainfall. The term is mainly used for low-latitude regions. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Rainy season: In the lower latitudes, an annually recurring period of high rainfall preceded and followed by relatively dry periods. Definition source: International Meteorological Vocabulary, WMO - No. 182
• Flood: (1) The overflowing by water of the normal confines of a stream or other body of water, or the accumulation of water by drainage over areas which are not normally submerged. (2) Controlled spreading of water over a particular region. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Drought: (1) Prolonged absence or marked deficiency of precipitation. (2) Period of abnormally dry weather sufficiently prolonged for the lack of precipitation to cause a serious hydrological imbalance. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Drought index: An index which is related to some of the cumulative effects of a prolonged and abnormal moisture deficiency. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Climate system: System consisting of the atmosphere, the hydrosphere (comprising the liquid water distributed on and beneath the Earth's surface, as well as the cryosphere, i.e. the snow and ice on and beneath the surface), the surface lithosphere (comprising the rock, soil and sediment of the Earth's surface), and the biosphere (comprising Earth's plant and animal life and man), which, under the effects of the solar radiation received by the Earth, determines the climate of the Earth. Although climate essentially relates to the varying states of the atmosphere only, the other parts of the climate system also have a significant role in forming climate, through their interactions with the atmosphere. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Wind: Air motion relative to the Earth's surface. Unless otherwise specified, only the horizontal component is considered. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Statistical model: (1) Mathematical model which has been derived from the statistical analysis of relevant meteorological variables. (2) Numerical model, usually of the general circulation, which predicts certain statistical properties of the atmosphere rather than the full three-dimensional, time-dependent distribution of each variable. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Statistical forecast: Objective forecast based on a statistical examination of the past behavior of the atmosphere, using regression formulae, probabilities, etc. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Probability forecast: Objective forecast based on a statistical examination of the past behavior of the atmosphere, using regression formulae, probabilities, etc. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Circulation model: Simplified representation of atmospheric flow used to study its principal characteristics. Definition source: International Meteorological Vocabulary, WMO - No. 182

• El Niño: An anomalous warming of ocean water off the west coast of South America, usually accompanied by heavy rainfall in the coastal region of Peru and Chile. Definition source: International Meteorological Vocabulary, WMO - No. 182
• Hurricane: (1) Name given to a warm core tropical cyclone with maximum surface wind of 118 km/h (64 knots, 74 mph) or greater (hurricane force wind) in the North Atlantic, the Caribbean and the Gulf of Mexico, and in the Eastern North Pacific Ocean. (2) A tropical cyclone with hurricane force winds in the South Pacific and South-East Indian Ocean. Definition source: International Meteorological Vocabulary, WMO - No. 182

• Greenhouse effect: Warming of the lower layers of the atmosphere due to its different absorption properties for long- and short-wave radiation. Definition source: International Meteorological Vocabulary, WMO - No. 182

A.3 Climatology

A.3.1 General circulations

The forces that shape the variety of landforms on the Earth can be categorized into two types:

• Inside forces: volcanoes, earthquakes, etc.

• Outside forces: forces that are conveyed by the atmosphere to the Earth's surface. The Sun is the most important factor in causing such forces in their different forms.

Although the first type is of great importance and is not totally independent of the second type, here we focus only on the second type. Weather is defined to be the day-to-day variation of the state of the atmosphere. In order to understand the weather, we need to understand how such forces interact and the factors that cause such variations. The climate system is composed of three parts:

• a radiative energy flow system

• a circulation system

• a water cycle

We explain these in the following. The Sun is the most important source of energy driving the climate system. The atmosphere reflects about 31 percent of the incoming energy back to space. It also absorbs (through ozone, water vapor and carbon dioxide) 23 percent of the energy from the Sun before it reaches the Earth's surface. Finally, the Earth's surface absorbs about 46 percent. The Earth's surface radiates some of this energy back at longer wavelengths, which in turn is absorbed by the atmosphere; in fact, the atmosphere absorbs long wavelengths better. The presence of greenhouse gases (ozone, water vapor and carbon dioxide) in the atmosphere can cause the greenhouse effect by absorbing more of this long-wavelength energy. Some of the heat from the Earth also returns to the atmosphere indirectly through evaporated water. Near the Equator the solar radiation reaches the Earth's surface at a steeper angle and along a shorter path through the atmosphere compared to the poles. This explains why it is warmer at the Equator than at the poles.

Atmospheric circulations are created as a natural response to the difference in temperature between the Equator and the poles. However, other factors also have an effect: the Earth's rotation, the force of gravity, the temperature of the ocean and land, and the presence of topographical features such as mountains, plants, ice and so on.

A.3.2 Topography of Canada

A listing of main features comprises the Western Cordillera, the Prairies, the Great Lakes, the Canadian Shield, the Gulf of St. Lawrence and the Arctic Islands. We only review the Prairies, which are the most suitable lands for farming. The Prairies extend eastward from the Rocky Mountains, sloping down towards the great Canadian Shield. The elevations range from 1500 m in the west to about 250 m in Manitoba. The slope, however, is not even but is broken by steps, the Manitoba Escarpment and the Missouri Coteau.
Minor hill rows tend to run parallel to these; the Cypress Hills, however, are an exception. A chain of large lakes in Manitoba marks the extent of a giant inland lake during glacial times. The rivers run from the Rockies toward the northeast, some into the Arctic Ocean, others into Hudson Bay. They are often cut deeply into the flat or slightly rolling, generally featureless plain.

A.4 Some interesting facts about Canadian geography and weather

• Total Area of Canada: The total area of Canada is 9,984,670 square kilometers. Of this, 9,093,507 square kilometers is land and 891,163 square kilometers is fresh water. Canada's area is the second largest in the world (after Russia, which has a total area of 17,075,000 square kilometers). On Canadian territory, the longest distance north to south (on land) is 4,634 kilometers, from Cape Columbia on Ellesmere Island, Nunavut, to Middle Island in Lake Erie, Ontario. The longest distance east to west is 5,514 kilometers, from Cape Spear, Newfoundland and Labrador, to the Yukon Territory–Alaska boundary.

• Boundary: The total length of the Canada–United States boundary is 8,890 kilometers.

• Landmass and Freshwater: Approximately 40% of Canada's landmass and freshwater is north of 60 degrees North latitude. Between them, the Northwest Territories and Nunavut contain 9.2% of the world's total freshwater. The area of Canada north of the treeline is 2,728,800 square kilometers, or 27.4% of the total area of the country.

• The Great Lakes: The Great Lakes (Superior, Michigan, Huron, Erie and Ontario) are the largest group of freshwater lakes in the world. They have a total surface area of 245,000 square kilometers, of which about one third is in Canada. Lake Michigan is entirely within the USA.

• Coastline: Canada has the world's longest coastline: 202,080 kilometers.

• Hailstorm: At the time it happened, the most expensive natural catastrophe in terms of property damage was a violent hailstorm that struck Calgary on September 7th, 1991. Insurance companies paid about $400 million to repair over 65,000 cars, 60,000 homes and businesses, and a number of aircraft.

• Tornado: The Regina Tornado of June 30th, 1912, rated as F4 (winds of 330 to 416 kilometres per hour), was the most severe tornado so far known in Canada. It killed 28 people, injured hundreds and demolished much of the downtown area.

• Most Severe Flood: The most severe flood in Canadian history occurred on October 14th to 15th, 1954, when Hurricane Hazel brought 214 millimeters of rain to the Toronto region in just 72 hours.

• Manitoulin Island: The world's largest island in a freshwater lake is Manitoulin Island in Lake Huron, 2,765 square kilometers.

• Mount Logan: The highest mountain in Canada is Mount Logan, Yukon Territory, 5,959 meters.

• Medicine Hat: Medicine Hat is the driest city, with 271 days without measurable precipitation. [Source: Phillips, D. 1990. The Climate of Canada. Catalogue No. En56-1/1990E. Ottawa: Minister of Supply and Services of Canada.]

Appendix B Extracting Canadian Climate Data from the Environment Canada dataset

B.1 Introduction

In this document, some instructions are given for using the climate data provided by Environment Canada [10]. The data we are using are contained in a file which can be downloaded from the Environment Canada website: http://www.weatheroffice.ec.gc.ca.
"The National Climate Data and Information Archive, operated and maintained by Environment Canada, contains official climate and weather observations for Canada" (quoting from the website). Environment Canada has published a series of climate data CDs: 1993, 1996, 2002, 2007. The newest version is the 2007 CD. The Environment Canada website also includes some other useful information, such as a glossary of terms used in the climate literature and some information about the files. In particular, the glossary includes the definition of precipitation:

Precipitation: The sum of the total rainfall and the water equivalent of the total snowfall observed during the day.

On the 2007 CD, data are stored in a binary format in several files. The CD includes two software tools for using the data, "cdcd" and "cdex", along with their manuals. "cdcd" is used to view the data and "cdex" to extract it. "cdex" can only extract the data for one climate station at a time, in certain formats which are not necessarily convenient to use in R (a well-known statistical package) or other statistical software. In these formats the longitude, latitude and elevation are missing. Hence, to get the data in our desired form, we need to read the binary files using another program. Bernhard Reiter has written code in Python to get the data, which is available online at http://www.intevation.de/∼bernhard/archiv/uwm/canadian climate cdformat/. However, this code fails to get the data for a large proportion of the stations. We have modified the code to get the data for all stations. The modified code [23] is available at http://bayes.stat.ubc.ca/∼reza/python. After getting the data, we need to write the data in our desired formats. We have also included many new functions in Python for different extraction purposes.

There are 7802 stations from all over Canada. The available variables are:

1. maximum temperature
2. minimum temperature
3. one-day rainfall
4. one-day snowfall
5. one-day precipitation
6. snow depth on the ground

These data are available both daily and monthly. For each station the data are available for different intervals of time. The data are saved in 8 directories on the CD, labeled 1, 2, ..., 8. They correspond to different territories of Canada:

1 --> British Columbia
2 --> Yukon Territory, Nunavut and Northwest Territories
3 --> Alberta
4 --> Saskatchewan
5 --> Manitoba
6 --> Ontario
7 --> Quebec
8 --> Nova Scotia, Newfoundland and Labrador

Each directory contains a number of data files and index files. For example, directory 3, which corresponds to Alberta, contains the following files: DATA.301, DATA.302, ..., DATA.308 and INDEX.301, INDEX.302, ..., INDEX.308. Each DATA file corresponds to the data of a region in Alberta and the corresponding INDEX file contains the information about the stations in the given region. In Figure B.1, you can see the locations of the available stations over Canada.

Figure B.1: Canada site locations (longitude W versus latitude N).

B.2 Using Python to extract data

In the following, we illustrate getting the data using the Python module "Reza_canadian_data.py". After opening the Python interface, let us import some necessary packages and tell Python where the data are stored. Using sys.path.append, specify the directory where Reza_canadian_data.py is stored, as shown below. Also, define Topdirectory to be where the data are stored.
>>> import sys
>>> sys.path.append("D:\School\Research\Climate\Python_code")
>>> Topdirectory="D:\Data"
>>> from Reza_canadian_data import *
>>> stations=get_station_list(Topdirectory)

Once you have done that, you can call the command get_station_list from Reza_canadian_data to get the list of the stations available on the CD. Let us see how many stations we have access to:

>>> len(stations)
7802

Let us pick a station, say the one at index 2436, and find out its id and index record.

>>> s=stations[2436]
>>> s.stationnumber
'3025480'
>>> s.index_record
('5480', 'RED DEER A ', 'YQF', 5211, 11354, 905, 1938, 1938, 1938, 1938, 1938, 1938, 1955, 2007, 2007, 2007, 2007, 2007, 2007, 2007, 9904)
>>> len(s.index_record)
21

The attribute "stationnumber" gives back the id of the given station on the CD. The stations in the same district start with the same numbers. For example, the stations in Alberta all start with 30, and so Red Deer is in Alberta. You can use cdcd to see the list of stations and id numbers to figure out which ids correspond to which districts.

The index_record command reads the information available for the given station. There are many values available and it is hard to understand what they mean. As you see, the index record has 21 components. Here is the explanation of each component:

1. The last four digits of the id
2. Station name
3. Airport: the three-character airport identifier that some stations have (e.g., "YWG" for Winnipeg); if none exists for this station then the field is left blank
4. Latitude
5. Longitude
6. Elevation
7. The first available year for max temperature
8. The first available year for min temperature
9. The first available year for mean temperature
10. The first available year for rainfall
11. The first available year for snowfall
12. The first available year for snow depth
13. The first available year for precipitation
14. The last available year for max temperature
15. The last available year for min temperature
16. The last available year for mean temperature
17. The last available year for rainfall
18. The last available year for snowfall
19. The last available year for precipitation
20. The last available year for snow depth
21. Starting record number: this record is a header that contains information about the station

Hence, for example, this station's name is Red Deer and it has maximum temperature data from 1938 to 2007. Whenever 9999 is recorded as the first available year and 55537 as the last available year for a variable, that variable is missing. As mentioned before, the available data for a given station are maximum temperature, minimum temperature, one-day rainfall, one-day snowfall, one-day precipitation and snow depth. These are coded in Reza_canadian_data.py as

"MT" "mint" "rain" "snow" "precip" "snow_ground"

We have used the following procedure in the Python interface to create a file "stations.txt", which has the information for all the available stations. In every row the information for one station is given. There are 22 columns; the first one is the station's id and the other 21 are as described above. Whenever the station was not an airport station, the airport identifier was recorded as NA. Notice how, using the "if" statement below, we have separated the case where the airport identifier is blank from the case where there is an airport identifier.
stations=get_station_list(Topdirectory)
f=open('stations.txt','w')
for s in stations:
    ind=s.index_record
    if ind[2]==' ':
        f.write(str(s.stationid)+','+str(ind[0])+','+str(ind[1])
            +','+'NA'+','+str(ind[3])+','+str(ind[4])+','+str(ind[5])
            +','+str(ind[6])+','+str(ind[7])+','+str(ind[8])
            +','+str(ind[9])+','+str(ind[10])+','+str(ind[11])
            +','+str(ind[12])+','+str(ind[13])+','+str(ind[14])
            +','+str(ind[15])+','+str(ind[16])+','+str(ind[17])
            +','+str(ind[18])+','+str(ind[19])+','+str(ind[20])
            +'\n')
    else:
        f.write(str(s.stationid)+','+str(ind[0])+','+str(ind[1])
            +','+str(ind[2])+','+str(ind[3])+','+str(ind[4])
            +','+str(ind[5])+','+str(ind[6])+','+str(ind[7])
            +','+str(ind[8])+','+str(ind[9])+','+str(ind[10])
            +','+str(ind[11])+','+str(ind[12])+','+str(ind[13])
            +','+str(ind[14])+','+str(ind[15])+','+str(ind[16])
            +','+str(ind[17])+','+str(ind[18])
            +','+str(ind[19])+','+str(ind[20])+'\n')
f.close()

One of the useful commands in Reza_canadian_data.py is get_data. Let us use this command to get some data.

>>> data=s.get_data(1995,"precip")
>>> len(data)
3
>>> len(data[0])
366
>>> len(data[1])
366
>>> len(data[2])
108

As you see, the data object created has three components. The first two components each have 366 entries and the third one has 108 entries. The first component contains the data values for each day of the year, here the amount of precipitation. The second component contains the flag associated with each daily value. The third component corresponds to monthly values, the number of missing days in a given month, and so on. Let us look at the first two components. We print the value of precipitation for the first 60 days of the year:

>>> for precip in data[0][0:60]: print "%5.1f" % precip,
0.0 0.2 0.0 0.0 0.0 0.5 0.0 0.0 0.0 2.0
0.8 0.0 0.2 0.4 2.2 0.4 0.6 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.4
0.0 0.0 0.0 0.0 0.0 0.2 1.2 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 3.0 0.0 0.0 -999.9

Everything looks fine other than the last value, which is -999.9. Every missing value in the dataset is shown as -999.9. In fact, to see the status of a data point, look at the corresponding flag, which is given in the second component of the data. Let us look at the flags for the first 60 days of the year as well:

>>> for flag in data[1][0:60]: print "%5.1f" % flag,
0.0 0.0 0.0 0.0 2.0 0.0 2.0 2.0 2.0 0.0
0.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 2.0 2.0 2.0 0.0 0.0 0.0 2.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 2.0 0.0 2.0 0.0 14.0

We need to know what each flag means. Note that the flag corresponding to -999.9 is 14. A description of the flags is given below:

0 -> Observed value
1 -> Estimated
2 -> Trace. Value is reported 0
3 -> Precipitation occurred, amount uncertain; value is 0
4 -> Precipitation may or may not have occurred; value is 0
5 -> Accumulated amount (from past days possibly)
6 -> Accumulated and estimated
7 to 13 -> unused
14 -> This is used to denote Feb 29 in a non-leap year
15 -> Missing data

In summary, only a data point with flag 0 (no flag) is a plain observed value. The flag 2.0 corresponds to "trace" (as called by Environment Canada), which is a precipitation amount under 0.2 mm that cannot be measured accurately; the value is reported as zero.
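To make the flag handling concrete, here is a small illustrative sketch (not part of the thesis module) of how the daily values and flags returned by get_data could be combined: days flagged 14 or 15 become None, days flagged 2 are marked as "trace", and all other values are kept as reported. The function name clean_daily_values is our own; only get_data and the flag codes listed above are taken from the description.

def clean_daily_values(values, flags):
    # values: data[0] from get_data, one entry per day of the year
    # flags:  data[1] from get_data, the flag for each daily value
    cleaned = []
    for value, flag in zip(values, flags):
        if flag in (14, 15):        # Feb 29 in a non-leap year, or missing data
            cleaned.append(None)    # can be written out later as NA
        elif flag == 2:             # trace: precipitation below 0.2 mm, reported as 0
            cleaned.append("trace")
        else:                       # observed, estimated or accumulated value
            cleaned.append(value)
    return cleaned

# For example, with the 1995 precipitation data extracted above:
# data = s.get_data(1995, "precip")
# cleaned = clean_daily_values(data[0], data[1])

This mirrors the behavior described in the next section for the writing functions, which report NA for missing days and "trace" for trace precipitation.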
Returning to the example printed earlier, the flag corresponding to -999.9 is 14, which denotes Feb 29 in a non-leap year as explained above; this makes sense since the 60th day of the 366-day layout corresponds to Feb 29th. There are 13 points flagged 2.0, so we have many "trace" values. For more information regarding the flags and the data format, refer to "Reza canadianinfo.txt". In order to extract and interpret the data, one needs to read each data point as well as the flag corresponding to that data point.

B.3 New functions to write stations' data

In the Python package "Reza_canadian_data.py", we have also introduced some functions to write the data for a given station, including the station's information such as longitude, latitude and elevation. Using these commands has the advantage that we do not need to worry about the flags anymore. Whenever the data are missing we get NA (instead of -999.9), and for trace values of precipitation (a precipitation occurrence with value smaller than 0.2 mm) we get "trace". The command for getting the data for a given station is "write_station( , , )". We need three arguments for this command: the station number, the list of all stations (which we can get with the command stations=get_station_list(Topdirectory) as shown above) and the value, one of "MT", "mint", "rain", "snow", "precip". For example, consider:

>>> write_station(2436,stations,'MT')

The output of this command (if the data are available) is a text file. There is also an output in the terminal: if the data are available the output is "success" together with the name of the text file created; if the data are not available the output is simply "failure". A statement is also printed depending on whether the data are available or not; if they are, the number of years for which data are available is reported along with the name of the file created. For the above example, we get:

The data file 3025480-MT.txt created. It should contain 69 years.
('success', '3025480-MT.txt')

If the data are not available, we get:

There was not any years containing more than 100 days. No file created.
('failure','none')

Also note that we write data for a year only if it contains more than 100 days of data. You can modify this easily by modifying the write function in the module.

The data files are named by the id followed by "MT", "mint" or "precip", which stand for max temperature, min temperature and precipitation respectively. For example, since ABBOTSFORD's id is "1100030", the file containing the maximum temperature data for ABBOTSFORD is called "1100030-MT.txt". Each row of the data files corresponds to a year. The first entry is the year, followed by 366 entries corresponding to the observed daily values for that year. Whenever the actual year has only 365 days, the value corresponding to Feb 29th (the 60th of the 366 days) is recorded as NA. Note that we can use this command in a "for" loop to write a number of stations. To keep track of the stations that have data available, we make a list of the created files. In the following we have done that: stations_list contains the numbers of the stations that have data available, and stations_files contains the names of the created files.
# station indices (positions in the stations list) that we want to write out
subset=[733,4034,2517,7467,6744,1518,2113,7269]
value='MT'
stations_list=[]
stations_files=[]
for i in subset:
    d=write_station(i,stations,value)
    if d[0]=='success':
        stations_list.append(i)
        stations_files.append(d[1])

B.4 Concluding remarks

The software described in this report can be used to generate datasets suitable for analysis with R and other standard statistical software. Moreover, the tutorials and demonstrations should help users understand the process for doing so.

Appendix C Algorithms and Complexity

In this appendix, we include the definitions for the complexity of algorithms. For a more detailed treatment see, for example, [28].

Definition: We say that f(x) = o(g(x)) if lim_{x→∞} f(x)/g(x) exists and is equal to 0.

Definition: We say that f(x) = O(g(x)) if there exist C and x0 such that |f(x)| < Cg(x) for all x > x0.

Definition: We say that f(x) = Θ(g(x)) if there are constants c1 ≠ 0, c2 ≠ 0 and x0 such that for all x > x0 it is true that c1 g(x) < f(x) < c2 g(x).

Definition: We say that f(x) = Ω(g(x)) if there is an ε > 0 and a sequence x1, x2, x3, · · · with xn → ∞ as n → ∞, such that |f(xj)| > ε g(xj) for all j.

Appendix D Notations and Definitions

We follow widely used conventions throughout the thesis. Latin upper-case letters, often X, Y, Z, sometimes with subscripts such as s, t, are used for random variables.

List of notation and abbreviations:

R               The real numbers: (−∞, ∞)
N               The natural numbers: 1, 2, · · ·
Z               The integer numbers: · · · , −2, −1, 0, 1, 2, · · ·
∼               Distributed as
≈               Approximately equal to
Σ               A σ-field
(Ω, Σ, P)       A probability space over the set Ω and σ-algebra Σ
X               A random variable; X : (Ω, Σ, P) → (R, B) (a measurable function from Ω to R with its Borel σ-field)
F_X             The distribution function of the random variable X
X|Y             Random variable X conditional on random variable Y
α̂               Estimate of α
N(µ, σ²)        Univariate normal distribution with mean µ and variance σ²
{X_t}_{t∈T}     A stochastic process over the space T
station         Gauged site where measurements are available
i.i.d           Independently and identically distributed
E(X)            Expectation of the random variable X
Var(X)          Variance of the random variable X
Cov(X, Y)       Covariance of the random variables X and Y
LHS             Left hand side
RHS             Right hand side
iff             If and only if
∅               The empty set
sort(x)         The sorted version of the vector x
stack(x, y)     Put (concatenate) the two vectors x and y together (starting from x and ending with y) to make a single vector
length(x)       The length (dimension) of the vector x
argmin_a f(a)   The set of elements of the domain of f that minimize f
MT              Maximum temperature during a day
mt              Minimum temperature during a day
PN              Precipitation amount (or occurrence) during a day
COS and SIN     The deterministic processes cos(ωt) and sin(ωt), where t denotes time and ω = 2π/366
AIC and BIC     Akaike information criterion and Bayesian information criterion
A ⊂ B           A is a subset of B (it is possible that A = B)
p_n ↑ p         The sequence p_n is non-decreasing and tends to p
p_n ↓ p         The sequence p_n is non-increasing and tends to p
X_(i) or X_{i:n}  The ith order statistic of a random sample X_1, · · · , X_n
{x | P(x)}      The set of elements that satisfy the property P(x)

Definitions

• F_X or F_X^c: The distribution function of a random variable; F_X(x) = P(X ≤ x).

• F_X^o: The open distribution function; F_X^o(x) = P(X < x).

• G_X^o: The open right distribution function; G_X^o(x) = P(X > x).

• G_X^c: The closed right distribution function; G_X^c(x) = P(X ≥ x).

• lq_X(p): The left quantile function; lq_X(p) = inf{x | F_X(x) ≥ p}.

• rq_X(p): The right quantile function; rq_X(p) = inf{x | F_X(x) > p}.
• (Ω, P, {X_θ}_{θ∈Θ}): A statistical space consisting of a set Ω, a probability measure P on Ω and a set of random variables {X_θ}_{θ∈Θ} indexed by a parameter θ in the parameter space Θ, a subset of Euclidean space.

• 1_A(x): The standard indicator function, formally defined by 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 if x ∉ A.

• δ_X: The probability loss function associated with the random variable X, δ_X(a, b) = P(a < X < b) + P(b < X < a).
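As a small illustration of these definitions (not part of the thesis code), the left and right quantile functions of the empirical distribution of a finite sample, and the empirical version of the probability loss function, could be computed along the following lines. The function names are ours, p is assumed to lie in (0, 1), and P denotes the relative frequency in the sample.

def left_quantile(sample, p):
    # lq(p) = inf{x : F(x) >= p} for the empirical distribution of the sample
    xs = sorted(sample)
    n = len(xs)
    for i, x in enumerate(xs, start=1):
        if i / float(n) >= p:      # F(x) = i/n at the i-th order statistic
            return x
    return xs[-1]

def right_quantile(sample, p):
    # rq(p) = inf{x : F(x) > p} for the empirical distribution of the sample
    xs = sorted(sample)
    n = len(xs)
    for i, x in enumerate(xs, start=1):
        if i / float(n) > p:
            return x
    return xs[-1]

def probability_loss(sample, a, b):
    # Empirical delta_X(a, b) = P(a < X < b) + P(b < X < a)
    lo, hi = min(a, b), max(a, b)
    return sum(1 for x in sample if lo < x < hi) / float(len(sample))

# For the sample [1, 2, 3, 4] and p = 0.5, left_quantile returns 2 and
# right_quantile returns 3, since F(2) = 0.5 and F(3) = 0.75; the two
# sample quantiles differ exactly when F takes the value p at a sample point.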