{"Affiliation":[{"label":"Affiliation","value":"Science, Faculty of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."},{"label":"Affiliation","value":"Statistics, Department of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."}],"AggregatedSourceRepository":[{"label":"AggregatedSourceRepository","value":"DSpace","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","classmap":"ore:Aggregation","property":"edm:dataProvider"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","explain":"A Europeana Data Model Property; The name or identifier of the organization who contributes data indirectly to an aggregation service (e.g. 
Europeana)"}],"Campus":[{"label":"Campus","value":"UBCV","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","classmap":"oc:ThesisDescription","property":"oc:degreeCampus"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","explain":"UBC Open Collections Metadata Components; Local Field; Identifies the name of the campus from which the graduate completed their degree."}],"Creator":[{"label":"Creator","value":"Lo, Chi Ho","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/creator","classmap":"dpla:SourceResource","property":"dcterms:creator"},"iri":"http:\/\/purl.org\/dc\/terms\/creator","explain":"A Dublin Core Terms Property; An entity primarily responsible for making the resource.; Examples of a Contributor include a person, an organization, or a service."}],"DateAvailable":[{"label":"DateAvailable","value":"2009-10-08T20:16:29Z","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"edm:WebResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"DateIssued":[{"label":"DateIssued","value":"2009","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"oc:SourceResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"Degree":[{"label":"Degree","value":"Doctor of Philosophy - PhD","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","classmap":"vivo:ThesisDegree","property":"vivo:relatedDegree"},"iri":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","explain":"VIVO-ISF Ontology V1.6 Property; The thesis degree; Extended Property specified by UBC, as per https:\/\/wiki.duraspace.org\/display\/VIVO\/Ontology+Editor%27s+Guide"}],"DegreeGrantor":[{"label":"DegreeGrantor","value":"University of 
British Columbia","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","classmap":"oc:ThesisDescription","property":"oc:degreeGrantor"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","explain":"UBC Open Collections Metadata Components; Local Field; Indicates the institution where thesis was granted."}],"Description":[{"label":"Description","value":"The advancement of biotechnologies has led to indispensable high-throughput techniques for biological and medical research. Microarray is applied to monitor the expression levels of thousands of genes simultaneously, while flow cytometry (FCM) offers rapid quantification of multi-parametric properties for millions of cells. In this thesis, we develop approaches based on mixture modeling to deal with the statistical issues arising from both high-throughput biological data sources.\n\tInference about differential expression is a typical objective in analysis of gene expression data. The use of Bayesian hierarchical gamma-gamma and lognormal-normal models is popular for this type of problem. Some unrealistic assumptions, however, have been made in these frameworks. In view of this, we propose flexible forms of mixture models based on an empirical Bayes approach to extend both frameworks so as to release the unrealistic assumptions, and develop EM-type algorithms for parameter estimation. The extended frameworks have been shown to significantly reduce the false positive rate whilst maintaining a high sensitivity, and are more robust to model misspecification.\n\tFCM analysis currently relies on the sequential application of a series of manually defined 1D or 2D data filters to identify cell populations of interest. This process is time-consuming and ignores the high-dimensionality of FCM data. We reframe this as a clustering problem, and propose a robust model-based clustering approach based on t mixture models with the Box-Cox transformation for identifying cell populations. 
We describe an EM algorithm to simultaneously handle parameter estimation along with transformation selection and outlier identification, issues of mutual influence. Empirical studies have shown that this approach is well adapted to FCM data, in which a high abundance of outliers and asymmetric cell populations are frequently observed. Finally, in recognition of concern for an efficient automated FCM analysis platform, we have developed an R package called flowClust to automate the gating analysis with the proposed methodology. Focus during package development has been put on the computational efficiency and convenience of use at users' end. The package offers a wealth of tools to summarize and visualize features of the clustering results, and is well integrated with other FCM packages.","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/description","classmap":"dpla:SourceResource","property":"dcterms:description"},"iri":"http:\/\/purl.org\/dc\/terms\/description","explain":"A Dublin Core Terms Property; An account of the resource.; Description may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the resource."}],"DigitalResourceOriginalRecord":[{"label":"DigitalResourceOriginalRecord","value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/13762?expand=metadata","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","classmap":"ore:Aggregation","property":"edm:aggregatedCHO"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","explain":"A Europeana Data Model Property; The identifier of the source object, e.g. the Mona Lisa itself. 
This could be a full linked open data URI or an internal identifier"}],"Extent":[{"label":"Extent","value":"9508183 bytes","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/extent","classmap":"dpla:SourceResource","property":"dcterms:extent"},"iri":"http:\/\/purl.org\/dc\/terms\/extent","explain":"A Dublin Core Terms Property; The size or duration of the resource."}],"FileFormat":[{"label":"FileFormat","value":"application\/pdf","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/elements\/1.1\/format","classmap":"edm:WebResource","property":"dc:format"},"iri":"http:\/\/purl.org\/dc\/elements\/1.1\/format","explain":"A Dublin Core Elements Property; The file format, physical medium, or dimensions of the resource.; Examples of dimensions include size and duration. Recommended best practice is to use a controlled vocabulary such as the list of Internet Media Types [MIME]."}],"FullText":[{"label":"FullText","value":"Statistical Methods for High Throughput Genomics by Chi Ho Lo B.Sc., The University of Hong Kong, 2003 M.Phil., The University of Hong Kong, 2005 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Statistics) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) September 2009 \u00a9 Chi Ho Lo 2009 \fAbstract The advancement of biotechnologies has led to indispensable high-throughput techniques for biological and medical research. Microarray is applied to monitor the expression levels of thousands of genes simultaneously, while flow cytometry (FCM) offers rapid quantification of multi-parametric properties for millions of cells. In this thesis, we develop approaches based on mixture modeling to deal with the statistical issues arising from both high-throughput biological data sources. Inference about differential expression is a typical objective in analysis of gene expression data.
The use of Bayesian hierarchical gamma-gamma and lognormal-normal models is popular for this type of problem. Some unrealistic assumptions, however, have been made in these frameworks. In view of this, we propose flexible forms of mixture models based on an empirical Bayes approach to extend both frameworks so as to relax the unrealistic assumptions, and develop EM-type algorithms for parameter estimation. The extended frameworks have been shown to significantly reduce the false positive rate whilst maintaining a high sensitivity, and are more robust to model misspecification. FCM analysis currently relies on the sequential application of a series of manually defined 1D or 2D data filters to identify cell populations of interest. This process is time-consuming and ignores the high-dimensionality of FCM data. We reframe this as a clustering problem, and propose a robust model-based clustering approach based on t mixture models with the Box-Cox transformation for identifying cell populations. We describe an EM algorithm to simultaneously handle parameter estimation along with transformation selection and outlier identification, issues of mutual influence. Empirical studies have shown that this approach is well adapted to FCM data, in which a high abundance of outliers and asymmetric cell populations are frequently observed. Finally, in recognition of the need for an efficient automated FCM analysis platform, we have developed an R package called flowClust to automate the gating analysis with the proposed methodology. Focus during package development has been put on computational efficiency and convenience of use at users\u2019 end. The package offers a wealth of tools to summarize and visualize features of the clustering results, and is well integrated with other FCM packages.
\fTable of Contents
Abstract . . . ii
Table of Contents . . . iv
List of Tables . . . viii
List of Figures . . . ix
Acknowledgements . . . xii
Statement of Co-Authorship . . . xiv
1 Introduction . . . 1
  1.1 Differential Gene Expression Analysis of Microarray Data . . . 2
    1.1.1 The Technology of Microarrays . . . 2
    1.1.2 Methods for Detecting Differentially Expressed Genes . . . 7
  1.2 Gating Analysis of Flow Cytometry Data . . . 12
    1.2.1 The Technology of Flow Cytometry . . . 12
    1.2.2 Methods for Identifying Cell Populations . . . 15
Bibliography . . . 20
2 Flexible Empirical Bayes Models for Differential Gene Expression . . . 25
  2.1 Introduction . . . 25
  2.2 A Bayesian Framework for Identifying Differential Expression . . . 27
    2.2.1 A Hierarchical Model for Measured Intensities . . . 27
    2.2.2 A Mixture Model for Differential Expression . . . 29
    2.2.3 Parameter Estimation using the EM-algorithm . . . 30
  2.3 Application to Experimental Data . . . 32
    2.3.1 Data Description . . . 32
    2.3.2 Results . . . 34
  2.4 Simulation Studies . . . 38
    2.4.1 Data Generation . . . 38
    2.4.2 Results . . . 38
  2.5 Discussion . . . 43
Bibliography . . . 44
3 Flexible Mixture Modeling via the Multivariate t Distribution with the Box-Cox Transformation . . . 48
  3.1 Introduction . . . 48
  3.2 Methodology . . . 51
    3.2.1 Preliminaries . . . 51
    3.2.2 The Multivariate t Distribution with the Box-Cox Transformation . . . 53
    3.2.3 The Mixture Model of t Distributions with the Box-Cox Transformation . . . 55
  3.3 Application to Real Data . . . 64
    3.3.1 Data Description . . . 64
    3.3.2 Results . . . 65
  3.4 Simulation Studies . . . 72
    3.4.1 Data Generation . . . 73
    3.4.2 Results . . . 74
  3.5 Discussion . . . 76
Bibliography . . . 79
4 Automated Gating of Flow Cytometry Data via Robust Model-Based Clustering . . . 84
  4.1 Introduction . . . 84
  4.2 Materials and Methods . . . 88
    4.2.1 Data Description . . . 88
    4.2.2 The Model . . . 89
    4.2.3 Density Estimation . . . 91
    4.2.4 Sequential Approach to Clustering . . . 91
  4.3 Results . . . 94
    4.3.1 Application to Real Datasets . . . 94
    4.3.2 Simulation studies . . . 102
  4.4 Discussion . . . 105
Bibliography . . . 109
5 flowClust: a Bioconductor package for automated gating of flow cytometry data . . . 115
  5.1 Introduction . . . 115
  5.2 Implementation . . . 116
  5.3 Results and Discussion . . . 118
    5.3.1 Analysis of Real FCM Data . . . 118
    5.3.2 Integration with flowCore . . . 126
  5.4 Conclusion . . . 127
Bibliography . . . 129
6 Conclusion and Future Directions . . . 131
  6.1 Summary and Discussion . . . 131
  6.2 Future Directions . . . 133
    6.2.1 Robustification of the Empirical Bayes Model for Differential Gene Expression . . . 133
    6.2.2 Development of an Automated FCM Analysis Pipeline . . . 135
    6.2.3 Combining Mixture Components in Clustering . . . 138
Bibliography . . . 142
Appendices
A Additional Material for Chapter 2 . . . 145
  A.1 Marginal Densities of Measured Intensities . . . 145
  A.2 Estimation of \u03b7 and \u03be for the Prior of ag . . . 147
  A.3 Initialization of the EM Algorithm . . . 147
B Vignette of the flowClust Package . . . 149
  B.1 Licensing . . . 149
  B.2 Overview . . . 149
  B.3 Installation . . . 150
    B.3.1 Unix\/Linux\/Mac Users . . . 150
    B.3.2 Windows Users . . . 152
  B.4 Example: Clustering of the Rituximab Dataset . . . 153
    B.4.1 The Core Function . . . 153
    B.4.2 Visualization of Clustering Results . . . 156
    B.4.3 Integration with flowCore . . . 159
C Code to Produce the Plots in Chapter 5 . . . 163
\fList of Tables
2.1 Analysis of differential expression with the HIV-1 data. . . . 34
2.2 Analysis of differential expression with the HGU95A spike-in data. . . . 36
2.3 Analysis of differential expression with the HGU133A spike-in data. . . . 37
3.1 Misclassification rates for different models applied to the bankruptcy and crabs datasets. . . . 66
3.2 The number of components selected by the BIC for different models applied to the bankruptcy and crabs datasets. . . . 73
3.3 Average misclassification rates for different models applied to datasets generated under the bankruptcy or crabs setting. . . . 74
3.4 90% coverage intervals of the number of components selected by the BIC for different models applied to datasets generated under the crabs setting. . . . 76
4.1 Average misclassification rates for different models applied to data generated under the Rituximab or GvHD setting. . . . 103
4.2 Modes and 80% coverage intervals of the number of clusters selected by the BIC for different models applied to data generated under the GvHD setting. . . . 106
\fList of Figures
1.1 The central dogma of molecular biology. . . . 3
1.2 The cDNA microarray experiment. . . . 5
1.3 The representation of a gene with a probe set on Affymetrix GeneChip arrays. . . . 6
1.4 The schematic overview of a typical flow cytometer setup. . . . 13
1.5 The occurrence of light scattering. . . . 14
1.6 Specific binding of fluorochrome-labeled antibodies to antigens. . . . 15
1.7 A typical manual gating analysis. . . . 16
2.1 Histograms of robust empirical estimates of ag\u2019s with fitted Lognormal density curves shown on a log scale under the extended Gamma-Gamma modeling framework. . . . 28
2.2 Simulation results generated from the extended GG model. . . . 39
2.3 Simulation results generated from the extended LNN model. . . . 40
2.4 Simulation results generated from the EBarrays GG model. . . . 41
2.5 Simulation results generated from the EBarrays LNN model. . . . 42
3.1 Contour plots revealing the shape of bivariate t distributions with the Box-Cox transformation for different values of the transformation parameter. . . . 54
3.2 Scatterplots revealing the assignment of observations for different models applied to the crabs dataset. . . . 63
3.3 Scatterplots revealing the assignment of observations for different models applied to the bankruptcy dataset. . . . 67
3.4 Plots revealing the location of misclassified observations relative to the ordered uncertainties of all observations for different models applied to the bankruptcy dataset. . . . 68
3.5 Plots revealing the assignment of observations for different models applied to the crabs dataset, displayed via the second and third principal components. . . . 70
3.6 Plots revealing the location of misclassified observations relative to the ordered uncertainties of all observations for different models applied to the crabs dataset. . . . 71
3.7 Plots of BIC against the number of components for the different models applied to the bankruptcy and crabs datasets. . . . 72
4.1 A synthetic 2D dataset with three mixture components. . . . 86
4.2 Strategy for clustering the GvHD positive sample to look for CD3+CD4+CD8\u03b2+ cells. . . . 92
4.3 Strategy for clustering the GvHD control sample. . . . 93
4.4 Initial clustering of the Rituximab data using the FSC and SSC variables. . . . 95
4.5 BIC as a function of the number of clusters for different models applied to the Rituximab data. . . . 96
4.6 Second-stage clustering of the Rituximab data using all the fluorescent markers (three clusters). . . . 97
4.7 Second-stage clustering of the Rituximab data using all the fluorescent markers (four clusters). . . . 98
4.8 Initial clustering of the GvHD positive sample using the FSC and SSC variables. . . . 100
4.9 Second-stage clustering of the GvHD positive and control samples using all the fluorescent markers. . . . 101
4.10 A representative sample generated from the t mixture model with the Box-Cox transformation under the GvHD setting. . . . 104
5.1 A plot of BIC against the number of clusters for the first-stage cluster analysis. . . . 120
5.2 A scatterplot revealing the cluster assignment in the first-stage analysis. . . . 122
5.3 A plot of BIC against the number of clusters for the second-stage cluster analysis. . . . 124
5.4 Plots of CD8\u03b2 against CD4 for the CD3+ population. . . . 125
6.1 The overall flow of the proposed automated FCM analysis pipeline. . . . 136
6.2 Clustering of red blood cell samples from the PNH data. . . . 137
\fAcknowledgements
I dedicate my greatest gratitude to my supervisors, Raphael Gottardo and Ryan R. Brinkman, for their inspirational guidance and continuous support throughout this journey.
I would also like to thank my committee members, Jenny Bryan and Kevin Murphy, for their invaluable advice on my research and career. In addition to the aforementioned exceptional scholars, I feel honored to study at the UBC Statistics Department, which is filled with excellent mentors including Paul Gustafson, Harry Joe, John Petkau and Mat\u00edas Salibi\u00e1n-Barrera (in alphabetical order). I have learnt so much from their insightful advice and tremendous knowledge of statistics. My list of acknowledgements also goes to
\u2022 Ryan R. Brinkman (again!), Ali Bashashati and Josef Spidlen, among others, for their assistance and helpful discussion during my internship at the British Columbia Cancer Research Centre;
\u2022 Florian Hahne, Martin Morgan, Patrick Aboyoun and Marc Carlson from Fred Hutchinson Cancer Research Centre for their professional advice on the technical issues in software development;
\u2022 Bakul Dalal, Maura Gasparetto, Mario Roederer, Clayton Smith and the British Columbia Cancer Research Centre for kindly offering clinical data and providing assistance to interpret the data;
\u2022 WestGrid for providing tremendous computational resources which are indispensable to my research, and its technical support team, Roman Baranowski in particular, for their prompt and efficient response to my endless questions about the priority system for resource allocation;
\u2022 Tony W. K. Fung for enriching my research experience with his excellent supervision during my Master\u2019s at the University of Hong Kong;
\u2022 the office ladies, namely, Peggy Ng, Elaine Salameh and Viena Tran, who, I truly feel, always provide efficient assistance with sincere care rather than merely carrying out their duties.
The research constituting this thesis has received funding support from Genome Canada, MITACS, NIH, NSERC, PIMS, and University Graduate Fellowships.
\fStatement of Co-Authorship
This thesis was completed under the supervision of Dr.
Raphael Gottardo and Dr. Ryan R. Brinkman. Chapter 2 of this thesis is co-authored with Dr. Raphael Gottardo. I developed the methodology, performed the analysis, and prepared the manuscript. Chapter 3 is co-authored with Dr. Raphael Gottardo. I identified the research problem, conceived of the study, developed the methodology, performed the analysis, and prepared the manuscript. Chapter 4 is co-authored with Dr. Ryan R. Brinkman and Dr. Raphael Gottardo. I developed the methodology, performed the analysis, and prepared the manuscript. Chapter 5 is co-authored with Dr. Florian Hahne, Dr. Ryan R. Brinkman and Dr. Raphael Gottardo. I conceived of the study, developed the methodology and software, performed the analysis, and prepared the manuscript.
\fChapter 1 Introduction
Recent technological advances in molecular biology have enabled the rapid quantification of characteristics for an enormous number of genes or cells under the same experimental condition. Microarray has been a popular technique for monitoring the expression levels of thousands of genes for more than a decade, while flow cytometry (FCM) offers quantification of multi-parametric properties for up to millions of cells. To date, extensive applications of these two high-throughput technologies can be found in health research, medical diagnosis and treatment, drug discovery and vaccine development (Schena et al., 1995; DeRisi et al., 1996; Behr et al., 1999; Debouck and Goodfellow, 1999; Hengel and Nicholson, 2001; Braylan, 2004; Illoh, 2004; Mandy, 2004; Orfao et al., 2004; Bolton and Roederer, 2009). The interest in studying changes in gene expression levels over experimental conditions has led to the development of a wealth of methodology for identifying differentially expressed genes.
Meanwhile, the tremendous attention FCM has attracted in recent years has created a pressing need for both methodological and software development for an automated analysis platform for gating, the process of identifying cell populations. In this thesis, we show that the aforementioned issues can be recast as clustering problems, clustering being the statistical process of looking for homogeneous groups of observations. We introduce flexible forms of finite mixture models (Titterington et al., 1985; Banfield and Raftery, 1993; McLachlan and Peel, 2000; Fraley and Raftery, 2002), commonly applied as a statistical tool for clustering, which serve as the modeling basis for approaches developed to deal with the issues arising from both high-throughput biological data sources. In this chapter, we review the technology of microarrays and several popular methods for differential gene expression. We then give a brief account of the FCM technology as well as a few attempts to automate the gating analysis to date. Next, in Chapter 2, we introduce mixture models based on a flexible empirical Bayes approach to detect differentially expressed genes in microarray data. This approach relaxes the unrealistic assumptions and enhances the flexibility of the models introduced in Newton et al. (2001) and Kendziorski et al. (2003). In Chapter 3, we develop a unified framework to simultaneously handle data transformation, outlier identification and clustering, issues which are of mutual influence. This methodology stems from a mixture model using the multivariate t distributions with the Box-Cox transformation, which can be viewed as a new class of distributions extending the t distribution. We proceed to present in Chapter 4 the results obtained by applying the proposed methodology to FCM data, in which cell populations asymmetric in shape and an abundance of outliers are often observed.
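Since mixture modeling fitted by EM recurs throughout the thesis, a toy sketch may help fix ideas. The code below fits a two-component univariate Gaussian mixture by EM and includes the standard Box-Cox transformation for reference; it is an illustration only, on made-up data, and the names (`box_cox`, `em_two_gaussians`) are not from the thesis. The thesis' actual model uses multivariate t components and estimates the Box-Cox parameter jointly with the cluster parameters.

```python
import numpy as np

def box_cox(y, lam):
    """Standard Box-Cox transformation: (y**lam - 1)/lam, or log(y) when lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def em_two_gaussians(x, n_iter=200):
    """Toy EM for a two-component univariate Gaussian mixture."""
    x = np.asarray(x, dtype=float)
    # Crude initialization from the lower and upper quartiles of the data.
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        dens = np.stack([
            pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: update weights, means and standard deviations.
        nk = resp.sum(axis=1)
        pi = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

# Two well-separated synthetic clusters.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])
pi, mu, sigma = em_two_gaussians(x)
```

In the thesis' setting, each E-step additionally carries latent weights for the t distribution (handling outliers) and the M-step updates the transformation parameter, which is what ties outlier identification, transformation selection and clustering together.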
In Chapter 5 we introduce an open-source software package called flowClust to implement the methodology introduced in Chapters 3 and 4. This publicly available package addresses a bottleneck in FCM analysis: the dearth of software tools to manage, analyze and present data on a sound theoretical footing. Finally, we conclude in Chapter 6 with a discussion of the overall contribution of this research work, and directions for future extensions.
1.1 Differential Gene Expression Analysis of Microarray Data
1.1.1 The Technology of Microarrays
The structure, function, development and reproduction of an organism depend on the type and amount of proteins present in each cell and tissue. A protein is a sequence of up to 20 types of amino acids, which is specified by the nucleotide sequence of the encoding gene(s). The synthesis of proteins consists of two major stages, transcription and translation, and is described by the central dogma of molecular biology (Crick, 1970); see Figure 1.1.
[Figure 1.1: The central dogma of molecular biology. The synthesis of proteins consists of two major stages, transcription and translation. Part of the DNA is first transcribed into the single-stranded mRNA taking a complementary sequence. The mRNA then migrates from the nucleus to the cytoplasm, and is translated into proteins. (Picture source: accessexcellence.org)]
The genetic information encoded by the deoxyribonucleic acid (DNA) is first transcribed into the messenger ribonucleic acid (mRNA), a single-stranded sequence complementary to the base sequence in the DNA. The mRNA then migrates from the nucleus to the cytoplasm, and is translated into proteins. When a gene is transcribed and then translated, we say that it is expressed. Cells under different conditions tend to express different sets of genes, and thereby synthesize different proteins. To understand a biological process, it is important to know what proteins are being produced.
Nonetheless, due to their complex structures, proteins are difficult to analyze. Because the mRNA gets translated into proteins, the analysis of gene expression helps provide information about the biological process of interest. This is where microarrays, a technology which facilitates the simultaneous measurement of expression levels of thousands of genes, come in. The microarray technology relies on two key chemical processes, reverse transcription and hybridization. The process of reverse transcription creates a single-stranded complementary DNA (cDNA) copy of mRNA transcripts experimentally isolated from a cell. Hybridization is the process of combining two single strands of DNA or RNA into a single molecule. Two strands which are perfectly complementary to each other tend to bind together, resulting in specific hybridization. The term \u201cspecific\u201d is used as opposed to the case in which binding randomly occurs between two strands that do not form a complementary pair. Microarrays can be classified into two categories: the cDNA microarrays and the oligonucleotide arrays. Below we give a brief account of each of these two categories.
cDNA Microarrays
A cDNA microarray consists of thousands of microscopic spots attached to a solid surface, with each spot containing a massive number of identical DNA sequences to serve as probes. The choice of probes may be customized in-house to satisfy specific experimental needs. In a typical dual-color cDNA microarray experiment, two mRNA samples extracted from different experimental conditions are reverse-transcribed into cDNA, labeled with different fluorescent dyes (red and green), mixed and targeted against the probes on the microarray. Owing to the preferential binding of a labeled cDNA molecule (target) to a probe containing the complementary sequence, specific hybridization occurs under stringent conditions. The array is then scanned and the red and green intensities for each spot are measured. Figure 1.2 illustrates such an experiment.
[Figure 1.2: The cDNA microarray experiment. A cDNA microarray is a glass microscope slide spotted with individual DNA sequences as probes. The cDNA solutions, prepared from mRNA by reverse transcription, are labeled with green and red dyes respectively to identify the source (control and treatment). The mixed cDNA target solution is then hybridized with the probes on the microarray. The array is scanned twice to obtain images for the red and green intensities.]
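As a small illustration of what the scanned red (Cy5) and green (Cy3) intensities feed into, each spot is commonly summarized by the log-ratio M = log2(R/G) and the average log-intensity A. This is a generic preprocessing convention, not a step prescribed by this thesis, and the intensity values below are made up:

```python
import numpy as np

# Hypothetical background-corrected red (Cy5, treatment) and green (Cy3, control)
# intensities for five spots; a real array carries thousands of spots.
red = np.array([1200.0, 300.0, 5000.0, 800.0, 64.0])
green = np.array([300.0, 310.0, 5100.0, 790.0, 256.0])

m = np.log2(red / green)                    # log-ratio: > 0 means higher in treatment
a = 0.5 * (np.log2(red) + np.log2(green))   # average log-intensity per spot
```

On this scale a fourfold increase in the treatment channel gives M = 2, an unchanged gene gives M near 0, and a fourfold decrease gives M = -2, which is why downstream tests are applied to log intensities or log-ratios.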
Oligonucleotide Arrays

In a high-density oligonucleotide array, the probes are composed of short DNA sequences known as oligonucleotides. Affymetrix GeneChip arrays, which cover up to 33,000 genes with probes containing 25bp-long oligonucleotides, are the most popular in this technology. Each gene is represented by a set of 11–20 distinct probe pairs. The perfect match (PM) probe of each probe pair contains a section of the mRNA molecule of interest; the mismatch (MM) probe is created by changing the middle (13th) base of the PM probe to its complement. The use of probe sets to represent genes reduces the chance of non-specific hybridization by including only probes unique to the genome, while the presence of MM probes helps quantify the non-specific hybridization that still occurs. A graphical depiction of the relationship between gene sequence and probe set is given in Figure 1.3.

Figure 1.3: The representation of a gene with a probe set on Affymetrix GeneChip arrays. A probe set consists of 11–20 distinct probe pairs. Each perfect match (PM) probe contains an excerpted sequence (25bp long) of the gene. A mismatch (MM) probe is created from a PM probe by changing the middle base to its complement.
Various methods are available for computing expression summary values from the probe intensities, for example, gcRMA (Wu et al., 2004), RMA (Irizarry et al., 2003), MAS 5 (Affymetrix Manual, 2001), and dChip (Li and Wong, 2001). Compared to cDNA microarrays, high-density oligonucleotide arrays have a lower chance of non-specific hybridization and a higher detection specificity, and allow more genes to be probed in one experiment. However, an Affymetrix GeneChip array does not support a dual-channel system and can be exposed to only one sample in each experiment. Also, as an off-the-shelf product, the probes on the array cannot be customized.

1.1.2 Methods for Detecting Differentially Expressed Genes

The analysis of differential gene expression helps us understand how genes are differentially expressed under different conditions, for example, normal and cancer tissues. In recent years, there has been a considerable amount of work on the detection of differentially expressed genes. In the following we review some representative methods.

t Tests and Variants

Simple statistical treatments include the use of two-sample t tests on the log intensities, or one-sample t tests on the log intensity ratios, for each gene (Callow et al., 2000). A gene is declared differentially expressed if its p-value is less than a threshold, for example, 0.05. Because of the large number of hypothesis tests, adjustment methods such as Bonferroni or Holm-Bonferroni should be employed to control the familywise error rate, the probability of yielding one or more false positives. In addition, because of the small number of replicates in microarray experiments, the gene-specific variance can be poorly estimated.
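The Holm-Bonferroni adjustment mentioned above is short enough to sketch directly. The helper below is illustrative (it is not taken from any of the cited packages) and implements the usual step-down comparisons:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down adjustment controlling the familywise
    error rate.  Returns a boolean array: True marks a rejected null."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):
        # compare the (rank+1)-th smallest p-value with alpha / (m - rank)
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

# rejects the two smallest of these four p-values at alpha = 0.05
print(holm_bonferroni([0.001, 0.010, 0.030, 0.040]))
```

The plain Bonferroni correction would instead compare every p-value with alpha/m; Holm's procedure is uniformly more powerful while giving the same familywise guarantee.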
Baldi and Long (2001) suggested using a modified t test statistic in which the denominator is regularized by a weighted average of the gene-specific and global variance estimates:

\[
\frac{\nu_0 s_0^2 + (R - 1) s_g^2}{\nu_0 + R - 2}, \qquad (1.1)
\]

where R is the number of replicates, $s_0^2$ and $s_g^2$ are respectively the estimates of the global and gene-specific variances, and $\nu_0$ is a tuning parameter that governs the contribution of the global variance. Here, "global" may refer to all the genes, or to those in the neighborhood of gene g. This regularized variance estimate is derived as the mean of the posterior distribution of the gene-specific variance in a Bayesian framework.

Significance Analysis of Microarrays (SAM)

SAM, proposed by Tusher et al. (2001), uses a regularized t statistic $d_g$ in which a constant c is added to the gene-specific standard error $s_g$:

\[
d_g = \frac{\bar{M}_g}{c + s_g}, \qquad (1.2)
\]

where $\bar{M}_g$ denotes the average log intensity ratio for gene g. Efron et al. (2001) suggested setting c to the 90th percentile of all the $s_g$. To estimate empirically the distribution of the statistic $d_g$ under the null hypothesis (no differential expression), different permutations of the replicates are considered, and the statistic in Eq. (1.2) is recomputed for each permutation. The average of these statistics over all permutations, denoted $\tilde{d}_g$, is then determined for each gene. By considering the displacement of $d_g$ from $\tilde{d}_g$ and a threshold $\Delta$, asymmetric cutoffs are obtained as the smallest $d_g$ such that $d_g - \tilde{d}_g > \Delta$, and the largest $d_g$ such that $d_g - \tilde{d}_g < -\Delta$. The threshold $\Delta$ is chosen by controlling the false discovery rate (FDR), the proportion of falsely identified genes among those declared differentially expressed, at 10% or another reasonable level.
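The SAM statistic of Eq. (1.2) is easy to sketch. The minimal Python helper below is hypothetical (it is not the SAM software) and uses Efron et al.'s 90th-percentile choice of c together with a simple standard-error estimate:

```python
import numpy as np

def sam_statistic(ratios, c=None):
    """SAM-style regularized t statistics, Eq. (1.2): d_g = M_g / (c + s_g).

    `ratios` is a genes-by-replicates array of log intensity ratios.  By
    default c is set to the 90th percentile of the gene-specific standard
    errors, as suggested by Efron et al. (2001)."""
    ratios = np.asarray(ratios, dtype=float)
    R = ratios.shape[1]
    m = ratios.mean(axis=1)                      # average log ratio M_g
    s = ratios.std(axis=1, ddof=1) / np.sqrt(R)  # a simple standard error s_g
    if c is None:
        c = np.percentile(s, 90)                 # the "fudge factor" c
    return m / (c + s)
```

The constant c keeps genes with tiny standard errors from dominating the ranking, which is exactly the failure mode of the plain t statistic with few replicates.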
In SAM, the FDR is estimated as the ratio of the average number of genes called significant in the permuted datasets to the number called significant in the original dataset.

Lönnstedt and Speed's B statistic

Making use of an empirical Bayes normal mixture model, Lönnstedt and Speed (2002) proposed the log posterior odds statistic, more conveniently called the B statistic, to determine differentially expressed genes. More explicitly, the log intensity ratio $M_{gr}$ for gene g on the r-th replicate is assumed to follow a normal distribution $N(\mu_g, k\tau_g^{-1})$. Let $I_g$ be the indicator variable such that $I_g = 1$ if gene g is differentially expressed and $I_g = 0$ otherwise. The parameters $\mu_g$ and $\tau_g$ have the following conjugate prior distribution:

\[
\tau_g \sim \mathrm{Ga}(\nu/2, 1), \qquad
\mu_g \mid \tau_g \begin{cases} = 0 & \text{if } I_g = 0 \\ \sim N(0, ck\tau_g^{-1}) & \text{if } I_g = 1. \end{cases}
\]

A mixture structure is implicitly assumed by the above specification of $\mu_g$. Let $p = \Pr(I_g = 1)$ be the proportion of differentially expressed genes. The log posterior odds of differential expression is derived to be

\[
B_g = \log \frac{\Pr(I_g = 1 \mid M)}{\Pr(I_g = 0 \mid M)}
    = \log \left[ \frac{p}{1-p} \, \frac{f(M_g \mid I_g = 1)}{f(M_g \mid I_g = 0)} \right]. \qquad (1.3)
\]

A large value of $B_g$ favors the alternative hypothesis of differential expression. Note, however, that owing to computational difficulty, the authors did not estimate the proportion p but fixed it a priori. In consequence, a cutoff on $B_g$ for declaring differential expression could not be determined on objective grounds.

Linear Models for Microarray Data (LIMMA)

LIMMA (Smyth, 2004) reformulates the aforementioned hierarchical model of Lönnstedt and Speed (2002) in the context of general linear models to cater for unequal numbers of replicates across conditions and for multiple conditions.
In addition, LIMMA uses a moderated t statistic in place of the posterior odds statistic given by Eq. (1.3) for inference about differential expression:

\[
\tilde{t}_g = \frac{\bar{M}_g}{\tilde{s}_g / \sqrt{R}}, \qquad (1.4)
\]

where the posterior variance estimate

\[
\tilde{s}_g^2 = \frac{\nu s^2 + (R - 1) s_g^2}{\nu + R - 1} \qquad (1.5)
\]

provides shrinkage of the sample variance $s_g^2$ towards a pooled estimate $s^2$, resulting in more stable inference when the number of replicates is small. The computation of the moderated t statistic does not depend on p in Eq. (1.3), the potentially contentious parameter left unestimated in Lönnstedt and Speed (2002).

Efron's Local False Discovery Rate (fdr)

Efron (2004) proposed an empirical Bayes approach combined with a local version of the false discovery rate to test for differential expression. In this method, t test statistics are first obtained, one for each gene. The associated p-values are converted into z-scores defined as $z_g = \Phi^{-1}(p_g)$, where $\Phi$ denotes the standard normal distribution function. The z-scores are modeled with a two-component mixture,

\[
f(z_g) = p_0 f_0(z_g) + p_1 f_1(z_g), \qquad (1.6)
\]

where $f_0$ and $f_1$ are the densities of the z-scores under the null (no differential expression) and alternative (differential expression) hypotheses respectively, and $p_0$ and $p_1$ are the proportions of true null and alternative hypotheses. The mixture density f and the null density $f_0$ are then estimated empirically. For each gene, inference is based on the local false discovery rate, defined as

\[
\mathrm{fdr}(z_g) \equiv \frac{\hat{f}_0(z_g)}{\hat{f}(z_g)}. \qquad (1.7)
\]

The notation fdr is deliberately lowercase to distinguish it from the usual definition of the false discovery rate proposed by Benjamini and Hochberg (1995). A gene with an fdr lower than some threshold, say 10%, is called differentially expressed.

Empirical Bayes Gamma-Gamma and Lognormal-Normal Models (EBarrays)

Newton et al.
(2001) developed a method for detecting changes in gene expression using a hierarchical gamma-gamma (GG) model. Kendziorski et al. (2003) extended this to multiple replicates under multiple conditions, and provided the option of a hierarchical lognormal-normal (LNN) model. In the GG model, each observation is modeled by a gamma distribution with shape a and rate θ. Strength is borrowed across genes by assuming a gamma prior on θ. Denote by $x_g$ and $y_g$ the intensities for gene g in the two conditions respectively. The data are modeled with the two-component mixture

\[
p(x_g, y_g) = p\, p_A(x_g, y_g) + (1 - p)\, p_0(x_g, y_g), \qquad (1.8)
\]

where

\[
p_A(x_g, y_g) = \int \Big( \prod_r p(x_{gr} \mid a, \theta_{gx}) \Big) \pi(\theta_{gx})\, d\theta_{gx}
  \cdot \int \Big( \prod_r p(y_{gr} \mid a, \theta_{gy}) \Big) \pi(\theta_{gy})\, d\theta_{gy} \qquad (1.9)
\]

is the marginal density for differentially expressed genes, which uses condition-specific rate parameters, and

\[
p_0(x_g, y_g) = \int \Big( \prod_r p(x_{gr} \mid a, \theta_g) \Big) \Big( \prod_r p(y_{gr} \mid a, \theta_g) \Big) \pi(\theta_g)\, d\theta_g \qquad (1.10)
\]

is the marginal density for non-differentially expressed genes, which uses a common rate parameter. The LNN model assumes a lognormal sampling distribution for each observation, with mean µ and variance σ². A conjugate normal prior is imposed on µ. The corresponding mixture model takes a form similar to Eq. (1.8), where the marginal densities are derived by considering $\mu_{gx} \neq \mu_{gy}$ (differential expression) and $\mu_{gx} = \mu_{gy}$ (no differential expression) respectively. Note that both models implicitly assume a constant coefficient of variation across all genes. For both models, the prior can be integrated out and the EM algorithm used to estimate the unknown parameters. Inference is based on the posterior probabilities of differential expression.

Bayesian Robust Inference for Differential Gene Expression (BRIDGE)

Gottardo et al.
(2006) developed a robust Bayesian hierarchical model for testing differential expression. The model may be viewed as an extension of the LNN specification of EBarrays. Enhanced flexibility is achieved by relaxing the implicit EBarrays assumption of a constant coefficient of variation, and by accounting for outliers. BRIDGE includes an exchangeable prior for the variances, which allows different variances across genes while still shrinking extreme variances. In addition, observations are modeled using a t distribution, which accounts for outliers. A fully Bayesian approach is adopted, assuming vague priors on the parameters and carrying out parameter estimation with Markov chain Monte Carlo (MCMC) algorithms (Gelfand and Smith, 1990).

Owing to the relatively small number of replicates in microarray experiments, combining information across genes in statistical analysis is vital. A Bayesian framework fits such a scenario very well, and is adopted by most of the aforementioned methods. Mixture modeling is also a popular strategy among these methods. Incidentally, although they are presented primarily for cDNA microarrays, the majority of the aforementioned methods are also applicable to oligonucleotide arrays with little or no modification. Most of the methods have also been extended to multiple conditions.

1.2 Gating Analysis of Flow Cytometry Data

1.2.1 The Technology of Flow Cytometry

Flow cytometry (FCM) is a high-throughput technology that offers automated quantification of a set of physical and chemical characteristics for up to millions of cells in a sample. The characteristics measured for each cell include size, granularity or internal complexity, and fluorescence intensity. FCM is widely used in health research and treatment for a variety of tasks, such as the monitoring of the course and treatment of HIV infection, the diagnosis and monitoring of leukemia and lymphoma patients, the cross-matching of organs for transplantation, and research on vaccine development (Hengel and Nicholson, 2001; Bagwell, 2004; Braylan, 2004; Illoh, 2004; Krutzik et al., 2004; Mandy, 2004; Orfao et al., 2004; Bolton and Roederer, 2009).

Figure 1.4 gives a schematic overview of a typical flow cytometer setup. Cells are introduced into the sample core of the flow cytometer, where hydrodynamic forces align them to move in single file at speeds of up to 70,000 cells per second. When a cell intercepts a light source (e.g., a laser), light scattering occurs.

Figure 1.4: The schematic overview of a typical flow cytometer setup. Cells in the fluidics system are aligned in single file. When a cell intercepts the light source, light scattering and emission will occur. The scattered light is collected by the forward scatter (FSC) and sideward scatter (SSC) detectors. The emitted light is collected by the photomultiplier tubes (PMT), each of which targets an individual narrow range of wavelengths. (Picture source: ab-direct.com)

Figure 1.5: The occurrence of light scattering. The forward scatter detector measures the amount of light diffracted by a cell in the forward direction. The sideward scatter detector measures the amount of refracted or reflected light. (Picture source: BD Biosciences)
The forward scatter (FSC) detector in Figure 1.5 measures the amount of light diffracted in the forward direction, which is proportional to the cell-surface area or size. The sideward scatter (SSC) detector measures the amount of light refracted or reflected by any interface within the cell, which is proportional to cell granularity or internal complexity.

Before they are introduced into the flow cytometer, cells are tagged with fluorescently conjugated antibodies bound to their antigens (Figure 1.6). When a cell intercepts the laser, the fluorochromes are excited by the light source and emit light. The emitted light is diverted to a series of fluorescent detectors, each of which targets an individual narrow range of wavelengths close to the peak emission wavelength characteristic of an individual fluorochrome. The fluorescent signals detected are proportional to the amounts of the individual fluorochromes present. As each fluorochrome is conjugated to an antibody, the signal can be used to measure the amount of the corresponding antigen.

Figure 1.6: Specific binding of fluorochrome-labeled antibodies to antigens.

Cells of different types have different correlated measurements of FSC and SSC. Antigens are also present in cells in different amounts. The fluorescent signals, combined with the FSC and SSC measurements, can therefore be used to identify cell populations (homogeneous groups of cells that display a particular function) and their relative abundance in a sample.

1.2.2 Methods for Identifying Cell Populations

One major component of FCM analysis is the process of identifying cell populations, referred to as gating in the FCM community. Conventionally, the identification of cell populations relies on sequentially applying a series of manually drawn filters, i.e., gates, to subset and select regions in 1D or 2D graphical representations of the data; see Figure 1.7 for an example.
Figure 1.7: A typical manual gating analysis. (a) Data are projected onto the FSC and SSC dimensions to identify basic cell populations. An expert researcher draws a gate on the contour density plot to define the lymphocyte population. (b) The selected cells are then projected onto the CD3 dimension, and CD3+ cells are defined through an interval gate with the marked threshold as the lower bound. (c) A quadrant gate is applied to the projections along the CD4 and CD8β dimensions. Cells within the upper right gate are referred to as CD3+CD4+CD8β+, the cell population of interest in this analysis.

In such an analysis, the choice of which sequence of parameters to gate on and where to position the gates is highly subjective. It also ignores the high dimensionality of FCM data, which may convey information that cannot be displayed in 1D or 2D projections. In addition, the time-consuming manual input required by gating analysis is a major concern: thanks to high-throughput advances in data generation, it is not uncommon for an FCM study to include thousands of samples. Attempts at partial automation have been made by using software to apply a template gating sequence, with the same set of gates, to all samples automatically. Nevertheless, the improvement in overall efficiency is limited because between-sample variation is not taken into account, so sample-by-sample manual adjustment of the gate positions cannot be avoided. As noted in Lizard (2007), the lack of an automated analysis platform to parallel the high-throughput data-generation platform has become a major bottleneck for FCM.

In statistics, gating may be reframed as a clustering problem. There have been few attempts to devise an automated platform with a sound statistical framework for gating analysis.
Below is a brief account of the methods that have been applied to automate gating analysis, with the latter two being recent additions.

K-means Clustering

K-means clustering (MacQueen, 1967) is a relatively early approach applied to FCM analysis. Its objective is to obtain a partition P with K clusters $P_1, P_2, \ldots, P_K$, corresponding to cell populations in FCM data, such that the within-cluster sum of squares is minimized:

\[
\arg\min_{P} \sum_{k=1}^{K} \sum_{y_i \in P_k} (y_i - \mu_k)^T (y_i - \mu_k), \qquad (1.11)
\]

where $y_i$ is an observation vector and $\mu_k$ is the mean of $P_k$. The algorithm starts with K randomly selected points as means. A K-cluster partition of the data is obtained by assigning each observation to the nearest mean, followed by a recomputation of the cluster means. This procedure repeats until the assignment of observations no longer changes. The method has been shown to be equivalent to the classification EM algorithm (Celeux and Govaert, 1992, 1995) for a Gaussian mixture model with equal mixing proportions and a common covariance matrix equal to a scalar multiple of the identity matrix.

Bayesian Mixture Modeling using Gaussian Distributions

Chan et al. (2008) proposed a Bayesian approach in which cell populations are represented by Gaussian components in a mixture model. Explicitly, observation $y_i$ in an FCM dataset is modeled as

\[
f(y_i \mid w_1, \ldots, w_K, \mu_1, \ldots, \mu_K, \Gamma_1, \ldots, \Gamma_K)
  = \sum_{k=1}^{K} w_k\, \phi(y_i \mid \mu_k, \Gamma_k^{-1}), \qquad (1.12)
\]

where $\phi(\cdot \mid \mu_k, \Gamma_k^{-1})$ is the multivariate Gaussian density with mean $\mu_k$ and covariance matrix $\Gamma_k^{-1}$, and $w_k$ is the mixing proportion. The parameters in Eq. (1.12) are assumed to follow a conjugate prior distribution:

\[
(w_1, \ldots, w_K) \sim \mathrm{D}(\alpha_1, \ldots, \alpha_K), \qquad
\mu_k \sim N(m_k, \lambda_k \Gamma_k^{-1}), \qquad
\Gamma_k \sim \mathrm{W}(r_k, V_k),
\]

where D(·) and W(·) denote the Dirichlet and Wishart distributions respectively.
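To make the mixture formulation concrete, the sketch below evaluates Eq. (1.12) and the resulting posterior component memberships. It is a hypothetical helper covering only the density side of the model; it stands in for neither the Gibbs sampler nor the component-merging step of Chan et al. (2008):

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_responsibilities(y, weights, means, covs):
    """Posterior component-membership probabilities under the Gaussian
    mixture of Eq. (1.12): r_ik is proportional to w_k * phi(y_i | mu_k, cov_k).
    `y` is an observations-by-dimensions array."""
    dens = np.column_stack([
        w * multivariate_normal.pdf(y, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])
    return dens / dens.sum(axis=1, keepdims=True)
```

Assigning each cell to its highest-responsibility component turns a fitted mixture into a gating of the sample.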
Parameter estimation is performed using the Gibbs sampler described in Lavine and West (1992). The number of mixture components is selected via the Bayesian Information Criterion (Schwarz, 1978). The inadequacy of Gaussian distributions for asymmetric components is addressed by merging components that share a common local mode in the fitted mixture distribution. Nevertheless, Gaussian mixture models are known to be vulnerable to outliers, which are frequently observed in FCM data. Moreover, given the large size of FCM datasets, the use of MCMC techniques for parameter estimation requires an enormous amount of computational time. In FCM analysis, which usually involves a large number of samples, time is of the essence.

Mixture Modeling using Skew t Distributions

Another model-based clustering approach to automating gating analysis was proposed by Pyne et al. (2009), who addressed the issues of asymmetric cell populations and outliers with skew t distributions (Sahu et al., 2003). Each component in the mixture is modeled by a p-dimensional skew t distribution with density

\[
f(y \mid \mu, \Sigma, \delta, \nu)
  = 2\, \varphi(y \mid \mu, \Omega, \nu)\,
    T\!\left( \frac{\beta}{\sigma} \sqrt{\frac{\nu + p}{\nu + \eta}} \,\Big|\, \nu \right), \qquad (1.13)
\]

where $\varphi(\cdot \mid \mu, \Omega, \nu)$ is the multivariate t density with mean µ (ν > 1), variance $\nu(\nu - 2)^{-1}\Omega$ (ν > 2) and ν degrees of freedom, $T(\cdot \mid \nu)$ is the univariate standard t distribution function with ν degrees of freedom, $\Omega = \Sigma + \delta\delta^T$, $\beta = \delta^T \Omega^{-1}(y - \mu)$, $\sigma^2 = 1 - \delta^T \Omega^{-1} \delta$, and $\eta = (y - \mu)^T \Omega^{-1} (y - \mu)$.
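For intuition, the univariate (p = 1) case of Eq. (1.13) can be coded directly. The following is a sketch of the density only, following the formula as written above, and not the estimation machinery of Pyne et al. (2009):

```python
import numpy as np
from scipy.stats import t as student_t

def skew_t_pdf(y, mu, sigma2, delta, nu):
    """Skew t density of Eq. (1.13) specialized to p = 1.

    Here Omega = sigma2 + delta^2, eta = (y - mu)^2 / Omega,
    beta = delta * (y - mu) / Omega and sigma^2 = 1 - delta^2 / Omega."""
    omega = sigma2 + delta ** 2
    z = (y - mu) / np.sqrt(omega)
    eta = z ** 2
    beta = delta * (y - mu) / omega
    s = np.sqrt(1.0 - delta ** 2 / omega)
    # for p = 1 the multivariate t density reduces to a location-scale t
    core = student_t.pdf(z, df=nu) / np.sqrt(omega)
    skew = student_t.cdf(beta / s * np.sqrt((nu + 1.0) / (nu + eta)), df=nu)
    return 2.0 * core * skew
```

With δ > 0 the density is right-skewed, while δ = 0 recovers the symmetric t density; this is what lets a single component track the asymmetric, heavy-tailed cell populations described above.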
A stochastic representation of the distribution is given by

\[
Y = \mu + U + \delta |Z|, \qquad (1.14)
\]

where $U \sim N(0, \Sigma/\tau)$, $Z \sim N(0, 1/\tau)$, $\tau \sim \mathrm{Ga}(\nu/2, \nu/2)$, and U, Z and τ are all independently distributed. Based on Eq. (1.14), extensions of the Monte Carlo EM algorithm (Wei and Tanner, 1990) have been developed for parameter estimation, which is complicated by analytically intractable quantities in the E step (Lin, 2009).

Bibliography

Affymetrix Manual (2001). Affymetrix Microarray Suite User Guide Version 5.0. Santa Clara, CA.

Bagwell, C. B. (2004). DNA histogram analysis for node-negative breast cancer. Cytometry Part A, 58A(1):76–78.

Baldi, P. and Long, A. D. (2001). A Bayesian framework for the analysis of microarray expression data: Regularized t-test and statistical inferences of gene changes. Bioinformatics, 17(6):509–519.

Banfield, J. D. and Raftery, A. E. (1993). Model-based Gaussian and non-Gaussian clustering. Biometrics, 49:803–821.

Behr, M. A., Wilson, M. A., Gill, W. P., Salamon, H., Schoolnik, G. K., Rane, S., and Small, P. M. (1999). Comparative genomics of BCG vaccines by whole-genome DNA microarray. Science, 284(5419):1520–1523.

Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1):289–300.

Bolton, D. L. and Roederer, M. (2009). Flow cytometry and the future of vaccine development. Expert Review of Vaccines, 8(6):779–789.

Braylan, R. C. (2004). Impact of flow cytometry on the diagnosis and characterization of lymphomas, chronic lymphoproliferative disorders and plasma cell neoplasias. Cytometry Part A, 58A(1):57–61.

Callow, M. J., Dudoit, S., Gong, E. L., Speed, T. P., and Rubin, E. M. (2000). Microarray expression profiling identifies genes with altered expression in HDL-deficient mice. Genome Research, 10:2022–2029.

Celeux, G.
and Govaert, G. (1992). A classification EM algorithm for clustering and two stochastic versions. Computational Statistics and Data Analysis, 14(3):315–332.

Celeux, G. and Govaert, G. (1995). Gaussian parsimonious clustering models. Pattern Recognition, 28(5):781–793.

Chan, C., Feng, F., Ottinger, J., Foster, D., West, M., and Kepler, T. B. (2008). Statistical mixture modeling for cell subtype identification in flow cytometry. Cytometry Part A, 73A:693–701.

Crick, F. (1970). Central dogma of molecular biology. Nature, 227:561–563.

Debouck, C. and Goodfellow, P. N. (1999). DNA microarrays in drug discovery and development. Nature Genetics, 21(1 Suppl):48–50.

DeRisi, J., Penland, L., Brown, P. O., Bittner, M. L., Meltzer, P. S., Ray, M., Chen, Y., Su, Y. A., and Trent, J. M. (1996). Use of a cDNA microarray to analyse gene expression patterns in human cancer. Nature Genetics, 14(4):457–460.

Efron, B. (2004). Large-scale simultaneous hypothesis testing: the choice of a null hypothesis. Journal of the American Statistical Association, 99:96–104.

Efron, B., Tibshirani, R., Storey, J. D., and Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association, 96(456):1151–1160.

Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97(458):611–631.

Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85:398–409.

Gottardo, R., Raftery, A. E., Yeung, K. Y., and Bumgarner, R. E. (2006). Bayesian robust inference for differential gene expression in microarrays with multiple samples. Biometrics, 62(1):10–18.

Hengel, R. L. and Nicholson, J. K. (2001). An update on the use of flow cytometry in HIV infection and AIDS.
Clinics in Laboratory Medicine, 21(4):841–856.

Illoh, O. C. (2004). Current applications of flow cytometry in the diagnosis of primary immunodeficiency diseases. Archives of Pathology and Laboratory Medicine, 128(1):23–31.

Irizarry, R., Hobbs, B., Collin, F., Beazer-Barclay, Y., Antonellis, K., Scherf, U., and Speed, T. (2003). Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics, 4(2):249–264.

Kendziorski, C., Newton, M., Lan, H., and Gould, M. N. (2003). On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Statistics in Medicine, 22:3899–3914.

Krutzik, P. O., Irish, J. M., Nolan, G. P., and Perez, O. D. (2004). Analysis of protein phosphorylation and cellular signaling events by flow cytometry: techniques and clinical applications. Clinical Immunology, 110(3):206–221.

Lavine, M. and West, M. (1992). A Bayesian method for classification and discrimination. Canadian Journal of Statistics, 20:451–461.

Li, C. and Wong, W. H. (2001). Model-based analysis of oligonucleotide arrays: expression index computation and outlier detection. Proceedings of the National Academy of Sciences of the United States of America, 98:31–36.

Lin, T. I. (2009). Robust mixture modeling using multivariate skew t distributions. Statistics and Computing, (In press).

Lizard, G. (2007). Flow cytometry analyses and bioinformatics: interest in new softwares to optimize novel technologies and to favor the emergence of innovative concepts in cell research. Cytometry Part A, 71A:646–647.

Lönnstedt, I. and Speed, T. P. (2002). Replicated microarray data. Statistica Sinica, 12:31–46.

MacQueen, J. B. (1967). Some methods for classification and analysis of multivariate observations. In LeCam, L.
and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297, Berkeley. University of California Press.

Mandy, F. F. (2004). Twenty-five years of clinical flow cytometry: AIDS accelerated global instrument distribution. Cytometry Part A, 58A(1):55–56.

McLachlan, G. and Peel, D. (2000). Finite Mixture Models. Wiley-Interscience, New York.

Newton, M. C., Kendziorski, C. M., Richmond, C. S., Blattner, F. R., and Tsui, K. W. (2001). On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. Journal of Computational Biology, 8:37–52.

Orfao, A., Ortuno, F., de Santiago, M., Lopez, A., and San Miguel, J. (2004). Immunophenotyping of acute leukemias and myelodysplastic syndromes. Cytometry Part A, 58A(1):62–71.

Pyne, S., Hu, X., Wang, K., Rossin, E., Lin, T.-I., Maier, L. M., Baecher-Allan, C., McLachlan, G. J., Tamayo, P., Hafler, D. A., De Jager, P. L., and Mesirov, J. P. (2009). Automated high-dimensional flow cytometric data analysis. Proceedings of the National Academy of Sciences of the United States of America, 106(21):8519–8524.

Sahu, S. K., Dey, D. K., and Branco, M. D. (2003). A new class of multivariate skew distributions with applications to Bayesian regression. Canadian Journal of Statistics, 31(2):129–150.

Schena, M., Shalon, D., Davis, R. W., and Brown, P. O. (1995). Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science, 270:467–470.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6:461–464.

Smyth, G. K. (2004). Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Statistical Applications in Genetics and Molecular Biology, 3:Article 3.

Titterington, D. M., Smith, A. F. M., and Makov, U. E. (1985).
Statistical Analysis of Finite Mixture Distributions. Wiley, Chichester, UK.

Tusher, V., Tibshirani, R., and Chu, G. (2001). Significance analysis of microarrays applied to the ionizing radiation response. Proceedings of the National Academy of Sciences of the United States of America, 98(9):5116–5121.

Wei, G. C. G. and Tanner, M. A. (1990). A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85:699–704.

Wu, Z., Irizarry, R. A., Gentleman, R., Martinez-Murillo, F., and Spencer, F. (2004). A model-based background adjustment for oligonucleotide expression arrays. Journal of the American Statistical Association, 99:909–917.

Chapter 2

Flexible Empirical Bayes Models for Differential Gene Expression*

2.1 Introduction

As a natural development following the success of genome sequencing, DNA microarray technology has emerged to explore the functioning of genomes (Schena et al., 1995). By exploiting the ability of a single-stranded nucleic acid molecule to hybridize to a complementary sequence, researchers can simultaneously measure the expression levels of thousands of genes within a cell. A common task with microarrays is to determine which genes are differentially expressed under two different conditions. In recent years, there has been a considerable amount of work on the detection of differentially expressed genes. An early statistical treatment can be found in Chen et al. (1997). A common approach is to test a hypothesis for each gene using variants of t or F statistics and then correct for multiple testing (Efron et al., 2001; Tusher et al., 2001; Dudoit et al., 2002). Due to the small number of replicates, variation in gene expression can be poorly estimated. Baldi and Long (2001) and Tusher et al.
(2001) suggested using a modified t statistic in which the denominator is regularized by adding a small constant to the gene-specific variance estimate. Similar to an empirical Bayes approach, this results in shrinkage of the empirical variance estimates towards a common estimate. Lönnstedt and Speed (2002) proposed an empirical Bayes normal mixture model for gene expression data, which was later extended to the two-condition case by Gottardo et al. (2003) and to more general linear models by Smyth (2004) and Cui et al. (2005), though Smyth (2004) and Cui et al. (2005) did not use mixture models but simply empirical Bayes normal models for variance regularization. In each case, the authors derived explicit gene-specific statistics and did not consider the problem of estimating p, the proportion of differentially expressed genes. Newton et al. (2001) developed a method for detecting changes in gene expression in a single two-channel cDNA slide using a hierarchical gamma-gamma (GG) model. Kendziorski et al. (2003) extended this to replicate chips with multiple conditions, and provided the option of using a hierarchical lognormal-normal (LNN) model. Both models are implemented in an R package called EBarrays (Empirical Bayes microarrays), and from now on we use the name EBarrays to refer to the methodology. Both EBarrays model specifications rely on the assumption of a constant coefficient of variation across genes. In this chapter, we extend both models by relaxing this assumption and introduce EM-type algorithms for parameter estimation, thus extending the work of Lönnstedt and Speed (2002) and Gottardo et al. (2003) as well. The structure of this chapter is as follows.

* A version of this chapter has been published. Lo, K. and Gottardo, R. (2007). Flexible empirical Bayes models for differential gene expression. Bioinformatics, 23(3):328–335.
The extended forms of the two EBarrays hierarchical models and the estimation procedures are presented in Section 2.2. In Section 2.3, the performance of the extended models is examined on three experimental datasets and compared to five other baseline and commonly used methods. Section 2.4 presents a simulation study to further compare our empirical Bayes approach to the other methods. Finally, in Section 2.5 we discuss our results and possible extensions.

2.2 A Bayesian Framework for Identifying Differential Expression

2.2.1 A Hierarchical Model for Measured Intensities

In a typical microarray experiment, two conditions are compared for gene expression. Let us denote by X_gr and Y_gr the intensities of gene g from the rth replicate in the two conditions respectively. Measurements between the two conditions are assumed to be independent. The proposed model is an extension of the EBarrays framework (Newton et al., 2001; Kendziorski et al., 2003). Extensions to the original two types of model formulation are considered in turn below.

The Extended Gamma-Gamma Model

Here, a Gamma distribution is used to model the measured intensities of a given gene. Explicitly, the probability density of X_gr (resp. Y_gr) with shape parameter a_g and rate parameter θ_gx (resp. θ_gy) is given by

\[ p(x \mid a_g, \theta_{gx}) = \frac{\theta_{gx}^{a_g}}{\Gamma(a_g)}\, x^{a_g - 1} \exp(-x\,\theta_{gx}) \quad \text{for } x > 0. \tag{2.1} \]

To borrow strength across genes we assume an exchangeable Gamma(a_0, ν) prior for the rate parameters, and a Lognormal(η, ξ) prior for the shape parameters. The Gamma prior is used for simplicity as it is conjugate to the sampling distribution (Newton et al., 2001), while the Lognormal prior is suggested by a histogram plot of the empirical shape parameters estimated by the method of moments (see Figure 2.1). The hyperparameters a_0, ν, η and ξ are assumed unknown and will be estimated as part of our approach.
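The method-of-moments shape estimates behind Figure 2.1 can be sketched as follows. This is a minimal, non-robust illustration on simulated Gamma intensities: the thesis uses a robust variant, and treating (η, ξ) as the mean and variance of log a_g is an assumption about the Lognormal parameterization, not taken from the text.

```python
import numpy as np

def gamma_shape_mom(intensities):
    """Method-of-moments shape estimate for Gamma-distributed replicates.

    For X ~ Gamma(a, theta): E[X] = a / theta and Var[X] = a / theta^2,
    so a = E[X]^2 / Var[X], free of the rate theta.
    """
    mean = intensities.mean(axis=-1)
    var = intensities.var(axis=-1, ddof=1)
    return mean ** 2 / var

# Simulated replicate intensities for 1000 genes with true shape a = 5
rng = np.random.default_rng(0)
x = rng.gamma(shape=5.0, scale=1.0 / 15.0, size=(1000, 4))
a_hat = gamma_shape_mom(x)

# Moment-based fit of the Lognormal prior on the log scale; identifying
# (eta, xi) with the mean and variance of log(a_hat) is an assumption
eta, xi = np.log(a_hat).mean(), np.log(a_hat).var(ddof=1)
```

With only four replicates per gene these estimates are very noisy, which is exactly why the robust fit and the shrinkage prior matter in practice.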
The proposed model extends the EBarrays GG model by placing a prior on the shape parameter. In the original GG model, the shape parameter a was assumed to be constant and common to all genes, whereas now it is gene-specific. However, strength is borrowed across genes through the prior distribution. By "borrowing strength", we mean that information from all genes is used when estimating a_g, which comes from the hyperparameters through the prior.

Figure 2.1: Histograms of robust empirical estimates of the a_g's with fitted Lognormal density curves, shown on a log scale under the extended Gamma-Gamma modeling framework. The hyperparameters ξ and η are estimated using a robust version of the method of moments. The graphs for the HGU95A and HGU133A spike-in data are for one of the comparisons made between two array groups.

The Extended Lognormal-Normal Model

The second formulation is an extension of the EBarrays LNN framework. The intensities are assumed to be lognormally distributed, i.e., the log-transformed intensities are from a normal distribution, and we write log X_gr ∼ N(μ_gx, τ_gx⁻¹) and log Y_gr ∼ N(μ_gy, τ_gy⁻¹) respectively. A conjugate prior is imposed on the mean μ_gx (resp. μ_gy) and precision τ_gx (resp. τ_gy). Explicitly, we set μ_gx | τ_gx ∼ N(m, kτ_gx⁻¹) and τ_gx ∼ Gamma(α, β) respectively. In the original LNN model, the precision τ was assumed to be constant and common to all genes.
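As a concrete illustration of the extended LNN hierarchy just described, the following sketch simulates one condition from the generative model. The hyperparameter values are illustrative only, and the interpretation of β as a rate parameter is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
G, R = 1000, 3                              # genes, replicates (one condition shown)
m, k, alpha, beta = 5.0, 12.0, 2.0, 0.5     # illustrative hyperparameter values

# tau_g ~ Gamma(alpha, beta), with beta read as a rate, so scale = 1 / beta
tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=G)
# mu_g | tau_g ~ N(m, k * tau_g^-1)
mu = rng.normal(loc=m, scale=np.sqrt(k / tau))
# log X_gr | mu_g, tau_g ~ N(mu_g, tau_g^-1); intensities are lognormal
log_x = rng.normal(loc=mu[:, None], scale=1.0 / np.sqrt(tau)[:, None], size=(G, R))
x = np.exp(log_x)
```

Because each gene draws its own precision τ_g, the coefficient of variation varies from gene to gene, which is precisely the constraint the extension removes.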
Our proposed formulation extends the EBarrays model by relaxing the assumption of a constant coefficient of variation √(exp(τ⁻¹) − 1), which is equivalent to the assumption of a constant variance τ⁻¹ on the log scale. Note that our proposed formulation is also the framework of Gottardo et al. (2003). However, we use an EM-based algorithm to estimate the unknown parameters, including the proportion of differentially expressed genes. By assuming a prior on both μ_gx (resp. μ_gy) and τ_gx (resp. τ_gy) common to all genes, strength is borrowed across genes through both the means and the variances of the distributions when making inferences. Again, we mean that information from all genes is used when estimating both μ_gx (resp. μ_gy) and τ_gx (resp. τ_gy). In particular, this is essential for variances: due to the small number of replicates, variance estimates can be very noisy. Similar ideas have been used in Smyth (2004) and Cui et al. (2005), where the authors concentrated on variance regularization.

2.2.2 A Mixture Model for Differential Expression

We use a mixture model to identify differentially expressed genes. We assume that a priori θ_gx = θ_gy (resp. μ_gx = μ_gy) with probability 1 − p and θ_gx ≠ θ_gy (resp. μ_gx ≠ μ_gy) with probability p. For the latter case, the model specification is just as stated in Section 2.2.1, while the former case is modeled by setting the gene-specific parameters common to both conditions. Let us denote by z_g the indicator variable equal to one if there is real change in expression for gene g and zero otherwise. Then one can define the posterior probability of change, Pr(z_g = 1 | x_g, y_g, p, ψ), where x_g = (x_g1, x_g2, ..., x_gRx)′ and y_g = (y_g1, y_g2, ..., y_gRy)′ and ψ denotes the vector of unknown hyperparameters.
Applying Bayes' rule, we obtain

\[ \hat{z}_g = \Pr(z_g = 1 \mid x_g, y_g, p, \psi) = \frac{p\, p_A(x_g, y_g \mid \psi)}{p\, p_A(x_g, y_g \mid \psi) + (1 - p)\, p_0(x_g, y_g \mid \psi)}, \tag{2.2} \]

where p_A(x_g, y_g | ψ) and p_0(x_g, y_g | ψ) denote the joint marginal densities of the measured intensities of gene g under the alternative (differential expression) and null (no differential expression) models respectively, given ψ. The marginal density for the extended LNN model can be computed explicitly and is given in Appendix A.1. For the extended GG model only θ_g can be integrated out, and the corresponding "conditional" marginal density is given in Appendix A.1. In the next section we describe an approximate estimation procedure to deal with this difficulty.

2.2.3 Parameter Estimation Using the EM Algorithm

Here we start with the extended LNN model as the estimation procedure is straightforward. The vector of unknown parameters Φ = (ψ′, p)′, where ψ = (m, k, α, β)′, can be estimated by maximizing the integrated likelihood using the EM algorithm (Dempster et al., 1977). The estimation of p is important since it calibrates the posterior probability of change for multiple testing, as seen in Eq. (2.2). Such estimation is also part of some multiple testing procedures, such as Storey's q-value (Storey, 2003). Estimation of the parameter p can be difficult (Smyth, 2004; Bhowmick et al., 2006), and as suggested by Newton et al. (2001) we place a Beta(2, 2) prior on p, which avoids numerical issues when p gets close to 0 or 1. Given the large number of genes, the prior on p has essentially no effect on the final estimate, and thus on the number of genes called differentially expressed. Treating the z_g's as missing data, the complete-data log-likelihood is given by

\[ l_c(\Phi \mid x, y, z) = \sum_g \Big[ z_g \log p_A(x_g, y_g \mid \psi) + (1 - z_g) \log p_0(x_g, y_g \mid \psi) + (1 + z_g) \log p + (2 - z_g) \log(1 - p) \Big]. \tag{2.3} \]
During the E-step, the expectation is obtained by replacing z_g by ẑ_g as given by Eq. (2.2), while the M-step consists of maximizing the conditional expectation with respect to the parameter vector Φ = (ψ′, p)′. At convergence, the estimated parameters can be substituted into Eq. (2.2) to compute the posterior probability of change for each gene.

Because the prior of the extended GG model is not conjugate to the sampling distribution, only the marginal density conditional on a_g is analytically available for each gene. We refer to it as the conditional marginal density. To incorporate information about the prior for the a_g's, we propose to estimate the hyperparameters η and ξ beforehand through an empirical Bayes approach using the method of moments (see Appendix A.2 for details), and add log[π(a_g | η, ξ)] to the log conditional density as a penalty term. Again, treating the z_g's as missing data, the corresponding modified complete-data log-likelihood can be written as

\[ \tilde{l}_c(\Phi \mid x, y, z) = \sum_g \Big\{ z_g \log p_A(x_g, y_g \mid \psi, a_g) + (1 - z_g) \log p_0(x_g, y_g \mid \psi, a_g) + (1 + z_g) \log p + (2 - z_g) \log(1 - p) + \log \pi(a_g \mid \eta, \xi) \Big\}, \tag{2.4} \]

where ψ = (a_0, ν)′. The vector of parameters to be estimated becomes Φ = (a_1, a_2, ..., a_G, ψ′, p)′. Similar to the extended LNN model, we can use the EM algorithm to maximize the modified marginal likelihood. During the E-step, to obtain the conditional expectation of the modified complete-data log-likelihood, z_g in Eq. (2.4) is replaced by ẑ_g as in Eq. (2.2). The M-step consists of maximizing l̃_c given the current z_g's.
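The E-step/M-step cycle can be sketched for the simplified setting where the two marginal log-densities are already available as arrays. In this illustration, normal marginals stand in for the model's actual marginals, and the closed-form M-step update for p under a Beta(2, 2) prior is an assumption consistent with the text, not the thesis's exact implementation.

```python
import numpy as np

def em_mixture_p(log_pA, log_p0, n_iter=100):
    """EM for the mixing proportion p in a two-component mixture.

    E-step: z_hat_g = p * pA / (p * pA + (1 - p) * p0), computed
    stably in log space. M-step: closed-form update of p under a
    Beta(2, 2) prior (assumed): p = (sum(z_hat) + 1) / (G + 2).
    """
    G = len(log_pA)
    p = 0.5
    for _ in range(n_iter):
        a = np.log(p) + log_pA          # log[p * pA]
        b = np.log1p(-p) + log_p0       # log[(1 - p) * p0]
        z_hat = 1.0 / (1.0 + np.exp(b - a))
        p = (z_hat.sum() + 1.0) / (G + 2.0)
    return p, z_hat

# Synthetic example: 10% of "genes" carry a real shift
rng = np.random.default_rng(2)
G = 5000
z_true = rng.random(G) < 0.1
obs = np.where(z_true, rng.normal(2.0, 1.0, G), 0.0) + rng.normal(0.0, 1.0, G)
log_pA = -0.5 * np.log(2 * np.pi * 2.0) - (obs - 2.0) ** 2 / 4.0  # N(2, 2) marginal
log_p0 = -0.5 * np.log(2 * np.pi) - obs ** 2 / 2.0                # N(0, 1) marginal
p_hat, z_hat = em_mixture_p(log_pA, log_p0)
```

On this synthetic example p_hat recovers a value near the true proportion 0.1, and the returned ẑ values play the role of the posterior probabilities of change in Eq. (2.2).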
Such maximization can be difficult given the high dimensionality of Φ, and here we suggest exploiting the conditional structure of the model during the maximization step, namely that given ψ and p, the genes are conditionally independent and each a_g can be maximized over separately. Let us split the unknown parameters into two groups, namely Φ₁ = (a_1, a_2, ..., a_G)′ (gene-specific shape parameters) and Φ₂ = (ψ′, p)′ (global parameters). The M-step would then consist of iteratively maximizing over Φ₁ given Φ₂ and over Φ₂ given Φ₁. Here, we decided to maximize over Φ₁ only once during the first iteration to reduce the computational burden, and then take EM iterations with respect to Φ₂ only until convergence. It turns out that the estimates obtained were very similar to the ones obtained when maximizing over both Φ₁ and Φ₂, while the computing time was significantly reduced. Details about the estimation of (η, ξ) and the initialization of the EM algorithm can be found in Appendix A.3.

2.3 Application to Experimental Data

2.3.1 Data Description

To illustrate our methodology we use three publicly available microarray datasets: one cDNA experiment and two Affymetrix spike-in experiments. All three have the advantage that in each case the true state (differentially expressed or not) of all or some of the genes is known.

The HIV-1 Data

The expression levels of 4608 cellular RNA transcripts were measured one hour after infection with human immunodeficiency virus type 1 (HIV-1), using four replicates on four different slides. Thirteen HIV-1 genes were included in the set of RNA transcripts to serve as positive controls, i.e., genes known in advance to be differentially expressed. Meanwhile, 29 non-human genes were also included to act as negative controls, i.e., genes known not to be differentially expressed.
Another dataset was obtained by repeating the four aforementioned experiments but with an RNA preparation different from that used for the first dataset. For easy reference, we label the two datasets HIV-1A and HIV-1B respectively. See van't Wout et al. (2003) for more details of the HIV-1 data. The data were normalized using a global lowess normalization step (Yang et al., 2002).

The HGU95A Spike-In Data

This dataset was obtained from a spike-in study by Affymetrix used to develop and validate the MAS 5.0 platform (Affymetrix Manual, 2001). The concentrations of 14 spiked-in human gene groups in 14 groups of HGU95A GeneChip® arrays were arranged in a Latin square design. The concentrations of the 14 groups in the first array group are 0, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024 pM respectively. Each subsequent array group rotates the spike-in concentrations by one group, such that each human gene group was spiked in at a particular concentration level on exactly one array group, and each concentration level came with exactly one spiked-in gene group in each array group. There are three technical replicates in each array group. The third array group was removed from the analysis as one of its replicates was missing. We use a set of 16 spiked-in genes in our list, in recognition of the extras reported by Hsieh et al. (2003) and Cope et al. (2004). Analysis is performed on each set of probe summary indices computed using gcRMA (Wu et al., 2004), RMA (Irizarry et al., 2003a), MAS 5 and dChip (Li and Wong, 2001) respectively.

The HGU133A Spike-In Data

This dataset was obtained from another spike-in study done with HGU133A arrays. A total of 42 spiked-in genes were organized in 14 groups, and the concentrations used were 0, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256 and 512 pM. The arrangement of the spike-in concentrations was similar to the Latin square design stated above.
Again, there are three technical replicates in each array group. For more information see Irizarry et al. (2003b). In addition to the original 42, we claim that another 20 genes should also be included in the spiked-in gene list, as they consistently show significant differential expression across the array groups in the exploratory data analysis. Similar observations have been made by Sheffler et al. (2005). Moreover, the probe sets of three genes contain probe sequences exactly matching those for the spike-ins; these probes should be hybridized by the spike-ins as well. As a result, our expanded spiked-in gene list contains 65 entries in total.

2.3.2 Results

We compare our proposed methods, the extended GG (eGG) and extended LNN (eLNN) models, to five other methods, namely, the EBarrays GG and LNN models, the popular Significance Analysis of Microarrays (SAM) (Tusher et al., 2001), Linear Models for Microarray Data (LIMMA) (Smyth, 2004), and a fully Bayesian approach named BRIDGE (Gottardo et al., 2006a). The results are organized in Tables 2.1–2.3. In the analysis of the HIV-1 data, we obtain the number of genes called differentially expressed (DE) for each method.

Table 2.1: Analysis of differential expression with the HIV-1 data.

(a) HIV-1A

Method   DE   TP*   FP*
GG       24   13    0
LNN      18   13    1
eGG      13   13    0
eLNN     14   13    0
LIMMA    13   13    0
SAM      13   13    0
BRIDGE   14   13    0

(b) HIV-1B

Method   DE   TP*   FP*
GG       18   11    1
LNN      18   11    1
eGG      12   11    0
eLNN     12   11    0
LIMMA    11   11    0
SAM      13   11    0
BRIDGE   11   11    0

* The FDR is controlled at 0.1. The numbers of TP and FP are based on the controls, namely, the 13 (resp. 12 in the second experiment) HIV-1 genes and the 29 non-human genes whose states are known in advance. They do not represent the true numbers of TP and FP in the entire data.
Among those genes called DE, we look at the number of true positives (TP), i.e., genes known to be DE in advance, and the number of false positives (FP), i.e., genes known not to be DE. Gottardo et al. (2006b) showed that one of the HIV-1 genes, which was expected to be highly differentially expressed, had a very small estimated log-ratio and did not properly hybridize in the second experiment (HIV-1B). We removed the corresponding gene from the list of known differentially expressed genes. Thus there are 13 genes known to be DE in the first experiment and 12 in the second. To compare the performance of the seven methods, we control the false discovery rate (FDR) at a fixed level of 0.1. The FDR cutoffs can be selected using a direct posterior probability calculation, as described in Newton et al. (2004). For the HIV-1A dataset, when the FDR is controlled at 0.1, all methods identify the 13 positive controls, while EBarrays LNN makes one FP. A similar result is observed for the HIV-1B dataset: all methods detect 11 out of the 12 positive controls, but both versions of EBarrays (GG and LNN) make one FP each. Judging from the HIV-1 datasets, our proposed eGG and eLNN methods, along with LIMMA, SAM and BRIDGE, appear to perform the best, as they recognize the most positive controls without any FP.

For the HGU95A spike-in data, after removing the array group with one missing replicate, we have a set of 13 array groups. To evaluate the different methods we compare the first array group to each of the other array groups, leading to 12 comparisons. Since dChip may return negative probe summary indices, which cannot be processed by the aforementioned methods, the genes with negative summary indices were filtered out. This excluded 5.5 spike-ins on average. This time, since we know the actual status of each gene, we can check the true FDR of each method against the desired FDR.
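The direct posterior probability calculation of Newton et al. (2004) can be sketched as follows: genes are ranked by posterior probability of change, and the reported list is the largest one whose average posterior probability of no change stays at or below the target FDR. The ẑ values below are synthetic, purely for illustration.

```python
import numpy as np

def call_de_genes(z_hat, fdr=0.1):
    """Flag genes as differentially expressed while controlling the
    expected FDR via the direct posterior probability argument:
    rank genes by decreasing z_hat and keep the longest list whose
    running mean of (1 - z_hat) is <= fdr."""
    order = np.argsort(-z_hat)
    running_fdr = np.cumsum(1.0 - z_hat[order]) / np.arange(1, len(z_hat) + 1)
    n_called = int(np.sum(running_fdr <= fdr))
    called = np.zeros(len(z_hat), dtype=bool)
    called[order[:n_called]] = True
    return called

z_hat = np.array([0.99, 0.97, 0.95, 0.6, 0.3, 0.05])
called = call_de_genes(z_hat, fdr=0.1)  # flags the first three genes
```

Because (1 − ẑ) is sorted in increasing order, the running mean is non-decreasing, so a single threshold on it defines the called list unambiguously.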
In addition, we look at the number of false negatives (FN) as an assessment of power. Unlike the results on the HIV-1 data, SAM does not show competitive performance: a large number of FN (>11) is observed with SAM for both gcRMA and RMA summary indices, considering that there are only 16 entries in our spiked-in gene list. eLNN and LIMMA have the actual FDR closest to the desired FDR in general, though they have a relatively large number of FN cases with the MAS 5 and dChip summary indices. The actual FDRs for the EBarrays GG and LNN methods are too high compared to the other methods, and our proposed extended versions lower the rates by a wide margin while keeping relatively small FN rates.

Table 2.2: Analysis of differential expression with the HGU95A spike-in data.

(a) gcRMA

Method   FN     FDR
GG       2.42   0.22
LNN      1.83   0.22
eGG      1.58   0.28
eLNN     5.83   0.09
LIMMA    4.33   0
SAM      11.25  0.05
BRIDGE   3.6    0.06

(b) RMA

Method   FN     FDR
GG       2.42   0.28
LNN      2.42   0.25
eGG      2.25   0.2
eLNN     3.25   0.15
LIMMA    3.08   0.08
SAM      12.58  0.23
BRIDGE   2.33   0.17

(c) MAS 5

Method   FN     FDR
GG       6.5    0.7
LNN      5.42   0.84
eGG      4.33   0.53
eLNN     7.08   0.26
LIMMA    5.58   0.27
SAM      5.83   0.27
BRIDGE   12.08  0

(d) dChip

Method   FN     FDR
GG       3.25   0.7
LNN      3.58   0.74
eGG      2.83   0.43
eLNN     6.08   0.34
LIMMA    4.83   0.3
SAM      3      0.45
BRIDGE   4.00   0.34

The FDR is controlled at 0.1. The values of FN and FDR shown are the averages across the 12 comparisons.

The HGU133A spike-in data have a set of 14 array groups, and therefore 13 comparisons were made. A total of 14 out of 65 spiked-in genes on average were filtered from the analysis with dChip due to negative summary indices. The relative performance of the seven methods is similar to that for the HGU95A data. It is worth mentioning that eGG is the only method that keeps the number of FN cases low for all four types

Table 2.3: Analysis of differential expression with the HGU133A spike-in data.
(a) gcRMA

Method   FN     FDR
GG       5.85   0.2
LNN      5.92   0.2
eGG      6.46   0.23
eLNN     13.08  0.07
LIMMA    10.38  0.08
SAM      22.23  0.12
BRIDGE   6.01   0.11

(b) RMA

Method   FN     FDR
GG       4.38   0.14
LNN      4.46   0.13
eGG      5.23   0.06
eLNN     6.69   0.09
LIMMA    6.15   0.03
SAM      17.15  0.1
BRIDGE   4.53   0.08

(c) MAS 5

Method   FN     FDR
GG       15.77  0.89
LNN      15.85  0.87
eGG      9.23   0.59
eLNN     15.77  0.23
LIMMA    13.85  0.31
SAM      13.77  0.28
BRIDGE   18.46  0.25

(d) dChip

Method   FN     FDR
GG       9.31   0.48
LNN      9.69   0.58
eGG      6.69   0.44
eLNN     11.31  0.3
LIMMA    9.38   0.26
SAM      5.08   0.28
BRIDGE   6.92   0.51

The FDR is controlled at 0.1. The values of FN and FDR shown are the averages across the 13 comparisons.

of probe summary indices, though its FDR is higher than the desired one. SAM has considerably more FN cases than the other methods for gcRMA and RMA, while its FDR is close to the desired one. eLNN and LIMMA also exhibit good FDR performance, with better FN rates than SAM. Again, the FDRs for the EBarrays GG and LNN methods are quite high, while their extended versions (eGG and eLNN) significantly reduce the rates while keeping relatively small FN rates.

2.4 Simulation Studies

2.4.1 Data Generation

We now use a series of simulations to study the performance of our empirical Bayes framework under different model specifications, compared to the original EBarrays framework and the methods presented in Section 2.3.2. To do so, we generated data from the following models: EBarrays GG (a = 5, a_0 = 0.8, ν = 15), EBarrays LNN (m = 5, σ² = 2, τ⁻¹ = 0.25, σ² being the variance parameter of the prior of μ_gx or μ_gy), extended GG (η = 2, ξ = 1, a_0 = 1, ν = 20) and extended LNN (m = 5, k = 12, α = 2, β = 0.5). The values of the parameters are set in the proximity of the estimates from the HIV-1 data. We fixed the number of genes to 500 and the number of replicates to three in each group, and generated 100 datasets under each of the above models for two different values of p ∈ {0.1, 0.2}.
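One simulated dataset under the extended LNN specification above might be generated as follows. This is a sketch: the hyperparameter values are the ones quoted in the text, and setting both the mean and the precision common to the two conditions for non-DE genes follows the description in Section 2.2.2, with β again assumed to be a rate parameter.

```python
import numpy as np

rng = np.random.default_rng(3)
G, R, p = 500, 3, 0.1                       # genes, replicates, proportion of DE genes
m, k, alpha, beta = 5.0, 12.0, 2.0, 0.5     # extended LNN values from the text

z = rng.random(G) < p                       # z_g = 1 marks a differentially expressed gene
tau = rng.gamma(alpha, 1.0 / beta, size=(G, 2))   # per-condition precisions
mu = rng.normal(m, np.sqrt(k / tau))              # per-condition means given the precisions
# Non-DE genes share one set of gene-specific parameters across conditions
mu[~z, 1], tau[~z, 1] = mu[~z, 0], tau[~z, 0]
log_x = rng.normal(mu[:, 0, None], 1.0 / np.sqrt(tau[:, 0, None]), size=(G, R))
log_y = rng.normal(mu[:, 1, None], 1.0 / np.sqrt(tau[:, 1, None]), size=(G, R))
x, y = np.exp(log_x), np.exp(log_y)
```

Repeating this 100 times per value of p, and analogously for the GG variants, yields the simulation design described above.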
2.4.2 Results

The seven methods mentioned in Section 2.3.2 are applied to each simulated dataset to make inference about differential expression. Results are summarized graphically in two ways: a plot of the actual FDR against the desired FDR, and a plot of the number of FP against the number of FN. The curves show the average results across the 100 simulated datasets. For each dataset, results are collected by setting the cutoffs for the posterior probabilities or p-values at different points in turn when detecting differential expression. As expected, the EBarrays GG and LNN models perform quite poorly compared to the eGG and eLNN models when the variance is not constant, and clearly underestimate the FDR (Figures 2.2 and 2.3). On the other hand, the eGG and eLNN models are comparable to EBarrays when the variance is constant, showing that the borrowing of strength across genes is working well (Figures 2.4 and 2.5). Finally, both GG and eGG (resp. LNN and eLNN) appear to perform relatively well under the LNN and eLNN (resp. GG and eGG) model specifications respectively. This confirms previous simulation studies (Kendziorski et al., 2003). Overall, SAM does not perform very well and tends to underestimate the FDR by a large amount. Meanwhile, LIMMA and BRIDGE consistently show good performance for data generated from the four models, suggesting that they are good candidates for identifying differential expression under a wide variety of settings.

Figure 2.2: Simulation results generated from the extended GG model.
Figure 2.3: Simulation results generated from the extended LNN model.

Figure 2.4: Simulation results generated from the EBarrays GG model.

Figure 2.5: Simulation results generated from the EBarrays LNN model.

2.5 Discussion

We have extended the EBarrays empirical Bayes framework for differential gene expression by relaxing the constant coefficient of variation assumption, and have introduced two algorithms that can be used for parameter estimation. Using both experimental and simulated data, we have shown that the extended framework clearly improves on the original framework. In addition, it appears that the eLNN model performs better than the eGG one, as shown with the spike-in data, and that it is comparable to BRIDGE, a more computationally intensive fully Bayesian approach. This is not the case for the original EBarrays framework, where the GG model generally performs better. This confirms previous findings of Gottardo et al.
(2006a) and suggests that EBarrays GG is more robust to the model misspecification of a constant coefficient of variation than the LNN formulation. However, when the EBarrays model formulations are extended and the constant coefficient of variation assumption is relaxed, the LNN model seems more appropriate. In spite of the complications accompanying the model enhancements relative to the original EBarrays framework, the proposed methodology remains highly competitive in terms of processing time. In the analysis of the HGU133A data, with more than 20,000 genes, it takes about five minutes on the R platform to complete the eGG or eLNN analysis of one comparison between two array groups, each with three replicates. In this chapter, we have compared our approach with five alternatives, but there are many other methods for detecting differentially expressed genes with gene expression data. We chose these five because they are either obvious baseline methods or widely used; they are also representative of other methods. More comparisons between statistical tests can be found in Cui and Churchill (2003). Among explicit adjustments for multiple testing, we considered the FDR control method as it is interpretable under each method. For simplicity and ease of comparison, we assumed that we were in a situation with only two conditions of interest. However, the methodology could easily be extended to the multiple-condition case (Kendziorski et al., 2003) or to more complex ANOVA-type designs (Cui and Churchill, 2003; Smyth, 2004).

Bibliography

Affymetrix Manual (2001). Affymetrix Microarray Suite User Guide Version 5.0. Santa Clara, CA.

Baldi, P. and Long, A. D. (2001). A Bayesian framework for the analysis of microarray expression data: Regularized t-test and statistical inferences of gene changes. Bioinformatics, 17(6):509–519.

Bhowmick, D., Davison, A. C., Goldstein, D. R., and Ruffieux, Y. (2006).
A Laplace mixture model for identification of differential expression in microarray experiments. Biostatistics, 7(4):630–641.

Chen, Y., Dougherty, E. R., and Bittner, M. L. (1997). Ratio-based decisions and the quantitative analysis of cDNA microarray images. Journal of Biomedical Optics, 2(4):364–374.

Cope, L. M., Irizarry, R. A., Jaffee, H. A., Wu, Z., and Speed, T. P. (2004). A benchmark for Affymetrix GeneChip expression measures. Bioinformatics, 20(3):323–331.

Cui, X. and Churchill, G. A. (2003). Statistical tests for differential expression in cDNA microarray experiments. Genome Biology, 4(4):210.

Cui, X., Hwang, J. T., Qiu, J., Blades, N. J., and Churchill, G. A. (2005). Improved statistical tests for differential gene expression by shrinking variance components estimates. Biostatistics, 6(1):59–75.

Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38.

Dudoit, S., Yang, Y. H., Callow, M. J., and Speed, T. P. (2002). Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. Statistica Sinica, 12(1):111–139.

Efron, B., Tibshirani, R., Storey, J. D., and Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association, 96(456):1151–1160.

Gottardo, R., Pannucci, J. A., Kuske, C. R., and Brettin, T. (2003). Statistical analysis of microarray data: a Bayesian approach. Biostatistics, 4:597–620.

Gottardo, R., Raftery, A. E., Yeung, K. Y., and Bumgarner, R. E. (2006a). Bayesian robust inference for differential gene expression in microarrays with multiple samples. Biometrics, 62(1):10–18.

Gottardo, R., Raftery, A. E., Yeung, K. Y., and Bumgarner, R. E. (2006b). Quality control and robust estimation for cDNA microarrays with replicates.
Journal of the American Statistical Association, 101(473):30–40.

Hsieh, W. P., Chu, T. M., and Wolfinger, R. D. (2003). Who are those strangers in the Latin square? In Johnson, K. F. and Lin, S. M., editors, Methods of Microarray Data Analysis III: Papers from CAMDA'02, pages 199–208. Kluwer, Boston.

Irizarry, R., Hobbs, B., Collin, F., Beazer-Barclay, Y., Antonellis, K., Scherf, U., and Speed, T. (2003a). Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics, 4(2):249–264.

Irizarry, R. A., Bolstad, B. M., Collin, F., Cope, L. M., Hobbs, B., and Speed, T. P. (2003b). Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Research, 31(4):e15.

Kendziorski, C., Newton, M., Lan, H., and Gould, M. N. (2003). On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Statistics in Medicine, 22:3899–3914.

Li, C. and Wong, W. H. (2001). Model-based analysis of oligonucleotide arrays: expression index computation and outlier detection. Proceedings of the National Academy of Sciences of the United States of America, 98:31–36.

Lönnstedt, I. and Speed, T. P. (2002). Replicated microarray data. Statistica Sinica, 12:31–46.

Newton, M., Noueiry, A., Sarkar, D., and Ahlquist, P. (2004). Detecting differential gene expression with a semiparametric hierarchical mixture method. Biostatistics, 5(2):155–176.

Newton, M. C., Kendziorski, C. M., Richmond, C. S., Blattner, F. R., and Tsui, K. W. (2001). On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. Journal of Computational Biology, 8:37–52.

Schena, M., Shalon, D., Davis, R. W., and Brown, P. O. (1995). Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science, 270:467–470.

Sheffler, W., Upfal, E., Sedivy, J., and Noble, W. S. (2005).
A learned comparative expression measure for Affymetrix GeneChip DNA microarrays. Proceedings of IEEE Computational Systems Bioinformatics Conference, pages 144–154.
Smyth, G. K. (2004). Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Statistical Applications in Genetics and Molecular Biology, 3:Article 3.
Storey, J. D. (2003). The positive false discovery rate: a Bayesian interpretation and the q-value. Annals of Statistics, 31:2013–2035.
Tusher, V., Tibshirani, R., and Chu, G. (2001). Significance analysis of microarrays applied to the ionizing radiation response. Proceedings of the National Academy of Sciences of the United States of America, 98(9):5116–5121.
van't Wout, A. B., Lehrman, G. K., Mikheeva, S. A., O'Keeffe, G. C., Katze, M. G., Bumgarner, R. E., Geiss, G. K., and Mullins, J. I. (2003). Cellular gene expression upon human immunodeficiency virus type 1 infection of CD4+ T-cell lines. Journal of Virology, 77:1392–1402.
Wu, Z., Irizarry, R. A., Gentleman, R., Martinez-Murillo, F., and Spencer, F. (2004). A model-based background adjustment for oligonucleotide expression arrays. Journal of the American Statistical Association, 99:909–917.
Yang, Y. H., Dudoit, S., Luu, P., Lin, D. M., Peng, V., Ngai, J., and Speed, T. P. (2002). Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic Acids Research, 30(4):e15.

Chapter 3
Flexible Mixture Modeling via the Multivariate t Distribution with the Box-Cox Transformation∗

3.1 Introduction

In statistics, model-based clustering (Titterington et al., 1985; Banfield and Raftery, 1993; McLachlan and Peel, 2000; Fraley and Raftery, 2002) is a popular unsupervised approach for finding homogeneous groups of observations.
The most commonly used model-based clustering approach is based on finite normal mixture models, which have been shown to give good results in various applied fields, for example, gene expression (Yeung et al., 2001; McLachlan et al., 2002; Pan et al., 2002), image analysis (Wehrens et al., 2004; Fraley et al., 2005; Li et al., 2005), medical diagnosis (Schroeter et al., 1998; Forbes et al., 2006) and astronomy (Kriessler and Beers, 1997; Mukherjee et al., 1998). However, normal mixture models rely heavily on the assumption that each component follows a normal distribution, symmetric in shape, which is often unrealistic. A common remedy for the asymmetry issue is to look for transformations of the data that make the normality assumption more realistic. Box and Cox (1964) discussed the power transformation in the context of linear regression, and it has also been applied to normal mixture models (Schork and Schork, 1988; Gutierrez et al., 1995). Another line of attempts to resolve the asymmetry observed in data is to enhance the flexibility of the normal distribution by introducing skewness. Azzalini (1985) developed a class of univariate skew normal distributions with the introduction of a shape parameter to account for the skewness, which was put to use in a mixture modeling context by Lin et al. (2007b). A multivariate version of the skew normal distributions was first proposed by Azzalini and Dalla Valle (1996), with various generalizations and modifications ensuing. One such modification is found in Sahu et al. (2003), who developed a new class of multivariate skew elliptically symmetric distributions with applications to Bayesian regression models, which includes the multivariate skew normal distribution as a special case.

∗ A version of this chapter has been submitted for publication: Lo, K. and Gottardo, R. (2009). Flexible Mixture Modeling via the Multivariate t Distribution with the Box-Cox Transformation.
As opposed to Azzalini and Dalla Valle's (1996) formulation of the skew normal distribution, the correlation structure in that of Sahu et al. (2003) is not affected by the introduction of skewness, in the sense that independence between elements of a random vector is preserved irrespective of changes in the skewness parameters. The latter formulation was adopted by Lin (2009a), who introduced the multivariate skew normal mixture model and described an ECM algorithm (Meng and Rubin, 1993) for maximum likelihood estimation. However, the implementation of this methodology is extremely computationally intensive. A simplified version of Sahu et al.'s (2003) formulation has recently been suggested by Pyne et al. (2009), who parameterized skewness in the form of a vector in place of a matrix. As a result of this simplification, the computational complexity of parameter estimation is reduced considerably. In addition to non-normality, there is also the problem of outlier identification in mixture modeling. Outliers can have a significant effect on the resulting clustering. For example, they usually lead to overestimating the number of components in order to provide a good representation of the data (Fraley and Raftery, 2002); if a more robust model is used, fewer clusters may suffice. Outliers can be handled in the model-based clustering framework by either replacing the normal distribution with a more robust one (e.g., t; see Peel and McLachlan, 2000; McLachlan and Peel, 2000) or adding an extra component to accommodate the outliers (e.g., uniform; see Schroeter et al., 1998). Transformation selection and outlier identification are two issues that can heavily influence each other (Carroll, 1982; Atkinson, 1988).
While a stepwise approach in which the transformation is preselected ahead of outlier detection (or vice versa) may be considered, it is unlikely to tackle the problem well in general, as the preselected transformation may be influenced by the presence of outliers. One possible means of handling the two issues simultaneously is the application of skew t distributions (Azzalini and Capitanio, 2003; Sahu et al., 2003) in mixture modeling. Such an attempt was made by Lin et al. (2007a), who proposed a skew t mixture model based on the formulation of Azzalini and Capitanio (2003), but it is confined to the univariate case. Only recently has a multivariate version of the skew t mixture model come to light: Lin (2009b) and Pyne et al. (2009) adopted an approach similar to the skew normal case in defining the multivariate skew t distribution, thereby simplifying Sahu et al.'s (2003) formulation with a vector in place of a skewness matrix. In view of the aforementioned issues, we propose a unified framework based on mixture models using a new class of skewed distributions, namely, the multivariate t distributions with the Box-Cox transformation, to handle transformation selection and outlier identification simultaneously. The t distribution provides a robust mechanism against outliers with its heavier tails relative to the normal distribution (Lange et al., 1989). The Box-Cox transformation is a type of power transformation that can bring skewed data back to symmetry, a property of both the normal and t distributions. Along with the introduction of the mixture model using this new class of distributions, we also describe a convenient means of parameter estimation via the EM algorithm (Dempster et al., 1977).
While the proposed framework has the considerable appeal of being computationally much simpler than mixture modeling using skew t distributions, it performs well in various scenarios compared to a wealth of competing approaches, as shown in subsequent sections of this chapter. A simplified form of our proposed framework has been applied to flow cytometry, where it shows favorable performance in identifying cell populations (Chapter 4). This chapter presents a comprehensive framework that substantially enriches that previous simplified version, including the selection of component-specific transformations and the estimation of data outlyingness. In addition, it focuses on the computational development of the proposed methodology, and includes a large-scale comparison with competing approaches such as those using skew normal or skew t mixture distributions. The structure of this chapter is as follows. In Section 3.2 we first introduce the new class of skewed distributions, the multivariate t distributions with the Box-Cox transformation. We then introduce the mixture model using the proposed distributions, and present details including outlier identification, density estimation and the selection of the number of components. In addition, we describe an EM algorithm (Dempster et al., 1977) to handle parameter estimation and transformation selection simultaneously for our proposed mixture model. In Section 3.3, the performance of the proposed framework is examined on real datasets and compared to a wealth of commonly used approaches. Section 3.4 presents extensive simulation studies to further evaluate our proposed framework relative to the other approaches. Finally, in Section 3.5 we summarize and discuss our findings.
3.2 Methodology

3.2.1 Preliminaries

The Multivariate t Distribution

The multivariate t distribution has found use as a robust modeling tool in various fields of applied statistics such as linear and non-linear regression, time series, and pedigree analysis; see Lange et al. (1989) and Kotz and Nadarajah (2004) for examples. The t distribution is applied in place of the normal distribution when the latter fails to offer long enough tails for the error distribution. Formally, a random vector y of length p is said to follow a p-dimensional multivariate t distribution with mean μ (ν > 1), covariance matrix ν(ν − 2)^{-1}Σ (ν > 2) and ν degrees of freedom if its density function is given by

\phi_p(\mathbf{y} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}, \nu) = \frac{\Gamma\!\left(\frac{\nu+p}{2}\right) |\boldsymbol{\Sigma}|^{-1/2}}{(\pi\nu)^{p/2}\, \Gamma\!\left(\frac{\nu}{2}\right) \left\{1 + (\mathbf{y}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{y}-\boldsymbol{\mu})/\nu\right\}^{\frac{\nu+p}{2}}}. \quad (3.1)

The degrees of freedom ν may be viewed as a robustness tuning parameter, as it controls the fatness of the tails of the distribution. When ν → ∞, the t distribution approaches a p-dimensional multivariate normal distribution with mean μ, covariance matrix Σ, and density function

\varphi_p(\mathbf{y} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{p/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\left\{-\frac{1}{2} (\mathbf{y}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{y}-\boldsymbol{\mu})\right\}. \quad (3.2)

An account of the development of maximum likelihood estimation for the multivariate t distribution can be found in Liu and Rubin (1995), Liu (1997) and Peel and McLachlan (2000). The estimation involves the use of the EM algorithm or its variants, including the ECM (Meng and Rubin, 1993) and ECME (Liu and Rubin, 1994) algorithms. The crux of these algorithms is the fact that a t distribution can be parameterized as a normal-gamma compound distribution.
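This limiting behavior is easy to check numerically. The sketch below uses SciPy's multivariate_t and multivariate_normal with arbitrary illustrative parameter values (not taken from the chapter) and shows the gap between the two log-densities of Eqs.(3.1) and (3.2) shrinking as ν grows.

```python
import numpy as np
from scipy.stats import multivariate_t, multivariate_normal

# Illustrative parameters (our own choice, not from the chapter).
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
y = np.array([0.5, 0.5])

# Eq. (3.1) with growing degrees of freedom approaches the normal density (3.2).
lp_norm = multivariate_normal(mean=mu, cov=Sigma).logpdf(y)
for nu in (4, 40, 4000):
    lp_t = multivariate_t(loc=mu, shape=Sigma, df=nu).logpdf(y)
    print(nu, lp_t - lp_norm)  # gap shrinks toward zero as nu increases
```

Note that for finite ν the shape matrix Σ of the t distribution is not its covariance matrix; the covariance is ν(ν − 2)^{-1}Σ, as stated above.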
The degrees of freedom ν may be jointly estimated along with the other unknown parameters, or fixed a priori when the sample size is small. In the latter case, the setting ν = 4 has been found to provide good protection against outliers and to work well in many applications (see, for example, Lange et al., 1989; Stephens, 2000).

Box-Cox Transformation

The power transformation proposed by Box and Cox (1964) was originally introduced to make data with asymmetric distributions fulfill the normality assumption in a regression model. The Box-Cox transformation of an observation y is defined as follows:

y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda} - 1}{\lambda} & \lambda \neq 0 \\ \log y & \lambda = 0 \end{cases} \quad (3.3)

where λ is referred to as the transformation parameter. Note that this function is defined for positive values of y only. In view of the need to handle negative-valued data in some applications, we adopt a modified version (Bickel and Doksum, 1981) of the Box-Cox transformation which is also defined for negative values:

y^{(\lambda)} = \frac{\operatorname{sgn}(y)\,|y|^{\lambda} - 1}{\lambda}, \quad \lambda > 0. \quad (3.4)

There exist several modified versions of the Box-Cox transformation to handle negative-valued data, for example, the log-shift transformation, which was also proposed in Box and Cox's (1964) paper for the original Box-Cox transformation. The advantage of our choice, given by Eq.(3.4), is that, while continuity is maintained across the whole range of the data, it retains the simplicity of the form of the transformation without introducing additional parameters; when all data are positive, it reduces to the original version.

3.2.2 The Multivariate t Distribution with the Box-Cox Transformation

In this subsection, we propose a new class of distributions, namely, the multivariate t distributions with the Box-Cox transformation (tBC), to handle transformation and to accommodate outliers simultaneously.
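Before continuing, the modified transformation of Eq.(3.4) can be written in a few lines (a minimal NumPy sketch; the function name box_cox_mod is ours):

```python
import numpy as np

def box_cox_mod(y, lam):
    """Modified Box-Cox transform of Bickel and Doksum (1981), Eq. (3.4).

    Defined for negative y as well; requires lam > 0.
    For lam = 1 it reduces to a shift of one unit to the left."""
    y = np.asarray(y, dtype=float)
    return (np.sign(y) * np.abs(y) ** lam - 1.0) / lam
```

For positive inputs this coincides with the original Box-Cox transform of Eq.(3.3) with λ ≠ 0.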
Explicitly, a random vector y of length p following such a distribution has a density function specified by

\phi_p(\mathbf{y}^{(\lambda)} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}, \nu) \cdot |J_p(\mathbf{y}; \lambda)|, \quad (3.5)

where |J_p(\mathbf{y}; \lambda)| = |y_1^{\lambda-1} y_2^{\lambda-1} \cdots y_p^{\lambda-1}| is the Jacobian induced by the Box-Cox transformation. Equivalently speaking, the random vector y follows a multivariate t distribution after being Box-Cox transformed. It is difficult to derive the exact mean and variance of the distribution in closed form. However, using a first-order Taylor series expansion, approximations for the mean and covariance matrix can be derived. The mean can be approximated by a vector of length p with j-th element sgn(λμ_j + 1) |λμ_j + 1|^{1/λ}, and the variance by ν/(ν − 2) D_p(μ; λ) Σ D_p(μ; λ), where D_p(μ; λ) is a diagonal matrix of order p with j-th diagonal element |λμ_j + 1|^{1/λ−1}. The various shapes that can be represented by the tBC are shown in Figure 3.1.

Figure 3.1: Contour plots revealing the shape of bivariate t distributions with the Box-Cox transformation for different values of the transformation parameter. Each distribution has a mean of 10 and unit variance along each dimension. The degrees of freedom parameter is fixed at eight. The values of the transformation parameter λ range from −5 (extremely right-skewed) to 5 (extremely left-skewed).
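Evaluating the tBC log-density of Eq.(3.5) amounts to transforming the data, evaluating the multivariate t log-density and adding the log-Jacobian. A sketch, assuming SciPy's multivariate_t and using our own function name tbc_logpdf:

```python
import numpy as np
from scipy.stats import multivariate_t

def tbc_logpdf(y, mu, Sigma, nu, lam):
    """Log-density of the multivariate t with Box-Cox transformation, Eq. (3.5):
    the t log-density of the transformed data plus the log-Jacobian."""
    y = np.asarray(y, dtype=float)
    y_lam = (np.sign(y) * np.abs(y) ** lam - 1.0) / lam   # Eq. (3.4)
    log_jac = (lam - 1.0) * np.sum(np.log(np.abs(y)))      # log |J_p(y; lam)|
    return multivariate_t(loc=mu, shape=Sigma, df=nu).logpdf(y_lam) + log_jac
```

As a sanity check, with λ = 1 the Jacobian vanishes and this reduces to a t log-density of the data shifted by one unit, matching the remark on λ = 1 in the next paragraph.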
Analogous to the case of the t distribution without transformation, the tBC approaches a multivariate normal distribution with the Box-Cox transformation (NBC) when ν → ∞. In addition, this class of distributions also includes the untransformed version of the multivariate t or normal distribution. The untransformed t or normal distribution is recovered by setting λ in Eq.(3.5) to one, although there is a translation of one unit to the left in each direction on the original scale (due to the term −1/λ in Eq.(3.4)). The flexible class of tBC offers robustness against both outliers and asymmetry observed in data. Comparatively, the t distribution alone is deemed robust in the sense that it offers a mechanism to accommodate outliers. As noted by Lange et al. (1989), however, the t distribution is not robust against asymmetric error distributions. When asymmetry is observed, data transformation is desired for the sake of restoring symmetry, and subsequently drawing proper inferences. The introduction of the tBC is therefore in line with Lange et al.'s notion.

3.2.3 The Mixture Model of t Distributions with the Box-Cox Transformation

The Model

Making use of the tBC introduced in the last subsection, we now define a G-component mixture model in which each component is described by a tBC. Given data y, with independent p-dimensional observation vectors y_i, i = 1, ..., n, the likelihood for the tBC mixture model is given as follows:

L(\boldsymbol{\Psi} \mid \mathbf{y}) = \prod_{i=1}^{n} \sum_{g=1}^{G} w_g\, \phi_p(\mathbf{y}_i^{(\lambda_g)} \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g, \nu_g) \cdot |J_p(\mathbf{y}_i; \lambda_g)|, \qquad \sum_{g=1}^{G} w_g = 1. \quad (3.6)

The mixing proportion w_g is the probability that an observation belongs to the g-th component. Estimates of the unknown parameters Ψ = (Ψ_1, ..., Ψ_G), where Ψ_g = (w_g, μ_g, Σ_g, ν_g, λ_g), can be obtained conveniently using an EM algorithm described in the next subsection.
Analogous to the case of the tBC, the mixture distribution approaches that of an NBC mixture model, with \phi_p(\cdot \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g, \nu_g) replaced by \varphi_p(\cdot \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g), when ν_g → ∞ for all g. Also, the class of tBC mixture models includes the conventional, untransformed t or normal mixture model, obtained by fixing λ_g = 1 for all g. Note that a restricted form of Eq.(3.6) has been previously applied to identify cell populations in flow cytometry data, on setting a global transformation parameter λ = λ_g and fixing ν_g = 4 for all g (Chapter 4).

Maximum Likelihood Estimation

In this subsection we illustrate how transformation selection can be handled simultaneously with parameter estimation via an EM algorithm. As in the algorithm for a t mixture model described in Peel and McLachlan (2000), we first define two types of missing data to augment the set of complete data. One is the unobserved component membership z_i = (z_{i1}, ..., z_{iG}) associated with each observation y_i, where

z_{ig} = \begin{cases} 1 & \text{if } \mathbf{y}_i \text{ belongs to the } g\text{-th component} \\ 0 & \text{otherwise.} \end{cases}

Each vector Z_i independently follows a multinomial distribution with one trial and event probabilities w = (w_1, ..., w_G), denoted Z_i ∼ M_G(1, w). The other type of missing data is the weight u_i, coming from the normal-gamma compound parameterization of the t distribution, such that

\mathbf{Y}_i \mid u_i, z_{ig} = 1 \sim N(\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g / u_i) \quad (3.7)

independently for i = 1, ..., n, and U_i ∼ Ga(ν_g/2, ν_g/2). The advantage of writing the model in this way is that, conditional upon the U_i's, the sampling errors are again normal but with different precisions, and estimation becomes a weighted least squares problem.
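The compound representation of Eq.(3.7) is easy to verify by simulation. Below is a univariate sketch with illustrative values ν = 10, μ = 2, σ² = 1 (our own choices); note that the Ga(ν/2, ν/2) rate parameterization corresponds to NumPy's scale = 2/ν. The resulting draws should have mean μ and variance ν/(ν − 2)σ² = 1.25.

```python
import numpy as np

rng = np.random.default_rng(42)
nu, n = 10.0, 200_000
mu, sigma2 = 2.0, 1.0   # univariate instance of Eq. (3.7), for simplicity

# U_i ~ Ga(nu/2, nu/2): shape nu/2, rate nu/2 (i.e. scale 2/nu in NumPy)
u = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
# Y_i | u_i ~ N(mu, sigma2 / u_i)
y = rng.normal(mu, np.sqrt(sigma2 / u))

# Marginally, Y follows a t distribution with nu degrees of freedom,
# mean mu and variance nu/(nu - 2) * sigma2.
print(y.mean(), y.var())
```

The heavier tails of the marginal distribution come entirely from the random precision u_i: small draws of u_i inflate the conditional variance, producing occasional extreme observations.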
The complete-data log-likelihood becomes

l_c(\boldsymbol{\Psi} \mid \mathbf{y}, \mathbf{z}, \mathbf{u}) = \sum_{i=1}^{n} \sum_{g=1}^{G} z_{ig} \Big\{ \log\big[ w_g\, \varphi_p(\mathbf{y}_i^{(\lambda_g)} \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g / u_i) \cdot |J_p(\mathbf{y}_i; \lambda_g)| \big] + \log \mathrm{Ga}(u_i \mid \nu_g/2, \nu_g/2) \Big\}
= \sum_{i=1}^{n} \sum_{g=1}^{G} z_{ig} \Big\{ \log w_g - \frac{p}{2} \log(2\pi) - \frac{1}{2} \log |\boldsymbol{\Sigma}_g| - \frac{u_i}{2} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g) + (\lambda_g - 1) \sum_{j=1}^{p} \log |y_{ij}| + \frac{\nu_g}{2} \log\frac{\nu_g}{2} - \log \Gamma\Big(\frac{\nu_g}{2}\Big) + \frac{\nu_g}{2} (\log u_i - u_i) + \Big(\frac{p}{2} - 1\Big) \log u_i \Big\}, \quad (3.8)

where Ga(·|·) is the density function of u_i. The E-step of the EM algorithm involves the computation of the conditional expectation of the complete-data log-likelihood, E_Ψ(l_c|y). To facilitate this, we need to compute \tilde{z}_{ig} \equiv E_{\boldsymbol{\Psi}}(Z_{ig} \mid \mathbf{y}_i), \tilde{u}_{ig} \equiv E_{\boldsymbol{\Psi}}(U_i \mid \mathbf{y}_i, z_{ig} = 1) and \tilde{s}_{ig} \equiv E_{\boldsymbol{\Psi}}(\log U_i \mid \mathbf{y}_i, z_{ig} = 1):

\tilde{z}_{ig} \leftarrow \frac{w_g\, \phi_p(\mathbf{y}_i^{(\lambda_g)} \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g, \nu_g) \cdot |J_p(\mathbf{y}_i; \lambda_g)|}{\sum_{k=1}^{G} w_k\, \phi_p(\mathbf{y}_i^{(\lambda_k)} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k, \nu_k) \cdot |J_p(\mathbf{y}_i; \lambda_k)|}, \quad (3.9)

\tilde{u}_{ig} \leftarrow \frac{\nu_g + p}{\nu_g + (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)} \quad (3.10)

and

\tilde{s}_{ig} \leftarrow \log \tilde{u}_{ig} + \psi\Big(\frac{\nu_g + p}{2}\Big) - \log\Big(\frac{\nu_g + p}{2}\Big), \quad (3.11)

where ψ(·) is the digamma function. Note that, if we assume a global transformation parameter λ, then Eq.(3.9) used to compute \tilde{z}_{ig} simplifies slightly to

\tilde{z}_{ig} \leftarrow \frac{w_g\, \phi_p(\mathbf{y}_i^{(\lambda)} \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g, \nu_g)}{\sum_{k=1}^{G} w_k\, \phi_p(\mathbf{y}_i^{(\lambda)} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k, \nu_k)}. \quad (3.12)

As can be seen in the following, \tilde{s}_{ig} only appears in Eq.(3.18) or Eq.(3.19) for the update of the degrees of freedom ν_g. If we fix ν_g to some predetermined value, then \tilde{s}_{ig} is not needed and Eq.(3.11) can be omitted.
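The E-step quantities of Eqs.(3.9)-(3.11) can be sketched as follows (a NumPy/SciPy sketch under our own function names and array conventions; the log-scale normalization of the responsibilities is an implementation detail for numerical stability, not part of the derivation):

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import multivariate_t

def bc(y, lam):
    # Modified Box-Cox transform, Eq. (3.4)
    return (np.sign(y) * np.abs(y) ** lam - 1.0) / lam

def e_step(y, w, mu, Sigma, nu, lam):
    """One E-step for a G-component tBC mixture (Eqs. 3.9-3.11).
    y: (n, p); w, nu, lam: length-G arrays; mu: (G, p); Sigma: (G, p, p)."""
    n, p = y.shape
    G = len(w)
    log_dens = np.empty((n, G))
    u = np.empty((n, G))
    for g in range(G):
        y_g = bc(y, lam[g])                                  # y_i^{(lam_g)}
        diff = y_g - mu[g]
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma[g]), diff)
        u[:, g] = (nu[g] + p) / (nu[g] + maha)               # Eq. (3.10)
        log_jac = (lam[g] - 1.0) * np.sum(np.log(np.abs(y)), axis=1)
        log_dens[:, g] = (np.log(w[g])
                          + multivariate_t(mu[g], Sigma[g], df=nu[g]).logpdf(y_g)
                          + log_jac)
    # Eq. (3.9), normalized on the log scale for stability
    log_dens -= log_dens.max(axis=1, keepdims=True)
    z = np.exp(log_dens)
    z /= z.sum(axis=1, keepdims=True)
    # Eq. (3.11)
    s = np.log(u) + digamma((nu + p) / 2.0) - np.log((nu + p) / 2.0)
    return z, u, s
```

With G = 1 the responsibilities are identically one, and the weights ũ lie in (0, 1 + p/ν), consistent with the bound discussed later in this section.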
Upon plugging \tilde{z}_{ig}, \tilde{u}_{ig} and \tilde{s}_{ig} into Eq.(3.8) for z_{ig}, u_i and log u_i respectively, we obtain the conditional expectation of the complete-data log-likelihood. In the M-step, we update the parameter estimates with the values that maximize this conditional expectation. The mixing proportions are updated with the following formula:

\hat{w}_g \leftarrow \frac{n_g}{n}, \quad (3.13)

where n_g \equiv \sum_i \tilde{z}_{ig}. The estimation of μ_g and Σ_g needs to be considered along with the transformation parameter λ_g of the Box-Cox transformation. Closed-form solutions for μ_g and Σ_g are available conditional on λ_g as follows:

\hat{\boldsymbol{\mu}}_g = \frac{\sum_{i=1}^{n} \tilde{z}_{ig} \tilde{u}_{ig}\, \mathbf{y}_i^{(\lambda_g)}}{\sum_{i=1}^{n} \tilde{z}_{ig} \tilde{u}_{ig}} = h_1(\lambda_g); \quad (3.14)

\hat{\boldsymbol{\Sigma}}_g = \frac{\sum_{i=1}^{n} \tilde{z}_{ig} \tilde{u}_{ig} (\mathbf{y}_i^{(\lambda_g)} - \hat{\boldsymbol{\mu}}_g)(\mathbf{y}_i^{(\lambda_g)} - \hat{\boldsymbol{\mu}}_g)^T}{n_g} = h_2(\lambda_g). \quad (3.15)

No closed-form solution is available for λ_g, but on substituting \hat{\boldsymbol{\mu}}_g = h_1(\lambda_g) and \hat{\boldsymbol{\Sigma}}_g = h_2(\lambda_g) into the conditional expectation of the complete-data log-likelihood for μ_g and Σ_g respectively, the problem reduces to a one-dimensional search for λ_g.
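Conditional on λ_g, the closed-form updates of Eqs.(3.13)-(3.15) for one component can be sketched as follows (transformed data and E-step weights are taken as given; function name ours):

```python
import numpy as np

def m_step_given_lambda(y_lam, z, u):
    """Weighted-least-squares M-step updates of Eqs. (3.13)-(3.15) for one
    component, conditional on the transformation.
    y_lam: (n, p) transformed data; z, u: length-n E-step weights."""
    zu = z * u
    w_hat = z.sum() / len(z)                                  # Eq. (3.13)
    mu_hat = (zu[:, None] * y_lam).sum(axis=0) / zu.sum()     # Eq. (3.14)
    diff = y_lam - mu_hat
    # Eq. (3.15): weighted outer products, normalized by n_g = sum_i z_ig
    Sigma_hat = (zu[:, None, None]
                 * np.einsum('ij,ik->ijk', diff, diff)).sum(axis=0) / z.sum()
    return w_hat, mu_hat, Sigma_hat
```

As a check, with all z̃ and ũ equal to one (a single normal component with no downweighting), the updates reduce to the sample mean and the maximum likelihood covariance estimate.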
Explicitly, the optimization is recast as a one-dimensional root-finding problem for the equation \partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y}) / \partial \lambda_g = 0, in which

\frac{\partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y})}{\partial \lambda_g} = \frac{\partial}{\partial \lambda_g} \sum_{i=1}^{n} \tilde{z}_{ig} \Big\{ -\frac{\tilde{u}_{ig}}{2} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g) + (\lambda_g - 1) \sum_{j=1}^{p} \log |y_{ij}| \Big\}
= \sum_{i=1}^{n} \Big[ -\tilde{z}_{ig} \tilde{u}_{ig} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} \frac{\partial \mathbf{y}_i^{(\lambda_g)}}{\partial \lambda_g} \Big] + \sum_{i=1}^{n} \tilde{z}_{ig} \sum_{j=1}^{p} \log |y_{ij}|, \quad (3.16)

where \partial \mathbf{y}_i^{(\lambda_g)} / \partial \lambda_g is a vector of length p whose j-th element is \lambda_g^{-2} \big[ \operatorname{sgn}(y_{ij}) |y_{ij}|^{\lambda_g} (\lambda_g \log |y_{ij}| - 1) + 1 \big], and μ_g and Σ_g are replaced with \hat{\boldsymbol{\mu}}_g = h_1(\lambda_g) and \hat{\boldsymbol{\Sigma}}_g = h_2(\lambda_g) respectively. The equation may be solved numerically using, for example, Brent's (1973) algorithm. If we assume a global transformation parameter λ instead, the left-hand side of the equation to consider is slightly modified from Eq.(3.16):

\frac{\partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y})}{\partial \lambda} = \sum_{i=1}^{n} \sum_{g=1}^{G} \Big[ -\tilde{z}_{ig} \tilde{u}_{ig} (\mathbf{y}_i^{(\lambda)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} \frac{\partial \mathbf{y}_i^{(\lambda)}}{\partial \lambda} \Big] + \sum_{i=1}^{n} \sum_{j=1}^{p} \log |y_{ij}|. \quad (3.17)

Once a numerical estimate of λ_g has been obtained, we substitute it back into Eqs.(3.14)-(3.15) to update μ_g and Σ_g respectively. To complete the M-step, we need to update the estimate of the degrees of freedom ν_g, unless it is fixed a priori. From Eq.(3.8), we see that there is no overlap between the terms involving (μ_g, Σ_g, λ_g) and those involving ν_g. Hence, the incorporation of the Box-Cox transformation does not complicate the estimation of ν_g.
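The score of Eq.(3.16), with μ_g and Σ_g profiled out via h_1(λ) and h_2(λ), can be sketched as below; SciPy's brentq implements Brent's method, assuming a sign change has been bracketed. All function names here are ours.

```python
import numpy as np
from scipy.optimize import brentq

def bc(y, lam):
    # Modified Box-Cox transform, Eq. (3.4)
    return (np.sign(y) * np.abs(y) ** lam - 1.0) / lam

def dbc_dlam(y, lam):
    # Elementwise derivative of y^(lam) w.r.t. lam (expression below Eq. 3.16)
    return (np.sign(y) * np.abs(y) ** lam
            * (lam * np.log(np.abs(y)) - 1.0) + 1.0) / lam ** 2

def score_lambda(lam, y, z, u):
    """dE(l_c)/d lambda_g of Eq. (3.16) for one component, with mu and Sigma
    replaced by h1(lam) and h2(lam) of Eqs. (3.14)-(3.15)."""
    y_lam = bc(y, lam)
    zu = z * u
    mu = (zu[:, None] * y_lam).sum(axis=0) / zu.sum()         # h1(lam)
    diff = y_lam - mu
    Sigma = (zu[:, None, None]
             * np.einsum('ij,ik->ijk', diff, diff)).sum(axis=0) / z.sum()
    Sinv = np.linalg.inv(Sigma)                               # h2(lam)^{-1}
    term1 = -(zu * np.einsum('ij,jk,ik->i', diff, Sinv, dbc_dlam(y, lam))).sum()
    term2 = (z * np.log(np.abs(y)).sum(axis=1)).sum()
    return term1 + term2

# One-dimensional root search with Brent's method, once a sign change of
# score_lambda has been bracketed on some interval [lo, hi]:
# lam_hat = brentq(score_lambda, lo, hi, args=(y, z, u))
```

The derivative formula is easy to validate against a central finite difference of bc, which is a useful unit test when implementing Eq.(3.16).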
Again, since no closed-form solution is available for ν_g, we turn its update into a one-dimensional root-finding problem by considering the equation \partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y}) / \partial \nu_g = 0, in which

\frac{\partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y})}{\partial \nu_g} = \frac{\partial}{\partial \nu_g} \sum_{i=1}^{n} \tilde{z}_{ig} \Big\{ \frac{\nu_g}{2} \log \frac{\nu_g}{2} - \log \Gamma\Big(\frac{\nu_g}{2}\Big) + \frac{\nu_g}{2} (\tilde{s}_{ig} - \tilde{u}_{ig}) \Big\}
\propto n_g \Big\{ \log \frac{\nu_g}{2} + 1 - \psi\Big(\frac{\nu_g}{2}\Big) \Big\} + \sum_{i=1}^{n} \tilde{z}_{ig} (\tilde{s}_{ig} - \tilde{u}_{ig}). \quad (3.18)

If we assume a global degrees of freedom ν = ν_g for all g, the derivative \partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y}) / \partial \nu is given by

\frac{\partial E_{\boldsymbol{\Psi}}(l_c \mid \mathbf{y})}{\partial \nu} \propto n \Big\{ \log \frac{\nu}{2} + 1 - \psi\Big(\frac{\nu}{2}\Big) \Big\} + \sum_{i=1}^{n} \sum_{g=1}^{G} \tilde{z}_{ig} (\tilde{s}_{ig} - \tilde{u}_{ig}). \quad (3.19)

Alternatively, to improve convergence, we may exploit the advantage of the ECME algorithm (Liu and Rubin, 1994) and instead update ν by optimizing the constrained actual log-likelihood function:

\hat{\nu} \leftarrow \arg\max_{\nu} \sum_{i=1}^{n} \log \Big( \sum_{g=1}^{G} w_g\, \phi_p(\mathbf{y}_i^{(\lambda_g)} \mid \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g, \nu) \cdot |J_p(\mathbf{y}_i; \lambda_g)| \Big). \quad (3.20)

Beyond the intuition that faster convergence can be expected when we disregard the parameter estimates carried over from the previous iteration (through the conditional expectation of the complete-data log-likelihood) and consider the actual likelihood instead of its approximation, this update also saves a little computational burden by circumventing the computation of \tilde{s}_{ig}. The EM algorithm alternates between the E- and M-steps until convergence. The quantity \tilde{z}_{ig} may be interpreted as the posterior probability that observation y_i belongs to the g-th component.
The maximum a posteriori configuration results from assigning each observation to the component associated with the largest \tilde{z}_{ig} value. The uncertainty corresponding to each assignment may be conveniently quantified as 1 - \max_g \tilde{z}_{ig} (Bensmail et al., 1997).

Outlier Identification

Just like \tilde{z}_{ig}, the quantity \tilde{u}_{ig} does not only facilitate the implementation of the EM algorithm, but also aids in the interpretation of the final estimated model. As seen from Eqs.(3.14)-(3.15), \tilde{u}_{ig} serves as the weight in the weighted least squares estimation of μ_g and Σ_g. It holds a negative relationship with the Mahalanobis distance (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g) between y_i and μ_g on the transformed scale, as given by Eq.(3.10). Hence, a small value of \tilde{u}_{ig} suggests that the corresponding observation is an outlier, and diminishes its influence on the estimation of the parameters. In contrast, in the absence of such a mechanism, a normal mixture model is not robust against outliers, as the constraint \sum_g \tilde{z}_{ig} = 1 for all i forces all observations to make equal overall contributions towards parameter estimation. Exploiting this mechanism, we may conveniently call an observation an outlier when its associated \tilde{u}_{ig} value is smaller than a threshold, say, 0.5. Such a threshold may be selected on a theoretical basis by considering the one-to-one correspondence between \tilde{u}_{ig} and the Mahalanobis distance, which follows a standard, known distribution.
Noting that

(\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g)^T \boldsymbol{\Sigma}_g^{-1} (\mathbf{y}_i^{(\lambda_g)} - \boldsymbol{\mu}_g) / p \sim F(p, \nu_g), \quad (3.21)

where \mathbf{y}_i^{(\lambda_g)} follows a p-dimensional t distribution with parameters (μ_g, Σ_g, ν_g) and F(·) denotes an F distribution, a threshold c for \tilde{u}_{ig} may be determined by considering the desired threshold quantile level α of the distribution stated in Eq.(3.21):

c = \frac{\nu_g + p}{\nu_g + p\, F_{1-\alpha}(p, \nu_g)}, \quad (3.22)

where F_{1-\alpha}(\cdot) denotes the α quantile of the F distribution, such that \Pr(F \geq F_{1-\alpha}) = 1 - \alpha. For instance, if ν_g = 4, p = 5, and the desired threshold quantile level is α = 0.9, then the corresponding threshold for \tilde{u}_{ig} is c = 0.37, given the 0.9 quantile F_{0.1}(5, 4) = 4.051. Any observation with \tilde{u}_{ig} < 0.37 will be deemed an outlier. From Eq.(3.10), we can also see how the degrees of freedom ν_g participates in robustifying the parameter estimation process. A smaller value of ν_g tends to downweight outliers to a greater extent, while a large enough value tends to regress all weights to one, approaching the case of the NBC model. In addition, the upper bound of \tilde{u}_{ig} offers a guide for setting the degrees of freedom ν = ν_g for all g, if it is preferred to fix it in advance. The weight \tilde{u}_{ig} takes a value in (0, 1 + p/ν_g), and for a moderate-valued ν_g, its mean is around one. To prevent a point in the vicinity of the central location of a mixture component from exerting excessive influence on the estimation of parameters, we may set ν such that the ratio p/ν is maintained at an appropriate level, for example, one to 1.5.
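The threshold of Eq.(3.22) is a one-liner with SciPy's F distribution; the sketch below (function name ours) reproduces the worked example above with ν_g = 4, p = 5, α = 0.9.

```python
import numpy as np
from scipy.stats import f

def u_threshold(nu, p, alpha):
    """Threshold c of Eq. (3.22): observations with u_ig below c lie outside
    the alpha quantile of the F(p, nu) distribution of Eq. (3.21).
    Note f.ppf(alpha, p, nu) is the alpha quantile, i.e. F_{1-alpha} in the
    chapter's upper-tail notation."""
    return (nu + p) / (nu + p * f.ppf(alpha, p, nu))

# Worked example from the text: nu = 4, p = 5, alpha = 0.9 gives c ≈ 0.37.
print(u_threshold(4.0, 5, 0.9))
```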
Density Estimation

One advantage of mixture modeling based on the normal distribution is that the marginal distribution for any subset of the dimensions is also normally distributed, with the mean and covariance matrix extracted from the conformable dimensions (Johnson and Wichern, 2002). This favorable property is also observed in the multivariate t distribution (Liu and Rubin, 1995; Kotz and Nadarajah, 2004), making the estimation of the marginal density for any dimensions available at a very low computational cost. Consider the partition Y = (Y_1, Y_2) as an example. If Y comes from a multivariate t distribution with ν degrees of freedom and with mean and covariance matrix conformably partitioned as \boldsymbol{\mu} = (\boldsymbol{\mu}_1, \boldsymbol{\mu}_2) and

\frac{\nu}{\nu - 2} \boldsymbol{\Sigma} = \frac{\nu}{\nu - 2} \begin{pmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \end{pmatrix}

respectively, then its subset Y_1 will follow a t distribution with mean μ_1, covariance matrix ν/(ν − 2) Σ_{11} and the same ν degrees of freedom. This nice property easily extends to a t mixture model with more than one component and, in addition, is preserved in our proposed tBC mixture model. One can easily derive the marginal density by extracting the conformable partitions from the means, covariance matrices and the Jacobian. The 90th percentile regions of the mixture components shown in Figure 3.2 are produced by this means.

Selecting the Number of Components

When the number of mixture components is unknown, we apply the Bayesian Information Criterion (BIC) (Schwarz, 1978) to guide the selection. The BIC provides a convenient approximation to the integrated likelihood of a model and, in the context of mixture models, is defined as

\mathrm{BIC}_G = 2 \log \tilde{L}_G - K_G \log n, \quad (3.23)

where \tilde{L}_G is the likelihood value of Eq.(3.6) evaluated at the maximum likelihood estimates of Ψ, and K_G is the number of independent parameters for a G-component mixture model.
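Eq.(3.23) is straightforward to compute once the maximized log-likelihood is available. The parameter count below is one possible bookkeeping for a tBC mixture with unconstrained, component-specific ν_g and λ_g (our own count, stated as an assumption; constrained variants, e.g. a global λ or fixed ν, would reduce K_G accordingly):

```python
import numpy as np

def bic(loglik, n_params, n):
    """BIC of Eq. (3.23): twice the maximized log-likelihood minus a
    complexity penalty of K_G * log(n)."""
    return 2.0 * loglik - n_params * np.log(n)

def n_free_params(G, p):
    """One possible K_G for a G-component tBC mixture in p dimensions:
    (G - 1) free mixing proportions, plus, per component, p mean elements,
    p(p + 1)/2 covariance elements, one nu_g and one lambda_g.
    This count assumes all of these are estimated; it is an illustration,
    not a formula stated in the chapter."""
    return (G - 1) + G * (p + p * (p + 1) // 2 + 2)
```

Larger BIC is better under this sign convention, matching the selection rule described next.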
The BIC would then be computed for a range of possible values of G, and the value with the largest BIC (or one relatively close to it) would be selected.

Figure 3.2: Scatterplots revealing the assignment of observations for different models applied to the crabs dataset, projected onto the dimensions of the frontal lobe size and the width of the rear region of the carapace: (a) t + Box-Cox, (b) t, (c) Normal + Box-Cox, (d) Normal, (e) Skew t, (f) Skew Normal. The solid lines represent the 90th percentile region of the components in the mixture models. The line colors match the group they are labeled with, determined in such a way that the lowest misclassification rate is derived. Misclassified observations are drawn in red, overriding the original colors used to reveal their true group memberships.

Often, the BIC is applied in line with the principle of parsimony, by which we favor a simpler model if it does not incur a downgrade in modeling performance. Suppose there are two tBC mixture models with G_1 and G_2 components respectively, such that G_1 < G_2.
Under this principle, we would prefer the simpler model, i.e., the one with G1 components, unless very strong evidence of improved performance is observed, signified by an increase of more than 10 of the BIC for the G2-component model over that for the G1-component model (Kass and Raftery, 1995; Fraley and Raftery, 2002).

3.3 Application to Real Data

3.3.1 Data Description

To illustrate our methodology we use the following two real datasets.

The bankruptcy dataset

This dataset was obtained from a study which conducted financial ratio analysis to predict corporate bankruptcy (Altman, 1968). The sample consists of 66 manufacturing firms in the United States, of which 33 went bankrupt and the other 33 remained solvent. The data collected include the ratio of retained earnings (RE) to total assets, and the ratio of earnings before interest and taxes (EBIT) to total assets. For the bankrupt firms, these ratios were derived from financial statements released two years prior to bankruptcy; for the solvent firms, from statements covering the same period.

The crabs dataset

Measurements in this dataset were collected from a study of rock crabs of the genus Leptograpsus (Campbell and Mahon, 1974). The sample comprises 50 crabs for each combination of species (blue and orange color forms) and sex (male and female), for a total of 200 observations. Five morphological measurements were taken on each crab: the frontal lobe size, the width of the rear region of the carapace, the length of the carapace along the midline, the maximum width of the carapace, and the depth of the body.

3.3.2 Results

We compare the performance of six mixture modeling approaches using different mixture distributions, namely, t with the Box-Cox transformation (tBC), t, normal with the Box-Cox transformation (NBC), normal, skew t, and skew normal. Since all observations in the two datasets come with known labels, we can assess and compare the models on the following two criteria: misclassification rates and the number of components selected.
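The selection rule just described can be sketched in code. The following is a minimal illustration, not the chapter's implementation: BIC values, on the larger-is-better scale 2·log-likelihood − k·log(n) used here, are scanned from the simplest model upward, and a larger model is adopted only when it improves the BIC by more than 10.

```python
import math

def bic(loglik, n_params, n_obs):
    # BIC on the larger-is-better scale: 2 * log-likelihood - k * log(n)
    return 2.0 * loglik - n_params * math.log(n_obs)

def select_num_components(bic_values, threshold=10.0):
    """Pick the number of components by the parsimony rule.

    bic_values[i] is the BIC of the model with i + 1 components.
    A larger model is preferred only if its BIC exceeds that of the
    current choice by more than `threshold` (Kass and Raftery, 1995).
    Returns the selected number of components (1-based).
    """
    chosen = 0
    for g in range(1, len(bic_values)):
        if bic_values[g] - bic_values[chosen] > threshold:
            chosen = g
    return chosen + 1
```

For example, with BIC values (−3300, −3200, −3150, −3148) for G = 1, ..., 4, the rule stops at G = 3, since the four-component model gains only 2 BIC units over the three-component one.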
Classification

We fit the two datasets using the six aforementioned models in turn, fixing the number of mixture components at the known values, i.e., two for the bankruptcy dataset and four for the crabs dataset. The same initialization strategy is applied to the EM algorithm for all the models: each time, 10 random partitions are generated, each of which is followed by a few short EM runs, and the run delivering the highest likelihood value is taken as the initial configuration for the eventual EM algorithm. At convergence of the EM algorithm, misclassification rates, i.e., the proportions of observations assigned to the incorrect group, are computed. Each misclassification rate is determined as the minimum over all permutations of the labels of the components.

Table 3.1 shows the misclassification rates for the different models.

Table 3.1: Misclassification rates for different models applied to the bankruptcy and crabs datasets.

    Model          Bankruptcy    Crabs
    tBC            0.152 (10)    0.070 (14)
    t              0.273 (18)    0.075 (15)
    NBC            0.167 (11)    0.345 (69)
    Normal         0.318 (21)    0.290 (58)
    Skew t         0.303 (20)    0.085 (17)
    Skew Normal    0.394 (26)    0.175 (35)

    The best results are shown in bold. The numbers of misclassified cases are given in parentheses.

As can be seen, for the bankruptcy dataset, the tBC and NBC mixture models deliver misclassification rates (15.2% and 16.7% respectively) lower than those of the other methods by a large margin. On graphical inspection of the results, we find that the poor classification performance of the other four methods is due to their inability to resolve the shape of the two groups of observations properly (Figures 3.3(b,d–f)). The challenge likely arises from the scattered group of bankrupt firms, whose most concentrated region is located at the upper right corner, in close proximity to the dense group of solvent firms.
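A brute-force way to compute such a label-permutation-minimized misclassification rate (a sketch for illustration, not the chapter's code; feasible here since the number of components is small) is:

```python
from itertools import permutations

def misclassification_rate(true_labels, assigned_labels, n_groups):
    """Smallest error rate over all relabelings of the fitted components.

    Labels are integers in 0..n_groups-1.
    """
    n = len(true_labels)
    best_errors = n
    for perm in permutations(range(n_groups)):
        # perm[a] is the true-group label matched to fitted component a
        errors = sum(1 for t, a in zip(true_labels, assigned_labels)
                     if perm[a] != t)
        best_errors = min(best_errors, errors)
    return best_errors / n
```

With G components this scans G! relabelings; for the crabs dataset with G = 4 that is only 24 permutations.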
The sensitivity of normal mixture models to outliers is clearly demonstrated in this example: the obvious outlier at the bottom of the scatterplot leads to an excessively sparse component representing the bankrupt group. Consequently, most observations in the bankrupt group are absorbed by the compact component representing the solvent group. The shapes of the components in the t, skew t and skew normal mixture models are not all the same, but in all of them the scattered group of bankrupt firms is split into two components, one absorbing a concentration extending to the left and the other to the bottom. In contrast, both the tBC and NBC mixture models provide a good representation of both groups of observations (Figures 3.3(a,c)). The group of bankrupt firms is resolved quite successfully upon a proper transformation (λ̂ ≈ 0.5 for both models) of the observations. As another means of performance assessment, we look into the location of the misclassified observations in a plot of the ordered uncertainties (Figure 3.4). The misclassified observations spread over the entire range of the uncertainties, suggesting that the t, skew t and skew normal mixture models simply provide an incorrect representation of the two groups (Figures 3.4(b,e,f)). The quality of the fit using the tBC and NBC mixture models is confirmed by the corresponding uncertainty plots (Figures 3.4(a,c)): the observations associated with high uncertainties are also the ones most likely to be misclassified.
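The Box-Cox power transformation behind the tBC and NBC models has the standard form y(λ) = (x^λ − 1)/λ for λ ≠ 0, with y(0) = log x as the limiting case; the fitted λ̂ ≈ 0.5 above corresponds roughly to a square-root transformation. A minimal sketch:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transformation of a positive scalar x (Box and Cox, 1964)."""
    if abs(lam) < 1e-12:          # limiting case: lambda -> 0 gives log(x)
        return math.log(x)
    return (x ** lam - 1.0) / lam
```

For instance, box_cox(4.0, 0.5) = (2 − 1)/0.5 = 2.0, and for any λ the transform maps x = 1 to 0, which is what makes the family continuous in λ.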
Figure 3.3: Scatterplots revealing the assignment of observations for different models applied to the bankruptcy dataset (EBIT ratio against RE ratio): (a) t + Box-Cox, (b) t, (c) Normal + Box-Cox, (d) Normal, (e) Skew t, (f) Skew Normal. The black solid lines represent the 90th percentile region of the components in the mixture models. Misclassified observations are drawn in red.

Figure 3.4: Plots revealing the location of misclassified observations relative to the ordered uncertainties of all observations for different models applied to the bankruptcy dataset: (a) t + Box-Cox, (b) t, (c) Normal + Box-Cox, (d) Normal, (e) Skew t, (f) Skew Normal. Locations of the misclassified observations are marked with red vertical lines.

The results on the crabs dataset once again show that the tBC mixture model delivers the best performance in terms of misclassification rate (7%). It is followed closely by the t (7.5%) and skew t (8.5%) mixture models. Figure 3.2 shows a scatterplot of the crabs dataset projected onto the first two dimensions, namely, the frontal lobe size and the width of the rear region of the carapace. However, unlike the bankruptcy dataset with only two dimensions, a visually clear discrimination of the four groups in the crabs dataset cannot be achieved by projecting the observations onto any two of the five dimensions. We therefore display the crabs dataset on its second versus third principal components, which provide a good visual separation of the groups.
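The uncertainty plotted in the uncertainty plots is, for each observation, the complement of its largest posterior probability of component membership: observations sitting in the overlap of components have values approaching 1 − 1/G, while confidently assigned ones are near 0. A minimal sketch, assuming the posterior matrix comes from an already-fitted mixture model:

```python
def ordered_uncertainties(posterior):
    """posterior[i][g] = posterior probability that observation i
    belongs to component g (each row sums to 1).

    Returns the uncertainties 1 - max_g posterior[i][g] sorted in
    ascending order, as displayed in the ordered uncertainty plots.
    """
    return sorted(1.0 - max(row) for row in posterior)
```

For a two-component fit, ordered_uncertainties([[0.9, 0.1], [0.55, 0.45]]) gives approximately [0.1, 0.45], the second observation being the ambiguous one.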
Figures 3.5(a,b) suggest that the few misclassified observations in the tBC and t mixture models all lie in the overlapping regions of neighboring groups, indicating that these models provide a good representation of all four groups in the dataset. This is further confirmed by the uncertainty plots, in which the misclassified observations are also among the ones with the highest uncertainties (Figures 3.6(a,b)). Meanwhile, from Figures 3.5(c,d,f) we find that, for the poorly performing NBC, normal and skew normal mixture models, misclassified cases are concentrated in one or two of the groups. Figures 3.2(c,d,f) reveal that these models incorrectly split the observations from those groups into other components. As expected, these three poorly performing normal-based models have misclassified observations spread over the entire range of the uncertainty plots (Figures 3.6(c,d,f)).

Selecting the Number of Components

To facilitate this part of the analysis, when we apply the aforementioned models, we fit the data and compute the BIC once for each choice of the number of mixture components G = 1, 2, ..., M, where M = 6 for the bankruptcy dataset and M = 8 for the crabs dataset.
These values of M are chosen because they are well above the true numbers of groups (two for the bankruptcy dataset and four for the crabs dataset), so that little change in the result is expected were we to increase M further; moreover, numerical problems may arise when M is too large.

Figure 3.5: Plots revealing the assignment of observations for different models applied to the crabs dataset, displayed via the second and third principal components: (a) t + Box-Cox, (b) t, (c) Normal + Box-Cox, (d) Normal, (e) Skew t, (f) Skew Normal. Misclassified observations are drawn in red, overriding the original colors used to reveal their true group memberships.

Figure 3.6: Plots revealing the location of misclassified observations relative to the ordered uncertainties of all observations for different models applied to the crabs dataset: (a) t + Box-Cox, (b) t, (c) Normal + Box-Cox, (d) Normal, (e) Skew t, (f) Skew Normal. Locations of the misclassified observations are marked with red vertical lines.

Figure 3.7: Plots of BIC against the number of components for the different models applied to the (a) bankruptcy and (b) crabs datasets.

From the BIC curves shown in Figure 3.7, we observe a single peak for each modeling choice over the range of the number of components attempted. The number of components at which the peak occurs is deemed optimal by the BIC for the respective model. The BIC has selected the correct number of components (two) for all the mixture models except normal when applied to the bankruptcy dataset (Table 3.2). As for the crabs dataset, in which the separation of the groups is less clear-cut, selecting the right number of components poses a challenge to most models.
Only the tBC and t mixture models, guided by the BIC, have resolved the correct number of components (four). This further confirms our observation in the last subsection that a four-component tBC or t mixture model provides the best representation of the data out of all the candidates.

Table 3.2: The number of components selected by the BIC for different models applied to the bankruptcy and crabs datasets.

    Model          Bankruptcy    Crabs
    tBC            2             4
    t              2             4
    NBC            2             3
    Normal         3             3
    Skew t         2             2
    Skew Normal    2             3

    The best results are shown in bold.

3.4 Simulation Studies

We have conducted a series of simulations to further evaluate the performance of our proposed framework relative to the other approaches presented in Section 3.3.2. The different approaches are evaluated for their sensitivity to model misspecification, using the following two criteria: the accuracy in the assignment of observations, and the accuracy in selecting the number of components.

3.4.1 Data Generation

To facilitate the comparison, we generate data from the following mixture models: tBC, skew t, t and normal. To assess the accuracy in the assignment of observations, two settings of parameter values have been adopted: one taken from the estimates obtained when applying each of the aforementioned models to the bankruptcy dataset, and the other from the crabs dataset, with the number of components set at the respective known values. As a result, each dataset generated under the bankruptcy setting consists of two components and two dimensions, while one generated under the crabs setting has four components and five dimensions. For datasets generated under the bankruptcy setting we fix the number of observations at 200, while it is set at 500 for the crabs setting. One hundred datasets are generated from each of the aforementioned models under each setting. To study the accuracy in selecting the number of components, we focus on the crabs setting.
Pertaining to this criterion, the crabs setting offers a better platform for discriminating the relative performance of the different approaches, owing to its larger number of groups and higher dimensionality. In this case, 1000 observations are generated under the crabs setting, to avoid the numerical problems that may arise from the small components formed when the number of components fitted is significantly larger than the true number.

3.4.2 Results

Classification

We apply the six approaches presented in Section 3.3.2 in turn to each generated dataset. Model fitting is done by presuming that the number of components is known, i.e., two for the bankruptcy setting and four for the crabs setting. As in our real data analysis, we consider all permutations of the labels of the components and take the lowest of all misclassification rates computed. The performance of the different models is compared via the average misclassification rates.

Table 3.3: Average misclassification rates for different models applied to datasets generated under the bankruptcy or crabs setting.

                            Model used to generate data       Model used to generate data
                            under the bankruptcy setting      under the crabs setting
    Model used to fit data  tBC     Skew t  t       Normal    tBC     Skew t  t       Normal
    tBC                     0.075   0.100   0.109   0.032     0.011   0.024   0.024   0.027
    t                       0.124   0.094   0.109   0.032     0.014   0.023   0.021   0.029
    NBC                     0.075   0.225   0.126   0.033     0.057   0.046   0.048   0.042
    Normal                  0.126   0.189   0.134   0.030     0.074   0.060   0.070   0.038
    Skew t                  0.142   0.087   0.114   0.032     0.015   0.020   0.023   0.028
    Skew Normal             0.110   0.191   0.144   0.031     0.016   0.021   0.023   0.028

    The best results are shown in bold.

As shown clearly in Table 3.3, our proposed tBC mixture model is the only model that remains the best, or close to the best, in all the comparisons made. It delivers the lowest misclassification rates under both settings (7.5% and 1.1% respectively) when data are generated from the tBC mixture model.
The flexibility of the tBC mixture model is exhibited when we look into its performance under model misspecification. It remains close to the respective true model in those cases, and even delivers the lowest misclassification rate when the true model is t under the bankruptcy setting (10.9%), or normal under the crabs setting (2.7%). Conversely, when data are generated from the tBC mixture model, the t and normal mixture models, lacking a mechanism to handle asymmetric components, do not perform well. It is worth noting that even the skew t mixture model, which is intended for data departing from symmetry, also performs poorly: the associated misclassification rate is as high as 14.2% under the bankruptcy setting, while that for tBC is only 7.5%. When data are generated from the skew t mixture model, the skew t mixture model performs well, taking advantage of its correct specification; the tBC mixture model also shows a competent performance, however. Meanwhile, the skew t mixture model performs satisfactorily when the true mixture model is t or normal. The normal mixture model cannot match the others at all when data are generated from models other than normal, showing its vulnerability to outliers and asymmetric components. In addition, it is interesting to note that the normal mixture model gives a rather high misclassification rate (3.8%) relative to the levels attained by tBC, t and skew t (2.7%–2.9%) even when it is itself the true model for data generation under the crabs setting. It seems that the t-based mixture models are more robust to the initialization of the EM algorithm.

Selecting the Number of Components

In this part of the study, each time we apply a model to a dataset generated under the crabs setting, we set the number of components from one up to eight in turn. The number of components is then selected as the one delivering the highest BIC.
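One simple convention for summarizing the 100 selected values of G per model by a 90% coverage interval (a sketch under the assumption that the interval is the empirical 5th-to-95th percentile range; the thesis does not spell out its exact convention) is:

```python
import math

def coverage_interval(values, level=0.90):
    """Empirical central coverage interval: returns (lo, hi) such that
    roughly `level` of the sorted values fall between them.
    An assumed convention for illustration, not the thesis's code.
    """
    s = sorted(values)
    n = len(s)
    lo = s[int(math.floor((1.0 - level) / 2.0 * (n - 1)))]
    hi = s[int(math.ceil((1.0 + level) / 2.0 * (n - 1)))]
    return lo, hi
```

For example, over 100 repetitions in which the selected G is 4 in 90 cases, 3 in 3 cases and 5 in 7 cases, this yields the interval (4, 5).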
Table 3.4 summarizes the results, giving the 90% coverage interval of the number of components selected by each model over the 100 repetitions. The tBC mixture model selects the correct number of components (four) in the majority of repetitions, even in the case of model misspecification. It is the only model whose 90% coverage intervals contain only the true number of components throughout.

Table 3.4: 90% coverage intervals of the number of components selected by the BIC for different models applied to datasets generated under the crabs setting.

    Model used to       Model used to fit data
    generate data       tBC     t       NBC     Normal  Skew t  Skew Norm.
    tBC                 (4, 4)  (4, 4)  (4, 4)  (4, 4)  (3, 5)  (3, 5)
    Skew t              (4, 4)  (4, 4)  (4, 5)  (4, 5)  (4, 4)  (4, 4)
    t                   (4, 4)  (4, 4)  (4, 5)  (4, 5)  (4, 4)  (4, 4)
    Normal              (4, 4)  (4, 5)  (4, 4)  (4, 5)  (4, 4)  (4, 4)

    The best results are shown in bold.

On the other hand, both the skew t and skew normal mixture models fail to distinguish the four groups properly in about 30% of the datasets generated from the tBC mixture model. Besides, both the NBC and normal mixture models, when applied to datasets generated from the t or skew t mixture model, tend to require an additional component to accommodate the excess of outliers in the data.

3.5 Discussion

In this chapter, we have introduced a new class of distributions for mixture modeling, the t distributions with the Box-Cox transformation. The proposed methodology is in line with Lange et al.'s (1989) notion that transformation selection and outlier identification are two mutually influential issues and should therefore be handled simultaneously. In our real data applications and simulation studies, we have shown the flexibility of this methodology in accommodating asymmetric components in the presence of outliers, and in coping with model misspecification.
The vulnerability of the normal-based models to outliers is exposed in the analysis of the crabs dataset, in which the presence of outliers prevents a clear distinction of the four groups. Lacking a mechanism to downweight the influence of remote observations, these approaches are unable to locate the cores of the four groups in the dataset properly. On the other hand, the analysis of the bankruptcy dataset provides a very good demonstration of the importance of incorporating data transformation in clustering. Without a means to accommodate components departing from symmetry, the t mixture model fails to provide a reasonable representation of the data, even though the number of groups is known in advance. Our simulation studies have confirmed these findings. As mentioned in the Introduction, although mixture modeling using our proposed tBC distributions and that using the skew t distributions follow two lines of development with much the same aim, our approach has the appeal of being computationally much simpler to implement. As noted in Lin (2009b), difficulties arise in evaluating the conditional expectation of the complete-data log-likelihood in the E-step of the EM algorithm for the skew t mixture model. The objective function cannot be derived in closed form due to the presence of analytically intractable quantities. Numerical techniques for optimization as well as integration need to be employed extensively to update a large number of quantities in both the E- and M-steps of the algorithm, undermining the computational stability of the procedure. Besides, the parameterization that accounts for skewness in our proposed model originates from the family of power transformations, which is intuitively interpretable; it is less straightforward to interpret the skewness vector parameterized in the skew t distribution.
In addition, as presented in Section 3.2.3, the way to identify outliers using our approach is straightforward and rests on theoretical grounds. Exploiting the relationship between ũig and the quantile of an F distribution through Eq. (3.22), outlier identification is almost costless once the EM algorithm is completed. By contrast, when the skew t mixture model is used, such a threshold cannot be recast as a known quantity obtained from a standard distribution; consequently, identifying outliers demands extra computational effort, especially when the dimension of the data is high. Finally, and perhaps most importantly, as demonstrated in our real data applications and simulation studies, the simplicity of the computational implementation of our proposed methodology is not achieved at the expense of performance. The results have shown that our proposed approach performs as well as, or even slightly better than, that based on the skew t mixture model.

An open-source software package that facilitates flow cytometry analysis with the methodology proposed in this chapter has been developed and is available at Bioconductor (Gentleman et al., 2004); see Chapter 5 for details. It is released as an R package called flowClust and addresses the strong demand for software development from the flow cytometry community. flowClust is dedicated to the automated identification of cell populations, and is well integrated with other flow cytometry packages. Meanwhile, we recognize the potential of the proposed methodology in other fields, and the importance of developing a general-purpose tool like MCLUST (Fraley and Raftery, 2002, 2006), the popular software that performs clustering analysis based on normal mixture models. We plan to work on such a general-purpose, standalone software package as a contribution to the general public.

Bibliography

Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance, 23(4):589–609.

Atkinson, A. C. (1988). Transformations unmasked. Technometrics, 30:311–318.

Azzalini, A. (1985). A class of distributions which includes the normal ones. Scandinavian Journal of Statistics, 12:171–178.

Azzalini, A. and Capitanio, A. (2003). Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t-distribution. Journal of the Royal Statistical Society, Series B, 65(2):367–389.

Azzalini, A. and Dalla Valle, A. (1996). The multivariate skew-normal distribution. Biometrika, 83(4):715–726.

Banfield, J. D. and Raftery, A. E. (1993). Model-based Gaussian and non-Gaussian clustering. Biometrics, 49:803–821.

Bensmail, H., Celeux, G., Raftery, A. E., and Robert, C. P. (1997). Inference in model-based cluster analysis. Statistics and Computing, 7:1–10.

Bickel, P. J. and Doksum, K. A. (1981). An analysis of transformations revisited. Journal of the American Statistical Association, 76(374):296–311.

Box, G. E. P. and Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society, Series B, 26:211–252.

Brent, R. (1973). Algorithms for Minimization without Derivatives. Prentice-Hall, Englewood Cliffs, NJ.

Campbell, N. A. and Mahon, R. J. (1974). A multivariate study of variation in two species of rock crab of the genus Leptograpsus. Australian Journal of Zoology, 22(3):417–425.

Carroll, R. J. (1982). Prediction and power transformations when the choice of power is restricted to a finite set. Journal of the American Statistical Association, 77(380):908–915.

Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38.

Forbes, F., Peyrard, N., Fraley, C., Georgian-Smith, D., Goldhaber, D. M., and Raftery, A. E. (2006). Model-based region-of-interest selection in dynamic breast MRI. Journal of Computer Assisted Tomography, 30:675–687.

Fraley, C., Raftery, A., and Wehrens, R. (2005). Incremental model-based clustering for large datasets with small clusters. Journal of Computational and Graphical Statistics, 14(3):529–546.

Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97(458):611–631.

Fraley, C. and Raftery, A. E. (2006). MCLUST version 3 for R: Normal mixture modeling and model-based clustering. Technical Report 504, University of Washington, Department of Statistics.

Gentleman, R. C., Carey, V. J., Bates, D. M., Bolstad, B., Dettling, M., Dudoit, S., Ellis, B., Gautier, L., Ge, Y., Gentry, J., Hornik, K., Hothorn, T., Huber, W., Iacus, S., Irizarry, R., Leisch, F., Li, C., Maechler, M., Rossini, A. J., Sawitzki, G., Smith, C., Smyth, G., Tierney, L., Yang, J. Y. H., and Zhang, J. (2004). Bioconductor: open software development for computational biology and bioinformatics. Genome Biology, 5(10):R80.

Gutierrez, R. G., Carroll, R. J., Wang, N., Lee, G.-H., and Taylor, B. H. (1995). Analysis of tomato root initiation using a normal mixture distribution. Biometrics, 51:1461–1468.

Johnson, R. A. and Wichern, D. W. (2002). Applied Multivariate Statistical Analysis. Prentice Hall, Upper Saddle River, NJ.

Kass, R. E. and Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430):773–795.

Kotz, S. and Nadarajah, S. (2004). Multivariate t Distributions and Their Applications. Cambridge University Press, Cambridge.

Kriessler, J. R. and Beers, T. C. (1997). Substructure in galaxy clusters: a two-dimensional approach. The Astronomical Journal, 113:80–100.

Lange, K. L., Little, R. J. A., and Taylor, J. M. G. (1989). Robust statistical modeling using the t-distribution. Journal of the American Statistical Association, 84:881–896.

Li, Q., Fraley, C., Bumgarner, R. E., Yeung, K. Y., and Raftery, A. E. (2005). Donuts, scratches and blanks: robust model-based segmentation of microarray images. Bioinformatics, 21(12):2875–2882.

Lin, T. I. (2009a). Maximum likelihood estimation for multivariate skew normal mixture models. Journal of Multivariate Analysis, 100(2):257–265.

Lin, T. I. (2009b). Robust mixture modeling using multivariate skew t distributions. Statistics and Computing, (in press).

Lin, T. I., Lee, J. C., and Hsieh, W. J. (2007a). Robust mixture modeling using the skew t distribution. Statistics and Computing, 17:81–92.

Lin, T. I., Lee, J. C., and Yen, S. Y. (2007b). Finite mixture modelling using the skew normal distribution. Statistica Sinica, 17:909–927.

Liu, C. (1997). ML estimation of the multivariate t distribution and the EM algorithm. Journal of Multivariate Analysis, 63:296–312.

Liu, C. and Rubin, D. (1994). The ECME algorithm: a simple extension of EM and ECM with faster monotone convergence. Biometrika, 81(4):633–648.

Liu, C. and Rubin, D. (1995). ML estimation of the t distribution using EM and its extensions, ECM and ECME. Statistica Sinica, 5:19–39.

McLachlan, G. and Peel, D. (2000). Finite Mixture Models. Wiley-Interscience, New York.

McLachlan, G. J., Bean, R. W., and Peel, D. (2002). A mixture model-based approach to the clustering of microarray expression data. Bioinformatics, 18(3):413–422.

Meng, X. L. and Rubin, D. B. (1993). Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika, 80:267–278.

Mukherjee, S., Feigelson, E. D., Babu, G. J., Murtagh, F., Fraley, C., and Raftery, A. E. (1998). Three types of gamma ray bursts. The Astrophysical Journal, 508:314–327.

Pan, W., Lin, J., and Le, C. T. (2002). Model-based cluster analysis of microarray gene-expression data. Genome Biology, 3(2):R9.

Peel, D. and McLachlan, G. J. (2000). Robust mixture modelling using the t distribution. Statistics and Computing, 10(4):339–348.

Pyne, S., Hu, X., Wang, K., Rossin, E., Lin, T.-I., Maier, L. M., Baecher-Allan, C., McLachlan, G. J., Tamayo, P., Hafler, D. A., De Jager, P. L., and Mesirov, J. P. (2009). Automated high-dimensional flow cytometric data analysis. Proceedings of the National Academy of Sciences of the United States of America, 106(21):8519–8524.

Sahu, S. K., Dey, D. K., and Branco, M. D. (2003). A new class of multivariate skew distributions with applications to Bayesian regression. Canadian Journal of Statistics, 31(2):129–150.

Schork, N. J. and Schork, M. A. (1988). Skewness and mixtures of normal distributions. Communications in Statistics: Theory and Methods, 17:3951–3969.

Schroeter, P., Vesin, J.-M., Langenberger, T., and Meuli, R. (1998). Robust parameter estimation of intensity distributions for brain magnetic resonance images. IEEE Transactions on Medical Imaging, 17(2):172–186.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6:461–464.

Stephens, M. (2000). Bayesian analysis of mixture models with an unknown number of components: an alternative to reversible jump methods. Annals of Statistics, 28:40–74.

Titterington, D. M., Smith, A. F. M., and Makov, U. E. (1985). Statistical Analysis of Finite Mixture Distributions. Wiley, Chichester, UK.

Wehrens, R., Buydens, L. M. C., Fraley, C., and Raftery, A. E. (2004). Model-based clustering for image segmentation and large datasets via sampling. Journal of Classification, 21:231–253.

Yeung, K. Y., Fraley, C., Murua, A., Raftery, A. E., and Ruzzo, W. L. (2001). Model-based clustering and data transformations for gene expression data. Bioinformatics, 17(10):977–987.
Chapter 4

Automated Gating of Flow Cytometry Data via Robust Model-Based Clustering∗

4.1 Introduction

Flow cytometry (FCM) can be applied to analyze thousands of samples per day. However, as each dataset typically consists of multiparametric descriptions of millions of individual cells, data analysis presents a significant challenge. As a result, despite its widespread use, FCM has not reached its full potential, owing to the lack of an automated analysis platform to parallel the high-throughput data-generation platform. As noted in Lizard (2007), in contrast to the tremendous interest in FCM technology, there is a dearth of statistical and bioinformatics tools to manage, analyze, present, and disseminate FCM data. There is considerable demand for the development of appropriate software tools, as manual analysis of individual samples is error-prone, non-reproducible, non-standardized, not open to re-evaluation, and requires an inordinate amount of time, making it a limiting aspect of the technology (Roederer and Hardy, 2001; Roederer et al., 2001a,b; de Rosa et al., 2003; Bagwell, 2004; Braylan, 2004; Redelman, 2004; Tzircotis et al., 2004; Spidlen et al., 2006).

The process of identifying homogeneous groups of cells that display a particular function, known as gating, is one major component of FCM analysis. As mentioned in Chapter 1, gating currently relies, to a large extent, on software used to apply a series of manually drawn gates (i.e., data filters) that select regions in 2D graphical representations of FCM data. This process is based largely on intuition rather than standardized statistical inference (Parks, 1997; Suni et al., 2003; Bagwell, 2004).

∗ A version of this chapter has been published. Lo, K., Brinkman, R. R. and Gottardo, R. (2008). Automated gating of flow cytometry data via robust model-based clustering. Cytometry Part A, 73A(4):321–332.
It also ignores the high-dimensionality of FCM data, which may convey information that cannot be displayed in 1D or 2D projections. This is illustrated in Figure 4.1 with a synthetic two-dimensional dataset generated from a t mixture model (McLachlan and Peel, 2000) with three components. While the three clusters can be identified using both dimensions, the structure is barely recognizable when the data are projected onto either dimension. This example illustrates the potential loss of information if we disregard the multivariate nature of the data. The same problem occurs when projecting three- (or higher-) dimensional data onto two dimensions.

Several attempts have been made to automate the gating process. Among these, the K-means algorithm (MacQueen, 1967) has found the most applications (Murphy, 1985; Demers et al., 1992; Bakker Schut et al., 1993; Wilkins et al., 2001). Demers et al. (1992) proposed an extension of K-means allowing for non-spherical clusters, but this algorithm has been shown to perform worse than fuzzy K-means clustering (Wilkins et al., 2001). In fuzzy K-means (Rousseeuw et al., 1996), each cell can belong to several clusters with different degrees of association, rather than belonging completely to only one cluster. Even though fuzzy K-means takes some form of classification uncertainty into consideration, it is a heuristic algorithm and lacks a formal statistical foundation. Other popular choices include hierarchical clustering algorithms (e.g., linkage or Pearson coefficient methods).
However, these algorithms are not appropriate for FCM data, since the size of the pairwise distance matrix grows on the order of n² with the number of cells, unless they are applied to some preliminary partition of the data (Bakker Schut et al., 1993), or they are used to cluster across samples, each of which is represented by a few statistics aggregating measurements of individual cells (Maynadié et al., 2002; Lugli et al., 2007). Classification and regression trees (Breiman et al., 1984), artificial neural networks (Boddy and Morris, 1999) and support vector machines (Burges, 1998; Schölkopf and Smola, 2002) have also been used in the context of FCM analyses (Beckman et al., 1995; Kothari et al., 1996; Boddy et al., 2000; Morris et al., 2001), but these supervised approaches require training data, which are not always available.

Figure 4.1: A synthetic 2D dataset with three mixture components. The three components can easily be identified when both dimensions are used (lower left), while the two density curves produced from projecting the data on either dimension fail to capture the structure.

In statistics, the problem of finding homogeneous groups of observations is referred to as clustering. An increasingly popular choice is model-based clustering (Titterington et al., 1985; McLachlan and Basford, 1988; Banfield and Raftery, 1993; McLachlan and Peel, 2000; Fraley and Raftery, 2002), which has been shown to give good results in many applied fields involving high dimensions (greater than ten); see, for example, Yeung et al. (2001), Fraley and Raftery (2002) and Pan et al. (2002). In this chapter, we propose to apply an unsupervised model-based clustering approach to identify cell populations in FCM analysis.
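The loss of information from low-dimensional projections discussed above can be reproduced with a short sketch in plain Python. The cluster locations and the nearest-centroid classification rule below are hypothetical stand-ins for a full mixture fit; the point is only that two clusters whose means differ along both axes are separated better in 2D than in either 1D projection:

```python
import random

random.seed(1)

# Hypothetical 2-D data: two clusters whose means differ along both axes,
# echoing Figure 4.1's point that structure visible in 2-D can be
# obscured in 1-D projections.
n = 1000
cluster0 = [(random.gauss(0, 1), random.gauss(2, 1)) for _ in range(n)]
cluster1 = [(random.gauss(2, 1), random.gauss(0, 1)) for _ in range(n)]

def nearest_mean_error(points0, points1, mean0, mean1):
    """Misclassification rate of a nearest-centroid rule."""
    def dist2(p, m):
        return sum((a - b) ** 2 for a, b in zip(p, m))
    errors = sum(dist2(p, mean0) > dist2(p, mean1) for p in points0)
    errors += sum(dist2(p, mean1) > dist2(p, mean0) for p in points1)
    return errors / (2 * n)

# Using both dimensions: the centroids are 2*sqrt(2) apart.
err_2d = nearest_mean_error(cluster0, cluster1, (0, 2), (2, 0))
# Using the first dimension only: the centroids are just 2 apart.
err_1d = nearest_mean_error([(x,) for x, _ in cluster0],
                            [(x,) for x, _ in cluster1], (0,), (2,))
print(f"2-D error: {err_2d:.3f}, 1-D error: {err_1d:.3f}")
```

With these (assumed) cluster locations, the 1D rule roughly doubles the error rate of the 2D rule, mirroring the behaviour of the density curves in Figure 4.1.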
In contrast to previous unsupervised methods (Murphy, 1985; Demers et al., 1992; Bakker Schut et al., 1993; Roederer and Hardy, 2001; Roederer et al., 2001a,b; Wilkins et al., 2001), our approach provides a formal unified statistical framework to answer central questions: How many populations are there? Should we transform the data? What model should we use? How should we deal with outliers (aberrant observations)? These questions are fundamental to FCM analysis, where one does not usually know the number of populations, and where outliers are frequent. By performing clustering using all variables consisting of fluorescent markers, the full multidimensionality of the data is exploited, leading to more accurate and more reproducible identification of cell populations.

The most commonly used model-based clustering approach is based on finite Gaussian mixture models (Titterington et al., 1985; McLachlan and Basford, 1988; McLachlan and Peel, 2000; Fraley and Raftery, 2002). However, Gaussian mixture models rely heavily on the assumption that each component follows a Gaussian distribution, which is often unrealistic. As a remedy, transformation of the data is often considered. In addition, there is the problem of outlier identification in mixture modeling. Transformation selection can be heavily influenced by the presence of outliers (Carroll, 1982; Atkinson, 1988), which are frequently observed in FCM data.

To handle the issues of transformation selection and outlier identification simultaneously, in Chapter 3 we have developed an automated clustering approach based on t mixture models with the Box-Cox transformation. The t distribution is similar in shape to the Gaussian distribution with heavier tails and thus provides a robust alternative (Lange et al., 1989). The Box-Cox transformation is a type of power transformation, which can bring skewed data back to symmetry, a property of both the Gaussian and t distributions.
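As a rough illustration of how the Box-Cox family symmetrizes skewed data, the following sketch applies the standard positive-data form, y = (x^λ − 1)/λ (with log x at λ = 0), to right-skewed toy data and picks λ on a coarse grid by minimizing absolute sample skewness. This crude criterion is a stand-in for the maximum-likelihood selection of Chapter 3, not the actual procedure:

```python
import math, random

random.seed(2)

def box_cox(x, lam):
    """Standard Box-Cox power transformation for positive x."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def skewness(values):
    """Sample skewness (third standardized moment)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return sum(((v - mean) / sd) ** 3 for v in values) / n

# Right-skewed toy data (lognormal): dispersion grows with magnitude.
data = [math.exp(random.gauss(0, 1)) for _ in range(5000)]

# Coarse grid search for the lambda that best symmetrizes the data.
grid = [l / 10 for l in range(-10, 11)]
best = min(grid, key=lambda l: abs(skewness([box_cox(x, l) for x in data])))
print("skewness before:", round(skewness(data), 2))
print("chosen lambda:", best, "skewness after:",
      round(skewness([box_cox(x, best) for x in data]), 2))
```

For lognormal data the chosen λ lands near zero, i.e., close to the log transformation, and the transformed sample is roughly symmetric.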
In particular, the Box-Cox transformation is effective for data where the dispersion increases with the magnitude, a scenario not uncommon in FCM data.

4.2 Materials and Methods

4.2.1 Data Description

To demonstrate our proposed automated clustering approach, we use two FCM datasets publicly available at http://www.ficcs.org/software.html.

The Rituximab Dataset

Flow cytometric high-content screening (Abraham et al., 2004) was applied in a drug-screening project to identify agents that would enhance the anti-lymphoma activity of Rituximab, a therapeutic monoclonal antibody (Gasparetto et al., 2004). 1600 different compounds were distributed into duplicate 96-well plates and then incubated overnight with the Daudi lymphoma cell line. Rituximab was then added to one of the duplicate plates and both plates were incubated for several more hours. In addition to cells treated with the compound alone, other controls included untreated cells and cells treated with Rituximab alone. During the entire culture period, cells were incubated with the thymidine analogue BrdU to label newly synthesized DNA. Following culture, cells were stained with anti-BrdU and the DNA-binding dye 7-AAD. The proportion of cells in various phases of the cell cycle and undergoing apoptosis was measured with multiparameter FACS analysis.

The GvHD Dataset

Graft-versus-Host Disease (GvHD) occurs in allogeneic hematopoietic stem cell transplant recipients when donor immune cells in the graft initiate an attack on the skin, gut, liver, and other tissues of the recipient. It is one of the most significant clinical problems in the field of allogeneic blood and marrow transplantation. FCM was used to collect data on patients subjected to bone marrow transplant with the goal of identifying biomarkers to predict the development of GvHD. The GvHD dataset is a collection of weekly peripheral blood samples obtained from 31 patients following allogeneic blood and marrow transplant (Brinkman et al., 2007).
Peripheral blood mononuclear cells were isolated using Ficoll-Hypaque and then cryopreserved for subsequent batch analysis. At the time of analysis, cells were thawed and aliquoted into 96-well plates at 1 × 10⁴ to 1 × 10⁵ cells per well. The 96-well plates were then stained with 10 different four-color antibody combinations. All staining and analysis procedures were miniaturized so that small numbers of cells could be stained in 96-well plates with optimally diluted fluorescently conjugated antibodies.

4.2.2 The Model

In statistics, model-based clustering (Titterington et al., 1985; McLachlan and Basford, 1988; McLachlan and Peel, 2000; Fraley and Raftery, 2002) is a popular unsupervised approach for finding homogeneous groups of observations. The most commonly used model-based clustering approach is based on finite Gaussian mixture models, which have been shown to give good results in various applied fields (Banfield and Raftery, 1993; McLachlan and Peel, 2000; Fraley and Raftery, 2002, 2006). However, Gaussian mixture models may give poor representations of clusters in the presence of outliers, or when the clusters are far from elliptical in shape, phenomena commonly observed in FCM data. In view of this, we have proposed an approach based on t mixture models (McLachlan and Peel, 2000; Peel and McLachlan, 2000) coupled with a variant of the Box-Cox transformation (Bickel and Doksum, 1981), which is also defined for negative-valued data, to handle these two issues simultaneously. Please refer to Chapter 3 for a detailed account of an Expectation-Maximization (EM) algorithm (Dempster et al., 1977) for the simultaneous estimation of all unknown parameters along with transformation selection. When the number of clusters is unknown, we use the Bayesian Information Criterion (BIC) (Schwarz, 1978), which gives good results in the context of mixture models (Fraley and Raftery, 1998, 2002).
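To make the model-selection step concrete, the following minimal sketch fits univariate Gaussian mixtures of one to three components by EM and picks the number of components maximizing BIC = 2 log L − p log n (the larger-is-better convention used with mclust-style software). The quantile initialization and the variance floor are simplifications standing in for the hierarchical initialization of Chapter 3; this is an illustration only, not the estimation procedure used in the thesis:

```python
import math, random

random.seed(3)

def em_gmm_1d(data, g, iters=150):
    """Fit a g-component univariate Gaussian mixture by EM; return log-likelihood."""
    data = sorted(data)
    n = len(data)
    # Crude initialization from quantiles of the data.
    mus = [data[(2 * k + 1) * n // (2 * g)] for k in range(g)]
    sds = [max(0.5, (data[-1] - data[0]) / (2 * g))] * g
    pis = [1.0 / g] * g
    loglik = 0.0
    for _ in range(iters):
        # E-step: responsibilities and the current log-likelihood.
        resp, loglik = [], 0.0
        for x in data:
            dens = [pis[k] / (sds[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mus[k]) / sds[k]) ** 2)
                    for k in range(g)]
            tot = sum(dens)
            loglik += math.log(tot)
            resp.append([d / tot for d in dens])
        # M-step: update weights, means, standard deviations.
        for k in range(g):
            nk = sum(r[k] for r in resp)
            pis[k] = nk / n
            mus[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / nk
            sds[k] = max(math.sqrt(var), 0.5)  # floor guards against spikes
    return loglik

def bic(loglik, n_params, n):
    """BIC on the scale where larger values indicate a better model."""
    return 2 * loglik - n_params * math.log(n)

# Two well-separated Gaussian clusters.
data = [random.gauss(0, 1) for _ in range(150)] + \
       [random.gauss(8, 1) for _ in range(150)]
# A g-component model has (g - 1) weights + g means + g standard deviations.
scores = {g: bic(em_gmm_1d(data, g), 3 * g - 1, len(data)) for g in (1, 2, 3)}
best_g = max(scores, key=scores.get)
print("selected number of components:", best_g)
```

On this toy dataset the BIC correctly prefers two components: the third component buys too little extra likelihood to offset the log n penalty on its three additional parameters.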
While it is possible to estimate the degrees of freedom parameter ν of the t distribution for each component of the mixture model as part of the EM algorithm (Peel and McLachlan, 2000), fixing it at a reasonable predetermined value for all components reduces the computational burden while still providing robust results. A reasonable value for ν is four, which leads to a distribution similar to the Gaussian, with slightly fatter tails accounting for outliers. In addition, the EM algorithm needs to be initialized. In this chapter, we apply a type of agglomerative hierarchical clustering based on Gaussian models (Banfield and Raftery, 1993; Fraley, 1998) for initialization. Model-based Gaussian hierarchical clustering is a stepwise process aimed at maximizing the classification likelihood function (Banfield and Raftery, 1993; Celeux and Govaert, 1992). The process starts by treating each observation as its own cluster, and then successively merges the pair of clusters leading to the highest increase in the likelihood until the desired number of clusters is reached. This initialization method is the same as the one used in the model-based clustering strategy proposed by Fraley and Raftery (2002, 2006), as implemented in the R package mclust. As mentioned in the Introduction, hierarchical clustering algorithms pose a problem with FCM data, as they require the storage of a pairwise distance matrix which grows on the order of n² with the number of cells. In view of this, we apply hierarchical clustering to a subset of the data, and perform one EM iteration to cluster the remaining data to complete the initial partition.

4.2.3 Density Estimation

To visualize FCM data, it may be convenient to project high-dimensional data onto 1D or 2D density plots.
One such application can be found in the analysis of the GvHD data, in which cells selected through the CD3+ gate were projected onto the CD4 and CD8β dimensions to produce contour plots (see Figures 4.2 and 4.3). Usually, nonparametric methods are applied to produce such plots. However, all nonparametric methods require a tuning parameter (e.g., the bandwidth for kernel density estimation; see Silverman, 1986) to control the smoothness of these plots, and different software packages have different default settings. In the model-based clustering framework, such plots can easily be generated at a very low computational cost once estimates of the model parameters are available. The degree of smoothness is controlled by the number of components, which is chosen by the BIC (Schwarz, 1978). Please see Section 3.2.3 for more details on implementation.

4.2.4 Sequential Approach to Clustering

In practice, gating is often done on a preselected subset of data chosen by projecting the data on the forward light scatter (FSC) and sideward light scatter (SSC) dimensions. These two variables, which measure the relative morphological properties (corresponding roughly to cell size and shape) of the cells, are often used to distinguish basic cell types (e.g., monocytes and lymphocytes) or to remove dead cells and cell debris. Consequently, similar to Hahne et al. (2006), we have adopted a sequential approach to clustering. We first use the FSC and SSC variables to cluster the data and find basic cell populations, and then perform clustering on one or more populations of interest using all other variables consisting of fluorescent markers. However, our methodology could also be applied to any subset or the entire set of variables.

Figure 4.2: Strategy for clustering the GvHD positive sample to look for CD3+ CD4+ CD8β+ cells. The manual gating strategy is shown in (a–c).
(a) Using FlowJo, a gate was drawn by an expert researcher to define the lymphocyte population. (b) The selected cells were projected on the CD3 dimension, and CD3+ cells were defined by setting an interval gate. (c) Cells within the upper right gate were referred to as CD3+ CD4+ CD8β+. (d–f) A t mixture model with the Box-Cox transformation was used to mimic this manual selection process; here we display the corresponding density estimates. For FlowJo, the density estimates correspond to kernel estimates, while for our gating strategy, the density estimates are obtained from the estimated mixture models.

Figure 4.3: Strategy for clustering the GvHD control sample. (a–c) The same manual gating strategy was applied by the expert researcher. (c) The upper right gate corresponding to the CD3+ CD4+ CD8β+ population contains very few cells, a distinct difference from the positive sample. (d–f) A t mixture model with the Box-Cox transformation was used to mimic this manual selection process; here we display the corresponding density estimates.

4.3 Results

4.3.1 Application to Real Datasets

The Rituximab Dataset

We have re-analyzed a part of the Rituximab dataset using our sequential clustering approach. This dataset contains 1545 cells and four variables: FSC, SSC and two fluorescent markers, namely 7-AAD and anti-BrdU. We compared the different models, namely t mixture with Box-Cox, t mixture, Gaussian mixture with Box-Cox, and Gaussian mixture, with the results obtained through expert manual analysis using the commercial gating software FlowJo (Tree Star, Ashland, Oregon) and the K-means clustering algorithm (MacQueen, 1967). As mentioned in Section 4.2.4, we use a sequential approach where we first cluster the FSC vs. SSC variables to select basic cell populations (first stage), and then cluster the selected population(s) using all remaining variables (second stage).
Figure 4.4(a) shows the initial gating performed by a researcher using FlowJo on the FSC and SSC variables. To facilitate the comparison of our clustering approach with manual analysis at the second stage, we tried to mimic this analysis. To do so, we used a t mixture model with the Box-Cox transformation, fixing the number of components at one, and removed points with weights ũ (please refer to Section 3.2.3 for details) less than 0.5, corresponding to outliers. As shown in Figure 4.4, the selected cells are not exactly the same, but close enough to allow us to compare our clustering approach with the manual gating results when using the two fluorescent markers.

At the second stage, we compare the different clustering models on the selected cells. Since the number of clusters is unknown in advance, we make use of the BIC. The BIC curves shown in Figure 4.5, corresponding to the different models, peak around three to four clusters, motivating us to examine the results obtained using three (Figure 4.6) and four (Figure 4.7) clusters respectively. As expected, K-means performs poorly, as spherical clusters do not provide a good fit. Similarly, untransformed mixture models (t and Gaussian), constrained by the assumption of elliptical clusters, are not flexible enough to capture the top cluster. Furthermore, Gaussian mixture models (even with the Box-Cox transformation) are very sensitive to outliers, which can result in poor classification. For example, when four clusters are used, the Gaussian mixture model breaks the larger cluster into two to accommodate outliers, while the Gaussian mixture model with the Box-Cox transformation also has a large, spread-out cluster to accommodate outliers. Finally, Figures 4.6(b) and 4.7(b) show that our t mixture model-based clustering approach with the Box-Cox transformation provides results comparable with the manual gating analysis by identifying three of the four clusters with well-fit boundaries. Note, however, that none of the four clustering methods detect the left rectangular gate seen in Figure 4.6(a), most likely because of its lower cell density compared with the other gates and the lack of clear separation along the 7-AAD dimension. This gate, which corresponds to apoptotic cells (Gasparetto et al., 2004), contains a loose assemblage of cells located to the left of the three far-right gates. Our methodology permits the identification of the three right clusters with well-fit boundaries, and thus could be combined with expert knowledge in order to identify apoptotic cells.

Figure 4.4: Initial clustering of the Rituximab data using the FSC and SSC variables. (a) In a typical analysis, a gate was manually drawn to select a group of cells for further investigation. (b) A t mixture model with the Box-Cox transformation was used to mimic this manual selection process. In (b), points (shown in gray) outside the boundary drawn in black have weights ũ less than 0.5 and will be removed from the next stage. It can be shown that this boundary corresponds approximately to the 90th percentile region for the t distribution transformed back to the original scale using the Box-Cox parameter. The numbers shown in both plots are the percentages of points within the boundaries which are extracted for the next stage. Both gates capture the highest density region, as shown by the two density estimates.

Figure 4.5: BIC as a function of the number of clusters for different models applied to the Rituximab data. All models have a maximum BIC value around three to four clusters, though there is some uncertainty as the BIC values are relatively close.
For example, one could compute a one-dimensional boundary at the left-end border of the two largest clusters, and automatically label cells to the left of that line as apoptotic.

Figure 4.6: Second-stage clustering of the Rituximab data using all the fluorescent markers (three clusters). (a) Four gates were drawn by a researcher to define four populations of interest. (b–f) Clustering was performed on the cells preselected from the first stage as shown in Figure 4.4(b). The number of clusters was set to three. (b–c) Points outside the boundary drawn in black have weights less than 0.5 and are labeled with "·" when t distributions were used. (d–f) For clustering performed without t distributions, for comparison's sake, boundaries are drawn such that they correspond to the region of the same percentile as the boundaries drawn in (b–c). Different symbols are used for the different clusters. The numbers shown in all plots are the percentages of cells assigned to each cluster. The K-means algorithm is equivalent to the classification EM algorithm (Celeux and Govaert, 1992, 1995) for a Gaussian mixture model assuming equal proportions and a common covariance matrix that is a scalar multiple of the identity matrix. The spherical clusters with equal volumes drawn in (f) correspond to such a constrained model.

Figure 4.7: Second-stage clustering of the Rituximab data using all the fluorescent markers (four clusters). (a) Four gates were drawn by a researcher to define four populations of interest. (b–f) Clustering was performed on cells preselected from the first stage. The number of clusters was set to four. (b–c) Points outside the boundary drawn in black have weights less than 0.5 and are shown in gray when t distributions were used.
(d–f) For clustering performed without t distributions, for comparison's sake, boundaries are drawn such that they correspond to the region of the same percentile as the boundaries drawn in (b–c). Different symbols are used for the different clusters. The numbers shown in all plots are the percentages of cells assigned to each cluster.

Having shown the superiority of our clustering framework in terms of flexibility and robustness compared with common approaches, we now turn to a larger dataset to demonstrate its capability further.

The GvHD Dataset

Two samples of the GvHD dataset (Brinkman et al., 2007) have been re-analyzed: one from a patient who eventually developed acute GvHD, and one from a control. Both datasets consist of more than 12,000 cells and four markers, namely anti-CD4, anti-CD8β, anti-CD3 and anti-CD8, in addition to the FSC and SSC variables. One objective of the analysis is to look for CD3+ CD4+ CD8β+ cells. To demonstrate the capability of our proposed automated clustering approach, we mimic the gating strategy stated in Brinkman et al. (2007). Figures 4.2(a–c) and 4.3(a–c) show the gating performed by an expert researcher using FlowJo. In the initial gating, we first extracted the lymphocyte population using the FSC and SSC variables by applying a t mixture model with the Box-Cox transformation, fixing the number of clusters from one to eight in turn. Figure 4.8(a) shows that the BIC for the positive sample has a large increase from three to four clusters and remains relatively constant afterwards, suggesting that a model fit with four clusters is appropriate. Figure 4.8(b) is the corresponding scatterplot showing the cluster assignment of the points after removing those with weights less than 0.5, regarded as outliers.
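The weight-based trimming rule used in these analyses has a simple closed form: in the E-step of the EM algorithm for a p-dimensional t distribution with ν degrees of freedom, a point with squared Mahalanobis distance d² from the component centre receives weight u = (ν + p)/(ν + d²) (see Chapter 3 for the full derivation). A minimal sketch, with ν = 4 and p = 2 as in this chapter:

```python
def t_weight(d2, nu=4, p=2):
    """E-step weight of a p-dimensional t distribution with nu d.o.f.
    for a point with squared Mahalanobis distance d2; points near the
    centre get weights above 1, outliers are strongly downweighted."""
    return (nu + p) / (nu + d2)

# A typical point near the cluster centre versus a distant outlier.
print(t_weight(1.0))   # close point
print(t_weight(25.0))  # outlier

# Solving u < 0.5 shows the trimming rule removes exactly the points
# with d2 > nu + 2 p (here: d2 > 8).
print(t_weight(8.0))   # boundary case: weight exactly 0.5
```

This makes the u < 0.5 cutoff interpretable as a fixed Mahalanobis-distance threshold on the transformed scale.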
It is clear that the region combining three of the clusters matches closely the gate drawn by the researcher, shown in Figure 4.2(a), corresponding to the lymphocyte population. The next two stages in the manual gating strategy consist of locating the CD3+ cells by placing an interval gate in the CD3 density plot (Figure 4.2(b)), and then identifying the CD3+ CD4+ CD8β+ cells through the upper right gate in the CD4 vs CD8β contour plot (Figure 4.2(c)). When applying our proposed clustering approach, we can combine these two stages by handling all the fluorescent-marker variables at once, fully utilizing the multidimensionality of FCM data.

Figure 4.8: Initial clustering of the GvHD positive sample using the FSC and SSC variables. (a) The BIC curve remains constant beyond four clusters. (b) The scatterplot reveals the use of three clusters to represent the lymphocyte population and the remaining cluster (shown in gray) for dead cells. Points shown in gray have weights less than 0.5 and will be removed from the next stage.

The fitted model with 12 clusters seems to provide a good fit, as suggested by the BIC (Figure 4.9(a)). We compared our results with those obtained through the manual gating approach by first examining the estimated density projected on the CD3 dimension. The unimodal, yet skewed, density curve suggests that it is composed of two populations with substantially different proportions superimposed on each other (Figure 4.2(e)). At a level of around 280, we can well separate the 12 cluster means along the CD3 dimension into two groups, and use the group with high cluster means in the CD3 dimension to represent the CD3+ population. The unimodal nature of the density curve (Figures 4.2(b,e)) implies that the two underlying populations mix together to some extent, and therefore setting a fixed cutoff to classify the cells is likely inappropriate.
The merit of our automated clustering approach shows here: instead of setting a cutoff, it makes use of the information provided by the other dimensions to help classify the cells into the CD3+/CD3− populations. The group with high cluster means in the CD3 dimension consists of five clusters, and among these five, we can easily identify the two clusters at the upper right of the CD4 vs CD8β scatterplot (Figure 4.9(b)) as the CD3+ CD4+ CD8β+ population. We have applied the same strategy to the control sample; see Figures 4.3 and 4.9(c–d). Figure 4.9(c) suggests that, this time, only seven clusters are necessary, as the BIC is relatively flat beyond that. The associated gating results for the control sample are characterized by an absence of CD3+ CD4+ CD8β+ cells, a distinct difference from the positive sample.

Figure 4.9: Second-stage clustering of the GvHD positive sample (a–b) and control sample (c–d) using all the fluorescent markers. Clustering was performed on the cells preselected from the first stage. For the positive sample, (a) the BIC reaches a maximum at 12 clusters; (b) the scatterplot reveals the cluster assignment of the cells. Points assigned to the five clusters with high CD3 means are classified as CD3+ cells. The five regions drawn in solid lines form the CD3+ population. The two regions in the upper right marked with special symbols are identified as the CD3+ CD4+ CD8β+ population. For the control sample, (c) little increment is observed in the BIC beyond seven clusters, suggesting that seven clusters, far fewer than for the positive sample, are enough to model the data in the second stage; (d) the scatterplot reveals the cluster assignment of the cells. Only two clusters have been used to model the CD3+ population.
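The grouping of component means along the CD3 dimension described above can be automated, for instance by splitting the sorted cluster means at the largest gap. The sketch below uses hypothetical means on an arbitrary scale (the level of about 280 in the text was read off the actual fitted model, not computed this way):

```python
def split_at_largest_gap(means):
    """Split sorted 1-D cluster means into low/high groups at the biggest gap."""
    ms = sorted(means)
    gaps = [(ms[i + 1] - ms[i], i) for i in range(len(ms) - 1)]
    _, cut = max(gaps)  # index just below the widest gap
    return ms[:cut + 1], ms[cut + 1:]

# Hypothetical CD3 cluster means: 7 low (CD3-) and 5 high (CD3+),
# echoing the 12-cluster fit for the positive sample.
cd3_means = [95, 120, 150, 170, 190, 210, 230, 300, 340, 360, 395, 420]
low, high = split_at_largest_gap(cd3_means)
print("CD3- cluster means:", low)
print("CD3+ cluster means:", high)
```

With these assumed values the widest gap falls between 230 and 300, so the rule recovers the five high-mean clusters that would represent the CD3+ population.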
This feature is also captured using our automated clustering approach; the fitted model contains no clusters at the upper right of the CD4 vs CD8\u03b2 scatterplot (Figure 4.9(d)). This cell population was of specific interest, as it was identified as one possibly predictive of GvHD, based on the manual gating analysis in Brinkman et al. (2007). 4.3.2 Simulation studies We have conducted a series of simulations to study the performance of different model-based clustering approaches under different model specifications. Model performance is compared using the following two criteria: (a) the accuracy in cluster assignment; (b) the accuracy in selecting the number of clusters. We performed two simulation studies, one where we set the dimension to two resembling the Rituximab dataset, and one where the dimension was set to four resembling the GvHD dataset. In each case, we generated data from each of the following models: t mixture with Box-Cox, t mixture, Gaussian mixture with Box-Cox, and Gaussian mixture, using the parameter estimates obtained at the second stage in the Rituximab and GvHD (positive sample) analyses. For the GvHD, to reduce computational burden, we only selected the five clusters with the largest means in the CD3 dimension, corresponding to the CD3+ population. We refer to the simulation experiments as the Rituximab and the GvHD settings, respectively. We fixed the number of cells at 500 and generated 1000 datasets under each 102 \fTable 4.1: Average misclassification rates for different models applied to data generated under the Rituximab or GvHD setting. 
                       Model used to generate data
                     Rituximab setting                    GvHD setting
Model used     t+Box-Cox    t    G.+Box-Cox Gaussian  t+Box-Cox    t    G.+Box-Cox Gaussian
to fit data
t+Box-Cox        0.187    0.255    0.321    0.344      0.112    0.107    0.135    0.134
t                0.211    0.263    0.400    0.329      0.116    0.111    0.143    0.132
G.+Box-Cox       0.279    0.339    0.251    0.317      0.205    0.191    0.139    0.132
Gaussian         0.251    0.315    0.352    0.301      0.230    0.221    0.152    0.126
G. = Gaussian; the best results are shown in bold.

of the aforementioned models. To study the accuracy in selecting the number of clusters using the BIC, we generated 100 datasets from the same GvHD setting with 1000 cells. Here, we used 1000 cells to avoid numerical problems with small clusters when the number of clusters used is significantly larger than the true number, while we decreased the number of datasets to 100 because of the increased computation required when estimating the number of clusters.

Classification Results

The four clustering methods under comparison were applied to each of the 1000 datasets generated from each model. Model fitting was done by presuming that the number of clusters is known, i.e., four clusters for the Rituximab setting and five for GvHD. We compared the models via misclassification rates, i.e., the proportions of cells assigned to incorrect clusters. When computing the misclassification rates, all permutations of the cluster labels were considered, and the lowest misclassification rate was taken. The scatterplot of one of the datasets (GvHD setting) generated from the t mixture model with the Box-Cox transformation can be found in Figure 4.10.

Figure 4.10: A representative sample generated from the t mixture model with the Box-Cox transformation under the GvHD setting. (a) The sample is displayed through the CD4 and CD8β dimensions. (b–e) Classification results are shown for the four clustering methods. Different plotting symbols are used for different clusters; misclassified points are specially marked.

Overall results are shown in Table 4.1. As expected, the Gaussian mixture models perform poorly when data were generated from the t mixture models because they lack a mechanism to handle outliers. When a transformation was applied during data generation, the mixture models without the Box-Cox transformation fail to perform well. By contrast, the flexibility of the t mixture model with the Box-Cox transformation means it is not penalized much under model misspecification. This is illustrated by the results from the GvHD setting: the t mixture model with the Box-Cox transformation gives the lowest misclassification rates even when the true model is the t mixture model without transformation or the Gaussian mixture model with the Box-Cox transformation.

Selecting the Number of Clusters

In this part of the study, the four models under comparison were applied to each of the 100 datasets generated, setting the number of clusters from one to ten in turn. The number of clusters that delivered the highest BIC was selected. We compared the models via the mode and the 80% coverage interval of the number of clusters selected over the 100 repetitions. As shown in Table 4.2, the t mixture models select the correct number of clusters in the majority of repetitions, even in the case of model misspecification. In addition, they deliver the same 80% coverage intervals as the Gaussian mixture models do when data were generated from Gaussian mixtures, suggesting that the robustness of the t mixture models against outliers provides satisfactory protection against model misspecification. By contrast, the Gaussian mixture models tend to overestimate the number of clusters when an excess of outliers is present in the data generated from t mixtures; in most instances in which overestimation happens, six clusters are selected.
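Because mixture-model cluster labels are arbitrary, each fitted cluster must be matched to a true cluster before errors are counted; this is why all permutations of the labels are considered. A minimal Python sketch of that computation (illustrative only; the thesis analyses were done in R, and the function name is our own):

```python
from itertools import permutations

def misclassification_rate(true_labels, fitted_labels, n_clusters):
    """Proportion of cells assigned to incorrect clusters, minimized
    over all relabelings of the fitted clusters.  Enumerating the
    permutations is feasible here because the number of clusters
    (four or five in the simulations) is small."""
    n = len(true_labels)
    best = 1.0
    for relabel in permutations(range(n_clusters)):
        errors = sum(t != relabel[f]
                     for t, f in zip(true_labels, fitted_labels))
        best = min(best, errors / n)
    return best
```

A fit whose labels are simply a permutation of the truth thus scores a rate of zero, as it should.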
4.4 Discussion

The experimental data and the simulation studies have demonstrated the importance of handling transformation selection, outlier identification and clustering simultaneously. While a stepwise approach in which the transformation is preselected ahead of outlier detection (or vice versa) may be considered, it is unlikely to tackle the problem well in general, as the preselected transformation may be influenced by the presence of outliers. This is shown in the analysis of the Rituximab dataset. Without outlier removal, the use of Gaussian mixture models led to an inappropriate transformation and poor classification in order to accommodate the outliers (Figures 4.6(d) and 4.7(d)). Conversely, without transformation, the t mixture model could not model the shape of the top cluster well (Figures 4.6(c) and 4.7(c)). Similarly, it is necessary to perform transformation selection and clustering simultaneously (Gutierrez et al., 1995; Schork and Schork, 1988), as opposed to using a stepwise approach. It is difficult to know what transformation to select beforehand, as one only observes the mixture distribution and the classification labels are unknown: a skewed distribution could be the result of one dominant cluster and one (or more) smaller clusters.

Table 4.2: Modes and 80% coverage intervals of the number of clusters selected by the BIC for different models applied to data generated under the GvHD setting.

                                Model used to fit data
Model used to      t+Box-Cox        t          G.+Box-Cox     Gaussian
generate data      M.  Int.      M.  Int.      M.  Int.      M.  Int.
t+Box-Cox          5  (5, 6)     5  (5, 6)     6  (6, 7)     6  (6, 8)
t                  5  (5, 7)     5  (5, 6)     6  (6, 7)     6  (6, 8)
G.+Box-Cox         5  (5, 6)     5  (5, 6)     5  (5, 6)     5  (5, 6)
Gaussian           5  (5, 6)     5  (5, 6)     5  (5, 6)     5  (5, 6)
M. = mode; Int. = 80% coverage interval; G. = Gaussian.
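For reference, the Box-Cox transformation that is selected jointly with the clustering is, in its standard positive-data form, a power transformation that reduces to the logarithm as λ → 0 (the thesis works with a variant extended to handle non-positive values; the sketch below covers only the standard positive-data case):

```python
import math

def box_cox(x, lam, eps=1e-8):
    """Standard Box-Cox transformation of a positive value x:
    (x**lam - 1) / lam, with the limiting case log(x) as lam -> 0."""
    if abs(lam) < eps:
        return math.log(x)
    return (x ** lam - 1.0) / lam
```

With λ = 1 the data are left effectively untransformed (up to a shift), so an estimated λ near 0 indicates that the data call for a near-logarithmic transformation.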
As shown by our analyses of the experimental data and the simulation studies, our proposed approach based on t mixture models with the Box-Cox transformation benefits from handling these mutually influential issues simultaneously. Furthermore, as confirmed by the results of our simulation studies, our proposed approach is robust against model misspecification and avoids the problem of Gaussian mixture models, in which excessive clusters are often needed to provide a reasonable fit in the case of model misspecification (Yeung et al., 2001). One of the benefits of model-based clustering is that it provides a mechanism for both “hard” clustering (i.e., the partitioning of the whole dataset into separate clusters) and fuzzy clustering (i.e., a “soft” clustering approach in which each event may be associated with more than one cluster). The latter approach is in line with the rationale that there is uncertainty about the cluster to which an event should be assigned. The overlaps between clusters seen in Figures 4.6 and 4.9 reveal such uncertainty in the cluster assignment. It is well known that the convergence of the EM algorithm depends on the initial conditions used. A bad initialization may incur slow convergence or convergence to a poor local maximum of the likelihood. In the real-data examples and the simulation studies, we used a deterministic approach, model-based hierarchical clustering (Banfield and Raftery, 1993; Fraley, 1998), for initialization. We have found this approach to perform well on the datasets explored here. However, better initialization, perhaps incorporating expert knowledge, might be needed for more complex datasets. For example, if there is a high level of noise in the data, it might be necessary to use an initialization method that accounts for such outliers; see Fraley and Raftery (2002) for an example.
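The relation between the soft and hard views of clustering mentioned above can be made concrete: the EM algorithm yields, for each event, a vector of posterior cluster probabilities; a hard assignment takes the most probable cluster, and one minus that maximum posterior serves as a natural uncertainty measure for the assignment. An illustrative Python sketch (not flowClust's actual code):

```python
def harden(posteriors):
    """Turn each event's posterior cluster probabilities into a hard
    label (the most probable cluster) plus an uncertainty score
    (one minus the largest posterior probability)."""
    labels, uncertainty = [], []
    for probs in posteriors:
        k = max(range(len(probs)), key=probs.__getitem__)
        labels.append(k)
        uncertainty.append(1.0 - probs[k])
    return labels, uncertainty
```

Events lying in the overlap between two clusters receive posteriors near 0.5 each, and hence a high uncertainty score, which is exactly the ambiguity visible in the scatterplots.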
To estimate how long it takes to analyze a sample of a size typical for an FCM dataset, we carried out a test run on a synthetic dataset consisting of one million events and 10 dimensions. An analysis with 10 clusters took about 20 minutes on a 3 GHz Intel Xeon processor with 2 GB of RAM. This illustrates that the algorithm should be quick enough for analyzing a large flow dataset. In general, per EM iteration, the computational time increases linearly with the number of events and on the order of p^2 with the number of variables, p. This is an advantage over hierarchical clustering, in which the computational time and memory space required increase on the order of n^2 with the number of events, making a hierarchical approach impractical when a sample of even moderate size, say >5000 events, is investigated. Like all clustering approaches, the methodology we have developed includes assumptions which may limit its applicability, and it will not identify every cell population in every sample. If the distribution of the underlying population is highly sparse without a well-defined core, our approach may not properly identify all sub-populations. This is illustrated in the Rituximab analysis, where the loosely structured group of apoptotic cells was left undetected. This in turn hindered the capability of the approach to give satisfactory estimates of the G1 and S frequencies for the identified clusters, as would be desired for a normal analysis of a 7-AAD DNA distribution for cultured cells. On the other hand, identification of every cluster may not always be important. The Rituximab study was designed as a high-throughput drug screen to identify compounds that caused a >50% reduction in S-phase cells (Gasparetto et al., 2004), which would be captured by both the manual gates and our automated analysis should it occur.
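The scaling comparison above can be stated as rough operation counts, assuming Mahalanobis distances are computed against a cached Cholesky factor of each covariance matrix (an illustrative sketch of the complexity argument, not a measured benchmark of the actual implementation):

```python
def em_iteration_ops(n_events, p_dims, k_clusters):
    """Dominant cost of one EM iteration for a mixture model: each of
    the n events needs a Mahalanobis distance to each of the k clusters,
    each costing O(p^2) by back-substitution against a precomputed
    Cholesky factor, giving O(n * k * p^2) in total, i.e. linear in n."""
    return n_events * k_clusters * p_dims ** 2

def hierarchical_ops(n_events):
    """Agglomerative hierarchical clustering starts from the full
    pairwise distance matrix, so its time and memory grow as O(n^2)."""
    return n_events * (n_events - 1) // 2
```

Doubling the number of events doubles the per-iteration EM cost but roughly quadruples the hierarchical cost, which is why a hierarchical approach becomes impractical beyond a few thousand events while the EM approach handles a million.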
Furthermore, the exact identification of every cluster through careful manual analysis may not always be possible, especially in high-throughput experiments. For instance, in the manual analysis of the GvHD dataset, a quadrant gate was set in Figure 4.2(c) in order to identify the CD3+ CD4+ CD8β+ population, which was of primary interest. For convenience's sake, this gate was set at the same level across all the samples being investigated. While five clusters can be clearly identified on the graph, it would be time-consuming in a high-throughput environment to manually adjust the positions of each of the gates for all the samples, as well as to identify all novel populations. By contrast, our automated approach can identify these clusters in short order without the need for manual adjustment. Completing the analysis of the GvHD dataset (>12,000 cells, six dimensions) to identify the CD3+ CD4+ CD8β+ population (Figure 4.2) took less than five minutes, using the aforementioned sequential approach to clustering, on an Intel Core 2 Duo with 2 GB of RAM running Mac OS X 10.4.10. A rigorous quantitative assessment is important before implementing this, or any, approach as a replacement for expert manual analysis. The availability of a wide variety of example data would aid in the development and evaluation of automated analysis methodologies. We are therefore developing such a public resource, and would welcome contributions from the wider FCM community.

Bibliography

Abraham, V. C., Taylor, D. L., and Haskins, J. R. (2004). High content screening applied to large-scale cell biology. Trends in Biotechnology, 22(1):15–22.

Atkinson, A. C. (1988). Transformations unmasked. Technometrics, 30:311–318.

Bagwell, C. B. (2004). DNA histogram analysis for node-negative breast cancer. Cytometry Part A, 58A(1):76–78.

Bakker Schut, T. C., de Grooth, B. G., and Greve, J. (1993). Cluster analysis of flow cytometric list mode data on a personal computer.
Cytometry, 14(6):649–659.

Banfield, J. D. and Raftery, A. E. (1993). Model-based Gaussian and non-Gaussian clustering. Biometrics, 49:803–821.

Beckman, R. J., Salzman, G. C., and Stewart, C. C. (1995). Classification and regression trees for bone marrow immunophenotyping. Cytometry, 20(3):210–217.

Bickel, P. J. and Doksum, K. A. (1981). An analysis of transformations revisited. Journal of the American Statistical Association, 76(374):296–311.

Boddy, L. and Morris, C. W. (1999). Artificial neural networks for pattern recognition. In Fielding, A. H., editor, Machine Learning Methods for Ecological Applications, pages 37–87. Kluwer, Boston.

Boddy, L., Morris, C. W., Wilkins, M. F., Al-Haddad, L., Tarran, G. A., Jonker, R. R., and Burkill, P. H. (2000). Identification of 72 phytoplankton species by radial basis function neural network analysis of flow cytometric data. Marine Ecology Progress Series, 195:47–59.

Braylan, R. C. (2004). Impact of flow cytometry on the diagnosis and characterization of lymphomas, chronic lymphoproliferative disorders and plasma cell neoplasias. Cytometry Part A, 58A(1):57–61.

Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth & Brooks, Monterey, CA.

Brinkman, R. R., Gasparetto, M., Lee, S. J. J., Ribickas, A., Perkins, J., Janssen, W., Smiley, R., and Smith, C. (2007). High-content flow cytometry and temporal data analysis for defining a cellular signature of Graft-versus-Host disease. Biology of Blood and Marrow Transplantation, 13(6):691–700.

Burges, C. J. C. (1998). A Tutorial on Support Vector Machines for Pattern Recognition. Kluwer, Boston.

Carroll, R. J. (1982). Prediction and power transformations when the choice of power is restricted to a finite set. Journal of the American Statistical Association, 77(380):908–915.

Celeux, G. and Govaert, G. (1992).
A classification EM algorithm for clustering and two stochastic versions. Computational Statistics and Data Analysis, 14(3):315–332.

Celeux, G. and Govaert, G. (1995). Gaussian parsimonious clustering models. Pattern Recognition, 28(5):781–793.

de Rosa, S. C., Brenchley, J. M., and Roederer, M. (2003). Beyond six colors: a new era in flow cytometry. Nature Medicine, 9(1):112–117.

Demers, S., Kim, J., Legendre, P., and Legendre, L. (1992). Analyzing multivariate flow cytometric data in aquatic sciences. Cytometry, 13(3):291–298.

Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38.

Fraley, C. (1998). Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing, 20(1):270–281.

Fraley, C. and Raftery, A. E. (1998). How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8):578–588.

Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97(458):611–631.

Fraley, C. and Raftery, A. E. (2006). MCLUST version 3 for R: Normal mixture modeling and model-based clustering. Technical Report 504, University of Washington, Department of Statistics.

Gasparetto, M., Gentry, T., Sebti, S., O’Bryan, E., Nimmanapalli, R., Blaskovich, M. A., Bhalla, K., Rizzieri, D., Haaland, P., Dunne, J., and Smith, C. (2004). Identification of compounds that enhance the anti-lymphoma activity of rituximab using flow cytometric high-content screening. Journal of Immunological Methods, 292(1–2):59–71.

Gutierrez, R. G., Carroll, R. J., Wang, N., Lee, G.-H., and Taylor, B. H. (1995). Analysis of tomato root initiation using a normal mixture distribution. Biometrics, 51:1461–1468.
Hahne, F., Arlt, D., Sauermann, M., Majety, M., Poustka, A., Wiemann, S., and Huber, W. (2006). Statistical methods and software for the analysis of high throughput reverse genetic assays using flow cytometry readouts. Genome Biology, 7(8):R77.

Kothari, R., Cualing, H., and Balachander, T. (1996). Neural network analysis of flow cytometry immunophenotype data. IEEE Transactions on Biomedical Engineering, 43(8):803–810.

Lange, K. L., Little, R. J. A., and Taylor, J. M. G. (1989). Robust statistical modeling using the t-distribution. Journal of the American Statistical Association, 84:881–896.

Lizard, G. (2007). Flow cytometry analyses and bioinformatics: interest in new softwares to optimize novel technologies and to favor the emergence of innovative concepts in cell research. Cytometry Part A, 71A:646–647.

Lugli, E., Pinti, M., Nasi, M., Troiano, L., Ferraresi, R., Mussi, C., Salvioli, G., Patsekin, V., Robinson, J. P., Durante, C., Cocchi, M., and Cossarizza, A. (2007). Subject classification obtained by cluster analysis and principal component analysis applied to flow cytometric data. Cytometry Part A, 71A:334–344.

MacQueen, J. B. (1967). Some methods for classification and analysis of multivariate observations. In LeCam, L. and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297, Berkeley. University of California Press.

Maynadié, M., Picard, F., Husson, B., Chatelain, B., Cornet, Y., Le Roux, G., Campos, L., Dromelet, A., Lepelley, P., Jouault, H., Imbert, M., Rosenwadj, M., Vergé, V., Bissières, P., Raphaël, M., Béné, M. C., Feuillard, J., and GEIL (2002). Immunophenotypic clustering of myelodysplastic syndromes. Blood, 100(7):2349–2356.

McLachlan, G. and Peel, D. (2000). Finite Mixture Models. Wiley-Interscience, New York.

McLachlan, G. J. and Basford, K. E. (1988).
Mixture Models: Inference and Applications to Clustering. Marcel Dekker Inc, New York, NY.

Morris, C. W., Autret, A., and Boddy, L. (2001). Support vector machines for identifying organisms – a comparison with strongly partitioned radial basis function networks. Ecological Modelling, 146(1–3):57–67.

Murphy, R. F. (1985). Automated identification of subpopulations in flow cytometric list mode data using cluster analysis. Cytometry, 6(4):302–309.

Pan, W., Lin, J., and Le, C. T. (2002). Model-based cluster analysis of microarray gene-expression data. Genome Biology, 3(2):R9.

Parks, D. R. (1997). Data processing and analysis: data management. In Robinson, J. P., editor, Current Protocols in Cytometry, chapter 10. John Wiley & Sons, Inc, New York.

Peel, D. and McLachlan, G. J. (2000). Robust mixture modelling using the t distribution. Statistics and Computing, 10(4):339–348.

Redelman, D. (2004). CytometryML. Cytometry Part A, 62A(1):70–73.

Roederer, M. and Hardy, R. R. (2001). Frequency difference gating: a multivariate method for identifying subsets that differ between samples. Cytometry, 45(1):56–64.

Roederer, M., Moore, W., Treister, A., Hardy, R. R., and Herzenberg, L. A. (2001a). Probability binning comparison: a metric for quantitating multivariate distribution differences. Cytometry, 45(1):47–55.

Roederer, M., Treister, A., Moore, W., and Herzenberg, L. A. (2001b). Probability binning comparison: a metric for quantitating univariate distribution differences. Cytometry, 45(1):37–46.

Rousseeuw, P. J., Kaufman, L., and Trauwaert, E. (1996). Fuzzy clustering using scatter matrices. Computational Statistics and Data Analysis, 23(1):135–151.

Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. The MIT Press, Cambridge, Massachusetts.

Schork, N. J. and Schork, M. A. (1988). Skewness and mixtures of normal distributions.
Communications in Statistics: Theory and Methods, 17:3951–3969.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6:461–464.

Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman-Hall, New York.

Spidlen, J., Gentleman, R. C., Haaland, P. D., Langille, M., Le Meur, N., Ochs, M. F., Schmitt, C., Smith, C. A., Treister, A. S., and Brinkman, R. R. (2006). Data standards for flow cytometry. OMICS, 10(2):209–214.

Suni, M. A., Dunn, H. S., Orr, P. L., de Laat, R., Sinclair, E., Ghanekar, S. A., Bredt, B. M., Dunne, J. F., Maino, V. C., and Maecker, H. T. (2003). Performance of plate-based cytokine flow cytometry with automated data analysis. BMC Immunology, 4:9.

Titterington, D. M., Smith, A. F. M., and Makov, U. E. (1985). Statistical Analysis of Finite Mixture Distributions. Wiley, Chichester, UK.

Tzircotis, G., Thorne, R. F., and Isacke, C. M. (2004). A new spreadsheet method for the analysis of bivariate flow cytometric data. BMC Cell Biology, 5:10.

Wilkins, M. F., Hardy, S. A., Boddy, L., and Morris, C. W. (2001). Comparison of five clustering algorithms to classify phytoplankton from flow cytometry data. Cytometry, 44(3):210–217.

Yeung, K. Y., Fraley, C., Murua, A., Raftery, A. E., and Ruzzo, W. L. (2001). Model-based clustering and data transformations for gene expression data. Bioinformatics, 17(10):977–987.

Chapter 5

flowClust: a Bioconductor package for automated gating of flow cytometry data∗

5.1 Introduction

In Chapter 4, we mentioned the lack of an automated analysis platform to parallel the high-throughput data-generation platform in flow cytometry (FCM). How to resolve this current bottleneck has become an open question among the FCM community.
Recently, a suite of R packages providing infrastructure for FCM analysis has been released through Bioconductor (Gentleman et al., 2004), an open source software development project for the analysis of genomic data. flowCore (Hahne et al., 2009), the core package among them, provides data structures and basic manipulation of FCM data. flowViz (Sarkar et al., 2008) offers visualization tools, while flowQ provides quality control and quality assessment tools for FCM data. Finally, flowUtils provides utilities for data import/export for flowCore. In spite of these low-level tools, there is still a dearth of software that helps automate FCM gating analysis on a sound theoretical foundation (Lizard, 2007). In view of the aforementioned issues, we have developed the flowClust package, based on a formal statistical clustering approach, to help resolve this bottleneck. flowClust implements a robust model-based clustering approach (Peel and McLachlan, 2000; McLachlan and Peel, 2000; Fraley and Raftery, 2002) which extends the multivariate t mixture model with the Box-Cox transformation proposed in Chapter 4. As a result of these extensions, flowClust includes options allowing for cluster-specific estimation of the Box-Cox transformation parameter and/or the degrees of freedom parameter.

∗A version of this chapter has been published: Lo, K., Hahne, F., Brinkman, R. R. and Gottardo, R. (2009). flowClust: a Bioconductor package for automated gating of flow cytometry data. BMC Bioinformatics, 10:145.

5.2 Implementation

With the robust model-based clustering approach described in Chapter 4 as the theoretical basis, we have developed flowClust, an R package for conducting automated FCM gating analyses and producing visualizations of the results. flowClust is released through Bioconductor (Gentleman et al., 2004), along with the R packages mentioned in Section 5.1.
The GNU Scientific Library (GSL) is needed for a successful installation of flowClust. We have provided a vignette (Appendix B) that comes with flowClust to explain the details of installation and the procedure for linking GSL to R, especially for Windows users. In recognition of the potential need to analyze a large number of FCM samples in parallel, considerable effort was put into code optimization and automation during package development. The source code for the entire model-fitting process via the EM algorithm is written in C for optimal utilization of system resources, and makes use of the Basic Linear Algebra Subprograms (BLAS) library, which facilitates multithreaded processing when an optimized library is provided. To keep the code efficient, vectorization is applied wherever possible in order to minimize explicit looping, one of the major sources of sub-optimal efficiency when programming in R. In addition, rather than translating the mathematical formulae into code directly, we examined the EM algorithm comprehensively and developed the code so that redundant computation of the same or unneeded quantities is avoided. On the other hand, unexpected execution halts at runtime due to computational errors would undermine the level of automation achieved. This is critical especially when a user needs to analyze a large number of samples. On this account, we have developed substantial error-handling strategies to cope with various scenarios, such as poor initialization of the EM algorithm and failure of root-finding for the transformation parameter.
Another important measure taken towards automation is the provision of good default settings for the parameters (e.g., the search interval for the root-finding problem, and the tolerance level for the convergence of EM) involved at different steps of the model-fitting process, and for the arguments (e.g., colors for representing individual clusters, and cutoffs for defining outliers) used in filtering or visualizing the clustering result. Whilst parameter tuning for individual samples may still be feasible in a small-scale study, it becomes impractical when hundreds of samples need to be processed in parallel. We have undertaken an extensive tuning process against a large number of real FCM samples, so that with the default settings applied, sensible results and visualizations are delivered within a reasonable timeframe for the majority of cases. Finally, for users' convenience, many functions and methods in flowClust have been specifically adapted to cater for various input data structures. Effort has also been made to follow the customs of FCM researchers in developing the visualization tools and the tools for constructing data filters. A formal object-oriented programming discipline, the S4 system (Chambers, 2004), has been adopted to build the flowClust package. Two key features of the S4 system, namely multiple dispatch and multiple inheritance, have been essential for defining classes and methods. For most generic functions defined or utilized in flowClust (e.g., Subset, split and plot), method dispatch relies on the multiple dispatch capabilities and is done in accordance with a signature taking more than one argument. Inheritance is employed to extend classes defined in other packages; see Section 5.3.2 for details about integration with other Bioconductor packages dedicated to FCM analysis.
In particular, for the sake of organization, multiple inheritance is exploited so that multiple classes can be extended simultaneously. The core function of the package, flowClust, implements the clustering methodology and returns an object of class flowClust. A flowClust object stores essential information related to the clustering result, which can be retrieved through various methods such as summary, Map and getEstimates. To visualize the clustering results, the plot and hist methods can be applied to produce scatterplots, contour or image plots, and histograms. To enhance communication with other Bioconductor packages designed for the cytometry community, flowClust has been built with the aim of being highly integrated with flowCore. Methods in flowClust can be applied directly to a flowFrame, the standard R implementation of a Flow Cytometry Standard (FCS) file defined in flowCore; FCS is the typical storage mode for FCM data. Another step towards integration is the overloading of basic filtering methods defined in flowCore (e.g., filter, %in%, Subset and split) in order to provide similar functionality for classes defined in flowClust.

5.3 Results and Discussion

5.3.1 Analysis of Real FCM Data

In this section, we illustrate how to use flowClust to conduct an automated gating analysis of real FCM data. For demonstration, we use the graft-versus-host disease (GvHD) data (Brinkman et al., 2007). The data are stored in FCS files and consist of measurements of four fluorescently conjugated antibodies, namely anti-CD4, anti-CD8β, anti-CD3 and anti-CD8, in addition to the forward scatter and sideward scatter parameters. One objective of the gating analysis is to look for the CD3+ CD4+ CD8β+ cell population, a distinctive feature found in GvHD-positive samples.
We have adopted a two-stage strategy (Section 4.2.4): we first cluster the data using the two scatter parameters to identify basic cell populations, and then perform clustering on the population of interest using all fluorescence parameters. At the initial stage, we extract the lymphocyte population using the forward scatter (FSC-H) and sideward scatter (SSC-H) parameters:

GvHD <- read.FCS("B07", trans=FALSE)
res1 <- flowClust(GvHD, varNames=c("FSC-H", "SSC-H"), K=1:8)

To estimate the number of clusters, we run flowClust on the data repeatedly with K=1 up to K=8 clusters in turn, and apply the Bayesian Information Criterion (BIC) (Schwarz, 1978) to guide the choice. Values of the BIC can be retrieved through the criterion method. Figure 5.1 shows that the BIC curve remains relatively flat beyond four clusters. We therefore choose the model with four clusters. Below is a summary of the corresponding clustering result:

** Experiment Information **
Experiment name: Flow Experiment
Variables used: FSC-H SSC-H
** Clustering Summary **
Number of clusters: 4
Proportions: 0.1779686 0.1622115 0.3882043 0.2716157
** Transformation Parameter **
lambda: 0.1126388
** Information Criteria **
Log likelihood: -146769.5
BIC: -293765.9
ICL: -300546.2
** Data Quality **
Number of points filtered from above: 168 (1.31%)
Number of points filtered from below: 0 (0%)
Rule of identifying outliers: 90% quantile
Number of outliers: 506 (3.93%)
Uncertainty summary:
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
9.941e-04 1.211e-02 3.512e-02 8.787e-02 1.070e-01 6.531e-01 1.680e+02

The estimate of the Box-Cox parameter λ is 0.11, implying a transformation close to a logarithmic one (λ = 0).

Figure 5.1: A plot of BIC against the number of clusters for the first-stage cluster analysis.
The two curves correspond to the settings with a common λ and cluster-specific λ, respectively, for the first-stage cluster analysis. Little difference in the BIC values between the two settings is observed. The BIC curves remain relatively flat beyond four clusters, suggesting that a model with four clusters is appropriate.

Note that, by default, flowClust selects the same transformation for all clusters. We have also enabled the option of estimating the Box-Cox parameter λ for each cluster. For instance, if a user finds that the shapes of the clusters deviate significantly from one another and opts for a different transformation for each cluster, he may write the following line of code:

res1s <- flowClust(GvHD, varNames=c("FSC-H", "SSC-H"), K=1:8, trans=2)

The trans argument acts as a switch governing how λ is handled: fixed at a predetermined value (trans=0), estimated and set common to all clusters (trans=1), or estimated for each cluster (trans=2). Likewise, an option of estimating the degrees of freedom parameter ν has been made available, either common to all clusters or specific to each of them. The nu.est argument is the corresponding switch and takes an interpretation similar to that of trans. The option of estimating ν further fine-tunes the model-fitting process, so that the fitted model can reflect the data-specific abundance of outliers. To compare models adopting different combinations of these options, one may again make use of the BIC. Figure 5.1 shows little difference between the two BIC curves corresponding to the default setting (common λ) and the setting with cluster-specific λ. In accordance with the principle of parsimony in statistics, which favors a simpler model, we opt for the default setting here.
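The BIC values reported above follow the convention in which larger is better, i.e., BIC = 2 log L − k log n, where k is the number of free parameters and n the number of events; this is our reading of the reported output, sketched below rather than flowClust's actual source code. For a mixture of K clusters in p dimensions with full covariance matrices, a common λ and fixed ν, k = (K − 1) mixing proportions + Kp mean components + Kp(p + 1)/2 covariance entries + 1 for λ; for the first-stage fit (K = 4, p = 2) this gives k = 24.

```python
import math

def n_free_params(k_clusters, p_dims):
    """Free-parameter count for a mixture with full covariance matrices,
    a common Box-Cox lambda and fixed degrees of freedom (an assumed
    parameterization; other flowClust options change this count)."""
    return ((k_clusters - 1)                            # mixing proportions
            + k_clusters * p_dims                       # mean vectors
            + k_clusters * p_dims * (p_dims + 1) // 2   # covariance matrices
            + 1)                                        # common lambda

def bic(log_likelihood, n_params, n_obs):
    """BIC in the larger-is-better convention: 2*logL - k*log(n)."""
    return 2.0 * log_likelihood - n_params * math.log(n_obs)
```

With the reported log-likelihood of −146769.5 and k = 24, the reported BIC of −293765.9 is consistent with a sample of roughly 12,800 events after filtering.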
Graphical functionalities are available for visualizing a wealth of features of the clustering results, including the cluster assignment, outliers, and the size and shape of the clusters. Figure 5.2 is a scatterplot showing the cluster assignment of points upon the removal of outliers. Outliers are shown in grey with "+" symbols. The black solid lines represent the 90% quantile region of the clusters, which defines the cluster boundaries.

Figure 5.2: A scatterplot revealing the cluster assignment in the first-stage analysis. Clusters 1, 3 and 4 correspond to the lymphocyte population, while cluster 2 is referred to as the dead cell population. The black solid lines represent the 90% quantile region of the clusters, which defines the cluster boundaries. Points outside the boundary of the cluster to which they are assigned are called outliers and marked with "+".

The summary shown above states that the default rule used to identify outliers is the 90% quantile, meaning that a point outside the 90% quantile region of the cluster to which it is assigned is called an outlier. In most applications, the default rule should be appropriate for identifying outliers. In case a user wants finer control and would like to specify a different rule, he may apply the ruleOutliers replacement method:

    ruleOutliers(res1[[4]]) <- list(level=0.95)

An excerpt of the corresponding summary is shown below:

    ** Data Quality **
    Number of points filtered from above: 168 (1.31%)
    Number of points filtered from below: 0 (0%)
    Rule of identifying outliers: 95% quantile
    Number of outliers: 133 (1.03%)

As shown in the summary, this rule is more stringent than the 90% quantile rule: 133 points (1.03%) are now called outliers, as opposed to 506 points (3.93%) under the default rule.

Clusters 1, 3 and 4 in Figure 5.2 correspond to the lymphocyte population defined with the manual gating strategy adopted in Brinkman et al. (2007). We then extract these three clusters to proceed with the second-stage analysis:

    GvHD2 <- split(GvHD, res1[[4]], population=list(lymphocyte=c(1,3,4), deadcells=2))

The subsetting method split allows us to split the data into several flowFrame's representing the different cell populations. To extract the lymphocyte population (clusters 1, 3 and 4), we may type GvHD2$lymphocyte or GvHD2[[1]], which is a flowFrame. By default, split removes outliers upon extraction. The deadcells=2 list element is included above for demonstration purposes; it is needed only if we want to extract the dead cell population (cluster 2) as well.

In the second-stage analysis, in order to fully utilize the multidimensionality of FCM data, we cluster the lymphocyte population using all four fluorescence parameters, namely anti-CD4 (FL1-H), anti-CD8β (FL2-H), anti-CD3 (FL3-H) and anti-CD8 (FL4-H), at once:

    res2 <- flowClust(GvHD2$lymphocyte, varNames=c("FL1-H", "FL2-H", "FL3-H", "FL4-H"), K=1:15)

Figure 5.3: A plot of BIC against the number of clusters for the second-stage cluster analysis. The BIC curve remains relatively flat beyond 11 clusters, suggesting that the model fit using 11 clusters is appropriate.

The BIC curve remains relatively flat beyond 11 clusters (Figure 5.3), suggesting that the model with 11 clusters provides a good fit. Figure 5.4(a) shows a contour plot superimposed on a scatterplot of CD8β against CD4 for the sub-population of CD3-stained cells, which were selected based on a threshold obtained from a negative control sample (Brinkman et al., 2007). We can easily identify the red and purple clusters at the upper right as the CD3+CD4+CD8β+ cell population. A corresponding image plot is given in Figure 5.4(b). The code used to produce all the plots shown in this chapter can be found in Appendix C.
\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf 
\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf 
\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf 
\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf\u25cf\u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf 
\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf 
\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf 
\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf\u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf\u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf 
\u25cf\u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf\u25cf \u25cf \u25cf \u25cf 300 400 500 \u25cf 0 100 200 CD8\u03b2 \u03b2\u2212Height \u25cf \u25cf \u25cf 0 100 200 300 \u25cf 400 500 \u25cf \u25cf \u25cf 600 CD4\u2212Height 400 300 0 100 200 CD8\u03b2 \u03b2\u2212Height 500 600 (b) 0 100 200 300 400 500 600 CD4\u2212Height Figure 5.4: Plots of CD8\u03b2 against CD4 for the CD3+ population. (a) A contour plot is superimposed on a scatterplot. The red and purple clusters at the upper right correspond to the CD3+ CD4+ CD8\u03b2 + cell population, indicative of the GvHD. (b) The five clusters corresponding to the CD3+ population can also be identified clearly on an image plot. 125 \fThe example above shows how an FCM analysis is conducted with the aid of flowClust. When the number of cell populations is not known in advance, and the BIC values are relatively close over a range of the possible number of clusters, the researcher may be presented with a set of possible solutions instead of a clear-cut single one. In such a case, the level of automation may be undermined as the researcher may need to select the best one based on his expertise. We acknowledge that more effort is needed to extend our proposed methodology towards a higher level of automation. Currently, we are working on an approach which successively merges the clusters in the solution as suggested by the BIC using some entropy criterion to give a more reasonable estimate of the number of clusters; see Section 6.2.3 for more details. 5.3.2 Integration with flowCore As introduced in Section 5.1, flowClust has been built in a way such that it is highly integrated with the flowCore package. 
The core function flowClust, which performs the clustering operation, may be replaced by a call to the constructor tmixFilter, which creates a filter object similar to the ones used in other gating or filtering operations found in flowCore (e.g., rectangleGate, norm2Filter, kmeansFilter). As an example, the code

    res1 <- flowClust(GvHD, varNames=c("FSC-H", "SSC-H"), K=1:8)

used in the first-stage analysis of the GvHD data may be replaced by:

    s1filter <- tmixFilter("lymphocyte", c("FSC-H", "SSC-H"), K=1:8)
    res1f <- filter(GvHD, s1filter)

The use of a dedicated tmixFilter-class object separates the task of specifying the settings (tmixFilter) from the actual filtering operation (filter), facilitating the common scenario in FCM gating analysis in which filtering with the same settings is performed on a large number of data files. The filter method returns a list object res1f whose elements are each of class tmixFilterResult, which directly extends the filterResult class defined in flowCore. Users may apply the various subsetting operations defined for the filterResult class in a similar fashion to a tmixFilterResult object. For instance,

    Subset(GvHD[,c("FSC-H", "SSC-H")], res1f[[4]])

outputs a flowFrame that is the subset of the GvHD data upon the removal of outliers, consisting of only the two selected parameters, FSC-H and SSC-H. Another example is given by the split method introduced in Section 5.3.1. We realize that occasionally a researcher may opt to combine the use of flowClust with filtering operations in flowCore to define the whole sequence of an FCM gating analysis. To enable the exchange of results between the two packages, filters created by tmixFilter may be treated like those from flowCore; users of flowCore will find that the filter operators, namely &, |, ! and %subset%, also work in the flowClust package.
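The separation of a filter's settings (tmixFilter) from its application (filter), and the subset composition offered by %subset%, reflect a general design pattern rather than anything R-specific. The hypothetical Python sketch below (not part of flowClust or flowCore; all names are invented for illustration) models a gate as a reusable settings object and composes two gates with subset semantics, i.e., the inner gate is applied first and the outer operation runs within it:

```python
from dataclasses import dataclass
from typing import Callable, List

Event = List[float]                    # one event: a vector of channel intensities
Predicate = Callable[[Event], bool]

@dataclass(frozen=True)
class IntervalGate:
    """Settings object: an interval gate on one channel (hypothetical sketch)."""
    channel: int
    low: float
    high: float

    def predicate(self) -> Predicate:
        return lambda e: self.low <= e[self.channel] <= self.high

def apply_filter(events: List[Event], gate: IntervalGate) -> List[Event]:
    """The filtering operation, kept separate from the gate's settings so the
    same gate can be applied to many samples."""
    keep = gate.predicate()
    return [e for e in events if keep(e)]

def subset(outer: IntervalGate, inner: IntervalGate):
    """Subset composition: apply `inner` first, then `outer` within the result,
    mimicking the semantics of outer %subset% inner."""
    def run(events: List[Event]) -> List[Event]:
        return apply_filter(apply_filter(events, inner), outer)
    return run
```

Keeping the settings immutable and the operation stateless is what makes batch gating of many data files with identical settings straightforward.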
For instance, suppose the researcher is interested in clustering the CD3+ cell population, which he defines by constructing an interval gate with the lower end-point at 280 on the CD3 parameter. He may use the following code to perform the analysis:

    rectGate <- rectangleGate(filterId="CD3+", "FL3-H"=c(280, Inf))
    s2filter <- tmixFilter("s2filter", c("FL1-H", "FL2-H", "FL3-H", "FL4-H"), K=5)
    res2f <- filter(GvHD2$lymphocyte, s2filter %subset% rectGate)

The constructors rectangleGate and tmixFilter create two filter objects storing the settings of the interval gate and of flowClust, respectively. When the last line of code is run, the interval gate is first applied to the data; flowClust is then performed on the subset of the data contained by the interval gate.

5.4 Conclusion

flowClust is an R package dedicated to FCM gating analysis, addressing the increasing demand for software capable of processing and analyzing voluminous FCM data efficiently via objective, reproducible and automated means. The package implements a statistical clustering approach using multivariate t mixture models with the Box-Cox transformation introduced in Chapter 4, and provides tools to summarize and visualize the results of an analysis. The package contributes to the cytometry community by offering an efficient, automated analysis platform which facilitates active, ongoing technological advancement.

Bibliography

Brinkman, R. R., Gasparetto, M., Lee, S. J. J., Ribickas, A., Perkins, J., Janssen, W., Smiley, R., and Smith, C. (2007). High-content flow cytometry and temporal data analysis for defining a cellular signature of Graft-versus-Host disease. Biology of Blood and Marrow Transplantation, 13(6):691–700.

Chambers, J. M. (2004). Programming with Data: A Guide to the S Language. Springer, New York, NY.

Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation.
Journal of the American Statistical Association, 97(458):611–631.

Gentleman, R. C., Carey, V. J., Bates, D. M., Bolstad, B., Dettling, M., Dudoit, S., Ellis, B., Gautier, L., Ge, Y., Gentry, J., Hornik, K., Hothorn, T., Huber, W., Iacus, S., Irizarry, R., Leisch, F., Li, C., Maechler, M., Rossini, A. J., Sawitzki, G., Smith, C., Smyth, G., Tierney, L., Yang, J. Y. H., and Zhang, J. (2004). Bioconductor: open software development for computational biology and bioinformatics. Genome Biology, 5(10):R80.

Hahne, F., Le Meur, N., Brinkman, R. R., Ellis, B., Haaland, P., Sarkar, D., Spidlen, J., Strain, E., and Gentleman, R. (2009). flowCore: a Bioconductor package for high throughput flow cytometry. BMC Bioinformatics, 10:106.

Lizard, G. (2007). Flow cytometry analyses and bioinformatics: interest in new softwares to optimize novel technologies and to favor the emergence of innovative concepts in cell research. Cytometry Part A, 71A:646–647.

McLachlan, G. and Peel, D. (2000). Finite Mixture Models. Wiley-Interscience, New York.

Peel, D. and McLachlan, G. J. (2000). Robust mixture modelling using the t distribution. Statistics and Computing, 10(4):339–348.

Sarkar, D., Le Meur, N., and Gentleman, R. (2008). Using flowViz to visualize flow cytometry data. Bioinformatics, 24(6):878–879.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6:461–464.

Chapter 6
Conclusion and Future Directions

6.1 Summary and Discussion

The intent of this thesis is to develop statistical methodology based on flexible forms of finite mixture models to address issues arising from high-throughput biological data sources. In Chapter 2, we introduced an empirical Bayes approach to detect differentially expressed genes from microarray data, extending the hierarchical Gamma-Gamma and Lognormal-Normal models (Newton et al., 2001; Kendziorski et al., 2003).
The extension relaxes the unreasonable assumption of a constant coefficient of variation for all genes, and has been shown to remarkably improve the original framework. Next, in Chapter 3, we proposed a mixture modeling framework based on a new class of skewed distributions: the multivariate t distribution with the Box-Cox transformation. We emphasize a concurrent treatment of data transformation and outlier identification, instead of tackling these two issues of mutual impact in a sequential manner. The approach is robust to both asymmetric components and outliers, and remains highly competitive in comparisons with the computationally much more complicated approach using the skew t mixture model. In Chapter 4, we reframed the gating analysis in flow cytometry (FCM) as a clustering problem, and applied the approach proposed in Chapter 3 to automate the identification of cell populations. The results show that our approach is well adapted to FCM data, in which a high abundance of outliers is often observed. Moreover, our approach has the appeal of being computationally competitive, which is crucial for FCM analysis considering the high dimensionality of the data and the large number of samples usually involved. In recognition of the concern of the FCM community, which has been seeking an automated analysis platform with a solid theoretical foundation, we have developed an open-source software package called flowClust, details of which are delineated in Chapter 5. While flowClust is publicly released as an R package, its core part, which implements the model-fitting process, is coded in C to ensure computational efficiency at the user's end. To facilitate convenience of use, specific efforts have been made to adapt to the customs of FCM researchers, such as developing tools for visualization and for constructing data filters, or "gates".
flowClust has been built in such a way that it directly accepts data in FCM-dedicated formats, and is well integrated with other Bioconductor FCM packages. The package is under active maintenance for further enrichment of the modeling aspect, feature enhancement for the presentation and dissemination of analysis results, and integration with existing and upcoming FCM analysis tools.

From a monochromatic technology dating back to the late 1960s, FCM has evolved into a technology that can simultaneously measure nearly 20 parameters for each cell (Perfetto et al., 2004). To date, the LSR II flow cytometer from BD Biosciences (San Jose, California) can detect up to 18 colors (corresponding to biomarkers such as antigens) in one experiment. Accompanying this technological advancement, the impact of FCM on a wealth of fields of biology and medicine has undergone tremendous growth in the last few years; see, for example, Valet and Tárnok (2003), Valet (2005), Herrera et al. (2007) and Lizard (2007). We believe that FCM in the next few years will reach the level of prominence that microarrays have attained in the last decade. Along with the increase in dimensionality of FCM data, it becomes apparent that the traditional way of gating analysis, relying on expertise in defining a gating sequence and positioning the gates, is inefficient. How to resolve the bottleneck of lacking an analysis platform to parallel such a high-throughput data generation platform has become an open question in the FCM community. A pleasant trend has been observed over the past one or two years, as more research on statistical methodology dedicated to FCM comes to light (e.g., Lugli et al., 2007; Boedigheimer and Ferbas, 2008; Chan et al., 2008; Lo et al., 2008; Pyne et al., 2009). This accelerating trend can also be observed at the regular meetings of the Flow Informatics and Computational Cytometry Society (FICCS) and other conferences.
Since its publication in April 2008, our article (corresponding to Chapter 4 of this thesis) has been cited 18 times to date, according to searches of Web of Science and Google Scholar. Meanwhile, a steady overall increase in the download statistics for flowClust has been observed in the Package Downloads Report at Bioconductor. This evidence provides a positive sign that our proposed methodology has the potential to become a mainstream automated gating approach in an FCM analysis pipeline.

6.2 Future Directions

In the remainder of this chapter, we briefly describe a few possible directions for future research, and report preliminary results.

6.2.1 Robustification of the Empirical Bayes Model for Differential Gene Expression

The extension we proposed in Chapter 2 allows for a gene-specific coefficient of variation in the hierarchical empirical Bayes models originating from Newton et al. (2001) and Kendziorski et al. (2003) for microarray data. This enhanced flexibility does not, however, constitute an effective mechanism for accommodating outliers. An outlying data value could occur because of scratches or dust on the surface, imperfections in the glass slide, or imperfections in the array production. As a possible way to robustify the empirical Bayes approach, we may consider the eLNN formulation and replace the lognormal sampling distribution with a log t distribution. In other words, we build a model with the following hierarchical representation:

\[
\begin{aligned}
\log x_{gr} &= \mu_{gx} + \frac{\epsilon_{gxr}}{\sqrt{w_{gr}}} \\
\mu_{gx} \mid \tau_{gx} &\sim N(m, k\tau_{gx}^{-1}) \\
\epsilon_{gxr} \mid \tau_{gx} &\sim N(0, \tau_{gx}^{-1}) \\
\tau_{gx} &\sim \operatorname{Gamma}(\alpha, \beta) \\
w_{gr} &\sim \operatorname{Gamma}\!\left(\frac{\nu_r}{2}, \frac{\nu_r}{2}\right),
\end{aligned}
\tag{6.1}
\]

where w_gr and ε_gxr are independent, and therefore ε_gxr/√w_gr follows a central t distribution with scale parameter τ_gx^{-1} and degrees of freedom ν_r. All other notations in (6.1) follow the convention used in Chapter 2, and the model specification for y_gr can be derived accordingly. If we fix w_gr = 1 for all g and r, the aforementioned model reduces to the eLNN model introduced in Chapter 2.

The joint prior on μ_gx, τ_gx and w_gr is not conjugate to the sampling distribution, and the marginal density cannot be derived in closed form. However, the marginal density is analytically available conditional on w_gr. As a result, it is possible to proceed in a way similar to what we described in Section 2.2.3 for the eGG model, in which a closed-form marginal density is available conditional on the gene-specific shape parameter. We may accordingly take the log prior density of w_g = (w_g1, w_g2, ..., w_gR)' as the penalty term, and consider the modified complete-data log-likelihood

\[
\tilde{\ell}_c(\Phi \mid x, y, z) = \sum_g \Big\{ z_g \log p_A(x_g, y_g \mid \psi, w_{gr}) + (1 - z_g) \log p_0(x_g, y_g \mid \psi, w_{gr}) + (1 + z_g) \log(p) + (2 - z_g) \log(1 - p) + \sum_r \log \pi(w_{gr} \mid \nu_r) \Big\},
\tag{6.2}
\]

where ψ = (m, k, α, β)' and Φ = (w_1, w_2, ..., w_G, ψ', p)'. Parameter estimation may then be handled by the EM-type algorithm described in Section 2.2.3, in which the M-step is split into two constrained maximization steps. This robust approach provides a favorable alternative to the fully Bayesian approach BRIDGE (Gottardo et al., 2006), which takes a similar model specification but relies on MCMC techniques and is computationally intensive.

6.2.2 Development of an Automated FCM Analysis Pipeline

The analysis of FCM data usually involves two major components: (1) identifying homogeneous cell populations (commonly referred to as gating), each of which displays a particular biological function, and (2) finding correlations between identified cell populations and clinical diagnosis.
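Component (2) can be previewed with a toy sketch. The snippet below (Python, illustrative only; all feature values are synthetic, and a deliberately simple nearest-centroid rule stands in for a full classifier) turns one hypothetical per-subject gating feature into a leave-one-out classification:

```python
import numpy as np

# Toy illustration of component (2): a per-subject feature extracted from
# gating (here, the separation between two cluster means) is correlated
# with diagnosis via a simple classifier. All numbers are synthetic.
rng = np.random.default_rng(0)
sep_pos = rng.normal(300, 40, 17)   # hypothetical disease-positive subjects
sep_neg = rng.normal(60, 30, 15)    # hypothetical controls
X = np.concatenate([sep_pos, sep_neg])
y = np.array([1] * 17 + [0] * 15)

# Leave-one-out cross-validation with a nearest-centroid rule
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c1 = X[mask & (y == 1)].mean()   # centroid of held-in positives
    c0 = X[mask & (y == 0)].mean()   # centroid of held-in controls
    pred = 1 if abs(X[i] - c1) < abs(X[i] - c0) else 0
    correct += int(pred == y[i])
accuracy = correct / len(y)
```

With feature distributions as well separated as these, the held-out accuracy is essentially perfect; the point is only the shape of the workflow: gate, extract one number per subject, then classify.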
We presented in Chapter 4 the statistical methodology based on robust model-based clustering to automate the gating process. An ensuing research focus is to devise a methodology that extracts features from the results of the automated gating analysis to facilitate disease diagnosis, and identifies biomarkers that correlate with the disease. Essentially, we would like to develop a pipeline, with minimal manual intervention, for the different stages of FCM data analysis: identification of cell populations, extraction of useful features (biomarkers) correlated with a target disease, and classification of samples. Figure 6.1 shows the overall flow of the proposed data analysis pipeline (Bashashati et al., 2009).

[Figure 6.1: The overall flow of the proposed automated FCM analysis pipeline.]

As a motivating example of FCM analysis that fits into such a pipeline, we present here our preliminary study on paroxysmal nocturnal hemoglobinuria (PNH), a disease of red blood cell breakdown with release of hemoglobin into the urine. The objective of the study is to build a classification rule that separates subjects according to their disease status (positive or negative). A series of FCM samples was obtained from 17 PNH patients and 15 controls. A complete set of samples for one subject includes two red blood cell samples and three white blood cell samples. Each sample was stained for a distinct antigenic marker. Figure 6.2 shows two histograms from the red blood cell samples of a PNH patient and a control respectively. A distinctive subpopulation of low intensities is found in the positive sample; this serves as the discriminating information for subject classification.

To quantify the discriminating information, we applied the methodology described in Chapter 4 to cluster each red blood cell sample into two subpopulations.
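As an illustrative sketch of this clustering step, the following Python snippet fits a two-component Gaussian mixture by EM to a simulated univariate sample containing a low-intensity subpopulation. It is a simplified stand-in for the multivariate t mixture with Box-Cox transformation of Chapter 4, and the data are synthetic, not the PNH samples:

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Fit a two-component univariate Gaussian mixture by EM.

    Simplified stand-in for the robust t-mixture model of Chapter 4."""
    mu = np.quantile(x, [0.25, 0.75])          # crude initialization
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = (pi / (sd * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of proportions, means and sds
        nk = z.sum(axis=0)
        pi = nk / len(x)
        mu = (z * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((z * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sd, pi

# Simulated "positive" sample: a low-intensity subpopulation plus a main one
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(80, 15, 300), rng.normal(400, 60, 700)])
mu, sd, pi = em_two_gaussians(x)
separation = abs(mu[1] - mu[0])   # candidate feature for classification
```

The single number `separation` is the kind of feature carried forward to the classification stage.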
The separation between the two cluster means, which is expected to be large for a positive sample, provides the basis for discriminating the two groups of subjects. We proceeded in a similar manner for each cell type, namely granulocytes, lymphocytes and monocytes, identified in the white blood cell samples. In the next stage of the analysis, subjects were represented by the features of interest (i.e., the separations between the two cluster means), or a subset of them, extracted from the clustering stage. We built classifiers using support vector machines (SVM) (Schölkopf and Smola, 2002) with a linear kernel. Leave-one-out cross-validation was used to assess the accuracy of the classifiers built. Classifiers with accuracy above 97% were found, with a few features consistently selected among them.

[Figure 6.2: Clustering of red blood cell samples from the PNH data. The graphs shown are the histograms of CD55 from (a) a positive sample, and (b) a control sample. The presence of the distinctive subpopulation of low intensities in the positive sample is also expected to be observed in some or all of the three basic cell types from positive white blood cell samples. A clinical diagnosis would determine a subject to be PNH positive if the distinctive subpopulation is observed in at least two cell types.]

The preliminary study on PNH presented a simplified scenario of typical FCM analysis. Very often, we do not know the number of cell populations in advance, and multiple colors are used in each sample. In such a case, a better example is given by our current study, in which we attempt to devise an analysis pipeline to discriminate subtypes of lymphoma and identify biomarkers that contribute to such a classification (Bashashati et al., 2009). Data in this study were generated at the British Columbia Cancer Agency in 2002–2007. FCM samples were obtained from biopsies of lymph nodes from 438 lymphoma patients of different subtypes. Samples were divided into seven tubes, each of which was stained with a distinct set of three fluorescently conjugated antibodies.

To proceed, we first apply the robust model-based clustering methodology to identify cell populations, and use the BIC to determine the number of cell populations in each sample. Statistics such as the proportion, mean and variance of each cluster are extracted, resulting in a long list of candidate features with discriminating information. The majority of features are expected to be uninformative, and in order to discard them we apply the minimum redundancy maximum relevance (mRMR) feature selection technique (Peng et al., 2005; Ding and Peng, 2005). The mRMR technique aims at selecting features that are relevant to the class label (i.e., subtype of lymphoma) whilst minimizing the redundancy among the selected features; the Euclidean distance or, more effectively, the Pearson correlation coefficient between features may be taken as the redundancy measure. Based on the selected features, we build a classifier using SVM or random forest (Breiman, 2001) to classify the samples, and make predictions about future incoming samples. In our current attempt, we randomly split the samples into training and testing sets.
Samples in the training set are used to select informative features and build the classifiers, while samples in the testing set are used for performance evaluation. To date, the devised analysis pipeline is confined to binary classification, i.e., discrimination between two subtypes of lymphoma. Our preliminary results show that 80%–96% accuracy has been achieved in the few binary classifications performed so far. The features (in terms of markers) identified as informative are in line with previous biological findings (Dogan, 2005), providing promising evidence that the proposed analysis pipeline can extract biologically meaningful features from FCM data. Subsequent work will involve refining the various components of the pipeline in order to achieve higher discriminating accuracy, and extending the methodology to facilitate multi-class discrimination.

6.2.3 Combining Mixture Components in Clustering

In cluster analysis, very often the number of clusters is unknown and requires estimation. There are several approaches to selecting the number of components in model-based clustering, such as resampling, cross-validation, and various information criteria; see McLachlan and Peel (2000) for a review. In this thesis, our approach to the problem is based on the BIC. Model selection based on the BIC has been shown not to underestimate the number of clusters asymptotically (Leroux, 1992). Moreover, the BIC is cheap to compute once maximum likelihood estimation of the model parameters has been completed, an advantage over other approaches, especially in the context of FCM where datasets tend to be very large. Nevertheless, if the correct model is not in the family of models being considered, more than one mixture component may be needed to provide a reasonable representation of an individual cluster of data.
In such a case, the BIC tends to select an excessive number of components relative to the correct number of clusters (Biernacki and Govaert, 1997; Biernacki et al., 2000). Biernacki et al. (2000) attempted to rectify this problem by proposing an alternative to the BIC based on the integrated completed likelihood (ICL). The ICL criterion turns out to be equivalent to the BIC penalized by the entropy of the corresponding clustering:

\[
\mathrm{ICL}_G = \mathrm{BIC}_G - 2\,\mathrm{ENT}_G,
\tag{6.3}
\]

where

\[
\mathrm{ENT}_G = -\sum_{i=1}^{n} \sum_{g=1}^{G} \hat{z}_{ig} \log \hat{z}_{ig}
\tag{6.4}
\]

is the entropy of the corresponding G-component mixture model, and ẑ_ig is the conditional probability that the i-th observation arises from the g-th component. The entropy ENT_G measures the relevance of the G components of a mixture model to the partition of the data. Conceptually, it increases with the degree of overlap between the components. In consequence, the ICL favors models with well-separated mixture components. In practice, however, the ICL tends to underestimate the correct number of clusters (Murphy and Martin, 2003). We also observed this tendency when we applied the ICL to FCM data.

In a current attempt, we propose an approach to selecting the number of clusters that combines the ideas underlying the BIC and the ICL (Baudry et al., 2008). The BIC is used to select the number of components in the mixture model in order to provide a good representation of the data. We then define a sequence of possible solutions by hierarchically merging the components identified by the BIC. The decision about which components to merge is based on the same entropy criterion, given by Eq. (6.4), that the ICL uses. In this way, we propose a way of interpreting the mixture model by identifying each set of merged components as one cluster.

In the following, we describe the hierarchical merging scheme in detail. At each stage, we choose two mixture components to merge so as to minimize the entropy of the resulting clustering.
If components k and k' from a G-component solution are merged, the conditional probability ẑ_ig remains the same for every g except k and k'. The new cluster k ∪ k' then has conditional probability

\[
\hat{z}_{i,k \cup k'} = \hat{z}_{ik} + \hat{z}_{ik'}.
\tag{6.5}
\]

The entropy of the resulting (G-1)-cluster solution is

\[
-\sum_{i=1}^{n} \Bigg\{ \sum_{g \neq k, k'} \hat{z}_{ig} \log \hat{z}_{ig} + \hat{z}_{i,k \cup k'} \log \hat{z}_{i,k \cup k'} \Bigg\}.
\tag{6.6}
\]

The two components k and k' to be merged are those minimizing the criterion

\[
\sum_{i=1}^{n} \big\{ \hat{z}_{ik} \log \hat{z}_{ik} + \hat{z}_{ik'} \log \hat{z}_{ik'} - \hat{z}_{i,k \cup k'} \log \hat{z}_{i,k \cup k'} \big\}
\]

among all possible pairs of components. Components in the model selected by the BIC are successively merged by repeating the aforementioned procedure until the data are reduced to one single cluster. The proposed approach yields one solution for each value of g = 1, 2, ..., G, and the user can choose between them on substantive grounds. If a more automated procedure for choosing a single solution is desired, one possibility is to select, among the possible solutions, the one with the number of clusters selected by the ICL. An alternative is to detect an "elbow" in the entropy curve, i.e., the graph of entropy against the number of clusters. Intuitively, when mixture components overlap significantly, the corresponding entropy will be large. As overlapping components are combined in subsequent stages of the hierarchical merging scheme, the entropy decreases. When only well-separated components are left in the clustering solution, further merging incurs little reduction in entropy. This idea has been formalized by Finak et al. (2009), in which a changepoint analysis is performed. Setting the changepoint at g = 2, 3, ..., G-1 in turn, a series of two-segment piecewise linear regression models is used to fit the entropy curve.
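One stage of the merging scheme itself can be sketched in a few lines. The following Python function (an illustrative sketch, not the flowClust implementation) scans all pairs of columns of an n-by-G matrix of conditional probabilities, merges the pair minimizing the entropy criterion, and returns the (G-1)-column matrix:

```python
import numpy as np

def merge_best_pair(z):
    """One stage of the hierarchical merging scheme of Eqs. (6.5)-(6.6).

    z is an n-by-G matrix of conditional probabilities z_hat_ig; returns
    the merged n-by-(G-1) matrix and the merged pair (k, k')."""
    n, G = z.shape

    def plogp(p):
        # p * log(p), with the convention 0 * log(0) = 0
        return np.where(p > 0, p * np.log(np.clip(p, 1e-300, None)), 0.0)

    best_pair, best_val = None, np.inf
    for k in range(G):
        for kp in range(k + 1, G):
            # entropy change incurred by merging components k and k'
            crit = (plogp(z[:, k]) + plogp(z[:, kp])
                    - plogp(z[:, k] + z[:, kp])).sum()
            if crit < best_val:
                best_pair, best_val = (k, kp), crit
    k, kp = best_pair
    merged = np.delete(z, kp, axis=1)
    merged[:, k] = z[:, k] + z[:, kp]
    return merged, best_pair

# Components 1 and 2 overlap heavily; component 0 is well separated
z = np.array([[0.96, 0.02, 0.02],
              [0.02, 0.50, 0.48],
              [0.01, 0.45, 0.54],
              [0.97, 0.02, 0.01]])
z2, pair = merge_best_pair(z)   # merges the overlapping pair
```

Repeating the call G-1 times yields the nested sequence of solutions whose entropies form the curve just described.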
The optimal location g̃ of the changepoint is determined by the regression model with the minimum residual sum of squares. Finally, the presence or absence of such a changepoint may be determined by comparing the two-segment piecewise regression model with a simple linear regression model via the BIC or an ANOVA. If the result is in favor of the two-segment piecewise regression model, the proposed hierarchical merging scheme selects the g̃-cluster solution as optimal.

Bibliography

Bashashati, A., Lo, K., Gottardo, R., Gascoyne, R. D., Weng, A., and Brinkman, R. (2009). A pipeline for automated analysis of flow cytometry data: Preliminary results on lymphoma sub-type diagnosis. Conference Proceedings of the IEEE Engineering in Medicine and Biology Society, (In press).

Baudry, J.-P., Raftery, A. E., Celeux, G., Lo, K., and Gottardo, R. (2008). Combining mixture components for clustering. Submitted to Journal of Computational and Graphical Statistics.

Biernacki, C., Celeux, G., and Govaert, G. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(7):719–725.

Biernacki, C. and Govaert, G. (1997). Using the classification likelihood to choose the number of clusters. Computing Science and Statistics, 29:451–457.

Boedigheimer, M. J. and Ferbas, J. (2008). Mixture modeling approach to flow cytometry data. Cytometry Part A, 73A(5):421–429.

Breiman, L. (2001). Random forests. Machine Learning, 45:5–32.

Chan, C., Feng, F., Ottinger, J., Foster, D., West, M., and Kepler, T. B. (2008). Statistical mixture modeling for cell subtype identification in flow cytometry. Cytometry Part A, 73A:693–701.

Ding, C. and Peng, H. (2005). Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, 3(2):185–205.

Dogan, A. (2005).
Modern histological classification of low grade B-cell lymphomas. Best Practice and Research: Clinical Haematology, 18(1):11–26.

Finak, G., Bashashati, A., Brinkman, R., and Gottardo, R. (2009). Merging mixture components for cell population identification in flow cytometry. Submitted to Advances in Bioinformatics.

Gottardo, R., Raftery, A. E., Yeung, K. Y., and Bumgarner, R. E. (2006). Bayesian robust inference for differential gene expression in microarrays with multiple samples. Biometrics, 62(1):10–18.

Herrera, G., Diaz, L., Martinez-Romero, A., Gomes, A., Villamon, E., Callaghan, R. C., and O'Connor, J. E. (2007). Cytomics: a multiparametric, dynamic approach to cell research. Toxicology In Vitro, 21(2):176–182.

Kendziorski, C., Newton, M., Lan, H., and Gould, M. N. (2003). On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Statistics in Medicine, 22:3899–3914.

Leroux, M. (1992). Consistent estimation of a mixing distribution. Annals of Statistics, 20:1350–1360.

Lizard, G. (2007). Flow cytometry analyses and bioinformatics: interest in new softwares to optimize novel technologies and to favor the emergence of innovative concepts in cell research. Cytometry Part A, 71A:646–647.

Lo, K., Brinkman, R. R., and Gottardo, R. (2008). Automated gating of flow cytometry data via robust model-based clustering. Cytometry Part A, 73A(4):321–332.

Lugli, E., Pinti, M., Nasi, M., Troiano, L., Ferraresi, R., Mussi, C., Salvioli, G., Patsekin, V., Robinson, J. P., Durante, C., Cocchi, M., and Cossarizza, A. (2007). Subject classification obtained by cluster analysis and principal component analysis applied to flow cytometric data. Cytometry Part A, 71A:334–344.

McLachlan, G. and Peel, D. (2000). Finite Mixture Models. Wiley-Interscience, New York.

Murphy, T. B. and Martin, D. (2003). Mixtures of distance-based models for ranking data.
Computational Statistics and Data Analysis, 41(3–4):645–655.

Newton, M. C., Kendziorski, C. M., Richmond, C. S., Blattner, F. R., and Tsui, K. W. (2001). On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. Journal of Computational Biology, 8:37–52.

Peng, H., Long, F., and Ding, C. (2005). Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8):1226–1238.

Perfetto, S. P., Chattopadhyay, P. K., and Roederer, M. (2004). Seventeen-colour flow cytometry: unravelling the immune system. Nature Reviews Immunology, 4(8):648–655.

Pyne, S., Hu, X., Wang, K., Rossin, E., Lin, T.-I., Maier, L. M., Baecher-Allan, C., McLachlan, G. J., Tamayo, P., Hafler, D. A., De Jager, P. L., and Mesirov, J. P. (2009). Automated high-dimensional flow cytometric data analysis. Proceedings of the National Academy of Sciences of the United States of America, 106(21):8519–8524.

Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. The MIT Press, Cambridge, Massachusetts.

Valet, G. (2005). Human cytome project, cytomics, and systems biology: the incentive for new horizons in cytometry. Cytometry Part A, 64A:1–2.

Valet, G. K. and Tárnok, A. (2003). Cytomics in predictive medicine. Cytometry Part B: Clinical Cytometry, 53B(1):1–3.

Appendix A
Additional Material for Chapter 2

A.1 Marginal Densities of Measured Intensities

Under the extended GG model, the joint marginal densities of measured intensities of a given gene g are developed without integrating a_g away, i.e., they are conditional on a_g. Denote by G(x; a, b) the Gamma density function with shape a and rate b.
The explicit forms of the conditional marginal densities are given by

\[
\begin{aligned}
p_A(x_g, y_g \mid \psi, a_g) ={} & \int_0^\infty \prod_{r=1}^{R} G(x_{gr}; a_g, \theta_{gx})\, G(\theta_{gx}; \psi)\, d\theta_{gx} \times \int_0^\infty \prod_{r=1}^{R} G(y_{gr}; a_g, \theta_{gy})\, G(\theta_{gy}; \psi)\, d\theta_{gy} \\
={} & \left\{ \frac{\Gamma(R a_g + a_0)}{\Gamma^R(a_g)\, \Gamma(a_0)} \right\}^2 \frac{\nu^{2a_0} \left( \prod_r x_{gr} y_{gr} \right)^{a_g - 1}}{\left[ \left( \nu + \sum_r x_{gr} \right)\left( \nu + \sum_r y_{gr} \right) \right]^{R a_g + a_0}}
\end{aligned}
\tag{A.1}
\]

and

\[
\begin{aligned}
p_0(x_g, y_g \mid \psi, a_g) ={} & \int_0^\infty \prod_{r=1}^{R} G(x_{gr}; a_g, \theta_g) \prod_{r=1}^{R} G(y_{gr}; a_g, \theta_g) \cdot G(\theta_g; \psi)\, d\theta_g \\
={} & \frac{\nu^{a_0}\, \Gamma(2R a_g + a_0)}{\Gamma^{2R}(a_g)\, \Gamma(a_0)} \frac{\left( \prod_r x_{gr} y_{gr} \right)^{a_g - 1}}{\left( \nu + \sum_r x_{gr} + \sum_r y_{gr} \right)^{2R a_g + a_0}},
\end{aligned}
\tag{A.2}
\]

where ψ = (a_0, ν)'.

The joint marginal densities of measured intensities under the extended LNN model are developed in a similar fashion, this time by integrating μ_g and τ_g away. Denote by LN(x; a, b) the Lognormal density function with mean and variance parameters a and b respectively, and by N(x; a, b) the normal density function.
The marginal densities are developed as follows:

\[
\begin{aligned}
p_A(x_g, y_g \mid \psi) ={} & \int_0^\infty \!\!\int_{-\infty}^{\infty} \prod_r \mathrm{LN}(x_{gr}; \mu_{gx}, \tau_{gx}^{-1})\, N(\mu_{gx}; m, k\tau_{gx}^{-1})\, G(\tau_{gx}; \alpha, \beta)\, d\mu_{gx}\, d\tau_{gx} \\
& \times \int_0^\infty \!\!\int_{-\infty}^{\infty} \prod_r \mathrm{LN}(y_{gr}; \mu_{gy}, \tau_{gy}^{-1})\, N(\mu_{gy}; m, k\tau_{gy}^{-1})\, G(\tau_{gy}; \alpha, \beta)\, d\mu_{gy}\, d\tau_{gy} \\
={} & \frac{\beta^{2\alpha}\, \Gamma^2\!\left( \frac{R}{2} + \alpha \right)}{\prod_r x_{gr} y_{gr}\, (2\pi)^R (kR + 1)\, \Gamma^2(\alpha)} \\
& \times \left\{ \beta + \frac{1}{2k}\left[ -\frac{\left( k \sum_r \log x_{gr} + m \right)^2}{kR + 1} + k \sum_r (\log x_{gr})^2 + m^2 \right] \right\}^{-\left( \frac{R}{2} + \alpha \right)} \\
& \times \left\{ \beta + \frac{1}{2k}\left[ -\frac{\left( k \sum_r \log y_{gr} + m \right)^2}{kR + 1} + k \sum_r (\log y_{gr})^2 + m^2 \right] \right\}^{-\left( \frac{R}{2} + \alpha \right)}
\end{aligned}
\tag{A.3}
\]

and

\[
\begin{aligned}
p_0(x_g, y_g \mid \psi) ={} & \int_0^\infty \!\!\int_{-\infty}^{\infty} \prod_r \mathrm{LN}(x_{gr}; \mu_g, \tau_g^{-1}) \prod_r \mathrm{LN}(y_{gr}; \mu_g, \tau_g^{-1})\, N(\mu_g; m, k\tau_g^{-1})\, G(\tau_g; \alpha, \beta)\, d\mu_g\, d\tau_g \\
={} & \frac{\beta^{\alpha}\, \Gamma(R + \alpha)}{\prod_r x_{gr} y_{gr}\, (2\pi)^R (2kR + 1)^{1/2}\, \Gamma(\alpha)} \\
& \times \left\{ \beta + \frac{1}{2k}\left[ -\frac{\left( k \left( \sum_r \log x_{gr} + \sum_r \log y_{gr} \right) + m \right)^2}{2kR + 1} + k \left( \sum_r (\log x_{gr})^2 + \sum_r (\log y_{gr})^2 \right) + m^2 \right] \right\}^{-(R + \alpha)},
\end{aligned}
\tag{A.4}
\]

where ψ = (m, k, α, β)'.

A.2 Estimation of η and ξ for the Prior of a_g

As mentioned in Section 2.2.3, to make use of the modified complete-data log-likelihood given by Eq. (2.4) in the extended GG model, we need to provide estimates of the hyperparameters for the Lognormal(η, ξ) prior of a_g beforehand. Here we propose to use the method of moments (MM) to estimate η and ξ. First we would like to come up with simple estimates of the a_g's. On noting that the coefficient of variation is given by 1/√a_g for each gene, a robust empirical estimate of a_g may be provided by

\[
\tilde{a}_g = \frac{\operatorname{med}(x_g, y_g)^2}{\operatorname{mad}(x_g, y_g)^2},
\]

where med and mad stand for median and median absolute deviation respectively.
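As a quick numerical sanity check of this estimator, the Python sketch below applies it to simulated Gamma data; the consistency factor 1.4826, which calibrates the mad to the standard deviation at the normal, is an assumption matching R's default mad:

```python
import numpy as np

# Check that a_tilde = med^2 / mad^2 roughly recovers a Gamma shape a,
# using the fact that the coefficient of variation of a Gamma(a, rate)
# variable is 1 / sqrt(a). Simulated data, not microarray intensities.
rng = np.random.default_rng(0)
a_true = 9.0
x = rng.gamma(shape=a_true, scale=1.0, size=50_000)
med = np.median(x)
mad = 1.4826 * np.median(np.abs(x - med))   # normal-consistent mad
a_tilde = med ** 2 / mad ** 2               # lands near a_true
```

The estimate is only crude (the Gamma is skewed, so med/mad does not exactly equal mean/sd), but that is all that is required of it before the EM refinement.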
Note that robust counterparts to the mean and standard deviation are adopted since there are usually relatively few replicates. With these crude estimates of the a_g's, we can then obtain estimates of η and ξ:

\[
\hat{\eta} = \operatorname{med}(\{\log \tilde{a}_g\}) \quad \text{and} \quad \hat{\xi} = \operatorname{mad}(\{\log \tilde{a}_g\})^2.
\]

Again, a robust version of MM is proposed here.

A.3 Initialization of the EM Algorithm

We need to initialize the parameters to be estimated before the EM-type algorithm described in Section 2.2.3 can be applied. Similar to the estimation of η and ξ above, robust MM estimates of (a, a_0, ν) are obtained for the extended GG model. A similar measure is taken for (m, α, β) if the data are modeled under the extended LNN framework, while k is empirically chosen to be 30. After the crude estimation step, updated estimates of the aforementioned parameters are obtained by maximizing the corresponding marginal null log-likelihood under either model formulation. This step is taken in order to bring the initial estimates closer to the estimates returned by the EM algorithm. Using these initial estimates, together with p set to 0.5 (the most likely value under the Beta(2, 2) prior), initial estimates of the z_g's are obtained, which are then used to update the parameter estimates in the EM algorithm.

Appendix B
Vignette of the flowClust Package

B.1 Licensing

Under the Artistic License, you are free to use and redistribute this software. However, we ask you to cite the following papers if you use this software for publication.

1. Lo, K., Brinkman, R. R., and Gottardo, R. (2008). Automated gating of flow cytometry data via robust model-based clustering. Cytometry Part A, 73A(4):321–332.

2. Lo, K., Hahne, F., Brinkman, R. R., and Gottardo, R. (2009). flowClust: a Bioconductor package for automated gating of flow cytometry data. BMC Bioinformatics, 10:145.

B.2 Overview

We apply a robust model-based clustering approach proposed by Lo et al.
(2008) to identify cell populations in flow cytometry data. The proposed approach is based on multivariate t mixture models with the Box-Cox transformation. It generalizes Gaussian mixture models by modeling outliers using t distributions and by allowing clusters to take non-ellipsoidal shapes upon proper data transformation. Parameter estimation is carried out using an Expectation-Maximization (EM) algorithm that simultaneously handles outlier identification and transformation selection. Please refer to Lo et al. (2008) for more details.

The flowClust package consists of a core function implementing the aforementioned clustering methodology. Its source code is written in C for optimal utilization of system resources. Graphical functionalities are available to users for visualizing a wealth of features of the clustering results, including the cluster assignment, outliers, and the size and shape of the clusters. The fitted mixture model may be projected onto any one or two dimensions and displayed by means of a contour or image plot. Currently, flowClust provides two options for estimating the number of clusters when it is unknown, namely, the Bayesian Information Criterion (BIC) and the Integrated Completed Likelihood (ICL).

flowClust is built to be highly integrated with flowCore, the core package for flow cytometry that provides data structures and basic manipulation of flow cytometry data. Please read Section B.4.3 for details about the actual implementation.

B.3 Installation

B.3.1 Unix/Linux/Mac Users

To build the flowClust package from source, make sure that the following are present on your system:

• a C compiler
• GNU Scientific Library (GSL)
• Basic Linear Algebra Subprograms (BLAS)

A C compiler is needed to build the package, as the core function is coded in C. GSL can be downloaded at http://www.gnu.org/software/gsl/. In addition, the package uses BLAS to perform basic vector and matrix operations.
Please go to http://www.netlib.org/blas/faq.html#5 for a list of optimized BLAS libraries for a variety of computer architectures. For instance, Mac users may use the built-in vecLib framework, while users of Intel machines may use the Math Kernel Library (MKL).

For the package to be installed properly you may have to type the following command before installation:

export LD_LIBRARY_PATH='/path/to/GSL/:/path/to/BLAS/':$LD_LIBRARY_PATH

which tells R where your GSL and BLAS libraries are. Note that this may have already been configured on your system, so you may not have to do so. If you do need it, you may consider including this line in your .bashrc so that you do not have to type it every time.

If GSL is installed in some non-standard location such that it cannot be found when installing flowClust, you may set the environment variable GSL_CONFIG to point to the correct copy of gsl-config, for example,

export GSL_CONFIG='/global/home/username/gsl-1.12/bin/gsl-config'

For convenience, this line may also be added to .bashrc.

Now you are ready to install the package:

R CMD INSTALL flowClust_x.y.z.tar.gz

The package will look for a BLAS library on your system, and by default it will choose gslcblas, which is not optimized for your system. To use an optimized BLAS library, you can use the --with-blas argument, which will be passed to the configure.ac file. For example, on a Mac with vecLib pre-installed the package may be installed via:

R CMD INSTALL flowClust_x.y.z.tar.gz --configure-args="--with-blas='-framework vecLib'"

On a 64-bit Intel machine which has MKL as the optimized BLAS library, the command may look like:

R CMD INSTALL flowClust_x.y.z.tar.gz --configure-args="--with-blas='-L/usr/local/mkl/lib/em64t/ -lmkl -lguide -lpthread'"

where /usr/local/mkl/lib/em64t/ is the path to MKL.
If you prefer to install a prebuilt binary, you need GSL for successful installation.

B.3.2 Windows Users

You need the GNU Scientific Library (GSL) for the flowClust package. GSL is freely available at http://gnuwin32.sourceforge.net/packages/gsl.htm for Windows distributions.

To install a prebuilt binary of flowClust and to load the package successfully, you need to tell R where to link GSL. You can do that by adding /path/to/libgsl.dll to the Path environment variable. To add this, you may right click on "My Computer", choose "Properties", select the "Advanced" tab, and click the button "Environment Variables". In the dialog box that opens, click "Path" in the variable list, and then click "Edit". Add /path/to/libgsl.dll to the "Variable value" field. It is important that the file path does not contain any space characters; to avoid this you may simply use the short forms (8.3 DOS file names) found by typing dir /x at the Windows command line. For example, the following may be added to the Path environment variable:

C:/PROGRA~1/GNUWIN32/bin

and the symbol ; is used to separate it from existing paths.

To build flowClust from source (using Rtools), in addition to adding /path/to/libgsl.dll to Path, you need to tell flowClust where your GSL library and header files are. You can do that by setting up two environment variables, GSL_LIB and GSL_INC, with the correct paths to the library files and header files respectively. You can do this by going to the "Environment Variables" dialog box as instructed above and then clicking the "New" button. Enter GSL_LIB in the "Variable name" field, and /path/to/your/gsl/lib/directory in the "Variable value" field. Likewise, do this for GSL_INC and /path/to/your/gsl/include/directory. Remember to use "/" instead of "\" as the directory delimiter.
You can download Rtools at http://www.murdoch-sutherland.com/Rtools/, which provides the resources for building R and R packages. You should add to the Path variable the paths to the various components of Rtools. Please read the "Windows Toolset" appendix at http://cran.r-project.org/doc/manuals/R-admin.html#The-Windows-toolset for more details.

B.4 Example: Clustering of the Rituximab Dataset

B.4.1 The Core Function

To demonstrate the functionality, we use a flow cytometry dataset from a drug-screening project to identify agents that would enhance the anti-lymphoma activity of Rituximab, a therapeutic monoclonal antibody. The dataset is an object of class flowFrame; it consists of eight parameters, of which only the two scattering parameters (FSC.H, SSC.H) and two fluorescence parameters (FL1.H, FL3.H) are of interest in this experiment. Note that, apart from a typical matrix or data.frame object, flowClust may directly take as data input a flowFrame, the standard R implementation of an FCS file, which may be returned from the read.FCS function in the flowCore package. The following code performs an analysis with one cluster using the two scattering parameters:

> library(flowClust)
> data(rituximab)
> summary(rituximab)
         FSC.H  SSC.H FL1.H FL2.H  FL3.H   FL1.A FL1.W Time
Min.      59.0   11.0   0.0   0.0    1.0    0.00   0.0    2
1st Qu.  178.0  130.0 197.0  55.0  150.0    0.00   0.0  140
Median   249.0  199.0 244.0 116.0  203.0    0.00   0.0  285
Mean     287.1  251.8 349.2 126.4  258.3   73.46  17.6  294
3rd Qu.  331.0  307.0 445.0 185.0  315.0    8.00   0.0  451
Max.    1023.0 1023.0 974.0 705.0 1023.0 1023.00 444.0  598
> res1 <- flowClust(rituximab, varNames=c("FSC.H", "SSC.H"), K=1, B=100)

B is the maximum number of EM iterations; for demonstration purposes we set a small value for B here. The main purpose of performing an analysis with one cluster here is to identify outliers, which will be removed from subsequent analysis.
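The same call works on data read directly from disk: since flowClust accepts any flowFrame, a file loaded with read.FCS from flowCore can be passed in unchanged (the file name below is a placeholder):

> library(flowCore)
> fr <- read.FCS("mydata.fcs")
> resA <- flowClust(fr, varNames=c("FSC.H", "SSC.H"), K=1, B=100)

Here resA is a hypothetical result object; the analysis in this section continues with res1 computed from the bundled rituximab data.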
Next, we would like to proceed with an analysis using the two fluorescence parameters on cells selected from the first stage. The following code performs the analysis with the number of clusters fixed at one through six in turn:

> rituximab2 <- rituximab[rituximab %in% res1,]
> res2 <- flowClust(rituximab2, varNames=c("FL1.H", "FL3.H"), K=1:6, B=100)

We select the best model based on the BIC. Values of the BIC can be retrieved through the criterion method. By inspection, the BIC values stay relatively constant beyond three clusters. We therefore choose the model with three clusters and print a summary of the corresponding clustering result:

> summary(res2[[3]])
** Experiment Information **
Experiment name: Flow Experiment
Variables used: FL1.H FL3.H
** Clustering Summary **
Number of clusters: 3
Proportions: 0.2658702 0.5091045 0.2250253
** Transformation Parameter **
lambda: 0.4312673
** Information Criteria **
Log likelihood: -16475.41
BIC: -33080.88
ICL: -34180.67
** Data Quality **
Number of points filtered from above: 0 (0%)
Number of points filtered from below: 0 (0%)
Rule of identifying outliers: 90% quantile
Number of outliers: 96 (6.99%)
Uncertainty summary:
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
0.0005699 0.0153000 0.1669000 0.2019000 0.3738000 0.5804000

The summary states that the rule used to identify outliers is 90% quantile, which means that a point outside the 90% quantile region of the cluster to which it is assigned will be called an outlier. To specify a different rule, we make use of the ruleOutliers replacement method. The example below applies the more conservative 95% quantile rule to identify outliers:

> ruleOutliers(res2[[3]]) <- list(level=0.95)
Rule of identifying outliers: 95% quantile
> summary(res2[[3]])
...
** Data Quality **
Number of points filtered from above: 0 (0%)
Number of points filtered from below: 0 (0%)
Rule of identifying outliers: 95% quantile
Number of outliers: 35 (2.55%)

We can also combine the quantile rule with one set by the z.cutoff argument to identify outliers. Suppose we would like to assign an observation to a cluster only if the associated posterior probability is greater than 0.6. We can add this rule with the following command:

> ruleOutliers(res2[[3]]) <- list(z.cutoff=0.6)
Rule of identifying outliers: 95% quantile, probability of assignment < 0.6
> summary(res2[[3]])
...
** Data Quality **
Number of points filtered from above: 0 (0%)
Number of points filtered from below: 0 (0%)
Rule of identifying outliers: 95% quantile, probability of assignment < 0.6
Number of outliers: 317 (23.07%)

This time more points are called outliers. Note that such a change of rule does not trigger a refit of the model. The information about which points are called outliers is conveyed through the flagOutliers slot, a logical vector in which the positions of TRUE correspond to points called outliers. By default, when 10 or more points accumulate on the upper or lower boundary of any parameter, the flowClust function filters those points out. To change the threshold count from the default, users may specify max.count and min.count when running flowClust. To suppress filtering at the upper and/or the lower boundary, set max.count and/or min.count to -1. We can also use the max and min arguments to control filtering of points, but from a different perspective.
For instance, if we are only interested in cells which have a FL1.H measurement within (0, 400) and FL3.H within (0, 800), we may use the following code to perform the cluster analysis:

> flowClust(rituximab2, varNames=c("FL1.H", "FL3.H"), K=2, B=100, min=c(0,0), max=c(400,800))

B.4.2 Visualization of Clustering Results

Information such as the cluster assignment, cluster shape and outliers may be visualized by calling the plot method to make a scatterplot:

> plot(res2[[3]], data=rituximab2, level=0.8, z.cutoff=0)
Rule of identifying outliers: 80% quantile

[Scatterplot of FL3.H against FL1.H showing the cluster assignment and outliers.]

The level and/or z.cutoff arguments are needed when we want to apply a rule different from that stored in the ruleOutliers slot of the flowClust object to identify outliers. To look for densely populated regions, a contour/image plot can be made:

> res2.den <- density(res2[[3]], data=rituximab2)
> plot(res2.den)

[Contour plot of the fitted density over FL1.H and FL3.H.]

> plot(res2.den, type="image")

[Image plot of the fitted density over FL1.H and FL3.H.]

When we want to examine how the fitted model and/or the data are distributed along one chosen dimension, we can use the hist method:

> hist(res2[[3]], data=rituximab2, subset="FL1.H")

[Histogram of FL1.H with the fitted density overlaid.]

The subset argument may also take a numeric value:

> hist(res2[[3]], data=rituximab2, subset=1)

Since FL1.H is the first element of res2[[3]]@varNames, this line produces exactly the same histogram as the one generated by the line taking subset="FL1.H". Likewise, the subset argument of both plot methods accepts either a numeric or a character vector to specify which two variables are to be shown on the plot.

B.4.3 Integration with flowCore

As mentioned in the Overview, effort has been made to integrate flowClust with the flowCore package. Users will find that most methods defined in flowCore also work in the context of flowClust. The very first step of integration is to replace the core function flowClust with a call to the constructor tmixFilter followed by the filter method. The aim is to wrap clustering in a filtering operation like those found in flowCore. The tmixFilter function creates a filter object to store all settings required for the filtering operation. The object created is then passed to the actual filtering operation implemented by the filter method.
The use of a dedicated tmixFilter-class object separates the task of specifying the settings (tmixFilter) from the actual filtering operation (filter), facilitating the common scenario in FCM gating analysis in which filtering with the same settings is performed on a set of data files. As an example, the filtering operation that resembles the second-stage clustering using FL1.H and FL3.H with three clusters (see Section B.4.1) is implemented by the following code:

> s2filter <- tmixFilter("s2filter", c("FL1.H", "FL3.H"), K=3, B=100)
> res2f <- filter(rituximab2, s2filter)

The object res2f is of class tmixFilterResult, which extends the multipleFilterResult class defined in flowCore. Users may apply the various subsetting operations defined for the multipleFilterResult class in a similar fashion on a tmixFilterResult object:

> Subset(rituximab2, res2f)
flowFrame object 'A02'
with 1267 cells and 8 observables:
     name               desc range
$P1 FSC.H         FSC-Height  1024
$P2 SSC.H       Side Scatter  1024
$P3 FL1.H     Anti-BrdU FITC  1024
$P4 FL2.H                     1024
$P5 FL3.H              7 AAD  1024
$P6 FL1.A                     1024
$P7 FL1.W                     1024
$P8  Time Time (204.80 sec.)  1024
135 keywords are stored in the 'description' slot

> split(rituximab2, res2f, population=list(sc1=1:2, sc2=3))
$sc1
flowFrame object 'A02 (1,2)'
with 976 cells and 8 observables:
     name               desc range
$P1 FSC.H         FSC-Height  1024
$P2 SSC.H       Side Scatter  1024
$P3 FL1.H     Anti-BrdU FITC  1024
$P4 FL2.H                     1024
$P5 FL3.H              7 AAD  1024
$P6 FL1.A                     1024
$P7 FL1.W                     1024
$P8  Time Time (204.80 sec.)  1024
3 keywords are stored in the 'description' slot

$sc2
flowFrame object 'A02 (3)'
with 291 cells and 8 observables:
     name               desc range
$P1 FSC.H         FSC-Height  1024
$P2 SSC.H       Side Scatter  1024
$P3 FL1.H     Anti-BrdU FITC  1024
$P4 FL2.H                     1024
$P5 FL3.H              7 AAD  1024
$P6 FL1.A                     1024
$P7 FL1.W                     1024
$P8  Time Time (204.80 sec.)  1024
136 keywords are stored in the 'description' slot

The Subset method above outputs a flowFrame consisting of observations within the data-driven filter constructed. The split method separates the data into two populations upon the removal of outliers: the first population is formed by observations assigned to clusters 1 and 2 constructed by the filter, and the other population consists of observations assigned to cluster 3. The two populations are returned as two separate flowFrame objects, stored inside a list and labelled sc1 and sc2 respectively.

The %in% operator from flowCore is also defined for a tmixFilterResult object. A logical vector is returned in which a TRUE value means that the corresponding observation is accepted by the filter. In fact, the implementation of the Subset method calls %in%. The object returned by tmixFilter is of class tmixFilter, which extends the filter class in flowCore. The various operators &, |, ! and %subset%, which have been constructed for filter objects in flowCore, also produce similar outcomes when applied to a tmixFilter object. For example, to perform clustering on a subset of data enclosed by a rectangle gate, we may apply the following code:

> rectGate <- rectangleGate(filterId="rectRegion", "FL1.H"=c(0, 400), "FL3.H"=c(0, 800))
> MBCfilter <- tmixFilter("MBCfilter", c("FL1.H", "FL3.H"), K=2, B=100)
> filter(rituximab2, MBCfilter %subset% rectGate)
A filterResult produced by the filter named 'MBCfilter in rectRegion'

Appendix C

Code to Produce the Plots in Chapter 5

# Figure 5.1
plot(criterion(res1, "BIC"), xlab="No. of clusters", ylab="BIC", type="b", pch=2)
points(criterion(res1s, "BIC"), type="b", pch=3, col=2)
legend("bottomright", col=1:2, pch=2:3, lty=1,
       legend=c(expression(paste("global ", lambda)),
                expression(paste("cluster-specific ", lambda))))

# Figure 5.2
plot(res1[[4]], data=GvHD, pch.outliers="+", xlab="FSC-Height", ylab="SSC-Height")
legend("bottomright", col=2:5, legend=1:4, title="Clusters", pch=20)

# Figure 5.3
plot(criterion(res2, "BIC"), xlab="No. of clusters", ylab="BIC", type="b")

# Figure 5.4(a)
CD3p <- which(getEstimates(res2[[11]])$locations[,3] > 280)
plot(res2[[11]], data=GvHD2$lymphocyte, include=CD3p, ellipse=F, pch=20,
     xlab="CD4-Height", ylab=expression(paste("CD8", beta, "-Height")))
res2d <- density(res2[[11]], data=GvHD2$lymphocyte, include=CD3p)
plot(res2d, drawlabels=F, add=T, nlevels=20)

# Figure 5.4(b)
plot(res2d, type="image", nlevels=100, xlab="CD4-Height",
     ylab=expression(paste("CD8", beta, "-Height")))